FLOC 2022: FEDERATED LOGIC CONFERENCE 2022
PROGRAM FOR FRIDAY, AUGUST 12TH

08:30-09:00 Coffee & Refreshments
09:00-10:30 Session 134A (GuttmanFest)
Location: Taub 6
09:00
Model Finding for Exploration
09:30
On Orderings in Security Models
10:00
Benign Interaction of Security Domains
PRESENTER: Flemming Nielson
09:00-10:30 Session 134B (PAAR)
Location: Taub 7
09:00
The Logic Languages of the TPTP World and Proofs and Models in the TPTP World

ABSTRACT. The TPTP World is a well-established infrastructure that supports research, development, and deployment of Automated Theorem Proving (ATP) systems. This talk provides an overview of the logic languages of the TPTP World, from classical first-order form, through typed first-order form, up to typed higher-order form, and beyond to non-classical forms. The logic languages are described in a non-technical way, and are illustrated with examples using the TPTP language. The second part of the talk describes the current (underdeveloped?) representation and use of proofs and models in the TPTP World. Topics covered include the SZS ontology; the representation of derivations and finite models; the TSTP solution library; tools for examining and manipulating the derivations; uses of individual solutions, and of the TSTP as a whole; projects that have used the TPTP format and tools for proofs and models; the proposed "Tons of Data for Theorem Provers" library; and future work and directions to improve the status quo.

 

10:00
Automated Reasoning in Non-classical Logics in the TPTP World
PRESENTER: Alexander Steen

ABSTRACT. Non-classical logics are used in a wide spectrum of disciplines, including artificial intelligence, computer science, mathematics, and philosophy. The de-facto standard infrastructure for automated theorem proving, the TPTP World, currently supports only classical logics. This paper describes the latest extension of the TPTP World, providing languages and infrastructure for reasoning in non-classical logics. The extension integrates seamlessly with the existing TPTP World.

09:00-10:30 Session 134C: 1st invited talk and paper presentation (SC^2)
Location: Taub 3
09:00
Non-Linear Real Arithmetic with Transcendental Function Symbols: Undecidable but Easy?

ABSTRACT. Recent years have seen an incredible boost in the power of SMT solvers. This includes the ability to decide non-linear real arithmetic using algorithms inspired by cylindrical algebraic decomposition, and the ability to handle transcendental function symbols by interval, linearization, and heuristic techniques. However, despite this success, today's SMT solvers still do not succeed on many real-number benchmark problems that human intuition would consider easy to solve.

In the talk, I will present some thoughts on formalizing this notion of "easy to solve", and some mathematical and algorithmic tools that promise to be useful for arriving at SMT solvers that can efficiently solve problems that are easy according to such a formalization. In particular, I will concentrate on the problem of proving satisfiability in the presence of transcendental function symbols.

The talk will be based on joint work with Peter Franek, Enrico Lipparini, and Piotr Zgliczynski.

10:00
Cylindrical Algebraic Coverings for Quantifiers
PRESENTER: Gereon Kremer

ABSTRACT. The cylindrical algebraic coverings method was originally proposed to decide the satisfiability of a set of nonlinear real arithmetic constraints. We reformulate and extend the cylindrical algebraic coverings method to allow for checking the validity of arbitrary nonlinear arithmetic formulas, adding support for both quantifiers and arbitrary Boolean structure. Furthermore, we also propose a variant to perform quantifier elimination on such formulas.

09:00-10:30 Session 134D: Celebrating 20 years of the SMT workshop (SMT)
Location: Taub 2
09:00
Panel discussion "SMT: Past, Present and Future"
PRESENTER: Cesare Tinelli
10:00
CDSAT for nondisjoint theories with shared predicates: arrays with abstract length

ABSTRACT. CDSAT (Conflict-Driven Satisfiability) is a paradigm for theory combination that works by coordinating theory modules to reason in the union of the theories in a conflict-driven manner. We generalize CDSAT to the case of nondisjoint theories by presenting a new CDSAT theory module for a theory of arrays with abstract length, which is an abstraction of the theory of arrays with length. The length function is a bridging function as it forces theories to share symbols, but the proposed abstraction limits the sharing to one predicate symbol. The CDSAT framework handles shared predicates with minimal changes, and the new module satisfies the CDSAT requirements, so that completeness is preserved.

09:00-10:30 Session 134E (VeriProp)
Location: Taub 9
09:00
Probabilistic Hyperproperties

ABSTRACT. Four decades ago, Lamport used the notion of trace properties as a means to specify the correctness of individual executions of concurrent programs. This notion was later formalized and classified by Alpern and Schneider into safety and liveness properties. Temporal logics like LTL and CTL were built on these efforts to give formal syntax and semantics to requirements expressed as trace properties. Subsequently, verification algorithms were developed to reason about individual executions of a system.

However, it turns out that many interesting requirements are not trace properties. For example, important information-flow security policies (e.g. noninterference, observational determinism) or service level agreements (e.g. mean response time, percentage uptime) cannot be expressed as properties of individual execution traces of a system. Rather, they are properties of sets of execution traces, also known as hyperproperties. Temporal logics such as HyperLTL and HyperCTL∗ have been proposed to provide a unifying framework to express and reason about hyperproperties.

This talk is devoted to a special class of hyperproperties: we ask what hyperproperties are in the context of systems with random behavior. We will discuss which probabilistic relations between independent executions of a system are relevant, how we can formally express them in a temporal logic, and how we can decide the truth of such statements.

09:50
Widest Paths and Global Propagation in Bounded Value Iteration for Stochastic Games
PRESENTER: Ichiro Hasuo

ABSTRACT. Solving stochastic games with the reachability objective is a fundamental problem, especially in quantitative verification and synthesis. For this purpose, bounded value iteration (BVI) attracts attention as an efficient iterative method. However, BVI's performance is often impeded by costly end component (EC) computation that is needed to ensure convergence. Our contribution is a novel BVI algorithm that conducts, in addition to local propagation by the Bellman update that is typical of BVI, global propagation of upper bounds that is not hindered by ECs. To conduct global propagation in a computationally tractable manner, we construct a weighted graph and solve the widest path problem in it. Our experiments show the algorithm's performance advantage over the previous BVI algorithms that rely on EC computation.
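
To make the widest-path subroutine concrete, here is a small Dijkstra-style sketch of the standard maximum-bottleneck-path computation (an illustration of the textbook algorithm, not the authors' implementation; the graph and weights are made up):

```python
import heapq

def widest_path(graph, src):
    """graph: {u: [(v, w), ...]} with weights in [0, 1]. Returns, per node,
    the largest achievable bottleneck (minimum edge weight) from src."""
    width = {src: 1.0}
    heap = [(-1.0, src)]                  # max-heap simulated via negated widths
    while heap:
        neg_w, u = heapq.heappop(heap)
        if -neg_w < width.get(u, 0.0):    # stale queue entry
            continue
        for v, w in graph.get(u, []):
            cand = min(-neg_w, w)         # bottleneck of the extended path
            if cand > width.get(v, 0.0):
                width[v] = cand
                heapq.heappush(heap, (-cand, v))
    return width

g = {"s": [("a", 0.9), ("b", 0.5)], "a": [("t", 0.6)], "b": [("t", 0.8)]}
print(widest_path(g, "s"))  # {'s': 1.0, 'a': 0.9, 'b': 0.5, 't': 0.6}
```

In the BVI setting sketched in the abstract, the edge weights are derived from upper-bound values, so a single widest-path pass can propagate bounds globally without first decomposing end components.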

10:10
Safe Sampling-Based Planning for Stochastic Systems with Uncertain Dynamics
PRESENTER: Thom Badings

ABSTRACT. Autonomous systems that operate in safety-critical settings must be able to behave safely under stochastic disturbances and imprecise model parameters. Due to their uncertain and stochastic nature, such autonomous systems may be captured by probabilistic programs. A typical reach-avoid planning problem is to reach a desirable goal state while avoiding certain unsafe states. However, due to stochastic disturbances and imprecise model parameters, the outcome of an action by the controller of the autonomous system may be uncertain. Moreover, the set of safe states is typically non-convex, making it infeasible to find global solutions to reach-avoid problems in general.

In this research, we employ abstraction-based methods to solve reach-avoid problems for continuous-state systems with partially unknown and/or stochastic dynamics. Given a partial model specification and a reach-avoid problem, we abstract the system into a discrete-state model that captures the uncertainty about the dynamics by probabilistic transitions between discrete states. We resort to sampling techniques to compute probably approximately correct (PAC) bounds on these transition probabilities, which we capture in a so-called interval Markov decision process (iMDP). We synthesize policies using a tailored version of the probabilistic model checker PRISM for iMDPs. We then convert the policy into a feedback controller for the continuous system, for which we give high-confidence guarantees on the probability that the reach-avoid problem is satisfied. With a diverse set of benchmarks, we show the robustness and practical applicability of our method for solving realistic reach-avoid problems.
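
For intuition, here is a minimal sketch (not the authors' implementation; their PAC bounds are tailored to the iMDP construction) of turning samples into an interval on a transition probability via Hoeffding's inequality:

```python
import math

def pac_interval(successes, n, confidence=0.99):
    """Two-sided Hoeffding bound: with probability >= confidence, the true
    transition probability lies within the returned interval."""
    p_hat = successes / n
    eps = math.sqrt(math.log(2 / (1 - confidence)) / (2 * n))  # Hoeffding radius
    return max(0.0, p_hat - eps), min(1.0, p_hat + eps)

print(pac_interval(830, 1000))  # roughly (0.78, 0.88) at 99% confidence
```

Intervals of this kind, one per abstract transition, are exactly the data an interval MDP needs.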

09:30-10:30 Session 135A: Libraries (Coq)
Location: Taub 5
09:30
A Coq Library for Mechanised First-Order Logic
PRESENTER: Dominik Kirst

ABSTRACT. We report about an ongoing collaborative effort to consolidate several Coq developments concerning metamathematical results in first-order logic into a single library. We first describe the framework regarding the representation of syntax, deduction systems, and semantics as well as its instantiation to axiom systems and tools for user-friendly interaction. Next, we summarise the included results mostly connected to completeness, undecidability, and incompleteness. Finally, we conclude by reporting on challenges experienced and anticipated during the integration. The current status of the project can be tracked in a public fork of the Coq Library of Undecidability Proofs.

10:00
QuantumLib: A Library for Quantum Computing in Coq
PRESENTER: Jacob Zweifler

ABSTRACT. The INQWIRE QuantumLib is a library for representing and reasoning about quantum computing in the Coq proof assistant. The library includes a deep thread of linear algebra definitions and proofs that emphasize mathematical facts from the perspective of quantum computing. We include extensive support for different quantum objects and operations, allowing users to simulate many different types of quantum programs. QuantumLib now provides the mathematical foundation for a range of other quantum verification projects. QuantumLib emphasizes both computation and proof (unlike libraries such as Mathematical Components or CoqEAL that focus on one or the other) and is specialized to the use case of verified quantum computing in Coq.

09:30-10:30 Session 135B: Invited talk (UNIF)
Location: Taub 8
09:30
Unification types in modal logics

ABSTRACT. The unification problem in a modal or description logic is to determine whether given formulas have unifiers in that logic. When an instance of the unification problem has minimal complete sets of unifiers, it is of type unitary, finitary, or infinitary, depending on the cardinality of these sets. Otherwise, it is of type nullary. In this talk, we will survey the known results and present recent advances on the unification types of modal and description logics.

10:00-10:30 Session 136 (WST)
Location: Taub 4
10:00
The Termination Competition

ABSTRACT. In this talk, we will present and discuss the results of the international termination and complexity competition.

10:30-11:00 Coffee Break
11:00-12:30 Session 137A: Automation and Tooling (Coq)
Location: Taub 5
11:00
Trakt: a generic pre-processing tactic for theory-based proof automation
PRESENTER: Enzo Crance

ABSTRACT. We present Trakt, a Coq plugin to pre-process goals before handing them to theory-based proof automation tactics. It is designed to be user-friendly, generic and extensible. The plugin is available with documentation and examples on GitHub.

11:30
10 Years of Superlinear Slowness in Coq
PRESENTER: Jason Gross

ABSTRACT. In most programming languages, asymptotic performance issues can almost always be explained by reference to the algorithm being implemented. At most, the standard asymptotic performance of explicitly used operations on chosen data structures must be considered. Even the constant factors in performance bottlenecks can often be explained without reference to the implementation of the interpreter, compiler, or underlying machine.

In 10+ years of working with Coq, we (Jason, Andres, and our colleagues) have found this pattern, which holds across multiple programming languages, to be the exception rather than the rule in Coq! This turns performant proof engineering, especially performant proof automation engineering, from straightforward science into unpredictable and fragile guesswork.

By presenting in detail a sampling of examples, we propose a defense of the thesis: Performance bottlenecks in proof automation almost always result from inefficiencies in parts of the system which are conceptually distant from the theorem being proven. Said another way, *debugging, understanding, and fixing performance bottlenecks in automated proofs almost always requires extensive knowledge of the proof engine, and almost never requires any domain-specific knowledge of the theorem being proven*. Further, there is no clear direction of improvement: We know of no systematic proposal, nor even folklore among experts, of what primitives and performance characteristics are sufficient for a performant proof engine.

We hope to start a discussion on the obvious corollary of this thesis: *This should not be!*

Our presentation, we hope, will serve as a call for a POPLMark for Proof Engines, a call for designing and implementing an adequate proof engine for *scalable performant modular proof automation*.

12:00
Autogenerating Natural Language Proofs
PRESENTER: Seth Poulsen

ABSTRACT. Understanding mathematical proofs is critical for students learning the foundations of computing. Having students construct mathematical proofs with the help of a computer is appealing, as it makes it easier to autograde student work and autogenerate practice problems. Most existing tools for constructing proofs with a computer are restricted systems that only permit simple logics, or involve a large overhead for students learning how to use them. Proof Blocks (proofblocks.org), a tool that allows students to drag and drop prewritten lines of a proof into the correct order, is a nice compromise because it is easy for students to use to construct proofs on any topic (Poulsen et al., 2022). However, a downside is that the process of writing questions can be time-consuming and error-prone. An instructor must write the proof, break it into lines, and then specify a directed acyclic graph giving the logical dependence between lines of the proof. In this paper, we document the first step toward building a system to automatically generate Proof Blocks problems from Coq proofs: a Coq plugin which generates a natural language proof from a Coq proof. Our natural language generator differs from similar tools in that we deliberately restrict which definitions and tactics are allowed, in the interest of improving the readability of the output.
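
For illustration, here is a hypothetical encoding of the problem format described above (the names and structure are made up for this sketch, not the actual Proof Blocks format): proof lines together with a dependency DAG, from which any topological order counts as a correct solution.

```python
# Hypothetical Proof Blocks-style problem: prove "if n is even, n^2 is even".
proof_lines = {
    1: "Assume n is even.",
    2: "Then n = 2k for some integer k.",
    3: "Hence n^2 = 4k^2 = 2(2k^2).",
    4: "Therefore n^2 is even.",
}
# Directed acyclic graph of logical dependence: line -> lines it relies on.
depends_on = {1: [], 2: [1], 3: [2], 4: [3]}
```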

11:00-12:30 Session 137B (GuttmanFest)
Location: Taub 6
11:00
A Declaration of Software Independence
PRESENTER: Peter Y A Ryan
11:30
A Tutorial-Style Introduction to DY*
PRESENTER: Guido Schmitz
11:50
Establishing the Price of Privacy in Federated Data Trading
PRESENTER: Kangsoo Jung
12:10
Prototyping Formal Methods Tools: a Protocol Analysis Case Study
PRESENTER: Abigail Siegel
11:00-12:30 Session 137C (PAAR)
Location: Taub 7
11:00
Generating Compressed Combinatory Proof Structures: An Approach to Automated First-Order Theorem Proving

ABSTRACT. Representing a proof tree by a combinator term that reduces to the tree lets subtle forms of duplication within the tree materialize as duplicated subterms of the combinator term. In a DAG representation of the combinator term, these straightforwardly factor into shared subgraphs. To search for proofs, combinator terms can be enumerated, like clausal tableaux, while simultaneously relating formulas associated with components of the enumerated structures through unification. To restrict the search space, the enumeration can be based on proof schemas defined as parameterized combinator terms. We introduce here this "combinator term as proof structure" approach to automated first-order proving, present an implementation, and report first experimental results. The approach builds on a term view of proof structures rooted in condensed detachment and the connection method. It realizes features known from the connection structure calculus, which had not been implemented so far.

11:30
An Extensible Logic Embedding Tool for Lightweight Non-Classical Reasoning

ABSTRACT. The logic embedding tool encodes non-classical reasoning problems into classical higher-order logic. It is easily extensible and can support an increasing number of different non-classical logics as reasoning targets. When used as a pre-processor or library for higher-order theorem provers, the tool admits off-the-shelf automation for logics for which few or no provers are otherwise available.

12:00
The Isabelle Community Benchmark
PRESENTER: Fabian Huch

ABSTRACT. Choosing hardware for theorem proving is no simple task: automated provers are highly complex and optimized programs, often utilizing a parallel computation model, and there is little prior research on the hardware impact on prover performance. To alleviate the problem for Isabelle, we initiated a community benchmark where the build time of HOL-Analysis is measured. On 54 distinct CPUs, a total of 669 runs with different Isabelle configurations were reported by Isabelle users. Results range from 107s to over 11h. We found that current consumer CPUs performed best, with an optimal number of 8 to 16 threads, largely independent of heap memory. As for hardware parameters, CPU base clock affected multi-threaded execution most with a linear correlation of 0.37, whereas boost frequency was the most influential parameter for single-threaded runs (correlation coefficient 0.55); cache size played no significant role. When comparing our benchmark scores with popular high-performance computing benchmarks, we found a strong linear relationship with Dolfyn (R² = 0.79) in the single-threaded scenario. Using data from the 3DMark CPU Profile consumer benchmark, we created a linear model for optimal (multi-threaded) Isabelle performance. When validating, the model has an average R²-score of 0.87; the mean absolute error in the final model corresponds to a wall-clock time of 46.6s. With a dataset of true median values for the 3DMark, the error improves to 37.1s.
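
As a sketch of the kind of model fitting reported (toy data made up for illustration; the paper's model is fit on the actual community measurements):

```python
import numpy as np

scores = np.array([5000.0, 7000.0, 9000.0, 11000.0, 13000.0])  # CPU benchmark scores
times = np.array([900.0, 700.0, 560.0, 470.0, 420.0])          # build times (s)
slope, intercept = np.polyfit(scores, times, 1)                # least-squares fit
pred = slope * scores + intercept
r2 = 1 - np.sum((times - pred) ** 2) / np.sum((times - times.mean()) ** 2)
print(f"time ~ {slope:.4f} * score + {intercept:.1f}   (R^2 = {r2:.2f})")
```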

11:00-12:30 Session 137D: Paper presentations (SC^2)
Location: Taub 3
11:00
NP: SMT-Solving Combinatorial Inequalities
PRESENTER: Ali Kemal Uncu

ABSTRACT. This paper accompanies a new dataset of QF_NRA problems for the SMT-LIB. The problems come from an automated proof procedure for combinatorial inequalities, and have not been tackled by SMT solvers before. We describe the methodology and give one new such proof to illustrate it. We then describe the dataset and the results of benchmarking on it, which reveal the new dataset to be quite different from the existing benchmarks, and well suited to solution by SMT. The benchmarking also prompts some interesting debate on the use of rational functions and algebraic numbers in the SMT-LIB.

11:30
Decidability of difference logics with unary predicates
PRESENTER: Baptiste Vergain

ABSTRACT. We investigate the decidability of a family of logics mixing difference-logic constraints and unary uninterpreted predicates. The focus is set on logics whose domain of interpretation is $\mathbb{R}$, but the language has a recognizer for integer values. We first establish the decidability of the logic allowing unary uninterpreted predicates, order constraints between real and integer variables, and difference-logic constraints between integer variables. Afterwards, we prove the undecidability of the logic where unary uninterpreted predicates and difference-logic constraints between real variables are allowed.

12:00
Automatic Deployment of Component-based Applications in the Cloud

ABSTRACT. Automated deployment of component-based applications in the Cloud became important with the increased digital market demand. It consists of: (1) selection of the computing resources, (2) distribution/assignment of the application components over the available computing resources, (3) its dynamic modification to cope with peaks of user requests. We focus on the first two steps, which can be formulated as a linear constraint optimization problem (COP) and solved using existing exact general techniques, such as constraint programming (CP), mathematical programming (MP), and optimization modulo theories (OMT). However, the computational requirements are high, making the problem intractable in practice. We addressed this issue by using problem-tailored static symmetry breaking techniques.

11:00-12:30 Session 137E: Theories and Proofs (SMT)
Location: Taub 2
11:00
User-Propagators for Custom Theories in SMT Solving

ABSTRACT. We present ongoing work on developing the user-propagator framework in SMT solving. We argue that the integration of user-propagators in SMT solving yields an efficient approach towards custom theory reasoning, without bringing fundamental changes to the underlying SMT architecture. We showcase our approach in the SMT solver Z3, provide practical evidence of our work, and discuss potential avenues for further improvements.
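
For a flavor of what user propagators look like in practice, here is a minimal sketch against Z3's Python user-propagator interface (UserPropagateBase; the method names follow recent z3py releases and should be checked against the installed version). The custom "theory" enforces that at most one of the registered Booleans is true:

```python
from z3 import Bool, Solver, UserPropagateBase, is_true

class AtMostOne(UserPropagateBase):
    def __init__(self, solver, xs):
        super().__init__(solver)
        self.add_fixed(self._on_fixed)   # callback when a registered term is assigned
        self.true_vars, self.scopes = [], []
        for x in xs:
            self.add(x)                  # register x with the propagator

    def push(self):                      # solver enters a new decision level
        self.scopes.append(len(self.true_vars))

    def pop(self, num_scopes):           # solver backtracks: restore our state
        self.true_vars = self.true_vars[: self.scopes[-num_scopes]]
        del self.scopes[-num_scopes:]

    def _on_fixed(self, x, value):
        if is_true(value):
            self.true_vars.append(x)
            if len(self.true_vars) > 1:  # two true variables: theory conflict
                self.conflict(deps=self.true_vars[:2])

s = Solver()
xs = [Bool(f"x{i}") for i in range(3)]
prop = AtMostOne(s, xs)  # keep a reference so the propagator stays alive
s.add(xs[0], xs[1])      # force two of them true
print(s.check())         # expected: unsat, via the user propagator's conflict
```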

11:20
An SMT-LIB Theory of Heaps
PRESENTER: Zafer Esen

ABSTRACT. Constrained Horn Clauses (CHCs) are an intermediate program representation that can be generated by several verification tools, and that can be processed and solved by a number of Horn solvers. One of the main challenges when using CHCs in verification is the encoding of heap-allocated data-structures: such data-structures are today either represented explicitly using the theory of arrays, or transformed away with the help of invariants or refinement types, defeating the purpose of CHCs as a representation that is language-independent as well as agnostic of the algorithm implemented by the Horn solver. This paper presents an SMT-LIB theory of heaps tailored to CHCs, with the goal of enabling a standard interchange format for programs with heap data-structures. We introduce the syntax of the theory of heaps, define its semantics in terms of axioms and using a reduction to SMT-LIB arrays and data-types, provide an experimental evaluation and outline possible extensions and future work.

11:40
Challenges and Solutions for Higher-Order SMT Proofs
PRESENTER: Mikolas Janota

ABSTRACT. An interesting goal is for SMT solvers to produce independently checkable proofs. SMT languages already have expressive power that goes beyond first-order languages, and further extensions would give even more expressive power by allowing quantification over function types. Indeed, such extensions are considered in the current proposal for the new standard SMT3. Given the expressive power of SMT and its extensions, careful thought must be given to the intended semantics and an appropriate notion of proof. We propose higher-order set theory as an appropriate interpretation of SMT (and extensions) and obtain an adequate notion of SMT proofs via proof terms in higher-order set theory. To demonstrate the strength of this approach, we give a number of abstract examples that would be difficult to handle with other notions of proof. To demonstrate the practicality of the approach, we describe a family of integer difference logic examples. We give proof terms for each of these examples and check the correctness of the proofs using two proof checkers: the proof checker distributed with the Proofgold system and an alternative checker we have implemented that does not rely on access to the Proofgold blockchain.

12:00
A simple proof format for SMT
PRESENTER: Jochen Hoenicke

ABSTRACT. We present a simple resolution-based proof format targeted for SMT. The proof format uses a syntax similar to SMT-LIB terms. The resolution rule is its only proof rule with premises; all other rules are axioms proving tautological clauses. The format aims to be solver independent by only including a minimal complete set of axioms while still being able to express proofs succinctly. Most of its axioms are purely syntactic; only for arithmetic reasoning, some axioms with side conditions are used for succinct reasoning with linear (in)equalities. The format can be extended with solver-specific rules, which can then either be treated as trusted axioms, or, better, replaced by their low-level proof. The format has been implemented in the solver SMTInterpol and the solver produces proofs for all benchmarks it can solve in the combinations of the theories of equality, linear arithmetic, arrays and datatypes. There is also a stand-alone proof checker for this format.
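
To make the role of the single resolution rule concrete, here is a toy checker for propositional resolution steps (an illustration only, not the SMTInterpol proof format; clauses are sets of literals, with negative literals tagged):

```python
def resolve(c1, c2, pivot):
    """Resolvent of c1 and c2 on pivot: c1 must contain the positive literal,
    c2 its negation; the result is the union minus both pivot occurrences."""
    pos, neg = pivot, ("not", pivot)
    assert pos in c1 and neg in c2, "pivot must occur with both polarities"
    return (c1 - {pos}) | (c2 - {neg})

c1 = frozenset({"p", "q"})              # p or q
c2 = frozenset({("not", "p"), "r"})     # not-p or r
print(resolve(c1, c2, "p"))             # frozenset({'q', 'r'})
```

A checker for the full format would additionally validate each axiom (tautological clause) and, for the arithmetic axioms, their side conditions.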

11:00-12:30 Session 137F (UNIF)
Location: Taub 8
11:00
Higher-Order Unification with Definition by Cases
PRESENTER: David Cerna

ABSTRACT. We discuss unification within the simply-typed $\lambda$-calculus extended by a definition by cases operator (denoted by d) slightly differing from similar operators introduced by earlier investigations. Such operators may be thought of as restrictions of Hilbert's choice operator. We provide several non-trivial examples which illustrate the benefits of introducing such an operator.

11:30
Higher-order unification from E-unification with second-order equations and parametrised metavariables

ABSTRACT. Type checking and, to a larger extent, type and term inference in programming languages and proof assistants can be implemented by means of unification. In the presence of dependent types, variations of higher-order unification are often used, such as higher-order pattern unification. In practice, there are often several mechanisms for abstraction and application, as well as other eliminators (such as projections from pairs), that have to be accounted for in the implementation of higher-order unification for a particular language. In this work, we study the possibility of representing various higher-order unification problems as a special case of E-unification for a second-order algebra of terms. This allows us to present $\beta$-reduction rules for various application terms, and some other eliminators, as equations, and to reformulate higher-order unification problems as $E$-unification problems with second-order equations. We then present a procedure for deciding such problems, introducing a second-order mutate rule (inspired by one-sided paramodulation) and generic versions of the imitate and project rules. We also provide a prototype Haskell implementation of syntax-generic higher-order unification based on these ideas.

12:00
Nominal Anti-Unification of Letrec-Expressions

ABSTRACT. This is work in progress: we present a rule-based algorithm for nominal anti-unification of expressions in a higher-order language with lambda abstraction, function symbols, and recursive let. The algorithm is sound and weakly complete, but has high complexity.

11:00-12:30 Session 137G (VeriProp)
Location: Taub 9
11:00
Distribution Testing and Probabilistic Programming: A Match made in Heaven

ABSTRACT. The field of distribution testing seeks to test whether a given distribution satisfies a property of interest. While there have been deep technical developments in the field of distribution testing, the field has somehow been confined to theoretical studies. I will discuss our work on developing practical algorithmic frameworks based on distribution testing, and its relationship to probabilistic program verification. In particular, I will show that the problem of measuring the distance between straight-line probabilistic programs is #P-hard. As expected, the theoretical hardness shouldn't deter us, and I will discuss our success in using distribution-testing-based frameworks to design efficient samplers.

11:50
A Quantitative Verification Infrastructure

ABSTRACT. Following the success of deductive verification infrastructures for non-probabilistic programs like Viper and Boogie, we talk about our proposal for a deductive verification infrastructure for probabilistic programs. Our quantitative intermediate verification language enables modular reasoning about lower and upper bounds of expected values of programs. In our framework, we can encode and verify different proof rules for weakest pre-expectation semantics. As a proof of concept, we have created a deductive verifier for probabilistic programs on top of our framework. It supports reasoning about lower bounds of weakest liberal pre-expectations and about upper bounds of weakest pre-expectations. For loops, we use encodings of the Park induction and k-induction proof rules.
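
For reference, the Park induction rule mentioned here is the standard upper-bound rule of the weakest pre-expectation calculus (notation assumed; $I$ is a so-called superinvariant):

```latex
% For a loop  while (b) { c }  with post-expectation f, define the
% characteristic function
\[
  \Phi(X) \;=\; [b] \cdot \mathrm{wp}\llbracket c \rrbracket(X) \;+\; [\neg b] \cdot f .
\]
% Park induction: any superinvariant bounds the least fixed point from above.
\[
  \Phi(I) \;\sqsubseteq\; I
  \quad\Longrightarrow\quad
  \mathrm{wp}\llbracket \texttt{while}\,(b)\,\{c\} \rrbracket(f)
  \;=\; \mathrm{lfp}\,\Phi \;\sqsubseteq\; I .
\]
```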

12:10
Inferring Expected Runtimes and Sizes
PRESENTER: Eleanore Meyer

ABSTRACT. We present a modular approach to infer upper bounds on the expected runtimes of probabilistic integer programs automatically. To this end, it computes bounds on the runtimes of program parts and on the sizes of their variables in an alternating way. To evaluate its power, we implemented our approach in a new version of our open-source tool KoAT.
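
A textbook instance of the kind of bound inferred (not an example from the paper): for a loop whose guard is a fair coin flip, the expected number of guard evaluations is

```latex
\[
  \texttt{while}\ \big(\mathrm{flip}(\tfrac{1}{2})\big)\ \{\ \texttt{skip}\ \}:
  \qquad
  \sum_{k \ge 1} k \cdot 2^{-k} \;=\; 2,
\]
% since the loop exits on exactly the k-th guard evaluation with
% probability 2^{-k}.
```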

11:00-12:30 Session 137H (WST)
Location: Taub 4
11:00
Automatic Complexity Analysis of (Probabilistic) Integer Programs via KoAT
PRESENTER: Eleanore Meyer

ABSTRACT. In previous work, we developed an approach for automatic complexity analysis of integer programs, based on an alternating modular inference of upper runtime and size bounds for program parts. Recently, we extended and reimplemented this approach in a new version of our open-source tool KoAT. In order to compute runtime bounds, we analyze subprograms in topological order, i.e., in case of multiple consecutive loops, we start with the first loop and propagate knowledge about the resulting values of variables to subsequent loops. By inferring runtime bounds for one subprogram after the other, in the end we obtain a bound on the runtime complexity of the whole program. We first try to compute runtime bounds for subprograms by means of multiphase linear ranking functions (MRFs). If MRFs do not yield a finite runtime bound for the respective subprogram, then we apply a technique to handle so-called triangular weakly non-linear loops (twn-loops) on the unsolved parts of the subprogram. Moreover, we integrated control-flow refinement via partial evaluation to improve the automatic complexity analysis of programs further. Additionally, we introduced the notion of expected size, which allowed us to extend our approach to the computation of upper bounds on the expected runtimes of probabilistic programs.
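
As a minimal illustration of the single-phase case that MRFs generalize (a textbook example, not one from the paper):

```latex
% For the loop  while (x > 0) { x := x - 1 },  the linear ranking function
% f(x) = x  decreases by 1 in each iteration and must be positive for the
% loop to execute, so the runtime of this subprogram is bounded by
\[
  \mathrm{rt}(x) \;\le\; \max(x, 0).
\]
```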

11:20
LoAT: The Loop Acceleration Tool
PRESENTER: Florian Frohn

ABSTRACT. LoAT is a tool for proving non-termination and lower bounds on the worst-case complexity of integer transition systems, i.e., programs operating on integers. It supports a variety of input formats, user-defined cost models, and it provides detailed proofs. LoAT is based on loop acceleration, a technique to transform single-path loops into non-deterministic straight-line code. Due to its ability to accelerate loops, LoAT can transform programs with complex control flow into quantifier-free first-order formulas, which can then be analyzed by off-the-shelf SMT solvers.
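
A standard acceleration example (an illustration of the technique, not actual LoAT output): the loop while (x > 0) { x := x - 1 } can be summarized by straight-line code whose effect, with a fresh variable n for the number of iterations, is the quantifier-free formula

```latex
\[
  n > 0 \;\wedge\; x - (n - 1) > 0 \;\wedge\; x' = x - n,
\]
```

where the middle conjunct states that the guard still holds just before the last iteration.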

11:40
AProVE 2022
PRESENTER: Jürgen Giesl

ABSTRACT. AProVE is a tool for automatic termination and complexity proofs of Java, C, Haskell, Prolog, and term rewrite systems (TRSs). To analyze programs in high-level languages, AProVE automatically converts them into TRSs and/or Integer Transition Systems (ITSs). Then, a wide range of techniques is employed to prove termination and to compute complexity bounds for the resulting TRSs or ITSs. In particular, the tools KoAT and LoAT are used to infer upper and lower complexity bounds for ITSs, and LoAT and T2 are used for non-termination proofs of ITSs. For term rewrite systems, the generated proofs can be exported to check their correctness using automatic certifiers. AProVE can easily be used via its web interface and there is also a corresponding plug-in for the popular Eclipse software development environment.

12:00
The System SOL version 2022 (Tool Description)
PRESENTER: Makoto Hamana

ABSTRACT. SOL is a Haskell-based tool for showing confluence and strong normalisation of higher-order computation rules. Termination analysis is based on Blanqui's General Schema criterion and a new modular termination proof method of second-order computation systems recently shown by Hamana.

12:20
Things WANDA cannot do

ABSTRACT. In this short presentation, I will discuss the tool WANDA (a participant in the higher-order termination category of the termination competition). In particular, I will focus on weaknesses of WANDA, and challenges where other participants have something to win!

12:30-14:00 Lunch Break

Lunches will be held in Taub hall.

14:00-15:30 Session 138A: Presenting and Working with Coq Code (Coq)
Location: Taub 5
14:00
jsCoq: Literate Proof Documents Dressed for the Web
15:00
HenBlocks: Structured Editing for Coq
PRESENTER: Bernard Boey

ABSTRACT. There are a number of pain points in using the Coq Proof Assistant, which affect beginners most. Structured editors, which allow the user to manipulate structured blocks corresponding to the abstract syntax of a program, have been used to make programming more accessible to beginners. However, they have not been applied to proving thus far. We present HenBlocks (available at https://henblocks.github.io), a fully-fledged web-based structured editor that allows users to write Coq proofs by manipulating blocks. We conclude that structured editing is a promising approach to proof writing that warrants more exploration, development, and testing.

14:00-15:30 Session 138B (GuttmanFest)
Location: Taub 6
14:00
Three Branches of Accountability
14:30
Adapting constraint solving to automatically analyze UPI Protocols
14:50
Quantum Machine Learning and Fraud Detection
15:10
Formal Methods and Mathematical Intuition
14:00-15:35 Session 138C (PAAR)
Location: Taub 7
14:00
Short introduction to the COST action EuroProofNet
14:05
EuroProofNet presentation on proofs in Dedukti
14:50
EuroProofNet presentation on SMT and proofs
14:00-15:30 Session 138D: 2nd invited talk and paper presentation (SC^2)
Location: Taub 3
14:00
Computer algebra and automation in Lean's mathematical library

ABSTRACT. Mathlib is a library of mathematics formalized in the Lean proof assistant amounting to nearly one million lines of code. Taking advantage of Lean's powerful metaprogramming framework, mathlib defines hundreds of custom tactics. Some of these tactics implement general proof search or decision procedures, while others are small-step tactics that help establish a familiar mathematical vernacular for mathlib proofs. This talk will survey the current state of mathlib's suite of tactics, focusing on those related to the SC^2 community, and will speculate about how these connections could be strengthened. With the arrival of Lean 4, a powerful programming language in its own right, the space of possibilities will grow significantly.
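
For readers unfamiliar with mathlib's tactic suite, two of the algebra-flavored tactics closest to SC^2 interests are ring (normalization of commutative (semi)ring expressions) and linarith (closing linear-arithmetic goals); a small Lean 3 / mathlib sketch:

```lean
import tactic

-- `ring` proves equalities of commutative-ring expressions by normalization.
example (x y : ℤ) : (x + y)^2 = x^2 + 2*x*y + y^2 := by ring

-- `linarith` closes goals entailed by linear arithmetic over the hypotheses.
example (a b : ℚ) (h₁ : a ≤ b) (h₂ : 0 ≤ a) : 0 ≤ 2 * b := by linarith
```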

15:00
Enumerating Projective Planes of Order Nine with Proof Verification
PRESENTER: Daniel Dallaire

ABSTRACT. In this paper we describe a method of enumerating projective planes of order nine. The enumeration was previously completed by Lam, Kolesova, and Thiel using highly optimized and customized search code. Despite the importance of this result in the classification of projective geometries, the previous search relied on unverified code and has never been independently verified. Our enumeration procedure uses a hybrid satisfiability (SAT) solving and symbolic computation approach. SAT solving performs an enumerative search, while symbolic computation removes symmetries from the search space. Certificates are produced which demonstrate the enumeration completed successfully.

14:00-15:30 Session 138F: Invited talk (UNIF)
Location: Taub 8
14:00
Unification Decision Procedures using Basic Narrowing modulo an Equational Theory
15:00
Fuzzy Order-Sorted Feature Term Unification

ABSTRACT. This paper provides a generalized definition of the unification of Order-Sorted Feature (OSF) terms that considers a fuzzy subsumption relation between sort symbols rather than an ordinary (crisp) one. In this setting the unifier of two OSF terms is associated with a subsumption degree. We refer to the problem of unifying two OSF terms and computing the associated subsumption degree as fuzzy OSF term unification.

14:00-15:30 Session 138G (VeriProp)
Location: Taub 9
14:00
Quantifying Leakage in Privacy Risk Analysis using Probabilistic Programming
14:50
Deterministic stream-sampling for probabilistic programming: semantics and verification
PRESENTER: William Smith

ABSTRACT. Probabilistic programming languages rely fundamentally on some notion of sampling, and this is doubly true for probabilistic programming languages which perform Bayesian inference using Monte Carlo techniques. Verifying samplers, i.e., proving that they generate samples from the correct distribution, is crucial to the use of probabilistic programming languages for statistical modelling and inference. However, the typical denotational semantics of probabilistic programs is incompatible with deterministic notions of sampling, which is problematic, considering that most statistical inference is performed using pseudorandom number generators.

We present an operational and denotational semantics for probabilistic programming, in terms of operations on samplers, that allows for deterministic generation of random variables, and which distinguishes between samplers (programs) and the probability distributions which they target. We also introduce a sound calculus for demonstrating that samplers target a desired distribution, justifying common approaches to constructing samplers, which can be applied to produce samplers that are correct by construction.

15:10
Probabilistic Guarded Kleene Algebra with Tests

ABSTRACT. We extend Guarded Kleene Algebra with Tests (GKAT) with probabilistic branching and loops to reason about imperative programs with probabilistic control flow. We introduce a coalgebraic operational semantics and a sound Salomaa-style axiomatisation of bisimulation equivalence. As an example, we show the correctness of a variant of the Knuth-Yao algorithm simulating a three-sided die with a series of fair coin tosses.
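
The die construction mentioned at the end is easy to sketch; the following rejection-style program (the basic scheme that Knuth-Yao-type algorithms refine) produces a uniform three-sided die from fair coin flips:

```python
import random
from collections import Counter

def three_sided_die(coin=lambda: random.randint(0, 1)):
    while True:
        outcome = 2 * coin() + coin()   # two flips: 0..3, each with prob. 1/4
        if outcome < 3:                 # accept 0, 1, 2; reject 3 and retry
            return outcome + 1          # faces 1..3, each with prob. 1/3

print(Counter(three_sided_die() for _ in range(30000)))  # roughly uniform
```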

14:00-15:30 Session 138H (WST)
Location: Taub 4
14:00
CeTA – A certifier for termCOMP 2022
PRESENTER: René Thiemann

ABSTRACT. CeTA is a certifier that can be used to validate automatically generated termination and complexity proofs during the termination competition 2022. We briefly highlight some features of CeTA that have been recently added.

14:20
Certified Matchbox

ABSTRACT. We describe the Matchbox termination prover that takes part in the category of certified termination of string rewriting in the Termination Competition 2022.

14:35
MultumNonMulta at termCOMP 2022

ABSTRACT. The software tool MultumNonMulta (MnM) aims at proving termination or non-termination of string rewrite systems automatically. The purpose of this prototype implementation is to demonstrate the power of a combination of few but strong proof methods, notably matrix interpretations. After a short survey of old and new features, we discuss the reasons that prevent MnM from taking part in the certified categories of the termination competition.

14:55
Runtime complexity of parallel-innermost term rewriting at TermComp 2022

ABSTRACT. Thus far, the annual Termination and Complexity Competition (TermComp) has focussed on analysis of sequential languages. We aim to broaden the scope of TermComp to languages with parallel computation. As a first step, we consider (max-)parallel-innermost term rewriting. In contrast to innermost rewriting, which rewrites only one innermost redex at a time, parallel-innermost rewriting uses a multi-step rewrite relation: all innermost redexes are rewritten simultaneously. This approach can lead to significant decreases in the maximal length of the rewrite sequence from a given term when compared to innermost rewriting.
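
A toy illustration of the multi-step relation: with the single rule a -> b, the term f(a, a) needs two innermost steps, f(a, a) -> f(b, a) -> f(b, b), whereas a single parallel-innermost step rewrites both innermost redexes simultaneously:

```latex
\[
  f(a, a) \;\Rrightarrow\; f(b, b).
\]
```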

In this talk, we present a new category proposal for runtime complexity of parallel-innermost term rewriting at TermComp. We also discuss recent results on automated analysis techniques for this problem and present experimental results obtained with an implementation of these techniques in the program analysis tool AProVE.

15:15
DEPP
PRESENTER: Aart Middeldorp

ABSTRACT. We present a simple tool that implements the encodings of Hilbert's 10th problem as (incremental) polynomial termination problems of our FSCD 2022 paper. The tool can be used to generate problems for TPDB, in particular for testing the implementation of polynomial interpretations in termination tools.

15:30-16:00 Coffee Break
16:00-17:00 Session 139A: Coq and its Ecosystem (Coq)
Location: Taub 5
16:00
Coq Community Survey 2022: Summary of Results

ABSTRACT. The Coq Community Survey 2022 was an online public survey conducted during February 2022. Its main goal was to obtain an updated picture of the Coq user community and inform future decisions taken by the Coq team. In particular, the survey aimed to enable the Coq team to make effective decisions about the development of the Coq software, and also about matters that pertain to the ecosystem maintained by Coq users in academia and industry. In this presentation proposal, we outline how the survey was designed, its content, and some initial data analysis and directions. Due to the many free-text answers to some questions that require careful analysis, we expect a full presentation to include additional data and conclusions.

16:30
Discussion with Coq Core Development Team
16:00-16:30 Session 139B: Paper presentation (SC^2)
Location: Taub 3
16:00
An SC-Square Approach to the Minimum Kochen–Specker Problem
PRESENTER: Zhengyu Li

ABSTRACT. One of the most fundamental theorems in quantum mechanics is the Kochen–Specker theorem, which shows the nonexistence of a noncontextual hidden-variable model that can reproduce the predictions of quantum theory when the dimension of the Hilbert space is at least three. The theorem hinges on the existence of a mathematical object called a Kochen–Specker (KS) vector system, and its minimum size in three dimensions has been an open problem worked on by renowned physicists and mathematicians for over fifty years. We improved the lower bound on the size of a three-dimensional KS system from 22 to 23 with a significant speed-up over the most recent computational approach. Our approach combines the combinatorial search capabilities of satisfiability (SAT) solvers, the isomorph-free exhaustive generation capabilities of computer algebra systems (CASs), and the nonlinear real arithmetic solving capabilities of satisfiability modulo theories (SMT) solvers. Our work therefore fits directly into the Satisfiability Checking and Symbolic Computation (SC-Square) paradigm.

16:00-17:00 Session 139D (UNIF)
Location: Taub 8
16:00
Graph-Embedded Term Rewrite Systems and Applications (A Preliminary Report)

ABSTRACT. In this preliminary paper we report on the development of a new form of term rewrite system, the graph-embedded term rewrite system. These systems are inspired by the graph minor relation and are more flexible extensions of the well known homeomorphic embedded term rewrite systems. As a motivating application area for the new systems, we consider the symbolic analysis of security protocols. In this field, restricted term rewrite systems, such as subterm-convergent, have proven useful since many of the decision procedures are sound and complete for such systems. However, many of the same decision procedures still work for examples of systems which are "beyond subterm-convergent". Interestingly, the applicability of the decision procedure to these examples must typically be proven on an individual basis since they don't fit into an existing syntactic definition for which the procedures are known to work. Here we show that most of these systems are examples of graph-embedded term rewrite systems.

16:30
Restricted Unification in the Description Logic FLbot

ABSTRACT. In a previous paper, we have investigated restricted unification in the Description Logic (DL) FL0. Here we extend this investigation to the DL FLbot, which extends FL0 with the bottom concept. We show that restricted unification in FLbot is decidable and provide some upper and lower bounds for the complexity. This is particularly interesting since the decidability status of unrestricted unification in FLbot appears to be still open. We also show an ExpTime lower bound for the unrestricted problem.

16:00-17:15 Session 139E (VeriProp)
Location: Taub 9
16:00
Uncertainty propagation in discrete-time systems using probabilistic forms

ABSTRACT.  We consider the problem of rigorously quantifying the uncertainty in the states of a dynamical system due to the influence of initial state uncertainties and stochastic disturbance inputs. We use ideas from interval arithmetic and its variations such as affine arithmetic or Taylor expansions to rigorously represent the state variables at time t as a function of the initial state variables and noise symbols that model the random exogenous inputs. We also show how concentration of measure inequalities can be employed to prove rigorous bounds on the tail probabilities of these state variables.
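
A minimal interval-arithmetic sketch of such uncertainty propagation (an illustration of the coarsest representation mentioned, plain intervals rather than affine or Taylor forms, and not the authors' tool; the system and parameters are made up):

```python
def step(x_lo, x_hi, a, w_lo, w_hi):
    """One step of x' = a*x + w with x in [x_lo, x_hi], w in [w_lo, w_hi]."""
    ax_lo, ax_hi = sorted((a * x_lo, a * x_hi))  # sorted() handles negative a
    return ax_lo + w_lo, ax_hi + w_hi            # interval addition

x_lo, x_hi = -0.1, 0.1                           # initial state uncertainty
for _ in range(10):
    x_lo, x_hi = step(x_lo, x_hi, a=0.9, w_lo=-0.05, w_hi=0.05)
print(x_lo, x_hi)  # rigorous enclosure of x_10; widths converge toward [-0.5, 0.5]
```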

16:50
Verifying Probabilistic Programs using Generating Functions

ABSTRACT. We study discrete probabilistic programs with potentially unbounded looping behaviors over an infinite state space. We present, to the best of our knowledge, the first decidability result for the problem of determining whether such a program generates exactly a specified distribution over its outputs (provided the program terminates almost-surely). The class of distributions that can be specified in our formalism consists of standard distributions (geometric, uniform, etc.) and finite convolutions thereof. Our method relies on representing these (possibly infinite-support) distributions as probability generating functions which admit effective arithmetic operations. We have automated our techniques in a tool called Prodigy, which supports automatic invariance checking, compositional reasoning of nested loops, and efficient queries to the output distribution, as demonstrated by experiments.