09:00  Model Finding for Exploration 
09:30  On Orderings in Security Models 
10:00  Benign Interaction of Security Domains PRESENTER: Flemming Nielson 
09:00  The Logic Languages of the TPTP World and Proofs and Models in the TPTP World ABSTRACT. The TPTP World is a well-established infrastructure that supports research, development, and deployment of Automated Theorem Proving (ATP) systems. This talk provides an overview of the logic languages of the TPTP World, from classical first-order form, through typed first-order form, up to typed higher-order form, and beyond to non-classical forms. The logic languages are described in a non-technical way, and are illustrated with examples using the TPTP language. The second part of the talk describes the current (underdeveloped?) representation and use of proofs and models in the TPTP World. Topics covered include the SZS ontology; the representation of derivations and finite models; the TSTP solution library; tools for examining and manipulating derivations; uses of individual solutions, and of the TSTP as a whole; projects that have used the TPTP format and tools for proofs and models; the proposed "Tons of Data for Theorem Provers" library; and future work and directions to improve the status quo.

10:00  Automated Reasoning in Non-Classical Logics in the TPTP World PRESENTER: Alexander Steen ABSTRACT. Non-classical logics are used in a wide spectrum of disciplines, including artificial intelligence, computer science, mathematics, and philosophy. The de facto standard infrastructure for automated theorem proving, the TPTP World, currently supports only classical logics. This paper describes the latest extension of the TPTP World, providing languages and infrastructure for reasoning in non-classical logics. The extension integrates seamlessly with the existing TPTP World. 
09:00  Non-Linear Real Arithmetic with Transcendental Function Symbols: Undecidable but Easy? ABSTRACT. Recent years have seen an incredible boost in the power of SMT solvers. This includes the ability to decide non-linear real arithmetic using algorithms inspired by cylindrical algebraic decomposition, and the ability to handle transcendental function symbols by interval, linearization, and heuristic techniques. However, despite this success, today's SMT solvers still do not succeed on many real-number benchmark problems that human intuition would consider easy to solve. In the talk, I will present some thoughts on formalizing this notion of "easy to solve", and some mathematical and algorithmic tools that promise to be useful for arriving at SMT solvers that can efficiently solve problems that are easy according to such a formalization. In particular, I will concentrate on the problem of proving satisfiability in the presence of transcendental function symbols. The talk is based on joint work with Peter Franek, Enrico Lipparini, and Piotr Zgliczynski. 
10:00  Cylindrical Algebraic Coverings for Quantifiers PRESENTER: Gereon Kremer ABSTRACT. The cylindrical algebraic coverings method was originally proposed to decide the satisfiability of a set of non-linear real arithmetic constraints. We reformulate and extend the cylindrical algebraic coverings method to allow for checking the validity of arbitrary non-linear arithmetic formulas, adding support for both quantifiers and arbitrary Boolean structure. Furthermore, we also propose a variant to perform quantifier elimination on such formulas. 
09:00  Panel discussion "SMT: Past, Present and Future" PRESENTER: Cesare Tinelli 
10:00  PRESENTER: Maria Paola Bonacina ABSTRACT. CDSAT (Conflict-Driven Satisfiability) is a paradigm for theory combination that works by coordinating theory modules to reason in the union of the theories in a conflict-driven manner. We generalize CDSAT to the case of non-disjoint theories by presenting a new CDSAT theory module for a theory of arrays with abstract length, which is an abstraction of the theory of arrays with length. The length function is a bridging function as it forces theories to share symbols, but the proposed abstraction limits the sharing to one predicate symbol. The CDSAT framework handles shared predicates with minimal changes, and the new module satisfies the CDSAT requirements, so that completeness is preserved. 
09:00  Probabilistic Hyperproperties ABSTRACT. Four decades ago, Lamport used the notion of trace properties as a means to specify the correctness of individual executions of concurrent programs. This notion was later formalized and classified by Alpern and Schneider into safety and liveness properties. Temporal logics like LTL and CTL were built on these efforts to give formal syntax and semantics to requirements on trace properties. Subsequently, verification algorithms were developed to reason about individual executions of a system. However, it turns out that many interesting requirements are not trace properties. For example, important information-flow security policies (e.g. non-interference, observational determinism) or service-level agreements (e.g. mean response time, percentage uptime) cannot be expressed as properties of individual execution traces of a system. Rather, they are properties of sets of execution traces, also known as hyperproperties. Temporal logics such as HyperLTL and HyperCTL∗ have been proposed to provide a unifying framework to express and reason about hyperproperties. This talk is devoted to a special class of hyperproperties: we ask what hyperproperties are in the context of systems with random behavior. We will discuss which probabilistic relations between independent executions of a system are relevant, how we can formally express them in a temporal logic, and how we can decide the truth of such statements. 
09:50  PRESENTER: Ichiro Hasuo ABSTRACT. Solving stochastic games with the reachability objective is a fundamental problem, especially in quantitative verification and synthesis. For this purpose, bounded value iteration (BVI) attracts attention as an efficient iterative method. However, BVI's performance is often impeded by costly end component (EC) computation that is needed to ensure convergence. Our contribution is a novel BVI algorithm that conducts, in addition to local propagation by the Bellman update that is typical of BVI, global propagation of upper bounds that is not hindered by ECs. To conduct global propagation in a computationally tractable manner, we construct a weighted graph and solve the widest path problem in it. Our experiments show the algorithm's performance advantage over previous BVI algorithms that rely on EC computation. 
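The widest path problem the abstract mentions asks for a path maximizing the minimum edge weight. Purely as background (a textbook Dijkstra-style algorithm, not the authors' implementation; the graph encoding is an assumption for illustration):

```python
import heapq

def widest_path(graph, source):
    """Bottleneck shortest path: maximize the minimum edge weight on a path.

    graph: dict mapping node -> list of (neighbor, weight) pairs,
    with non-negative weights. Returns a dict mapping each reachable
    node to the best achievable bottleneck width from source.
    """
    width = {source: float('inf')}
    heap = [(-float('inf'), source)]  # max-heap via negated widths
    while heap:
        w, u = heapq.heappop(heap)
        w = -w
        if w < width.get(u, 0.0):
            continue  # stale queue entry
        for v, wt in graph.get(u, []):
            cand = min(w, wt)  # bottleneck of the extended path
            if cand > width.get(v, 0.0):
                width[v] = cand
                heapq.heappush(heap, (-cand, v))
    return width
```

In the BVI setting, such a global pass lets upper-bound information travel across the graph in one shot instead of percolating through repeated local Bellman updates.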
10:10  Safe Sampling-Based Planning for Stochastic Systems with Uncertain Dynamics PRESENTER: Thom Badings ABSTRACT. Autonomous systems that operate in safety-critical settings must be able to behave safely under stochastic disturbances and imprecise model parameters. Due to their uncertain and stochastic nature, such autonomous systems may be captured by probabilistic programs. A typical reach-avoid planning problem is to reach a desirable goal state while avoiding certain unsafe states. However, due to stochastic disturbances and imprecise model parameters, the outcome of an action by the controller of the autonomous system may be uncertain. Moreover, the set of safe states is typically non-convex, making it infeasible to find global solutions to reach-avoid problems in general. In this research, we employ abstraction-based methods to solve reach-avoid problems for continuous-state systems with partially unknown and/or stochastic dynamics. Given a partial model specification and a reach-avoid problem, we abstract the system into a discrete-state model that captures the uncertainty about the dynamics by probabilistic transitions between discrete states. We resort to sampling techniques to compute probably approximately correct (PAC) bounds on these transition probabilities, which we capture in a so-called interval Markov decision process (iMDP). We synthesize policies using a tailored version of the probabilistic model checker PRISM for iMDPs. We then convert the policy into a feedback controller for the continuous system, for which we give high-confidence guarantees on the probability that the reach-avoid problem is satisfied. With a diverse set of benchmarks, we show the robustness and practical applicability of our method for solving realistic reach-avoid problems. 
09:30  A Coq Library for Mechanised First-Order Logic PRESENTER: Dominik Kirst ABSTRACT. We report on an ongoing collaborative effort to consolidate several Coq developments concerning metamathematical results in first-order logic into a single library. We first describe the framework regarding the representation of syntax, deduction systems, and semantics, as well as its instantiation to axiom systems and tools for user-friendly interaction. Next, we summarise the included results, mostly connected to completeness, undecidability, and incompleteness. Finally, we conclude by reporting on challenges experienced and anticipated during the integration. The current status of the project can be tracked in a public fork of the Coq Library of Undecidability Proofs. 
10:00  QuantumLib: A Library for Quantum Computing in Coq PRESENTER: Jacob Zweifler ABSTRACT. The INQWIRE QuantumLib is a library for representing and reasoning about quantum computing in the Coq proof assistant. The library includes a deep thread of linear algebra definitions and proofs that emphasize mathematical facts from the perspective of quantum computing. It includes broad support for different quantum objects and operations, allowing users to simulate many different types of quantum programs. QuantumLib now provides the mathematical foundation for a range of other quantum verification projects. QuantumLib emphasizes both computation and proof (unlike libraries such as Mathematical Components or CoqEAL that focus on one or the other) and is specialized to the use case of verified quantum computing in Coq. 
10:00  The Termination Competition ABSTRACT. In this talk, we will present and discuss the results of the international termination and complexity competition. 
11:00  Trakt: a generic preprocessing tactic for theory-based proof automation PRESENTER: Enzo Crance ABSTRACT. We present Trakt, a Coq plugin to preprocess goals before handing them to theory-based proof automation tactics. It is designed to be user-friendly, generic, and extensible. The plugin is available with documentation and examples on GitHub. 
11:30  PRESENTER: Jason Gross ABSTRACT. In most programming languages, asymptotic performance issues can almost always be explained by reference to the algorithm being implemented. At most, the standard asymptotic performance of explicitly used operations on chosen data structures must be considered. Even the constant factors in performance bottlenecks can often be explained without reference to the implementation of the interpreter, compiler, or underlying machine. In 10+ years of working with Coq, we (Jason, Andres, and our colleagues) have found this pattern, which holds across multiple programming languages, to be the exception rather than the rule in Coq! This turns performant proof engineering, especially performant proof-automation engineering, from straightforward science into unpredictable and fragile guesswork. By presenting a sampling of examples in detail, we propose a defense of the thesis: performance bottlenecks in proof automation almost always result from inefficiencies in parts of the system that are conceptually distant from the theorem being proven. Said another way, *debugging, understanding, and fixing performance bottlenecks in automated proofs almost always requires extensive knowledge of the proof engine, and almost never requires any domain-specific knowledge of the theorem being proven*. Further, there is no clear direction of improvement: we know of no systematic proposal, nor even folklore among experts, of what primitives and performance characteristics are sufficient for a performant proof engine. We hope to start a discussion on the obvious corollary of this thesis: *this should not be!* Our presentation, we hope, will serve as a call for a POPLMark for proof engines: a call for designing and implementing an adequate proof engine for *scalable, performant, modular proof automation*. 
12:00  PRESENTER: Seth Poulsen ABSTRACT. Understanding mathematical proofs is critical for students learning the foundations of computing. Having students construct mathematical proofs with the help of a computer is appealing, as it makes it easier to automatically grade student work and generate practice problems. Most existing tools for constructing proofs with a computer are restricted systems that only permit simple logics, or involve a large overhead in students learning how to use them. Proof Blocks (proofblocks.org), a tool that allows students to drag and drop pre-written lines of a proof into the correct order, is a nice compromise because the tool is easy for students to use to construct proofs on any topic (Poulsen et al., 2022). However, a downside is that the process of writing questions can be time-consuming and error-prone: an instructor must write the proof, break it into lines, and then specify a directed acyclic graph giving the logical dependence between lines of the proof. In this paper, we document the first step toward building a system to automatically generate Proof Blocks problems from Coq proofs: a Coq plugin which generates a natural-language proof from a Coq proof. Our natural-language generator differs from similar tools in that we deliberately restrict which definitions and tactics are allowed, in the interest of improving the readability of the output. 
11:00  A Declaration of Software Independence PRESENTER: Peter Y A Ryan 
11:30  A TutorialStyle Introduction to DY* PRESENTER: Guido Schmitz 
11:50  Establishing the Price of Privacy in Federated Data Trading PRESENTER: Kangsoo Jung 
12:10  Prototyping Formal Methods Tools: a Protocol Analysis Case Study PRESENTER: Abigail Siegel 
11:00  Generating Compressed Combinatory Proof Structures: An Approach to Automated First-Order Theorem Proving ABSTRACT. Representing a proof tree by a combinator term that reduces to the tree lets subtle forms of duplication within the tree materialize as duplicated subterms of the combinator term. In a DAG representation of the combinator term, these straightforwardly factor into shared subgraphs. To search for proofs, combinator terms can be enumerated, like clausal tableaux, while simultaneously relating formulas associated with components of the enumerated structures through unification. To restrict the search space, the enumeration can be based on proof schemas defined as parameterized combinator terms. We introduce this "combinator term as proof structure" approach to automated first-order proving, present an implementation, and report first experimental results. The approach builds on a term view of proof structures rooted in condensed detachment and the connection method. It realizes features known from the connection structure calculus, which had not been implemented so far. 
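As generic background on terms that reduce to trees, here is a toy normalizer for left-nested SKI combinator applications (an illustration of combinator reduction in general, not the paper's implementation; the tuple-based term representation is an assumption):

```python
def step(term):
    """Perform one leftmost SKI reduction step; return None at normal form.

    Terms are atoms ('S', 'K', 'I', or variables) and pairs (f, x) for
    application, so the application S x y z is ((('S', x), y), z).
    """
    if not isinstance(term, tuple):
        return None
    f, x = term
    if f == 'I':                                   # I x  ->  x
        return x
    if isinstance(f, tuple):
        g, y = f
        if g == 'K':                               # K y x  ->  y
            return y
        if isinstance(g, tuple) and g[0] == 'S':   # S z y x  ->  (z x) (y x)
            z = g[1]
            return ((z, x), (y, x))
    reduced = step(f)                              # otherwise reduce a subterm
    if reduced is not None:
        return (reduced, x)
    reduced = step(x)
    return (f, reduced) if reduced is not None else None

def normalize(term):
    """Reduce until no step applies (may diverge on non-normalizing terms)."""
    nxt = step(term)
    while nxt is not None:
        term, nxt = nxt, step(nxt)
    return term
```

Note how the S rule duplicates the argument x: in a DAG representation that duplicate becomes a shared subgraph, which is the compression effect the abstract exploits.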
11:30  An Extensible Logic Embedding Tool for Lightweight Non-Classical Reasoning ABSTRACT. The logic embedding tool encodes non-classical reasoning problems into classical higher-order logic. It is easily extensible and can support an increasing number of different non-classical logics as reasoning targets. When used as a preprocessor or library for higher-order theorem provers, the tool admits off-the-shelf automation for logics for which otherwise few or no provers are available. 
12:00  The Isabelle Community Benchmark PRESENTER: Fabian Huch ABSTRACT. Choosing hardware for theorem proving is no simple task: automated provers are highly complex and optimized programs, often utilizing a parallel computation model, and there is little prior research on the impact of hardware on prover performance. To alleviate the problem for Isabelle, we initiated a community benchmark where the build time of HOL-Analysis is measured. On 54 distinct CPUs, a total of 669 runs with different Isabelle configurations were reported by Isabelle users. Results range from 107s to over 11h. We found that current consumer CPUs performed best, with an optimal number of 8 to 16 threads, largely independent of heap memory. As for hardware parameters, CPU base clock affected multi-threaded execution most, with a linear correlation of 0.37, whereas boost frequency was the most influential parameter for single-threaded runs (correlation coefficient 0.55); cache size played no significant role. When comparing our benchmark scores with popular high-performance computing benchmarks, we found a strong linear relationship with Dolfyn (R² = 0.79) in the single-threaded scenario. Using data from the 3DMark CPU Profile consumer benchmark, we created a linear model for optimal (multi-threaded) Isabelle performance. On validation, the model has an average R² score of 0.87; the mean absolute error of the final model corresponds to a wall-clock time of 46.6s. With a dataset of true median values for the 3DMark, the error improves to 37.1s. 
11:00  NP: SMT-Solving Combinatorial Inequalities PRESENTER: Ali Kemal Uncu ABSTRACT. This paper accompanies a new dataset of QF_NRA problems for the SMT-LIB. The problems come from an automated proof procedure for combinatorial inequalities, and have not been tackled by SMT solvers before. We describe the methodology and give one new such proof to illustrate it. We then describe the dataset and the results of benchmarking on it, which reveal the new dataset to be quite different from the existing benchmarks, and well suited for solution by SMT. The benchmarking also raises some interesting debate on the use of rational functions and algebraic numbers in the SMT-LIB. 
11:30  Decidability of difference logics with unary predicates PRESENTER: Baptiste Vergain ABSTRACT. We investigate the decidability of a family of logics mixing difference-logic constraints and unary uninterpreted predicates. The focus is set on logics whose domain of interpretation is $\mathbb{R}$, but whose language has a recognizer for integer values. We first establish the decidability of the logic allowing unary uninterpreted predicates, order constraints between real and integer variables, and difference-logic constraints between integer variables. Afterwards, we prove the undecidability of the logic where unary uninterpreted predicates and difference-logic constraints between real variables are allowed. 
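For readers unfamiliar with the difference-logic fragment itself: a conjunction of constraints of the form x - y <= c is satisfiable iff the associated constraint graph has no negative cycle, which a textbook Bellman-Ford pass detects. This is a generic sketch of the background theory, unrelated to the paper's own decision procedure:

```python
def diff_logic_sat(variables, constraints):
    """Decide a conjunction of difference constraints.

    constraints: list of (x, y, c) triples encoding x - y <= c.
    Returns (True, model) where model maps variables to integers/reals
    satisfying all constraints, or (False, None) on a negative cycle.
    """
    # Implicit source with 0-weight edges to every node: start all at 0.
    dist = {v: 0 for v in variables}
    for _ in range(len(variables)):
        for x, y, c in constraints:          # relax edge y -> x (weight c)
            if dist[y] + c < dist[x]:
                dist[x] = dist[y] + c
    # Any further improvement after |V| rounds witnesses a negative cycle.
    for x, y, c in constraints:
        if dist[y] + c < dist[x]:
            return False, None
    return True, dist                        # dist[x] - dist[y] <= c holds
```

The stabilized distances themselves form a satisfying assignment, which is why difference logic admits such efficient dedicated solvers.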
12:00  PRESENTER: Mădălina Erașcu ABSTRACT. Automated deployment of component-based applications in the Cloud became important with the increased digital market demand. It consists of: (1) selection of the computing resources, (2) distribution/assignment of the application components over the available computing resources, and (3) dynamic modification to cope with peaks of user requests. We focus on the first two steps, which can be formulated as a linear constraint optimization problem (COP) and solved using existing exact general techniques, such as constraint programming (CP), mathematical programming (MP), and optimization modulo theories (OMT). However, the computational requirements are high, making the problem intractable in practice. We address this issue by using problem-tailored static symmetry-breaking techniques. 
11:00  User-Propagators for Custom Theories in SMT Solving PRESENTER: Clemens Eisenhofer ABSTRACT. We present ongoing work on developing the user-propagator framework in SMT solving. We argue that the integration of user-propagators in SMT solving yields an efficient approach to custom theory reasoning, without fundamental changes to the underlying SMT architecture. We showcase our approach in the SMT solver Z3, provide practical evidence of our work, and discuss potential avenues for further improvements. 
11:20  PRESENTER: Zafer Esen ABSTRACT. Constrained Horn Clauses (CHCs) are an intermediate program representation that can be generated by several verification tools, and that can be processed and solved by a number of Horn solvers. One of the main challenges when using CHCs in verification is the encoding of heap-allocated data structures: such data structures are today either represented explicitly using the theory of arrays, or transformed away with the help of invariants or refinement types, defeating the purpose of CHCs as a representation that is language-independent as well as agnostic of the algorithm implemented by the Horn solver. This paper presents an SMT-LIB theory of heaps tailored to CHCs, with the goal of enabling a standard interchange format for programs with heap data structures. We introduce the syntax of the theory of heaps, define its semantics in terms of axioms and via a reduction to SMT-LIB arrays and datatypes, provide an experimental evaluation, and outline possible extensions and future work. 
11:40  Challenges and Solutions for Higher-Order SMT Proofs PRESENTER: Mikolas Janota ABSTRACT. An interesting goal is for SMT solvers to produce independently checkable proofs. SMT languages already have expressive power that goes beyond first-order languages, and further extensions would give even more expressive power by allowing quantification over function types. Indeed, such extensions are considered in the current proposal for the new standard SMT3. Given the expressive power of SMT and its extensions, careful thought must be given to the intended semantics and an appropriate notion of proof. We propose higher-order set theory as an appropriate interpretation of SMT (and extensions) and obtain an adequate notion of SMT proofs via proof terms in higher-order set theory. To demonstrate the strength of this approach, we give a number of abstract examples that would be difficult to handle by other notions of proof. To demonstrate the practicality of the approach, we describe a family of integer difference logic examples. We give proof terms for each of these examples and check the correctness of the proofs using two proof checkers: the proof checker distributed with the Proofgold system and an alternative checker we have implemented that does not rely on access to the Proofgold blockchain. 
12:00  A simple proof format for SMT PRESENTER: Jochen Hoenicke ABSTRACT. We present a simple resolution-based proof format targeted for SMT. The proof format uses a syntax similar to SMT-LIB terms. The resolution rule is its only proof rule with premises; all other rules are axioms proving tautological clauses. The format aims to be solver-independent by only including a minimal complete set of axioms while still being able to express proofs succinctly. Most of its axioms are purely syntactic; only for arithmetic reasoning, some axioms with side conditions are used for succinct reasoning with linear (in)equalities. The format can be extended with solver-specific rules, which can then either be treated as trusted axioms, or, better, replaced by their low-level proof. The format has been implemented in the solver SMTInterpol, and the solver produces proofs for all benchmarks it can solve in the combinations of the theories of equality, linear arithmetic, arrays, and datatypes. There is also a standalone proof checker for this format. 
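For orientation, the single premised rule of such a format, propositional resolution, can be sketched on DIMACS-style clauses (signed integers as literals). This is a generic illustration of the rule, not SMTInterpol's checker:

```python
def resolve(c1, c2, pivot):
    """Resolve clause c1 (containing pivot) with c2 (containing -pivot).

    Clauses are lists of non-zero integers; a negative literal is the
    negation of the corresponding positive one. Returns the sorted
    resolvent; the empty list is the empty clause, i.e. a refutation.
    """
    assert pivot in c1 and -pivot in c2, "pivot must occur with opposite signs"
    resolvent = (set(c1) - {pivot}) | (set(c2) - {-pivot})
    return sorted(resolvent)
```

A checker for a resolution-only format then just replays such steps and verifies that the final derived clause is empty.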
11:00  Higher-Order Unification with Definition by Cases PRESENTER: David Cerna ABSTRACT. We discuss unification within the simply-typed $\lambda$-calculus extended by a definition-by-cases operator (denoted by d) slightly differing from similar operators introduced by earlier investigations. Such operators may be thought of as restrictions of Hilbert's choice operator. We provide several non-trivial examples which illustrate the benefits of introducing such an operator. 
11:30  Higher-order unification from E-unification with second-order equations and parametrised metavariables ABSTRACT. Type checking and, to a larger extent, type and term inference in programming languages and proof assistants can be implemented by means of unification. In the presence of dependent types, variations of higher-order unification are often used, such as higher-order pattern unification. In practice, there are often several mechanisms for abstraction and application, as well as other eliminators (such as projections from pairs), that have to be accounted for in the implementation of higher-order unification for a particular language. In this work, we study the possibility of representing various higher-order unification problems as a special case of E-unification for a second-order algebra of terms. This allows us to present $\beta$-reduction rules for various application terms, and some other eliminators, as equations, and to reformulate higher-order unification problems as E-unification problems with second-order equations. We then present a procedure for deciding such problems, introducing a second-order mutate rule (inspired by one-sided paramodulation) and generic versions of the imitate and project rules. We also provide a prototype Haskell implementation of syntax-generic higher-order unification based on these ideas. 
12:00  Nominal Anti-Unification of Letrec-Expressions PRESENTER: Daniele Nantes-Sobrinho ABSTRACT. This is work in progress in which we present a rule-based algorithm for nominal anti-unification of expressions in a higher-order language with lambda abstraction, function symbols, and recursive let. The algorithm is sound and weakly complete, but has high complexity. 
11:00  Distribution Testing and Probabilistic Programming: A Match Made in Heaven ABSTRACT. The field of distribution testing seeks to test whether a given distribution satisfies a property of interest. While there have been deep technical developments in the field, it has somehow been confined to theoretical studies. I will discuss our work on developing practical algorithmic frameworks based on distribution testing, and its relationship to probabilistic program verification. In particular, I will show that the problem of measuring the distance between straight-line probabilistic programs is #P-hard. As expected, this theoretical hardness shouldn't deter us, and I will discuss our success in using distribution-testing-based frameworks to design efficient samplers. 
11:50  A Quantitative Verification Infrastructure ABSTRACT. Following the success of deductive verification infrastructures for non-probabilistic programs like Viper and Boogie, we talk about our proposal for a deductive verification infrastructure for probabilistic programs. Our quantitative intermediate verification language enables modular reasoning about lower and upper bounds on expected values of programs. In our framework, we can encode and verify different proof rules for weakest pre-expectation semantics. As a proof of concept, we have created a deductive verifier for probabilistic programs on top of our framework. It supports reasoning about lower bounds on weakest liberal pre-expectations and about upper bounds on weakest pre-expectations. For loops, we use encodings of the proof rules of Park induction and k-induction. 
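As a textbook illustration of the weakest pre-expectation calculus referenced here (a standard example, not one taken from the proposal itself): for a fair probabilistic choice, the expected value of the indicator $[c = 1]$ is computed compositionally:

```latex
\mathrm{wp}\bigl(\{c := 1\}\,[\tfrac{1}{2}]\,\{c := 0\},\ [c = 1]\bigr)
  = \tfrac{1}{2}\,\mathrm{wp}(c := 1, [c = 1])
  + \tfrac{1}{2}\,\mathrm{wp}(c := 0, [c = 1])
  = \tfrac{1}{2}\cdot 1 + \tfrac{1}{2}\cdot 0
  = \tfrac{1}{2}.
```

Statements such as "$\tfrac{1}{2}$ is a lower bound on this expected value" are exactly the kind of quantitative obligation the proposed intermediate language is meant to encode and discharge modularly.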
12:10  Inferring Expected Runtimes and Sizes PRESENTER: Eleanore Meyer ABSTRACT. We present a modular approach to automatically infer upper bounds on the expected runtimes of probabilistic integer programs. To this end, it computes bounds on the runtimes of program parts and on the sizes of their variables in an alternating way. To evaluate its power, we implemented our approach in a new version of our open-source tool KoAT. 
11:00  PRESENTER: Eleanore Meyer ABSTRACT. In former work, we developed an approach for automatic complexity analysis of integer programs, based on an alternating modular inference of upper runtime and size bounds for program parts. Recently, we extended and reimplemented this approach in a new version of our open-source tool KoAT. In order to compute runtime bounds, we analyze subprograms in topological order, i.e., in case of multiple consecutive loops, we start with the first loop and propagate knowledge about the resulting values of variables to subsequent loops. By inferring runtime bounds for one subprogram after the other, in the end we obtain a bound on the runtime complexity of the whole program. We first try to compute runtime bounds for subprograms by means of multiphase linear ranking functions (MRFs). If MRFs do not yield a finite runtime bound for the respective subprogram, then we apply a technique to handle so-called triangular weakly non-linear loops (twn-loops) on the unsolved parts of the subprogram. Moreover, we integrated control-flow refinement via partial evaluation to improve the automatic complexity analysis of programs further. Additionally, we introduced the notion of expected size, which allowed us to extend our approach to the computation of upper bounds on the expected runtimes of probabilistic programs. 
11:20  LoAT: The Loop Acceleration Tool PRESENTER: Florian Frohn ABSTRACT. LoAT is a tool for proving non-termination and lower bounds on the worst-case complexity of integer transition systems, i.e., programs operating on integers. It supports a variety of input formats and user-defined cost models, and it provides detailed proofs. LoAT is based on loop acceleration, a technique to transform single-path loops into non-deterministic straight-line code. Due to its ability to accelerate loops, LoAT can transform programs with complex control flow into quantifier-free first-order formulas, which can then be analyzed by off-the-shelf SMT solvers. 
11:40  AProVE 2022 PRESENTER: Jürgen Giesl ABSTRACT. AProVE is a tool for automatic termination and complexity proofs of Java, C, Haskell, Prolog, and term rewrite systems (TRSs). To analyze programs in high-level languages, AProVE automatically converts them into TRSs and/or Integer Transition Systems (ITSs). Then, a wide range of techniques is employed to prove termination and to compute complexity bounds for the resulting TRSs or ITSs. In particular, the tools KoAT and LoAT are used to infer upper and lower complexity bounds for ITSs, and LoAT and T2 are used for non-termination proofs of ITSs. For term rewrite systems, the generated proofs can be exported to check their correctness using automatic certifiers. AProVE can easily be used via its web interface, and there is also a corresponding plugin for the popular Eclipse software development environment. 
12:00  The System SOL version 2022 (Tool Description) PRESENTER: Makoto Hamana ABSTRACT. SOL is a Haskell-based tool for showing confluence and strong normalisation of higher-order computation rules. Termination analysis is based on Blanqui's General Schema criterion and a new modular termination proof method for second-order computation systems recently shown by Hamana. 
12:20  Things WANDA cannot do ABSTRACT. In this short presentation, I will discuss the tool WANDA (a participant in the higher-order termination category of the termination competition). In particular, I will focus on weaknesses of WANDA, and on challenges where other participants have something to win! 
Lunches will be held in Taub hall.
14:00  jsCoq: Literate Proof Documents Dressed for the Web PRESENTER: Emilio Jesús Gallego Arias 
15:00  HenBlocks: Structured Editing for Coq PRESENTER: Bernard Boey ABSTRACT. There are a number of pain points in using the Coq Proof Assistant, which affect beginners most. Structured editors, which allow the user to manipulate structured blocks corresponding to the abstract syntax of a program, have been used to make programming more accessible to beginners. However, they have not been applied to proving thus far. We present HenBlocks (available at https://henblocks.github.io), a web-based, fully-fledged structured editor that allows users to write Coq proofs by manipulating blocks. We conclude that structured editing is a promising approach to proof writing that warrants more exploration, development, and testing. 
14:00  Three Branches of Accountability PRESENTER: Sebastian Mödersheim 
14:30  Adapting constraint solving to automatically analyze UPI Protocols PRESENTER: Sreekanth Malladi 
14:50  Quantum Machine Learning and Fraud Detection PRESENTER: Alessandra Di Pierro 
15:10  Formal Methods and Mathematical Intuition 
14:00  Short introduction to the COST Action EuroProofNet 
14:05  EuroProofNet presentation on proofs in Dedukti 
14:50  EuroProofNet presentation on SMT and proofs 
14:00  Computer algebra and automation in Lean's mathematical library ABSTRACT. Mathlib is a library of mathematics formalized in the Lean proof assistant amounting to nearly one million lines of code. Taking advantage of Lean's powerful metaprogramming framework, mathlib defines hundreds of custom tactics. Some of these tactics implement general proof search or decision procedures, while others are small-step tactics that help establish a familiar mathematical vernacular for mathlib proofs. This talk will survey the current state of mathlib's suite of tactics, focusing on those related to the SC^2 community, and will speculate about how these connections could be strengthened. With the arrival of Lean 4, a powerful programming language in its own right, the space of possibilities will grow significantly. 
15:00  PRESENTER: Daniel Dallaire ABSTRACT. In this paper we describe a method of enumerating projective planes of order nine. The enumeration was previously completed by Lam, Kolesova, and Thiel using highly optimized and customized search code. Despite the importance of this result in the classification of projective geometries, the previous search relied on unverified code and has never been independently verified. Our enumeration procedure uses a hybrid satisfiability (SAT) solving and symbolic computation approach. SAT solving performs an enumerative search, while symbolic computation removes symmetries from the search space. Certificates are produced which demonstrate the enumeration completed successfully. 
14:00  SMT-LIB Update Report PRESENTER: Cesare Tinelli 
14:30  Business Meeting PRESENTER: Antti Hyvärinen 
14:00  Unification Decision Procedures using Basic Narrowing modulo an Equational Theory 
15:00  PRESENTER: Gian Carlo Milanese ABSTRACT. This paper provides a generalized definition of the unification of Order-Sorted Feature (OSF) terms that considers a fuzzy subsumption relation between sort symbols rather than an ordinary (crisp) one. In this setting the unifier of two OSF terms is associated with a subsumption degree. We refer to the problem of unifying two OSF terms and computing the associated subsumption degree as fuzzy OSF term unification. 
14:00  Quantifying Leakage in Privacy Risk Analysis using Probabilistic Programming 
14:50  Deterministic stream sampling for probabilistic programming: semantics and verification PRESENTER: William Smith ABSTRACT. Probabilistic programming languages rely fundamentally on some notion of sampling, and this is doubly true for probabilistic programming languages which perform Bayesian inference using Monte Carlo techniques. Verifying samplers (proving that they generate samples from the correct distribution) is crucial to the use of probabilistic programming languages for statistical modelling and inference. However, the typical denotational semantics of probabilistic programs is incompatible with deterministic notions of sampling, which is problematic, considering most statistical inference is performed using pseudorandom number generators. We present an operational and denotational semantics for probabilistic programming, in terms of operations on samplers, that allows for deterministic generation of random variables, and which distinguishes between samplers (programs) and the probability distributions which they target. We also introduce a sound calculus for demonstrating that samplers target a desired distribution, justifying common approaches to constructing samplers, which can be applied to produce samplers that are correct by construction. 
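The idea of a sampler as a pure function of a bit stream can be illustrated with a small sketch (a hypothetical example, not the semantics developed in the talk): a Bernoulli(p) sampler lazily compares the binary expansion of a uniform value, read bit by bit from the stream, against the expansion of p, so that the same stream always yields the same sample.

```python
from fractions import Fraction
import random

def bernoulli(p, bits):
    """Deterministic Bernoulli(p) sampler over a stream of fair bits:
    lazily compare the binary expansion of a uniform value U (read from
    the stream) with that of p; the first differing digit decides U < p.
    The result is a pure function of the consumed bits."""
    p = Fraction(p)
    for u in bits:
        p *= 2
        d = 1 if p >= 1 else 0   # next binary digit of p
        p -= d
        if u != d:
            return 1 if u < d else 0
    raise RuntimeError("bit stream exhausted")

def bit_stream(seed):
    """A deterministic pseudorandom bit stream (stand-in for a PRNG)."""
    rng = random.Random(seed)
    while True:
        yield rng.randint(0, 1)

# Determinism: replaying the same stream yields the same sample.
s1 = bernoulli(Fraction(3, 4), bit_stream(7))
s2 = bernoulli(Fraction(3, 4), bit_stream(7))

# Empirical check: the frequency of 1s approaches p = 3/4.
gen = bit_stream(42)
n = sum(bernoulli(Fraction(3, 4), gen) for _ in range(20_000))
```

Note the design choice this sketch highlights: randomness lives entirely in the stream, while the sampler itself is deterministic, which is exactly the separation between programs and the distributions they target.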
15:10  Probabilistic Guarded Kleene Algebra with Tests PRESENTER: Wojciech Rozowski ABSTRACT. We extend Guarded Kleene Algebra with Tests (GKAT) with probabilistic branching and loops to reason about imperative programs with probabilistic control flow. We introduce coalgebraic operational semantics and a sound Salomaa-style axiomatisation of bisimulation equivalence. As an example, we show the correctness of a variant of the Knuth–Yao algorithm simulating a three-sided die with a series of fair coin tosses. 
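The Knuth–Yao example can be sketched concretely (a minimal rejection-scheme illustration of the underlying coin-tossing idea, not the authors' GKAT formalisation): toss two fair coins, map the outcomes 00, 01, 10 to the three faces, and restart on 11.

```python
import random
from collections import Counter

random.seed(0)  # reproducible tosses

def three_sided_die(coin=lambda: random.randint(0, 1)):
    """Fair three-sided die from fair coins (rejection scheme): toss two
    coins; 00, 01, 10 give faces 1, 2, 3; 11 restarts the experiment."""
    while True:
        a, b = coin(), coin()
        if (a, b) != (1, 1):
            return 2 * a + b + 1   # 00 -> 1, 01 -> 2, 10 -> 3

# Each face should occur with frequency close to 1/3.
counts = Counter(three_sided_die() for _ in range(30_000))
```

The loop terminates with probability 1, which is the kind of probabilistic loop behaviour the GKAT extension is designed to reason about axiomatically.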
14:00  PRESENTER: René Thiemann ABSTRACT. CeTA is a certifier that can be used to validate automatically generated termination and complexity proofs during the Termination Competition 2022. We briefly highlight some features of CeTA that have been recently added. 
14:20  ABSTRACT. We describe the Matchbox termination prover that takes part in the category of certified termination of string rewriting in the Termination Competition 2022. 
14:35  MultumNonMulta at termCOMP 2022 ABSTRACT. The software tool MultumNonMulta (MnM) aims at proving termination or non-termination of string rewrite systems automatically. The purpose of this prototype implementation is to demonstrate the power of a combination of few but strong proof methods, notably matrix interpretations. After a short survey of old and new features, we discuss the reasons that prevent MnM from taking part in the certified categories of the termination competition. 
14:55  Runtime complexity of parallel-innermost term rewriting at TermComp 2022 ABSTRACT. Thus far, the annual Termination and Complexity Competition (TermComp) has focussed on analysis of sequential languages. We aim to broaden the scope of TermComp to languages with parallel computation. As a first step, we consider (max-)parallel-innermost term rewriting. In contrast to innermost rewriting, which rewrites only one innermost redex at a time, parallel-innermost rewriting uses a multi-step rewrite relation: all innermost redexes are rewritten simultaneously. This approach can lead to significant decreases in the maximal length of the rewrite sequence from a given term when compared to innermost rewriting. In this talk, we present a new category proposal for runtime complexity of parallel-innermost term rewriting at TermComp. We also discuss recent results on automated analysis techniques for this problem and present experimental results obtained with an implementation of these techniques in the program analysis tool AProVE. 
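The contrast between the two rewrite relations can be sketched on ground rules (a toy illustration with a hypothetical tuple representation of terms, not AProVE's implementation): with the single rule a → b, a term with four parallel a-redexes normalises in one max-parallel-innermost step but needs four ordinary innermost steps.

```python
def is_redex(t, rules):
    return t in rules

def has_redex(t, rules):
    return is_redex(t, rules) or any(has_redex(s, rules) for s in t[1:])

def par_innermost(t, rules):
    """One max-parallel-innermost step: rewrite *every* innermost redex
    simultaneously (terms are tuples (symbol, subterm, ...); ground
    rules only, for brevity)."""
    if is_redex(t, rules) and not any(has_redex(s, rules) for s in t[1:]):
        return rules[t]
    return (t[0],) + tuple(par_innermost(s, rules) for s in t[1:])

def seq_innermost(t, rules):
    """One ordinary innermost step: rewrite only the leftmost innermost redex."""
    if is_redex(t, rules) and not any(has_redex(s, rules) for s in t[1:]):
        return rules[t]
    for i, s in enumerate(t[1:], start=1):
        if has_redex(s, rules):
            return t[:i] + (seq_innermost(s, rules),) + t[i + 1:]
    return t

def derivation_length(t, step, rules):
    """Number of steps of the given relation needed to reach a normal form."""
    n = 0
    while has_redex(t, rules):
        t, n = step(t, rules), n + 1
    return n

rules = {('a',): ('b',)}
t = ('f', ('f', ('a',), ('a',)), ('f', ('a',), ('a',)))  # four 'a' redexes
```

Here `derivation_length(t, seq_innermost, rules)` is 4 while `derivation_length(t, par_innermost, rules)` is 1, mirroring the shortening of rewrite sequences described above.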
15:15  PRESENTER: Aart Middeldorp ABSTRACT. We present a simple tool that implements the encodings of Hilbert's 10th problem as (incremental) polynomial termination problems of our FSCD 2022 paper. The tool can be used to generate problems for TPDB, in particular for testing the implementation of polynomial interpretations in termination tools. 
16:00  Coq Community Survey 2022: Summary of Results PRESENTER: Ana de Almeida Borges ABSTRACT. The Coq Community Survey 2022 was an online public survey conducted during February 2022. Its main goal was to obtain an updated picture of the Coq user community and inform future decisions taken by the Coq team. In particular, the survey aimed to enable the Coq team to make effective decisions about the development of the Coq software, and also about matters that pertain to the ecosystem maintained by Coq users in academia and industry. In this presentation proposal, we outline how the survey was designed, its content, and some initial data analysis and directions. Due to the many free-text answers to some questions that require careful analysis, we expect a full presentation to include additional data and conclusions. 
16:30  Discussion with Coq Core Development Team PRESENTER: Emilio Jesús Gallego Arias 
16:00  An SC-Square Approach to the Minimum Kochen–Specker Problem PRESENTER: Zhengyu Li ABSTRACT. One of the most fundamental theorems in quantum mechanics is the Kochen–Specker theorem, which shows the nonexistence of a non-contextual hidden-variable model that can reproduce the predictions of quantum theory when the dimension of the Hilbert space is at least three. The theorem hinges on the existence of a mathematical object called a Kochen–Specker (KS) vector system, and its minimum size in three dimensions has been an open problem worked on by renowned physicists and mathematicians for over fifty years. We improved the lower bound on the size of a three-dimensional KS system from 22 to 23 with a significant speedup over the most recent computational approach. Our approach combines the combinatorial search capabilities of satisfiability (SAT) solvers, the isomorph-free exhaustive generation capabilities of computer algebra systems (CASs), and the non-linear real arithmetic solving capabilities of satisfiability modulo theories (SMT) solvers. Our work therefore fits directly into the Satisfiability Checking and Symbolic Computation (SC-Square) paradigm. 
16:00  SMT Proof Standardization Update 
16:00  PRESENTER: Andrew M. Marshall ABSTRACT. In this preliminary paper we report on the development of a new form of term rewrite system, the graph-embedded term rewrite system. These systems are inspired by the graph minor relation and are more flexible extensions of the well-known homeomorphically embedded term rewrite systems. As a motivating application area for the new systems, we consider the symbolic analysis of security protocols. In this field, restricted term rewrite systems, such as subterm-convergent ones, have proven useful since many of the decision procedures are sound and complete for such systems. However, many of the same decision procedures still work for examples of systems which are "beyond subterm-convergent". Interestingly, the applicability of the decision procedures to these examples must typically be proven on an individual basis, since they do not fit into an existing syntactic definition for which the procedures are known to work. Here we show that most of these systems are examples of graph-embedded term rewrite systems. 
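For background, the homeomorphic embedding relation that these systems generalise can be sketched on ground terms (an illustrative implementation with a hypothetical tuple representation, not the paper's graph-embedded definition): s embeds into t when t can be reduced to s by deleting function symbols.

```python
def embeds(s, t):
    """Homeomorphic embedding on ground terms, written as tuples
    (symbol, subterm, ...): s embeds into t if t can be reduced to s
    by deleting function symbols."""
    if s == t:
        return True
    if any(embeds(s, u) for u in t[1:]):       # "dive" into a subterm of t
        return True
    return (s[0] == t[0] and len(s) == len(t)  # "couple": same root symbol,
            and all(embeds(a, b) for a, b in zip(s[1:], t[1:])))  # args embed

a, b = ('a',), ('b',)
```

For example, `embeds(('f', a), ('f', ('g', a)))` holds (delete the `g`), while `embeds(('f', a, b), ('f', b, a))` does not, since arguments must embed positionwise.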
16:30  Restricted Unification in the Description Logic FLbot PRESENTER: Oliver Fernandez Gil ABSTRACT. In a previous paper, we investigated restricted unification in the Description Logic (DL) FL0. Here we extend this investigation to the DL FLbot, which extends FL0 with the bottom concept. We show that restricted unification in FLbot is decidable and provide some upper and lower bounds on the complexity. This is particularly interesting since the decidability status of unrestricted unification in FLbot appears to be still open. We also show an ExpTime lower bound for the unrestricted problem. 