
09:00 | SPEAKER: Alexander Svozil ABSTRACT. Parity games, which are equivalent to modal mu-calculus model checking, are a central algorithmic problem in formal methods, with applications in reactive synthesis, program repair, verification of branching-time properties, etc. Besides the standard computation model with the explicit representation of games, another important theoretical model of computation is that of set-based symbolic algorithms. Set-based symbolic algorithms use basic set operations and one-step predecessor operations on the implicit description of games, rather than the explicit representation. The significance of symbolic algorithms is that they provide scalable algorithms for large finite-state systems, as well as for infinite-state systems with finite quotient. Consider parity games on graphs with n vertices and parity conditions with c priorities. While there is a rich literature of explicit algorithms for parity games, the main results for set-based symbolic algorithms are as follows: (a) the basic algorithm that requires O(n^c) symbolic operations and O(c) symbolic space; and (b) an improved algorithm that requires O(n^(c/3+1)) symbolic operations and O(n) symbolic space. In this work, our contributions are as follows: (1) We present a black-box set-based symbolic algorithm based on the explicit progress measure algorithm. Two important consequences of our algorithm are as follows: (a) a set-based symbolic algorithm for parity games that requires quasi-polynomially many symbolic operations and O(n) symbolic space; and (b) any future improvement in progress-measure-based explicit algorithms immediately implies an efficiency improvement in our set-based symbolic algorithm for parity games. (2) We present a set-based symbolic algorithm that requires quasi-polynomially many symbolic operations and O(c * log n) symbolic space. 
Moreover, for the important special case of c <= log n, our algorithm requires only polynomially many symbolic operations and poly-logarithmic symbolic space. |
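The one-step predecessor operation the abstract refers to can be illustrated on an explicit graph; in a genuine symbolic implementation the sets below would be BDDs and `pre` a relational product. A minimal sketch, with hypothetical names and an invented example game (not taken from the paper):

```python
def pre(edges, owner, target, player):
    """One-step controllable predecessor: vertices from which `player`
    can force the play into `target` in a single move."""
    return {u for u in owner
            if (owner[u] == player and any(v in target for v in edges[u]))
            or (owner[u] != player and all(v in target for v in edges[u]))}

def attractor(edges, owner, target, player):
    """Least fixed point of target ∪ pre(target), computed with only set
    union, set comparison, and the one-step predecessor operation."""
    attr = set(target)
    while True:
        nxt = attr | pre(edges, owner, attr, player)
        if nxt == attr:
            return attr
        attr = nxt
```

Attractor computation of this shape is the shared building block of the basic O(n^c) algorithm and its refinements; only the bookkeeping around it differs.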

09:30 | ABSTRACT. In many different applications we are given a set of constraints with the goal of deciding whether the set is satisfiable. If the set is determined to be unsatisfiable, one might be interested in analysing this unsatisfiability. Identification of minimal unsatisfiable subsets (MUSes) is one such analysis: the more MUSes are identified, the better the insight into the unsatisfiability. However, full enumeration of all MUSes is often intractable. Therefore, algorithms that identify MUSes in an online fashion, i.e., one by one, are needed. Moreover, since MUSes find applications in various constraint domains, and new applications still arise, there is a desire for domain-agnostic MUS enumeration approaches. In this paper, we present an experimental evaluation of four state-of-the-art domain-agnostic MUS enumeration algorithms: MARCO, TOME, ReMUS, and DAA. The evaluation is conducted in the SAT, SMT, and LTL constraint domains. The results show that there is no silver-bullet algorithm that beats all the others in all the domains. |
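The domain-agnostic setting can be made concrete with the classic deletion-based extraction of a single MUS: the only interface to the constraint domain is a satisfiability oracle. A minimal sketch, where `is_sat` is a hypothetical oracle (here a brute-force propositional one for illustration) and the names are ours:

```python
from itertools import product

def is_sat(clauses):
    """Toy stand-in for the domain solver: brute-force SAT over the
    variables occurring in the clauses."""
    vars_ = sorted({abs(l) for c in clauses for l in c})
    for bits in product([False, True], repeat=len(vars_)):
        asg = dict(zip(vars_, bits))
        if all(any(asg[abs(l)] == (l > 0) for l in c) for c in clauses):
            return True
    return False

def shrink_to_mus(constraints, is_sat):
    """Deletion-based extraction of one MUS: drop each constraint in turn
    and keep it out whenever the rest stays unsatisfiable."""
    core = list(constraints)
    i = 0
    while i < len(core):
        trial = core[:i] + core[i + 1:]
        if not is_sat(trial):
            core = trial          # constraint i is redundant for unsatisfiability
        else:
            i += 1                # constraint i is necessary: keep it
    return core
```

Enumeration algorithms such as MARCO or ReMUS wrap a shrink routine of this kind in a search over the power set of constraints.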

09:50 | ABSTRACT. The inherent complexity of parallel computing makes development, resources monitoring, and debugging for parallel constraint-solving-based applications difficult. This paper presents SMTS, a framework for parallelizing sequential constraint solving algorithms and running them in distributed computing environments. The design (i) is based on a general parallelization technique that supports recursively combining algorithm portfolios and divide-and-conquer with the exchange of learned information, (ii) provides monitoring by visually inspecting the parallel execution steps, and (iii) supports interactive guidance of the algorithm through a web interface. We report positive experiences on instantiating the framework for SMT and IC3 solvers, debugging parallel executions, and visualizing solving, structure, and learned clauses of SMT instances. |

10:30 | SPEAKER: Chuan Jiang ABSTRACT. A complementary technique to decision-diagram-based model checking is SAT-based bounded model checking (BMC), which reduces the model checking problem to a propositional satisfiability problem so that the corresponding formula is satisfiable iff a counterexample or witness exists. Due to the branching time nature of computation tree logic (CTL), BMC for the universal fragment of CTL (ACTL) considers a counterexample in a bounded model as a set of bounded paths. Since the existential fragment of CTL (ECTL) is dual to ACTL, and ACTL formulas are often negated to obtain ECTL ones in practice, we focus on BMC for ECTL and propose an improved translation that generates a possibly smaller propositional formula by reducing the number of bounded paths to be considered in a witness. Experimental results show that the formulas generated by our approach are often easier for a SAT solver to answer. In addition, we propose a simple modification to the translation so that it is also defined for models with deadlock states. |
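In general an ECTL witness in a bounded model is a set of bounded paths; the simplest case, a single witness path for EF p, shows what the propositional encoding has to express. A minimal sketch with hypothetical names (the actual translation turns this question into a SAT query that is satisfiable iff such a bounded path exists):

```python
def bounded_witness(trans, init, prop, k):
    """Breadth-first search for a witness of EF prop within bound k:
    returns the length of a shortest witness path, or None if no state
    satisfying prop is reachable within k steps."""
    frontier, seen = set(init), set(init)
    for step in range(k + 1):
        if any(prop(s) for s in frontier):
            return step               # a witness path of this length exists
        frontier = {t for s in frontier for t in trans.get(s, [])} - seen
        seen |= frontier
    return None                       # no witness within the bound
```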

11:00 | ABSTRACT. We study the semantics of propositional interference-based proof systems such as DRAT and DPR. These are characterized by modifying a CNF formula in ways that preserve satisfiability but not necessarily logical truth. We propose an extension of propositional logic called overwrite logic with a new construct which captures the meta-level reasoning behind interferences. We analyze this new logic from the point of view of expressivity and complexity, showing that while greater expressivity is achieved, the satisfiability problem for overwrite logic is essentially as hard as SAT, and can be reduced in a way that is well-behaved for modern SAT solvers. We also show that DRAT and DPR proofs can be seen as overwrite logic proofs which preserve logical truth. This much stronger invariant than the mere satisfiability preservation maintained by the traditional view gives us better understanding on these practically important proof systems. Finally, we showcase this better understanding by easily explaining and extending the recently found polynomial simulation of DPR and DRAT by extended resolution, as well as finding intrinsic limitations in interference proof systems. |
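The interference-free core of DRAT/DPR checking is reverse unit propagation (RUP): a clause is accepted if asserting its negation makes unit propagation derive a conflict. A minimal sketch of that check (clauses as lists of signed integer literals; the names are ours):

```python
def propagate(clauses, literals):
    """Exhaustive unit propagation. Returns None on conflict, otherwise
    the propagation closure of the assignment (a set of literals)."""
    asg = set(literals)
    changed = True
    while changed:
        changed = False
        for clause in clauses:
            if any(l in asg for l in clause):
                continue                     # clause already satisfied
            pending = [l for l in clause if -l not in asg]
            if not pending:
                return None                  # clause falsified: conflict
            if len(pending) == 1:
                asg.add(pending[0])          # unit clause: force the literal
                changed = True
    return asg

def is_rup(clauses, clause):
    """A clause is a RUP consequence of `clauses` if asserting its
    negation yields a conflict under unit propagation alone."""
    return propagate(clauses, [-l for l in clause]) is None
```

RAT and PR steps extend this check with the interference-based reasoning that the overwrite-logic construct is designed to capture.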

11:30 | ABSTRACT. Finite model finders represent a powerful tool for deciding problems with the finite model property, such as the Bernays-Schonfinkel fragment (EPR). Further, finite model finders provide useful information for counter-satisfiable conjectures. The paper investigates several novel techniques in a finite model finder based on translation to SAT, the so-called MACE-style approach. The approach we propose is driven by counterexample-guided abstraction refinement (CEGAR), which has proven to be a powerful tool in the context of quantifiers in satisfiability modulo theories (SMT) and quantified Boolean formulas (QBF). One weakness of CEGAR-based approaches is that a certain amount of luck is required in order to guess the right model, because the solver always operates on incomplete information about the formula. To tackle this issue we propose to enhance the model finder with a machine learning algorithm to improve the likelihood that the right model is encountered. The implemented prototype based on the presented ideas shows highly promising results. |
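The candidate-then-refine loop underlying CEGAR can be sketched over a finite domain; in the model finder, the candidate step is a SAT call on a partial encoding and the verification step checks the candidate against the full formula. A toy sketch, assuming an exists-forall problem and linear scans in place of solver calls (all names are ours):

```python
def cegar_exists_forall(domain, phi):
    """Solve 'exists x. forall y. phi(x, y)' over a finite domain by
    counterexample-guided refinement."""
    cexs = []                                    # counterexamples seen so far
    while True:
        # candidate step: any x consistent with all collected counterexamples
        cand = next((x for x in domain if all(phi(x, y) for y in cexs)), None)
        if cand is None:
            return None                          # no candidate left: unsatisfiable
        # verification step: look for a fresh counterexample to the candidate
        cex = next((y for y in domain if not phi(cand, y)), None)
        if cex is None:
            return cand                          # candidate survived: a real model
        cexs.append(cex)                         # refine and retry
```

The "luck" mentioned in the abstract lives in the candidate step: among all candidates consistent with the counterexamples seen so far, the solver must happen upon one that survives verification, which is where a learned heuristic can help.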

12:00 | ABSTRACT. The weak completion semantics is an integrated and computational cognitive theory which is based on normal logic programs, three-valued Lukasiewicz logic, weak completion, and skeptical abduction. It has been successfully applied - among others - to the suppression task, the selection task, and to human syllogistic reasoning. In order to solve ethical decision problems like - for example - trolley problems, we need to extend the weak completion semantics to deal with actions and causality. To this end we consider normal logic programs and a set E of equations as in the fluent calculus. We formally show that normal logic programs with equality admit a least E-model under the weak completion semantics and that this model can be computed as the least fixed point of an associated semantic operator. We show that the operator is not continuous in general, but is continuous if the logic program is a propositional, a finite-ground, or a finite datalog program and the Herbrand E-universe is finite. Finally, we show that the weak completion semantics with equality can solve a variety of ethical decision problems like the bystander case, the footbridge case, and the loop case by computing the least E-model and reasoning with respect to this E-model. The reasoning process involves counterfactuals needed to solve the different ethical dilemmas. |
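The least-model-as-least-fixed-point construction can be illustrated in the simplest setting: a two-valued definite program (the paper's actual operator works over three-valued Lukasiewicz logic with equality, which this sketch deliberately omits). Names and the example program are ours:

```python
def tp(program, interp):
    """Immediate-consequence operator: add every head whose body atoms
    all hold in the current interpretation."""
    return interp | {head for head, body in program if body <= interp}

def lfp(program):
    """Kleene iteration from the empty interpretation; for a finite
    program the least fixed point is reached in finitely many steps."""
    interp = frozenset()
    while True:
        nxt = frozenset(tp(program, interp))
        if nxt == interp:
            return interp
        interp = nxt
```

The continuity question raised in the abstract is exactly about when such iteration is guaranteed to reach the fixed point in at most omega steps.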

14:00 | Experiments in Verification of Linear Model Predictive Control: Automatic Generation and Formal Verification of an Interior Point Method Algorithm ABSTRACT. Classical control of cyber-physical systems used to rely on basic linear controllers. These controllers provided safe and robust behavior but lacked the ability to perform more complex controls such as aggressive maneuvering or fuel-efficient control. Another approach, optimal control, is capable of computing such difficult trajectories but lacks the ability to adapt to dynamic changes in the environment. In both cases, the control was designed offline, relying on more or less complex algorithms to find the appropriate parameters. More recent approaches such as Linear Model-Predictive Control (MPC) rely on the online use of convex optimization to compute the best control at each sample time. In these settings, the optimization algorithm is specialized for the specific control problem and embedded on the device. This paper proposes to revisit the code generation of an interior point method (IPM) algorithm, an efficient family of convex optimization methods, focusing on the proof of its implementation at code level. Our approach relies on the code specialization phase to produce additional annotations formalizing the intended specification of the algorithm. Deductive methods are then used to automatically prove the validity of these assertions. Since the algorithm is complex, additional lemmas are also produced, allowing the complete proof to be checked by SMT solvers alone. This work is the first to address an effective formal proof of an IPM algorithm. The approach could also be generalized more systematically to code generation frameworks, producing proof certificates along with the code for numerically intensive software. |
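The log-barrier idea at the heart of interior point methods can be shown on a one-dimensional toy problem; the one below, and all names in it, are ours and not taken from the paper. Minimizing x subject to x >= 1 via the barrier objective f_t(x) = t*x - log(x - 1) has the known minimizer 1 + 1/t, so a damped Newton iteration should track the central path:

```python
def centering_step(t, x0=2.0, iters=200):
    """Damped Newton minimization of f_t(x) = t*x - log(x - 1).
    The exact minimizer is 1 + 1/t; the damping keeps the iterate
    strictly inside the feasible domain x > 1."""
    x = x0
    for _ in range(iters):
        g = t - 1.0 / (x - 1)          # gradient of f_t
        h = 1.0 / (x - 1) ** 2         # second derivative of f_t
        step = g / h                   # full Newton step
        while x - step <= 1:           # halve the step until it stays feasible
            step /= 2
        x -= step
        if abs(step) < 1e-12:
            break
    return x
```

The loop bound and domain guard are exactly the kind of facts the paper's generated annotations would have to state and prove at code level.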

14:30 | ABSTRACT. We perform an automated analysis of two devices developed by Yubico: YubiKey, designed to authenticate a user to network-based services, and YubiHSM, Yubico’s hardware security module. Both are analyzed using the Maude-NPA cryptographic protocol analyzer. Although previous work has been done applying formal tools to these devices, there has not been any completely automated analysis. This is not surprising, because both YubiKey and YubiHSM, which make use of cryptographic APIs, involve a number of complex features: (i) discrete time in the form of Lamport clocks, (ii) a mutable memory for storing previously seen keys or nonces, (iii) event-based properties that require an analysis of sequences of actions, and (iv) reasoning modulo exclusive-or. Maude-NPA has provided support for exclusive-or for years but has not provided support for the other three features, which we show can also be supported by using constraints on natural numbers, protocol composition and reasoning modulo associativity. In this work, we have been able to automatically both prove properties of YubiKey and find the known attacks on the YubiHSM, beyond the capabilities of previous work using the Tamarin Prover due to the need for auxiliary user-defined lemmas and limited support for exclusive-or. Tamarin has recently been endowed with exclusive-or. We have rewritten the original specification of YubiHSM to use exclusive-or and checked that both attacks on YubiHSM can be carried out by this recent version of Tamarin using user-defined lemmas. |

15:00 | SPEAKER: Max W. Haslbeck ABSTRACT. The LLL basis reduction algorithm was the first polynomial-time algorithm to compute a reduced basis of a given lattice, and hence also a short vector in the lattice. It thereby approximately solves an NP-hard problem. The algorithm has several applications in number theory, computer algebra and cryptography. Recently, the first mechanized soundness proof of the LLL algorithm has been developed in Isabelle/HOL. However, this proof did not include a formal statement of the algorithm's complexity. Furthermore, the resulting implementation was inefficient in practice. We address both of these shortcomings in this paper. First, we prove the correctness of a more efficient implementation of the LLL algorithm that uses only integer computations. Second, we formally prove statements on the polynomial running-time. |
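The integer-only flavor of the verified implementation can be illustrated on the two-dimensional ancestor of LLL, Gauss/Lagrange reduction, which repeatedly size-reduces and swaps until the basis is reduced. A minimal sketch (not the LLL algorithm from the paper; names are ours):

```python
def lagrange_reduce(u, v):
    """Gauss/Lagrange reduction of a 2-dimensional integer lattice basis:
    all arithmetic is on integers apart from one rounding per step."""
    def dot(a, b):
        return a[0] * b[0] + a[1] * b[1]
    while True:
        if dot(v, v) < dot(u, u):
            u, v = v, u                      # keep u the shorter vector
        m = round(dot(u, v) / dot(u, u))     # integer size-reduction coefficient
        if m == 0:
            return u, v                      # basis is reduced
        v = (v[0] - m * u[0], v[1] - m * u[1])
```

In dimension n, LLL interleaves the same size-reduction step with a swap condition on the Gram-Schmidt norms (the Lovász condition), and the complexity statement formalized in the paper bounds how many such steps can occur.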

15:30 | ABSTRACT. The computational bottleneck in model-checking applications is the blow-up involved in the translation of systems to their mathematical model. This blow up is especially painful in systems with variables over an infinite domain, and in composite systems described by means of their underlying components. We introduce and study {\em linear temporal logic with arithmetic\/} (LTLA, for short), where formulas include variables that take values in $\Z$, and in which linear arithmetic over these values is supported. We develop an automata-theoretic approach for reasoning about LTLA formulas and use it in order to solve, in PSPACE, the satisfiability problem for the existential fragment of LTLA and the model-checking problem for its universal fragment. We show that these results are tight, as a single universally-quantified variable makes the satisfiability problem for LTLA undecidable. In addition to reasoning about systems with variables over $\Z$, we suggest applications of LTLA in reasoning about hierarchical systems, which consist of subsystems that can call each other in a hierarchical manner. We use the values in $\Z$ in order to describe the nesting depth of components in the system. A naive model-checking algorithm for hierarchical systems flattens them, which involves an exponential blow up. We suggest a model-checking algorithm that avoids the flattening and avoids a blow up in the number of components. |