09:00 | Automated Reasoning In Natural Language: Current Directions ABSTRACT. TBA |
09:00 | Invited Talk: Optimization Problems in Answer Set Programming ABSTRACT. Answer set programming (ASP) is a declarative language for nonmonotonic reasoning based on stable model semantics, where stable models are classical models of the input program satisfying a stability condition: only necessary information is included in each of these models under the assumptions provided by the model itself for the unknown knowledge in the program, where unknown knowledge is encoded by means of default negation. Reasoning in the presence of unknown knowledge is common for rational agents acting in the real world. It is also common that real-world agents cannot meet all their desiderata, and therefore ASP programs may come with soft literals for representing numerical preferences over jointly incompatible conditions. Stable models are therefore associated with a cost given by the number of unsatisfied soft literals, so that stable models of minimum cost are preferred. In this talk I will show algorithms and strategies for computing optimal stable models, discuss their properties and explain how they can be extended to handle some qualitative preferences. |
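To make the stability condition and soft-literal costs concrete, here is a minimal brute-force sketch in Python. It is illustrative only (the talk covers real algorithms and strategies); the program, atoms, and soft-literal weights are invented for the example.

```python
from itertools import chain, combinations

# A rule is (head, positive_body, negative_body); atoms are strings.
# Example program:  a :- not b.   b :- not a.
rules = [("a", [], ["b"]), ("b", [], ["a"])]
atoms = {"a", "b"}
soft = {"a": 1}  # soft literals with a penalty for being unsatisfied

def least_model(definite_rules):
    """Fixpoint of the immediate-consequence operator of a definite program."""
    model, changed = set(), True
    while changed:
        changed = False
        for head, pos, _ in definite_rules:
            if head not in model and all(p in model for p in pos):
                model.add(head)
                changed = True
    return model

def is_stable(candidate):
    """Stability via the Gelfond-Lifschitz reduct: drop rules blocked by
    the candidate, strip negation, and compare the least model."""
    reduct = [(h, pos, []) for h, pos, neg in rules
              if not any(n in candidate for n in neg)]
    return least_model(reduct) == candidate

def powerset(s):
    s = list(s)
    return (set(c) for c in
            chain.from_iterable(combinations(s, r) for r in range(len(s) + 1)))

stable_models = [m for m in powerset(atoms) if is_stable(m)]
cost = lambda m: sum(w for a, w in soft.items() if a not in m)
print("stable models:", stable_models)           # [{'a'}, {'b'}]
print("optimal:", min(stable_models, key=cost))  # {'a'}
```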
10:00 | anthem: Transforming gringo Programs into First-Order Theories (Preliminary Report) SPEAKER: Patrick Lühne ABSTRACT. In a recent paper by Harrison et al., the concept of program completion is extended to a large class of programs in the input language of the ASP grounder gringo. We would like to automate the process of generating and simplifying completion formulas for programs in that language, because examining the output produced by this kind of software may help programmers to see more clearly what their program does and to what degree its set of stable models conforms with their intentions. If a formal specification for the program is available, then it may be possible to use this software, in combination with automated reasoning tools, to verify that the program is correct. This note is a preliminary report on a project motivated by this idea. |
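As a toy illustration of the completion idea (propositional only, far from the gringo fragment the paper handles), the following sketch prints, for each atom, its completion formula: the equivalence between the atom and the disjunction of its rule bodies. All names are invented.

```python
from collections import defaultdict

# p :- q.   p :- not r.   q.
rules = [("p", ["q"], []), ("p", [], ["r"]), ("q", [], [])]
atoms = {"p", "q", "r"}

bodies = defaultdict(list)
for head, pos, neg in rules:
    lits = pos + [f"~{a}" for a in neg]
    bodies[head].append(" & ".join(lits) if lits else "true")

for atom in sorted(atoms):
    rhs = " | ".join(f"({b})" for b in bodies[atom]) if atom in bodies else "false"
    print(f"{atom} <-> {rhs}")
# p <-> (q) | (~r)
# q <-> (true)
# r <-> false
```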
Formal Methods at Amazon Web Services
09:00 | Opening and introduction |
09:10 | ABSTRACT. The B method is quite popular for developing provably correct software for safety critical railway systems, particularly for driverless trains. In recent years, the B method has also been used successfully for data validation (http://www.data-validation.fr). There, the B language has proven to be a compact way to express complex validation rules, and tools such as predicateB, Ovado or ProB can be used to provide high assurance validation engines, where a secondary toolchain validates the result of the primary toolchain. This talk will give an overview of our experience in using logic-based formal methods in general and B in particular for industrial applications. We will also touch on subjects such as training, readability, and the implementation of ProB in Prolog. We will examine which features of B make it well suited for, e.g., the railway domain, but also point out some weaknesses and suggestions for future developments. We will also touch upon other formal methods such as Alloy or TLA+, as well as other constraint solving backends for B not based on Prolog (SAT via Kodkod/Alloy and SMT via Z3 and CVC4). |
09:50 | Invited Talk: On the Development of Industrial Applications with ASP SPEAKER: Nicola Leone ABSTRACT. Answer Set Programming (ASP) is a powerful rule-based language for knowledge representation and reasoning that has been developed in the field of logic programming and nonmonotonic reasoning. After many years of basic research, the ASP technology has become mature for the development of significant real-world applications. In particular, the well-known ASP system DLV has undergone industrial exploitation by a spin-off company called DLVSYSTEM srl, which has led to its successful usage in a number of industry-level applications. The success of DLV for applications development is also due to its powerful development tools, which support researchers and software developers by simplifying the integration of ASP into real-world applications, which usually require combining logic-based modules within a complete system featuring user interfaces, services, etc. In this talk, we first recall the basics of the ASP language. Then, we overview our advanced development tools, and we report on the recent implementation of some challenging industry-level applications of our system. |
09:00 | SPEAKER: Bruno Courcelle ABSTRACT. A well-known algorithmic meta-theorem states that every graph property expressible by a monadic second-order (MSO in short) sentence possibly using edge set quantifications can be checked in linear time for graphs of bounded tree-width; furthermore, every graph property expressible by an MSO sentence not using edge set quantifications can be checked in linear time for graphs of bounded clique-width given by an appropriate algebraic term. The standard proofs construct, from the sentences and the given bounds on tree-width or clique-width, automata intended to run on tree-decompositions or clique-width terms. However, these automata have so many states that the constructions are not usable in practice. This is unavoidable, by a result of Frick and Grohe. To overcome this difficulty in cases that are not "toy examples", we have introduced fly-automata that do not store states and transitions but compute them whenever needed. They have been implemented and tested, in particular for checking colorability and so-called acyclic colorability. A subset of the implemented functionalities will be demonstrated by using a web interface in the first part of the lecture. |
10:05 | SPEAKER: Lukas Holik ABSTRACT. We present a new decision procedure for the logic WS1S. It originates from the classical approach, which first builds an automaton accepting all models of a formula and then tests whether its language is empty. The main novelty is to test the emptiness on the fly, while constructing a symbolic, term-based representation of the automaton, and prune the constructed state space from parts irrelevant to the test. The pruning is done by a generalization of two techniques used in antichain-based language inclusion and universality checking of finite automata: subsumption and early termination. The richer structure of the WS1S decision problem allows us, however, to elaborate on these techniques in novel ways. Our experiments show that the proposed approach can in many cases significantly outperform the classical decision procedure (implemented in the MONA tool) as well as recently proposed alternatives. |
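The subsumption and early-termination ideas can be shown on the simpler problem of NFA universality. The sketch below (toy automaton invented for the example) builds subset-construction macro-states on the fly, returns as soon as a rejecting macro-state appears, and keeps only subset-minimal macro-states, i.e., an antichain.

```python
from collections import deque

# state -> symbol -> set of successors; all symbols defined for all states
delta = {0: {"a": {0, 1}, "b": {0}},
         1: {"a": {2}, "b": {2}},
         2: {"a": {2}, "b": {2}}}
initial, accepting, alphabet = {0}, {0, 2}, ["a", "b"]

def post(macro, sym):
    return frozenset(s for q in macro for s in delta[q].get(sym, ()))

def is_universal():
    start = frozenset(initial)
    if not (start & accepting):
        return False                     # early termination on the empty word
    antichain, work = [start], deque([start])
    while work:
        macro = work.popleft()
        for sym in alphabet:
            succ = post(macro, sym)
            if not (succ & accepting):
                return False             # rejected word found on the fly
            # subsumption: a smaller explored macro-state covers succ, since
            # every rejection reachable from succ is reachable from it too
            if any(m <= succ for m in antichain):
                continue
            antichain[:] = [m for m in antichain if not succ <= m]
            antichain.append(succ)
            work.append(succ)
    return True

print(is_universal())  # True: state 0 stays reachable on every word
```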
On parallel algorithms working with graphs.
09:00 | Parallel Algorithms for Parameter Synthesis from Temporal Logic Specifications ABSTRACT. We will present a set of parallel algorithms for parameter synthesis in parametrised dynamical systems from temporal logic specifications. The problem is, for a dynamical system represented as a finite-state transition system with a parametrised transition relation, to synthesise parameter values satisfying the given specification. Technically, the method is based on model checking algorithms adapted to this kind of transition system. Additionally, we will address a specific subproblem of parameter synthesis targeting the identification of attractors in the system dynamics. To that end, a novel parallel algorithm for detection of strongly connected components in a parameterised system will be presented. We will show that the underlying algorithms can gain advantages from efficient symbolic representations of sets of parameter valuations. To demonstrate the method, we will describe applications to models representing dynamics of biological systems with complex non-linear behaviour. |
09:45 | SPEAKER: Federico Igne ABSTRACT. There has been an increasing interest in recent years towards the development of efficient solvers for Answer Set Programming (ASP) and towards the application of ASP to solve increasingly challenging problems. In particular, several recent efforts have explored the issue of scalability of ASP solvers when addressing the challenges caused by the need to ground the program before resolution. This paper offers an alternative solution to this challenge, focused on the use of distributed programming techniques to reason about ASP programs whose grounding would be prohibitive for mainstream ASP solvers. The work builds on the 2004 work of Konczak et al., which characterized answer set solving as a form of non-standard graph coloring. The paper expands this characterization to include syntactic extensions used in modern ASP (e.g., choice rules, weight constraints). We present an implementation of the solver using a distributed programming framework specifically designed to manipulate very large graphs, as provided by Apache Spark, which in turn builds on the MapReduce programming framework. Finally, we provide a few preliminary results obtained from the first prototype implementation of this approach. |
09:00 | Automated verification of linearizability |
10:00 | SPEAKER: Robert Colvin ABSTRACT. Reasoning about the correctness of concurrent objects on weak memory models supported by modern multicore architectures is complicated. Traditional notions such as linearizability do not directly apply as the effect of an operation can become visible after the operation call has returned. In this paper we develop a theory for correctness of concurrent objects under weak memory models. Central to our definitions is the concept of observations which determine when effects of operations become visible, and hence determine the semantics of objects, under a given memory model. The resulting notion of correctness, called object refinement, is generic as it is parameterised by the memory model under consideration. Our theory enforces the minimal constraints on the placing of observations and on the semantics of objects that underlie object refinement. Object refinement is suitable as a reference for correctness when proving new proof methods for objects under weak memory models to be sound and complete. |
09:00 | Students' Proof Assistant (SPA) SPEAKER: Anders Schlichtkrull ABSTRACT. The Students' Proof Assistant (SPA) aims both to teach how to use a proof assistant like Isabelle and to teach how reliable proof assistants are built. Technically, it is a miniature proof assistant inside the Isabelle proof assistant. In addition, we conjecture that a good way to teach structured proving is with a concrete prover where the connection between semantics, proof system, and prover is clear. In fact, the proofs in Lamport's TLAPS proof assistant have a very similar structure to those in the declarative prover SPA. To illustrate this we compare a proof of Pelletier's problem 43 in TLAPS, Isabelle/Isar and SPA. |
09:30 | Natural Deduction Assistant (NaDeA) SPEAKER: Jørgen Villadsen ABSTRACT. We present the Natural Deduction Assistant (NaDeA) and discuss its advantages and disadvantages as a tool for teaching logic. In particular we compare our approach to natural deduction in the Isabelle proof assistant. NaDeA is available online: https://nadea.compute.dtu.dk/ |
10:00 | Rating of Geometric Automated Theorem Provers SPEAKER: Pedro Quaresma ABSTRACT. The field of geometric automated theorem provers has a long and rich history, from the early synthetic provers of the 1950s to today's provers. Establishing a rating among them would be useful for improving the current methods/implementations, making them wider in scope, more efficient, and capable of producing readable and trustworthy proofs. To achieve such a goal we need a common test bench: a common language to describe the geometric problems, a large set of problems, and a set of measures. |
Model Checking I
09:00 | ABSTRACT. Model-based software development is a leading methodology for the construction of safety- and mission-critical embedded systems. Formal models of such systems can be validated, via formal verification or testing, against system-level requirements and modified as needed before the actual system is built. In many cases, source code can even be produced automatically from the model once the system designer is satisfied with it. As embedded systems become increasingly large and sophisticated, the size and complexity of models grows correspondingly, making the verification of top-level requirements harder, especially in the case of infinite-state systems. We argue that, as with conventional software, contracts are an effective mechanism to establish boundaries between components in a system model, and can be used to aid the verification of system-level properties by using compositional reasoning techniques. Component-level contracts also enable formal analyses that provide more accurate feedback to identify sources of errors or the parts of a system that contribute to the satisfaction of a given requirement. This talk discusses our experience in designing an assume-guarantee-based contract language on top of the Lustre modeling language and leveraging it to extend the Kind 2 model checker with contract-based compositional reasoning techniques. |
10:00 | SPEAKER: Alexander Nutz ABSTRACT. Data flow proofs, which are classically based on data flow graphs, bear a promising practical potential for automatic verification. We base a fundamental investigation of data flow proofs on the notion of a data flow tree, which allows us to capture data flow at a finer level of granularity than data flow graphs. In particular, we characterize the exact relation between the data flow in actual executions and the data flow represented by the data flow graph. |
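For readers who want the baseline notion pinned down: a data flow graph in the classical sense records def-use edges between statements. A small Python sketch with an invented straight-line program:

```python
# Each statement is (assigned_variable, variables_read).
program = [
    ("x", []),           # 0: x := input()
    ("y", ["x"]),        # 1: y := x + 1
    ("x", ["x"]),        # 2: x := x * 2
    ("z", ["x", "y"]),   # 3: z := x + y
]

edges, last_def = [], {}
for i, (target, uses) in enumerate(program):
    for v in uses:
        if v in last_def:                      # reaching definition of v
            edges.append((last_def[v], i, v))
    last_def[target] = i                       # statement i now defines target

for src, dst, var in edges:
    print(f"stmt {src} --{var}--> stmt {dst}")
# 0 --x--> 1, 0 --x--> 2, 2 --x--> 3, 1 --y--> 3
```

Data flow trees, roughly, unfold such edges per execution, which is the finer level of granularity the paper exploits.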
Invited Speaker
Contributed Talks: Distributed Systems I
09:00 | Formal Synthesis of Control Strategies for Dynamical Systems ABSTRACT. In control theory, complex models of physical processes, such as systems of differential equations, are analyzed or controlled from simple specifications, such as stability and set invariance. In formal methods, rich specifications, such as formulae of temporal logics, are checked against simple models of software programs and digital circuits, such as finite transition systems. With the development and integration of cyber physical and safety critical systems, there is an increasing need for computational tools for verification and control of complex systems from rich, temporal logic specifications. In this talk, I will discuss a set of approaches to formal synthesis of control strategies for dynamical systems from temporal logic specifications. I will first show how automata games for finite systems can be extended to obtain conservative control strategies for low dimensional linear and multilinear dynamical systems. I will then present several methods to reduce conservativeness and improve the scalability of the control synthesis algorithms for more general classes of dynamics. I will illustrate the usefulness of these approaches with examples from robotics and traffic control. |
10:00 | Towards a Hierarchical-Control Architecture for Distributed Autonomous Systems (Extended Abstract - Paper) SPEAKER: Saud Yonbawi ABSTRACT. We present work-in-progress to develop a hierarchical-control architecture for distributed autonomous systems. The new architecture comprises (i) a decentralised system-level control loop that partitions the goals of the distributed autonomous system among its components under conservative assumptions, and (ii) component-level control loops responsible for achieving the component sub-goals. The operation of the two control loops is underpinned by formal models that are small enough to enable their verification at runtime. To illustrate the use of our approach, we briefly describe its application to a multi-robot search and rescue mission. |
10:00 | SPEAKER: Wieger Wesselink ABSTRACT. Model checking is a technique for automatically assessing the quality of software and hardware systems and designs. Given a formalisation of both the system behaviour and the requirements the system should meet, a model checker returns either a yes or a no. In case the answer is not as expected, it is desirable to provide feedback to the user as to why this is the case. Providing such feedback, however, is not straightforward if the requirement is expressed in a highly expressive logic such as the modal $\mu$-calculus, and when the decision problem is solved using intermediate formalisms. In this paper, we show how to extract witnesses and counterexamples from parameterised Boolean equation systems encoding the model checking problem for the first-order modal $\mu$-calculus. We have implemented our technique in the modelling and analysis toolset mCRL2 and showcase our approach on a few illustrative examples. |
11:00 | SPEAKER: Eugenio Orlandelli ABSTRACT. We introduce labelled sequent calculi for quantified modal logics with non-rigid and non-denoting terms. We prove that these calculi have the good structural properties of G3-style calculi. In particular, all rules are height-preserving invertible, weakening and contraction are height-preserving admissible and cut is admissible. Finally, we show that each calculus gives a proof-theoretic characterization of validity in the corresponding class of models. |
11:30 | An Outline for Simple Semi-automated Proof Assistants for First-order Modal Logics ABSTRACT. Most theorem provers and proof assistants are written in imperative or functional programming languages. Recently, the claim that higher-order logic programming languages might be better suited for this task was revisited and a new interpreter, as well as new proof assistants based on it, were introduced. In this paper we follow these claims and describe a concise implementation of a prototype for a semi-automated proof assistant for first-order modal logic. The aim of this paper is to encourage the development of personal proof assistants and semi-automated provers for a variety of modal calculi. |
12:00 | Labelled Connection-based Proof Search for Multiplicative Intuitionistic Linear Logic SPEAKER: Didier Galmiche ABSTRACT. We propose a connection-based characterization for multiplicative intuitionistic linear logic (MILL) which is based on labels and constraints that capture Urquhart's possible worlds semantics of the logic. We briefly recall the purely syntactic sequent calculus for MILL, which we call LMILL and then propose a sound and complete labelled sequent calculus GMILL for MILL. We then show how to translate every LMILL proof into a GMILL proof. From this translation, refining our results on BI (The logic of Bunched Implications), we show the soundness of the connection-based characterization and its completeness without the need for any notion of multiplicity. |
11:00 | SPEAKER: Anthony Leclerc ABSTRACT. Recent research in extensions of Answer Set Programming has included a renewed interest in the language of Epistemic Specifications, which adds modal operators K ("known") and M ("may be true") to provide for more powerful introspective reasoning and enhanced capability, particularly when reasoning with incomplete information. An epistemic logic program is a set of rules in this language. Infused with the research has been the desire for an efficient solver to enable the practical use of such programs for problem solving. In this paper, we report on the current state of development of epistemic logic program solvers. |
11:30 | SPEAKER: Manuel Bichler ABSTRACT. Epistemic Logic Programs (ELPs) are an extension of Answer Set Programming (ASP) with epistemic operators that allow for a form of meta-reasoning, that is, reasoning over multiple possible worlds. Existing ELP solving approaches generally rely on making multiple calls to an ASP solver in order to evaluate the ELP. However, in this paper, we show that there also exists a direct translation from ELPs into non-ground ASP with bounded arity. The resulting ASP program can thus be solved in a single shot. We then implement this encoding method, using recently proposed techniques to handle large, non-ground ASP rules, into a prototype ELP solving system called selp. This solver exhibits competitive performance on a set of ELP benchmark instances. |
12:00 | SPEAKER: Chiaki Sakama ABSTRACT. In this paper, we introduce methods of encoding propositional logic programs in vector spaces. Interpretations are represented by vectors and programs are represented by matrices. The least model of a definite program is computed by multiplying interpretation vectors and program matrices. To optimize computation in vector spaces, we provide a method of partial evaluation of programs using linear algebra. Partial evaluation is done by unfolding rules in a program, and it is realized in a vector space by multiplying program matrices. We perform experiments using randomly generated programs and show that partial evaluation has potential for realizing efficient computation in very large programs. |
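A minimal sketch of the vector-space encoding, assuming at most one rule per head (the general case needs the program transformations discussed in the paper); numpy stands in for the linear-algebra machinery:

```python
import numpy as np

# Definite program:  p :- q, r.   q.   r :- q.   s :- p.
atoms = ["p", "q", "r", "s"]
rules = [("p", ["q", "r"]), ("q", []), ("r", ["q"]), ("s", ["p"])]
idx = {a: i for i, a in enumerate(atoms)}
n = len(atoms)

# Program matrix: entry 1/|body| per body atom, so a head row sums to 1
# exactly when its whole body is true; facts fire from a constant column.
M = np.zeros((n, n + 1))
for head, body in rules:
    if body:
        for b in body:
            M[idx[head], idx[b]] = 1.0 / len(body)
    else:
        M[idx[head], n] = 1.0

v = np.zeros(n + 1)
v[n] = 1.0                                         # the constant 'true' component
while True:
    fired = (M @ v >= 1.0 - 1e-9).astype(float)    # one T_P step by matrix product
    new_v = np.concatenate([np.maximum(fired, v[:n]), [1.0]])
    if np.array_equal(new_v, v):
        break
    v = new_v

print([a for a in atoms if v[idx[a]] == 1.0])      # ['p', 'q', 'r', 's']
```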
Automated verification
11:00 | ABSTRACT. We present a novel notion of deadlock for tasks which synchronize on arbitrary boolean conditions, and a sound deadlock analysis for it. Contrary to other approaches, our analysis aims to detect deadlocks caused by faulty system design, rather than implementation bugs. A deadlock is a circular dependency between multiple tasks, where a task depends on a second task if the execution of this second task enables the first one to resume. This requires the analysis of the side-effects of the computations. The analysis is implemented as an extension to the DECO tool, is evaluated for the full coreABS language, and uses the KeY-ABS theorem prover to reason about side-effects of computations. |
11:30 | SPEAKER: Rebeka Farkas ABSTRACT. Efficient techniques for reachability analysis of timed automata are zone-based methods that explore the reachable state space from the initial state, and SMT-based methods that perform backward search from the target states. It is also possible to perform backward exploration based on zones, but calculating predecessor states for systems with data variables is computationally expensive, prohibiting the successful application of this approach so far. In this paper we overcome this problem by combining the zone-based backward exploration algorithm with the weakest precondition operation for data variables. This combination allows us to handle diagonal constraints efficiently as opposed to zone-based forward search where most approaches require additional operations to ensure correctness. We demonstrate the applicability and compare the efficiency of the algorithm to existing forward exploration approaches by measurements performed on industrial case studies. We show that data variables can be handled efficiently by the weakest precondition operation but the large number of target states often prevents successful verification. |
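For assignments to data variables, the weakest precondition used in the backward steps is substitution: wp(x := e, Q) = Q[e/x]. A small illustrative sketch, using sympy purely as a convenient stand-in for the symbolic machinery (not the paper's implementation):

```python
import sympy as sp

x, y = sp.symbols("x y")

def wp_assign(var, expr, post):
    """wp(var := expr, post) = post with var replaced by expr."""
    return sp.simplify(post.subs(var, expr))

# Backward over:  y := y + 1;  x := x + y   with target region x > 10
post = sp.Gt(x, 10)
q1 = wp_assign(x, x + y, post)   # x + y > 10
q0 = wp_assign(y, y + 1, q1)     # x + y + 1 > 10
print(q1, "|", q0)
```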
12:00 | SPEAKER: Cristina Serban ABSTRACT. In this paper, we present Inductor, a checker for entailments between mutually recursive predicates, whose inductive definitions contain ground constraints belonging to the quantifier-free fragment of Separation Logic. Our tool implements a proof-search method for a cyclic proof system that we have shown to be sound and complete, under certain semantic restrictions involving the set of constraints in a given inductive system. Dedicated decision procedures from the DPLL(T)-based SMT solver CVC4 are used to establish the satisfiability of Separation Logic formulae. Given inductive predicate definitions, an entailment query, and a proof-search strategy, Inductor uses a compact tree structure to explore all derivations enabled by the strategy. A successful result is accompanied by a proof, while an unsuccessful one is supported by a counterexample. |
11:00 | Introduction: Role-Based Access Control as a Programming Challenge |
11:10 | Security Policies in Constraint Handling Rules ABSTRACT. Role-Based Access Control (RBAC) is a popular security policy framework. User access to resources is controlled by roles and privileges. Constraint Handling Rules (CHR) is a rule-based constraint logic language. CHR has been used to implement security policies for trust management systems like RBAC for more than two decades. In this extended abstract, we give some references to these works. |
11:20 | LPOP2018 XSB Position Paper ABSTRACT. In this position paper we first describe a classic logic programming approach to the solution of (a portion of) the challenge problem involving RBAC. We use the XSB Tabled Prolog Language and system \cite{xsbmanual-07}, with ideas from Transaction Logic \cite{trans-tcs94xs}. Then we discuss issues that involve what would be required to use such an implementation in a real-world application requiring RBAC functionality. And finally we raise issues about the possible design of a general procedural application programming interface to logic (programming) systems. In analogy to ODBC (Open Data Base Connectivity), what might an OLSC (Open Logic System Connectivity) look like? |
11:30 | SPEAKER: Nicola Leone ABSTRACT. In this paper, we answer the Role-Based Access Control (RBAC) challenge by showcasing the solution of RBAC components using JASP, a flexible framework integrating ASP with Java. In JASP the programmer can simply embed ASP code in a Java program without caring about the interaction with the underlying ASP system. This way, it is possible to solve seamlessly both tasks suitable for imperative and declarative specification, as required by RBAC. |
11:40 | The RBAC challenge in the Knowledge Base Paradigm |
11:50 | Role-Based Access Control via LogicBlox |
12:00 | ABSTRACT. Both software engineers and business people tend to be reluctant to adopt logic-based methods. On the one hand, this may be due to unfamiliarity with "scary" logical syntax. To ameliorate this, we developed an API for a state-of-the-art logic system, using only standard Python syntax. On the other hand, logic-based methods might have more impact if they could be used directly by business people. The recent DMN standard might help in this respect. |
12:10 | Easier Rules and Constraints for Programming SPEAKER: Yanhong A. Liu ABSTRACT. We discuss how rules and constraints might be made easier for more conventional programming. We use a language that extends DistAlgo, which extends Python, and we use the RBAC programming challenge plus distributed RBAC as examples. |
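For orientation, the core RBAC relations and access check from the challenge are tiny when written directly in plain Python; this hypothetical sketch (invented users, roles, and permissions, not the DistAlgo-based solution of the talk) shows the shape of the problem:

```python
# Core RBAC state as sets of tuples.
user_role = {("alice", "engineer"), ("bob", "admin")}
role_perm = {("engineer", ("read", "repo")),
             ("admin", ("write", "repo"))}
role_hierarchy = {("admin", "engineer")}   # admin inherits engineer's permissions

def roles_of(user):
    """All roles of a user, closed under the (assumed acyclic) hierarchy."""
    roles = {r for u, r in user_role if u == user}
    changed = True
    while changed:
        inherited = {jr for sr, jr in role_hierarchy if sr in roles}
        changed = not inherited <= roles
        roles |= inherited
    return roles

def check_access(user, operation, obj):
    return any((r, (operation, obj)) in role_perm for r in roles_of(user))

print(check_access("alice", "read", "repo"))   # True, via engineer
print(check_access("alice", "write", "repo"))  # False
print(check_access("bob", "read", "repo"))     # True, inherited from engineer
```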
12:20 | Questions about RBAC challenge solutions |
11:00 | Courcelle’s Theorem – A Game-Theoretic Approach SPEAKER: Peter Rossmanith ABSTRACT. Courcelle’s Theorem states that every problem definable in Monadic Second-Order logic can be solved in linear time on structures of bounded treewidth, for example, by constructing a tree automaton that recognizes or rejects a tree decomposition of the structure. Existing, optimized software like the MONA tool can be used to build the corresponding tree automata, which for bounded treewidth are of constant size. Unfortunately, the constants involved can become extremely large – every quantifier alternation requires a power set construction for the automaton. Here, the required space can become a problem in practical applications. We present a direct approach based on model checking games, which avoids the expensive power set construction. The resulting tool solves some problems on graphs quite efficiently if the treewidth is moderate. |
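The flavor of this game/DP style can be seen in the simplest instance, maximum independent set on a tree (treewidth 1): a constant-size table per node, combined bottom-up, instead of a huge tree automaton. A minimal Python sketch with an invented tree:

```python
# Children lists of a tree rooted at node 0.
tree = {0: [1, 2], 1: [3, 4], 2: [], 3: [], 4: []}

def solve(v):
    """Return (best size with v excluded, best size with v included)."""
    excluded, included = 0, 1
    for child in tree[v]:
        c_ex, c_in = solve(child)
        excluded += max(c_ex, c_in)   # child is free if v is out
        included += c_ex              # child must be out if v is in
    return excluded, included

print(max(solve(0)))  # 3, e.g. {2, 3, 4}
```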
12:05 | ABSTRACT. Weak monadic second-order logic of one successor (WS1S) is a simple and natural formalism for specifying regular properties. WS1S is decidable, although the decision procedure's complexity is non-elementary. Typically, decision procedures for WS1S exploit the logic-automaton connection, i.e., they escape the simple and natural formalism by translating formulas into equally expressive but less concise regular structures such as finite automata. In this talk, I will present a decision procedure for WS1S that stays within the logical world by directly operating on formulas. The key operation is the derivative of a formula, modeled after Brzozowski's derivatives of regular expressions. The presented decision procedure has been formalized and proved correct in the interactive proof assistant Isabelle. |
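The derivative idea transfers directly from Brzozowski's original setting. The sketch below implements derivatives of regular expressions (not the WS1S formula derivatives of the talk) to decide word membership without ever constructing an automaton:

```python
# Regexes as tuples: ("empty",), ("eps",), ("sym", a),
# ("alt", r, s), ("cat", r, s), ("star", r).
EMPTY = ("empty",)

def nullable(r):
    tag = r[0]
    if tag == "eps": return True
    if tag in ("empty", "sym"): return False
    if tag == "alt": return nullable(r[1]) or nullable(r[2])
    if tag == "cat": return nullable(r[1]) and nullable(r[2])
    return True                                   # star

def deriv(r, a):
    tag = r[0]
    if tag in ("eps", "empty"): return EMPTY
    if tag == "sym": return ("eps",) if r[1] == a else EMPTY
    if tag == "alt": return ("alt", deriv(r[1], a), deriv(r[2], a))
    if tag == "cat":
        d = ("cat", deriv(r[1], a), r[2])
        return ("alt", d, deriv(r[2], a)) if nullable(r[1]) else d
    return ("cat", deriv(r[1], a), r)             # d(r*) = d(r).r*

def matches(r, word):
    for a in word:
        r = deriv(r, a)
    return nullable(r)

ab_star = ("star", ("cat", ("sym", "a"), ("sym", "b")))   # (ab)*
print(matches(ab_star, "abab"), matches(ab_star, "aba"))  # True False
```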
Techniques to verify parallel programs.
11:00 | SPEAKER: Jérôme Dohrau ABSTRACT. Information about the memory locations accessed by a program is, for instance, required for program parallelisation and program verification. Existing inference techniques for this information provide only partial solutions for the important class of array-manipulating programs. In this paper, we present a static analysis that infers the memory footprint of an array program in terms of permission pre- and postconditions as used, for example, in separation logic. This formulation allows our analysis to handle concurrent programs and produces specifications that can be used by verification tools. Our analysis expresses the permissions required by a loop via maximum expressions over the individual loop iterations. These maximum expressions are then solved by a novel maximum elimination algorithm, in the spirit of quantifier elimination. Our approach is sound and is implemented; an evaluation on existing benchmarks for memory safety of array programs demonstrates accurate results, even for programs with complex access patterns and nested loops. |
11:45 | SPEAKER: Yatin Manerkar ABSTRACT. Paramount to the viability of a parallel architecture is the correct implementation of its memory consistency model (MCM). Although tools exist for verifying consistency models at several design levels, a problematic verification gap exists between checking an abstract microarchitectural specification of a consistency model and verifying that the actual processor RTL implements it correctly. This paper presents RTLCheck, a methodology and tool for narrowing the microarchitecture/RTL MCM verification gap. Given a set of microarchitectural axioms about MCM behavior, an RTL design, and user-provided mappings to assist in connecting the two, RTLCheck automatically generates the SystemVerilog Assertions (SVA) needed to verify that the implementation satisfies the microarchitectural specification for a given litmus test program. When combined with existing automated MCM verification tools, RTLCheck enables test-based full-stack MCM verification from high-level languages to RTL. We evaluate RTLCheck on a multicore version of the RISC-V V-scale processor, and discover a bug in its memory implementation. Once the bug is fixed, we verify that the multicore V-scale implementation satisfies sequential consistency across 56 litmus tests. The JasperGold property verifier finds complete proofs for 89% of our properties, and can find bounded proofs for the remaining properties. |
11:00 | Challenges of specifying concurrent program components ABSTRACT. The purpose of this paper is to review some of the challenges of formally specifying components of concurrent programs in a manner suitable for refining them to an implementation. We present some approaches to devising specifications, in some cases investigating different forms of specification suitable for different contexts. Our focus is on shared variable concurrency rather than event-based process algebras. |
11:30 | SPEAKER: Wided Ghardallou ABSTRACT. To derive a program for a given specification R means to find an artifact P that satisfies two conditions: P is executable in some programming language; and P is correct with respect to R. Refinement-based program derivation achieves this goal in a stepwise manner by enhancing executability while preserving correctness until we achieve complete executability. In this paper, we argue that it is possible to invert these properties, and to derive a program by enhancing correctness while preserving executability (i.e. proceeding from one executable program to another) until we achieve absolute correctness. Of course, this latter process is possible only if we know what it means to enhance correctness. |
12:00 | SPEAKER: Mathieu Montin ABSTRACT. Software is now ubiquitous and involved in complex interactions with the human users and the physical world in so-called cyber-physical systems (CPS) where the management of time is a major issue. Separation of concerns is a key asset in the development of these ever more complex systems. Two different kinds of separation exist: a first one corresponds to the different steps in a development leading from the abstract requirements to the system implementation and is qualified as vertical. It matches the commonly used notion of refinement. A second one corresponds to the various components in the system architecture at a given level of refinement and is called horizontal. Refinement has been studied thoroughly for the data, functional and concurrency concerns while our work focuses on the time modeling concern. This contribution aims at providing a formal construct for the verification of vertical separation in time models, through the definition of an order between strict partial orders used to relate the different instants in asynchronous systems. This work has been conducted using the proof assistant Agda and is connected to a previous work on the asynchronous language CCSL, which has also been modeled using the same tool. |
11:00 | Reactive Synthesis from LTL Specification with Spot ABSTRACT. We present ltlsynt, a new tool for reactive synthesis from LTL specifications. It relies on the efficiency of Spot to translate the input LTL specification to a deterministic parity automaton. The latter yields a parity game, which we solve with Zielonka's recursive algorithm. The approach taken in ltlsynt was widely believed to be impractical, due to the double-exponential size of the parity game, and to the open status of the complexity of parity games resolution. ltlsynt ranked second of its track in the 2017 edition of the SYNTCOMP competition. This demonstrates the practical applicability of the parity game approach, when backed by efficient manipulations of omega-automata such as the ones provided by Spot. We present our approach and report on our experimental evaluation of ltlsynt to better understand its strengths and weaknesses. |
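Zielonka's recursive algorithm, the last step of the pipeline described above, fits in a few dozen lines. A compact reference version in Python (max-priority convention, total game assumed; this is a sketch, not Spot's implementation):

```python
# game: node -> (owner, priority, successors). Player 0 wins a play iff
# the maximal priority occurring infinitely often is even.

def attractor(game, nodes, target, player):
    attr = set(target)
    changed = True
    while changed:
        changed = False
        for v in nodes - attr:
            owner, _, succs = game[v]
            succs = [s for s in succs if s in nodes]   # restrict to subgame
            if (owner == player and any(s in attr for s in succs)) or \
               (owner != player and all(s in attr for s in succs)):
                attr.add(v)
                changed = True
    return attr

def zielonka(game, nodes):
    """Return (winning region of player 0, winning region of player 1)."""
    if not nodes:
        return set(), set()
    p = max(game[v][1] for v in nodes)
    player = p % 2
    a = attractor(game, nodes, {v for v in nodes if game[v][1] == p}, player)
    w0, w1 = zielonka(game, nodes - a)
    opp_win = w1 if player == 0 else w0
    if not opp_win:                       # player wins the whole subgame
        return (nodes, set()) if player == 0 else (set(), nodes)
    b = attractor(game, nodes, opp_win, 1 - player)
    w0, w1 = zielonka(game, nodes - b)
    return (w0, w1 | b) if player == 0 else (w0 | b, w1)

game = {"u": (0, 1, ["v"]), "v": (1, 2, ["u"])}
print(zielonka(game, set(game)))   # ({'u', 'v'}, set()): priority 2 dominates
```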
11:18 | SPEAKER: Philip Lukert ABSTRACT. Hyperproperties generalize trace properties, i.e., sets of traces, to sets of sets of traces. Prominent examples are information-flow policies like observational determinism and noninterference. In this paper, we give an overview on the problem of automatically synthesizing implementations from hyperproperties given in the temporal logic HyperLTL. We show that HyperLTL synthesis subsumes various synthesis approaches studied in the literature: synthesis under incomplete information, distributed synthesis, symmetric synthesis, fault-tolerance synthesis, and synthesis with dynamic hierarchical information. As a case study, where the control of the flow of information and the distributivity of the system are equally important, we consider the dining cryptographers problem: neither well-studied LTL synthesis nor its distributed variant is sufficient for this problem. Our implementation of the (bounded) synthesis procedure for HyperLTL, called BoSyHyper, is able to automatically construct a solution to this problem. We present the decidability results for HyperLTL synthesis: we identify, based on the quantifier structure of the HyperLTL formula, for which fragments the synthesis problem remains decidable and for which fragments we have to rely on a bounded variation of the decision procedure. |
11:36 | Maximum Realizability for Linear Temporal Logic Specifications SPEAKER: Rayna Dimitrova ABSTRACT. Automatic synthesis from linear temporal logic (LTL) specifications is widely used in robotic motion planning and control of autonomous systems. A common specification pattern in such applications consists of an LTL formula describing the requirements on the behaviour of the system, together with a set of additional desirable properties. We study the synthesis problem in settings where the overall specification is unrealizable, more precisely, when some of the desirable properties have to be (temporarily) violated in order to satisfy the system's objective. We provide a quantitative semantics of sets of safety specifications, and use it to formalize the "best-effort" satisfaction of such soft specifications while satisfying the hard LTL specification. We propose an algorithm for synthesizing implementations that are optimal with respect to this quantitative semantics. Our method builds upon the idea of the bounded synthesis approach, and we develop a MaxSAT encoding which allows for maximizing the quantitative satisfaction of the safety specifications. We evaluate our algorithm on scenarios from robotics and power distribution networks. |
11:54 | SPEAKER: Zexiang Liu ABSTRACT. In this work we propose a patching algorithm to incrementally modify controllers, synthesized to satisfy a temporal logic formula, when some of the control actions become unavailable. The main idea of the proposed algorithm is to "warm-start" the synthesis process with an existing fixed-point based controller that has a larger action set. By exploiting the structure of the fixed-point based controllers, our algorithm avoids repeated computations while synthesizing a controller with a restricted action set. Moreover, we show that the algorithm is sound and complete, that is, it provides the same guarantees as synthesizing a controller from scratch with the new action set. An example on synthesizing controllers for a simplified walking robot model under ground constraints is used to illustrate the approach. In this application, the ground constraints determine the action set and they might not be known a priori. Therefore it is of interest to quickly modify a controller synthesized for an unconstrained surface when new constraints are encountered. Our simulations indicate that the proposed approach provides at least an order of magnitude speed-up compared to synthesizing a controller from scratch. |
12:12 | Safe, Automated and Formal Synthesis of Digital Controllers for Continuous Plants SPEAKER: Cristina David ABSTRACT. We present a sound and automated approach to synthesizing safe, digital controllers for physical plants represented as linear, time-invariant models. The synthesis accounts for errors caused by the digitization effects introduced by digital controllers operating in either fixed- or floating-point arithmetic. Our approach uses counterexample-guided inductive synthesis (CEGIS): in the first phase an inductive generalisation engine produces a possible solution that is safe for some possible initial conditions but may not be safe for all initial conditions. Safety for all initial conditions is then verified in a second phase either via bounded model checking or abstract acceleration; if the verification step fails, a counterexample is provided to the inductive generalisation and the process iterates until a safe controller is obtained. We implemented our approach in a tool named DSSynth (Digital-System Synthesizer) and demonstrate its practical value by automatically synthesizing safe controllers for physical plant models from the digital control literature. |
11:00 | ABSTRACT. The Animation module is a new TLA+ module which allows for the creation of interactive visualizations of TLC execution traces that can be run inside a web browser. The talk will present an overview of the core concepts of the Animation module and how it works, and then demonstrate a few examples of TLA+ specs that have been animated using the module. |
11:45 | SPEAKER: Igor Konnov ABSTRACT. We present the development version of BMCMT, a symbolic model checker for TLA+. It determines whether a TLA+ specification satisfies an invariant candidate by checking the satisfiability of an SMT formula that encodes: (1) an execution of bounded length, and (2) preservation of the invariant candidate in every state of the execution. Our tool is still in the experimental phase, due to a number of challenges posed by TLA+ semantics to SMT solvers. We will discuss these challenges and our current approach to them in the talk. Our preliminary experiments show that BMCMT scales better than the standard TLA+ model checker TLC for large parameter values, e.g., when a TLA+ specification models a system of 10 processes, though TLC is more efficient for tiny parameters, e.g., when the system has 3 processes. |
Model Checking II
11:00 | Executable Counterexamples in Software Model Checking ABSTRACT. Counterexamples, execution traces of the system that illustrate how an error state is reachable from the initial state, are essential for understanding verification failures. They are one of the most important features of Model Checkers, distinguishing them from Abstract Interpretation and other Static Analysis techniques by providing a user with actionable information to debug their system and/or the specification. While in Hardware and Protocol verification the counterexamples are re-playable in the system, in Software Model Checking counterexamples take the form of a textual report (often presented as a semi-structured document). This is problematic since it complicates the debugging process by preventing the use of existing processes and tools such as debuggers, delta-debuggers, fault localization, fault minimization, etc. In this paper, we argue that the most useful form of a counterexample is an executable harness that can be linked with the code under analysis (CUA) to produce an executable that exhibits the fault witnessed by the counterexample. A harness is different from a unit test since it can control the CUA directly, bypassing potentially complex logic that interprets program inputs. This makes harnesses easier to construct than complete unit tests. We describe the harness generation that we have developed in the SeaHorn verification framework. We identify key challenges for generating harnesses from Model Checking counterexamples of complex memory-manipulating programs that use many third-party libraries and external calls. We validate our prototype on the verification benchmarks from Linux Device Drivers in SV-COMP. Finally, we discuss many of the open challenges and suggest avenues for future work. |
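The harness idea in miniature: the counterexample fixes a sequence of values for the nondeterministic inputs, and the harness replays them through stubs. A hypothetical Python sketch (SeaHorn emits the real harness as C code that is linked with the CUA):

```python
counterexample = iter([3, -4])   # input values recovered from the model checker

def nondet_int():
    """Stub for the verifier's havoc'd input: replays the recorded values."""
    return next(counterexample)

def code_under_analysis():
    x = nondet_int()
    y = nondet_int()
    assert x + y >= 0, f"bug reproduced: x={x}, y={y}"

code_under_analysis()   # raises AssertionError: the fault is replayed natively
```

Because the stubs drive the CUA directly, standard debuggers and fault-localization tools apply unchanged.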
11:30 | SPEAKER: Pritom Rajkhowa ABSTRACT. We have previously described a novel fully automated program verification system called VIAP, primarily for verifying the safety properties of programs with integer assignments. In this paper, we extend it to programs with arrays. Our extension is not restricted to single-dimensional arrays but is general and works for multidimensional and nested arrays as well. In the most recent SV-COMP 2018 competition, VIAP with the array extension came in second in the ReachSafety-Arrays sub-category, behind VeriAbs. |
12:00 | SPEAKER: Karine Even-Mendoza ABSTRACT. In this paper we present an algorithm for bounded model-checking with SMT solvers of programs with library functions, either standard or user-defined. Typically, if the program correctness depends on the output of a library function, the model-checking process either treats this function as an uninterpreted function, or is required to use a theory under which the function in question is fully defined. The former approach leads to numerous spurious counter-examples, whereas the latter faces the danger of the state-explosion problem, where the resulting formula is too large to be solved by means of modern SMT solvers. We extend the approach of user-defined summaries and propose to represent the set of existing summaries for a given library function as a lattice of subsets of summaries, with the meet and join operations defined as intersection and union, respectively. The refinement process is then triggered by the lattice traversal, where in each node the SMT solver uses the subset of SMT summaries stored in this node to search for a satisfying assignment. The direction of the traversal is determined by the results of the concretisation of an abstract counterexample obtained at the current node. Our experimental results demonstrate that this approach allows us to solve a number of instances that were previously unsolvable by existing bounded model-checkers. |
Contributed Talks: Distributed Systems II
11:00 | Work-in-Progress: Adaptive Neighborhood Resizing for Stochastic Reachability in Distributed Flocking SPEAKER: Anna Lukina ABSTRACT. We present DAMPC, a distributed, adaptive-horizon and adaptive-neighborhood algorithm for solving the stochastic reachability problem in distributed flocking modeled as a Markov decision process. At each time step, DAMPC takes the following actions: First, every agent calls a centralized, adaptive-horizon model-predictive control (AMPC) algorithm to obtain an optimal solution for its local neighborhood. Second, the agents derive the flock-wide optimal solution through a sequence of consensus rounds. Third, the neighborhood is adaptively resized using a flock-wide, cost-based Lyapunov function V. This improves efficiency without compromising convergence. The proof of statistical global convergence is non-trivial and involves showing that V follows a monotonically decreasing trajectory despite potential fluctuations in cost and neighborhood size. We evaluate DAMPC's performance using statistical model checking. Our results demonstrate that, compared to AMPC, DAMPC achieves considerable speed-up (a factor of 2 in some cases) with only a slightly lower rate of convergence. The smaller average neighborhood size and lookahead horizon demonstrate the benefits of the DAMPC approach for stochastic reachability problems involving any distributed controllable system that possesses a cost function. |
11:30 | Policing Functions for Machine Learning Systems SPEAKER: Thai Son Hoang ABSTRACT. Machine learning systems typically involve complex decision-making mechanisms while lacking clear and concise specifications. Demonstrating the quality of machine learning systems is therefore a challenging task. We propose an approach combining formal methods and metamorphic testing for improving the quality of machine learning systems. In particular, our framework enables the possibility of developing policing functions for the runtime monitoring of machine learning systems based on metamorphic relations. |
12:00 | SPEAKER: Marina De Vos ABSTRACT. This abstract considers the problem of how autonomous systems might incorporate accounting for first (and higher order) norms in their own actions and those of others, through on-the-fly symbolic model-checking of (multiple) institutional models. |
11:00 | SPEAKER: Peter Schrammel ABSTRACT. Non-termination is the root cause of a variety of program bugs, such as hanging programs and denial-of-service vulnerabilities. This makes an automated analysis that can prove the absence of such bugs highly desirable. To scale termination checks to large systems, an interprocedural termination analysis seems essential. This is a largely unexplored area of research in termination analysis, where most effort has focused on small but difficult single-procedure problems. We present a modular termination analysis for C programs using template-based interprocedural summarisation. Our analysis combines a context-sensitive, over-approximating forward analysis with the inference of under-approximating preconditions for termination. Bit-precise termination arguments are synthesised over lexicographic linear ranking function templates. Our experimental results show the advantage of interprocedural reasoning over monolithic analysis in terms of efficiency, while retaining comparable precision. |
11:30 | GPO: A Path Ordering for Graphs SPEAKER: Jean-Pierre Jouannaud ABSTRACT. We generalize term rewriting techniques to finite, directed, ordered multigraphs. We define well-founded monotonic graph rewrite orderings inspired by the recursive path ordering on terms. Our graph path ordering provides a building block for rewriting with such graphs, which should impact the many areas in which computations take place on graphs. |
12:00 | Objective and Subjective Specifications ABSTRACT. We examine specifications for dependence on the agent that performs them. We look at the consequences for the Church-Turing Thesis and for the Halting Problem. |
14:00 | Going Beyond First-Order Logic with Vampire ABSTRACT. TBA |
14:00 | Towards Abstraction in ASP with an Application on Reasoning about Agent Policies SPEAKER: Zeynep G. Saribatur ABSTRACT. ASP programs are a convenient tool for problem solving, whereas with large problem instances the size of the state space can be prohibitive. We consider abstraction as a means of over-approximation and introduce a method to automatically abstract (possibly non-ground) ASP programs that preserves their structure, while reducing the size of the problem. One particular application case is the problem of defining declarative policies for reactive agents and reasoning about them, which we illustrate on examples. |
14:30 | Datalog for Static Analysis Involving Logical Formulae SPEAKER: Aaron Bembenek ABSTRACT. Datalog has become a popular language for writing static analyses. Because Datalog is very limited, some implementations of Datalog for static analysis have extended it with new language features. However, even with these features it is hard or impossible to express a large class of analyses because they use logical formulae to represent program state. FormuLog fills this gap by extending Datalog to represent, manipulate, and reason about logical formulae. We have used FormuLog to implement declarative versions of symbolic execution and abstract model checking, analyses previously out of the scope of Datalog-based languages. While this paper focuses on the design of FormuLog and one of the analyses we have implemented in it, it also touches on a prototype implementation of the language and identifies performance optimizations that we believe will be necessary to scale FormuLog to real-world static analysis problems. |
15:00 | ABSTRACT. We study syntactic conditions which guarantee that a CR-Prolog (Consistency Restoring Prolog) program has the antichain property: no answer set is a proper subset of another. A notable such condition is that the program's dependency graph is acyclic and has no directed path from one cr-rule head literal to another. |
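Operationally, the antichain property is a pairwise proper-subset check over the answer sets; a small Python sketch with invented data:

```python
from itertools import combinations

def antichain(sets):
    """True iff no set is a proper subset of another."""
    return not any(s < t or t < s for s, t in combinations(sets, 2))

print(antichain([{"a", "b"}, {"a", "c"}, {"b"}]))  # False: {b} < {a, b}
print(antichain([{"a", "b"}, {"a", "c"}]))         # True
```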
Automatic Verification of Concurrent Objects
14:00 | Panel: Practice of Modeling and Programming |
14:30 | ABSTRACT. A growing trend in modeling is the construction of high-level modeling languages that invoke a suite of solvers. This requires automatic reformulation of parts of the problem to suit different solvers, a process that typically introduces many auxiliary variables. We show how semantic typing can manage relationships between variables created by different parts of the problem. These relationships must be revealed to the solvers if efficient solution is to be possible. The key is to view variables as defined by predicates, and declaration of variables as analogous to querying a relational database that instantiates the predicates. The modeling language that results is self-documenting and self-checks for a number of modeling errors. |
15:10 | A Picat-based XCSP Solver - from Parsing, Modeling, to SAT Encoding SPEAKER: Neng-Fa Zhou ABSTRACT. This document gives an overview of a Picat-based XCSP3 solver, named PicatSAT, submitted to the 2018 XCSP competition. The solver demonstrates the strengths of Picat, a logic-based language, in parsing, modeling, and encoding constraints into SAT. |
15:20 | Confluence Analysis of Cognitive Models with Constraint Handling Rules ABSTRACT. Computational cognitive modeling tries to explore cognition through developing detailed, process-based understanding by specifying corresponding computational models. Currently, computational cognitive modeling architectures as well as the implementations of cognitive models are typically ad-hoc constructs. They lack a formalization from the computer science point of view. This impedes analysis of the models. In this work, we present how cognitive models can be formalized and analyzed with the help of logic programming in form of Constraint Handling Rules (CHR). We concentrate on confluence analysis of cognitive models in the popular cognitive architecture Adaptive Control of Thought -- Rational (ACT-R). |
Parallel approaches to SAT solving.
14:00 | Tuning Parallel SAT Solvers SPEAKER: Dirk Nowotka ABSTRACT. In this paper we present new implementation details and benchmarking results for our parallel portfolio solver TOPO. In particular, we discuss ideas and implementation details for the exchange of learned clauses in a massively-parallel SAT solver which is designed to run more than 1,000 solver threads in parallel. Furthermore, we go back to the roots of portfolio SAT solving, and discuss the impact of diversifying the solver by using different restart-, branching- and clause database management heuristics. We show that these techniques can be used to tune the solver towards different problems. However, in a case study on formulas derived from Bounded Model Checking problems we see the best performance when using a rather simple clause exchange strategy. We show details of these tests and discuss possible explanations for this phenomenon. As computing times on massively-parallel clusters are expensive, we consider it especially interesting to share these kinds of experimental results. |
14:45 | GPU Accelerated SAT solving SPEAKER: Muhammad Osama ABSTRACT. Details will follow shortly. |
14:00 | A more relaxed way to make concrete: uses of heuristic search to discover implementations SPEAKER: John Clark ABSTRACT. Over the past decades we have developed formal frameworks to refine specifications into more detailed representations. These handle both deterministic and probabilistic specifications. We also have developed means for relaxing formality when the occasion demands it (retrenchment). In this talk I will explore approaches to inferring programs from test data (derived from a specification), particularly involving the use of genetic programming (GP). I will pay particular attention to quantum artifacts, e.g. discussing how GP can find Shor's Quantum Discrete Fourier Transform circuits. Such approaches generally work by 'homing in' on an appropriate program, using deviation of results from specified results as guidance. This can produce programs that are either right or nearly right. I'll do a round trip with evolutionary computation and indicate the use of GP in unearthing formal specification fragments from test traces for traditional programs (using mutation approaches to get rid of uninteresting potential invariants inferred). |
14:00 | SPEAKER: Sumith Kulal ABSTRACT. General-purpose program synthesizers face a tradeoff between having a rich vocabulary for output programs and the time taken to discover a solution. One performance bottleneck is the construction of a space of possible output programs that is both expressive and easy to search. In this paper we achieve both richness and scalability using a new algorithm for constructing symbolic syntax graphs out of easily specified components to represent the space of output programs. Our algorithm ensures that any program in the space is type-safe and only mutates values that are explicitly marked as mutable. It also shares structure where possible and encodes programs in a straight-line format instead of the typical bushy-tree format, which gives an inductive bias towards realistic programs. These optimizations shrink the size of the space of programs, leading to more efficient synthesis, without sacrificing expressiveness. We demonstrate the utility of our algorithm by implementing Syncro, a system for synthesis of incremental operations. We evaluate our algorithm on a suite of benchmarks and show that it performs significantly better than prior work. |
14:18 | SPEAKER: Dennis Peuter ABSTRACT. We study possibilities of using symbol elimination in program verification and synthesis. We consider programs in which some instructions or subprograms are not fully specified; the task is to derive conditions on the parameters (or subprograms) which imply inductivity of certain properties. We then propose a method for property-directed invariant generation and analyze its properties. |
14:36 | Programming by example: efficient, but not “helpful” SPEAKER: Mark Santolucito ABSTRACT. Programming by example (PBE) is a powerful programming paradigm based on example driven synthesis. Users can provide examples, and a tool automatically constructs a program that satisfies the examples. To investigate the impact of PBE on real-world users, we built a study around StriSynth, a tool for shell scripting by example, and recruited 27 working IT professionals to participate. In our study, we asked the users to complete three tasks with StriSynth, and the same three tasks with PowerShell, a traditional scripting language. We found that, although our participants completed the tasks more quickly with StriSynth, they reported that they believed PowerShell to be a more helpful tool. |
14:54 | Minimal Synthesis of String To String Functions From Examples SPEAKER: Viktor Kunčak ABSTRACT. We study the problem of synthesizing string to string transformations from a set of input/output examples. The transformations we consider are expressed using deterministic finite automata (DFA) that read pairs of letters, one letter from the input and one from the output. The DFA corresponding to these transformations have additional constraints, ensuring that each input string is mapped to exactly one output string. We suggest that, given a set of input/output examples, the smallest DFA consistent with the examples is a good candidate for the transformation the user was expecting. We therefore study the problem of, given a set of examples, finding a minimal DFA consistent with the examples and satisfying the functionality and totality constraints mentioned above. We prove that, in general, this problem (the corresponding decision problem) is NP-complete. This is unlike the standard DFA minimization problem which can be solved in polynomial time. We provide several NP-hardness proofs that show the hardness of multiple (independent) variants of the problem. Finally, we propose an algorithm for finding the minimal DFA consistent with input/output examples, that uses a reduction to SMT solvers. We implemented the algorithm, and used it to evaluate the likelihood that the minimal DFA indeed corresponds to the DFA expected by the user. |
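The reduction to SMT mentioned in this abstract can be illustrated on the simpler classical problem of finding a smallest DFA consistent with positively and negatively labeled strings; the paper's encoding additionally imposes the functionality and totality constraints of pair-reading DFAs, which this sketch omits. The sketch assumes the z3 Python bindings.

    # For increasing state counts n, ask the SMT solver for a transition
    # function and accepting set consistent with the examples; the first
    # satisfiable n is the minimal size. Requires z3 (pip install z3-solver).
    from z3 import Int, IntVal, Bool, Solver, And, Not, If, sat

    ALPHABET = ['a', 'b']
    positive = ['ab', 'aab']      # strings the DFA must accept
    negative = ['', 'a', 'ba']    # strings the DFA must reject

    def find_dfa(n):
        s = Solver()
        trans = {(q, c): Int('t_%d_%s' % (q, c)) for q in range(n) for c in ALPHABET}
        acc = [Bool('acc_%d' % q) for q in range(n)]
        for v in trans.values():
            s.add(And(v >= 0, v < n))

        def step(cur, c):                  # symbolic lookup of trans[cur, c]
            nxt = trans[(n - 1, c)]
            for q in range(n - 1):
                nxt = If(cur == q, trans[(q, c)], nxt)
            return nxt

        def accepts(word):
            cur = IntVal(0)                # state 0 is the initial state
            for c in word:
                cur = step(cur, c)
            cond = acc[n - 1]
            for q in range(n - 1):
                cond = If(cur == q, acc[q], cond)
            return cond

        for w in positive:
            s.add(accepts(w))
        for w in negative:
            s.add(Not(accepts(w)))
        return s.model() if s.check() == sat else None

    n = 1
    while find_dfa(n) is None:
        n += 1
    print('minimal consistent DFA has', n, 'states')   # -> 2 for these examples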
14:00 | SPEAKER: Maximilian Doré ABSTRACT. Proving lemmas in synthetic geometry is an often time-consuming endeavour: many intermediate lemmas need to be proven before interesting results can be obtained. Automated theorem provers (ATPs) have made much progress in recent years and can prove many of these intermediate lemmas automatically. The interactive theorem prover Elfe accepts mathematical texts written in fair English and verifies the text with the help of ATPs. Geometrical texts can thereby easily be formalized in Elfe, leaving only the cornerstones of a proof to be derived by the user. This allows for teaching axiomatic geometry to students without prior experience in formal mathematics. |
14:30 | Proving in the Isabelle Proof Assistant that the Set of Real Numbers is not Countable ABSTRACT. We present a new succinct proof of the uncountability of the real numbers – optimized for clarity – based on the proof by Benjamin Porter in the Isabelle Analysis theory. |
Certification & Formalisation I
14:00 | Verified Software: Theories, Tools, ... and Engineering ABSTRACT. Continual innovation of software verification theories & tools is |
15:00 | Verified Certificate Checking for Counting Votes SPEAKER: Dirk Pattinson ABSTRACT. We introduce a new framework for verifying electronic vote counting results that are based on the Single Transferable Vote scheme (STV). Our approach frames electronic vote counting as certified computation, where each execution of the counting algorithm is accompanied by a certificate that witnesses the correctness of the output. These certificates are then checked for correctness independently of how they are produced. We advocate verifying the verifier rather than the software used to produce the result. We use the theorem prover HOL to formalise the STV vote counting scheme, and obtain a fully verified certificate checker. By connecting HOL with the verified CakeML compiler, we then extract an executable that is guaranteed to behave correctly with respect to the formal specification of the protocol down to machine level. We demonstrate that our verifier can check certificates of real-size elections efficiently. Our encoding is modular, so repeating the same process for another STV scheme would require a minimal amount of additional work. |
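The certified-computation idea scales down nicely to a toy: below, a checker for a much simpler counting scheme (plurality rather than STV), where the certificate records the tally after each ballot and is re-validated step by step, independently of the software that produced it. The scheme and encoding are invented for illustration.

    # Check a counting certificate without trusting the counting program.
    def check_certificate(ballots, certificate):
        if len(certificate) != len(ballots):
            return False
        tally = {}
        for ballot, claimed in zip(ballots, certificate):
            tally[ballot] = tally.get(ballot, 0) + 1
            if claimed != tally:       # every intermediate tally must match
                return False
        return True

    ballots = ['alice', 'bob', 'alice']
    certificate = [{'alice': 1}, {'alice': 1, 'bob': 1}, {'alice': 2, 'bob': 1}]
    print(check_certificate(ballots, certificate))   # -> True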
Invited Speaker
Contributed Talks: Autonomy
14:00 | Trust me I am autonomous ABSTRACT. The development of advanced robotics and autonomous applications now faces a confidence issue: acceptance in everyday life depends on it. This confidence could be justified by the use of dependability techniques, as is done in other safety-critical applications. However, due to specific robotic properties (such as continuous physical interaction or a non-deterministic decisional layer), many techniques need to be adapted or revised. This presentation will introduce these major issues for autonomous systems, and focus on current research work at LAAS in France on model-based risk analysis for physical human-robot interactions, active safety monitoring for autonomous systems, and testing in simulation of mobile robot navigation. |
15:00 | SPEAKER: Nick Hawes ABSTRACT. In this demo, we will illustrate our work on integrating formal verification techniques, in particular probabilistic model checking, to enable long term deployments of mobile service robots in everyday environments. Our framework is based on generating policies for Markov decision process (MDP) models of mobile robots, using (co-safe) linear temporal logic specifications. More specifically, we build MDP models of robot navigation and action execution where the probability of successfully navigating between two locations and the expected time to do so are learnt from experience. For a specification over these models, we maximise the probability of the robot satisfying it, and minimise the expected time to do so. The policy obtained for this objective can be seen as a robot plan with attached probabilistic performance guarantees. Our proposal is to showcase this framework live during the workshop, deploying our robot in the workshop venue and having it perform tasks throughout the day (the robot is based in the Oxford Robotics Institute, hence it can be easily moved to the workshop venue). In conjunction with showing the live robot behaviour, we will, among other things, provide visualisation of the generated policies on a map of the environment; showcase how the robot keeps track of the performance guarantees calculated offline during policy execution; and show how these guarantees can be used for execution monitoring. |
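The optimisation at the heart of this framework reduces, for co-safe LTL objectives, to maximising the probability of reaching goal states of a product MDP. Below, a minimal value-iteration sketch on an invented two-location robot model (not the authors' models).

    # Maximum probability of eventually reaching the goal in a tiny MDP.
    GOAL = 'dock'
    # mdp[state][action] = list of (probability, successor) pairs
    mdp = {
        'corridor': {'go_dock': [(0.8, 'dock'), (0.2, 'corridor')],
                     'wait':    [(1.0, 'corridor')]},
        'dock':     {'stay':    [(1.0, 'dock')]},
    }

    p = {s: (1.0 if s == GOAL else 0.0) for s in mdp}
    for _ in range(100):   # value iteration: p converges to the max probability
        p = {s: 1.0 if s == GOAL else
                max(sum(pr * p[t] for pr, t in succ) for succ in mdp[s].values())
             for s in mdp}
    print(p['corridor'])   # -> 1.0: repeatedly choosing 'go_dock' reaches the dock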
14:00 | Comparing on Strings: Semantic Kachinuki Order SPEAKER: Alfons Geser ABSTRACT. We present an extension of the Kachinuki order on strings. The Kachinuki order transforms the problem of comparing strings to the problem of comparing their syllables length-lexicographically, where the syllables are defined via a precedence on the alphabet. Our extension allows the number of syllables to increase under rewriting, provided we bound it by a weakly compatible interpretation. |
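One simple reading of the construction described in this abstract (an assumption of this sketch, not necessarily the authors' exact definition): syllables are the blocks starting at each occurrence of the precedence-maximal letter, and strings are compared first by syllable count, then syllable-by-syllable with respect to the precedence.

    # A toy Kachinuki-style comparison over the alphabet {a, b} with a < b.
    PREC = {'a': 0, 'b': 1}

    def syllables(s, top='b'):
        out, cur = [], ''
        for c in s:
            if c == top and cur:     # a new syllable starts at each 'b'
                out.append(cur)
                cur = ''
            cur += c
        return out + [cur] if cur else out

    def kachinuki_less(u, v):
        su, sv = syllables(u), syllables(v)
        if len(su) != len(sv):       # fewer syllables means smaller
            return len(su) < len(sv)
        key = lambda syl: [PREC[c] for c in syl]
        return [key(s) for s in su] < [key(s) for s in sv]

    print(kachinuki_less('b', 'bb'))   # -> True: fewer syllables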
14:30 | Embracing Infinity - Termination of String Rewriting by Almost Linear Weight Functions ABSTRACT. Weight functions are among the simplest methods for proving termination of string rewrite systems, and of rather limited applicability. In this working paper, we propose a generalized approach. As a first step, syllable decomposition yields a transformed, typically infinite rewrite system over an infinite alphabet, as the title indicates. Combined with generalized weight functions, termination proofs become feasible also for systems that are not necessarily simply terminating. The method is limited to systems with linear derivational complexity, however, and this working paper is restricted to original alphabets of size two. The proof principle is almost self-explanatory, and if successful, produces simple proofs with short proof certificates, often even shorter than the problem instance. A prototype implementation was used to produce nontrivial examples. |
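In their most elementary form (before the generalisation this paper develops), weight functions prove termination as follows: assign each letter a positive weight; since weights add up over a string, a strict decrease for every rule implies a decrease under arbitrary contexts, so no infinite rewrite sequence exists. The system and weights below are invented.

    # Proving termination of a string rewrite system by a weight function.
    weights = {'a': 2, 'b': 1}
    rules = [('aa', 'ab'), ('ab', 'bb')]     # rewrite rules l -> r

    def w(s):
        return sum(weights[c] for c in s)

    # Additivity: w(xly) > w(xry) whenever w(l) > w(r), for all contexts x, y.
    assert all(w(l) > w(r) for l, r in rules)
    print('termination proved by weight function', weights)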
15:00 | Well-founded models in proofs of termination ABSTRACT. We prove that operational termination of declarative programs can be characterized by means of well-founded relations between specific formulas which can be obtained from the program. We show how to generate such relations by means of logical models where the interpretations of some binary predicates are required to be well-founded relations. Such logical models can be automatically generated by using existing tools. This provides a basis for the implementation of tools for automatically proving operational termination of declarative programs. |
16:00 | Formalization of a Paraconsistent Infinite-Valued Logic ABSTRACT. Classical logics are explosive: from a contradiction everything follows. This is problematic, e.g., when reasoning about contradictory evidence. In paraconsistent logics, not everything follows from a contradiction. In this paper, formalized proofs of two meta-theorems about a propositional fragment of a paraconsistent infinite-valued higher-order logic are presented. One implies that the validity of any formula can be decided by considering a finite number of truth values and evaluating the formula in all models over these. The other implies that there is no upper bound on the size of this finite set; it depends on the number of propositional symbols in the formula. |
16:30 | System Demonstration: The Higher-Order Prover Leo-III SPEAKER: Alexander Steen ABSTRACT. The higher-order theorem prover Leo-III will be demonstrated. |
16:00 | ABSTRACT. In the development of safety-critical embedded systems, requirements-driven approaches are widely used. Expressing functional requirements in formal languages enables reasoning and formal testing. This paper proposes the Simplified Universal Pattern (SUP) as an easy-to-use formalism and compares it to SPS, another commonly used specification pattern system. Consistency is an important property of requirements that can be checked already in early design phases. However, formal definitions of consistency are rare in the literature and tend to be either too weak or computationally too complex to be applicable to industrial systems. Therefore this work proposes a new formal consistency notion, called partial consistency, for the SUP that is a trade-off between exhaustiveness and complexity. Partial consistency identifies critical cases and verifies whether these cause conflicts between requirements. |
16:30 | Formal Verification of Synchronisation, Gossip and Environmental Effects for Wireless Sensor Networks SPEAKER: Matt Webster ABSTRACT. The Internet of Things (IoT) promises a revolution in the monitoring and control of a wide range of applications, from urban water supply networks and precision agriculture food production, to vehicle connectivity and healthcare monitoring. For applications in such critical areas, control software and protocols for IoT systems must be verified to be both robust and reliable. Two of the largest obstacles to robustness and reliability in IoT systems are effects on the hardware caused by environmental conditions, and the choice of parameters used by the protocol. In this paper we use probabilistic model checking to verify that a synchronisation and dissemination protocol for Wireless Sensor Networks (WSNs) is correct with respect to its requirements, and is not adversely affected by the environment. We show how the protocol can be converted into a logical model and then analysed using the probabilistic model-checker, PRISM. Using this approach we prove under which circumstances the protocol is guaranteed to synchronise all nodes and disseminate new information to all nodes. We also examine the bounds on synchronisation as the environment changes the performance of the hardware clock, and investigate the scalability constraints of this approach. |
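The paper's analysis is exact, via PRISM; as a flavour of the kind of question asked, the Monte Carlo toy below estimates how often a set of nodes with drifting clocks and lossy messages synchronises within a bounded number of rounds. All parameters are invented.

    # Estimate the probability that all nodes synchronise within 20 rounds.
    import random

    def synchronised_within(rounds, nodes=5, drift=0.02, loss=0.1, eps=0.05):
        clocks = [random.uniform(0, 1) for _ in range(nodes)]
        for _ in range(rounds):
            ref = clocks[0]                    # node 0 broadcasts its clock
            for i in range(1, nodes):
                if random.random() > loss:     # message got through
                    clocks[i] = ref
                clocks[i] += random.uniform(-drift, drift)
            clocks[0] += random.uniform(-drift, drift)
            if max(clocks) - min(clocks) < eps:
                return True
        return False

    trials = 10000
    hits = sum(synchronised_within(20) for _ in range(trials))
    print('estimated synchronisation probability:', hits / trials)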
17:00 | SPEAKER: Jessica Petrasch ABSTRACT. During a course on model checking we developed BMoth, a full-stack model checker for classical B, featuring both explicit-state and symbolic model checking. Given that we only had a single university term to finish the project, a particular focus was on reusing existing libraries to reduce implementation workload. In the following, we report on a selection of reusable libraries, which can be combined into a prototypical model checker relatively easily. Additionally, we discuss where custom code depending on the specification language to be checked is needed and where further optimization can take place. To conclude, we compare to other model checkers for classical B. |
16:00 | SPEAKER: Filipe Gouveia ABSTRACT. Biological data is being produced at an increasing rate, calling for computational models able to organize and computationally reproduce existing observations. In particular, biological regulatory networks have been modeled relying on the Sign Consistency Model or the logical formalism. However, their construction still relies completely on a domain expert to choose the best functions for every network component. Given the number of possible functions of k arguments, this is typically an error-prone process. Here, we propose to assist the modeler using logic-based tools to verify the model, identifying crucial network components responsible for model inconsistency. We intend to obtain a model building procedure capable of providing the modeler with repaired models satisfying a set of pre-defined criteria, thereby minimizing possible modeling errors. |
16:00 | Invited Talk: The Young Software Engineer’s Guide to Using Formal Methods ABSTRACT. If programming was ever a hermitlike activity, those days are in the past. Like other internet-aided social processes, software engineers connect and learn online. Open-source repositories exemplify common coding patterns and best practices, videos and interactive tutorials teach foundations and pass on insight, and online forums invite and answer technical questions. These knowledge-sharing facilities make it easier for engineers to pick up new techniques, coding practices, languages, and libraries. This is good news in a world where software quality is as important as ever, where logic specification can be used to declare intent, and where formal verification tools have become practically feasible. In this talk, I give one view of the future of software engineering, especially with an eye toward software quality. I will survey some techniques, look at the history of tools, and inspire with some examples of what can be daily routine in the lives of next-generation software engineers. |
16:40 | How to upgrade ASP for true dynamic modelling and solving? ABSTRACT. The world is dynamic, and ASP is not! This is a provocative way to say that ASP is not up to dealing with many complex real-world applications having a dynamic nature, let alone transitions over states, not even mentioning more fine-grained temporal structures. |
16:50 | A Rule-Based Tool for Analysis and Generation of Graphs Applied to Mason's Marks ABSTRACT. We are developing a rule-based implementation of a tool to analyse and generate graphs. It is used in the domain of mason's marks. For thousands of years, stonemasons have been inscribing these symbolic signs on dressed stone. Geometrically, mason's marks are line drawings. They consist of a pattern of straight lines, sometimes circles and arcs. We represent mason's marks by connected planar graphs. Our prototype tool for analysis and generation of graphs is implemented in the rule-based declarative language Constraint Handling Rules (CHR). It features: a vertex-centric logical graph representation as constraints; derivation of properties and statistics from graphs; recognition of (sub)graphs and patterns in a graph; automatic generation of graphs from given constrained subgraphs; and drawing of graphs by visualization using SVG graphics. In particular, we have started to use the tool to classify and to invent mason's marks. In principle, our tool can be applied to any problem domain that admits a modeling as graphs. The drawing and generation module of our tool is available online at http://chr.informatik.uni-ulm.de/mason. |
17:00 | ABSTRACT. We propose a system design principle that explains how to use declarative programming (logic and functional) together with imperative programming. The advantages of declarative programming are well known; they include ease of analysis, verification, testing, optimization, maintenance, upgrading, and distributed implementation. We will not elaborate on these advantages here, but rather focus on what part of the software system should be written declaratively. Declarative programming cannot interface directly with the real world since it does not support common real-world concepts such as physical time, named state, and nondeterminism. Other programming paradigms that support these concepts must therefore be used, such as imperative programming (which contains named state). To optimize the system design, we propose that real-world concepts should only be used where they are needed, namely where the system interfaces with the real world. It follows that a software system should be built completely declaratively except where it interfaces with the real world. We motivate this principle with examples from our research and we give it a precise formal definition. |
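A miniature rendering of the proposed principle (names invented for illustration): the logic of the system is a pure function, and the code that touches real-world concepts such as physical time and output is confined to a thin imperative shell.

    # Declarative core, imperative boundary.
    import time

    def due_reminders(schedule, now):
        # Pure core: no clock, no I/O, no mutable state -- easy to analyse.
        return sorted(msg for t, msg in schedule if t <= now)

    def run(schedule):
        # Imperative shell: reads the real clock and performs output.
        now = time.time()
        for msg in due_reminders(schedule, now):
            print('REMINDER:', msg)

    run([(0, 'water the plants'), (time.time() + 3600, 'not yet due')])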
17:10 | Questions about logic and constraints in real-world applications |
17:20 | Panel: Future of Programming with Logic and Knowledge |
17:50 | Closing SPEAKER: Tuncay Tekle |
16:00 | ABSTRACT. Natural language elements in source code, e.g., the names of variables and functions, convey useful information. However, most existing bug detection tools ignore this information and therefore miss some classes of bugs. The few existing name-based bug detection approaches reason about names on a syntactic level and rely on manually designed and tuned algorithms to detect bugs. This talk presents DeepBugs, a learning approach to name-based bug detection, which reasons about names based on a semantic representation and which automatically learns bug detectors instead of manually writing them. We formulate bug detection as a binary classification problem and train a classifier that distinguishes correct from incorrect code. To address the challenge that effectively learning a bug detector requires examples of both correct and incorrect code, we create likely incorrect code examples from an existing corpus of code through simple code transformations. A novel insight learned from our work is that learning from artificially seeded bugs yields bug detectors that are effective at finding bugs in real-world code. We implement our idea into a framework for learning-based and name-based bug detection. Three bug detectors built on top of the framework detect accidentally swapped function arguments, incorrect binary operators, and incorrect operands in binary operations. Applying the approach to a corpus of 150,000 JavaScript files yields bug detectors that have a high accuracy (between 89% and 95%), are very efficient (less than 20 milliseconds per analyzed file), and reveal 102 programming mistakes (with 68% true positive rate) in real-world code. |
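The data-generation trick is easy to reproduce in miniature. Below, a sketch with invented examples and a deliberately crude bag-of-names feature in place of DeepBugs' learned embeddings; it assumes scikit-learn is available.

    # Seed likely bugs by swapping arguments, then train a binary classifier.
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.linear_model import LogisticRegression

    correct_calls = [('setTimeout', 'callback', 'delay'),
                     ('substring', 'start', 'end'),
                     ('assertEquals', 'expected', 'actual')] * 10

    def featurize(call):
        fn, a1, a2 = call
        return '%s first_%s second_%s' % (fn, a1, a2)

    pos = [featurize(c) for c in correct_calls]
    neg = [featurize((fn, a2, a1)) for fn, a1, a2 in correct_calls]  # seeded bugs

    vec = CountVectorizer()
    X = vec.fit_transform(pos + neg)
    y = [1] * len(pos) + [0] * len(neg)
    clf = LogisticRegression().fit(X, y)

    probe = featurize(('setTimeout', 'delay', 'callback'))   # swapped arguments
    print('P(correct) =', clf.predict_proba(vec.transform([probe]))[0][1])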
16:45 | SPEAKER: Ian Wright ABSTRACT. We apply machine learning to version control data to measure software development productivity. Our models measure both the quantity and quality of produced code. Quantity is defined by a model that predicts the labor hours supplied by the 'standard coder' to make any code change, and quality is defined by a model that predicts the distribution of different kinds of problems identified by a static code analysis tool. |
17:15 | Answering Cloze-style Software Questions Using Stack Overflow SPEAKER: Ezra Winston ABSTRACT. Modern Question Answering (QA) systems rely on both knowledge bases (KBs) and unstructured text corpora as sources for their answers. KBs, when available, generally offer more precise answers than unstructured text. However, in specialized domains such as software engineering, QA requires deep domain expertise and KBs are often lacking. In this paper we tackle such specialized QA by using both text and semi-structured knowledge, in the form of a corpus of entity-labeled documents. We propose CASE, a hybrid of an RNN language model and an entity co-occurrence model, where the entity co-occurrence model is learned from the entity-labeled corpus. On QUASAR-S, a dataset derived from Stack Overflow consisting of Cloze (fill-in-the-blank) software questions and a corpus of tagged posts, CASE shows large accuracy gains over strong baselines. |
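The hybrid's scoring rule can be pictured as a simple mixture. All probabilities and the mixing weight below are invented; CASE itself learns these components from the tagged corpus.

    # Combine a language-model score with an entity co-occurrence score.
    lm_prob = {'jquery': 0.30, 'maven': 0.15}     # P(answer | question words)
    cooc_prob = {'jquery': 0.10, 'maven': 0.60}   # P(answer | tagged entities)
    LAMBDA = 0.5                                  # mixing weight

    def score(candidate):
        return (LAMBDA * lm_prob.get(candidate, 0.0)
                + (1 - LAMBDA) * cooc_prob.get(candidate, 0.0))

    # The co-occurrence model pulls the answer towards 'maven' even though
    # the language model alone prefers 'jquery'.
    print(max(['jquery', 'maven'], key=score))    # -> maven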
16:00 | Discussion |
On parallel algorithms to solve parity games.
16:00 | Using work-stealing to parallelize symbolic algorithms and parity game solvers ABSTRACT. For multi-core computers, an important paradigm for parallel execution is task-based or fork-join parallelism. Typically this is implemented using work-stealing. This paradigm is a good fit for algorithms that contain recursion, but is also suitable in other contexts, for example the load-balancing of parallel computations on arrays. We apply work-stealing in several verification contexts. We parallelize operations on binary decision diagrams and on verification tools that use binary decision diagrams, where we apply work-stealing both on the low level of the individual operations and on the higher level of the search algorithms. We parallelize parity game solvers in the following two ways. We use work-stealing to parallelize backward search for attractor computation. We also use work-stealing to parallelize all steps of the strategy improvement algorithm. In these applications, using work-stealing is necessary but not sufficient to obtain a good performance. We must also avoid locking techniques and instead use lock-free techniques for scalable performance. We use lock-free techniques not only for the parallelized algorithms but also to implement the scalable work-stealing framework Lace with high multi-core scaling. |
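The deque discipline at the heart of work-stealing fits in a few lines. The toy below is only an illustration: Lace and similar frameworks use lock-free deques, whereas this sketch uses a lock per deque and a shared task counter purely to stay short.

    # Work-stealing toy: owners pop newest from their own deque, idle
    # workers steal oldest from a random victim. Tasks are range sums
    # that fork by splitting until small enough.
    import threading, random
    from collections import deque

    NWORKERS = 4
    deques = [deque() for _ in range(NWORKERS)]
    locks = [threading.Lock() for _ in range(NWORKERS)]
    count_lock = threading.Lock()
    outstanding = [1]        # tasks created but not yet finished
    total = [0]

    def work(me):
        while True:
            task = None
            with locks[me]:
                if deques[me]:
                    task = deques[me].pop()              # owner pops newest
            if task is None:
                victim = random.randrange(NWORKERS)
                with locks[victim]:
                    if deques[victim]:
                        task = deques[victim].popleft()  # thief steals oldest
            if task is None:
                with count_lock:
                    if outstanding[0] == 0:
                        return                           # all work done
                continue
            lo, hi = task
            if hi - lo <= 1000:                          # small enough: run it
                s = sum(range(lo, hi))
                with count_lock:
                    total[0] += s
                    outstanding[0] -= 1
            else:                                        # fork into two halves
                with count_lock:
                    outstanding[0] += 1                  # one task becomes two
                mid = (lo + hi) // 2
                with locks[me]:
                    deques[me].append((lo, mid))
                    deques[me].append((mid, hi))

    deques[0].append((0, 1000000))
    threads = [threading.Thread(target=work, args=(i,)) for i in range(NWORKERS)]
    for t in threads: t.start()
    for t in threads: t.join()
    print(total[0] == sum(range(1000000)))               # -> True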
16:45 | An Efficient Zielonka's Algorithm for Parallel Parity Games SPEAKER: Loredana Sorrentino ABSTRACT. Parity games are abstract infinite-duration two-player games, widely studied in computer science. Several solution algorithms have been proposed and also implemented in the community tool of choice called PGSolver, which has declared the Zielonka Recursive (ZR) algorithm the best performing on randomly generated games. With the aim of scaling to and solving wider classes of parity games, several improvements and optimizations have been proposed over the existing algorithms. However, no one has yet explored the benefit of using the full computational power of which even common modern multicore processors are capable. This is even more surprising considering that most of the advanced algorithms in PGSolver are sequential. In this work we introduce and implement, on a multicore architecture, a parallel version of the Attractor algorithm, which is the main kernel of the ZR algorithm. This choice follows our finding that more than 99% of the execution time of the ZR algorithm is spent in this module. |
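The kernel being parallelised is the classical attractor fixpoint: the set of vertices from which player p can force the play into a target set A. Below, a standard sequential version on an invented four-vertex game (the parallel variant distributes the backward exploration of predecessors).

    # Attractor computation on an explicit game graph.
    def attractor(vertices, owner, edges, p, A):
        preds = {v: set() for v in vertices}
        for v in vertices:
            for w in edges[v]:
                preds[w].add(v)
        attr = set(A)
        queue = list(A)
        out_deg = {v: len(edges[v]) for v in vertices}
        while queue:
            w = queue.pop()
            for v in preds[w]:
                if v in attr:
                    continue
                if owner[v] == p:      # p moves: one edge into attr suffices
                    attr.add(v)
                    queue.append(v)
                else:                  # opponent: all edges must lead into attr
                    out_deg[v] -= 1
                    if out_deg[v] == 0:
                        attr.add(v)
                        queue.append(v)
        return attr

    V = [0, 1, 2, 3]
    owner = {0: 0, 1: 1, 2: 0, 3: 1}
    edges = {0: [2], 1: [0, 2], 2: [2], 3: [1]}
    print(attractor(V, owner, edges, 0, {2}))   # -> {0, 1, 2, 3}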
16:00 | Refining Santa: An Exercise in Efficient Synchronization SPEAKER: Emil Sekerinski ABSTRACT. The Santa Claus Problem is an intricate exercise for concurrent programming. This paper outlines the refinement steps to develop a highly efficient implementation with concurrent objects, starting from a very simple specification. The efficiency of the implementation is compared with that of implementations in three other languages. |
16:30 | A Theory of Lazy Imperative Timing ABSTRACT. We present a theory of lazy imperative timing. |
16:00 | SPEAKER: Yanhong A. Liu ABSTRACT. We will discuss making invariants explicit in the specification of distributed algorithms. Clearly this helps prove properties of distributed algorithms. More importantly, we show that it makes distributed algorithms easier to express and to understand at a high level, especially through direct uses of message histories. We will use example specifications in TLA+, for verification of Paxos using TLAPS, as well as complete executable specifications in DistAlgo, a high-level language for distributed algorithms. |
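A toy version of the idea (protocol and names invented, far simpler than Paxos): state the invariant directly over the history of vote messages, rather than over derived local state, and check it as the history grows.

    # 'At most one value is ever chosen' as an invariant over message history.
    from collections import Counter

    N = 5                        # nodes; a majority is 3
    history = []                 # every vote message ever sent

    def vote(node, value):
        history.append((node, value))

    def chosen_values():
        first_vote = {}
        for node, value in history:
            first_vote.setdefault(node, value)   # only a node's first vote counts
        tally = Counter(first_vote.values())
        return {v for v, c in tally.items() if c > N // 2}

    def invariant():
        return len(chosen_values()) <= 1

    vote(1, 'x'); vote(2, 'x'); vote(3, 'x'); vote(4, 'y'); vote(5, 'y')
    assert invariant()
    print('chosen:', chosen_values())            # -> {'x'}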
16:45 | Proving properties of a minimal covering algorithm SPEAKER: Ioannis Filippidis ABSTRACT. This work concerns the specification and proof using TLA+ of properties of an algorithm for solving the minimal covering problem, which we have implemented in Python. Minimal covering is the problem of choosing a minimal number of elements from a given set to cover all the elements from another given set, as defined by a given function. The TLA+ modules are available online. The proofs of safety properties in these modules have been checked with the proof assistant TLAPS (version 1.4.3), in the presence of the tools Zenon, CVC4, Isabelle, and LS4. We have been motivated to study and implement this algorithm for converting binary decision diagrams to formulas of small size, so that humans can read the results of symbolic computation. |
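The problem itself is compact enough to state executably. Below, a brute-force sketch for tiny instances with an invented example; the paper's algorithm, and the properties proved of it, are far more refined.

    # Minimal covering: fewest elements of X whose images under f cover Y.
    from itertools import combinations

    Y = {1, 2, 3, 4}
    f = {'a': {1, 2}, 'b': {2, 3}, 'c': {3, 4}, 'd': {1, 4}}

    def minimal_cover(f, Y):
        for k in range(len(f) + 1):              # try smaller covers first
            for subset in combinations(f, k):
                covered = set().union(*(f[x] for x in subset))
                if covered >= Y:
                    return set(subset)
        return None

    print(minimal_cover(f, Y))                   # -> {'a', 'c'}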
ThEdu Business Meeting
16:00 | ThEdu'18 business meeting SPEAKER: Pedro Quaresma ABSTRACT. ThEdu'18 business meeting ThEdu'18 post proceedings at EPTCS ThEdu'19 - where, when and other organizational issues. |
Certification & Formalisation II
16:00 | Program Verification in the Presence of I/O: Semantics, verified library routines, and verified applications SPEAKER: Hugo Férée ABSTRACT. Software verification tools that build machine-checked proofs of functional correctness usually focus on the algorithmic content of the code. Their proofs are not grounded in a formal semantic model of the environment that the program runs in, or the program’s interaction with that environment. As a result, several layers of translation and wrapper code must be trusted. In contrast, the CakeML project focuses on end-to-end verification to replace this trusted code with verified code in a cost-effective manner. In this paper, we present infrastructure for developing and verifying impure functional programs with I/O and imperative file handling. Specifically, we extend CakeML with a low-level model of file I/O, and verify a high-level file I/O library in terms of the model. We use this library to develop and verify several Unix-style command-line utilities: cat, sort, grep, diff and patch. The workflow we present is built around the HOL4 theorem prover, and therefore all our results have machine-checked proofs. |
16:30 | SPEAKER: Brandon Bohrer ABSTRACT. Type-preserving (or typed) compilation uses typing derivations to certify correctness properties of compilation. We have designed and implemented a type-preserving compiler for a simply-typed dialect of Prolog we call T-Prolog. The crux of our approach is a new certifying abstract machine which we call the Typed Warren Abstract Machine (TWAM). The TWAM has a dependent type system strong enough to specify the semantics of a logic program in the logical framework LF. We present a soundness metatheorem which constitutes a partial correctness guarantee: well-typed programs implement the logic program specified by their type. This metatheorem justifies our design and implementation of a certifying compiler from T-Prolog to TWAM. |
17:00 | A Java bytecode formalisation SPEAKER: Aleksy Schubert ABSTRACT. This paper presents the first Coq formalisation of the full Java bytecode instruction set and its semantics. The set of instructions is organised in a hierarchy depending on how the instructions deal with the runtime structures of the Java Virtual Machine such as threads, stacks, heap etc. The hierarchical nature of Coq modules neatly reinforces this view and facilitates the understanding of the Java bytecode semantics. This approach makes it possible both to conduct verification of properties for programs and to prove metatheoretical results for the language. Based upon our formalisation experience, the deficiencies of the current informal bytecode language specification are discussed. |
17:30 | Formalising Executable Specifications of Low-Level Systems SPEAKER: Paolo Torrini ABSTRACT. Formal models of low-level applications often rely on the distinction between an executable layer and an underlying hardware abstraction. On one hand, the formal development of the executable specification can benefit from a language designed to support theorem proving in connection with the specific domain of the application. A deep embedding makes it possible to manipulate such a language in a direct way, e.g. in formally translating the executable specification to an efficient program. On the other hand, the abstract character of the hardware specification makes it desirable to take full advantage of the expressive power of the theorem prover, relying on a shallow embedding. Implementing deeply embedded languages that allow importing shallow specifications can be an answer to the problem of combining the two approaches. We use Coq to formalise such a deep embedding, which we use to support the formal development of an OS kernel. We take advantage of the Coq logic in specifying the deep embedding relationally, using structural operational semantics, then extracting an executable interpreter from the type soundness proof. |
Contributed Talks: Autonomous Vehicles
16:00 | SPEAKER: Alice Miller ABSTRACT. We describe work that was presented at the NASA Formal Methods Symposium and published in the symposium proceedings. We give an overview of the probabilistic models that we presented for autonomous agent search and retrieve missions. We summarise how probabilistic model checking and the probabilistic model checker PRISM were used for optimal strategy generation with a view to returning the generated strategies to the UAV embedded within controller software. |
16:30 | Towards a verifiable decision making framework for self-driving vehicles SPEAKER: Mohammed Al-Nuaimi ABSTRACT. A new verification framework is presented for the decision making of autonomous vehicles (AVs). The overall structure of the framework consists of: (1) a perception system of sensors that feeds into (2) a reasoning agent, based on a Jason architecture, that operates on-board an AV and interacts with a model of the environment using (3) a multi-input multi-output adaptive control system. The agent relies on a set of rules, planners and actions to achieve its ultimate goal of driving the AV safely from a starting point to its destination. The verification framework deploys an innovative combination of two well-known verification tools: (1) the model checker for multi-agent systems (MCMAS), to check the consistency and stability of the agent logic rules and perception-based beliefs before choosing any action, and (2) the PRISM probabilistic model checker, to provide the agent with the probability of success when it decides to take a planned movement sequence. The agent selects the movement actions with the highest probability of success. The planned actions are executed by a control system that steers the AV on the ground through a set of primitive movement skills driven by its actuators. The framework adopts the Jason agent paradigm to design the reasoning and decision making process. The Robot Operating System (ROS) and the Gazebo Simulator are used to model the AV, its sensors and the surrounding environment. The agent connects to a MATLAB-based perception and control system to steer the AV. |
17:00 | SPEAKER: Gleifer Alves ABSTRACT. We propose a description language for formalising the "Rules of the Road" (e.g., the UK Highway Code). By way of example, we represent the subset of the Highway Code responsible for road junctions and represent these Rules of the Road using linear temporal logic, LTL. Our aim is to "extract" from this generic representation a set of beliefs, goals and plans that can be used by a BDI agent to augment its universal autonomous vehicle control (e.g., route planning and obstacle avoidance capabilities) with region-specific rules for road use. We intend to target the Gwendolen agent programming language as a prototype for this system, which provides formal verification possibilities. |
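As a flavour of such a representation (an invented example, not one of the paper's extracted rules), a give-way obligation at a junction might be written in LTL as:

    \mathbf{G}\,\bigl(\mathit{atJunction} \land \mathit{giveWay}
        \rightarrow (\lnot\,\mathit{enter} \;\mathbf{U}\; \mathit{junctionClear})\bigr)

Read: whenever the vehicle is at a junction with a give-way obligation, it must not enter until the junction is clear (a weak-until variant would also permit the case where the junction never clears).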
TERMCOMP 2018 - Report
Presentation of tool papers:
- M. Brockschmidt, S. Dollase, F. Emrich, F. Frohn, C. Fuhs, J. Giesl, M. Hark, J. Hensel, D. Korzeniewski, M. Naaf, T. Ströder. AProVE at the Termination Competition 2018
- Florian Messner and Christian Sternagel. TermComp 2018 Participant: TTT2
- Dieter Hofbauer. MultumNonMulta at TermComp 2018
- Raúl Gutiérrez and Salvador Lucas. MU-TERM at the 2018 Termination Competition
- Jesús J. Doménech and Samir Genaim. iRankFinder
Workshops dinner at Magdalen College. Drinks reception from 7.15pm, to be seated by 7:45 (pre-booking via FLoC registration system required; guests welcome).