FMCAD 2022: FORMAL METHODS IN COMPUTER-AIDED DESIGN
PROGRAM FOR FRIDAY, OCTOBER 21ST

09:00-10:00 Session 31: Invited Talk 2

Hana Chockler. Why do things go wrong (or right)? Applications of causal reasoning to verification

Abstract. In this talk I will look at the connections between causality and learning on one side, and verification and synthesis on the other. I will introduce the relevant concepts and discuss how causality and learning can help to improve the quality of systems and reduce the amount of human effort in designing and verifying systems. I will (briefly) introduce the theory of actual causality as defined by Halpern and Pearl. This theory turns out to be extremely useful in various areas of computer science due to a good match between the results it produces and our intuition. I will illustrate the definitions with examples from formal verification. I will also argue that active learning can be viewed as a type of causal discovery. Tackling the problem of reducing human effort from the other direction, I will discuss ways to improve the quality of specifications, focusing in particular on synthesis.

10:30-12:10 Session 33: Reachability and safety verification
10:30
Automating Geometric Proofs of Collision Avoidance with Active Corners

ABSTRACT. Avoiding collisions between obstacles and vehicles such as cars, robots, or aircraft is essential to the development of automation and autonomy. To simplify the problem, many collision avoidance algorithms and proofs treat the vehicle as a point mass, even though actual vehicles are not points. In this paper, we consider a convex polygonal vehicle with nonzero area traveling along a 2-dimensional trajectory. We derive an easily checkable, quantifier-free formula to determine whether a given obstacle will collide with the vehicle moving on the planned trajectory. We apply our method to two case studies of aircraft collision avoidance.
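
A minimal sketch of the flavour of such a check (not the paper's closed-form formula): deciding whether a point obstacle lies inside a convex polygonal vehicle, using only quantifier-free arithmetic (sign tests on cross products). The discretisation into sampled poses is a hypothetical simplification for illustration; the paper derives a formula over the whole trajectory instead.

def cross(o, a, b):
    """z-component of (a - o) x (b - o)."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def point_in_convex_polygon(point, vertices):
    """True if `point` is inside (or on the boundary of) the convex polygon,
    whose vertices are assumed to be listed in counter-clockwise order."""
    n = len(vertices)
    return all(cross(vertices[i], vertices[(i + 1) % n], point) >= 0
               for i in range(n))

def collides_along_trajectory(obstacle, vertices, poses):
    """Check the obstacle against the vehicle translated to each sampled pose."""
    for (dx, dy) in poses:
        moved = [(x + dx, y + dy) for (x, y) in vertices]
        if point_in_convex_polygon(obstacle, moved):
            return True
    return False

# Example: a unit-square vehicle sliding to the right passes over the obstacle.
square = [(0, 0), (1, 0), (1, 1), (0, 1)]
print(collides_along_trajectory((2.5, 0.5), square, [(t * 0.5, 0) for t in range(6)]))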

10:55
Differential Testing of Pushdown Reachability with a Formally Verified Oracle

ABSTRACT. Pushdown automata are an essential model of recursive computation. In model checking and static analysis, numerous problems can be reduced to reachability questions about pushdown automata, and several efficient libraries implement automata-theoretic algorithms for answering these questions. These libraries are often used as core components in other tools, so it is crucial that the algorithms and their implementations are correct. We present a method that significantly increases the trust in the answers provided by libraries for pushdown reachability by (i) formally verifying the correctness of the algorithms using the Isabelle/HOL proof assistant, (ii) extracting executable programs from the formalization, (iii) implementing a framework for differential testing of library implementations with the verified, extracted algorithms as oracles, and (iv) automatically minimizing counterexamples from the differential testing based on the delta-debugging methodology. We instantiate our method for the concrete case of PDAAAL, a state-of-the-art library for pushdown reachability, and thereby discover and resolve several nontrivial errors in PDAAAL.
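
A minimal sketch of the differential-testing-with-oracle idea, using hypothetical stand-ins: `oracle_reach` plays the role of the verified, extracted algorithm and `library_reach` the library under test (here deliberately buggy). Real instances would be pushdown systems handled by PDAAAL; plain graphs are used only to keep the sketch self-contained.

import random

def oracle_reach(edges, src, dst):
    # Trusted reference: plain breadth-first reachability.
    seen, frontier = {src}, [src]
    while frontier:
        u = frontier.pop()
        for (a, b) in edges:
            if a == u and b not in seen:
                seen.add(b)
                frontier.append(b)
    return dst in seen

def library_reach(edges, src, dst):
    # Injected bug: the "library" ignores the last edge of the instance.
    return oracle_reach(edges[:-1], src, dst)

def minimize(edges, src, dst):
    """Greedy one-at-a-time shrinking (a crude form of delta debugging):
    keep removing edges as long as the disagreement persists."""
    changed = True
    while changed:
        changed = False
        for i in range(len(edges)):
            smaller = edges[:i] + edges[i + 1:]
            if oracle_reach(smaller, src, dst) != library_reach(smaller, src, dst):
                edges, changed = smaller, True
                break
    return edges

random.seed(0)
for _ in range(200):
    edges = [(random.randrange(5), random.randrange(5)) for _ in range(6)]
    if oracle_reach(edges, 0, 4) != library_reach(edges, 0, 4):
        print("disagreement found, minimized to:", minimize(edges, 0, 4))
        break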

11:20
Automated Conversion of Axiomatic to Operational Models: Theoretical and Practical Results

ABSTRACT. A system may be modelled as an operational model (which has explicit notions of state and transitions between states) or an axiomatic model (which is specified entirely as a set of invariants). Most formal methods techniques (e.g., IC3 and invariant synthesis) are designed for operational models and are largely inaccessible to axiomatic models. Furthermore, no prior method exists to automatically convert axiomatic models to operational ones, so operational equivalents of axiomatic models had to be created manually and proven equivalent.

In this paper, we advance the state of the art in axiomatic-to-operational model conversion. We show that general axioms in the muspec axiomatic modelling framework cannot be translated to equivalent finite-state operational models. We also derive restrictions on the space of muspec axioms that make the generation of equivalent finite-state operational models feasible. On the practical side, we develop a methodology for automatically translating muspec axioms to finite-state, automata-based operational models. We demonstrate the efficacy of our method by using operational models generated by our procedure to prove the correctness of properties on three RTL designs.
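
A minimal sketch contrasting the two modelling styles on a toy counter (unrelated to the muspec framework itself): the axiomatic view is just a predicate over states, while the operational view makes initial states and transitions explicit, which is the form tools such as IC3 expect.

# Axiomatic view: an invariant the system must satisfy.
def axiom(state):
    return state % 2 == 0 and 0 <= state <= 6

# Operational view: explicit initial states and a transition relation.
initial = {0}
def transitions(state):
    return {state + 2} if state < 6 else set()

def reachable(initial, transitions):
    seen, frontier = set(initial), list(initial)
    while frontier:
        s = frontier.pop()
        for t in transitions(s):
            if t not in seen:
                seen.add(t)
                frontier.append(t)
    return seen

# The operational model refines the axiom if every reachable state satisfies it
# (the converse direction, which the paper addresses, needs more care).
print(all(axiom(s) for s in reachable(initial, transitions)))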

11:45
Split Transition Power Abstraction for Unbounded Safety

ABSTRACT. Transition Power Abstraction (TPA) is a recent symbolic model checking approach that leverages Craig interpolation to create a sequence of symbolic abstractions of transition paths that double in length with each new element. This doubling allows the approach to find bugs that require long executions much faster than traditional approaches that unfold transitions one at a time, but its ability to prove system safety is limited. This paper proposes a novel instantiation of the TPA approach capable of proving unbounded safety efficiently while preserving the unique capability to detect deep counterexamples. The idea is to split the transition over-approximations into two complementary parts: one part captures reachability in a fixed number of steps, while the other complements it by summarizing all shorter paths. The resulting split abstractions are suitable for discovering safe transition invariants, making the split-TPA approach much more efficient at proving safety and even improving counterexample detection. The approach is implemented in the constrained Horn clause solver Golem, and our experimental comparison against state-of-the-art solvers shows it to be both competitive and complementary.
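
A minimal, exact-relation illustration of the split on a toy finite system (the paper works with interpolation-based over-approximations; exact relations are used here only to show the bookkeeping). Level k keeps two relations: paths of length exactly 2**k and all strictly shorter paths, and each level is obtained by composing the previous one.

def compose(r1, r2):
    return {(a, c) for (a, b) in r1 for (b2, c) in r2 if b == b2}

def split_levels(transition, states, levels):
    identity = {(s, s) for s in states}
    exact = [set(transition)]     # E_0: paths of length exactly 2**0 = 1
    shorter = [set(identity)]     # S_0: paths of length < 2**0, i.e. length 0
    for k in range(levels):
        # E_{k+1}: exactly 2**(k+1) steps = exactly 2**k steps, twice
        exact.append(compose(exact[k], exact[k]))
        # S_{k+1}: fewer than 2**(k+1) steps = fewer than 2**k steps,
        #          or exactly 2**k steps followed by fewer than 2**k steps
        shorter.append(shorter[k] | compose(exact[k], shorter[k]))
    return exact, shorter

# Toy 4-state cycle 0 -> 1 -> 2 -> 3 -> 0
trans = {(i, (i + 1) % 4) for i in range(4)}
exact, shorter = split_levels(trans, range(4), 2)
print(exact[2])    # pairs related by paths of exactly 4 steps: back to the same state
print(shorter[2])  # pairs related by some path of fewer than 4 steps: all pairs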

12:10
TriCera: Verifying C Programs Using the Theory of Heaps

ABSTRACT. TriCera is an automated, open-source verification tool for C programs based on the concept of Constrained Horn Clauses (CHCs). In order to handle programs operating on the heap, TriCera applies a novel theory of heaps, which enables the tool to hand off most of the required heap reasoning directly to the underlying CHC solver. This leads to a cleaner interface between the language-specific verification front-end and the language-independent CHC back-end, and in the long run to reduced implementation effort. The paper introduces TriCera, gives an overview of the theory of heaps, and presents preliminary experimental results on SV-COMP benchmarks.

12:25
Formally Verified Quite OK Image Format

ABSTRACT. Lossless compression and decompression functions are ubiquitous operations that have a clear high-level specification and are thus suitable as verification benchmarks. Such functions are also important: on the one hand, they improve the performance of communication, storage, and computation; on the other hand, errors in them would result in a loss of data. These functions operate on sequences of unbounded length and contain unbounded loops or recursion that update a large state space, which makes finite-state methods and symbolic execution difficult to apply.

We present deductive verification of an executable Stainless implementation of compression and decompression for the recently proposed Quite OK Image format (QOI). While fast and easy to implement, QOI is non-trivial and includes a number of widely used techniques such as run-length encoding and dictionary-based compression. We completed formal verification using the Stainless verifier, proving that encoding followed by decoding produces the original image. The Stainless transpiler was also able to generate C code that compiles with GCC, is interoperable with the reference implementation, and runs with performance essentially matching the reference C implementation.
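
A minimal sketch of the verified round-trip property (decode after encode yields the original image) for one of the techniques QOI employs, run-length encoding. This is a toy stand-in, not the Stainless/Scala implementation from the paper, and it only tests the property on a few sampled inputs rather than proving it.

def rle_encode(pixels):
    out = []
    for p in pixels:
        if out and out[-1][0] == p and out[-1][1] < 255:
            out[-1] = (p, out[-1][1] + 1)   # extend the current run
        else:
            out.append((p, 1))              # start a new run
    return out

def rle_decode(runs):
    out = []
    for value, count in runs:
        out.extend([value] * count)
    return out

# Round-trip check on a few small "images" (lists of pixel values).
for img in ([], [7], [1, 1, 1, 2, 2, 3], [0] * 1000):
    assert rle_decode(rle_encode(img)) == img
print("round-trip property holds on the sampled inputs")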

14:00-15:30 Session 35: Synthesis
14:00
Synthesizing Transducers from Complex Specifications

ABSTRACT. Automating string transformations has been one of the killer applications of program synthesis. Existing synthesizers that solve this problem produce programs in domain-specific languages (DSLs) that are engineered to help the synthesizer and therefore lack nice formal properties. This limitation prevents the synthesized programs from being used in verification applications (e.g., to check complex pre-post conditions) and makes the synthesizers hard to modify due to their reliance on the given DSL.

We present a constraint-based approach to synthesizing transducers, a well-studied model with strong closure and decidability properties. Our approach handles three types of specifications: (i) input-output examples, (ii) input-output types expressed as regular languages, and (iii) input-output distances that bound how many characters the transducer can modify when processing an input string. Our work is the first to support such complex specifications, and it does so by using the algorithmic properties of transducers to generate constraints that can be solved with off-the-shelf SMT solvers. Thanks to the closure properties of transducers, our synthesis approach can be extended to many transducer models and can be used to compute repairs for partially correct transducers.
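
A minimal sketch of checking a candidate transducer against the kinds of specification listed above. The transducer is a toy, letter-to-letter (Mealy-style) machine given as a dictionary, and the distance specification is read here as a bound on the number of changed positions; both are simplifying assumptions for illustration, whereas the paper's synthesis emits SMT constraints whose model is such a machine.

# Transducer mapping: (state, input letter) -> (next state, output letter).
# This one upper-cases the first character and copies the rest.
delta = {("start", c): ("rest", c.upper()) for c in "ab"}
delta.update({("rest", c): ("rest", c) for c in "ab"})

def run(word, initial="start"):
    state, out = initial, []
    for c in word:
        state, o = delta[(state, c)]
        out.append(o)
    return "".join(out)

def satisfies(examples, max_changes):
    for inp, expected in examples:
        # Specification type (i): input-output examples.
        if run(inp) != expected:
            return False
        # Specification type (iii): the output may differ from the input
        # in at most `max_changes` positions.
        if sum(a != b for a, b in zip(inp, run(inp))) > max_changes:
            return False
    return True

print(satisfies([("ab", "Ab"), ("ba", "Ba"), ("aab", "Aab")], max_changes=1))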

14:25
Synthesis of Semantic Actions in Attribute Grammars

ABSTRACT. Attribute grammars allow the association of semantic actions with the production rules of context-free grammars, providing a simple yet effective formalism for defining the semantics of a language. However, drafting the semantic actions can be tricky and a large drain on developer time. In this work, we propose a synthesis methodology to automatically infer the semantic actions from a set of examples associating strings with their meanings. We also propose a new coverage metric, derivation coverage, and use it to build a sampler that effectively and automatically draws strings to drive the synthesis engine. We build our ideas into our tool, PANINI, and empirically evaluate it on twelve benchmarks, including a forward differentiation engine, an interpreter over a subset of Java bytecode, and a mini-compiler from the C language to two-address code. Our results show that PANINI scales well with the number of actions to be synthesized and the size of the context-free grammar, significantly outperforming simple baselines.
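
A minimal sketch of the kind of artifact being synthesized: semantic actions attached to context-free productions. Here a tiny grammar for sums of digits and its actions are written by hand; in the paper's setting, the action functions would be inferred automatically from (string, meaning) example pairs.

# Grammar:  E -> E '+' D | D      D -> '0' | ... | '9'
# Each production carries an action computing the attribute of the left-hand
# side from the attributes of its children.
actions = {
    "E -> E + D": lambda e, d: e + d,
    "E -> D":     lambda d: d,
    "D -> digit": lambda c: int(c),
}

def parse_and_evaluate(text):
    # A hand-rolled parser for this particular grammar, applying the actions
    # bottom-up as the derivation is built.
    parts = text.split("+")
    value = actions["E -> D"](actions["D -> digit"](parts[0]))
    for p in parts[1:]:
        value = actions["E -> E + D"](value, actions["D -> digit"](p))
    return value

# The same (string, meaning) pairs could serve as the synthesizer's examples.
examples = [("1+2+3", 6), ("7", 7), ("9+0", 9)]
print(all(parse_and_evaluate(s) == m for s, m in examples))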

14:50
Reactive Synthesis Modulo Theories using Abstraction Refinement

ABSTRACT. Reactive synthesis builds a system from a specification given as a temporal logic formula. Traditionally, reactive synthesis is defined for systems with Boolean input and output variables. Recently, new techniques have been proposed to extend reactive synthesis to data domains, which are required for more sophisticated programs. In particular, Temporal Stream Logic (TSL) extends LTL with state variables, updates, and uninterpreted functions and was created for use in synthesis. We present a new synthesis procedure for TSL(T), an extension of TSL with theories. Our approach is also able to find predicates, not present in the specification, that are required to synthesize some program. Synthesis is performed using two nested counterexample-guided synthesis loops and an LTL synthesis procedure. Our method translates TSL(T) specifications to LTL and extracts a system if synthesis is successful. Otherwise, it analyzes the counterstrategy for inconsistencies with the theory; these are then ruled out by adding temporal assumptions, and the next iteration of the loop is started. If no inconsistencies are found, the outer refinement loop tries to identify new predicates and reruns the inner loop. A system can be extracted if the LTL synthesis returns realizable at any point; if no more predicates can be added, the problem is unrealizable. The general synthesis problem for TSL is known to be undecidable. We identify a new decidable fragment and demonstrate that our method can successfully synthesize or show unrealizability of several non-Boolean examples.

15:15
Learning Deterministic Finite Automata Decompositions from Examples and Demonstrations

ABSTRACT. The identification of a deterministic finite automaton (DFA) from labeled examples is a well-studied problem in the literature; however, prior work focuses on the identification of monolithic DFAs. Although monolithic DFAs provide accurate descriptions of systems’ behavior, they lack simplicity and interpretability; moreover, they fail to capture sub-tasks realized by the system and introduce inductive biases away from the inherent decomposition of the overall task. In this paper, we present an algorithm for learning conjunctions of DFAs from labeled examples. Our approach extends an existing SAT-based method to systematically enumerate Pareto-optimal candidate solutions. We highlight the utility of our approach by integrating it with a state-of-the-art algorithm for learning DFAs from demonstrations. Our experiments show that the algorithm learns sub-tasks realized by the labeled examples, and it is scalable in the domains of interest.
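
A minimal sketch of the target representation: a conjunction of small DFAs, where a word is accepted only if every component accepts it. The two components below are written by hand to illustrate how sub-tasks decompose an overall task; the paper's algorithm would instead enumerate such Pareto-optimal decompositions with a SAT solver.

class DFA:
    def __init__(self, transitions, start, accepting):
        self.transitions, self.start, self.accepting = transitions, start, accepting

    def accepts(self, word):
        state = self.start
        for c in word:
            state = self.transitions[(state, c)]
        return state in self.accepting

# Sub-task 1: the word contains at least one 'a'.
contains_a = DFA({(0, "a"): 1, (0, "b"): 0, (1, "a"): 1, (1, "b"): 1}, 0, {1})
# Sub-task 2: the word ends with 'b'.
ends_b = DFA({(0, "a"): 0, (0, "b"): 1, (1, "a"): 0, (1, "b"): 1}, 0, {1})

def conjunction_accepts(dfas, word):
    return all(d.accepts(word) for d in dfas)

# Labeled examples: positives satisfy both sub-tasks, negatives violate one.
positives, negatives = ["ab", "bab", "aab"], ["", "ba", "bb", "aba"]
print(all(conjunction_accepts([contains_a, ends_b], w) for w in positives))
print(not any(conjunction_accepts([contains_a, ends_b], w) for w in negatives))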

16:00-17:15 Session 37: Distributed systems
16:00
ACORN: Network Control Plane Abstraction using Route Nondeterminism

ABSTRACT. Networks are hard to configure correctly, and misconfigurations occur frequently, leading to outages or security breaches. Formal verification techniques have been applied to guarantee the correctness of network configurations, thereby improving network reliability. This work addresses verification of distributed network control planes, with two distinct contributions to improve the scalability of formal verification. Our first contribution is a hierarchy of abstractions of varying precision which introduce nondeterminism into the route selection procedure that routers use to select the best available route. We prove the soundness of these abstractions and show their benefits. Our second contribution is a novel SMT encoding which uses symbolic graphs to encode all possible stable routing trees that are compliant with the given network control plane configurations. We have implemented our abstractions and SMT encoding in a prototype tool called ACORN. Our evaluations show that our abstraction can provide significant relative speedups (up to 323x) in performance, and ACORN can scale up to about 37,000 routers (organized in FatTree topologies, with synthesized shortest-path routing and valley-free policies) for verifying reachability. This far exceeds the performance of existing control plane verifiers.

16:25
Plain and Simple Inductive Invariant Inference for Distributed Protocols in TLA+

ABSTRACT. We present a new technique for automatically inferring inductive invariants of parameterized distributed protocols specified in TLA+. Ours is the first such invariant inference technique to work directly on TLA+, an expressive, high-level specification language. To achieve this, we present a new invariant inference algorithm built around a core procedure for generating plain, potentially non-inductive lemma invariants that serve as candidate conjuncts of an overall inductive invariant. We couple this with a greedy lemma selection procedure that, at each round of the inference procedure, selects the lemmas eliminating the largest number of counterexamples to induction. We have implemented our algorithm in a tool, endive, and evaluate it on a diverse set of distributed protocol benchmarks, demonstrating competitive performance and the ability to uniquely solve an industrial-scale reconfiguration protocol.
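
A minimal sketch of the greedy selection step described above: at each round, pick the candidate lemma invariant that eliminates the most remaining counterexamples to induction (CTIs). Lemma generation, CTI discovery, and all TLA+ handling are abstracted away; the lemma names and the eliminates[] data are purely hypothetical.

def greedy_select(eliminates, ctis):
    """eliminates: lemma name -> set of CTI ids that the lemma rules out."""
    remaining, chosen = set(ctis), []
    while remaining:
        best = max(eliminates, key=lambda l: len(eliminates[l] & remaining))
        gain = eliminates[best] & remaining
        if not gain:
            break  # no candidate helps; more lemmas would have to be generated
        chosen.append(best)
        remaining -= gain
    return chosen, remaining

eliminates = {
    "lemma_quorum_overlap": {1, 2, 3},
    "lemma_leader_unique":  {3, 4},
    "lemma_log_prefix":     {5},
}
chosen, left = greedy_select(eliminates, {1, 2, 3, 4, 5})
print(chosen, "unresolved CTIs:", left)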

16:50
Awaiting for Godot: Stateless Model Checking that Avoids Executions where Nothing Happens

ABSTRACT. Stateless Model Checking (SMC) is a verification technique for concurrent programs that checks for safety violations by exploring all possible thread schedulings. It is highly effective when coupled with Dynamic Partial Order Reduction (DPOR), which introduces an equivalence on schedulings and needs to explore only one scheduling in each equivalence class. Even with DPOR, SMC often spends unnecessary effort exploring loop iterations that are pure, i.e., have no effect on the program state.

We present techniques for making SMC with DPOR more effective on programs with pure loop iterations. The first is a static program analysis to detect loop purity, together with an associated program transformation, called Partial Loop Purity Elimination, that inserts assume statements to block pure loop iterations. Subsequently, some of these assumes are turned into await statements that completely remove many assume-blocked executions. Finally, we present an extension of the standard DPOR equivalence, obtained by weakening the conflict relation between events. All these techniques are incorporated into a new DPOR algorithm, Optimal-DPOR-Await, which can handle both awaits and the weaker conflict relation, is optimal in the sense that it explores exactly one execution in each equivalence class, and can also diagnose livelocks. Our implementation in Nidhugg shows that these techniques can significantly speed up the analysis of concurrent programs that are currently challenging for SMC tools, both for exploring their complete set of interleavings and for detecting concurrency errors in them.
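
A minimal sketch of the transformation idea (not Nidhugg's actual implementation): a spin loop whose iterations leave the program state unchanged is pure, and replacing it with a blocking assume collapses all the schedulings that differ only in how many times the loop spun. The Blocked exception here is a hypothetical stand-in for how a stateless model checker might prune a blocked execution.

class Blocked(Exception):
    """Signals that the current execution is blocked and need not be explored further."""

def assume(cond):
    if not cond:
        raise Blocked()

shared = {"flag": False}

def waiter_original():
    # Pure loop: iterations where the flag is still False leave the state
    # unchanged, yet an SMC tool would explore one execution per spin count.
    # (Shown only for contrast; not called below.)
    while not shared["flag"]:
        pass
    return "proceed"

def waiter_transformed():
    # After the purity-elimination transformation (sketched): the pure
    # iterations are replaced by a single blocking assume, so only executions
    # in which the flag is already set are explored.
    assume(shared["flag"])
    return "proceed"

shared["flag"] = True
print(waiter_transformed())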