FMCAD 2018: FORMAL METHODS IN COMPUTER AIDED DESIGN 2018
PROGRAM FOR THURSDAY, NOVEMBER 1ST

09:00-10:00 Session 13: Concurrency
09:00
Automatic Synchronization for GPU Kernels
SPEAKER: Sourav Anand

ABSTRACT. We present a technique for automatic synthesis of efficient and provably correct synchronization in GPU kernels. Our technique relies on an off-the-shelf correctness oracle and achieves efficient synthesis by leveraging the race location information provided by the oracle to encode optimal synchronization synthesis as a MaxSAT problem. We have implemented our technique in a tool called AUTOSYNC that works on kernels written in CUDA and uses the static verifier GPUVERIFY as its correctness oracle. An evaluation on 19 realistic kernels from the GPUVERIFY benchmark suite shows that AUTOSYNC is able to synthesize optimal synchronization placements, and that synthesis times are reasonable (30 seconds for our largest benchmark).
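
The MaxSAT encoding can be made concrete with a toy instance. The sketch below is not AUTOSYNC's actual encoding; it assumes the z3 Python bindings (pip install z3-solver) and invents a kernel with three candidate barrier sites and two oracle-reported races, posting "every race is cut by some barrier between its accesses" as hard constraints and barrier avoidance as unit-weight soft constraints on z3's Optimize engine.

# Hypothetical MaxSAT-style barrier placement (not AUTOSYNC's encoding).
from z3 import Optimize, Bool, Or, Not, is_true, sat

sites = [Bool(f"b{i}") for i in range(3)]   # candidate barrier locations
races = [[0, 1], [2]]                       # race 1 is cut by site 0 or 1; race 2 only by site 2

opt = Optimize()
for cut in races:                           # hard: every reported race must be separated
    opt.add(Or([sites[i] for i in cut]))
for b in sites:                             # soft: prefer omitting each barrier (weight 1)
    opt.add_soft(Not(b), weight=1)

if opt.check() == sat:
    m = opt.model()
    print("insert barriers at sites:", [i for i, b in enumerate(sites) if is_true(m[b])])

An optimal model inserts two barriers here (one of sites 0/1 plus site 2), mirroring the paper's notion of an optimal placement.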

09:30
Rely-Guarantee Reasoning for Automated Bound Analysis of Lock-Free Algorithms
SPEAKER: Thomas Pani

ABSTRACT. We present a thread-modular proof method for complexity and resource bound analysis of concurrent, shared-memory programs, lifting Jones' rely-guarantee reasoning to assumptions and commitments capable of expressing bounds. We automate reasoning in this logic by reducing bound analysis of concurrent programs to the sequential case. Our work is motivated by its application to lock-free data structures: fine-grained concurrent algorithms whose time complexity has, to our knowledge, not been inferred automatically before.

10:30-12:00 Session 14: Verification
10:30
Template-Based Verification of Heap-Manipulating Programs
SPEAKER: Viktor Malík

ABSTRACT. We propose a shape analysis suitable for analysis engines that perform automatic invariant inference using an SMT solver. The proposed solution includes an abstract template domain that encodes the shape of a program heap based on logical formulae over bit-vectors. It is based on a points-to relation between pointers and symbolic addresses of abstract memory objects. Our abstract heap domain can be combined with value domains in a straightforward manner, which in particular allows us to reason about shapes and contents of heap structures at the same time. The information obtained from the analysis can be used to prove reachability and memory safety properties of programs manipulating dynamic data structures, mainly linked lists. The solution has been implemented in 2LS and compared against the state-of-the-art tools that perform best in the heap-related categories of the well-known Software Verification Competition (SV-COMP). The results show that 2LS outperforms these tools on benchmarks requiring combined reasoning about unbounded data structures and their numerical contents.
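
A minimal sketch of the points-to template idea (our invented example, not 2LS's implementation): pointers and abstract memory objects get symbolic bit-vector addresses, and the heap part of the invariant says each pointer equals one of finitely many object addresses or null.

# Bit-vector points-to template sketch (names invented for illustration).
from z3 import BitVec, BitVecVal, Or, Solver

WIDTH = 32
NULL = BitVecVal(0, WIDTH)

p = BitVec("p", WIDTH)        # a pointer variable of the program
addr_a = BitVec("&a", WIDTH)  # symbolic addresses of abstract heap objects
addr_b = BitVec("&b", WIDTH)

# Template: p points to one of a finite set of abstract objects, or is null.
points_to = Or(p == NULL, p == addr_a, p == addr_b)

s = Solver()
s.add(points_to)
s.add(addr_a != NULL, addr_b != NULL, addr_a != addr_b)  # well-formed heap
s.add(p != NULL)                                         # e.g., after a successful malloc
print(s.check())                                         # sat: p may point to &a or &b

Because the template is an ordinary bit-vector formula, it can be conjoined with value-domain templates over the objects' contents, which is the combination the abstract refers to.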

11:00
Using Loop Bound Analysis For Invariant Generation
SPEAKER: Pavel Cadek

ABSTRACT. The problem of loop bound analysis can conceptually be seen as an instance of invariant generation. However, the methods used in loop bound analysis differ considerably from previous invariant generation techniques. Interestingly, there is almost no previous work comparing the two sets of techniques. In this paper, we show that loop bound analysis methods can efficiently produce invariants that are hard to prove for state-of-the-art invariant generation techniques (e.g., polynomial invariants or invariants relating many variables) and thus enrich the toolset of invariant analysis.
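
A one-loop illustration of this point (our own example, not one from the paper): reasoning about how much the counter and the accumulator advance per iteration, as bound analyses do, yields a polynomial relation that is also an inductive invariant and lies outside typical linear template domains.

# Invented example: a loop whose bound-style reasoning yields a polynomial
# invariant, checked here dynamically for illustration.
def triangular(n):
    i, s = 0, 0
    while i < n:
        s += i
        i += 1
        # Polynomial invariant obtained as a by-product of bound reasoning:
        assert 2 * s == i * (i - 1)
    return s

print(triangular(10))  # 45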

11:30
Post-Verification Debugging and Rectification of Finite Field Arithmetic Circuits using Computer Algebra Techniques
SPEAKER: Priyank Kalla

ABSTRACT. In formal verification of arithmetic circuits, it is required to check whether a gate-level circuit correctly implements a given specification model. When this equivalence check fails -- i.e., a counterexample to equivalence is detected -- it is required to: i) debug the circuit, ii) identify a set of nets (signals) where the circuit might be rectified, and iii) compute the corresponding rectification functions at those locations. This paper addresses the problem of post-verification debugging and correction (rectification) of finite field arithmetic circuits. The specification model and the circuit implementation may differ at any number of inputs. We present techniques that determine whether the circuit can be rectified at one particular net -- i.e., single-fix rectification.

Starting from an equivalence checking setup modeled as a polynomial ideal membership test, we analyze the ideal membership residue to identify potential single-fix rectification locations. Subsequently, we use Nullstellensatz principles to ascertain whether a single-fix rectification can indeed be applied at any of these locations. If a single-fix rectification exists, we derive a rectification function by modeling the task as the synthesis of an unknown component. Our approach is based on the Groebner basis algorithm, which we use both as a decision procedure (for the rectification test) and as a quantification procedure (for computing a rectification function). Experiments performed on various finite field arithmetic circuits demonstrate the efficacy of our approach.
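
In schematic terms (our notation, not the paper's), the pipeline starts from the ideal membership formulation of equivalence and turns its residue into rectification obligations:

\[
f_{\mathrm{spec}} \;\xrightarrow{\;\mathrm{GB}(J)\;}\; r, \qquad J = \langle g_1, \dots, g_m \rangle,
\]
\[
r = 0 \iff \text{implementation} \equiv \text{specification},
\]
\[
\text{net } x_i \text{ admits a single fix} \iff \exists\, U.\; f_{\mathrm{spec}} \in \langle g_1, \dots, x_i - U(X), \dots, g_m \rangle,
\]

where the g_j are the polynomial models of the gates and U(X) is the unknown rectification function.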

12:00-13:30 Lunch Break

Location: Tejas Conference Dining Room

13:30-15:00 Session 15: Learning and Synthesis
13:30
Automata Learning for Symbolic Execution

ABSTRACT. From a system-level point of view, black-box components conceal parts of software execution paths, making systematic testing, e.g., via symbolic execution, difficult. In this paper, we use automata learning to facilitate symbolic execution in the presence of black-box components. We substitute black boxes in a software system with learned automata that model them. This enables us to symbolically execute program paths that run through black boxes. We show that applying the approach to real-world software systems incorporating black boxes increases code coverage compared to standard techniques.
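
To make the substitution concrete, here is an invented toy (not the paper's tool chain): a black-box recognizer is replaced by a learned DFA given as a transition table, and a bounded unrolling of that DFA becomes ordinary constraints that a symbolic executor can hand to z3.

# Invented toy: a learned DFA standing in for a black-box component, unrolled
# into z3 constraints so symbolic execution can cross the (former) black box.
from z3 import Int, IntVal, Solver, Or, And, If, sat

delta = {(0, 0): 0, (0, 1): 1, (1, 0): 0, (1, 1): 2, (2, 0): 2, (2, 1): 2}
accepting = {2}                            # learned DFA: accepts inputs containing "11"

N = 4                                      # bound on the symbolic input length
xs = [Int(f"x{i}") for i in range(N)]      # symbolic input characters

s = Solver()
for x in xs:
    s.add(Or(x == 0, x == 1))

state = IntVal(0)
for x in xs:                               # encode one DFA step per input position
    nxt = IntVal(0)
    for (q, a), q2 in delta.items():
        nxt = If(And(state == q, x == a), q2, nxt)
    state = nxt

s.add(Or([state == q for q in accepting])) # target path: the black box accepts
if s.check() == sat:
    m = s.model()
    print("input driving execution through the black box:", [m[x] for x in xs])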

14:00
Functional Synthesis via Input-Output Separation

ABSTRACT. Boolean functional synthesis is the process of constructing a Boolean function from a Boolean specification that relates input and output variables. Despite significant recent developments in synthesis algorithms, Boolean functional synthesis remains a challenging problem even when state-of-the-art methods are used for decomposing the specification. In this work we introduce a fresh decomposition approach, orthogonal to existing methods, that explores the decomposition of the specification into separate input and output components. Initially, we try to adapt the more theoretical notion of "sequential decomposition" into a method that allows each component to be synthesized independently and later recomposed into an implementation of the entire specification. We point out the drawbacks of this approach for practical use and propose a more relaxed approach, in which we make use of an input-output decomposition of a given specification described as a CNF formula, alternately analyzing the separate input and output components. We exploit well-defined properties of these components to ultimately synthesize a solution for the entire specification. We first provide a theoretical result showing that, for input components with specific structures, synthesis for CNF formulas via this framework can be performed more efficiently than in the general case. We then show by experimental evaluation that our algorithm also performs well in practice on instances that are challenging for existing state-of-the-art tools, serving as a good complement to modern synthesis techniques.
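
As a minimal instance of the underlying synthesis problem (deliberately not the paper's input-output separation, but the classical self-substitution trick for a single output), the candidate F(x) = phi(x, 1) is always a correct Skolem function and can be verified with z3:

# Single-output Boolean functional synthesis by self-substitution (classical
# trick shown for contrast; the paper's decomposition handles the general case).
from z3 import Bools, Or, Not, Xor, Implies, Exists, Solver, substitute, BoolVal, unsat

x1, x2, y = Bools("x1 x2 y")
phi = Xor(Or(x1, x2), y)                      # spec: y must differ from (x1 OR x2)

cand = substitute(phi, (y, BoolVal(True)))    # F(x) := phi(x, 1)
realized = substitute(phi, (y, cand))         # phi(x, F(x))

s = Solver()
# Correctness: whenever some y satisfies phi, plugging in F(x) also does.
s.add(Not(Implies(Exists([y], phi), realized)))
print("candidate correct?", s.check() == unsat)   # True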

14:30
Learning Linear Temporal Properties

ABSTRACT. We present two novel algorithms for learning formulas in Linear Temporal Logic (LTL) from examples. The first learning algorithm reduces the learning task to a series of satisfiability problems in propositional Boolean logic and produces a smallest LTL formula (in terms of the number of subformulas) that is consistent with the given data. Our second learning algorithm, on the other hand, combines the SAT-based learning algorithm with classical algorithms for learning decision trees. The result is a learning algorithm that scales to real-world scenarios with hundreds of examples but can no longer guarantee to produce minimal consistent LTL formulas. We compare both learning algorithms and demonstrate their performance on a wide range of synthetic benchmarks. Additionally, we illustrate their usefulness on the task of understanding executions of a leader election protocol.
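
The sketch below replaces the SAT encoding with naive smallest-first enumeration (a deliberate simplification, and our own trace data) to show the learning task itself: find a small LTL formula over finite traces that holds on all positive examples and on no negative ones.

# Naive stand-in for SAT-based LTL learning: enumerate formulas smallest-first
# and return the first one consistent with labeled finite traces.
from itertools import product

APS = ["p", "q"]

def ev(f, t, i):
    """Evaluate formula f on finite trace t (a list of sets of atomic props) at step i."""
    op = f[0]
    if op == "ap":  return f[1] in t[i]
    if op == "not": return not ev(f[1], t, i)
    if op == "and": return ev(f[1], t, i) and ev(f[2], t, i)
    if op == "X":   return i + 1 < len(t) and ev(f[1], t, i + 1)
    if op == "F":   return any(ev(f[1], t, j) for j in range(i, len(t)))
    if op == "G":   return all(ev(f[1], t, j) for j in range(i, len(t)))

def formulas(size):
    if size == 1:
        for a in APS:
            yield ("ap", a)
        return
    for sub in formulas(size - 1):
        for op in ("not", "X", "F", "G"):
            yield (op, sub)
    for k in range(1, size - 1):
        for l, r in product(formulas(k), formulas(size - 1 - k)):
            yield ("and", l, r)

def learn(pos, neg, max_size=4):
    for size in range(1, max_size + 1):        # smallest-first mimics minimality
        for f in formulas(size):
            if all(ev(f, t, 0) for t in pos) and not any(ev(f, t, 0) for t in neg):
                return f

pos = [[{"p"}, {"p"}, {"p", "q"}]]             # traces that should satisfy the formula
neg = [[{"p"}, set(), {"p"}]]                  # traces that should violate it
print("learned:", learn(pos, neg))             # e.g. ('X', ('ap', 'p'))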

15:00-15:30 Coffee Break
15:30-16:40 Session 17: SMT and CHC
15:30
The Eldarica Horn Solver

ABSTRACT. This paper presents version 2 of the Eldarica model checker. Over the past years we have been developing and maintaining Eldarica as a state-of-the-art solver for Horn clauses over integer arithmetic. In version 2, we have extended the solver to also support algebraic data types and bit-vectors, theories that are commonly used in verification but currently unsupported by most Horn solvers. This paper describes the high-level structure of the tool and the interface it provides to other applications. We also report on an evaluation of the tool. While some of the techniques inside Eldarica have been documented in research papers over the last years, this is the first tool paper describing Eldarica in its entirety.
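
For readers unfamiliar with the format, a system of Horn clauses over integer arithmetic has the following shape (a generic textbook example, not one from the paper):

\[
\forall x.\; x = 0 \;\rightarrow\; \mathit{Inv}(x)
\]
\[
\forall x.\; \mathit{Inv}(x) \wedge x < 10 \;\rightarrow\; \mathit{Inv}(x+1)
\]
\[
\forall x.\; \mathit{Inv}(x) \wedge x \geq 10 \;\rightarrow\; x = 10
\]

A Horn solver such as Eldarica searches for an interpretation of the unknown predicate Inv, here for instance 0 <= x <= 10, that makes all three implications valid.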

15:50
Trau: SMT solver for string constraints

ABSTRACT. We introduce Trau, an SMT solver for an expressive constraint language, including word equations, length constraints, context-free membership queries, and transducer constraints. The satisfiability problem for such a class of constraints is in general undecidable. The key idea behind Trau is a technique called flattening, which searches for satisfying assignments that follow simple patterns. Trau implements a Counter-Example Guided Abstraction Refinement (CEGAR) framework which contains both an under- and an over-approximation module. The approximations are refined in an automatic manner by information flow between the two modules. The technique implemented by Trau can handle a rich class of string constraints and has better performance than state-of-the-art string solvers.
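
The CEGAR architecture the abstract describes can be sketched generically (our skeleton with stubbed oracles, not Trau's code): an over-approximation whose UNSAT answers are conclusive, an under-approximation over flat patterns whose SAT answers are conclusive, and a refinement step exchanging information between the two.

# Generic over/under-approximation CEGAR skeleton (stubs, not Trau's code).
def cegar(constraint, over_check, under_check, refine, max_iters=100):
    for _ in range(max_iters):
        ok, reason = over_check(constraint)       # UNSAT of the over-approx is real
        if not ok:
            return "unsat", reason
        found, witness = under_check(constraint)  # SAT of the under-approx (flat patterns) is real
        if found:
            return "sat", witness
        constraint = refine(constraint, reason, witness)
    return "unknown", None

# Trivial demo with toy oracles that converge immediately.
print(cegar('x ++ y == "ab"',
            over_check=lambda c: (True, None),
            under_check=lambda c: (True, {"x": "a", "y": "b"}),
            refine=lambda c, r, w: c))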

16:10
Solving Constrained Horn Clauses Using Syntax and Data

ABSTRACT. Discovery of inductive invariants for a program with arbitrary loop structure can be viewed as an instance of a more general problem: synthesizing solutions for a system of Constrained Horn Clauses (CHCs). A CHC is a logical implication involving unknown predicates. In order to find interpretations that make every CHC in a given system true, we propose an algorithm based on Syntax-Guided Synthesis. For each unknown predicate, it generates a formal grammar from all known parts of the CHC system (i.e., using syntax). This grammar is further enriched by predicates and constants obtained from models of formulas (i.e., using data). The syntactic and data-driven ways of deriving grammars complement one another: using syntax makes readily available a number of useful candidates that may be computationally expensive to derive from data, whereas using data provides meaningful semantic candidates to which the CHC system may be syntactically oblivious. A naive strategy for handling multiple unknown predicates would be to sample candidate formulas individually from the corresponding grammars and to check their fitness with an SMT solver. Since this does not scale well, we propose a more accurate approach to sampling: we generate a candidate for one unknown predicate at a time, and then propagate it to candidates of the remaining unknowns through all available implications in the CHC system. We present an evaluation of the algorithm on a range of benchmarks originating from program verification tasks and show that it is competitive with the state of the art in CHC solving.
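
A stripped-down guess-and-check loop in this spirit (our toy with one unknown predicate; the "grammar" is interval predicates over constants mined from the clauses, and z3 checks every clause per candidate):

# Toy grammar-and-data CHC solving: enumerate candidate interpretations of Inv
# and check all clauses with z3 (pip install z3-solver).
from itertools import combinations_with_replacement
from z3 import Int, And, Implies, Not, Solver, unsat

x = Int("x")
consts = [0, 1, 10]                        # "data": constants harvested from the system

def valid(formula):
    s = Solver()
    s.add(Not(formula))
    return s.check() == unsat

for lo, hi in combinations_with_replacement(consts, 2):   # "syntax": interval grammar
    inv = lambda e, lo=lo, hi=hi: And(e >= lo, e <= hi)
    chcs = [Implies(x == 0, inv(x)),                       # initiation
            Implies(And(inv(x), x < 10), inv(x + 1)),      # consecution
            Implies(And(inv(x), x >= 10), x == 10)]        # property
    if all(valid(c) for c in chcs):
        print(f"solution: Inv(x) = ({lo} <= x <= {hi})")
        break

The actual algorithm additionally propagates a candidate for one unknown through the system's implications to constrain the remaining unknowns, rather than sampling each independently as above.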

16:40-17:00 Coffee Break
17:00-18:00 Session 18: Rails
17:00
Analysis of Relay Interlocking Systems via SMT-based Model Checking of Switched Multi-Domain Kirchhoff Networks

ABSTRACT. Relay Interlocking Systems (RIS) are analog electromechanical networks traditionally applied in the safety-critical domain of railway signaling. RIS consist of networks of interconnected components such as power supplies, contacts, resistors, circuit breakers, and electrically controlled switches (i.e., the relays). Due to cost and flexibility needs, RIS are progressively being replaced by equivalent computer-based systems. Unfortunately, RIS are often legacy systems that are hard to understand at an abstract level, so that the valuable information encoded in them is not available. In this paper, we propose a methodology and a tool chain to analyze and understand legacy RIS. A RIS is reduced to a Switched Multi-Domain Kirchhoff Network (SMDKN), which is in turn compiled into a network of hybrid automata. SMT-based model checking then supports various forms of formal analysis. The approach is based on modeling the RIS analog signals (i.e., currents and voltages) over continuous time and on mapping them to railway control actions. Starting from the diagram representation, we overcome a key limitation of previous approaches based on purely Boolean models, namely the presence of spurious behaviors. The evaluation of the tool chain on a set of real-world railway RIS demonstrates practical scalability.
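
Schematically, and using standard circuit theory rather than the paper's exact formalization: each network node contributes a Kirchhoff current law constraint, and each relay is a small hybrid automaton whose mode switches are guarded by its coil current,

\[
\sum_{k \in \mathrm{in}(n)} i_k \;=\; \sum_{k \in \mathrm{out}(n)} i_k
\qquad \text{for every node } n,
\]
\[
\mathit{open} \xrightarrow{\; i_{\mathrm{coil}} \geq I_{\mathrm{pickup}} \;} \mathit{closed},
\qquad
\mathit{closed} \xrightarrow{\; i_{\mathrm{coil}} \leq I_{\mathrm{dropout}} \;} \mathit{open},
\]

which is the kind of network the SMDKN compilation turns into input for the SMT-based model checker.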

17:30
Design-Time Railway Capacity Verification using SAT modulo Discrete Event Simulation

ABSTRACT. Railway capacity is complex to define and analyze, and existing tools and methods used in practice require comprehensive models of the railway network and its timetables. Design engineers working within the limited scope of construction projects report that only ad-hoc, experience-based methods of capacity analysis are available to them. Designs have subtle capacity pitfalls which are discovered too late, only when network-wide timetables are made -- there is a mismatch between the scope of construction projects and the scope of capacity analysis, as currently practiced.

We suggest a language for capacity specifications suited for construction projects, expressing properties such as running time, train frequency, overtaking and crossing. The planning problem arising from verification of these properties is constrained by discrete control system logic, network topology, laws of motion, and sparse communication. The second-order linear differential equations describing train motion can be solved analytically, but the resulting expressions contain multiplication of real variables.
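
To see where the nonlinearity comes from, consider the constant-acceleration case (our illustration): the analytic solution already mixes products of the unknowns,

\[
d \;=\; v_0\,t + \tfrac{1}{2}\,a\,t^2
\quad\Longrightarrow\quad
t \;=\; \frac{-v_0 + \sqrt{v_0^{2} + 2\,a\,d}}{a},
\]

so constraints relating distance d, initial speed v_0, acceleration a, and time t fall into nonlinear real arithmetic, which motivates delegating them to simulation rather than to the solver.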

We argue that reasoning over the whole discrete/continuous space of solutions is not efficient with current state-of-the-art solvers, and we instead present a special-purpose SAT-based dispatch planning tool that solves the discrete part of the problem. Continuous dynamics and timing constraints are evaluated using discrete event simulation, and the two components communicate in a counterexample-guided abstraction refinement (CEGAR) loop. We show that our method is fast enough at relevant scales to provide agile verification in a design setting, and we present case studies based on data from existing infrastructure and ongoing construction projects.

19:00-21:00 Banquet

Location: Clay Pit