FMCAD 2018: FORMAL METHODS IN COMPUTER AIDED DESIGN 2018
PROGRAM FOR WEDNESDAY, OCTOBER 31ST

09:00-10:00 Session 7: Formal Verification of Financial Algorithms with Imandra
09:00
Formal Verification of Financial Algorithms with Imandra

ABSTRACT. Many deep issues plaguing today’s financial markets are symptoms of a fundamental problem: The complexity of algorithms underlying modern finance has significantly outpaced the power of traditional tools used to design and regulate them. At Aesthetic Integration, we’ve pioneered the use of formal verification for analysing the safety and fairness of financial algorithms. With a focus on financial infrastructure (e.g., the matching logics of exchanges and dark pools), we’ll describe the landscape, and illustrate our Imandra formal verification system on a number of real-world examples. We’ll sketch many open problems and future directions along the way.

10:00-10:15 Coffee Break
10:45-12:05 Session 9: Hardware
10:45
CoSA: Integrated Verification for Agile Hardware Design
SPEAKER: Makai Mann

ABSTRACT. Symbolic model-checking is a well-established technique used in hardware design to assess, and formally verify, functional correctness. However, most modern model-checkers encode the problem into propositional satisfiability (SAT) and do not leverage any additional information beyond the input design, which is typically provided in a hardware description language such as Verilog.

In this paper, we present CoSA (CoreIR Symbolic Analyzer), a model-checking tool for CoreIR designs. CoreIR is a new intermediate representation for hardware, which is intended to provide for hardware what LLVM did for software. CoSA encodes model-checking queries into first-order formulas that can be solved by Satisfiability Modulo Theories (SMT) solvers. In particular, it natively supports encodings using the theories of bitvectors and arrays. CoSA is also closely integrated with CoreIR and can thus leverage CoreIR-generated metadata in addition to user-provided lemmas to assist with formal verification. CoSA supports multiple input formats and provides a broad set of analyses including equivalence checking and safety and liveness verification. CoSA is open-source and written in Python, making it easily extendable.
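The core query a bounded model checker like CoSA poses — "is a bad state reachable within k steps of the initial states?" — can be illustrated without an SMT solver. The following is a minimal sketch, not CoSA itself: it explores a toy transition system (a wrapping 3-bit counter, an invented example) explicitly, where a real tool would unroll the transition relation into a first-order formula and hand it to an SMT solver.

```python
# Minimal sketch (not CoSA): bounded reachability for a toy transition
# system, illustrating the kind of query an SMT-based model checker
# unrolls symbolically. Here we enumerate traces explicitly instead.
def bmc(init, trans, bad, k):
    """Return a counterexample trace of length <= k, or None."""
    frontier = [[s] for s in init]
    for _ in range(k):
        new = []
        for trace in frontier:
            if bad(trace[-1]):        # bad state reached: report the trace
                return trace
            for s2 in trans(trace[-1]):
                new.append(trace + [s2])
        frontier = new
    for trace in frontier:            # check states at the final depth too
        if bad(trace[-1]):
            return trace
    return None

init = [0]
trans = lambda s: [(s + 1) % 8]       # deterministic wrapping 3-bit counter
bad = lambda s: s == 6                # property under test: value never 6
cex = bmc(init, trans, bad, k=10)
print(cex)                            # [0, 1, 2, 3, 4, 5, 6]
```

An SMT encoding replaces the explicit frontier with k copies of the state variables constrained by the transition relation, which is where bit-vector and array theories pay off.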

11:05
Integrating Memory Consistency Models with Instruction-Level Abstraction for Heterogeneous System-on-Chip Verification
SPEAKER: Hongce Zhang

ABSTRACT. Modern Systems-on-Chip (SoCs) integrate heterogeneous compute elements ranging from non-programmable specialized accelerators to programmable CPUs and GPUs. To ensure correct system behavior, SoC verification techniques must account for inter-component interactions through shared memory, which necessitates reasoning about memory consistency models (MCMs). This paper presents ILA-MCM, a symbolic reasoning framework for automated SoC verification, where MCMs are integrated with Instruction-Level Abstractions (ILAs) that have been recently proposed to model architecture-level, program-visible states and state updates in heterogeneous SoC components. ILA-MCM enables reasoning about system-wide properties that depend on functional state updates as well as ordering relations between them. Central to our approach is a novel facet abstraction, where a single program-visible variable is associated with potentially multiple facets that act as auxiliary state variables. Facets are updated by ILA “instructions,” and the required orderings between these updates are captured by MCM axioms. Thus, facets provide a symbolic constraint-based integration between operational ILA models and axiomatic MCM specifications. We have implemented a prototype ILA-MCM framework and demonstrate two verification applications: (a) finding a known bug in an accelerator-based SoC and a new potential bug under a weaker MCM, and (b) checking that a recently proposed low-level GPU hardware implementation is correct with respect to a high-level ILA-MCM specification.

11:35
BMC with Memory Models as Modules

ABSTRACT. We present two bounded model checking tools for concurrent programs. Their distinguishing feature is modularity: besides a program, they expect as input a module describing the hardware architecture for which the program should be verified. DARTAGNAN implements state reachability under the given memory model using a novel SMT encoding. PORTHOS checks state equivalence under two given memory models using a new guided search strategy. We have performed experiments to compare our tools against other memory model-aware verifiers and find them very competitive, despite the modularity offered by our approach.
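The "memory model as a module" idea can be sketched on the classic store-buffering (SB) litmus test: two threads each store to one shared variable and load the other. The sketch below is an invented illustration, not the tools' encoding — it brute-forces the interleavings allowed under sequential consistency, whereas DARTAGNAN encodes the memory model's axioms into an SMT formula.

```python
from itertools import permutations

# Illustrative sketch: the SB litmus test under sequential consistency (SC),
# checked by enumerating interleavings. A memory-model-aware verifier would
# instead take the model's axioms as an input module and encode them in SMT.
T0 = [("st", "x", 1), ("ld", "y", "r0")]   # thread 0: x := 1; r0 := y
T1 = [("st", "y", 1), ("ld", "x", "r1")]   # thread 1: y := 1; r1 := x

def sc_outcomes(t0, t1):
    outcomes = set()
    events = [(0, e) for e in t0] + [(1, e) for e in t1]
    for order in permutations(events):
        # SC: keep only interleavings preserving each thread's program order
        if [e for t, e in order if t == 0] != t0:
            continue
        if [e for t, e in order if t == 1] != t1:
            continue
        mem, regs = {"x": 0, "y": 0}, {}
        for _, (op, var, val) in order:
            if op == "st":
                mem[var] = val
            else:
                regs[val] = mem[var]
        outcomes.add((regs["r0"], regs["r1"]))
    return outcomes

# Under SC the outcome r0 = r1 = 0 is forbidden; weaker models (e.g. TSO)
# allow it, which is exactly what a pluggable memory-model module captures.
print((0, 0) in sc_outcomes(T0, T1))   # False
```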

12:05-13:30 Lunch Break

Tejas Conference Dining Room

13:30-15:00 Session 10: Quantifiers and SAT
13:30
Complete Test Sets And Their Approximations

ABSTRACT. We use testing to check if a combinational circuit N is a constant 0 (i.e. always evaluates to 0). We call a set of tests proving that N is a constant 0 a complete test set (CTS). The conventional point of view is that to prove that N is a constant 0 one has to generate a trivial CTS. It consists of all 2^|X| input assignments where X is the set of input variables of N. We use the notion of a Stable Set of Assignments (SSA) to show that one can build a non-trivial CTS consisting of less than 2^|X| tests. Given an unsatisfiable CNF formula H(W), an SSA of H is a set of assignments to W proving unsatisfiability of H. A trivial SSA is the set of all 2^|W| assignments to W. Importantly, real-life formulas can have non-trivial SSAs that are much smaller than 2^|W|. In general, construction of even non-trivial CTSs is inefficient. We describe a much more efficient approach where tests are extracted from an SSA built for a "projection" of N on a subset of variables of N. These tests can be viewed as an approximation of a CTS for N. We give experimental results and describe potential applications of this approach.
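The baseline notion from the abstract — the trivial complete test set — is easy to make concrete. This sketch only illustrates that baseline (the paper's contribution is constructing much smaller non-trivial test sets via SSAs, which is not reproduced here); the circuit N below is an invented example that is constant 0 by construction.

```python
from itertools import product

# Sketch of the *trivial* complete test set: prove a combinational circuit
# is constant 0 by running all 2^|X| input assignments. The paper's point
# is that far smaller (non-trivial) test sets can suffice.
def is_constant_zero(circuit, n_inputs):
    trivial_cts = product([0, 1], repeat=n_inputs)   # all 2^n tests
    return all(circuit(*t) == 0 for t in trivial_cts)

# Invented example: N(a, b, c) = (a AND NOT a) OR (b AND c AND NOT b),
# constant 0 by construction (each disjunct contains x AND NOT x).
N = lambda a, b, c: (a & ~a & 1) | (b & c & ~b & 1)
print(is_constant_zero(N, 3))   # True
```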

14:00
Expansion-Based QBF Solving Without Recursion

ABSTRACT. In recent years, expansion-based techniques have been shown to be very powerful in theory and practice for solving quantified Boolean formulas (QBF), the extension of propositional formulas with existential and universal quantifiers over Boolean variables. Such approaches partially expand one type of variable (either existential or universal) and pass the obtained formula to a SAT solver for deciding the QBF. State-of-the-art expansion-based solvers process the given formula one quantifier block at a time and recursively apply expansion until a solution is found.

In this paper, we present a novel algorithm for expansion-based QBF solving that deals with the whole quantifier prefix at once. Hence recursive applications of the expansion principle are avoided. Experiments indicate that the performance of our simple approach is comparable with the state of the art of QBF solving, especially in combination with other solving techniques.
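The expansion principle itself fits in a few lines for the smallest case. For a 2QBF of the form "forall u, exists e, phi(u, e)", expanding the universal variable u yields the purely existential formula "exists e0, e1 . phi(0, e0) and phi(1, e1)", which a SAT check decides. The sketch below brute-forces that check on one-variable blocks — an illustration of the principle only, not the paper's non-recursive algorithm, which handles whole prefixes and calls a real SAT solver.

```python
from itertools import product

# Sketch of universal expansion for  forall u . exists e . phi(u, e):
# introduce one fresh copy of e per value of u and conjoin the instances,
# then decide the resulting existential formula by (brute-force) SAT.
def solve_2qbf(phi):
    return any(phi(0, e0) and phi(1, e1)
               for e0, e1 in product([0, 1], repeat=2))

# phi(u, e) = u XOR e: e can always be chosen different from u -> true QBF
print(solve_2qbf(lambda u, e: u ^ e))   # True
# phi(u, e) = u AND e: for u = 0 no e satisfies phi -> false QBF
print(solve_2qbf(lambda u, e: u & e))   # False
```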

14:30
Bit-Vector Interpolation and Quantifier Elimination by Lazy Reduction

ABSTRACT. The inference of program invariants over machine arithmetic, commonly called bit-vector arithmetic, is an important problem in verification. Techniques that have been successful for unbounded arithmetic, in particular Craig interpolation, have turned out to be difficult to generalise to machine arithmetic: existing bit-vector interpolation approaches are based either on eager translation from bit-vectors to unbounded arithmetic, resulting in complicated constraints that are hard to solve and interpolate, or on bit-blasting to propositional logic, in the process losing all arithmetic structure. We present a new approach to bit-vector interpolation, as well as bit-vector quantifier elimination (QE), that works by lazy translation of bit-vector constraints to unbounded arithmetic. Laziness enables us to fully utilise the information available during proof search (implied by decisions and propagation) in the encoding, and this way produce constraints that can be handled relatively easily by existing interpolation and QE procedures for Presburger arithmetic. The lazy encoding is complemented with a set of native proof rules for bit-vector equations and non-linear (polynomial) constraints, this way minimising the number of cases a solver has to consider.

15:00-15:30 Coffee Break
15:40-17:00 Session 11: Liveness
15:40
Analyzing the Fundamental Liveness Property of the Chord Protocol
SPEAKER: Jeanne Tawa

ABSTRACT. Chord is a protocol that provides a scalable distributed hash table over an underlying peer-to-peer network. Since it combines data structures, asynchronous communications, concurrency, and fault tolerance, it features rich structural and temporal properties that make it an interesting target for formal specification and verification. Previous work has mainly focused on automatic proofs of safety properties or manual proofs of the full correctness of the protocol (a liveness property). In this paper, we report on automatically analyzing the correctness of Chord with the Electrum language (developed in former work) on small network instances. In particular, we were able to find various corner cases in previous work and showed that the protocol was not correct as described there. We fixed all these issues and provide a version of the protocol for which we were not able to find any counterexample using our method.

16:10
k-FAIR = k-LIVENESS + FAIR: Revisiting SAT-based Liveness Algorithms

ABSTRACT. We revisit the two main SAT-based algorithms for checking liveness properties of finite-state transition systems: the k-LIVENESS algorithm of [1] and the FAIR algorithm of [2]. The two approaches are described very differently and are based on different principles. k-LIVENESS works by translating the liveness property together with fairness constraints to the form FGq, and then bounding the number of times variable q can evaluate to false. FAIR works by finding an over-approximation R of reachable states, such that no state in R lies on a fair cycle. In this paper, we present an algorithm k-FAIR that builds on top of the ideas of k-LIVENESS and FAIR, and that suitably combines the strengths of both approaches. Experiments on hardware model checking and industrial benchmarks show the promise of the new approach.
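The counting idea behind k-LIVENESS can be sketched concretely: to check FG q, count how often q is false along a run; if for some bound k no run makes the counter exceed k, then q eventually holds forever. The sketch below only simulates one run of an invented deterministic toy system up to a finite horizon, whereas the real algorithm reduces the bounded-counter claim to a safety check discharged by a model checker.

```python
# Sketch of the k-LIVENESS counting idea: check FG q by bounding the
# number of times q evaluates to false. Toy version for a deterministic
# system, simulated to a finite horizon rather than model-checked.
def fg_q_holds(init, trans, q, k, horizon=100):
    state, count = init, 0
    for _ in range(horizon):
        if not q(state):
            count += 1
            if count > k:
                return False      # counter exceeded the bound k: FG q refuted
        state = trans(state)
    return True                   # within this horizon, q failed <= k times

# Invented toy system: a mod-4 counter.
trans = lambda s: (s + 1) % 4
# q(s) := s >= 0 holds in every state, so FG q survives any bound k.
print(fg_q_holds(0, trans, lambda s: s >= 0, k=2))   # True
# q(s) := s == 3 is false in 3 of every 4 states, so the count blows past k.
print(fg_q_holds(0, trans, lambda s: s == 3, k=2))   # False
```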

References: [1] K. Claessen and N. Sorensson, "A liveness checking algorithm that counts," FMCAD 2012. [2] A. Bradley, F. Somenzi, Z. Hassan, and Y. Zhang, "An incremental approach to model checking progress properties," FMCAD 2011.

16:30
Temporal Prophecy for Proving Temporal Properties of Infinite-State Systems
SPEAKER: Oded Padon

ABSTRACT. Various verification techniques for temporal properties reduce temporal verification to safety verification. For infinite-state systems, these reductions are inherently incomplete. That is, for some instances, the temporal property is correct, but the resulting safety property is not. This paper introduces a mechanism for tackling this incompleteness. This mechanism, which we call temporal prophecy, is inspired by prophecy variables. Temporal prophecy refines an infinite-state system using temporal formulas, via a suitable tableau construction. For a specific liveness to safety reduction based on first-order logic, where temporal prophecy is defined via first-order linear temporal logic, we show that using temporal prophecy strictly increases the proof power of the reduction. Furthermore, temporal prophecy leads to robustness of the proof method, which is manifested by a cut elimination theorem. We integrate our approach into the Ivy deductive verification system, and show that it can handle challenging temporal verification examples.

17:00-17:15 Break
17:45-19:00 Reception

Halloween reception