FMCAD 2022: FORMAL METHODS IN COMPUTER-AIDED DESIGN
PROGRAM FOR THURSDAY, OCTOBER 20TH

09:00-10:05 Session 21: Proofs - Part 1
09:00
Small Proofs from Congruence Closure

ABSTRACT. Satisfiability Modulo Theories (SMT) solvers and equality saturation engines must generate proof certificates from e-graph-based congruence closure procedures to enable verification and conflict clause generation. Smaller proof certificates speed up these activities. Though the problem of generating proofs of minimal size is known to be NP-complete, existing proof minimization algorithms for congruence closure generate unnecessarily large proofs and introduce asymptotic overhead over the core congruence closure procedure. In this paper, we introduce an O(n^5) time algorithm which generates optimal proofs under a new relaxed "proof tree size" metric that directly bounds proof size. We then relax this approach further to a practical O(n log(n)) greedy algorithm which generates small proofs with no asymptotic overhead. We implemented our techniques in the egg equality saturation toolkit, yielding the first certifying equality saturation engine. We show that our greedy approach in egg quickly generates substantially smaller proofs than the state-of-the-art Z3 SMT solver on a corpus of 3,760 benchmarks.
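
For readers unfamiliar with how congruence-closure procedures extract proofs at all, the sketch below shows the classic proof-forest idea (in the style of Nieuwenhuis and Oliveras) that such algorithms build on: every union records the equation that justified it, and explaining an equality amounts to replaying the edges on the forest path between the two terms. This is background only; it is not the paper's optimal O(n^5) algorithm or its greedy variant, and all names are illustrative.

    # Proof-forest sketch: every merge records the equation that justified it,
    # and explain(a, b) replays the edges on the forest path between a and b.
    # Background only; not the paper's proof-minimization algorithms.

    class ProofForest:
        def __init__(self):
            self.parent = {}   # proof-forest edge: node -> parent node
            self.reason = {}   # node -> equation labelling its parent edge
            self.repr = {}     # plain union-find for the congruence classes

        def add(self, x):
            self.parent.setdefault(x, None)
            self.repr.setdefault(x, x)

        def find(self, x):
            while self.repr[x] != x:
                x = self.repr[x]
            return x

        def merge(self, a, b, equation):
            """Record that a = b because of `equation`."""
            if self.find(a) == self.find(b):
                return
            self._reroot(a)                        # make a the root of its proof tree
            self.repr[self.find(a)] = self.find(b)
            self.parent[a], self.reason[a] = b, equation

        def _reroot(self, x):
            prev, prev_reason = None, None
            while x is not None:
                nxt, r = self.parent[x], self.reason.get(x)
                self.parent[x], self.reason[x] = prev, prev_reason
                prev, prev_reason, x = x, r, nxt

        def explain(self, a, b):
            """Equations on the proof-forest path from a to b (assumed congruent)."""
            ancestors, x = set(), a
            while x is not None:
                ancestors.add(x)
                x = self.parent[x]
            lca = b
            while lca not in ancestors:
                lca = self.parent[lca]
            path = []
            for start in (a, b):
                x = start
                while x != lca:
                    path.append(self.reason[x])
                    x = self.parent[x]
            return path

    pf = ProofForest()
    for t in ("a", "b", "c"):
        pf.add(t)
    pf.merge("a", "b", "eq1: a = b")
    pf.merge("b", "c", "eq2: b = c")
    print(pf.explain("a", "c"))   # ['eq1: a = b', 'eq2: b = c']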

09:25
Reconstructing Fine-Grained Proofs of Complex Rewrites Using a Domain-Specific Language

ABSTRACT. Satisfiability modulo theories (SMT) solvers are widely used to prove security and safety properties of computer systems. For these applications, it is crucial that the result reported by an SMT solver be correct. Recently, there has been a renewed focus on producing independently checkable proofs in SMT solvers, partly with the aim of addressing this risk. These proofs record the reasoning done by an SMT solver and are ideally detailed enough to be easy to check. At the same time, modern SMT solvers typically implement hundreds of different term-rewriting rules in order to achieve state-of-the-art performance. Generating detailed proofs for applications of these rules is a challenge, because code implementing rewrite rules can be large and complex. Instrumenting this code to additionally produce proofs makes it even more complex and makes it harder to add new rewrite rules. We propose an alternative approach to the direct instrumentation of the rewriting module of an SMT solver. The approach uses a domain-specific language (DSL) to describe a set of rewrite rules declaratively and then reconstructs detailed proofs for specific rewrite steps on demand based on those declarative descriptions.
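
To make the declarative idea concrete, here is a toy sketch of rewrite rules given as pattern pairs, from which an individual proof step (rule name, substitution, and result) can be reconstructed on demand. The rule syntax, rule names, and term representation are hypothetical simplifications, not the DSL described in the paper.

    # Toy illustration of declaratively described rewrite rules from which
    # individual proof steps can be reconstructed on demand.  The rule format
    # is hypothetical and far simpler than a real SMT rewriting DSL.

    RULES = {
        # name: (lhs pattern, rhs pattern); pattern variables start with "?"
        "or-excluded-middle": (("or", ("not", "?x"), "?x"), "true"),
        "and-idem":           (("and", "?x", "?x"), "?x"),
    }

    def match(pattern, term, subst):
        """Try to extend `subst` so that pattern instantiated by subst equals term."""
        if isinstance(pattern, str) and pattern.startswith("?"):
            if pattern in subst:
                return subst if subst[pattern] == term else None
            return {**subst, pattern: term}
        if isinstance(pattern, tuple) and isinstance(term, tuple) \
                and len(pattern) == len(term):
            for p, t in zip(pattern, term):
                subst = match(p, t, subst)
                if subst is None:
                    return None
            return subst
        return subst if pattern == term else None

    def instantiate(pattern, subst):
        if isinstance(pattern, str) and pattern.startswith("?"):
            return subst[pattern]
        if isinstance(pattern, tuple):
            return tuple(instantiate(p, subst) for p in pattern)
        return pattern

    def reconstruct_step(term):
        """Find a rule applying at the root; return (rule, substitution, result)."""
        for name, (lhs, rhs) in RULES.items():
            subst = match(lhs, term, {})
            if subst is not None:
                return name, subst, instantiate(rhs, subst)
        return None

    # ("or", ("not", "a"), "a") rewrites to "true" by or-excluded-middle, ?x := a
    print(reconstruct_step(("or", ("not", "a"), "a")))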

09:50
TBUDDY: A Proof-Generating BDD Package

ABSTRACT. The TBUDDY library enables the construction and manipulation of reduced, ordered binary decision diagrams (BDDs). It extends the capabilities of the BUDDY BDD package to support trusted BDDs, where the generated BDDs are accompanied by proofs of their logical properties. These proofs are expressed in a standard clausal framework, for which a variety of proof checkers are available.

Building on TBUDDY via its application-program interface (API) enables developers to implement automated reasoning tools that generate correctness proofs for their outcomes. In some cases, BDDs serve as the core reasoning mechanism for the tool, while in other cases they provide a bridge from the core reasoner to proof generation. A Boolean satisfiability (SAT) solver based on TBUDDY can achieve polynomial scaling when generating unsatisfiability proofs for a number of problems that yield exponentially-sized proofs with standard solvers. It performs particularly well for formulas containing parity constraints, where it can employ Gaussian elimination to systematically simplify the constraints.

10:30-11:00 Session 23: Proofs - Part 2
10:30
Stratified Certification for k-Induction

ABSTRACT. Our recently proposed certification framework for bit-level k-induction-based model checking has been shown to be quite effective in increasing the trust of verification results, even though it partially relied on quantifier reasoning. In this paper we show how to simplify the approach by assuming reset functions to be stratified. This way the approach can be lifted to the word level and, in principle, to other theories where quantifier reasoning is difficult. Our new method requires six simple SAT checks and one polynomial check, allowing certification to remain in co-NP, whereas the previous approach required five SAT checks and one QBF check. Experimental results show a substantial performance gain for our new approach. Finally, we present and evaluate our new word-level certification tool, which is able to certify k-induction-based word-level model checking.
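
For context, the property being certified is standard k-induction, which itself boils down to two satisfiability queries per bound k, sketched below. The formula constructors and the solve oracle are assumptions of this sketch, not part of the certification framework, whose additional SAT and polynomial checks validate a certificate rather than rerun the model checker.

    # Standard k-induction as two satisfiability queries (background for what a
    # k-induction certificate must justify).  I, T, P build formulas over
    # timeframe indices and `solve` is a hypothetical satisfiability oracle;
    # both are assumptions of this sketch.

    def And(*xs): return ("and",) + xs
    def Or(*xs):  return ("or",) + xs
    def Not(x):   return ("not", x)

    def k_induction(I, T, P, k, solve):
        # Base case: is a property violation reachable within k steps?
        base = I(0)
        for i in range(k):
            base = And(base, T(i, i + 1))
        base = And(base, Or(*[Not(P(i)) for i in range(k + 1)]))
        if solve(base):
            return "counterexample"

        # Step case: can k consecutive safe states be followed by an unsafe one?
        step = And(*[P(i) for i in range(k)])
        for i in range(k):
            step = And(step, T(i, i + 1))
        step = And(step, Not(P(k)))
        if not solve(step):
            return "proved"      # P is k-inductive
        return "unknown"         # increase k or strengthen P and retry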

10:45
Proof-Stitch: Proof Combination for Divide-and-Conquer SAT Solvers

ABSTRACT. With the increasing availability of parallel computing power, there is a growing focus on parallelizing algorithms for important automated reasoning problems such as Boolean satisfiability (SAT). Divide-and-Conquer (D&C) is a popular parallel SAT solving paradigm that partitions SAT instances into independent sub-problems which are then solved in parallel. For unsatisfiable instances, state-of-the-art D&C solvers generate DRAT refutations for each sub-problem. However, they do not generate a single refutation for the original instance. To close this gap, we present Proof-Stitch, a procedure for combining refutations of different sub-problems into a single refutation for the original instance. We prove the correctness of the procedure and propose optimizations to reduce the size and checking time of the combined refutations by invoking existing trimming tools in the proof-combination process. We also provide an extensible implementation of the proposed technique. Experiments on instances from last year's SAT competition show that the optimized refutations are checkable up to seven times faster than unoptimized refutations.
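
One standard way to perform such a combination, shown below as a simplified sketch (not necessarily the exact procedure of Proof-Stitch), is to weaken every addition step of the refutation of sub-problem i with the negated literals of its cube, so that the final step of that sub-proof becomes the clause "not c_i"; once all negated cubes have been added, the empty clause follows because the cubes cover the search space. The sketch assumes RUP-style lemmas and cubes produced by a splitting tree, and it ignores deletion information, all of which a real DRAT-level implementation must handle.

    # Simplified stitching of per-cube DRAT refutations (illustrative only).
    # Clauses are lists of ints; deletion steps are skipped in this sketch.

    def weaken_subproof(subproof, cube):
        """Append the negation of `cube` to every addition step of `subproof`,
        turning its final empty clause into the clause 'not cube'."""
        negated = [-lit for lit in cube]
        out = []
        for step in subproof:
            if step and step[0] == "d":      # deletion info ignored here
                continue
            out.append(list(step) + [l for l in negated if l not in step])
        return out

    def stitch(subproofs, cubes):
        """Combine refutations of F /\ c_i into one refutation of F."""
        combined = []
        for proof, cube in zip(subproofs, cubes):
            combined.extend(weaken_subproof(proof, cube))
        # For cubes generated by a splitting tree, the empty clause is now a
        # reverse-unit-propagation step; otherwise intermediate resolvents
        # over the splitting variables would have to be added first.
        combined.append([])
        return combined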

11:00-12:05 Session 24: SAT and SMT - Part 2
11:00
First-Order Subsumption via SAT Solving

ABSTRACT. Automated reasoners, such as SAT/SMT solvers and first-order provers, are becoming the backbones of applications of formal methods, for example in automating deductive verification, program synthesis, and security analysis. Automation in these formal methods domains crucially depends on the efficiency of the underlying reasoners at finding proofs and/or counterexamples for the properties to be enforced. In order to gain efficiency, automated reasoners use dedicated proof rules to keep proof search tractable. To this end, subsumption is one of the most important proof rules used by automated reasoners, ranging from SAT solvers to first-order theorem provers and beyond. It is common that millions of subsumption checks are performed during proof search, necessitating efficient implementations. However, in contrast to propositional subsumption as used by SAT solvers and implemented using sophisticated polynomial algorithms, first-order subsumption in first-order theorem provers involves NP-complete search queries, turning the efficient use of first-order subsumption into a huge practical burden. In this paper we argue that the integration of a dedicated SAT solver provides a remedy towards an efficient implementation of first-order subsumption and related rules, and thus further increases the scalability of first-order theorem proving towards applications of formal methods. Our experimental results demonstrate that, by using a tailored SAT solver within first-order reasoning, we gain a large speed-up on state-of-the-art benchmarks.
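
To make the connection concrete, the sketch below shows one simple way a single subsumption check (does some substitution map every literal of clause C to a literal of clause D?) can be turned into a SAT instance: a Boolean variable per candidate literal-to-literal match, an at-least-one clause per literal of C, and binary blocking clauses for pairs of matches whose substitutions disagree. This illustrates the general idea only; it is not the encoding implemented in the paper.

    # Minimal SAT encoding of one first-order subsumption check (illustration
    # only, not Vampire's actual encoding).  Terms are nested tuples; variables
    # of the subsuming clause are strings starting with "$".

    def match_literal(pattern, target, subst):
        """One-sided matching of `pattern` onto `target`, extending `subst`."""
        if isinstance(pattern, str) and pattern.startswith("$"):
            if pattern in subst:
                return subst if subst[pattern] == target else None
            return {**subst, pattern: target}
        if isinstance(pattern, tuple) and isinstance(target, tuple) \
                and len(pattern) == len(target):
            for p, t in zip(pattern, target):
                subst = match_literal(p, t, subst)
                if subst is None:
                    return None
            return subst
        return subst if pattern == target else None

    def subsumption_cnf(C, D):
        """Return (cnf, matches): the CNF is satisfiable iff C subsumes D."""
        var, matches, cnf = 0, {}, []
        for i, ci in enumerate(C):
            row = []
            for j, dj in enumerate(D):
                s = match_literal(ci, dj, {})
                if s is not None:
                    var += 1
                    matches[(i, j)] = (var, s)
                    row.append(var)
            if not row:
                return [[]], matches        # some literal of C matches nothing
            cnf.append(row)                 # at least one match for literal i
        keys = list(matches)
        for a in range(len(keys)):
            for b in range(a + 1, len(keys)):
                v1, s1 = matches[keys[a]]
                v2, s2 = matches[keys[b]]
                if any(x in s1 and s1[x] != y for x, y in s2.items()):
                    cnf.append([-v1, -v2])  # the two substitutions disagree
        return cnf, matches

    # C = { p($X, $X) }, D = { p(a, a), q(b) }: satisfiable, so C subsumes D.
    C = [("p", "$X", "$X")]
    D = [("p", "a", "a"), ("q", "b")]
    print(subsumption_cnf(C, D)[0])   # [[1]] -- one candidate match, no conflicts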

11:25
BaxMC: a CEGAR approach to MAX#SAT
PRESENTER: Thomas Vigouroux

ABSTRACT. The MAX#SAT problem is an important problem with multiple applications in security and program synthesis, and it is provably hard to solve. MAX#SAT is defined as follows: given a parameterized propositional formula, with an optional existential prefix, compute parameters such that the number of models of the formula is maximal.

We propose a CEGAR-based algorithm and refinements thereof, relying on both exact and approximate model counting, and we provide formal guarantees about its correctness in both cases. Our experiments show that this algorithm has much better effective complexity than the state of the art.

As another contribution, we give a simple proof of hardness of the exact problem and of generalizations thereof (in the counting hierarchy).

11:50
INC: A Scalable Incremental Weighted Sampler

ABSTRACT. The fundamental problem of weighted sampling involves sampling satisfying assignments of Boolean formulas, which specify the sampling sets, according to distributions defined by pre-specified weight functions. The tight integration of sampling routines in various applications has highlighted the need for samplers to be incremental, i.e., samplers are expected to handle updates to the weight functions. The primary contribution of this work is an efficient knowledge compilation-based weighted sampler, INC, designed for incremental sampling. INC builds on top of the recently proposed knowledge compilation language OBDD[AND] and is accompanied by rigorous theoretical guarantees. Our extensive experiments demonstrate that INC is faster than the state-of-the-art approach on the majority of the evaluation benchmarks. In particular, we observed a median runtime improvement of 1.69× over the prior state-of-the-art approach.
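
The following sketch illustrates the generic principle behind weighted sampling over a decision diagram, which is the backdrop for this work: annotate nodes with weighted model counts and sample a root-to-leaf path, choosing each branch with probability proportional to the weighted count below it. The diagram here is a plain OBDD in which every variable is tested on every path; INC's OBDD[AND] representation and its incremental machinery are more involved.

    import random

    # Generic weighted sampling over a complete OBDD (illustration only; this
    # is not INC's OBDD[AND] data structure).

    class Node:
        def __init__(self, var=None, lo=None, hi=None, leaf=None):
            self.var, self.lo, self.hi, self.leaf = var, lo, hi, leaf  # leaf: True/False

    def weighted_count(node, weight, memo=None):
        """Weighted count of satisfying assignments below `node`."""
        memo = {} if memo is None else memo
        if node.leaf is not None:
            return 1.0 if node.leaf else 0.0
        if id(node) not in memo:
            w = weight[node.var]
            memo[id(node)] = (1 - w) * weighted_count(node.lo, weight, memo) \
                             + w * weighted_count(node.hi, weight, memo)
        return memo[id(node)]

    def sample(node, weight):
        """Draw one satisfying assignment with probability proportional to its weight."""
        assignment = {}
        while node.leaf is None:
            w = weight[node.var]
            p_hi = w * weighted_count(node.hi, weight)
            p_lo = (1 - w) * weighted_count(node.lo, weight)
            go_hi = random.random() < p_hi / (p_hi + p_lo)
            assignment[node.var] = go_hi
            node = node.hi if go_hi else node.lo
        return assignment

    # x XOR y with weights w(x) = 0.3, w(y) = 0.8
    T, F = Node(leaf=True), Node(leaf=False)
    y_pos = Node("y", lo=F, hi=T)     # x = 0 branch: formula reduces to y
    y_neg = Node("y", lo=T, hi=F)     # x = 1 branch: formula reduces to not y
    root = Node("x", lo=y_pos, hi=y_neg)
    print(sample(root, {"x": 0.3, "y": 0.8}))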

12:05-12:40 Session 25: Student forum talks - Part 2
GAMBIT: An Interactive Playground for Concurrent Programs Under Relaxed Memory Models

ABSTRACT. Concurrency bugs are challenging to diagnose. Worse, the relaxed memory models implemented by "real" architectures and compilers make concurrency bugs even more mysterious. GAMBIT is an interactive environment for debugging and repairing concurrency bugs under multiple relaxed memory models. GAMBIT analyzes supervised executions of concurrent programs using a theorem prover to answer queries about program behaviors under alternate thread schedulings or instruction reorderings on other relaxed memory models. Further, it provides repair capabilities that automatically synthesize the required fences and atomic blocks.

Our ongoing work attempts to retarget the symbolic encodings used above for pedagogical applications. Graduate-level courses such as multiprocessor architecture, parallel programming, and advanced compilers discuss thread interleavings and relaxed memory models. However, such courses tend to be challenging for both students and instructors. We believe that students would appreciate a "concurrency playground" that lets them explore various concurrency constructs in a controlled setting, examining them both in isolation and in combination. Instructors face the challenge of setting questions that test the assimilation of certain topics and of grading the solutions. Our efforts apply the power of modern theorem provers to the symbolic encodings of concurrent programs under relaxed memory models (already developed for our debugging and repair tool) to attack these problems.

Commutativity in Concurrent Program Verification

ABSTRACT. Verification algorithms for concurrent programs can take advantage of commutativity between statements: Verification of a representative subset of interleavings often succeeds with simpler proofs and with less time and memory consumption. The precise notion of commutativity as well as the selection of representatives are two crucial aspects of such verification algorithms. We develop verification algorithms that are parametric in these aspects, and theoretically analyse as well as empirically evaluate the impact of the chosen parameters on the verification.
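
As a toy illustration of the commutativity notion involved (entirely hypothetical code, not the paper's formalization): two statements commute when running them in either order from the same state produces the same state, which is what lets a verifier consider only one representative interleaving.

    # Two statements commute if executing them in either order from any state
    # yields the same final state; verification then only needs a representative
    # of each equivalence class of interleavings.  Toy illustration only.

    def commute(stmt_a, stmt_b, states):
        """Check commutativity of two state transformers on sample states."""
        return all(stmt_a(stmt_b(dict(s))) == stmt_b(stmt_a(dict(s))) for s in states)

    def inc_x(s):  s["x"] = s["x"] + 1;        return s
    def dbl_y(s):  s["y"] = 2 * s["y"];        return s
    def use_x(s):  s["y"] = s["y"] + s["x"];   return s

    states = [{"x": i, "y": j} for i in range(3) for j in range(3)]
    print(commute(inc_x, dbl_y, states))   # True:  they touch disjoint variables
    print(commute(inc_x, use_x, states))   # False: use_x reads what inc_x writes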

SEAURCHIN: Bounded Model Checking for Rust

ABSTRACT. Rust is a systems programming language with similar applications as C and C++. The Rust compiler ensures that programs written in the “safe” subset of Rust possess certain safety properties, e.g., memory safety. At the same time practical Rust programs contain “unsafe” code for performance or legacy reasons – for example when delegating to existing C libraries. The Rust compiler, rustc, does not type check unsafe code. Thus, Rust programs using unsafe code may contain memory safety issues in addition to usual functional correctness problems. These errors are not restricted to the syntactically unsafe parts of the program but can manifest even in the safe part that calls unsafe code. Bounded Model Checking is an effective way to enable whole program analysis of Rust programs. Since Rust compiles to LLVM IR and SEAHORN has a bounded model checker (SEABMC) for LLVM IR  – it becomes feasible to use SEAHORN to verify Rust programs.

We build a pre-processing pipeline for SEAHORN called SEAURCHIN and use it to ascertain conditions for two standard library components String and RefCell to not panic at runtime. Interestingly, we find an invariant missing from the RefCell documentation whose absence leads to an overflow bug. Currently SEAURCHIN does not exploit ownership semantics (borrowed, owned, shared) from Rust to scale verification. Some ideas for doing so are discussed. We conclude by comparing our approach to that taken by other state-of-the-art verification tools for Rust.

Proof Production for Neural Network Verification

ABSTRACT. Deep neural networks (DNNs) are increasingly being employed in safety-critical systems, and there is an urgent need to guarantee their correctness. Consequently, the verification community has devised multiple techniques and tools for verifying DNNs. When DNN verifiers discover an input that triggers an error, that is easy to confirm; but when they report that no error exists, there is no way to ensure that the verification tool itself is not flawed. As multiple errors have already been observed in DNN verification tools, this calls the applicability of DNN verification into question. In this work, we present a novel mechanism for enhancing Simplex-based DNN verifiers with proof production capabilities: the generation of an easy-to-check witness of unsatisfiability, which attests to the absence of errors. Our proof production is based on an efficient adaptation of the well-known Farkas’ lemma, combined with mechanisms for handling piecewise-linear functions and numerical precision errors. As a proof of concept, we implemented our technique on top of the Marabou DNN verifier. Our evaluation on a safety-critical system for airborne collision avoidance shows that proof production succeeds in almost all cases, and requires only minimal overhead.
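
For reference, the linear-arithmetic core of such unsatisfiability witnesses is the classical Farkas' lemma, which the paper adapts to handle piecewise-linear (ReLU) constraints and numerical imprecision. In one common form, for a system of constraints Ax <= b exactly one of the following holds:

    \exists x.\; Ax \le b
    \qquad\text{or}\qquad
    \exists y \ge 0.\; y^{\mathsf{T}} A = 0 \;\wedge\; y^{\mathsf{T}} b < 0 .

A vector y of the second kind (the Farkas coefficients) certifies infeasibility, and checking it only requires matrix-vector arithmetic and sign comparisons, which is what makes such a witness easy to check.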

Trading Accuracy For Smaller Cardinality Constraints

ABSTRACT. We propose a new approximate Boolean cardinality constraint encoding that trades accuracy for size. With the right trade-off, it is smaller than any existing encoding that maintains generalized arc consistency through unit propagation. The accuracy, and with it the size, of the encoding can be adjusted.

Formal Approach to Identifying Genes and Microbes Significant to Inflammatory Bowel Disease 

ABSTRACT. Research into inflammatory bowel disease (IBD) has increased exponentially due to the influx of new methods for data analysis. Previous work mainly focused on investigating either the microbiome or transcriptomic data. However, there may be additional benefits to a multi-omics approach that combines microbiome and gene expression data. We present a formal method that uses Boolean implication analysis as a CAD tool to identify microbes and genes important to IBD and to the progression to IBD. We previously researched microbiomes and gene expression individually, and we describe a new approach to analyze the combined data. This analysis may provide novel microbes and genes that can be used for disease diagnosis and drug development, and it may increase accuracy and efficiency thanks to the formal-method approach.
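
As a rough illustration of what a Boolean implication test looks like (the threshold and the sparsity criterion below are placeholders, not the authors' statistics): after binarizing each measurement as high or low, an implication such as "x high implies y high" is supported when the (x high, y low) quadrant of the 2x2 contingency table is nearly empty.

    # Sketch of a Boolean implication test between two binarized measurements.
    # The threshold and the violation-rate criterion are placeholders only.

    def binarize(values, threshold):
        return [v >= threshold for v in values]

    def implies_high_high(xs, ys, max_violation_rate=0.05):
        """Check the Boolean implication 'x high => y high' on paired samples."""
        n = len(xs)
        violations = sum(1 for x, y in zip(xs, ys) if x and not y)
        support = sum(1 for x in xs if x)
        return support > 0 and violations / n <= max_violation_rate

    x = binarize([5.1, 0.2, 4.8, 6.0, 0.1, 5.5], threshold=1.0)
    y = binarize([3.3, 0.4, 2.9, 3.8, 2.5, 3.1], threshold=1.0)
    print(implies_high_high(x, y))   # True: whenever x is high, y is high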

Understanding Trust Assumptions for Attestation in Confidential Computing

ABSTRACT. Despite its critical role, remote attestation in Intel Software Guard Extensions (SGX) and Trust Domain Extensions (TDX) is poorly specified by Intel and has some obvious flaws. We believe that this is part of Intel's strategic policy to resist revealing the trust assumptions of the process.

On Optimizing Back-Substitution Methods for Neural Network Verification

ABSTRACT. With the increasing application of deep learning in mission-critical systems, there is a growing need to obtain formal guarantees about the behaviors of neural networks. Indeed, many approaches for verifying neural networks have been recently proposed, but these generally struggle with limited scalability or insufficient accuracy. A key component in many state-of-the-art verification schemes is computing lower and upper bounds on the values that neurons in the network can obtain for a specific input domain; the tighter these bounds, the more likely the verification is to succeed. Many common algorithms for computing these bounds are variations of the symbolic-bound propagation method; and among these, approaches that utilize a process called back-substitution are particularly successful. In this work, we present an approach for making back-substitution produce tighter bounds. To achieve this, we formulate and then minimize the imprecision errors incurred during back-substitution. We implement our approach as a proof-of-concept tool, and present favorable results compared to state-of-the-art verifiers that perform back-substitution.
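
The toy example below shows, for affine layers only, why back-substitution yields tighter bounds than layer-by-layer interval propagation: composing the symbolic expression down to the input before evaluating it once preserves the correlations between neurons that plain interval propagation discards. Real back-substitution additionally has to relax the ReLU activations, which is where the imprecision errors targeted by this work arise; the numbers and layer sizes here are arbitrary.

    import numpy as np

    # Naive interval propagation vs. back-substitution through two affine
    # layers y = W2 (W1 x + b1) + b2 with x in [l, u] elementwise.  ReLU
    # relaxations, needed by real verifiers, are omitted in this sketch.

    def interval_affine(W, b, l, u):
        """Interval bounds of W x + b when x is in [l, u] elementwise."""
        center, radius = (l + u) / 2, (u - l) / 2
        mid = W @ center + b
        rad = np.abs(W) @ radius
        return mid - rad, mid + rad

    l, u = np.array([-1.0, -1.0]), np.array([1.0, 1.0])
    W1, b1 = np.array([[1.0, 1.0], [1.0, -1.0]]), np.zeros(2)
    W2, b2 = np.array([[1.0, 1.0]]), np.zeros(1)

    # Layer-by-layer: bounds on the hidden layer, then bounds on y from them.
    lh, uh = interval_affine(W1, b1, l, u)
    ly, uy = interval_affine(W2, b2, lh, uh)
    print(ly, uy)                      # [-4.] [4.]  -- neuron correlations lost

    # Back-substitution: express y directly in terms of x, then evaluate once.
    ly, uy = interval_affine(W2 @ W1, W2 @ b1 + b2, l, u)
    print(ly, uy)                      # [-2.] [2.]  -- tighter (here even exact)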

Incremental Weighted Sampling

ABSTRACT. The fundamental problem of weighted sampling involves sampling satisfying assignments of Boolean formulas, which specify the sampling sets, according to distributions defined by pre-specified weight functions. The tight integration of sampling routines in various applications has highlighted the need for samplers to be incremental, i.e., samplers are expected to handle updates to the weight functions.

The primary focus of this line of work is an efficient knowledge compilation-based weighted sampler, INC, designed for incremental sampling. INC builds on top of the recently proposed knowledge compilation language OBDD[AND] and is accompanied by rigorous theoretical guarantees. Our extensive experiments demonstrate that INC is faster than the state-of-the-art approach on the majority of the evaluations. We discuss potential novel extensions of INC.

14:00-15:30 Session 27: Parameterized systems and quantified reasoning
14:00
Automatic Repair and Deadlock Detection for Parameterized Systems

ABSTRACT. We present an algorithm for the repair of parameterized systems. The repair problem is, for a given process implementation, to find a refinement such that a given safety property is satisfied by the resulting parameterized system, and deadlocks are avoided. Our algorithm employs a constraint-based search for candidate repairs, uses a parameterized model checker to determine their correctness, and updates the constraint system in case errors are reachable. We apply this algorithm to systems that can be represented as well-structured transition systems (WSTS), including disjunctive systems, pairwise rendezvous systems, and broadcast protocols. Furthermore, we use an abstract transition system to show that parameterized deadlock detection can be decided in EXPTIME for disjunctive systems, vastly improving on the best known lower bound, and we show that deadlock detection is in general undecidable for broadcast protocols.

14:25
Synthesizing Locally Symmetric Parameterized Protocols from Temporal Specifications

ABSTRACT. Scalable protocols and web services are typically parameterized: that is, each instance of the system is formed by linking together several isomorphic copies of a representative process. Verification of such systems is difficult due to state explosion for large instances and the undecidability of verifying properties over all instances at once. This work turns instead to the derivation of a parameterized protocol from its specification. We exploit a reduction theorem showing that it suffices to construct a representative process P that meets a local specification under interference by neighboring copies of P. The reduction from the original to a local specification is done by hand, but the construction of P is fully automated. This is a new and challenging synthesis question, as one must synthesize an unknown process P while simultaneously considering interference by copies of this unknown process. We present two algorithms: an eager reduction to the synthesis of a transformed specification, and a lazy, iterative, tableau construction which incorporates fresh interference at each step. The tableau method has worst-case complexity that is exponential in the length of the local specification. For any given instance of the parameterized protocol, the distributed system is built by deploying replicated instances of the synthesized representative. We have implemented the tableau construction and show that it is capable of synthesizing parameterized protocols for mutual exclusion, leader election, and dining philosophers.

14:50
Synthesizing Self-Stabilizing Parameterized Protocols with Unbounded Variables

ABSTRACT. The focus of this paper is on the synthesis of unidirectional symmetric ring protocols that are self-stabilizing. Such protocols have an unbounded number of processes and unbounded variable domains, yet they ensure recovery to a set of legitimate states from any state. This is a significant problem, as many distributed systems should preserve their fault-tolerance properties when they scale. While previous work addresses this problem for constant-space protocols, where the domain sizes of variables are fixed regardless of the ring size, this work tackles the synthesis problem assuming that both the variable domains and the number of processes in the ring are unbounded (but finite). We present a sufficient condition for synthesis and develop a sound algorithm that takes a conjunctive state predicate representing legitimate states and generates the parameterized actions of a protocol that is self-stabilizing to the legitimate states. We characterize the unbounded nature of protocols as semilinear sets, and show that such a characterization simplifies synthesis. The proposed method addresses a longstanding problem because recovery is required from any state in an unbounded state space. For the first time, we synthesize some self-stabilizing unbounded protocols, including a near agreement and a parity protocol.
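
For reference, a semilinear set, the representation used here to characterize the unbounded protocols, is a finite union of linear sets of vectors over the naturals:

    L(b; p_1, \dots, p_k) = \{\, b + \lambda_1 p_1 + \cdots + \lambda_k p_k \mid \lambda_1, \dots, \lambda_k \in \mathbb{N} \,\},
    \qquad
    S = \bigcup_{i=1}^{m} L(b_i; p_{i,1}, \dots, p_{i,k_i}),

with base vectors b_i and period vectors p_{i,j} in \mathbb{N}^d.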

15:15
The Rapid Software Verification Framework

ABSTRACT. We present the RAPID framework for automating software verification by applying first-order reasoning in trace logic. RAPID establishes partial correctness of programs with loops and arrays by inferring loop invariants necessary to prove program correctness using a saturation-based automated theorem prover. RAPID can heuristically generate trace lemmas, common program properties that guide inductive invariant reasoning. Alternatively, RAPID can exploit nascent support for induction in modern provers to fully automate inductive reasoning without the use of trace lemmas. In addition, RAPID can be used as an invariant generation engine, supplying other verification tools with quantified loop invariants necessary for proving partial program correctness.

16:00-17:15 Session 29: Hardware and RTL - Part 2
16:00
Reconciling Verified-Circuit Development and Verilog Development

ABSTRACT. In software development, verified compilers like the CompCert compiler and the CakeML compiler enable a methodology for software development and verification that allows software developers to establish program-correctness properties at the verified compiler's target level. Inspired by verified compilers for software development, the verified Verilog synthesis tool Lutsig enables the same methodology for Verilog hardware development. In this paper, we address how Verilog features that must be understood as hardware constructs, rather than as software constructs, fit into hardware development methodologies, such as Lutsig's, that are inspired by the development methodology enabled by software compilers. We explore this issue by extending the subset of Verilog supported by Lutsig with one such feature: always_comb blocks. In extending Lutsig's Verilog support with this seemingly minor feature, we are, perhaps surprisingly, required to revisit Lutsig's methodology for circuit development and verification; this revisit, it turns out, requires reconciling traditional Verilog development with the traditional program-verification methodology offered by verified software compilers. All development for this paper has been carried out in the HOL4 theorem prover.

16:25
Timed Causal Fanin Analysis for Symbolic Circuit Simulation

ABSTRACT. Symbolic circuit simulation has been the main vehicle for formal verification of Intel Core processor execution engines for over twenty years. It extends traditional simulation by allowing symbolic variables in the stimulus, covering the circuit behavior for all possible values simultaneously. A distinguishing feature of symbolic simulation is that it gives the human verifier clear visibility into the progress of the computation during the verification of an individual operation, and fine-grained control over the simulation to focus only on the datapath for that operation while abstracting away the rest of the circuit behavior.

In this paper we describe an automated simulation complexity reduction method called timed causal fanin analysis that can be used to carve out the minimal circuit logic needed for the verification of an operation on a cycle-by-cycle basis. The method has been a key component of Intel's large-scale execution engine verification efforts, enabling closed-box verification of most operations at the interface level.
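
The sketch below illustrates the generic idea of timed (per-cycle) fanin extraction on a toy netlist: at each cycle the combinational fanin closure of the observed signals is computed, and crossing a register adds its data inputs to the set of signals needed at the previous cycle. The netlist format and names are illustrative; this is not Intel's tool or its data structures.

    # Generic per-cycle fanin extraction over a toy netlist (illustration only).
    # comb_fanin / flop_fanin map each signal to the set of signals driving it,
    # combinationally and through state elements respectively.

    def timed_fanin(comb_fanin, flop_fanin, outputs, cycles):
        needed = {cycles: set(outputs)}            # signals observed at the last cycle
        for t in range(cycles, -1, -1):
            frontier = set(needed.get(t, set()))
            closed = set()
            while frontier:                        # combinational closure within cycle t
                s = frontier.pop()
                if s in closed:
                    continue
                closed.add(s)
                frontier |= comb_fanin.get(s, set()) - closed
            needed[t] = closed
            if t > 0:                              # cross a register boundary
                prev = needed.setdefault(t - 1, set())
                for s in closed:
                    prev |= flop_fanin.get(s, set())
        return needed                              # cycle -> signals to simulate

    comb = {"sum": {"a", "b"}, "b": {"sel", "c"}}
    flops = {"a": {"a_next"}, "c": {"c_next"}}
    print(timed_fanin(comb, flops, outputs={"sum"}, cycles=1))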

As a specific application, we discuss the formal verification of Intel's new half-precision floating-point FP16 micro-instruction set. Thanks to the ability of the timed causal fanin analysis to separate the half-precision datapaths from the full-width ones, we were able to verify all these instructions closed-box, including the most complex ones like fused multiply-add and division. This led to the early detection of several deep datapath bugs.

16:50
Divider Verification Using Symbolic Computer Algebra and Delayed Don’t Care Optimization
PRESENTER: Alexander Konrad

ABSTRACT. Recent methods based on Symbolic Computer Algebra (SCA) have shown great success in the formal verification of multipliers and, more recently, of dividers as well. In this paper we enhance known approaches by the computation of satisfiability don’t cares for so-called Extended Atomic Blocks (EABs) and by Delayed Don’t Care Optimization (DDCO) for optimizing polynomials during backward rewriting. Using these novel methods we are able to extend the applicability of SCA-based methods to further divider architectures which could not be handled by previous approaches. We successfully apply the approach to the fully automatic formal verification of large dividers (with bit widths up to 512).
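
As a minimal worked example of the backward-rewriting step at the core of SCA-based verification, shown here on a half adder (this illustrates the basic scheme only; the paper's contributions, EABs and DDCO, extend it to divider architectures): substitute each gate's output polynomial into the specification polynomial in reverse topological order and check that the remainder is zero.

    import sympy as sp

    # Backward rewriting on a half adder: start from the word-level
    # specification polynomial and substitute each gate output by its gate
    # polynomial; the circuit is correct iff the remainder reduces to 0.

    a, b = sp.symbols("a b")
    s, c = sp.symbols("s c")           # gate outputs: sum (XOR), carry (AND)

    spec = 2 * c + s - (a + b)         # specification: a + b = 2*carry + sum

    gate_polys = {                     # gate polynomials over Boolean variables
        c: a * b,                      # AND gate
        s: a + b - 2 * a * b,          # XOR gate
    }

    remainder = spec
    for out, poly in gate_polys.items():   # reverse topological order
        remainder = sp.expand(remainder.subs(out, poly))

    # Reduce x^2 -> x for Boolean variables, then check for the zero polynomial.
    remainder = sp.expand(remainder.subs({a**2: a, b**2: b}))
    print(remainder)                   # 0  -> the half adder implements a + b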

18:30-22:30 Banquet at Cantina Martinelli

Social dinner in a beautiful historic winery in Trentino.

https://www.cantinamartinelli.com/?lang=en