FMCAD 2019: FORMAL METHODS IN COMPUTER-AIDED DESIGN 2019
PROGRAM FOR WEDNESDAY, OCTOBER 23RD

09:00-10:00 Session 4: Hardware Verification (Part 1)
09:00
Boosting Verification Scalability via Structural Grouping and Semantic Partitioning of Properties
PRESENTER: Rohit Dureja

ABSTRACT. From equivalence checking to functional verification to design-space exploration, industrial verification tasks entail checking a large number of properties on the same design. State-of-the-art tools typically solve all properties concurrently, or one at a time. They do not optimally exploit subproblem sharing between properties, leaving an opportunity to save considerable verification resources via concurrent verification of properties with nearly identical cones of influence (COIs). These high-affinity properties can be solved concurrently; the verification effort expended for one can be directly reused to accelerate the verification of the others, without degrading per-property verification by bloating COI size. We present a near-linear-runtime algorithm for partitioning properties into provably high-affinity groups for concurrent solution. We also present an effective method to partition high-structural-affinity groups using semantic feedback, to yield an optimal multi-property localization abstraction solution. Experiments demonstrate substantial end-to-end verification speedups through these techniques, leveraging parallel solution of individual groups.
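
To make the structural-grouping idea concrete, the following minimal Python sketch greedily clusters properties whose COIs are nearly identical, using Jaccard similarity over sets of gate ids. It illustrates the idea only, not the paper's algorithm: the function names and the 0.9 threshold are our assumptions, and this greedy pass lacks the near-linear runtime and provable affinity guarantees the paper establishes.

    # Sketch: group properties whose cones of influence nearly coincide.
    # `coi` maps each property to a frozenset of gate ids in its COI.
    def group_by_affinity(props, coi, threshold=0.9):
        def affinity(a, b):                       # Jaccard similarity of two COIs
            return len(a & b) / len(a | b) if (a | b) else 1.0
        groups = []                               # pairs: (representative COI, members)
        for p in props:
            for rep, members in groups:
                if affinity(coi[p], rep) >= threshold:
                    members.append(p)             # high affinity: solve together
                    break
            else:
                groups.append((coi[p], [p]))      # start a new group
        return [members for _, members in groups]

Solving each group concurrently then lets effort spent on one property be reused for the others in its group, since their logic cones almost coincide.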

09:30
Input Elimination Transformations for Scalable Verification and Trace Reconstruction

ABSTRACT. We present two novel sound and complete netlist transformations, which substantially improve verification scalability while enabling very efficient trace reconstruction. First, we present a 2QBF variant of input reparameterization, capable of eliminating inputs without introducing new logic and without complete range computation. While weaker in reduction potential, it yields up to four orders of magnitude speedup in trace reconstruction when used as a fast-and-lossy preprocess to traditional reparameterization. Second, we present a novel scalable approach that leverages sequential unateness to merge selective inputs, in some cases greatly reducing netlist size and verification complexity. Extensive benchmarking demonstrates the utility of these techniques. Connectivity verification particularly benefits, with reductions of up to 99.8%.
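
As background for the second technique, the standard combinational notion of unateness (our phrasing; the paper develops a sequential analogue): a function $f$ is positive unate in an input $x$ iff $f|_{x=0} \rightarrow f|_{x=1}$ is a tautology, and negative unate iff $f|_{x=1} \rightarrow f|_{x=0}$ is a tautology. Intuitively, when bad-state reachability is monotone in an input, that input can be tied to its worst-case value, or merged with an input of like polarity, without changing the verification outcome; this is the kind of netlist reduction that input merging exploits.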

10:30-11:30 Session 5: Hardware Verification (Part 2)
10:30
Chasing Minimal Inductive Validity Cores in Hardware Model Checking

ABSTRACT. Model checking of safety properties is fundamental in formal verification. When a safety property is found to hold, the model checker provides (at best) a machine-checkable certificate that gives limited insight to users and little confidence that the check passes for the ``right'' reasons, rather than due to, e.g., vacuity or unjustified assumptions. Recently, inductive validity cores (IVCs) have been developed to address this issue. In this paper, we lift several algorithms from the field of UNSAT core extraction in order to compute minimal IVCs of hardware safety checking problems. The MARCO algorithm extracts all minimal cores of an UNSAT formula by efficiently exploring the formula's power set, and has already been applied to compute IVCs of software safety checking problems. The CAMUS algorithm for UNSAT core extraction exploits a duality between minimal correction subsets (MCSes) of a formula and minimal UNSAT cores. We construct a hybrid algorithm that subsumes both CAMUS and MARCO, adapt it to the hardware IVC context, and introduce novel domain-specific optimizations. Several instances of the hybrid algorithm are presented (including CAMUS and MARCO themselves, among other novel variants) and evaluated empirically on hardware model checking competition circuits, demonstrating the practicality of the proposed algorithm.
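
The MARCO scheme referenced above can be illustrated in a few lines of Python. This toy version explores the power set by brute force and uses an abstract `is_unsat` oracle, so it only conveys the shrink/grow/block structure; real MARCO maintains the map of unexplored subsets in a SAT solver, and the paper further adapts the scheme to inductive validity cores. All names here are ours.

    from itertools import combinations

    def shrink(seed, is_unsat):                   # UNSAT seed -> minimal core (MUS)
        mus = set(seed)
        for c in sorted(seed):
            if c in mus and is_unsat(frozenset(mus - {c})):
                mus.discard(c)                    # c is not needed for UNSAT
        return frozenset(mus)

    def grow(seed, universe, is_unsat):           # SAT seed -> maximal SAT set (MSS)
        mss = set(seed)
        for c in sorted(universe - mss):
            if not is_unsat(frozenset(mss | {c})):
                mss.add(c)
        return frozenset(mss)

    def marco(universe, is_unsat):
        muses, up, down = [], [], []              # block supersets of MUSes, subsets of MSSes
        def unexplored(s):
            return not (any(s >= m for m in up) or any(s <= m for m in down))
        while True:                               # pick a maximal unexplored seed
            seed = next((frozenset(c)
                         for k in range(len(universe), -1, -1)
                         for c in combinations(sorted(universe), k)
                         if unexplored(frozenset(c))), None)
            if seed is None:
                return muses                      # power set fully covered
            if is_unsat(seed):
                m = shrink(seed, is_unsat)
                muses.append(m)
                up.append(m)
            else:
                down.append(grow(seed, universe, is_unsat))

    # Demo: constraints as allowed-value sets over a small integer domain.
    domain = set(range(-10, 11))
    cons = {0: {v for v in domain if v > 0},      # x > 0
            1: {v for v in domain if v < 0},      # x < 0
            2: {5}}                               # x == 5
    def is_unsat(sub):
        vals = set(domain)
        for i in sub:
            vals &= cons[i]
        return not vals
    print(marco(frozenset(cons), is_unsat))       # the two MUSes: {1, 2} and {0, 1}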

11:00
Verifying Large Multipliers by Combining SAT and Computer Algebra
PRESENTER: Daniela Kaufmann

ABSTRACT. We combine SAT and computer algebra to substantially improve the most effective approach for automatically verifying integer multipliers. In our approach, complex final-stage adders are detected and replaced by simple adders. The simplified multipliers are verified by computer algebra techniques, and the correctness of the replacement step is verified by SAT solvers. Our new dedicated reduction engine relies on a Gröbner basis theory for coefficient rings which, in contrast to previous work, are no longer required to be fields. Modular reasoning allows us to verify not only large unsigned and signed multipliers much more efficiently, but also truncated multipliers. We are further able to generate and check proofs an order of magnitude faster than in our previous work, relative to verification time, while other competing approaches do not provide certificates.
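
For orientation, the word-level check that the algebraic part of such approaches discharges can be stated as follows (a standard formulation in this line of work; the notation is ours). For an $n$-bit unsigned multiplier with input bits $a_0,\dots,a_{n-1}$ and $b_0,\dots,b_{n-1}$, output bits $s_0,\dots,s_{2n-1}$, and gate polynomials $G$, correctness amounts to the ideal membership

    $\bigl(\sum_{i=0}^{2n-1} 2^i s_i\bigr) - \bigl(\sum_{i=0}^{n-1} 2^i a_i\bigr)\bigl(\sum_{j=0}^{n-1} 2^j b_j\bigr) \in \langle G \rangle + \langle x^2 - x \mid x \text{ a circuit variable} \rangle,$

checked by reducing the specification polynomial to zero against a Gröbner basis. For truncated multipliers, the same reduction is carried out with coefficients taken modulo a power of two, i.e., in a ring that is not a field, which is where the generalized Gröbner basis theory over coefficient rings is needed.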

13:30-15:00 Session 7: Software Verification
13:30
Unification-based Pointer Analysis without Oversharing

ABSTRACT. Pointer analysis is indispensable for effectively verifying heap-manipulating programs. Even though it has been studied extensively, there are no publicly available pointer analyses that are moderately precise while scalable to large real-world programs. In this paper, we show that existing context-sensitive unification-based pointer analyses suffer from the problem of oversharing: propagating too many abstract objects across the analysis of different procedures, which prevents them from scaling to large programs. We present a new pointer analysis for LLVM, called TeaDsa, without such oversharing. We show how to further improve the precision and speed of TeaDsa with extra contextual information, such as flow-sensitivity at call- and return-sites, and type information about memory accesses. We evaluate TeaDsa on the verification problem of detecting unsafe memory accesses and compare it against two state-of-the-art pointer analyses: SVF and SeaDsa. We show that TeaDsa is one order of magnitude faster than either SVF or SeaDsa, strictly more precise than SeaDsa, and, surprisingly, sometimes more precise than SVF.
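
For background, here is a minimal Python sketch of the unification-based (Steensgaard-style) core that such analyses build on; all names are ours, and TeaDsa itself is considerably richer and engineered specifically against the oversharing described above. The sketch shows why unification is fast: each assignment only merges equivalence classes.

    # Steensgaard-style unification points-to analysis (background sketch).
    # Each equivalence class has at most one points-to target; unifying two
    # classes recursively unifies their targets. Path compression omitted.
    class Node:
        def __init__(self, name):
            self.name, self.parent, self.pointee = name, None, None
        def find(self):
            n = self
            while n.parent is not None:
                n = n.parent
            return n

    def unify(a, b):
        a, b = a.find(), b.find()
        if a is b:
            return
        b.parent = a                              # merge the two classes
        if b.pointee is not None:
            p, b.pointee = b.pointee, None
            if a.pointee is None:
                a.pointee = p
            else:
                unify(a.pointee, p)               # targets must also be unified

    def pointee_of(p):                            # the class that p's class points to
        r = p.find()
        if r.pointee is None:
            r.pointee = Node("*" + r.name)
        return r.pointee

    def addr_of(p, x):                            # models p = &x
        unify(pointee_of(p), x)

    def copy(p, q):                               # models p = q, i.e. pts(p) = pts(q)
        unify(pointee_of(p), pointee_of(q))

    # Demo: after p = &x; q = &y; p = q, unification conflates x and y.
    # Propagating such merged objects across procedure boundaries is the
    # oversharing the paper targets.
    x, y, p, q = Node("x"), Node("y"), Node("p"), Node("q")
    addr_of(p, x); addr_of(q, y); copy(p, q)
    print(x.find() is y.find())                   # True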

14:00
Concurrent Chaining Hash Map for Software Model Checking

ABSTRACT. Stateful model checking creates numerous states, which must be stored and checked to determine whether they have already been visited. A hash map is one option for such storage and has been used in many model checkers. In particular, we are interested in the performance of concurrent hash maps for use in multi-core model checkers with a variable state vector size. Previous research claimed that open addressing was the best-performing method for the parallel speedup of concurrent hash maps. Here, however, we demonstrate that chaining lends itself perfectly to use in a concurrent setting.

We implemented 12 hash map variants, all aiming at multi-core efficiency. Eight of our implementations support variable-length key-value pairs. We compare our implementations and 22 other hash maps by means of an extensive test suite. Of these 34 hash maps, we show the representative performance of 11.

Our implementations not only support state vectors of variable length, but also feature superior scalability compared to competing hash maps. Our benchmarks show that on 96 cores, our best hash map is 1.3 to 2.6 times faster than competing hash maps for load factors under 1. For higher load factors, it is an order of magnitude faster.
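
A minimal Python sketch of the chaining idea in a concurrent setting is shown below (ours, for illustration only: one lock per bucket, whereas high-performance maps typically update chain heads with atomic compare-and-swap and manage memory far more carefully). The core operation of a model checker's state store is find-or-put: insert the state vector if it is new, and report whether it was already visited.

    import threading

    class ConcurrentChainingMap:
        """Chaining hash map with one lock per bucket (illustrative only)."""
        def __init__(self, n_buckets=1 << 16):
            self.buckets = [[] for _ in range(n_buckets)]
            self.locks = [threading.Lock() for _ in range(n_buckets)]
            self.mask = n_buckets - 1

        def find_or_put(self, key, value):
            """Return (stored_value, seen_before). `key` may be a bytes
            object of any length, e.g. a variable-length state vector."""
            i = hash(key) & self.mask
            with self.locks[i]:                   # threads on other buckets proceed
                for k, v in self.buckets[i]:
                    if k == key:
                        return v, True            # state already visited
                self.buckets[i].append((key, value))
                return value, False               # newly inserted state

Chaining keeps each collision local to one bucket, so contention is limited to threads that hash to the same chain, which is one reason chaining can scale well concurrently.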

14:30
Proving Data Race Freedom in Task Parallel Programs with a Weaker Partial Order

ABSTRACT. Task parallel programming models such as Habanero Java help developers write idiomatic parallel programs and avoid common errors. Data race freedom is a desirable property for task parallel programs but is difficult to prove because every possible execution of the program must be considered. A partial order over events of an observed program execution induces an equivalence class of executions that the program may also produce. The Does-not-Commute (DC) relation is an efficiently computable partial order used for data race detection. Because the DC relation is weaker than other partial orders, it can represent larger equivalence classes of program executions. However, some of these executions may be infeasible, leading to false data race reports. The contribution of this paper is a mechanized proof that the DC relation is actually sound for commonly used task parallel programming models. Sound means that the first data race identified by the DC relation is guaranteed to be a real data race. A prototype analysis in the Java Pathfinder model checker shows that the DC relation can significantly reduce the number of explored states required to prove data race freedom in Habanero Java programs.
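
The role the partial order plays can be sketched generically in Python (this is not the DC relation itself, whose definition over task-parallel events is the subject of the paper): two accesses to the same location race if at least one is a write and neither is ordered before the other. A weaker relation orders fewer pairs, hence covers larger equivalence classes and flags more candidate races, which is why soundness of the first reported race is the key result.

    from collections import namedtuple

    Event = namedtuple("Event", "tid loc kind")   # kind is "r" or "w"

    def races(events, ordered):
        """Report unordered conflicting access pairs. `ordered(a, b)` is a
        stand-in for a partial order such as happens-before or DC."""
        found = []
        for i, a in enumerate(events):
            for b in events[i + 1:]:
                if (a.loc == b.loc and "w" in (a.kind, b.kind)
                        and not ordered(a, b) and not ordered(b, a)):
                    found.append((a, b))
        return found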

15:30-17:00 Session 8: Network Verification and Synthesis
15:30
BDD-Based Algorithms for Packet Classification

ABSTRACT. Packet classifiers are the building blocks of modern networking. A classifier determines the action to take on a packet by matching its header against a set of rules. Efficient classification is achieved by using associative memory to perform the match operation in one clock cycle. This requires compressing large rule sets to fit in the small associative memory space available in modern network switches. Failure to do so leads to increased infrastructure cost and reduced performance. We propose two symbolic rule set compression algorithms based on binary decision diagrams. Following McGeer and Yalagandula, we formalize the problem as that of obtaining a sequential cover of the rule set. We develop a simple BDD-based algorithm for computing sequential covers, which significantly outperforms state-of-the-art algorithms in terms of compression ratio, a surprising result that highlights the unexplored potential of symbolic techniques in packet classification. Despite this improvement, very large industrial classifiers are still beyond reach. We decompose such classifiers into a pipeline of smaller classifiers over subsets of packet header fields. We then compress each classifier using the sequential cover technique. Our algorithm is able to compress industrial rule sets with hundreds of thousands of rules to readily fit in the memory of network switches.
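
The flavor of the problem can be shown with an explicit-set toy in Python (ours): sets over a tiny header space stand in for the BDDs the paper manipulates symbolically, and greedy redundancy removal stands in for the sequential-cover computation. A classifier is an ordered rule list where the first matching rule wins, and compression must preserve the classification of every header.

    # Toy rule-list compression over an explicit 8-bit header space.
    def classify(rules, pkt, default="drop"):
        for match, action in rules:               # first match wins
            if pkt in match:                      # `match` is a set here, a BDD in the paper
                return action
        return default

    def compress(rules, headers):
        out = list(rules)
        i = 0
        while i < len(out):
            trial = out[:i] + out[i + 1:]         # try dropping rule i
            if all(classify(trial, h) == classify(out, h) for h in headers):
                out = trial                       # semantics preserved on all headers
            else:
                i += 1
        return out

    H = range(256)
    rules = [({*range(0, 128)}, "fwd"),
             ({*range(64, 128)}, "fwd"),          # fully shadowed by the rule above
             ({*range(128, 256)}, "police")]
    print(len(compress(rules, H)))                # 2: the shadowed rule is dropped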

16:00
TSNsched: Automated Schedule Generation for Time Sensitive Networking

ABSTRACT. Time Sensitive Networking (TSN) is a set of standards enabling high-performance deterministic communication using time scheduling. Due to the size of industrial networks, configuring TSN networks manually is challenging. We present TSNsched, a tool for automatically generating schedules for TSN. TSNsched takes as input the logical topology of a network, expressed as flows, and outputs schedules for TSN switches by using an SMT solver. If such a schedule exists, the generated schedule guarantees the desired network performance, specified in terms of latency and jitter. TSNsched supports unicast and multicast flows, such as those in publish-subscribe networks. TSNsched can be run as a standalone tool and also allows rapid prototyping with the available Java API. We evaluate TSNsched on a number of realistic-size network topologies. TSNsched can generate high-performance schedules, with average latency below 1000 μs and average jitter below 20 μs, for TSN networks with up to 73 subscribers and up to 10 multicast flows.
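
To give a flavor of the SMT encoding (a minimal single-link toy with made-up numbers, not TSNsched's actual constraint system or network model), the sketch below uses Z3's Python API: each flow gets a transmission offset within a cycle, transmissions may not overlap, and each must complete within a latency bound.

    from z3 import Int, Solver, Or, sat

    cycle, dur, latency_bound = 1000, 100, 400    # microseconds (illustrative)
    flows = ["f0", "f1", "f2"]
    off = {f: Int("off_" + f) for f in flows}     # transmission offsets to solve for

    s = Solver()
    for f in flows:
        s.add(off[f] >= 0, off[f] + dur <= cycle)         # fits in the cycle
        s.add(off[f] + dur <= latency_bound)              # meets the latency bound
    for i, f in enumerate(flows):
        for g in flows[i + 1:]:                           # no two transmissions overlap
            s.add(Or(off[f] + dur <= off[g], off[g] + dur <= off[f]))

    if s.check() == sat:
        m = s.model()
        print({f: m[off[f]] for f in flows})      # e.g. offsets 0, 100, 200
    else:
        print("no feasible schedule")             # mirrors "if such schedules exist"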

16:30
Verification and Synthesis of Symmetric Uni-Rings for Leads-To Properties

ABSTRACT. This paper investigates the verification and synthesis of parameterized protocols that satisfy global leads-to properties $R \leadsto Q$ on symmetric unidirectional rings (a.k.a. uni-rings) of deterministic and constant-space processes under the interleaving semantics and no fairness assumption, where $R$ and $Q$ denote global state predicates. First, we show that verifying $R \leadsto Q$ for parameterized protocols on symmetric uni-rings is undecidable, even for deterministic and constant-space processes and conjunctive state predicates. Then, we show that, surprisingly, synthesizing symmetric uni-ring protocols that satisfy $R \leadsto Q$ is decidable. We identify necessary and sufficient conditions for the decidability of synthesis, based on which we devise a sound and complete algorithm that takes the predicates $R$ and $Q$ and automatically generates a parameterized protocol satisfying $R \leadsto Q$ for unbounded (but finite) ring sizes. We use our algorithm to synthesize several parameterized protocols, including agreement and parity protocols.
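
For a fixed ring size, the verification side is an explicit-state check, sketched below in Python (names ours): $R \leadsto Q$ fails exactly when some reachable $R$-state can start a path that avoids $Q$ forever, i.e., reaches a $Q$-avoiding cycle or a $Q$-avoiding deadlock (with no fairness assumed, any such path is a counterexample). The undecidability result concerns the parameterized question over all ring sizes, which no bounded number of such per-size checks can settle.

    def leads_to(r_states, step, Q):
        """Check R ~> Q on one finite instance: from every reachable
        R-state in `r_states`, every path must reach a Q-state. `step`
        enumerates interleaved successors. Recursive DFS; small instances."""
        GRAY, BLACK = 1, 2
        color = {}
        def avoids_q_forever(s):                  # Q-avoiding lasso or dead end?
            if Q(s):
                return False
            c = color.get(s)
            if c == GRAY:
                return True                       # cycle that never meets Q
            if c == BLACK:
                return False                      # already known harmless
            color[s] = GRAY
            succ = list(step(s))
            bad = (not succ) or any(avoids_q_forever(t) for t in succ)
            color[s] = BLACK
            return bad
        return not any(avoids_q_forever(s) for s in r_states)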