Pattern-based Synthesis of Synchronization for the C++ Memory Model

SPEAKER: unknown

ABSTRACT. We address the problem of synthesizing efficient and correct synchronization for programs running under the C++ relaxed memory model.
Given a finite-state program $P$ and a safety property $S$ such that $P$ satisfies $S$ under a sequentially consistent (SC) memory model, our approach automatically eliminates concurrency errors in $P$ due to the relaxed memory model, by creating a new program $P'$ with additional synchronization. Our approach works by automatically exploring the space of programs that can be created from $P$ by adding synchronization operations. To explore this (vast) space, our algorithm:
\begin{inparaenum}[(i)]
\item explores bounded error traces to detect memory access patterns that can occur under the C++ memory model but not under SC, and
\item eliminates these error traces by adding appropriate synchronization operations.
\end{inparaenum}

We implemented our approach using CDSChecker as an oracle for detecting error traces and Z3 to symbolically explore the space of possible solutions.
Our tool successfully synthesized synchronization operations for several challenging concurrent algorithms, including a state-of-the-art Read-Copy-Update (RCU) algorithm.

An SMT-based Approach to Fair Termination Analysis

SPEAKER: unknown

ABSTRACT. Algorithms for the coverability problem have been successfully applied to safety checking for concurrent programs. In a previous paper (An SMT-based Approach to Coverability Analysis, CAV 2014), we revisited a constraint-based approach to coverability rooted in classical Petri net analysis techniques and implemented it on top of state-of-the-art SMT solvers. In this paper we extend the approach to fair termination; many other liveness properties can be reduced to fair termination using the automata-theoretic approach to verification. We use T-invariants to identify potential infinite computations of the system, and design a novel technique to discard false positives, that is, potential computations that are not actually executable. We validate our technique on a large number of case studies.

Verification of Cache Coherence Protocols wrt. Trace Filters

SPEAKER: unknown

ABSTRACT. We address the problem of parameterized verification of cache coherence protocols for hardware accelerated transactional memories. In this setting, transactional memories leverage the versioning capabilities of the underlying cache coherence protocol. The length of the transactions, their number, and the number of manipulated variables (i.e., cache lines) are parameters of the verification problem. Caches in such systems are finite-state automata communicating via broadcast and shared variables. We augment our system with filters that restrict the set of possible executable traces according to existing conflict resolution policies. We show that under some reasonable assumptions the verification problem for parameterized systems with filters can be reduced to systems with only a finite number of cache lines.
For verification, we show how to account for the effect of the adopted filters in a symbolic backward reachability algorithm based on the framework of constrained monotonic abstraction. We have implemented our method and used it to verify transactional memory coherence protocols with respect to different conflict resolution policies.

Transaction Flows and Executable Models: Formalization and Analysis of Message Passing Protocols

SPEAKER: unknown

ABSTRACT. The lack of models is often the biggest hurdle in using formal
methods in the industry. Creating formally analyzable models of
industrial size designs is a challenging task, one that we believe has
not been sufficiently addressed by existing research. We address this
problem for distributed message passing protocols. We show how to
synthesize formal executable models of these protocols from
transaction message flows, which are readily available in architecture
descriptions. We present industrial case studies showing that this
approach to creating formal models is effective in practice. We also
show that going the other way, \ie, extracting flows from executable
models, is at least as hard as the model checking problem. Our
results indicate that transaction flows may provide a better means of
capturing design intent than executable models.

Template-based Synthesis of Instruction-Level Abstractions for SoC Verification

SPEAKER: unknown

ABSTRACT. Contemporary integrated circuits are complex system-on-chip (SoC) designs consisting of programmable cores along with accelerators and peripherals controlled by firmware running on the cores. Verification of such SoCs is challenging, especially for system-level properties maintained by a combination of firmware and hardware. Attempting to formally verify the full SoC design, considering both firmware and hardware, is not scalable, while verifying them separately can miss bugs.

An abstraction that can be used instead of the cycle-accurate, bit-precise hardware implementation can help in scalably verifying system-level properties of SoCs. However, there are two challenges in applying this technique in practice. First, constructing an abstraction that captures all the required details and interactions is error-prone, tedious, and time-consuming. Second, the abstraction must be shown correct, so that properties proven using it are valid for the implementation.

In this paper, we introduce a methodology for SoC verification that is based on
synthesizing an instruction-level abstraction (ILA) that {precisely captures} updates to all firmware-accessible state spanning the cores, accelerators and peripherals. The synthesis algorithm uses a blackbox simulator to synthesize the ILA from a template specification. A "golden-model" generated from the ILA is used to verify that the hardware implementation matches the ILA. We demonstrate the methodology using a small SoC design consisting of the 8051 microcontroller and two cryptographic accelerators built using open source components. The methodology uncovered 14 bugs.

ABSTRACT. Reverse engineering is the extraction of word-level information from a gate-level netlist. It has applications in formal verification, hardware trust, information recovery, and general technology mapping. A preprocessing step finds blocks in a circuit in which word-level components are expected. A second step searches for word-level components in these blocks. For this second step, we propose two variants of equivalence checking that consider subfunction containment. We propose algorithms to solve these variants by reducing the problem to subgraph isomorphism. A simulation graph (SG) is constructed for the block and for each library component, given a set of carefully chosen simulation vectors. If a library component SG is a subgraph of the block SG, we have a candidate match, which is then checked by standard equivalence checking. We extend a state-of-the-art subgraph isomorphism algorithm, LAD, to handle simulation graphs efficiently, and also propose a SAT-based formulation. Experimental evaluations show that our algorithms can efficiently find 32-bit arithmetic components in blocks with over 300 primary inputs.

Formal Verification of Automatic Circuit Transformations for Fault-Tolerance

SPEAKER: unknown

ABSTRACT. We present a language-based approach to certify fault-tolerance techniques for digital circuits. Circuits are expressed in a gate-level Hardware Description Language (HDL), fault-tolerance techniques are described as automatic circuit transformations in that language, and fault models are specified as particular semantics of the HDL. These elements are formalized in the Coq proof assistant, where properties ensuring that, for every circuit, its transformed version masks all faults of the considered fault model can be expressed and proved. In this article, we consider Single-Event Transients (SETs) and fault models of the form “at most 1 SET within k clock cycles”. The primary motivation of this work was to certify the Double-Time Redundant Transformation (DTR), a new technique proposed recently [1]. The DTR transformation combines double-time redundancy, micro-checkpointing, rollback, several execution modes, and input/output buffers. This intricacy called for a formal proof to ensure that no single point of failure existed. The correctness of DTR, as well as that of two other fault-tolerance transformations, has been proved in Coq.