CAV 2020: 32ND INTERNATIONAL CONFERENCE ON COMPUTER AIDED VERIFICATION
PROGRAM FOR TUESDAY, JULY 21ST

08:00-09:00 Session 2: Logic Lounge
08:00
Working as Intended: Surveillance Capitalism is not a Rogue Capitalism

ABSTRACT. With every click and keystroke on the World Wide Web we produce an abundance of data, voluntarily or not. Big Tech companies provide free services that billions of people use, which in turn enables them to monitor the behaviour of their users in astonishing detail, very often without their explicit consent. This new form of capitalism, which found a way to turn technology to its purposes, has recently been named “surveillance capitalism”: companies spy because data allows them to influence their users’ behaviour, to their own benefit, in a devastatingly effective way.

However, what if Big Tech’s ability to command billions for ads has more to do with cornering markets and eking out marginal gains through targeting, with stale data being largely useless for commercial purposes, yet still full of juicy kompromat for greedy state surveillance agencies?

Bio:

Cory Doctorow is a science fiction novelist, journalist, and technology activist, working as a special advisor to the Electronic Frontier Foundation (EFF) after serving as its European director. The EFF is a non-profit civil liberties group that defends freedom in technology law, policy, standards, and treaties.

He holds an honorary doctorate in computer science from the Open University (UK), where he is a Visiting Professor; he is also an MIT Media Lab Research Affiliate and a Visiting Professor of Practice at the University of South Carolina’s School of Library and Information Science. In 2007, he served as the Fulbright Chair at the Annenberg Center for Public Diplomacy at the University of Southern California.

His novels have been translated into dozens of languages and are published by Tor Books, Head of Zeus (UK), Titan Books (UK) and HarperCollins (UK). He has won the Locus, Prometheus, Copper Cylinder, White Pine and Sunburst Awards, and been nominated for the Hugo, Nebula and British Science Fiction Awards.

He co-founded the open-source peer-to-peer software company OpenCola, and serves on the boards and advisory boards of the Participatory Culture Foundation, the Clarion Foundation, the Open Technology Fund and the MetaBrainz Foundation. Cory Doctorow is also the co-founder of the UK Open Rights Group.

Organizers:

The LogicLounge is hosted by the 32nd International Conference on Computer-Aided Verification (CAV) and organized by Aina Niemetz and Mathias Preiner of Stanford University in collaboration with the Vienna Center for Logic and Algorithms (VCLA) at TU Wien.

09:15-10:45 Session 3A: CAV Conference: AI Verification
09:15
Improved Geometric Path Enumeration for Verifying ReLU Neural Networks

ABSTRACT. Neural networks provide fast approximations to complex functions, and have been increasingly used in perception as well as control tasks. For use in mission-critical and safety-critical applications, however, it is important to be able to analyze what a neural network can and cannot do. For feed-forward neural networks with ReLU activation functions, although exact analysis is NP-complete, recently proposed verification methods can sometimes succeed.

The main practical problem with neural network verification is excessive analysis runtime. Even on small networks, tools that are theoretically complete can sometimes run for days without producing a result. In this paper, we work to address the runtime problem by improving upon a recently proposed geometric path enumeration method. Through a series of optimizations, several of which are new algorithmic improvements, we demonstrate significant speed improvements for exact analysis on the well-studied ACAS Xu benchmarks, sometimes hundreds of times faster than the original implementation. On more difficult benchmark instances, our optimized approach is often the fastest, even outperforming inexact methods that leverage overapproximation and refinement.
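
To make the enumeration concrete, here is a minimal sketch of geometric path enumeration for a ReLU network: each neuron's sign is branched on, each branch adds one linear constraint on the input polytope, and an LP feasibility check prunes dead branches. The toy network, input box, and use of scipy's LP solver are illustrative assumptions, not the paper's optimized implementation.

```python
import numpy as np
from scipy.optimize import linprog

def feasible(A, b):
    """Is the polytope {x : A x <= b} non-empty?  One LP feasibility call."""
    n = A.shape[1]
    res = linprog(np.zeros(n), A_ub=A, b_ub=b,
                  bounds=[(None, None)] * n, method="highs")
    return res.status == 0

def enumerate_paths(layers, A, b, M, v, li=0, signs=None, out=None):
    """Enumerate feasible ReLU activation patterns ("paths").

    (M, v) is the affine map from the original input x to the input of
    layer li; A x <= b collects the input constraints fixed so far.
    """
    out = [] if out is None else out
    signs = [] if signs is None else signs
    if li == len(layers):
        out.append((A, b))                   # one feasible path found
        return out
    W, c = layers[li]
    ni = len(signs)
    if ni == len(c):                         # layer decided: apply its map
        D = np.diag(signs)                   # zero out inactive neurons
        return enumerate_paths(layers, A, b, D @ (W @ M), D @ (W @ v + c),
                               li + 1, None, out)
    row, off = W[ni] @ M, W[ni] @ v + c[ni]  # pre-activation, affine in x
    A1, b1 = np.vstack([A, -row]), np.append(b, off)    # active branch: >= 0
    if feasible(A1, b1):
        enumerate_paths(layers, A1, b1, M, v, li, signs + [1], out)
    A2, b2 = np.vstack([A, row]), np.append(b, -off)    # inactive branch: <= 0
    if feasible(A2, b2):
        enumerate_paths(layers, A2, b2, M, v, li, signs + [0], out)
    return out

# Two ReLU layers of a toy network on the input box [-1, 1]^2; a final
# linear output layer would add no branches, so it is omitted here.
layers = [(np.array([[1.0, -1.0], [0.5, 1.0]]), np.array([0.1, -0.2])),
          (np.array([[1.0, 1.0], [-1.0, 0.5]]), np.array([0.0, 0.3]))]
A0, b0 = np.vstack([np.eye(2), -np.eye(2)]), np.ones(4)
paths = enumerate_paths(layers, A0, b0, np.eye(2), np.zeros(2))
print(len(paths), "feasible activation patterns")
```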

09:35
An Abstraction-Based Framework for Neural Network Verification

ABSTRACT. Deep neural networks are increasingly being used as controllers for safety-critical systems. Because neural networks are opaque, certifying their correctness is a significant challenge. To address this issue, several approaches have recently been proposed to formally verify them. However, network size is often a bottleneck for such approaches, and it can be difficult to apply them to large networks. In this paper, we propose a framework that can enhance neural network verification techniques by using over-approximation to reduce the size of the network, thus making it more amenable to verification. We perform the approximation such that if the property holds for the smaller (abstract) network, it holds for the original as well. The over-approximation may be too coarse, in which case the underlying verification tool might return a spurious counterexample. Under such conditions, we perform counterexample-guided refinement to adjust the approximation, and then repeat the process. Our approach is orthogonal to, and can be integrated with, many existing verification techniques. For evaluation purposes, we integrate it with the recently proposed Marabou framework, and observe a significant improvement in Marabou’s performance. Our experiments demonstrate the great potential of our approach for verifying larger neural networks.
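
As a schematic illustration of the abstraction-refinement loop only (not the paper's construction, which shrinks the network by merging neurons), the sketch below uses interval bound propagation as the sound over-approximation and splits the input box when a candidate counterexample turns out to be spurious. Network, property, and split strategy are all invented for illustration.

```python
import numpy as np

def interval_bounds(layers, lo, hi):
    """Sound output bounds of a ReLU network over the input box [lo, hi]."""
    for i, (W, c) in enumerate(layers):
        pos, neg = np.maximum(W, 0), np.minimum(W, 0)
        lo, hi = pos @ lo + neg @ hi + c, pos @ hi + neg @ lo + c
        if i < len(layers) - 1:              # ReLU on hidden layers only
            lo, hi = np.maximum(lo, 0), np.maximum(hi, 0)
    return lo, hi

def concrete_output(layers, x):
    for i, (W, c) in enumerate(layers):
        x = W @ x + c
        if i < len(layers) - 1:
            x = np.maximum(x, 0)
    return x

def verify(layers, lo, hi, threshold, depth=20):
    """Prove 'every output <= threshold' on [lo, hi], or find a real cex."""
    _, out_hi = interval_bounds(layers, lo, hi)
    if out_hi.max() <= threshold:
        return True                          # proved on this box
    mid = (lo + hi) / 2                      # candidate counterexample
    if concrete_output(layers, mid).max() > threshold:
        return False                         # genuine counterexample
    if depth == 0:
        return None                          # unknown: give up refining
    d = int(np.argmax(hi - lo))              # spurious cex: split widest dim
    h2, l2 = hi.copy(), lo.copy()
    h2[d] = l2[d] = (lo[d] + hi[d]) / 2
    r1 = verify(layers, lo, h2, threshold, depth - 1)
    if r1 is False:
        return False
    r2 = verify(layers, l2, hi, threshold, depth - 1)
    if r2 is False:
        return False
    return True if (r1 and r2) else None

layers = [(np.array([[1.0, -1.0], [0.5, 1.0]]), np.zeros(2)),
          (np.array([[1.0, 1.0]]), np.zeros(1))]
print(verify(layers, np.array([-1.0, -1.0]), np.array([1.0, 1.0]), 3.0))
```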

09:55
Verification of Deep Convolutional Neural Networks Using ImageStars

ABSTRACT. Convolutional Neural Networks (CNNs) have redefined the state of the art in many real-world applications, such as facial recognition, image classification, human pose estimation, and semantic segmentation. Despite their success, CNNs are vulnerable to adversarial attacks, where slight changes to their inputs may lead to sharp changes in their output, even in well-trained networks. Set-based analysis methods can detect or prove the absence of bounded adversarial attacks, which can then be used to evaluate the effectiveness of neural network training methodology. Unfortunately, existing verification approaches have limited scalability in terms of the size of networks that can be analyzed.

In this paper, we describe a set-based framework that successfully deals with real-world CNNs, such as VGG16 and VGG19, that have high accuracy on ImageNet. Our approach is based on a new set representation called the ImageStar, which enables efficient exact and over-approximative analysis of CNNs. ImageStars perform efficient set-based analysis by combining operations on concrete images with linear programming (LP). Our approach is implemented in a tool called NNV, and can verify the robustness of VGG networks with respect to a small set of input states, derived from adversarial attacks, such as the DeepFool attack. The experimental results show that our approach is less conservative and faster than existing zonotope methods, such as those used in DeepZ, and the polytope method used in DeepPoly.
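
The following is a heavily simplified, vector-valued sketch of the ImageStar idea under invented data: a star set is an anchor plus generators constrained by a linear predicate, affine layers transform it exactly, and coordinate bounds (the quantities consulted when deciding how a ReLU splits) come from small LPs. The real ImageStar keeps image-shaped anchors and generators, and the ReLU splitting itself is omitted here.

```python
import numpy as np
from scipy.optimize import linprog

class Star:
    """Set {c + V @ alpha : P @ alpha <= q}; a toy, not NNV's implementation."""
    def __init__(self, c, V, P, q):
        self.c, self.V, self.P, self.q = c, V, P, q

    def affine(self, W, b):
        # W x + b = (W c + b) + (W V) alpha: the predicate is unchanged.
        return Star(W @ self.c + b, W @ self.V, self.P, self.q)

    def bounds(self, i):
        """Exact min/max of coordinate i over feasible alphas, via two LPs."""
        out = []
        for sign in (1.0, -1.0):             # minimize, then maximize
            res = linprog(sign * self.V[i], A_ub=self.P, b_ub=self.q,
                          bounds=[(None, None)] * self.V.shape[1],
                          method="highs")
            out.append(sign * res.fun + self.c[i])
        return out[0], out[1]

# An L_inf ball of radius eps around an "image" c, written as a star set:
c, eps = np.array([0.3, 0.7]), 0.1
star = Star(c, np.eye(2), np.vstack([np.eye(2), -np.eye(2)]),
            eps * np.ones(4))
star = star.affine(np.array([[1.0, -2.0], [0.5, 1.0]]), np.zeros(2))
print(star.bounds(0))    # exact range of output coordinate 0
```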

10:15
Systematic Generation of Diverse Benchmarks for DNN Verifiers

ABSTRACT. The field of verification has advanced due to the interplay of theoretical development and empirical evaluation. Benchmarks play an important role in this by supporting the assessment of the state-of-the-art and comparison of alternative verification approaches. Recent years have witnessed significant developments in the verification of deep neural networks, but diverse benchmarks representing the range of verification problems in this domain do not yet exist. This paper describes a neural network verification benchmark generator, GDVB, that systematically varies aspects of problems in the benchmark that influence verifier performance. Through a series of studies, we illustrate how GDVB can assist in advancing the sub-field of neural network verification by more efficiently providing richer and less biased sets of verification problems.
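
A toy illustration of systematic factor variation follows; the factor names, levels, and full cross-product enumeration are assumptions for illustration. GDVB itself covers factor interactions far more economically than a full factorial, and emits trained networks and properties rather than parameter dictionaries.

```python
from itertools import product

# Hypothetical factors and levels that influence verifier performance:
FACTORS = {
    "layers":    [2, 4, 8],
    "neurons":   [16, 64, 256],
    "input_dim": [5, 784],
    "epsilon":   [0.01, 0.05],      # robustness-property radius
}

def generate_benchmark():
    """Yield one verification-problem configuration per factor combination."""
    keys = list(FACTORS)
    for levels in product(*(FACTORS[k] for k in keys)):
        yield dict(zip(keys, levels))

for i, problem in enumerate(generate_benchmark()):
    print(f"problem {i:02d}: {problem}")
```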

10:35
NNV: The Neural Network Verification Tool for Deep Neural Networks and Learning-Enabled Cyber-Physical Systems

ABSTRACT. This paper presents the Neural Network Verification (NNV) software tool, a set-based verification framework for deep neural networks (DNNs) and learning-enabled cyber-physical systems (CPS). The crux of NNV is a collection of reachability algorithms that make use of a variety of set representations, such as polyhedra, star sets, zonotopes, and abstract-domain representations. NNV supports both exact (sound and complete) and over-approximate (sound) reachability algorithms for verifying safety and robustness properties of feed-forward neural networks (FFNNs) with various activation functions. For learning-enabled CPS, such as closed-loop control systems incorporating neural networks, NNV provides exact and over-approximate reachability analysis schemes for linear plant models and FFNN controllers with piecewise-linear activation functions, such as ReLUs. For similar neural network control systems (NNCS) that instead have nonlinear plant models, NNV supports over-approximate analysis by combining the star set analysis used for FFNN controllers with zonotope-based analysis for nonlinear plant dynamics building on CORA. We evaluate NNV using two real-world case studies: the first is safety verification of ACAS Xu networks, and the second deals with the safety verification of a deep learning-based adaptive cruise control system.
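
As a rough, assumption-laden sketch of the closed-loop case (interval boxes instead of NNV's star sets, and a made-up plant and controller), one over-approximate reachability step bounds the controller output over the current state box and then pushes the box through the linear plant:

```python
import numpy as np

def net_bounds(layers, lo, hi):
    """Interval (box) bounds of a ReLU network's output over an input box."""
    for i, (W, c) in enumerate(layers):
        pos, neg = np.maximum(W, 0), np.minimum(W, 0)
        lo, hi = pos @ lo + neg @ hi + c, pos @ hi + neg @ lo + c
        if i < len(layers) - 1:              # ReLU on hidden layers only
            lo, hi = np.maximum(lo, 0), np.maximum(hi, 0)
    return lo, hi

def closed_loop_reach(A, B, controller, lo, hi, steps):
    """Over-approximate state boxes for x' = A x + B u with u = net(x)."""
    boxes = [(lo, hi)]
    Ap, An = np.maximum(A, 0), np.minimum(A, 0)
    Bp, Bn = np.maximum(B, 0), np.minimum(B, 0)
    for _ in range(steps):
        u_lo, u_hi = net_bounds(controller, lo, hi)
        lo, hi = (Ap @ lo + An @ hi + Bp @ u_lo + Bn @ u_hi,
                  Ap @ hi + An @ lo + Bp @ u_hi + Bn @ u_lo)
        boxes.append((lo, hi))
    return boxes

# Toy discretized double integrator with a single linear "controller" layer
# (no hidden ReLUs, so the network bounds happen to be exact here).
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.005], [0.1]])
controller = [(np.array([[-1.0, -1.5]]), np.array([0.0]))]
for k, (l, h) in enumerate(closed_loop_reach(
        A, B, controller, np.array([0.9, 0.0]), np.array([1.1, 0.1]), 5)):
    print(k, l.round(3), h.round(3))
```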

09:15-10:45 Session 3B: CAV Conference: Concurrency
09:15
Semantics, Specification, and Bounded Verification of Concurrent Libraries in Replicated Systems

ABSTRACT. Geo-replicated systems provide a number of desirable properties such as globally low latency, high availability, scalability, and built-in fault tolerance. Unfortunately, programming correct applications on top of such systems has proven to be very challenging, in large part because of the weak consistency guarantees they offer. While a large number of consistency policies have been proposed in recent years to aid programmers in developing applications suitable for these environments, profitably balancing correctness and efficiency by identifying the weakest policy under which an application can run correctly remains a highly non-trivial endeavor.

These complexities are exacerbated when we try to adapt existing highly-performant concurrent libraries, developed for shared-memory environments, to this setting. The use of these libraries, developed with performance and scalability in mind, is highly desirable. But identifying a suitable notion of correctness to check their validity under a weakly consistent execution model has not been well-studied, in large part because it is problematic to naïvely transplant a criterion such as linearizability, which has a useful interpretation in a shared-memory context, to a distributed one where the cost of imposing a (logical) global ordering on all actions is prohibitive.

In this paper, we tackle these issues by proposing appropriate semantics and specifications for highly-concurrent libraries in a weakly-consistent, replicated setting. We use these specifications to develop a static analysis framework that can automatically detect correctness violations of library implementations parameterized with respect to the different consistency policies provided by the underlying system. We use our framework to analyze the behavior of a number of highly non-trivial library implementations, including stacks, queues, and exchangers. Our results provide the first demonstration that automated correctness checking of concurrent libraries in a weakly-consistent, geo-replicated setting is both feasible and practical.

09:35
Refinement for Structured Concurrent Programs

ABSTRACT. This paper presents a foundation for refinement-based verification of concurrent programs with structured control flow. The problem is decomposed into subproblems that aid interactive program development, proof reuse, and automation. The formalization in this paper is the basis of a new design and implementation of the CIVL verifier.

09:55
Parameterized Verification of Systems with Global Synchronization and Guards

ABSTRACT. To facilitate the parameterized verification of distributed systems that are based on agreement protocols like consensus and leader election, we define a new computational model for parameterized systems that is based on a general global synchronization primitive and allows for global transition guards. Our model generalizes many existing models in the literature, including broadcast protocols and guarded protocols. We show that reachability properties are decidable for systems without guards, and give sufficient conditions under which they remain decidable in the presence of guards. Furthermore, we investigate cutoffs for reachability properties, showing that cutoffs in general grow at least quadratically with the local state space of a process, and provide sufficient conditions for small cutoffs in a number of cases that are inspired by our target applications.

10:15
Solver-aided Recency-Aware Replicated Objects

ABSTRACT. Replication is a common technique for building reliable and scalable systems. Traditional strong consistency maintains the same total order of operations across replicas. This total order is the source of multiple desirable consistency properties: integrity, convergence, and recency. However, maintaining the total order has proven to inhibit availability and performance. Weaker notions exhibit responsiveness and scalability, but they forfeit the total order and hence its favorable properties. This project revives these properties with as little coordination as possible. It presents a tool called Hampa that, given a sequential object with a declaration of its integrity and recency requirements, automatically synthesizes a correct-by-construction replicated object that simultaneously guarantees all three properties. It features a relational object specification language and a syntax-directed analysis that infers optimum staleness bounds. Further, this paper defines coordination-avoidance conditions and an operational semantics of replicated systems that provably guarantee the three properties. It characterizes the computational power of recency-aware objects and presents a protocol for them. Hampa uses automatic solvers statically, and embeds them in the runtime, to dynamically decide the validity of coordination-avoidance conditions. The experiments show that recency-aware objects reduce coordination and response time.

10:35
Ivy: a Multi-Modal Verification Tool for Distributed Algorithms

ABSTRACT. Ivy is a multi-modal verification tool for the correct design and implementation of distributed protocols and algorithms, supporting modular specification, implementation, and proof. Ivy supports proving safety and liveness properties of parameterized and infinite-state systems via three modes: deductive verification using an SMT solver, abstraction and model checking, and manual proofs using natural deduction. It supports lightweight formal methods via compositional specification-based testing and bounded model checking. Ivy can extract executable distributed programs by translation to efficient C++ code. It is designed to support decidable automated reasoning, to improve proof stability, and to provide transparency in the case of proof failures. For this purpose, it presents concrete finite counterexamples, automatically audits proofs for decidability of verification conditions, and provides modular hiding of theories.

09:15-10:45 Session 3C: CAV Conference: Hardware Verification and Decision Procedures
09:15
Automated and Scalable Verification of Integer Multipliers

ABSTRACT. The automatic formal verification of multiplier designs has been pursued since the introduction of BDDs. We present a new rewriter-based method for efficient and automatic verification of signed and unsigned integer multiplier designs. We have proved the soundness of this method using the ACL2 theorem prover, and we are able to verify integer multiplier designs with various architectures automatically, including Wallace, Dadda, and 4-to-2 compressor trees, designed with Booth encoding and various types of final stage adders. Our experiments have shown that our approach scales well in terms of time and memory. With our method, we can confirm the correctness of 1024x1024-bit multiplier designs within minutes.
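
For intuition about what is being verified (though not how the paper verifies it), the sketch below builds an unsigned array multiplier purely from AND gates and full adders and checks it exhaustively against integer multiplication for a small width. The exponential blow-up of this brute-force check is exactly what symbolic, rewriter-based methods such as the paper's avoid.

```python
def full_adder(a, b, cin):
    """A gate-level full adder: sum bit and carry-out bit."""
    s = a ^ b ^ cin
    cout = (a & b) | (cin & (a ^ b))
    return s, cout

def array_multiplier(a_bits, b_bits):
    """Unsigned array multiplier built only from AND gates and full adders.
    Bit lists are least-significant first; the result has 2n bits."""
    n = len(a_bits)
    acc = [0] * (2 * n)
    for i, bi in enumerate(b_bits):
        carry = 0
        for j, aj in enumerate(a_bits):
            acc[i + j], carry = full_adder(acc[i + j], aj & bi, carry)
        k = i + n
        while carry:                         # ripple the remaining carry
            acc[k], carry = full_adder(acc[k], 0, carry)
            k += 1
    return acc

def check(n):
    for a in range(2 ** n):
        for b in range(2 ** n):
            bits = array_multiplier([(a >> i) & 1 for i in range(n)],
                                    [(b >> i) & 1 for i in range(n)])
            assert sum(bit << i for i, bit in enumerate(bits)) == a * b
    print(f"all {4 ** n} products of {n}-bit operands are correct")

check(8)    # 65,536 cases finish quickly; 1024-bit inputs would need 2^2048
```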

09:35
fault: A Python Embedded Domain-Specific Language For Metaprogramming Portable Hardware Verification Components

ABSTRACT. While hardware generators have drastically improved design productivity, they have introduced new challenges for the task of verification. To effectively cover the functionality of a sophisticated generator, verification engineers require tools that provide the flexibility of metaprogramming. However, flexibility alone is not enough; components must also be portable in order to encourage the proliferation of verification libraries and diversity in the tooling ecosystem. This paper introduces fault, a Python embedded domain-specific language that aims to empower design teams to realize the full potential of hardware generators and promote the practice of unit testing and test-driven development.

09:45
Interpolation-Based Semantic Gate Extraction and its Applications to QBF Preprocessing

ABSTRACT. We present a new semantic gate extraction technique for propositional formulas based on interpolation. While known gate detection methods are incomplete and rely on pattern matching or simple semantic conditions, this approach can detect any definition entailed by an input formula.

As an application, we consider the problem of computing unique strategy functions from Quantified Boolean Formulas (QBFs) and Dependency Quantified Boolean Formulas (DQBFs). Experiments with a prototype implementation demonstrate that functions can be efficiently extracted from formulas in standard benchmark sets, and that many of these definitions remain undetected by syntactic gate detection.

We turn this into a preprocessing technique by substituting unique strategy functions for input variables and test solver performance on the resulting instances. Compared to syntactic gate detection, we see a significant increase in the number of solved QBF instances.
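
The completeness claim can be made concrete with Padoa's definability check, which a SAT solver decides directly: a variable x has a definition over a set of inputs in F iff two copies of F that agree on the inputs cannot disagree on x. The sketch below (using the python-sat package on an invented toy formula) performs just this check; extracting the defining function itself via interpolation, the paper's technique, is omitted.

```python
from pysat.solvers import Glucose3

def has_definition(clauses, num_vars, inputs, x):
    """Padoa's check: is x uniquely determined by `inputs` in every model?"""
    shift = num_vars
    s = Glucose3(bootstrap_with=clauses)
    for cl in clauses:                       # a second, variable-shifted copy
        s.add_clause([l + shift if l > 0 else l - shift for l in cl])
    for v in inputs:                         # force both copies to agree
        s.add_clause([-v, v + shift])        # on every input variable
        s.add_clause([v, -(v + shift)])
    # If the copies can still disagree on x, no definition exists.
    return not s.solve(assumptions=[x, -(x + shift)])

# x <-> (a XOR b), in CNF over a=1, b=2, x=3:
cnf = [[-3, 1, 2], [-3, -1, -2], [-1, 2, 3], [1, -2, 3]]
print(has_definition(cnf, 3, inputs=[1, 2], x=3))   # True: x is a hidden gate
```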

10:05
Tinted, Detached, and Lazy CNF-XOR solving and its Applications to Counting and Sampling

ABSTRACT. Given a Boolean formula F, the problem of counting seeks to estimate the number of solutions of F, while the problem of uniform sampling seeks to sample solutions uniformly at random. Counting and uniform sampling are fundamental problems in computer science, with applications ranging from constrained random simulation and probabilistic inference to network reliability and beyond. The past few years have witnessed the rise of hashing-based approaches that use XOR-based hashing and employ SAT solvers to solve the resulting CNF formulas conjoined with XOR constraints. Since over 99% of the runtime of hashing-based techniques is spent inside the SAT queries, improving CNF-XOR solvers has emerged as a key challenge.

In this paper, we identify the key performance bottlenecks in the recently proposed BIRD architecture, and we focus on overcoming these bottlenecks by accelerating the XOR handling within the SAT solver and on improving the solver integration through a smarter use of (partial) solutions. We integrate the resulting system, called BIRD2, with the state-of-the-art approximate model counter ApproxMC3 and the state-of-the-art almost-uniform model sampler UniGen2. Through an extensive evaluation over a large benchmark set of over 1896 instances, we observe that BIRD2 leads to consistent speedups for both counting and sampling; in particular, we solve 77 and 51 more instances for counting and sampling, respectively.
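
For context, here is a hedged sketch of the hashing-based counting loop that generates such CNF-XOR queries (toy parameters; not ApproxMC's actual algorithm, which chooses the number of XORs adaptively and repeats trials for confidence). It uses CryptoMiniSat's pycryptosat bindings, whose add_xor_clause handles XOR constraints natively.

```python
import random
from pycryptosat import Solver

def cell_count(clauses, num_vars, m, limit):
    """Count solutions inside a random XOR cell, up to `limit`."""
    s = Solver()
    for cl in clauses:
        s.add_clause(cl)
    for _ in range(m):                       # m random parity constraints
        lits = [v for v in range(1, num_vars + 1) if random.random() < 0.5]
        s.add_xor_clause(lits, random.random() < 0.5)
    count = 0
    while count < limit:
        sat, model = s.solve()
        if not sat:
            break
        count += 1
        # Block this model so the next call finds a different one.
        s.add_clause([-v if model[v] else v for v in range(1, num_vars + 1)])
    return count

random.seed(0)
cnf = [[1, 2, 3], [-1, -2]]                  # a tiny formula over 3 variables
m = 1                                        # halve the space once
n_cell = cell_count(cnf, 3, m, limit=64)
print("estimate ~", n_cell * 2 ** m, "solutions (true count is 5)")
```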

10:25
SAW: A Tool for Safety Analysis of Weakly-hard Systems

ABSTRACT. We introduce SAW, a tool for safety analysis of weakly-hard systems, in which traditional hard timing constraints are relaxed to allow bounded deadline misses for improving design flexibility and runtime resiliency. Safety verification is a key issue for weakly-hard systems, as it ensures system safety under allowed deadline misses. Previous works are either for linear systems only, or limited to a certain type of nonlinear systems (e.g., systems that satisfy exponential stability and Lipschitz continuity of the system dynamics). In this work, we propose a new technique for infinite-time safety verification of general nonlinear weakly-hard systems. Our approach first discretizes the safe state set into grids and constructs a directed graph, where nodes represent the grids and edges represent the reachability relation. Based on graph theory and dynamic programming, our approach can effectively find the safe initial set (consisting of a set of grids), from which the system can be proven safe under given weakly-hard constraints. Experimental results demonstrate the effectiveness of our approach, when compared with the state-of-the-art. An open source implementation of our tool is available at https://github.com/551100kk/SAW. The virtual machine where the tool is ready to run can be found at https://www.csie.ntu.edu.tw/~r08922054/SAW.ova.
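
A hedged toy of this graph construction, for the weakly-hard constraint "at most one deadline miss in any window of two periods": grid a 1-D safe interval, make graph nodes (cell, missed-last-period), add an edge when one control step can move some state of the cell into another cell, and keep the greatest set of nodes that can never be forced out of the safe set. The dynamics and grid parameters are invented; SAW handles general miss patterns and multi-dimensional nonlinear dynamics.

```python
SAFE = (-1.0, 1.0)                            # safe interval for the state x
N = 200                                       # number of grid cells
W = (SAFE[1] - SAFE[0]) / N

def step(lo, hi, miss):
    """One control period: x' = 1.1 x + 0.05 on a deadline miss, else
    x' = 0.8 x.  Both maps are monotone, so endpoint images suffice."""
    f = (lambda x: 1.1 * x + 0.05) if miss else (lambda x: 0.8 * x)
    return f(lo), f(hi)

def cells(lo, hi):
    """Grid cells covering [lo, hi], or None if the image may leave SAFE."""
    if lo < SAFE[0] or hi > SAFE[1]:
        return None
    return range(int((lo - SAFE[0]) / W),
                 min(int((hi - SAFE[0]) / W), N - 1) + 1)

# Nodes are (cell, missed_last_period); a miss is only allowed after a hit,
# encoding the pattern "at most 1 miss in any 2 periods".
succ = {}
for c in range(N):
    lo, hi = SAFE[0] + c * W, SAFE[0] + (c + 1) * W
    for m in (0, 1):
        out = set()
        for miss in ([False] if m == 1 else [False, True]):
            r = cells(*step(lo, hi, miss))
            if r is None:
                out = None                    # some behaviour escapes SAFE
                break
            out |= {(c2, int(miss)) for c2 in r}
        succ[(c, m)] = out

# Greatest fixed point: drop nodes from which the system can be forced out.
alive = {n for n in succ if succ[n] is not None}
changed = True
while changed:
    changed = False
    for n in list(alive):
        if not succ[n] <= alive:
            alive.remove(n)
            changed = True
print(f"safe initial cells: {len({c for c, m in alive if m == 0})} of {N}")
```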

10:35
SeQuaiA: A Scalable Tool for Semi-Quantitative Analysis of Chemical Reaction Networks

ABSTRACT. Chemical reaction networks (CRNs) play a fundamental role in analysis and design of biochemical systems. They induce continuous-time stochastic systems, whose analysis is a computationally intensive task. We present a tool that implements the recently proposed semi-quantitative analysis of CRN. Compared to the proposed theory, the tool implements the analysis so that it is more flexible and more precise. Further, the GUI of the tool offers a wide range of visualization procedures that facilitate the interpretation of the analysis results as well as guidance to refine the analysis. Finally, based on the analysis, we define and implement a new notion of ``mean'' simulations, summarizing the typical behaviours of the system in a way directly comparable to standard simulations produced by other tools.