ATVA 2018: INTERNATIONAL SYMPOSIUM ON AUTOMATED TECHNOLOGY FOR VERIFICATION AND ANALYSIS
PROGRAM FOR TUESDAY, OCTOBER 9TH

09:00-10:00 Session 15: Invited talk
09:00
DeepSafe: A Data-driven Approach for Assessing Robustness of Neural Networks

ABSTRACT. Deep neural networks have achieved impressive results in many complex
applications, including classification tasks for image and speech recognition,
pattern analysis or perception in self-driving vehicles. However, it has been
observed that even highly trained networks are very vulnerable to adversarial
perturbations. Adding minimal changes to inputs that are correctly classified can
lead to wrong predictions, raising serious security and safety concerns. Existing
techniques for checking robustness against such perturbations only consider
searching locally around a few individual inputs, providing limited guarantees.
We propose DeepSafe, a novel approach for automatically assessing the overall
robustness of a neural network. DeepSafe applies clustering over known labeled
data and leverages off-the-shelf constraint solvers to automatically identify and
check safe regions in which the network is robust, i.e. all the inputs in the region
are guaranteed to be classified correctly. We also introduce the concept of targeted
robustness, which ensures that the neural network is guaranteed not to misclassify
inputs within a region to a specific target (adversarial) label. We evaluate
DeepSafe on a neural network implementation of a controller for the next-generation
Airborne Collision Avoidance System for unmanned aircraft (ACAS Xu) and for
the well-known MNIST network. For these networks, DeepSafe identified many
regions which were safe, and also found adversarial perturbations of interest.
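
As a rough sketch of the clustering step (illustrative only: scikit-learn's k-means stands in for the paper's clustering, and verify_region is a hypothetical placeholder for an off-the-shelf solver query such as Reluplex):

    # Sketch of DeepSafe-style region identification (illustrative only).
    import numpy as np
    from sklearn.cluster import KMeans

    def candidate_regions(X, y, clusters_per_label=5):
        # Cluster each label's data; every centroid yields a candidate safe
        # region (centre + radius) expected to contain a single class.
        regions = []
        for label in np.unique(y):
            pts = X[y == label]
            others = X[y != label]
            km = KMeans(n_clusters=clusters_per_label, n_init=10).fit(pts)
            for c in km.cluster_centers_:
                # Radius: distance from the centroid to the nearest point
                # with a different label, so the ball plausibly holds one class.
                r = np.min(np.linalg.norm(others - c, axis=1))
                regions.append((c, r, label))
        return regions

    # A solver query would then check robustness of each candidate region:
    # for c, r, label in candidate_regions(X, y):
    #     if verify_region(net, c, r, label):   # hypothetical solver call
    #         print('network is provably robust on this region')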

10:00-10:30 Coffee Break
10:30-12:00 Session 16: Stochastic systems 1
10:30
Bisimilarity Distances for Approximate Differential Privacy

ABSTRACT. Differential privacy is a widely studied notion of privacy for various models of computation. Technically, it is based on measuring differences between generated probability distributions. We study (ε,δ)-differential privacy in the setting of Labelled Markov Chains.

While the exact differences relevant to (ε,δ)-differential privacy are not computable in this framework, we propose a computable bisimilarity distance that yields a sound technique for measuring δ, the parameter that quantifies deviation from pure differential privacy. We show that the distance is always rational, the associated threshold problem is in NP, and the distance can be computed exactly with polynomially many calls to an NP oracle.
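
In the standard formulation, a mechanism producing output distributions μ and ν on two related inputs satisfies (ε,δ)-differential privacy when μ(S) ≤ e^ε ν(S) + δ for every event S; the least such δ is a skewed variant of total-variation distance (sketched below from that standard definition; the paper works with Labelled Markov Chains):

    % least delta making the epsilon-inequality hold for all events S
    \delta(\mu,\nu) \;=\; \max_{S} \bigl( \mu(S) - e^{\varepsilon}\,\nu(S) \bigr)

The proposed bisimilarity distance soundly over-approximates this quantity, so any value it computes is a valid bound on δ.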

11:00
Synthesis in pMDPs: A Tale of 1001 Parameters

ABSTRACT. This paper considers parametric Markov decision processes (pMDPs) whose transitions are equipped with affine functions over a finite set of parameters. The synthesis problem is to find a parameter valuation such that the instantiated pMDP satisfies a (temporal logic) specification under all strategies. We show that this problem can be formulated as a quadratically constrained quadratic program (QCQP) and is non-convex in general. To deal with the NP-hardness of such problems, we exploit a convex-concave procedure (CCP) to iteratively obtain local optima. An appropriate interplay between CCP solvers and probabilistic model checkers creates a procedure, realized in the tool PROPheSY, that solves the synthesis problem for models with thousands of parameters.
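
The convex-concave procedure itself can be illustrated on a one-dimensional difference-of-convex objective (generic CCP, not the paper's QCQP encoding of pMDP synthesis):

    # Generic convex-concave procedure (CCP) on f(x) = g(x) - h(x) with
    # g, h convex: each step linearizes h at the current iterate and
    # minimizes the convex surrogate. Here g(x) = (x - 3)**2, so the
    # surrogate minimizer solves 2*(x - 3) = dh(xk), i.e. x = 3 + dh(xk)/2.
    def ccp_min(dh, x0, steps=50):
        x = x0
        for _ in range(steps):
            x = 3.0 + dh(x) / 2.0
        return x

    x_star = ccp_min(dh=lambda x: x / 2.0, x0=0.0)   # h(x) = x**2 / 4
    print(x_star)   # converges to 4.0, a local optimum of (x-3)**2 - x**2/4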

11:30
PSense: Automatic Sensitivity Analysis for Probabilistic Programs

ABSTRACT. PSense is a system for automatic evaluation of the sensitivity of probabilistic programs. It takes a probabilistic program as input and symbolically analyzes the sensitivity of the computed posterior distributions to perturbations of the parameters of the prior distributions and observed data points. PSense characterizes the change in the posterior by computing symbolic expressions of several standard statistical metrics or user-defined metrics. Our evaluation on a set of 66 probabilistic programs from the literature, with a total of 357 analysis runs, demonstrates the effectiveness of PSense.
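
The style of analysis can be illustrated on a conjugate Beta-Bernoulli model with SymPy (PSense itself handles general probabilistic programs and many more metrics):

    # Symbolic sensitivity of a posterior mean to a prior perturbation eps,
    # for a Beta(a, b) prior and h successes in n Bernoulli trials.
    import sympy as sp

    a, b, h, n, eps = sp.symbols('a b h n eps', positive=True)
    post_mean = (a + h) / (a + b + n)                 # exact posterior mean
    perturbed = (a + eps + h) / (a + eps + b + n)     # prior shifted by eps
    sensitivity = sp.simplify(perturbed - post_mean)
    print(sensitivity)   # closed-form expression in a, b, h, n, eps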

12:00-13:30 Lunch Break
13:30-15:00 Session 17: Synthesis 2
13:30
Bounded Synthesis of Reactive Programs

ABSTRACT. Most algorithms for the synthesis of reactive systems focus on the construction of finite-state machines rather than actual programs. This often leads to badly structured, unreadable code. In this paper, we present a bounded synthesis approach that automatically constructs, from a given specification in linear-time temporal logic (LTL), a program in Madhusudan’s simple imperative language for reactive programs. We develop and compare two principal approaches for the reduction of the synthesis problem to a Boolean constraint satisfaction problem. The first reduction is based on a generalization of bounded synthesis to two-way alternating automata; the second is based on a direct encoding of the program syntax in the constraint system. We report on preliminary experience with a prototype implementation, which indicates that the direct encoding outperforms the automata approach.
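
Both reductions share the same bounded-synthesis outer loop, sketched below with placeholder components (encode, solve, and decode are supplied by the particular reduction; none of these names comes from the paper):

    # Skeleton of the bounded-synthesis loop: try increasing size bounds
    # and hand each Boolean encoding to a SAT solver.
    def bounded_synthesis(spec, encode, solve, decode, max_bound=20):
        for bound in range(1, max_bound + 1):
            model = solve(encode(spec, bound))   # CNF for programs of this size
            if model is not None:
                return decode(model, bound)      # program within the size bound
        return None   # nothing found up to max_bound (a larger bound may be needed)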

14:00
Reactive Synthesis of Register Transducers

ABSTRACT. Reactive synthesis aims at the automatic construction of systems from their behavioural specifications. The research mostly focuses on synthesis of systems dealing with Boolean signals, but real-life systems are often described using bit-vectors, integers, etc. Bit-blasting would make such systems unreadable, would hurt synthesis scalability, and is not possible for infinite data domains. One step closer to real-life systems are register transducers [10]: they can store a data input into registers and later output the content of a register, but they do not directly depend on the data input, only on its comparison with the registers. Previously [5] it was proven that synthesis of register transducers from register automata is undecidable, but there the authors considered transducers equipped with an unbounded queue of registers. First, we prove that the problem becomes decidable if we bound the number of registers in transducers, by reducing the problem to standard synthesis of Boolean systems. Second, we show how to use quantified temporal logic, instead of automata, for specifications.
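
A minimal executable model of the transducers described above, with a bounded number of registers (illustrative, not the paper's formal definition):

    # Toy k-register transducer: it never inspects data values directly,
    # only their equality with its registers, and outputs register contents.
    class RegisterTransducer:
        def __init__(self, k):
            self.regs = [None] * k

        def step(self, data_in):
            # Control may branch only on which registers equal the input.
            eq = [r == data_in for r in self.regs]
            if not any(eq):
                self.regs[0] = data_in   # store a fresh value
            return self.regs[0]          # output the content of a register

    t = RegisterTransducer(k=2)
    print([t.step(x) for x in [7, 7, 9]])   # [7, 7, 9]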

14:30
Maximum Realizability for Linear Temporal Logic Specifications

ABSTRACT. Automatic synthesis from linear temporal logic (LTL) specifications is widely used in robotic motion planning and control of autonomous systems. A common specification pattern in such applications consists of an LTL formula describing the requirements on the behaviour of the system, together with a set of additional desirable properties. We study the synthesis problem in settings where the overall specification is unrealizable, more precisely, when some of the desirable properties have to be (temporarily) violated in order to satisfy the system's objective. We provide a quantitative semantics of sets of safety specifications, and use it to formalize the "best-effort" satisfaction of such soft specifications while satisfying the hard LTL specification. We propose an algorithm for synthesizing implementations that are optimal with respect to this quantitative semantics. Our method builds upon the idea of bounded synthesis, and we develop a MaxSAT encoding which allows for maximizing the quantitative satisfaction of the soft specifications. We evaluate our algorithm on scenarios from robotics and power distribution networks.
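
The MaxSAT layer can be sketched with the python-sat package (toy clauses; the paper's actual encoding of the bounded-synthesis constraints is of course much larger):

    # MaxSAT with hard clauses (the hard LTL specification's encoding must
    # hold) and weighted soft clauses (one per soft safety specification).
    # Requires the `python-sat` package.
    from pysat.formula import WCNF
    from pysat.examples.rc2 import RC2

    wcnf = WCNF()
    wcnf.append([1, 2])            # hard: stand-in for the hard LTL encoding
    wcnf.append([-1], weight=2)    # soft: safety property A
    wcnf.append([-2], weight=1)    # soft: safety property B

    with RC2(wcnf) as solver:
        model = solver.compute()   # maximizes total weight of satisfied soft clauses
        print(model, solver.cost)  # cost = total weight of violated soft clauses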

15:00-15:30 Coffee Break
15:30-17:30 Session 18: Static analysis and security
15:30
Quantifiers on Demand

ABSTRACT. Automated program verification is a difficult problem. It is undecidable even for transition systems over Linear Integer Arithmetic (LIA). Extending the transition system with the theory of Arrays further complicates the problem by requiring inference of, and reasoning with, universally quantified formulas. In this paper, we present a new algorithm, Quic3, that extends IC3 to infer universally quantified invariants over the combined theory of LIA and Arrays. Unlike other approaches that use either IC3 or an SMT solver as a black box, Quic3 carefully manages quantified generalization (to construct quantified invariants) and quantifier instantiation (to detect convergence in the presence of quantifiers). While Quic3 is not guaranteed to converge, it is guaranteed to make progress by exploring longer and longer executions. We have implemented Quic3 within the Constrained Horn Clause solver engine of Z3 and experimented with it by applying it to a variety of public benchmarks of array-manipulating C programs.
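
A minimal example of the underlying CHC machinery, posing an invariant-inference query to Z3's Spacer engine from Python (this shows only the plumbing Quic3 builds on, not its quantified-array reasoning; requires the z3-solver package):

    # Constrained Horn Clauses for `x := 0; while *: x := x + 1`,
    # asking whether x can ever become negative.
    from z3 import Fixedpoint, Function, IntSort, BoolSort, Int, Bool, And

    fp = Fixedpoint()
    fp.set(engine='spacer')
    inv = Function('inv', IntSort(), BoolSort())   # unknown inductive invariant
    err = Bool('err')
    x = Int('x')
    fp.register_relation(inv, err.decl())
    fp.declare_var(x)

    fp.rule(inv(0))                      # init: x = 0
    fp.rule(inv(x + 1), inv(x))          # loop body: x := x + 1
    fp.rule(err, And(inv(x), x < 0))     # bad state: x negative

    print(fp.query(err))                 # unsat: Spacer infers an invariant (x >= 0)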

16:00
Information Leakage in Arbiter Protocols

ABSTRACT. Resource sharing while preserving privacy is an increasingly important problem due to a wide-scale adoption of cloud computing. Under multitenancy, it is not uncommon to have multiple mutually distrustful "processes" (e.g. cores, threads, etc.) running on the same system simultaneously. This paper explores a new approach for automatically identifying and quantifying the information leakage in protocols that arbitrate utilization of shared resources between processes. Our approach is based on symbolic execution of arbiter protocols to extract constraints relating adversary observations to victim requests, then using model counting constraint solvers to quantify the information leaked. We present enumerative and optimized methods of model counting, and apply our methods to a set of nine different arbiter protocols, quantifying their leakage under different scenarios and allowing for informed comparison.
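
The final quantification step can be sketched directly: given model counts of how many secrets are consistent with each possible adversary observation, the Shannon leakage for a uniformly distributed secret is (illustrative arithmetic, not the paper's tooling):

    # Information leaked (in bits) about a uniform secret when the adversary
    # observes which class the secret falls into; counts[o] = model count of
    # secrets consistent with observation o.
    import math

    def shannon_leakage(counts):
        total = sum(counts)
        h_prior = math.log2(total)                               # initial uncertainty
        h_post = sum(c / total * math.log2(c) for c in counts if c)
        return h_prior - h_post                                  # mutual information

    print(shannon_leakage([4, 4]))   # 1.0 bit: observation halves the secret space
    print(shannon_leakage([8]))      # 0.0 bits: observation reveals nothing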

16:30
Robustness Testing of Intermediate Verifiers

ABSTRACT. Program verifiers are not exempt from the bugs that affect nearly every piece of software. In addition, they often exhibit brittle behavior: their performance changes considerably with details of how the input program is expressed, details that should be irrelevant, such as the order of independent declarations. Such a lack of robustness frustrates users, who have to spend considerable time figuring out a tool’s idiosyncrasies before they can use it effectively. This paper introduces a technique to detect lack of robustness in program verifiers; the technique is lightweight and fully automated, as it is based on testing methods (such as mutation testing and metamorphic testing). The key idea is to generate many simple variants of a program that initially passes verification. All variants are, by construction, equivalent to the original program; thus, any variant that fails verification indicates a lack of robustness in the verifier. We implemented our technique in a tool called μgie, which operates on programs written in the popular Boogie language for verification, used as an intermediate representation in numerous program verifiers. Experiments targeting 135 Boogie programs indicate that brittle behavior occurs fairly frequently (16 programs) and is not hard to trigger. Based on these results, the paper discusses the main sources of brittle behavior and suggests means of improving robustness.
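
The key idea admits a compact sketch, assuming blank-line-separated top-level declarations and a boogie binary on the PATH (both simplifications; μgie's mutations and verdict checking are more sophisticated):

    # Metamorphic robustness test: permute independent top-level declarations
    # of a Boogie file and check that the verifier's verdict is unchanged.
    from pathlib import Path
    import random
    import subprocess

    def verdict(path):
        # Simplistic check of Boogie's summary line ("... 0 errors").
        out = subprocess.run(['boogie', str(path)], capture_output=True, text=True)
        return '0 errors' in out.stdout

    def shuffle_declarations(src):
        decls = src.split('\n\n')   # naive: assumes blank-line-separated declarations
        random.shuffle(decls)
        return '\n\n'.join(decls)

    original = Path('program.bpl')   # placeholder path: a program that verifies
    assert verdict(original)
    for i in range(10):
        variant = Path(f'variant_{i}.bpl')
        variant.write_text(shuffle_declarations(original.read_text()))
        if not verdict(variant):
            print(f'robustness issue: {variant} fails verification')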

17:00
Verifying Rust Programs with SMACK

ABSTRACT. Rust is an emerging systems programming language with guaranteed memory safety and modern language features that has been extensively adopted to build safety-critical software. However, there is currently a lack of automated software verifiers for Rust. In this work, we present our experience extending the SMACK verifier to enable its usage on Rust programs. We evaluate SMACK on a set of Rust programs to demonstrate a wide spectrum of language features it supports.

17:15
EthIR: A Framework for High-Level Analysis of Ethereum Bytecode

ABSTRACT. Analyzing Ethereum bytecode, rather than the source code from which it was generated, is a necessity when: (1) the source code is not available (e.g., the blockchain only stores the bytecode); (2) the information to be gathered in the analysis is only visible at the level of bytecode (e.g., gas consumption is specified at the level of EVM instructions); (3) the analysis results may be affected by optimizations performed by the compiler (thus the analysis should ideally be done after compilation). This paper presents EthIR, a framework for analyzing Ethereum bytecode, which relies on (an extension of) OYENTE, a tool that generates CFGs. From the CFGs, EthIR produces a rule-based representation (RBR) of the bytecode that enables the application of (existing) high-level analyses to infer properties of EVM code.
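
As a toy illustration of what a rule-based representation of one CFG basic block might look like (the rule syntax below is invented for exposition and is not EthIR's actual RBR format):

    # Translate a straight-line EVM basic block into a Horn-style rule over
    # symbolic stack variables. Toy handling of three opcodes only.
    def block_to_rule(block_id, ops, succ_id):
        stack, fresh, body = [], 0, []
        for op, *args in ops:
            if op == 'PUSH':
                stack.append(str(args[0]))
            elif op == 'ADD':
                a, b = stack.pop(), stack.pop()
                fresh += 1
                body.append(f's{fresh} = {a} + {b}')
                stack.append(f's{fresh}')
            elif op == 'SSTORE':
                key, val = stack.pop(), stack.pop()
                body.append(f'store({key}, {val})')
        return f'block{block_id}() :- ' + ', '.join(body) + f', block{succ_id}().'

    ops = [('PUSH', 1), ('PUSH', 2), ('ADD',), ('PUSH', 0), ('SSTORE',)]
    print(block_to_rule(0, ops, 1))
    # block0() :- s1 = 2 + 1, store(0, s1), block1().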