FLOC 2018: FEDERATED LOGIC CONFERENCE 2018
CAV ON SUNDAY, JULY 15TH

09:10-10:00 Session 102: CAV Award

The CAV 2018 Award is given for SAT-based and Bounded Model Checking to:

  • Armin Biere
  • Alessandro Cimatti
  • Edmund M. Clarke
  • Daniel Kroening
  • Flavio Lerda
  • Yunshan Zhu

Bounded Model Checking has revolutionized the way model checking is used and perceived. It has increased the capabilities of model checkers by orders of magnitude, turning them into a standard tool for hardware verification and a very important component of the toolkit available for software verification. BMC changed the focus of model checking from full verification to bug-finding. The BMC problem is defined as follows: given a bound k, is there an erroneous computation of the system of length k? This problem is transformed into a Boolean formula that is satisfiable if and only if the system has a computation of length k that violates the specification. Focusing on the bounded problem enabled the authors to exploit the progress that was made in SAT solving around the same time, and simultaneously to bootstrap the tremendous progress in satisfiability solving that we have seen since. While early implementations of BMC focused on hardware, CBMC demonstrated how the technique can be applied to realistic programs written in C. The ability to apply verification directly to C programs gave an enormous boost to a very large spectrum of applications in the hardware and software industries and in academic research. The application areas include error explanation and localization, concurrent programs, equivalence checking, cyber-physical and control systems, test vector generation, worst-case execution time, security, and many other practical applications.
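
In standard textbook notation (not quoted from the award citation), for a system with initial condition I, transition relation T, and invariant property P, the bound-k query is the satisfiability of the following formula; a satisfying assignment is a length-k counterexample:

```latex
% Illustrative BMC unrolling: satisfiable iff some execution of at most k steps violates P.
\mathrm{BMC}_k \;=\; I(s_0) \;\wedge\; \bigwedge_{i=0}^{k-1} T(s_i, s_{i+1}) \;\wedge\; \bigvee_{i=0}^{k} \neg P(s_i)
```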

Location: Maths LT1
10:00-10:30 Session 103A: Learning
Location: Maths LT1
10:00
Learning Abstractions for Program Synthesis
SPEAKER: Xinyu Wang

ABSTRACT. Many example-guided program synthesis techniques use abstractions to prune the search space. While abstraction-based synthesis has proven to be very powerful, a domain expert needs to provide a suitable abstract domain, together with the abstract transformers of each DSL construct. However, coming up with useful abstractions can be non-trivial, as it requires both domain expertise and knowledge about the synthesizer. In this paper, we propose a new technique for learning abstractions that are useful for instantiating a general synthesis framework in a new domain. Given a DSL and a small set of training problems, our method uses tree interpolation to infer reusable predicate templates that speed up synthesis in a given domain. Our method also learns suitable abstract transformers by solving a certain kind of second-order constraint problem in a data-driven way. We have implemented the proposed method in a tool called Atlas and evaluated it in the context of the Blaze meta-synthesizer. Our evaluation shows that (a) Atlas can learn useful abstract domains and transformers from a few training problems, and (b) the abstractions learned by Atlas allow Blaze to achieve significantly better results compared to manually-crafted abstractions.

10:15
The Learnability of Symbolic Automata

ABSTRACT. Symbolic automata allow transitions to carry predicates over rich alphabet theories, such as linear arithmetic, and therefore extend classic automata to operate over infinite alphabets, such as the set of rational numbers. In this paper, we study the problem of learning symbolic automata and exactly characterize under what conditions symbolic automata can be learned efficiently. First, we present an $L^*$-style algorithm for learning symbolic automata using membership and equivalence queries, which treats the predicates appearing on transitions as their own learnable entities. The main novelty of our algorithm is that it can take as input an algorithm $\Lambda$ for learning predicates in the underlying alphabet theory, and it uses $\Lambda$ to infer the predicates appearing on the transitions in the target automaton. Using this idea, our algorithm is able to learn automata operating over alphabet theories in which predicates are efficiently learnable using membership and equivalence queries. Furthermore, we prove that a necessary condition for efficient learnability of a symbolic finite automaton (SFA) is that predicates in the underlying algebra are also efficiently learnable using queries. Our results settle the learnability of a large class of SFA instances, leaving open only a subclass of SFAs over efficiently learnable alphabet theories. We implement our algorithm in an open-source library and show that it can efficiently learn automata that cannot be learned using existing algorithms, and that it significantly outperforms existing automata-learning algorithms over large alphabets.
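
As a toy illustration of the kind of alphabet theory the paper calls "efficiently learnable with queries" (hypothetical code, not from the paper or its library), a threshold predicate over a bounded integer alphabet can be learned with a logarithmic number of membership queries:

```python
# Toy sketch: learning the predicate p(x) = (x >= t) over a bounded integer
# alphabet using only membership queries, via binary search on the threshold.
# This illustrates "efficient learnability" of predicates; names are hypothetical.
def learn_threshold(member, lo: int, hi: int) -> int:
    """Return the smallest x in [lo, hi] with member(x) == True,
    assuming member is a threshold predicate and member(hi) is True."""
    while lo < hi:
        mid = (lo + hi) // 2
        if member(mid):
            hi = mid
        else:
            lo = mid + 1
    return lo

if __name__ == "__main__":
    secret_t = 42
    member = lambda x: x >= secret_t
    assert learn_threshold(member, 0, 1000) == secret_t
```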

10:30-11:00 Coffee Break
11:00-12:30 Session 104A: Runtime Verification, Hybrid and Timed Systems
Location: Maths LT1
11:00
Reachable Set Over-approximation for Nonlinear Systems Using Piecewise Barrier Tubes
SPEAKER: Hui Kong

ABSTRACT. We address the problem of analysing the reachable set of a nonlinear continuous system by computing or approximating the flow pipe of its dynamics. The most widely employed approach to tackle this problem is to perform a numerical integration over a given time horizon based on Taylor expansion and interval arithmetic. However, this method turns out to be very conservative when there is a large difference in speed between trajectories as time progresses. In this paper, we propose to use combinations of barrier functions, which we call piecewise barrier tubes (PBT), to over-approximate the flow pipe. The basic idea of a PBT is that, for each segment of a flow pipe, a coarse box big enough to contain the segment is constructed using sampled simulation, and then, inside the box, we compute by linear programming a set of barrier functions (called a barrier tube, or BT for short) that together form a tube surrounding the flow pipe. The benefits of using PBTs are that (1) they are independent of time and hence avoid being stretched and deformed by time, and (2) a small number of BTs can form a tight over-approximation of the flow pipe, which means that the computation required to decide whether the BTs intersect the unsafe set can be reduced significantly. We implemented a prototype tool called PBTS in C++. Experiments on benchmark systems show that our approach is effective.
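
For intuition, a single barrier function B rules out reachability of the unsafe set under a common sufficient condition (standard formulation, not the paper's exact construction); a barrier tube combines several such functions so that their joint sub-level sets enclose one flow-pipe segment:

```latex
% Standard barrier-certificate conditions for \dot{x} = f(x):
% no trajectory starting in Init can reach Unsafe.
B(x) \le 0 \ \text{for all } x \in \mathit{Init}, \qquad
B(x) > 0 \ \text{for all } x \in \mathit{Unsafe}, \qquad
\nabla B(x) \cdot f(x) \le 0 \ \text{whenever } B(x) = 0
```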

11:15
Spacetime Interpolants

ABSTRACT. Reachability analysis is difficult for hybrid automata with affine differential equations, because the reach set needs to be approximated. Promising abstraction techniques usually employ interval methods or template polyhedra. Interval methods account for dense time and guarantee soundness, and there are interval-based tools that overapproximate affine flowpipes. But interval methods impose bounded and rigid shapes, which make refinement expensive and fixpoint detection difficult. Template polyhedra, on the other hand, can be adapted flexibly and can be unbounded, but sound template refinement for unbounded reachability analysis has been implemented only for systems with piecewise constant dynamics. We capitalize on the advantages of both techniques, combining interval arithmetic and template polyhedra, using the former to abstract time and the latter to abstract space. During a CEGAR loop, whenever a spurious error trajectory is found, we compute additional space constraints and split time intervals, and use these spacetime interpolants to eliminate the counterexample. Spacetime interpolation offers a lazy, flexible framework for increasing precision while guaranteeing soundness, both for error avoidance and fixpoint detection. To the best of our knowledge, this is the first abstraction refinement scheme for reachability analysis of affine hybrid systems over unbounded and dense time that is both sound and automatic. We demonstrate the effectiveness of our algorithm on several benchmark examples, which cannot be handled by other tools.
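
For reference (standard definitions, independent of this paper), a template polyhedron fixes its constraint directions in advance and abstracts a set by choosing only the offsets, which is what makes refinement by tightening or adding constraints cheap:

```latex
% Template polyhedron for a fixed template matrix D in R^{k x n}:
% only the bound vector c is chosen during analysis and refinement.
P(c) \;=\; \{\, x \in \mathbb{R}^{n} \mid D x \le c \,\}
```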

11:30
Monitoring Weak Consistency
SPEAKER: Michael Emmi

ABSTRACT. High-performance implementations of distributed and multicore shared objects often guarantee only the weak consistency of their concurrent operations, foregoing the de facto yet performance-restrictive consistency criterion of linearizability. While such weak consistency is often vital for achieving performance requirements, practical automation for checking weak consistency is lacking. In principle, algorithmically checking the consistency of executions according to various weak-consistency criteria is hard: in addition to the enumeration of linearizations of an execution’s operations, such criteria generally demand the enumeration of possible visibility relations among the linearized operations; a priori, both enumerations are exponential.

In this work we identify an optimization to weak-consistency checking: rather than enumerating every possible visibility relation, it suffices to consider only the minimal visibility relations which adhere to the various constraints of the given criterion, for a significant class of consistency criteria. We demonstrate the soundness of this optimization, and describe an associated minimal-visibility consistency checking algorithm. Empirically, we show that our algorithm significantly outperforms the baseline weak-consistency checking algorithm, which naïvely enumerates all visibilities, and adds only modest overhead to the baseline linearizability checking algorithm, which does not enumerate visibilities.

11:45
Monitoring CTMCs By Multi-Clock Timed Automata
SPEAKER: Yijun Feng

ABSTRACT. This paper presents a numerical algorithm to verify continuous-time Markov chains (CTMCs) against multi-clock deterministic timed automata (DTA). These DTA allow for specifying properties that cannot be expressed in CSL, the logic for CTMCs used by state-of-the-art probabilistic model checkers. The core problem is to compute the probability of timed runs of the CTMC ${\cal C}$ that are accepted by the DTA ${\cal A}$. These likelihoods equal reachability probabilities in an embedded piecewise deterministic Markov process (EPDP) obtained as the product of ${\cal C}$ and the region automaton of ${\cal A}$. This paper provides a numerical algorithm to efficiently solve the PDEs describing these reachability probabilities. The key insight is to solve an ordinary differential equation (ODE) that exploits the specific characteristics of the product EPDP. We provide the numerical precision of our algorithm and present experimental results with a prototypical implementation.

12:00
Start Pruning When Time Gets Urgent: Partial Order Reduction for Timed Systems

ABSTRACT. Partial order reduction for timed systems is a challenging topic due to the dependencies among events induced by time acting as a global synchronization mechanism. So far, there has been only limited success in finding practically applicable solutions that yield significant state-space reductions. We suggest a working and efficient method to facilitate stubborn-set reduction for timed systems with urgent behaviour. We first describe the framework in the general setting of timed labelled transition systems and then instantiate it to the case of timed-arc Petri nets. The basic idea is that we can employ classical untimed partial order reduction techniques as long as urgent behaviour is enforced. Our solution is implemented in the model checker TAPAAL, and the feature is now broadly available to the users of the tool in its latest release from January 2018. By a series of larger case studies, we document the benefits of our method and its applicability to real-world scenarios.

12:15
A Counting Semantics for Monitoring LTL Specifications over Finite Traces

ABSTRACT. We consider the problem of monitoring, over finite traces, a Linear Time Logic (LTL) specification that is defined over infinite paths. For example, we may need to draw a verdict on whether the system satisfies or violates the property "p holds infinitely often." The problem is that there is always a continuation of a finite trace that satisfies the property and a different continuation that violates it.

We propose a two-step approach to address this problem. First, we introduce a counting semantics that computes the number of steps to witness the satisfaction or violation of a formula for each position in the trace. Second, we use this information to make a prediction on inconclusive suffixes. In particular, we consider a "good" suffix to be one that is shorter than the longest witness for a satisfaction, and a "bad" suffix to be one that is shorter than or equal to the longest witness for a violation. Based on this assumption, we provide a verdict assessing whether a continuation of the execution on the same system will presumably satisfy or violate the property.

12:30-14:00 Lunch Break
14:00-15:30 Session 106A: Tools
Location: Maths LT1
14:00
Rabinizer 4: From LTL to Your Favourite Deterministic Automaton

ABSTRACT. We present Rabinizer 4, a tool set for translating formulae of linear temporal logic to different types of deterministic omega-automata. The tool set implements and optimizes several recent constructions, including the first implementation translating the frequency extension of LTL. Further, we provide a distribution of PRISM that links Rabinizer and offers model-checking procedures for probabilistic systems that are not in the official PRISM distribution. Finally, we evaluate the performance and, in the cases covered by previous implementations, show gains in both the size of the automata and the computation time, due to algorithmic as well as implementation improvements.

14:15
Strix: Explicit Reactive Synthesis Strikes Back!

ABSTRACT. Strix is a new tool for reactive LTL synthesis combining a direct translation of LTL formulas into deterministic parity automata (DPA) and an efficient, multi-threaded explicit-state solver for parity games. In brief, Strix (1) decomposes the given formula into simpler formulas, (2) translates these on-the-fly into DPAs based on the queries of the parity game solver, (3) composes the DPAs into a parity game, and at the same time already solves the intermediate games using strategy iteration, and (4) finally translates the winning strategy, if it exists, into a Mealy automaton or an AIGER circuit with optional minimization using external tools. We experimentally demonstrate the applicability of our approach by a comparison with Party, BoSy and ltlsynt using the SYNTCOMP2017 benchmarks. In these experiments, our prototype can compete with BoSy and ltlsynt, with only Party performing slightly better. In particular, our prototype successfully synthesizes the full and unmodified LTL specification of the AMBA protocol for n = 2 masters.

14:30
BTOR2, BtorMC and Boolector 3.0
SPEAKER: Armin Biere

ABSTRACT. We describe a new word-level model checking format to capture models of hardware and potentially software in a bit-precise manner. This simple, line-based and easy to parse format can be seen as a sorted extension of the word-level format BTOR. It uses design principles from the bit-level format AIGER and follows the semantics of the SMT-LIB logics of bit-vectors with arrays. This intermediate format can be used in various verification flows and is perfectly suited to establish a word-level model checking competition. A new open source model checker and bit-vector solver support this format. We further provide real-world word-level benchmarks on which these tools are evaluated.

14:45
Nagini: A Static Verifier for Python
SPEAKER: Marco Eilers

ABSTRACT. We present Nagini, an automated, modular verifier for statically-typed, concurrent Python 3 programs, built on the Viper verification infrastructure. Combining established concepts with new ideas, Nagini can verify memory safety, functional properties, termination, deadlock freedom, and input/output behavior. Our experiments show that Nagini is able to verify non-trivial properties of real-world Python code.
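
A minimal sketch of what code verified in this style looks like (the contract functions follow Nagini's published examples, but treat the exact import path as an assumption):

```python
# Hedged sketch: Requires/Ensures/Result follow the style of Nagini's examples;
# the import path below is assumed rather than verified here.
from nagini_contracts.contracts import Requires, Ensures, Result

def increment_positive(x: int) -> int:
    Requires(x > 0)        # precondition, checked statically by the verifier
    Ensures(Result() > 1)  # postcondition about the returned value
    return x + 1
```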

15:00
Peregrine: A Tool for the Analysis of Population Protocols
SPEAKER: Stefan Jaax

ABSTRACT. We introduce Peregrine, the first tool for the analysis and parameterized verification of population protocols. Population protocols are a model of computation very much studied by the distributed computing community, in which mobile anonymous agents interact stochastically to achieve a common task. Peregrine allows users to design protocols, to simulate them both manually and automatically, to gather statistics of properties such as convergence speed, and to verify correctness automatically. This paper describes the features of Peregrine and their implementation.
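
To make the underlying computational model concrete, the following self-contained Python simulation of a classic four-state majority protocol is a generic illustration of population protocols; it is not Peregrine's input language or API:

```python
# Generic population-protocol illustration (not Peregrine syntax): the
# cancellation/recruitment majority protocol. Active agents 'A'/'B' with
# opposite opinions cancel to passive 'a'/'b'; surviving active agents
# recruit passive ones. Without an initial tie, the population stabilizes
# to the majority opinion.
import random

RULES = {
    ('A', 'B'): ('a', 'b'),  # opposite active opinions cancel each other
    ('B', 'A'): ('b', 'a'),
    ('A', 'b'): ('A', 'a'),  # an active agent converts a passive one
    ('b', 'A'): ('a', 'A'),
    ('B', 'a'): ('B', 'b'),
    ('a', 'B'): ('b', 'B'),
}

def simulate(config, steps=200_000, seed=0):
    rng = random.Random(seed)
    agents = list(config)
    for _ in range(steps):
        i, j = rng.sample(range(len(agents)), 2)  # random pair of distinct agents
        new = RULES.get((agents[i], agents[j]))
        if new is not None:
            agents[i], agents[j] = new
    return agents

if __name__ == "__main__":
    final = simulate(['A'] * 6 + ['B'] * 4)  # 6 vs. 4: 'A' holds the majority
    print(sorted(final))  # with high probability only 'A'/'a' remain
```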

15:15
ADAC: Automated Design of Approximate Circuits
SPEAKER: Jiri Matyas

ABSTRACT. Approximate circuits with relaxed requirements on functional correctness play an important role in the development of resource-efficient computer systems. Designing approximate circuits is a complex and time-demanding process that seeks optimal trade-offs between the approximation error and resource savings. In this paper, we present ADAC -- a novel framework for automated design of approximate arithmetic circuits. ADAC integrates in a unique way efficient simulation and formal methods for approximate equivalence checking into a search-based circuit optimisation. To make ADAC easily accessible, it is implemented as a module of the ABC tool: a state-of-the-art system for circuit synthesis and verification. Within several hours, ADAC is able to construct high-quality Pareto sets of complex circuits (including even 32-bit multipliers), providing useful trade-offs between the resource consumption and the error that is formally guaranteed. This demonstrates outstanding performance and scalability compared with other existing approaches.

15:30-16:00 Coffee Break
16:00-17:00 Session 108A: Probabilistic Systems
Location: Maths LT1
16:00
Value Iteration for Simple Stochastic Games: Stopping Criterion and Learning Algorithm

ABSTRACT. Simple stochastic games can be solved by value iteration (VI), which yields a sequence of under-approximations of the value of the game. This sequence is guaranteed to converge to the value only in the limit. Since no stopping criterion is known, this technique does not provide any guarantees on its results. We provide the first stopping criterion for VI on simple stochastic games. It is achieved by additionally computing a convergent sequence of over-approximations of the value, relying on an analysis of the game graph. Consequently, VI becomes an anytime algorithm returning the approximation of the value and the current error bound. As another consequence, we can provide a simulation-based asynchronous VI algorithm, which yields the same guarantees, but without necessarily exploring the whole game graph.
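
The following sketch shows the baseline procedure on a hypothetical encoding of a small game (illustrative code, not the paper's tool): plain value iteration from below, which converges to the value only in the limit. The paper's contribution, not shown here, is the additional convergent over-approximation that turns this into an anytime algorithm with a sound error bound.

```python
# Minimal sketch: value iteration from below for a simple stochastic game.
# Each vertex is 'max', 'min', or 'random'; values approach (from below)
# the probability that the max player forces reaching the target set.
def value_iteration(vertices, target, iterations=1000):
    """vertices: dict v -> (kind, successors); for kind 'random', successors is
    a dict successor -> probability, otherwise a list of successors."""
    value = {v: (1.0 if v in target else 0.0) for v in vertices}
    for _ in range(iterations):
        new = {}
        for v, (kind, succ) in vertices.items():
            if v in target:
                new[v] = 1.0
            elif kind == 'max':
                new[v] = max(value[s] for s in succ)
            elif kind == 'min':
                new[v] = min(value[s] for s in succ)
            else:  # 'random'
                new[v] = sum(p * value[s] for s, p in succ.items())
        value = new
    return value

if __name__ == "__main__":
    # Tiny example: from v0 the max player chooses between a fair coin flip
    # (reaching the target with probability 1/2) and a sure loss.
    game = {
        'v0':   ('max',    ['coin', 'sink']),
        'coin': ('random', {'goal': 0.5, 'sink': 0.5}),
        'goal': ('max',    ['goal']),
        'sink': ('max',    ['sink']),
    }
    print(value_iteration(game, target={'goal'})['v0'])  # approximately 0.5
```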

16:15
Sound Value Iteration
SPEAKER: Tim Quatmann

ABSTRACT. Computing reachability probabilities is at the heart of probabilistic model checking. All model checkers compute these probabilities in an iterative fashion using value iteration. This technique approximates a fixed point from below by determining reachability probabilities for an increasing number of steps. To avoid results that are significantly off, variants have recently been proposed that converge from both below and above. These procedures require starting values for both sides. We present an alternative that does not require the a priori computation of starting vectors and that converges faster on many benchmarks. The crux of our technique is to give tight and safe bounds — whose computation is cheap — on the reachability probabilities. Lifting this technique to expected rewards is trivial for both Markov chains and MDPs. Experimental results on a large set of benchmarks show its scalability and efficiency.
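
For reference, the fixed point being approximated is the standard reachability equation (common notation assumed here): for a Markov chain with transition probabilities P and target set G, value iteration starts from the zero vector and converges from below to the least fixed point.

```latex
% Reachability probabilities x(s) = Pr_s(reach G) as a least fixed point;
% value iteration computes x^{(k+1)} = F(x^{(k)}) starting from x^{(0)} = 0.
x(s) =
\begin{cases}
1 & \text{if } s \in G,\\[2pt]
\sum_{s'} P(s, s')\, x(s') & \text{otherwise}
\end{cases}
```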

16:30
Safety-Aware Apprenticeship Learning
SPEAKER: Wenchao Li

ABSTRACT. Apprenticeship learning (AL) is a class of “learning from demonstrations” techniques where the reward function of a Markov Decision Process (MDP) is unknown to the learning agent and the agent has to derive a good policy by observing an expert’s demonstrations. In this paper, we study the problem of how to make AL algorithms inherently safe while still meeting their learning objective. We consider a setting where the unknown reward function is assumed to be a linear combination of a set of state features, and the safety property is specified in Probabilistic Computation Tree Logic (PCTL). By embedding probabilistic model checking inside AL, we propose a novel counterexample-guided approach that can ensure both safety and performance of the learnt policy. We demonstrate the effectiveness of our approach on several challenging AL scenarios where safety is essential.

16:45
Deciding Probabilistic Bisimilarity Distance One for Labelled Markov Chains
SPEAKER: Qiyi Tang

ABSTRACT. Probabilistic bisimilarity, due to Larsen and Skou, is an equivalence relation that captures which states of a labelled Markov chain behave the same. Since this behavioural equivalence only identifies states that transition to states that behave exactly the same with exactly the same probability, this notion of equivalence is not robust, as first observed by Giacalone, Jou and Smolka. The probabilistic bisimilarity distances, as first defined by Desharnais, Gupta, Jagadeesan and Panangaden, provide a quantitative generalization of probabilistic bisimilarity. The distance between states captures the similarity of their behaviour. The smaller the distance, the more alike the states behave. In particular, states are probabilistically bisimilar if and only if their distance is zero. This quantitative notion is robust in that small changes in the transition probabilities result in small changes in the distances.

During the last decade, several algorithms have been proposed to approximate and compute the probabilistic bisimilarity distances. The main result of this paper is an algorithm that decides distance one in O(n^2 + m^2), where n is the number of states and m is the number of transitions of the labelled Markov chain. This decision procedure is the key new ingredient of our algorithm for computing the distances. The state-of-the-art algorithm, which combines algorithms due to Derisavi, Hermanns and Sanders and due to Bacci, Bacci, Larsen and Mardare, can compute distances for labelled Markov chains with up to 150 states. For one such labelled Markov chain, that algorithm takes more than 49 hours. In contrast, our new algorithm only takes 13 milliseconds. Furthermore, our algorithm can compute distances for labelled Markov chains with more than 10,000 states in less than 50 minutes.
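
For context, the distances discussed above are commonly defined (following Desharnais et al., in standard notation) as the least fixed point of a Kantorovich-style operator over labelled Markov chains with labelling function ℓ and transition distributions τ:

```latex
% Probabilistic bisimilarity distances: least fixed point of \Delta, where
% \Omega(\tau(s),\tau(t)) is the set of couplings of the two transition
% distributions and the minimum is the Kantorovich lifting of d.
\Delta(d)(s, t) =
\begin{cases}
1 & \text{if } \ell(s) \neq \ell(t),\\[2pt]
\min_{\omega \in \Omega(\tau(s),\, \tau(t))} \sum_{u, v} \omega(u, v)\, d(u, v) & \text{otherwise}
\end{cases}
```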