Verification, Testing, and Runtime Monitoring of Automotive Exhaust Emissions
ABSTRACT. Emission cleaning in modern cars is controlled by embedded software. In this context, the diesel emissions scandal has made it apparent that the automotive industry is susceptible to fraudulent behaviour, implemented and effectuated by that control software. Mass effects mean that, taken together, the individual controllers have statistically significant adverse effects on people's health. This talk surveys recent work on the use of rigorous formal techniques to attack this problem. It starts off with an introduction to the dimensions and facets of the problem from a software technology perspective. It then details approaches that use (i) model checking for the white-box analysis of the embedded software, (ii) model-based black-box testing to detect fraudulent behaviour under standardized conditions, and (iii) synthesis of runtime monitors for real driving emissions of cars in the wild. All these efforts aim at finding ways to eventually eliminate the problem of doped software, that is, of software that surreptitiously alters its behaviour in certain circumstances, against the interest of the owner or of society.
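To make the runtime-monitoring idea in (iii) concrete, here is a minimal sketch, not taken from the talk, of a monitor that checks a stream of per-kilometre NOx measurements against a windowed-average limit; the limit value, window length, and all identifiers are illustrative assumptions.

```python
from collections import deque

class NoxMonitor:
    def __init__(self, limit_mg_per_km=168.0, window_km=100):
        # Hypothetical parameters: a windowed-average NOx limit in mg/km.
        self.limit = limit_mg_per_km
        self.samples = deque(maxlen=window_km)  # one sample per km driven

    def observe(self, nox_mg_per_km):
        """Feed one per-km NOx measurement; return True if the windowed
        average exceeds the limit (a potential violation to report)."""
        self.samples.append(nox_mg_per_km)
        return sum(self.samples) / len(self.samples) > self.limit

monitor = NoxMonitor()
for sample in [80.0, 120.0, 400.0, 390.0]:      # made-up measurement trace
    if monitor.observe(sample):
        print("potential violation: windowed NOx average above limit")
```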
ABSTRACT. SMT-based program verification can achieve high precision using bit-precise models or combinations of different theories. Such approaches often suffer from scalability problems due to the complexity of the underlying decision procedures. Precision can be traded for performance by raising the abstraction level of the model, but the higher the level of abstraction, the more likely the model is to miss important details of the program.
In this paper we address this problem with an incremental verification approach that adapts the precision of the program modules on demand. The idea is to model a program using the lightest possible (i.e., least expensive) theories that suffice to verify the desired property.
To this end, we employ safe over-approximations for the program based on
both function summaries and light-weight SMT theories. If during
verification it turns out that the precision is too low, our approach
lazily strengthens all affected summaries or the theory through an
iterative refinement procedure.
In order to connect different theories, we design a flexible function
summarization framework that avoids the need for prohibitively
complicated and expensive theory combination.
An experimental evaluation with a bounded model checker for C on a wide
range of benchmarks demonstrates that our approach scales well, often
effortlessly solving instances where the state-of-the-art model checker
CBMC runs out of time or memory.
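To illustrate the refinement loop described in the abstract above, here is a runnable toy, with our own hypothetical names rather than the paper's implementation: a property is first checked against a cheap over-approximating summary of a callee, and the precise function is consulted only when the summary yields a spurious counterexample.

```python
# Runnable toy of the lazy-refinement idea (our own construction, not the
# paper's tool): verify that the output never equals BAD for any 8-bit
# input, first using a cheap over-approximating summary of the callee and
# refining to the exact callee only on a spurious counterexample.

BAD = 255

def callee(x):                    # the "expensive" exact function
    return (x * x) % 251          # results always lie in [0, 250]

def callee_summary(x):            # cheap summary: any 8-bit value at all
    return range(256)             # sound over-approximation, very coarse

def check(precise):
    for x in range(256):
        results = [callee(x)] if precise else callee_summary(x)
        if any(r == BAD for r in results):
            return ("cex", x)
    return ("safe", None)

verdict, x = check(precise=False)
if verdict == "safe":
    print("SAFE using the cheap summary alone")
elif callee(x) == BAD:
    print("UNSAFE: real counterexample at x =", x)
else:
    # Spurious counterexample: the summary was too coarse, so refine.
    print("summary too coarse at x =", x, "-> refining to the exact callee")
    print(check(precise=True))    # -> ('safe', None)
```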
ABSTRACT. The lookahead heuristic for binary-tree-based search in constraint solving favors variables whose assignments propagate the values of many other variables on both branches. The approach has recently been applied to instance partitioning in divide-and-conquer-based parallelization, but in general its connection to modern clause-learning solvers is poorly understood. We show two ways of combining the lookahead heuristic with a modern DPLL(T)-based SMT solver while fully profiting from theory propagation, clause learning, and restarts. Our thoroughly tested prototype implementation is surprisingly efficient as an independent SMT solver on certain instances; in particular, when applied to a non-convex theory, the lookahead-based implementation solves 40% more unsatisfiable instances than the standard implementation.
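As a concrete illustration of the lookahead principle, the following toy, which is our own sketch and not the paper's DPLL(T) integration, scores each unassigned variable of a small CNF instance by the number of unit propagations it triggers, and picks the variable whose worse branch still propagates the most.

```python
# Toy lookahead scoring for CNF SAT (illustrative only): tentatively assign
# each unassigned variable True and False, run unit propagation, and score
# by the *worse* branch, so the chosen variable is informative either way.

def unit_propagate(clauses, assignment):
    assignment = dict(assignment)
    changed = True
    while changed:
        changed = False
        for clause in clauses:
            if any(assignment.get(abs(l)) == (l > 0) for l in clause):
                continue                       # clause already satisfied
            unassigned = [l for l in clause if abs(l) not in assignment]
            if not unassigned:
                return None                    # conflict
            if len(unassigned) == 1:           # unit clause: propagate
                lit = unassigned[0]
                assignment[abs(lit)] = lit > 0
                changed = True
    return assignment

def lookahead_pick(clauses, assignment, variables):
    best_var, best_score = None, -1
    for v in variables:
        if v in assignment:
            continue
        scores = []
        for value in (True, False):
            result = unit_propagate(clauses, {**assignment, v: value})
            # A conflict (failed literal) is highly informative; otherwise
            # count how many variables were propagated beyond v itself.
            scores.append(len(variables) if result is None
                          else len(result) - len(assignment) - 1)
        score = min(scores)                    # demand progress on both branches
        if score > best_score:
            best_var, best_score = v, score
    return best_var

clauses = [[1, 2], [-1, 3], [-1, 4], [-2, -3]]  # made-up tiny instance
print(lookahead_pick(clauses, {}, [1, 2, 3, 4]))
```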
Arrays Made Simpler: An Efficient, Scalable and Thorough Preprocessing
ABSTRACT. The theory of arrays has a central place in software verification due to its ability to model memory and data structures. Yet this theory is known to be hard to solve, both in theory and in practice, especially in the case of the very long formulas coming from unrolling-based verification methods. Standard simplification techniques à la read-over-write suffer from two main drawbacks: they do not scale to very long sequences of stores, and they miss many simplification opportunities because of overly crude syntactic (dis-)equality reasoning.
We propose a new approach to array constraint simplification based on a
new dedicated data structure together with original simplifications and
low-cost reasoning. The technique is efficient and scalable, and it yields significant simplifications. The impact on formula resolution is always positive, and it can be dramatic on some specific classes of problems, e.g., very long formulas or binary-level symbolic execution. While
currently implemented as a preprocessing step, the approach would
benefit from a deeper integration inside a dedicated array solver.
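For reference, the read-over-write rule that the abstract takes as its baseline can be sketched as follows; this toy shows the standard rule and its two weaknesses (it recurses through every store and gets stuck on symbolic indices), not the paper's new data structure.

```python
# Baseline read-over-write simplification (standard rule, illustrative):
# select(store(a, i, v), j) simplifies to v when i and j are syntactically
# equal, and to select(a, j) when both are distinct constants; otherwise
# the crude syntactic (dis-)equality reasoning gets stuck.

def select(array, j):
    if isinstance(array, tuple) and array[0] == "store":
        _, base, i, v = array
        if i == j:                                     # syntactic equality
            return v
        if isinstance(i, int) and isinstance(j, int):  # distinct constants
            return select(base, j)                     # skip this store
        return ("select", array, j)                    # undecided: stuck
    return ("select", array, j)

a = "A"
t = ("store", ("store", a, 0, "x"), 1, "y")
print(select(t, 1))        # -> 'y'
print(select(t, 0))        # -> 'x'  (recurses through one store)
print(select(t, ("k",)))   # -> stuck on a symbolic index
```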
Symmetry breaking in a new stable model search method
ABSTRACT. In this work, we investigate the inclusion of symmetry breaking in the answer set programming (ASP) framework. The notion of symmetry is widely studied in various domains, particularly in constraint programming, where symmetry breaking has significantly improved the performance of many constraint solvers. Combinatorial problems usually contain many symmetries that can make their resolution difficult for solvers that do not take them into account: these symmetries lead solvers into useless exploration of symmetric, redundant branches of the search tree. The ASP framework is well known in knowledge representation and reasoning, yet only a few works on symmetry in ASP exist. We propose in this paper a new ASP solver, based on a novel semantics, that we enhance with symmetry breaking. This method with symmetry elimination is implemented and used for the resolution of a large variety of combinatorial problems. The obtained results are very promising and show an advantage of our method over other known ASP methods.
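A small self-contained illustration of the pruning effect of symmetry breaking, using graph colouring rather than ASP and a lex-leader filter as one classical breaking scheme; it is our own toy, not the method of the paper.

```python
# Permuting colour names maps solutions to solutions, so solvers that ignore
# this symmetry explore redundant branches. Keeping only lexicographically
# minimal colourings under all colour permutations removes the duplicates.
from itertools import permutations, product

edges = [(0, 1), (1, 2), (2, 0), (2, 3)]   # a small made-up graph
colours = range(3)

def proper(c):
    return all(c[u] != c[v] for u, v in edges)

solutions = [c for c in product(colours, repeat=4) if proper(c)]
lex_leaders = [c for c in solutions
               if all(c <= tuple(p[x] for x in c)
                      for p in permutations(colours))]
print(len(solutions), "proper colourings,",
      len(lex_leaders), "after lex-leader filtering")   # 12 vs 2
```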
On Disallowing Punctual Intervals in Reflexive Semantics of Halpern-Shoham Logic
ABSTRACT. Halpern-Shoham logic (HS) is a very expressive and elegant formalism for interval temporal reasoning, in which the satisfiability problem is undecidable. One of the methods for obtaining HS fragments of lower computational complexity is to adopt the softened (reflexive) semantics of the accessibility relations. In this paper we consider disallowing punctual intervals under reflexive semantics. We show that in this case we gain additional expressive power, which over discrete orders of time points results in PSPACE-hardness of the Horn fragment of HS without diamond modal operators and in undecidability of the core fragment of HS.
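For readers unfamiliar with the terminology, the distinction the abstract builds on can be stated as follows, in our own notation and under one common convention for the HS relation "begins"; the reflexive (softened) variant replaces the strict comparison with a non-strict one.

```latex
% Our notation; one common convention over a linear order (D, <=).
% Non-strict semantics admits punctual intervals [x,x]; the strict one does not:
\[
\mathbb{I}(D)=\{\,[x,y] \mid x,y\in D,\ x\le y\,\},
\qquad
\mathbb{I}^{-}(D)=\{\,[x,y] \mid x<y\,\}.
\]
% The "begins" relation and its reflexive (softened) variant:
\[
[x,y]\,R_B\,[x',y'] \iff x=x' \wedge y'<y,
\qquad
[x,y]\,R_B^{\le}\,[x',y'] \iff x=x' \wedge y'\le y.
\]
```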
ABSTRACT. A metaquery (MQ) is a data mining tool for inferring relations between data items in a relational database. The concept of MQs enables the autonomous discovery of logical rules involving the database tables.
A central module of any MQ system is the MQ generator, which automatically generates all possible MQs to be tested against a database. The MQ generator must work efficiently while not missing any meaningful MQ. In this paper, we present an algorithm for MQ generation that works as a search algorithm in which the states are all possible MQs and the search tree is pruned whenever possible. Preliminary experiments show that our approach indeed leads to a significant reduction in computational resources.
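The pruning idea can be illustrated with a small runnable toy; metaqueries are relational patterns rather than the itemsets used here, so this is an analogy for the anti-monotone pruning principle, not the paper's generator.

```python
# Generating candidate patterns as a pruned search (illustrative): if a
# partial candidate already lacks support in the database, none of its
# extensions can be meaningful, so the whole subtree is skipped.

db = [{"a", "b", "c"}, {"a", "b"}, {"a", "c"}, {"b"}]   # toy database
items = ["a", "b", "c"]
MIN_SUPPORT = 2
visited = 0

def support(pattern):
    return sum(pattern <= row for row in db)

def generate(pattern, start):
    global visited
    visited += 1
    if support(pattern) < MIN_SUPPORT:
        return                      # prune: extensions only lose support
    if pattern:
        print("candidate:", sorted(pattern), "support:", support(pattern))
    for k in range(start, len(items)):
        generate(pattern | {items[k]}, k + 1)

generate(frozenset(), 0)
print("states visited:", visited)
```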