VERIFY-2010: Papers with Abstracts

Papers
Abstract. Security protocols are short programs that aim to secure communications over a network. They are widely used in our everyday life. Their verification using symbolic models has proved valuable for detecting attacks and proving security properties. In particular, several automatic tools have been developed and are used to detect flaws efficiently.

In this talk, we will first review results and techniques that allow automatic analysis of security protocols. In the second part, we will present recent results demonstrating that the formal abstract models used for verification are actually sound with respect to much richer models that consider issues of complexity and probability. As a consequence, it is possible to derive strong, clear security guarantees while keeping the simplicity of reasoning in symbolic models.
Abstract. Proving that programs satisfy their specifications can benefit enormously from tool support, but theorem proving tools can also constrain a user's thinking. This talk argues that, for large or complex programs, it is layers of abstraction that make or break the comprehensibility of developments.

However powerful a theorem proving tool is, it will make little long-term contribution to the understanding of programs if the user is forced to bend their steps of development to fit the tool. Abstraction is essential to achieve separation of issues and to help in the understanding of complex systems. The formalism chosen governs the difficulty of completing detailed proofs that can be verified with mechanically checkable rules.

This talk will emphasize abstractions and techniques for reasoning about the development of concurrent programs. In conclusion, the argument will turn to positive recommendations for tool developers.
Abstract. Formal verification techniques are used routinely for finite-state digital circuits. Theorem proving is also used successfully for infinite-state discrete systems. But many safety-critical computers are actually embedded in physical systems. Hybrid systems model complex physical systems as dynamical systems with interacting discrete transitions and continuous evolutions along differential equations. They arise frequently in many application domains, including aviation, automotive, railway, and robotics. There is a well-understood theory for proving programs. But what about complex physical systems? How can we prove that a hybrid system works as expected, e.g., that an aircraft does not crash into another one?

This talk illustrates the complexities and pitfalls of hybrid systems verification. It describes a theoretical and practical foundation for deductive verification of hybrid systems called differential dynamic logic. The proof calculus for this logic is interesting from a theoretical perspective, because it is a complete axiomatization of hybrid systems relative to differential equations. The approach is of considerable practical interest too. Its implementation in the theorem prover KeYmaera has been used successfully to verify collision avoidance properties in the European Train Control System and in air traffic control systems. The dimensionality and the nonlinearities of the hybrid dynamics of these systems make them surprisingly tricky, such that they are still out of scope for other verification tools.
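
To give a flavor of the logic (a generic textbook-style example, not necessarily one from the talk): for a vehicle at position x with velocity v that brakes at rate b > 0, the dL formula

$$v \ge 0 \;\land\; v^2 \le 2b(m - x) \;\rightarrow\; [\{x' = v,\; v' = -b \;\&\; v \ge 0\}]\; x \le m$$

states that if the initial speed is within the braking envelope, then every evolution of the differential equation keeps the position at or below the limit m. The box modality $[\alpha]\phi$ asserts that $\phi$ holds after all runs of the hybrid program $\alpha$.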
Abstract. It is a common belief that the rise of standardized software certification schemes like the Common Criteria (CC) would give a boost to formal verification, and that software certification may be a killer application for program verification. However, while formal models are indeed used throughout high-assurance certification, verification of the actual implementation is not required by the CC and largely neglected in certification practice - despite the great advances in program verification over the last decade.

In this paper we discuss the gap between program verification and CC software certification, and we point out possible uses of code-level program verification in the CC certification process.
Abstract. Using the language of event orderings and event classes, and using a type of atoms to represent nonces, keys, signatures, and ciphertexts, we give an axiomatization of a theory in which authentication protocols can be formally defined and strong authentication properties proven. This theory is inspired by PCL, the protocol composition logic defined by Datta, Derek, Mitchell, and Roy.

We developed a general-purpose tactic (in the NuPrl theorem prover) and applied it to automatically prove that several protocols satisfy a strong authentication property. Several unexpected subtleties exposed in this development are addressed with new concepts - legal protocols and a fresh signature criterion - and with reasoning that makes use of a well-founded causal ordering on events.

This work shows that proofs in a logic like PCL can be automated, provides a new and possibly simpler axiomatization for a theory of authentication, and addresses some issues raised in a critique of PCL.
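
To give a flavor of the axiomatic style (an illustrative paraphrase, not the paper's actual axioms): a typical principle in such event-based theories states that if an atom a (say, a nonce) is first released at an event e, then any event e' whose information mentions a must causally follow e:

$$\mathit{fresh}(e, a) \;\land\; a \sqsubseteq \mathit{info}(e') \;\Rightarrow\; e \preceq e'$$

Authentication properties can then be proved by induction along the well-founded causal order.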
Abstract. Craig interpolation has become a versatile tool in formal verification, in particular for generating intermediate assertions in safety analysis and model checking. In this paper, we present a novel interpolation procedure for the theory of arrays, extending an interpolating calculus for the full theory of quantifier-free Presburger arithmetic, which will be presented at IJCAR this year. We investigate the use of this procedure in a software model checker for C programs. A distinguishing feature of the model checker is its ability to faithfully model machine arithmetic with an encoding into Presburger arithmetic with uninterpreted predicates. The interpolation procedure allows the synthesis of quantified invariants about arrays. This paper presents work in progress; we include initial experiments to demonstrate the potential of our method.
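
As background (standard material, not specific to this paper): a Craig interpolant for an unsatisfiable conjunction A ∧ B is a formula I over the symbols common to A and B such that A implies I and I ∧ B is unsatisfiable. Over arrays, for instance, taking

$$A \equiv (a' = \mathrm{store}(a, i, 0)), \qquad B \equiv (\mathrm{select}(a', i) = 1),$$

the formula I ≡ (select(a', i) = 0) is an interpolant: it follows from A by read-over-write, contradicts B, and mentions only the shared symbols a' and i.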
Abstract. Timed networks are parameterized systems of timed automata. Solving reachability problems (e.g., whether a set of unsafe states can ever be reached from the set of initial states) for this class of systems allows one to prove safety properties regardless of the number of processes in the network. The difficulty in solving this kind of verification problem is two-fold. First, each process has (at least) one clock variable ranging over an infinite set, such as the reals or the integers. Second, every system is parameterized with respect to the number of processes and to the topology of the network. The reachability problem for some restricted classes of parameterized timed networks is decidable, under suitable assumptions, via a backward reachability procedure. Despite these theoretical results, there are few systems capable of automatically solving such problems. Instead, the number n of processes in the network is fixed and a tool for timed automata (like Uppaal) is used to check the desired property for the given n.

In this paper, we explain how to attack fully parametric and timed reachability problems by translation to the declarative input language of MCMT, a model checker for infinite-state systems based on Satisfiability Modulo Theories techniques. We show the success of our approach on a number of standard algorithms, such as the Fischer protocol. Preliminary experiments show that fully parametric problems can be solved more easily by MCMT than their instances for a fixed (and large) number of processes by other systems.
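
The backward reachability scheme behind such tools (stated generically here) iterates the pre-image of the set U of unsafe states until a fixpoint is reached:

$$U_0 = U, \qquad U_{k+1} = U_k \cup \mathit{Pre}(U_k), \qquad \mathit{Pre}(S) = \{\, s \mid \exists s'.\; s \rightarrow s' \,\land\, s' \in S \,\}$$

The system is safe if and only if the fixpoint is disjoint from the initial states. In an SMT-based checker such as MCMT, each U_k is represented symbolically by a first-order formula, and the fixpoint and disjointness tests become satisfiability queries.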
Abstract. Software testing is the most widely used technique for software verification in industry. In the case of safety-critical software, the test set can be required to cover a high percentage (up to 100%) of the software code according to some metrics. Unfortunately, attaining such high percentages is not easy using standard automatic test generation tools, and manual generation by domain experts is often necessary, thereby significantly increasing the associated costs.

In previous papers, we have shown how the test generation process for C programs can be automated via the bounded model checker CBMC. In particular, we have shown how CBMC can be productively used for the automatic generation of test sets covering 100% of the branches of 5 modules of ERTMS/ETCS, a safety-critical industrial software system by Ansaldo STS. Unfortunately, the test sets we automatically generated are of lower "quality" than the test sets manually generated by domain experts: both attain the desired 100% branch coverage, but the automatically generated test sets are roughly twice the size of the corresponding manually generated ones. Indeed, the automatically generated test sets contain redundant tests, i.e., tests that do not contribute to reaching the desired 100% branch coverage. These redundant tests are useless from the perspective of branch coverage, are not easy to detect and eliminate a posteriori, and, if kept, imply additional costs during the verification process.

In this paper we present a new methodology for the automatic generation of "high quality" test sets guaranteeing full branch coverage. Given an initially empty test set T, the basic idea is to extend T with a test covering as many as possible of the branches not yet covered by T. This requires an analysis of the control flow graph of the program, first to identify a path p with the desired property, and then to run a tool (CBMC in our case) that returns either a test causing the execution of p or a guarantee that no such test exists (under the given assumptions). We have evaluated the methodology on 31 modules of the Ansaldo STS ERTMS/ETCS software, thus greatly extending the benchmark set. For 27 of the 31 modules we succeeded in automatically generating "high quality" test sets attaining full branch coverage: all the feasible branches are executed by at least one test, and the sizes of our test sets are significantly smaller than those of the test sets manually generated by domain experts (and thus also significantly smaller than the test sets automatically generated with our previous methodology). However, for 4 modules we were unable to automatically generate test sets attaining full branch coverage: these modules contain complex functions beyond CBMC's capacity.
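
A minimal sketch of this greedy loop (our reconstruction of the description above; find_path_covering, generate_test_for, and branches_executed_by are hypothetical placeholders for the CFG analysis, the CBMC query, and test execution, respectively):

    def generate_test_set(cfg, all_branches):
        # Greedy test-set generation: repeatedly target the branches
        # not yet covered, asking the back end (CBMC in the paper) for
        # a test executing a path through as many of them as possible.
        tests, covered, infeasible_paths = [], set(), set()
        while covered != all_branches:
            uncovered = all_branches - covered
            # Hypothetical CFG analysis: a path through as many uncovered
            # branches as possible, avoiding paths already shown infeasible.
            path = find_path_covering(cfg, uncovered, infeasible_paths)
            if path is None:
                break  # remaining branches lie only on infeasible paths
            # Hypothetical CBMC-style query: a concrete input executing
            # `path`, or None if no such input exists.
            test = generate_test_for(path)
            if test is None:
                infeasible_paths.add(path)
                continue
            tests.append(test)
            covered |= branches_executed_by(test)
        return tests

Because every added test covers at least one previously uncovered branch, no test in the resulting set is redundant with respect to branch coverage.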

Our analysis of 31 modules greatly extends our previous analysis based on 5 modules, confirming that automatic test generation tools based on CBMC can be productively used in industry for attaining full branch coverage. Moreover, the methodology presented in this paper increases productivity further by substantially reducing the number of generated tests and thus the costs of the testing phase.
Abstract. Interactive theorem proving is tackling ever larger formalization and verification projects, and there is a critical need for theory engineering techniques to support these efforts. One such technique is effective package management, which has the potential to simplify the development of logical theories by precisely checking dependencies and promoting re-use. This paper introduces a domain-specific language for defining composable packages of higher-order logic theories, which is designed to naturally handle the complex dependency structures that often arise in theory development. The package composition language functions as a module system for theories, and the paper presents a well-defined semantics for the supported operations. Preliminary tests of the package language and its toolset have been made by packaging the theories distributed with the HOL Light theorem prover. This experience is described, leading to some initial theory engineering discussion on the ideal properties of a reusable theory.
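
One way to picture the underlying model (our illustration only; the paper defines its own language and semantics) is to view a theory package as a set of exported theorems together with a set of assumptions, with composition satisfying one package's assumptions by another's exports:

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Theory:
        # Illustrative model: a package exports theorems and may
        # assume theorems that it does not prove itself.
        assumes: frozenset
        exports: frozenset

    def compose(lower: Theory, upper: Theory) -> Theory:
        # Satisfy upper's assumptions with lower's exports; anything
        # left unsatisfied becomes an assumption of the composite.
        remaining = upper.assumes - lower.exports
        return Theory(assumes=lower.assumes | remaining,
                      exports=lower.exports | upper.exports)

In this reading, dependency checking amounts to verifying that a composed package's remaining assumptions are exactly the interfaces it declares.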
Abstract. Machine verification of formal arguments can only increase our confidence in the correctness of those arguments, but the costs of employing machine verification still outweigh the benefits for some common kinds of formal reasoning activities. As a result, usability is becoming increasingly important in the design of formal verification tools. We describe the "aartifact" lightweight verification system, designed for processing formal arguments involving basic, ubiquitous mathematical concepts. The system is a prototype for investigating potential techniques for improving the usability of formal verification systems. It leverages techniques drawn both from existing work and from our own efforts. In addition to a parser for a familiar concrete syntax and a mechanism for automated syntax lookup, the system integrates (1) a basic logical inference algorithm, (2) a database of propositions governing common mathematical concepts, and (3) a data structure that computes congruence closures of relations found in this database. Together, these components allow the system to better accommodate the expectations of users interested in verifying typical formal arguments involving algebraic manipulations of numbers, sets, vectors, and related operators and predicates. We demonstrate the reasonable performance of this system on typical formal arguments and briefly discuss how the system's design contributes to its usability in two use cases.
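
Component (3) relies on congruence closure; the following is a naive generic sketch of the idea (the standard algorithm, not the system's actual implementation), for ground terms with unary applications:

    class CongruenceClosure:
        # Union-find plus a fixpoint pass that merges f(a) with f(b)
        # whenever a and b become equal (the congruence rule).
        def __init__(self):
            self.parent = {}
            self.apps = []  # application terms, stored as (f, arg)

        def find(self, t):
            self.parent.setdefault(t, t)
            while self.parent[t] != t:
                self.parent[t] = self.parent[self.parent[t]]
                t = self.parent[t]
            return t

        def add_app(self, f, arg):
            term = (f, arg)
            self.apps.append(term)
            self.find(term)      # register the term in the union-find
            self._propagate()    # it may be congruent to an existing app
            return term

        def union(self, s, t):
            rs, rt = self.find(s), self.find(t)
            if rs != rt:
                self.parent[rs] = rt
                self._propagate()

        def _propagate(self):
            changed = True
            while changed:
                changed = False
                for f1, a1 in self.apps:
                    for f2, a2 in self.apps:
                        if f1 == f2 and self.find(a1) == self.find(a2):
                            t1, t2 = (f1, a1), (f2, a2)
                            if self.find(t1) != self.find(t2):
                                self.parent[self.find(t1)] = self.find(t2)
                                changed = True

For example, after cc.add_app('f', 'a'), cc.add_app('f', 'b'), and cc.union('a', 'b'), the terms f(a) and f(b) end up in the same equivalence class by congruence.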
Abstract. Virtualisation is increasingly being used in security-critical systems to provide isolation between system components. As the foundation of any virtualised system, hypervisors need to provide a high degree of assurance with regard to correctness and isolation. Microkernels, such as seL4, can be used as hypervisors. Functional correctness of seL4's uniprocessor C implementation has been formally verified. The framework employed to verify seL4 is tailored to facilitate reasoning about sequential programs. However, we want to be able to use the full power of multiprocessor/multicore systems and, at the same time, leverage the high assurance seL4 already gives us for uniprocessors.

This work-in-progress paper explores possible multiprocessor designs of seL4 and their amenability to verification. For the chosen design, it contributes a formal multiprocessor execution model that lifts seL4's uniprocessor model and proofs into a multiprocessor context using only minor modifications. The theorems proving the validity of the lift operation are machine-checked in Isabelle/HOL and walked through in the paper.
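
The intuition behind such a lift (our paraphrase; the precise model and theorems live in the paper's Isabelle/HOL development) is that if kernel executions on different cores are serialised, for example by a lock, then every multiprocessor trace interleaves whole uniprocessor kernel steps, and an invariant I preserved by each uniprocessor step is preserved along any such trace:

$$(\forall s\, s'.\; I(s) \land s \xrightarrow{\mathrm{kstep}} s' \Rightarrow I(s')) \;\Longrightarrow\; (\forall \tau.\; I(\tau_0) \Rightarrow \forall i.\; I(\tau_i))$$

where τ ranges over traces whose steps are serialised kernel steps.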
Abstract. Simpson's four-slot algorithm has been an instructive example in studying various assertional proof methods and logics geared towards shared-variable concurrency. Previously, techniques like rely-guarantee, data refinement, and resource separation have been applied to simplify the construction of its correctness proof. Still, an elegant, concise, and insightful proof remains elusive.

Recently, with a new generation of logics coming of age that are, for the first time, equipped with ownership transfer, it becomes imperative to ask to what extent ownership transfer can facilitate a nice proof of the algorithm. Ownership transfer is especially promising here because the conflict resolution mechanism in the four-slot algorithm can be easily recast as an implementation based on ownership transfer.
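
For reference, the standard presentation of the algorithm (a sketch; variable names vary across the literature) uses four data slots and three control variables so that the single writer and the single reader never access the same slot at the same time:

    class FourSlot:
        # Simpson's four-slot mechanism: a wait-free shared variable
        # for one writer and one reader, with no locks.
        def __init__(self, initial):
            self.data = [[initial, initial], [initial, initial]]
            self.slot = [0, 0]   # last slot written within each pair
            self.latest = 0      # pair most recently written
            self.reading = 0     # pair the reader announced

        def write(self, item):
            pair = 1 - self.reading      # avoid the pair being read
            index = 1 - self.slot[pair]  # avoid the slot last written
            self.data[pair][index] = item
            self.slot[pair] = index      # publish the slot ...
            self.latest = pair           # ... then the pair

        def read(self):
            pair = self.latest
            self.reading = pair          # announce before reading
            index = self.slot[pair]
            return self.data[pair][index]

The conflict resolution lives entirely in the four control-variable accesses, which is what makes recasting it in terms of ownership transfer of the exchanged slot so natural.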
Abstract. We present a machine-checked correctness proof for information flow noninterference based on interprocedural slicing. It reuses a correctness proof of the context-sensitive interprocedural slicing algorithm of Horwitz, Reps, and Binkley. The underlying slicing framework is modular in the programming language used; by instantiating this framework, the correctness proofs hold for the respective language, without reproving anything in the correctness proofs for slicing and noninterference. We present instantiations with two different languages to show the applicability of the framework, and thus obtain a verified noninterference algorithm for these languages. The formalization and proofs are conducted in the proof assistant Isabelle/HOL.
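
The slicing-based criterion at work here (stated generically; the paper's formal statement is part of its Isabelle/HOL development) is that a program is noninterferent whenever the backward slice of the public output contains no secret input, since the slice over-approximates everything that can influence the output:

$$\mathit{HighSources} \;\cap\; \mathrm{backward\_slice}(\mathit{LowSink}) = \emptyset \;\Longrightarrow\; \text{noninterference}$$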