
09:00-10:00 Session 5: Keynote Speech: Peter O'Hearn
Location: Panorama
Moving Fast with Program Verification Technology
10:00-10:30 Coffee Break
10:30-12:30 Session 6A: Program Analysis
Location: Panorama
Modal Kleene Algebra Applied to Program Correctness

ABSTRACT. Modal Kleene algebras are relatives of dynamic logics that support program construction and verification by equational reasoning. We describe their application in implementing versatile program correctness components within interactive theorem provers such as Isabelle/HOL. Starting from a weakest precondition based component with a simple relational store model, we show how variants for Hoare logic, strongest postconditions and program refinement can be built in a principled way. Modularity of the approach is further demonstrated by outlining algebraic variants that capture recursion and program termination.
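As a rough sketch of the equational style such components build on (standard modal Kleene algebra notation, assumed here for illustration rather than taken from the paper): the forward box operator $|x]$ acts as a weakest liberal precondition transformer on tests, sequential composition and choice distribute over it, and validity of a Hoare triple reduces to an inequality between tests:

```latex
|x \cdot y]\,q = |x]\,|y]\,q,
\qquad
|x + y]\,q = |x]\,q \cdot |y]\,q,
\qquad
\{p\}\,x\,\{q\} \;\Longleftrightarrow\; p \le |x]\,q
```

Derived rules for conditionals, loops, and refinement then follow by equational reasoning over these laws, which is what makes the approach amenable to mechanisation in Isabelle/HOL.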

Mechanised Verification Patterns for Dafny

ABSTRACT. In Dafny, a single language is used both to specify and to implement programs. Dafny then uses a fully automated theorem prover to verify that the implementation satisfies the specification. However, the prover often needs further guidance from the user, and another role of the language is to provide such hints and guidance. In this paper, we present a set of verification patterns to support this process. In previous work, we developed a tactic language for Dafny in which users can encode their verification patterns and re-apply them to several proof tasks. We extend this language with new features, implement our patterns in it, and show, through experiments, the generality of the patterns and the applicability of the tactic language.

Upper and Lower Amortized Cost Bounds of Programs Expressed as Cost Relations

ABSTRACT. Resource analysis aims at statically obtaining bounds on the resource consumption of programs in terms of input parameters. A well-known approach to resource analysis is based on transforming the target program into a set of cost relations and then solving these into a closed-form bound. In this paper we develop a new analysis for computing upper and lower cost bounds of programs expressed as cost relations. The analysis is incremental: it computes the cost of each loop or function separately and composes the obtained expressions to obtain the total cost. Despite being modular, the analysis can obtain precise upper and lower bounds of programs with amortized cost. The key is to obtain bounds that depend on the values of the variables at the beginning and at the end of each program part. In addition, we use a novel cost representation called cost structures, which reduces the inference of complex polynomial expressions to a set of linear problems that can be solved efficiently. We implemented our method and performed an extensive experimental evaluation that demonstrates its power.
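To make the notion of cost relations concrete (a schematic example, not taken from the paper): a loop `while (i < n) i++` with unit cost per iteration gives rise to the recursive relations

```latex
C(i,n) = 0 \quad \text{if } i \ge n,
\qquad
C(i,n) = 1 + C(i+1,\,n) \quad \text{if } i < n,
\qquad\text{hence}\qquad
C(i,n) = \max(n - i,\; 0).
```

Here the closed-form upper and lower bounds coincide; amortized costs arise when the bound of one program part depends on the variable values left behind by another, which is why bounds relating initial and final values of each part matter.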

Sound and Complete Mutation-Based Program Repair

ABSTRACT. This work presents a novel approach for automatically repairing an erroneous program with respect to a given set of assertions. Programs are repaired using a predefined set of mutations. We use a bounded notion of correctness; for a large enough bound, all returned programs are fully correct. To ensure no changes are made to the original program unless necessary, if a program can be repaired by applying a set of mutations $Mut$, then no superset of $Mut$ is later considered. Programs are checked in order of increasing number of mutations, and every minimal repaired program is returned as soon as it is found.

We impose no assumptions on the number of erroneous locations in the program, yet we are able to guarantee soundness and completeness: a program is returned iff it is minimal and bounded-correct.

Searching the space of mutated programs is reduced to searching unsatisfiable sets of constraints, which is performed efficiently using a sophisticated cooperation between SAT and SMT solvers. Similarities between mutated programs are exploited in a new way, by using both the SAT and the SMT solvers incrementally.

We implemented a prototype of our algorithm, compared it with a state-of-the-art repair tool and got very encouraging results.
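The search strategy described above can be sketched in miniature (a toy enumeration over a two-location program with hypothetical mutations and assertions; the actual approach encodes this search as constraint solving with incremental SAT and SMT solvers):

```python
from itertools import combinations, product

# Toy "program" with two mutable operator locations:
#   f(x) = (x OP1 1) OP2 2
# The (hypothetical) assertions require f(x) == x + 3 on bounded inputs.
ORIG = {"OP1": "-", "OP2": "-"}                # buggy original: (x - 1) - 2
MUTS = {"OP1": ["+", "*"], "OP2": ["+", "*"]}  # candidate mutations per location

def run(ops, x):
    y = {"+": x + 1, "-": x - 1, "*": x * 1}[ops["OP1"]]
    return {"+": y + 2, "-": y - 2, "*": y * 2}[ops["OP2"]]

def bounded_correct(ops, bound=8):
    # Bounded correctness: assertions checked on inputs in [-bound, bound].
    return all(run(ops, x) == x + 3 for x in range(-bound, bound + 1))

def repair():
    repairs = []
    for k in range(len(MUTS) + 1):             # increasing number of mutations
        for subset in combinations(MUTS, k):
            # Minimality: never revisit a superset of a found repair.
            if any(set(found) <= set(subset) for found, _ in repairs):
                continue
            for choice in product(*(MUTS[loc] for loc in subset)):
                ops = dict(ORIG, **dict(zip(subset, choice)))
                if bounded_correct(ops):
                    repairs.append((subset, ops))
                    break                      # one witness per location set
    return repairs
```

On this toy instance, no single mutation repairs the program, so the first (and only) repair mutates both locations, mirroring the "increasing number of mutations, prune supersets" discipline of the paper.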

10:30-12:30 Session 6B: Industry Track
Location: Phoenix
Simulink to UPPAAL Statistical Model Checker: Analyzing Automotive Industrial Systems

ABSTRACT. The advanced technology used for developing modern automotive systems increases their complexity, making correctness assurance very tedious. To enable analysis by simulation, engineers use MATLAB/Simulink modeling during system development. In this paper, we provide further analysis means for industrial Simulink models by proposing a pattern-based, execution-order-preserving transformation of Simulink blocks into the input language of the UPPAAL Statistical Model Checker, that is, timed/hybrid automata with stochastic semantics. The approach makes it possible to analyze complex Simulink models of automotive systems, and we report our experience with two vehicular systems, the Brake-by-Wire and the Adjustable Speed Limiter.

Rule-based Incremental Verification Tools Applied to Railway Designs and Regulations

ABSTRACT. When railway engineers design railway infrastructures, they need to keep in mind numerous regulations (and uncodified, ``rule-of-thumb'', empirical knowledge) for ensuring safety. Many of these regulations are simple, but demonstrating conformance with them often involves tedious manual work. We worked on automating the verification of railway regulations against CAD designs, and integrated a verification tool and methodology into the tool chain of railway engineers. Automatically generating a model from the railway designs and running the verification tool on it is a valuable step forward compared to manually reviewing the design for compliance and consistency. However, seamlessly integrating the consistency checking into the CAD workflow of the design engineers requires a fast, on-the-fly mechanism, similar to the real-time compilation done in standard programming tools.

In consequence, in this paper we turn to incremental verification and investigate existing rule-based tools, looking at various aspects relevant for engineering railway designs. We discuss existing state-of-the-art methods for incremental verification in the setting of rule-based modelling. We survey and compare relevant tools (ca. 30) and discuss if/how they could be integrated in a railway design environment, such as CAD software. We settle on four promising approaches: XSB Prolog, a standard tool in the Datalog community, RDFox from the semantic web community, Dyna from the AI community, and LogicBlox, a proprietary solution.
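To fix intuitions about the rule-based style these tools share (a schematic example with made-up facts and distances, not a rule from the actual regulations): a consistency rule such as "every main signal must be announced by a distant signal within a given distance window" amounts to a Datalog rule deriving violation facts, which a naive evaluator computes as follows. Incremental engines of the kind surveyed maintain such derived facts under fact insertions and deletions instead of recomputing from scratch.

```python
# Track facts: (signal_id, kind, position_in_metres); one track, one rule.
SIGNALS = [
    ("s1", "distant", 800),
    ("s2", "main", 1000),    # s1 announces it 200 m ahead: OK
    ("s3", "main", 2500),    # no distant signal in range: violation
]
D_MIN, D_MAX = 150, 400      # assumed announcement-distance window

def violations(facts):
    """Naive evaluation of one Datalog-style rule:
    violation(M) :- main(M),
                    not exists distant(D) with
                        D_MIN <= pos(M) - pos(D) <= D_MAX."""
    bad = []
    for sid, kind, pos in facts:
        if kind != "main":
            continue
        announced = any(k == "distant" and D_MIN <= pos - p <= D_MAX
                        for _, k, p in facts)
        if not announced:
            bad.append(sid)
    return bad
```

Running `violations(SIGNALS)` flags `s3` only; an incremental engine would re-derive just the violations affected when, say, a distant signal is moved in the CAD model.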

RIVER: A Binary Analysis Framework using Symbolic Execution and Reversible x86 Instructions

ABSTRACT. We present a binary analysis framework based on symbolic execution with the distinguishing capability to execute stepwise forward and also backward through the execution tree. It was developed internally at Bitdefender and code-named RIVER. The framework provides components such as a taint engine, a dynamic symbolic execution engine, and integration with Z3 for constraint solving.

Model-Based Design of an Energy-System Embedded Controller using Taste

ABSTRACT. Model-based design has become a standard practice in the development of control systems. Many solutions provide simulation, code generation, and other functionalities to minimize the design time and optimize the resulting control system implementation.

In this paper, we report on the experience of using Taste as the design environment for the controller of an energy system comprising a parabolic dish collector and a Stirling engine. Besides the standard advantages of model-based design, an appealing feature of Taste is the possibility of specifying the design model in a formal language such as SDL. The complexity of the designed system stressed the tool's performance and usability. Nevertheless, the functionalities provided by Taste were essential to manage such complexity.

12:30-14:00 Lunch Break and FME Business Meeting
14:00-15:45 Session 7A: Learning
Location: Panorama
Learning Moore Machines from Input-Output Traces

ABSTRACT. The problem of learning automata from example traces (but no equivalence or membership queries) is fundamental in automata learning theory and practice. In this paper we study this problem for finite state machines with inputs and outputs, and in particular for Moore machines. We introduce three algorithms for solving this problem: (1) the PTAP algorithm, which transforms a set of input-output traces into an incomplete Moore machine and then completes that machine with self-loops; (2) the PRPNI algorithm, which uses the well-known RPNI algorithm for automata learning to learn a product of automata encoding a Moore machine; and (3) the MooreMI algorithm, which directly learns a Moore machine using PTAP extended with state merging. We prove that MooreMI has the fundamental identification in the limit property. We also compare the algorithms experimentally in terms of the size of the learned machine and several notions of accuracy, introduced in this paper. Finally, we compare with OSTIA, an algorithm that learns a more general class of transducers, and find that OSTIA generally does not learn a Moore machine, even when fed with a characteristic sample.
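The first of the three algorithms can be sketched concretely (a simplified Python sketch of the PTAP idea as described in the abstract, not the authors' implementation; it ignores conflicts between traces that disagree on an output). States are input prefixes, each carrying the output observed for it, and missing transitions are completed with self-loops:

```python
def ptap(traces, alphabet):
    """Build a prefix-tree Moore machine from (inputs, outputs) traces,
    where outputs has one more symbol than inputs (the output of every
    visited state, starting with the initial state), then complete
    missing transitions with self-loops."""
    delta, out = {}, {}
    for inputs, outputs in traces:
        state = ()                       # states are input prefixes
        out[state] = outputs[0]
        for a, o in zip(inputs, outputs[1:]):
            nxt = state + (a,)
            delta[(state, a)] = nxt
            out[nxt] = o
            state = nxt
    for state in list(out):              # completion: self-loop on missing inputs
        for a in alphabet:
            delta.setdefault((state, a), state)
    return delta, out

def run_machine(delta, out, inputs):
    """Replay an input word; a Moore machine emits one output per state."""
    state, produced = (), [out[()]]
    for a in inputs:
        state = delta[(state, a)]
        produced.append(out[state])
    return produced
```

On a trained trace the machine reproduces the observed outputs; on unseen inputs the self-loop completion keeps it total, which is exactly what the MooreMI algorithm then refines via state merging.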

Finite Model Finding Using the Logic of Equality with Uninterpreted Functions

ABSTRACT. The problem of finite model finding, finding a satisfying model for a set of first-order logic formulas for a finite scope, is an important step in many verification techniques. In MACE-style solvers, the problem is mapped directly to a SAT problem. We investigate an alternative solution of mapping the problem to the logic of equality with uninterpreted functions (EUF), a decidable logic with many well-supported tools (e.g., SMT solvers). EUF reasoners take advantage of the typed functional structures found in the formulas to improve performance. The challenge is that EUF reasoning is not inherently finite scope. We present an algorithm for mapping a finite model finding problem to an equisatisfiable EUF problem. We present results that show our method has better overall performance than existing tools on a range of problems.
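The core of such a mapping can be illustrated (a schematic encoding assumed for illustration, not necessarily the paper's exact one): restrict an uninterpreted sort to n distinct constants and force every function application over them back into that set, yielding an EUF/SMT-LIB problem whose models stay within the finite scope:

```python
from itertools import product

def euf_scope(fun, arity, n):
    """Emit SMT-LIB2 text restricting uninterpreted sort S to a scope of n
    elements: n pairwise-distinct constants, and every application of `fun`
    on them must equal one of them."""
    consts = [f"d{i}" for i in range(1, n + 1)]
    lines = ["(declare-sort S 0)"]
    lines += [f"(declare-const {c} S)" for c in consts]
    lines.append(f"(assert (distinct {' '.join(consts)}))")
    lines.append(f"(declare-fun {fun} ({' '.join(['S'] * arity)}) S)")
    for args in product(consts, repeat=arity):
        app = f"({fun} {' '.join(args)})"
        disj = " ".join(f"(= {app} {c})" for c in consts)
        lines.append(f"(assert (or {disj}))")
    return "\n".join(lines)
```

Feeding the emitted text (together with the problem's own axioms) to any SMT solver with EUF support then decides satisfiability for that scope; the typed structure stays visible to the solver instead of being flattened into propositional clauses as in MACE-style SAT encodings.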

Exploring Model Quality for ACAS X

ABSTRACT. The next generation airborne collision avoidance system, ACAS X, targets robustness through a probabilistic model that represents sources of uncertainty. Dynamic programming then produces a look-up table, used during deployment to produce advisories. The model is not present in the final system and is therefore not included in the standard certification processes. Rather, the model is checked indirectly, by ensuring that ACAS X performs as well as, or better than, the state-of-the-art, TCAS. We claim that to build confidence in such systems, it is important to target model quality directly. We investigate this issue of model quality as part of our research on informing certification standards for autonomy. Using ACAS X as our driving example, we study relations between the probabilistic model and the real world, in an attempt to characterize the quality of the model for the purpose of building ACAS X. This paper presents model conformance and model coverage metrics that we developed, their application to ACAS X, and the results that we obtained from our study.

Towards Learning and Verifying Invariants of Cyber-Physical Systems by Code Mutation

ABSTRACT. Cyber-physical systems (CPS), which integrate algorithmic control with physical processes, often consist of physically distributed components communicating over a network. A malfunctioning or compromised component in such a CPS can lead to costly consequences, especially in the context of public infrastructure. In this short paper, we argue for the importance of constructing invariants (or models) of the physical behaviour exhibited by CPS, motivated by their applications to the control, monitoring, and attestation of components. To achieve this despite the inherent complexity of CPS, we propose a new technique for learning invariants that combines machine learning with ideas from mutation testing. We present a preliminary study on a water treatment system that suggests the efficacy of this approach, propose strategies for establishing confidence in the correctness of invariants, then summarise some research questions and the steps we are taking to investigate them.

14:00-15:00 Session 7B: Industry Track
Location: Phoenix
Taming Interrupts For Verifying Industrial Multifunction Vehicle Bus Controllers

ABSTRACT. Multifunction Vehicle Bus controllers (MVBCs) are safety-critical sub-systems in industrial train communication networks. As interrupt-driven systems, MVBCs are hard to verify in practice, for two reasons. First, an MVBC involves the concurrency semantics of deferred interrupt handlers and communication via hardware registers, which existing formalisms do not capture. Second, verifying an MVBC requires considering environmental features (i.e., interrupt ordering), which are hard to model and reason about. To overcome these limitations, we propose a novel framework for the formal verification of MVBCs. First, we formalize the concurrency semantics of MVBCs and describe a sequentialization technique so that well-designed sequential analyses can be performed. Moreover, we introduce the happen-before interrupt graph to model interrupt dependencies and further eliminate false alarms. The framework scaled well on an industrial MVBC product from CRRC Inc. and found 3 severe software bugs, all of which were confirmed by engineers.

Safety-Assured Formal Model-Driven Design of the Multifunction Vehicle Bus Controller

ABSTRACT. In this paper, we present a formal model-driven engineering approach to establishing a safety-assured implementation of the Multifunction Vehicle Bus controller (MVBC), based on the generic reference models and requirements described in the International Electrotechnical Commission (IEC) standard IEC-61375. First, the generic models described in IEC-61375 are translated into a network of timed automata, and some safety requirements tested in IEC-61375 are formalized as timed computation tree logic (TCTL) formulas. With the help of Uppaal, we check and debug whether the timed automata satisfy the formulas. Within this step, several logic inconsistencies in the original standard are detected and corrected. Then, we apply the tool Times to generate C code from the verified model, which was later synthesized into a real MVBC chip. Finally, the runtime verification tool RMOR is applied to verify some safety requirements at the implementation level. We set up a real platform with the widely used MVBC D113, and verified the correctness and scalability of the synthesized MVBC chip more comprehensively. The errors in the standard have been confirmed, and the resulting MVBC has been deployed in a real train communication network.
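For illustration, safety requirements of this kind are typically phrased in the TCTL fragment that Uppaal checks as invariance and bounded-response properties (illustrative shapes with made-up atomic propositions, not formulas from IEC-61375):

```latex
A\Box\,\neg\,\mathit{deadlock},
\qquad
A\Box\,\bigl(\mathit{frame\_sent} \rightarrow A\Diamond_{\le T}\,\mathit{frame\_acked}\bigr)
```

The first asserts that no reachable configuration of the automata network is deadlocked; the second that every sent frame is acknowledged within a time bound $T$, the pattern behind most timing requirements in bus protocols.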

15:45-16:30 Coffee Break