REFINE 2019: REFINEMENT WORKSHOP 2019
PROGRAM FOR MONDAY, OCTOBER 7TH


11:00-12:00 Session 1: Invited talk
11:00
Refinement in practice: from an informal description to a formal specification

ABSTRACT. While the concept of refinement is rather intuitive, there exist different formal definitions of it, reflecting different semantics. Thus, from a practical point of view, the main issue in defining a customized refinement process is to choose an appropriate formal definition. This talk presents two examples that illustrate this point.

The first example concerns SysML/KAOS, a requirements engineering method which combines the traceability provided by SysML with the goal expressiveness provided by KAOS. It allows the requirements that must be satisfied by a system to be represented as a hierarchy of goals. The goal hierarchy is built through a succession of refinements using two main operators: AND and OR. Once constructed, the goal model can be translated into a B System specification. As the refinement links defined between the formal components have to represent the SysML/KAOS refinements, which differ from B System refinement, new proof obligations are generated.
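
To make the AND/OR structure concrete, here is a minimal sketch, invented for illustration and not taken from the talk, of how such a goal hierarchy might be represented; all names are hypothetical.

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical sketch of a SysML/KAOS-style goal hierarchy.
# A goal is either a leaf requirement or is refined into subgoals combined
# with an AND operator (all subgoals needed) or an OR operator (alternatives).

@dataclass
class Goal:
    name: str
    operator: str = "LEAF"                       # "AND", "OR", or "LEAF"
    subgoals: List["Goal"] = field(default_factory=list)

    def refine(self, operator: str, *subgoals: "Goal") -> "Goal":
        """Record one refinement step: this goal is achieved via its subgoals."""
        self.operator, self.subgoals = operator, list(subgoals)
        return self

# Example: the root goal is AND-refined into two subgoals,
# one of which is OR-refined into two alternative designs.
root = Goal("KeepTrainSafe").refine(
    "AND",
    Goal("DetectObstacle").refine("OR", Goal("UseRadar"), Goal("UseCamera")),
    Goal("StopBeforeObstacle"),
)
```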

The second example presents three refinement patterns for algebraic state-transition diagrams (ASTDs): state refinement, transition refinement and loop-transition refinement. The ASTD notation combines the graphical power of a visual notation such as hierarchical state diagrams à la Statecharts with process algebra operators, which streamlines the specification of entity life cycles in information systems. The refinement patterns are derived from practice in using ASTDs to specify information systems. For each pattern, proof obligations are proposed to ensure preservation of behavior through refinement.
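
As a rough intuition only (a generic toy model, not the ASTD formalism or the paper's proof obligations), a state refinement in this spirit replaces an abstract state by a sub-automaton, and the abstract behavior must remain observable once the new internal events are hidden:

```python
# Invented toy example: "Processing" is refined into two sub-states,
# introducing the new internal event "validate".

abstract = {("Idle", "start"): "Processing",
            ("Processing", "finish"): "Done"}

refined = {("Idle", "start"): "Processing.Check",
           ("Processing.Check", "validate"): "Processing.Run",
           ("Processing.Run", "finish"): "Done"}

def trace(machine, start, events):
    """Replay the events that the machine accepts from the given state."""
    state, accepted = start, []
    for e in events:
        if (state, e) in machine:
            state = machine[(state, e)]
            accepted.append(e)
    return accepted

abstract_trace = trace(abstract, "Idle", ["start", "finish"])
refined_trace = trace(refined, "Idle", ["start", "validate", "finish"])
hidden = [e for e in refined_trace if e != "validate"]   # hide the internal event
print(abstract_trace == hidden)   # True: the abstract behavior is preserved
```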

12:00-12:30 Session 2
12:00
Transformations for Generating Type Refinements

ABSTRACT. We present transformations for incrementally defining both inductive sum/variant types and coinductive product/record types in a formal refinement setting. Inductive types are built by incrementally accumulating constructors. Coinductive types are built by incrementally accumulating observers. In each case, when the developer decides that the constructor (resp. observer) set is complete, a transformation is applied that generates a canonical definition for the type. It also generates definitions for functions that have been characterized in terms of patterns over the constructors (resp. copatterns over the observers). Functions that input a possibly-recursive sum/variant type are defined inductively via patterns on the input data. Dually, functions that output a possibly-recursive record type are defined coinductively via copatterns on the function's output. The transformations have been implemented in the Specware system [Specware03] and have been used extensively in the automated synthesis of concurrent garbage collection algorithms [SmithD10, SmithD15] and families of protocol-processing codes for distributed vehicle control [SmithD16].
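
A minimal Python analogue, with invented names and standing in only loosely for the Specware setting, of the two resulting definition styles: an inductive sum type with a function defined by patterns on its constructors, and a coinductive record-like type with functions defined by giving each of its observers.

```python
from dataclasses import dataclass
from typing import Callable, Union

# Inductive sum type: accumulate constructors (Leaf, Node), then define
# functions by pattern matching over them.
@dataclass
class Leaf:
    value: int

@dataclass
class Node:
    left: "Tree"
    right: "Tree"

Tree = Union[Leaf, Node]

def tree_sum(t: Tree) -> int:
    """Defined inductively via patterns on the input constructors."""
    match t:
        case Leaf(value=v):
            return v
        case Node(left=l, right=r):
            return tree_sum(l) + tree_sum(r)

# Coinductive product/record type: accumulate observers (head, tail), then
# define functions by specifying each observer of the output ("copatterns").
class Stream:
    def __init__(self, head: int, tail: Callable[[], "Stream"]):
        self._head, self._tail = head, tail

    def head(self) -> int:          # observer 1
        return self._head

    def tail(self) -> "Stream":     # observer 2, evaluated lazily
        return self._tail()

def nats(n: int = 0) -> Stream:
    """Defined coinductively: head(nats n) = n, tail(nats n) = nats(n + 1)."""
    return Stream(n, lambda: nats(n + 1))

assert tree_sum(Node(Leaf(1), Node(Leaf(2), Leaf(3)))) == 6
assert nats().tail().tail().head() == 2
```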

12:30-14:00 Lunch Break
14:00-15:00 Session 3
14:00
An abstract semantics of speculative execution for reasoning about security vulnerabilities

ABSTRACT. Reasoning about the correctness and security of software is increasingly difficult due to the complexity of modern microarchitectural features such as out-of-order execution. A class of security vulnerabilities termed Spectre, which exploits side effects of speculative, out-of-order execution, was announced in 2018 and has since drawn much attention. In this paper we formalise speculative execution and its side effects as an extension of a framework for reasoning about out-of-order execution in weak memory models. Our goal is to allow speculation to be reasoned about abstractly at the software level. To this end we encode speculative execution explicitly by means of a novel language construct and modify the definition of conditional statements correspondingly. Underlying this extension is a model that has sufficient detail to enable specification of the relevant microarchitectural features. We add an abstract cache to the global state of the system, and derive some general refinement rules that expose cache side effects due to speculative loads. The rules are encoded in a simulation tool, which we use to analyse an abstract specification of a Spectre attack and vulnerable code fragments.
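
The following toy sketch, invented for illustration rather than drawn from the paper's formalisation, conveys the intuition behind the abstract cache: a load performed on a speculative path leaves a trace in the cache even when the guarding condition turns out to be false.

```python
# Toy model: an abstract cache added to the global state records which
# addresses have been loaded; mis-speculation rolls back architectural
# state but not the cache.

class AbstractCache:
    def __init__(self):
        self.loaded = set()

    def load(self, addr):
        self.loaded.add(addr)

def spec_if(guard, then_branch, cache):
    # Speculative execution: the body runs before the guard is resolved,
    # so its cache effects occur regardless of the guard's value. If the
    # guard is false, architectural effects are squashed, but the abstract
    # cache keeps its new entries.
    then_branch(cache)

memory = {0: 10, 1: 20, 2: 30, 3: 40}
secret = 2

cache = AbstractCache()
# The guard is false, so the branch is architecturally dead, yet the
# secret-dependent address ends up in the cache.
spec_if(False, lambda c: c.load(0x1000 + memory[secret] * 64), cache)
print(cache.loaded)   # the secret leaks through which line became cached
```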

14:30
Weakening correctness and linearizability for concurrent objects on multicore processors

ABSTRACT. In this paper, we argue that there are two fundamental ways of defining correctness of concurrent objects on the weak memory models of multicore processors: we can abstract from both concurrent interleaving and weak memory effects at the specification level, or we can abstract from concurrent interleaving only, leaving weak memory effects visible at the specification level. The first allows us to employ standard linearizability as the correctness criterion, a result proved in earlier work. The second requires a weakening of linearizability. We provide such a weakening and prove it sound and complete with respect to this notion of correctness.
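
As a reminder of the baseline criterion being weakened, the sketch below illustrates standard linearizability for a single register; it is invented for illustration and is not the paper's weakened definition. A history is linearizable if some total order of its operations respects real-time precedence and is legal for the sequential object.

```python
from itertools import permutations

# Each operation is (kind, value, invocation_time, response_time); all data invented.

def legal(seq):
    """Sequential register semantics: a read returns the latest written value (0 initially)."""
    current = 0
    for kind, value, _, _ in seq:
        if kind == "write":
            current = value
        elif value != current:          # a read must return the current value
            return False
    return True

def linearizable(history):
    """Is there a total order respecting real-time precedence that is legal?"""
    for seq in permutations(history):
        respects_realtime = all(
            not (a[3] < b[2]) or seq.index(a) < seq.index(b)
            for a in history for b in history if a is not b
        )
        if respects_realtime and legal(seq):
            return True
    return False

# write(1) completes before read() -> 0 is invoked, so the read cannot be justified.
h1 = [("write", 1, 0, 1), ("read", 0, 2, 3)]
# Overlapping operations: the read may be ordered before the write.
h2 = [("write", 1, 0, 3), ("read", 0, 1, 2)]
print(linearizable(h1), linearizable(h2))   # False True
```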

15:00-15:30 Coffee Break
15:30-17:00 Session 4
15:30
A Map of Asynchronous Communication Models

ABSTRACT. Asynchronous communication encompasses a variety of features besides the decoupling of send and receive events. These include message-ordering policies, which are often crucial to the correctness of a distributed algorithm. This paper establishes a map of communication models that exhibits the relations between them along two axes of comparison: the strength of the ordering property and the level of abstraction of the specification. This clarifies which models can be substituted for one another without breaking any safety property. Furthermore, it provides flexibility and ready-to-use modules when developing correct-by-construction distributed systems in which model decomposition exposes the communication component. Both axes of comparison are captured by refinement. We consider seven ordering policies and model the corresponding communication models in Event-B at three levels of abstraction. The proofs of refinement between all of these models are mechanized in Rodin.
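
A small sketch, with invented class names and not the paper's Event-B models, of two points on such a spectrum: an unordered asynchronous channel and a FIFO channel. Every FIFO behaviour is also an unordered behaviour, which is why the stronger model can replace the weaker one without breaking safety properties.

```python
import random

class UnorderedChannel:
    """Weakest ordering: any in-transit message may be delivered next."""
    def __init__(self):
        self.in_transit = []

    def send(self, msg):
        self.in_transit.append(msg)

    def receive(self):
        i = random.randrange(len(self.in_transit))   # deliver any pending message
        return self.in_transit.pop(i)

class FifoChannel(UnorderedChannel):
    """Stronger ordering: messages between a sender/receiver pair arrive in send order."""
    def receive(self):
        return self.in_transit.pop(0)                # deliver the oldest first

for Channel in (UnorderedChannel, FifoChannel):
    ch = Channel()
    for m in ("m1", "m2", "m3"):
        ch.send(m)
    print(Channel.__name__, [ch.receive() for _ in range(3)])
```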

16:00
Comparing Correctness-by-Construction with Post-hoc Verification - A Qualitative User Study

ABSTRACT. Correctness-by-Construction (CbC) is a refinement-based methodology for incrementally creating formally correct programs. Programs are constructed using refinement rules, which guarantee that the resulting implementation is correct with respect to a pre-/postcondition specification. In contrast, with post-hoc verification (PhV) a specification and a program are created, and it is verified afterwards that the program satisfies the specification. In the literature, both methods are discussed with specific advantages and disadvantages. By letting participants construct and verify programs using CbC and PhV in a preliminary controlled experiment, we analyzed these claims from the literature. We evaluated defects in intermediate code snapshots and discovered a trial-and-error process of altering the code and specification. Furthermore, the participants appreciated the good feedback of CbC and stated that CbC is better than PhV at helping to find defects.

16:30
Towards a Method for the Decomposition by Refinement in Event-B

ABSTRACT. Refinement consists of detailing a specification in order to obtain a more concrete model. However, the refinement process has to be proved in order to ensure the coherence and correctness of the modelled system behavior between two levels of refinement. In Event-B, there are several types of refinement, such as data refinement and event refinement. Model decomposition is another technique that complements refinement. In this paper, we present background on the different refinement techniques, together with a justification and a discussion of refinement correctness. We then apply this to multiple refinements.