Battery-Aware Scheduling in Low Orbit: The GomX-3 Case
ABSTRACT. When working with space systems, the keyword is resources. For a satellite in orbit all resources are scarce, and the most critical resource of all is power. It is therefore crucial to have detailed knowledge of how much power is available to an energy-harvesting satellite in orbit at any point in time, especially during eclipse, when it draws its power from onboard batteries. This paper addresses the problem with a two-step procedure for task scheduling on low-earth-orbit (LEO) satellites that exploits formal methods. It combines cost-optimal reachability analysis of networks of priced timed automata with a realistic kinetic battery model capable of capturing capacity limits as well as stochastic fluctuations. The procedure is in use for the automatic and resource-optimal day-ahead scheduling of GomX-3, a power-hungry nanosatellite currently orbiting the Earth. We explain how this approach has overcome existing problems, has led to improved designs, and has provided new insights.
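To give a feel for the battery side of the model, here is a minimal sketch of a kinetic battery model (KiBaM) of the general kind the paper builds on, integrated with a simple Euler scheme in Python; the capacity split c, diffusion rate k, and load profile are illustrative values, not GomX-3 parameters, and the stochastic fluctuations treated in the paper are omitted.

# Minimal kinetic battery model (KiBaM) sketch: an available-charge well and a
# bound-charge well exchanging charge by diffusion, integrated with Euler steps.
# Parameters and load profile are illustrative, not GomX-3 values.

def simulate_kibam(load, a0, b0, c=0.5, k=1e-4, dt=1.0, horizon=6000.0):
    """load: function t -> current drawn (positive) or harvested (negative)."""
    a, b, t = a0, b0, 0.0      # available charge, bound charge, time
    trace = [(t, a, b)]
    while a > 0.0 and t < horizon:
        diff = k * (b / (1.0 - c) - a / c)   # diffusion between the two wells
        a += dt * (-load(t) + diff)
        b += dt * (-diff)
        t += dt
        trace.append((t, a, b))
    return trace

# A toy duty cycle: payload active (0.5 A) for 10 minutes out of every 20.
profile = lambda t: 0.5 if (t % 1200.0) < 600.0 else 0.05
trace = simulate_kibam(profile, a0=5000.0, b0=5000.0)
print("available charge after %.0f s: %.1f" % (trace[-1][0], trace[-1][1]))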
ABSTRACT. To formally reason about the temporal quality of systems, discounting was introduced into CTL and LTL. However, these logics are discrete and cannot express duration properties. In this work we introduce discounting for a variant of Duration Calculus. We prove decidability of model checking for a useful fragment of discounted Duration Calculus formulas on timed automata under mild assumptions. Further, we provide an extensive example to show the usefulness of the fragment.
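As a rough illustration of what discounting a duration amounts to, the snippet below numerically computes the discounted duration of a state predicate along a trajectory, assuming an exponential discount factor; this choice of factor is our own illustrative assumption, not necessarily the operator defined in the paper.

import math

def discounted_duration(holds, horizon, lam=0.1, dt=0.01):
    """Approximate the integral of exp(-lam * t) over the times t in [0, horizon]
    at which holds(t) is true.  Exponential discounting is assumed purely for
    illustration; later intervals contribute less than earlier ones."""
    return sum(math.exp(-lam * k * dt) * dt
               for k in range(int(horizon / dt)) if holds(k * dt))

# The predicate holds on [1, 2]; the same duration later would count for less.
print(discounted_duration(lambda t: 1.0 <= t <= 2.0, horizon=10.0))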
Validated Simulation-Based Verification of Delayed Differential Dynamics
ABSTRACT. Verification by simulation, based on covering the set of time-bounded trajectories of a dynamical system evolving from the initial state set by means of a finite sample of initial states plus a sensitivity argument, has recently attracted interest due to the availability of powerful simulators for rich classes of dynamical systems. System models addressed by such techniques involve ordinary differential equations (ODEs) and can readily be extended to delay differential equations (DDEs). In doing so, however, the lack of validated solvers for DDEs forces the use of numeric approximations, so that the resulting verification procedures would have to rely on (rather strong) assumptions about the numerical accuracy of the underlying simulators, which lack formal validation or proof. In this paper, we pursue a closer integration of the numeric solving and the sensitivity-related state bloating algorithms underlying verification by simulation, together yielding a safe enclosure algorithm for DDEs suitable for use in automated formal verification. The key ingredient is an on-the-fly computation of piecewise linear, local error bounds by nonlinear optimization, with the error bounds uniformly covering sensitivity information concerning initial states as well as integration error.
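For readers unfamiliar with DDEs, the following is a plain, non-validated simulation of a simple delay differential equation by forward Euler steps over the delayed history; unlike the enclosure algorithm developed in the paper, it carries no error bounds and is purely illustrative.

# Plain Euler simulation of the DDE x'(t) = -x(t - tau) with constant history
# x(t) = 1 for t <= 0.  Illustrative only: no validated enclosures or error
# bounds, in contrast to the algorithm described in the paper.

def simulate_dde(tau=1.0, horizon=10.0, dt=0.001):
    delay = int(tau / dt)
    xs = [1.0] * (delay + 1)            # sampled initial history on [-tau, 0]
    for _ in range(int(horizon / dt)):
        x_now = xs[-1]
        x_delayed = xs[-1 - delay]      # the value at time t - tau
        xs.append(x_now + dt * (-x_delayed))
    return xs

print("x(10) ~= %.4f" % simulate_dde()[-1])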
Local Planning of Multiparty Interactions with Bounded Horizons
ABSTRACT. Dynamic scheduling of distributed real-time systems with multiparty interactions is acknowledged to be a very hard task. For such systems, multiple schedulers are used to coordinate the parallel activities of remotely running components. In order to ensure global consistency and timing constraints, these schedulers must cope with significant communication delays while, moreover, using only point-to-point message passing as the communication primitive of the platform. In this paper, we investigate a formal model for such systems as compositions of timed automata subject to multiparty interactions, and we propose a distributed implementation method aiming to overcome the communication delays by planning interactions ahead of time. Moreover, we develop static conditions that allow planning decisions to be made locally by the different schedulers, thus decreasing the overall coordination overhead. The method has been implemented and we report preliminary results on benchmarks.
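As a rough illustration of the planning idea (a toy rule of our own, not the formal model developed in the paper), a scheduler may commit to a multiparty interaction only if every participant can still be notified in time despite the point-to-point message delays, and only if the interaction lies within a bounded look-ahead horizon.

# Toy planning rule (our own illustration, not the paper's formal model): an
# interaction that becomes enabled at time `enable` can be planned now only if
# every participant can be notified before then despite its message delay, and
# only if `enable` lies within the scheduler's bounded look-ahead horizon.

def can_plan(now, enable, delays, horizon):
    notified_in_time = all(now + d <= enable for d in delays.values())
    within_horizon = enable <= now + horizon
    return notified_in_time and within_horizon

delays = {"sensor": 3, "logger": 5}           # per-participant message delays
print(can_plan(now=0, enable=6, delays=delays, horizon=10))   # True
print(can_plan(now=0, enable=4, delays=delays, horizon=10))   # False: logger too late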
Refactoring Refinement Structures of Event-B Machines
ABSTRACT. Refinement in formal specifications has received significant attention as a method to gradually construct a rigorous model.
Although refactoring methods for formal specifications have been proposed, there are no methods for refactoring the refinement structures of formal specifications.
In this paper, we describe a method to restructure refinements in specifications written in Event-B, a formal specification method with support for refinement.
The core of our method is decomposition of refinements.
Namely, when an abstract Event-B machine $A$, a concrete machine $C$ refining $A$, and a slicing strategy are provided, our method constructs a consistent intermediate machine $B$, which refines $A$ and is refined by $C$.
We show the effectiveness of our method through two case studies covering its representative usages: decomposition of large-scale refinements and extraction of reusable parts of specifications.
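The decomposition can be pictured on a toy representation of machines as sets of variables and events (a drastic simplification of real Event-B machines, used only to convey the idea): given A, a concrete machine C, and a slicing strategy selecting which of C's new elements to keep, the intermediate machine B is assembled from A plus the selected slice.

# Toy illustration of refinement decomposition (not actual Event-B syntax or
# proof obligations): a machine is reduced to its sets of variables and events,
# and the slicing strategy picks which of C's newly introduced elements the
# intermediate machine B retains.

def decompose(A, C, selected_vars, selected_events):
    new_vars = C["variables"] - A["variables"]
    new_events = C["events"] - A["events"]
    return {
        "variables": A["variables"] | (new_vars & selected_vars),
        "events": A["events"] | (new_events & selected_events),
    }   # intended to refine A and to be refined by C

A = {"variables": {"level"}, "events": {"fill", "drain"}}
C = {"variables": {"level", "valve", "alarm"},
     "events": {"fill", "drain", "open_valve", "raise_alarm"}}
print(decompose(A, C, selected_vars={"valve"}, selected_events={"open_valve"}))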
Finding Suitable Variability Abstractions for Family-Based Analysis
ABSTRACT. For program families (Software Product Lines), specially designed variability-aware static analyses allow analyzing all variants (products) of the family simultaneously, in a single run, without generating any of the variants explicitly. They are also known as lifted or family-based analyses. Variability-aware analyses may be too costly or even infeasible for families with a large number of variants. In order to make them computationally cheaper, we can apply variability abstractions, which aim to tame the combinatorial explosion of the number of variants (configurations) and reduce it to something more tractable by manipulating the configuration space of the family. However, the space of possible abstractions is far too large to search naively, and most abstractions are either too imprecise or too costly.
In this work, we propose a technique to efficiently find suitable variability abstractions from a large family of abstractions for a variability-aware static analysis. The idea is to use a pre-analysis to estimate the impact of variability-specific parts of the program family on the precision of the analysis. We then use the pre-analysis results to determine when and where the analysis should turn its variability-awareness on or off. We demonstrate the practicality of this approach on several Java benchmarks.
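The flavour of such abstractions can be sketched on a toy lifted analysis result, a map from configurations to abstract values: an abstraction confounds the configurations that the pre-analysis deems irrelevant and joins their values (here by set union). This is an illustration of the general idea only, not the authors' analysis or implementation.

# Toy lifted analysis result: one abstract value (a set of possible constants)
# per configuration over the features A and B.  The abstraction "forgets" the
# features judged irrelevant by a pre-analysis, joining the values of the
# configurations that become indistinguishable.  Purely illustrative.

lifted = {
    ("A", "B"): {1}, ("A", "!B"): {1},
    ("!A", "B"): {2}, ("!A", "!B"): {3},
}

def forget(lifted, keep_index):
    abstracted = {}
    for config, value in lifted.items():
        key = tuple(config[i] for i in keep_index)       # project away forgotten features
        abstracted.setdefault(key, set()).update(value)  # join by set union
    return abstracted

print(forget(lifted, keep_index=[0]))   # forget B: {('A',): {1}, ('!A',): {2, 3}}
print(forget(lifted, keep_index=[]))    # forget everything: {(): {1, 2, 3}}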
ABSTRACT. This paper presents a novel counterexample-guided abstraction refinement algorithm for the automatic verification of concurrent programs. Our algorithm proceeds in several steps. It first constructs an abstraction of the original program by slicing away a given subset of variables. Then, it uses an external model checker as a backend tool to analyze the correctness of the abstract program. If the model checker reports that the abstract program is safe, we conclude that the original one is also safe. If the abstract program is unsafe, we extract an ``abstract'' counterexample. In order to check whether the abstract counterexample can lead to a real counterexample of the original program, we add back to the abstract counterexample all the omitted variables (that have been sliced away) to obtain a new program. Then, we recursively call our algorithm on the newly obtained program. If the recursive call returns that the new program is unsafe, we conclude that the original program is also unsafe and our algorithm terminates. Otherwise, we refine the abstract program by removing the abstract counterexample from its set of possible runs. Finally, we repeat the procedure with the refined abstract program. We have implemented our algorithm and run it successfully on the concurrency benchmarks of SV-COMP15. Our experimental results show that our algorithm significantly improves the performance of the backend tool.
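A high-level sketch of that loop is given below; the backend model checker and the program transformations are left as user-supplied stubs, and the function names are ours rather than the authors'.

# High-level sketch of the algorithm described above.  check_model, slice_away,
# add_back and remove_run stand for the backend model checker and the program
# transformations; they are stubs to be supplied, not the authors' implementation.

def verify(program, vars_to_slice, check_model, slice_away, add_back, remove_run):
    if not vars_to_slice:                          # nothing abstracted: decide directly
        verdict, _ = check_model(program)
        return verdict
    abstract = slice_away(program, vars_to_slice)
    while True:
        verdict, abstract_cex = check_model(abstract)
        if verdict == "safe":
            return "safe"                          # the abstraction is sound for safety
        concrete = add_back(program, abstract_cex, vars_to_slice)
        if verify(concrete, [], check_model, slice_away,
                  add_back, remove_run) == "unsafe":
            return "unsafe"                        # the counterexample is real
        abstract = remove_run(abstract, abstract_cex)   # spurious: exclude it and retry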
Decoupled Simulating Abstractions of Non-linear Ordinary Differential Equations
ABSTRACT. We investigate the problem of decoupling, in which one seeks to reduce a given system of ordinary differential equations (ODEs) to another system that features sub-systems which are completely independent (i.e. decoupled) and can be considered separate systems in their own right. Beyond a purely mathematical interest as a tool for the qualitative analysis of ODEs, decoupling can be applied to verification problems arising in the fields of control and hybrid systems. Existing verification technology often scales poorly with dimension. Thus, reducing a verification problem to a number of independent verification problems for systems of smaller dimension may enable one to prove properties that are otherwise seen as too difficult.
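A worked baseline for the linear case, where decoupling amounts to diagonalization (the paper targets the harder non-linear setting), can be written with numpy:

import numpy as np

# Linear warm-up for decoupling (the paper addresses non-linear ODEs): for
# x' = A x with diagonalizable A, the change of coordinates z = V^-1 x yields
# the fully decoupled system z_i' = lambda_i * z_i.

A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])

eigvals, V = np.linalg.eig(A)      # A = V diag(eigvals) V^-1
D = np.linalg.inv(V) @ A @ V       # dynamics in the new coordinates
print(np.round(D, 10))             # diagonal with entries -1 and -2: two independent 1-D systems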
SpecCert: Specifying and Verifying Hardware-based Security Enforcement
ABSTRACT. Over time, hardware designs have constantly grown in complexity, and modern platforms involve multiple interconnected hardware components. During the last decade, several vulnerability disclosures have proven that trust in hardware can be misplaced. In this article, we give a formal definition of Hardware-based Security Enforcement (HSE) mechanisms, a class of security enforcement mechanisms in which a software component relies on the underlying hardware platform to enforce a security policy. We then model a subset of the specifications of an x86-based hardware platform and prove the soundness of a realistic HSE mechanism within this model using the Coq proof assistant.
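To give a feel for what an HSE mechanism is, consider a toy transition system of our own (far simpler than the x86 model verified in the paper): trusted software sets a lock bit during boot and then relies on the hardware to reject writes to a protected region.

# Toy illustration of a hardware-based security enforcement (HSE) mechanism
# (our own simplification, not the paper's x86/Coq model): software sets a lock
# bit, after which the hardware rejects writes to a protected address range.

class Platform:
    PROTECTED = range(0x0, 0x1000)

    def __init__(self):
        self.locked = False
        self.flash = {}

    def lock(self):                      # configuration step done by trusted software
        self.locked = True

    def write_flash(self, addr, value):  # hardware enforcement point
        if self.locked and addr in self.PROTECTED:
            return False                 # write to the protected region rejected
        self.flash[addr] = value
        return True

p = Platform()
p.lock()
print(p.write_flash(0x10, 0xAB))     # False: protected once locked
print(p.write_flash(0x2000, 0xCD))   # True: outside the protected region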
Combining Mechanized Proofs and Model-Based Testing in the Formal Analysis of a Hypervisor
ABSTRACT. Virtualization engines play a critical role in modern cybersecurity products. In an effort to gain definitive confidence in critical components, our company has invested in the formal verification of a hypervisor, following recent advances in similar academic and industrial operating-system verification projects. There are inherent difficulties in applying formal methods to low-level implementations, and even more so under the specific constraints of commercial software development. To deal with these, the chosen approach splits the verification effort by combining the definition of an abstract model of the hypervisor, the verification of fundamental security properties over this model, and testing the conformance of the model with respect to the hypervisor implementation. This article reports on our experience in applying formal methods to verify a hypervisor for commercial purposes. It describes the verification approach and the security properties under consideration, and reports the results obtained.
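The conformance-testing leg of this approach can be pictured, in a very reduced form of our own, as replaying generated input traces against both the abstract model and the implementation and comparing their observable outputs step by step.

import random

# Very reduced picture (our own) of model-based conformance testing: run the
# abstract model and the implementation, both viewed as state machines, on the
# same input trace and compare their observable outputs at every step.

def conforms(model_step, impl_step, inputs, model_state, impl_state):
    for x in inputs:
        model_state, out_m = model_step(model_state, x)
        impl_state, out_i = impl_step(impl_state, x)
        if out_m != out_i:
            return False, x              # first input revealing a deviation
    return True, None

# Toy example: a counter specified by the model and matched by the implementation.
model = lambda s, x: (s + x, s + x)
impl = lambda s, x: (s + x, s + x)
trace = [random.randint(0, 9) for _ in range(100)]
print(conforms(model, impl, trace, model_state=0, impl_state=0))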
Automated Verification of Timed Security Protocols with Clock Drift
ABSTRACT. Time is frequently used in security protocols to provide better security. For instance, critical credentials often have a limited lifetime, which improves security against brute-force attacks. However, it is challenging to use time correctly in protocol design, due to the existence of clock drift in practice. In this work, we develop a systematic method to formally specify as well as automatically verify timed security protocols with clock drift. We first extend the previously proposed timed applied π-calculus as a formal specification language for timed protocols with clock drift. Then, we define its formal semantics based on timed logic rules, which facilitates efficient verification against various security properties. Clock drift is encoded as parameters in the rules. The verification result shows the constraints on clock drift that are required for the security of the protocol, e.g., that the maximum drift should be less than some constant. We evaluate our method on multiple timed security protocols. We find a time-related security threat in the TESLA protocol, a complex timed broadcast protocol for lossy channels, when the clocks used by different protocol participants do not share the same rate.
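A small concrete instance of the kind of constraint such an analysis produces (a toy timestamp-freshness check of our own, not one of the paper's case studies): a receiver accepts a message if its own clock minus the message timestamp is at most a given lifetime; if each clock may drift from real time by at most rho * t since synchronisation, the real age of an accepted message can be up to lifetime + 2 * rho * t, which yields an upper bound on the tolerable drift.

# Toy freshness check (our own example, not a case study from the paper): given
# the acceptance window `lifetime`, the real freshness `bound` the protocol must
# guarantee, and the session length since clock synchronisation, the relative
# drift rho must satisfy lifetime + 2 * rho * session <= bound.

def max_tolerable_drift(lifetime, bound, session):
    assert bound > lifetime
    return (bound - lifetime) / (2.0 * session)

# Over a one-hour session, a 1-second margin tolerates roughly 139 ppm of drift.
print(max_tolerable_drift(lifetime=5.0, bound=6.0, session=3600.0))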
Hybrid Statistical Estimation of Mutual Information for Quantifying Information Flow
ABSTRACT. Analysis of a probabilistic system often requires learning the joint probability distribution of its random variables. Computing the exact distribution usually requires an exhaustive, precise analysis of all executions of the system. To avoid the high computational cost of such an exhaustive analysis, statistical analysis has been studied to efficiently obtain approximate estimates by analyzing only a small but representative subset of the system's behavior. In this paper we propose a new hybrid statistical estimation method that combines precise and statistical analysis to estimate mutual information and its confidence interval. We show how to combine analyses of different components of the system, carried out with different precision, to obtain an estimate for the whole system. The new method can use importance sampling over different components and dynamically find the components' optimal sample sizes. Moreover, it can reduce sample sizes by using prior knowledge about systems and a new abstraction-then-sampling technique based on qualitative analysis. To apply the method to the source code of a system, we show how to decompose the code into components and determine the analysis method for each component. We show that our approach provides better estimates, with narrower confidence intervals, than the state of the art, through applications to the quantification of information leakage.
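For concreteness, the quantity being estimated is the mutual information I(X;Y) = sum over x,y of p(x,y) * log2(p(x,y) / (p(x) * p(y))). The snippet below compares a plain sampling-based estimate against the exact value on a toy joint distribution; it shows only the statistical half, not the paper's hybrid combination of precise and statistical analysis per component.

import random
from collections import Counter
from math import log2

# Exact vs. sampled mutual information on a toy joint distribution over two
# binary variables.  Illustrates the estimated quantity only; the hybrid
# precise/statistical combination is the contribution of the paper.

joint = {(0, 0): 0.4, (0, 1): 0.1, (1, 0): 0.1, (1, 1): 0.4}

def mutual_information(p):
    px, py = Counter(), Counter()
    for (x, y), v in p.items():
        px[x] += v
        py[y] += v
    return sum(v * log2(v / (px[x] * py[y])) for (x, y), v in p.items() if v > 0)

samples = random.choices(list(joint), weights=list(joint.values()), k=20000)
empirical = {xy: n / len(samples) for xy, n in Counter(samples).items()}
print("exact    :", round(mutual_information(joint), 4))
print("estimated:", round(mutual_information(empirical), 4))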
An Executable Formalisation of the SPARCv8 Instruction Set Architecture: A Case Study for the LEON3 Processor
ABSTRACT. The SPARCv8 instruction set architecture (ISA) has been used in various processors for workstations, embedded systems, and space missions. However, there are no publicly available formal models for the SPARCv8 ISA. In this work, we give a formal model of the SPARCv8 ISA in Isabelle/HOL. We capture the operational semantics of the instructions using monadic definitions. Our model is both detailed and general. We consider many features specific to SPARC processors, such as delayed writes to control registers, windowed general-purpose registers, and complex memory accesses. At the same time, we keep the model at an abstract level so that all SPARCv8 processors are supported. We prove some important properties of our formal model, such as a property that guarantees no failure during execution, and a property which ensures that privileges are not escalated during execution when certain conditions are met. We then prove a non-interference property for the LEON3 processor. We extract executable code from our formalisation, giving us the first formally verified executable semantics for the SPARCv8 ISA. We have thoroughly tested our model against a LEON3 simulation board on both single-step executions and sequential execution of programs.
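One of the SPARC-specific features mentioned above, delayed writes to control registers, can be illustrated with a toy interpreter state (our own Python simplification, unrelated to the monadic Isabelle/HOL definitions; the delay of three instructions is only an example).

# Toy illustration of delayed writes to control registers (our own Python
# simplification of the feature, not the paper's model): a write becomes visible
# only a fixed number of instructions later, so the state keeps pending writes.

class State:
    def __init__(self):
        self.ctrl = {"WIM": 0}
        self.pending = []                # entries [countdown, register, value]

    def delayed_write(self, reg, value, delay=3):
        self.pending.append([delay, reg, value])

    def step(self):                      # called once per executed instruction
        for entry in self.pending:
            entry[0] -= 1
        for countdown, reg, value in self.pending:
            if countdown == 0:
                self.ctrl[reg] = value
        self.pending = [e for e in self.pending if e[0] > 0]

s = State()
s.delayed_write("WIM", 0b0010)
for i in range(3):
    s.step()
    print("after instruction", i + 1, ": WIM =", bin(s.ctrl["WIM"]))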
Formalising and Validating the Interface Description in the FMI standard
ABSTRACT. The Functional Mock-up Interface (FMI) aims to support interoperability in an interdisciplinary formal methods setting by describing an interface between different formal models in a tool co-simulation setting. However, the FMI standard describes the static constraints on these interfaces informally, using tables and textual descriptions. In this short paper we demonstrate how this kind of static semantics can be formalised using the Vienna Development Method Specification Language (VDM-SL), and how the formalisation can be exhaustively examined and validated. We then show how the formalisation can be turned into code for a tool usable by anyone working with the FMI standard, providing a more well-founded basis in such an interdisciplinary setting.
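To indicate what such a validation tool does, here is a toy checker over a drastically simplified interface description; the variable fields and the two constraints shown are illustrative stand-ins of our own, not the actual FMI rules that the paper formalises in VDM-SL.

# Toy static checks on a simplified interface description.  The fields and the
# two constraints below are illustrative stand-ins, not the actual FMI rules.

def check_interface(variables):
    errors = []
    names = [v["name"] for v in variables]
    if len(names) != len(set(names)):
        errors.append("variable names must be unique")
    for v in variables:
        if v["causality"] == "input" and "start" not in v:
            errors.append("input '%s' needs a start value" % v["name"])
    return errors

model = [
    {"name": "u", "causality": "input", "start": 0.0},
    {"name": "y", "causality": "output"},
    {"name": "y", "causality": "output"},
]
print(check_interface(model))   # reports the duplicated variable name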
Equivalence Checking of a Floating-point Unit Against a High-level C Model
ABSTRACT. Semiconductor companies have increasingly adopted a methodology that starts with a system-level design specification in C/C++/SystemC. This model is extensively simulated to ensure both correct functionality and performance. Later, a Register Transfer Level (RTL) implementation is created in Verilog, either manually by a hardware designer or automatically by a high-level synthesis tool. It is essential to check that the C and Verilog programs are consistent. In this paper, we present a two-step approach, embodied in two equivalence checking tools, VERIFOX and HW-CBMC, to validate designs at the software level and at RTL, respectively. VERIFOX is used for equivalence checking of an untimed software model in C against a high-level reference model in C. This is followed by the application of HW-CBMC, which verifies the equivalence of a Verilog RTL implementation against an untimed software model in C. To evaluate our tools, we applied them to a commercial floating-point arithmetic unit (FPU) from ARM and an open-source dual-path floating-point adder.
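The notion of equivalence checking at play can be shown in miniature, without the symbolic machinery of the tools, by exhaustively comparing a bit-level adder against an arithmetic reference model over a small word width.

# Miniature illustration of equivalence checking: a ripple-carry adder is checked
# exhaustively against the arithmetic reference over all 8-bit inputs.  VERIFOX
# and HW-CBMC work symbolically on real C and Verilog designs instead.

WIDTH = 8

def ripple_carry_add(a, b):
    carry, result = 0, 0
    for i in range(WIDTH):
        x, y = (a >> i) & 1, (b >> i) & 1
        result |= (x ^ y ^ carry) << i
        carry = (x & y) | (carry & (x ^ y))
    return result                        # sum modulo 2**WIDTH

def reference_add(a, b):
    return (a + b) % (1 << WIDTH)

assert all(ripple_carry_add(a, b) == reference_add(a, b)
           for a in range(1 << WIDTH) for b in range(1 << WIDTH))
print("ripple-carry adder matches the reference on all 8-bit inputs")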
ABSTRACT. The widely-used compression format ``Deflate'' is defined in RFC 1951 and is based on prefix-free codings and backreferences. There are unclear points about the way these codings are specified, and several sources of confusion in the standard. We address this problem by giving a rigorous mathematical specification, which we formalized in Coq. We produced a verified implementation in Coq which achieves competitive performance on inputs of several megabytes. In this paper we present the parts of our implementation: a fully verified implementation of canonical prefix-free codings, which can be used in other compression formats as well, and an elegant formalism for specifying sophisticated formats, which we used to implement both a compression and a decompression algorithm in Coq that we formally prove to be inverse to each other -- the first time this has been achieved to our knowledge. Compatibility with other Deflate implementations is shown empirically. We furthermore discuss some of the difficulties, specifically regarding memory and runtime requirements, and our approaches to overcoming them.
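The canonical prefix-free codes mentioned above are determined by code lengths alone; the small, unverified Python rendition below follows the assignment scheme of RFC 1951, section 3.2.2, and can be contrasted with the verified Coq development.

# Canonical prefix-free code assignment from code lengths, following the scheme
# of RFC 1951, section 3.2.2.  A small unverified rendition for comparison with
# the verified Coq implementation described above.

def canonical_codes(lengths):
    max_len = max(lengths)
    bl_count = [0] * (max_len + 1)
    for l in lengths:
        if l:
            bl_count[l] += 1
    next_code, code = [0] * (max_len + 1), 0
    for bits in range(1, max_len + 1):
        code = (code + bl_count[bits - 1]) << 1
        next_code[bits] = code
    codes = {}
    for sym, l in enumerate(lengths):
        if l:
            codes[sym] = format(next_code[l], "0%db" % l)
            next_code[l] += 1
    return codes

# The example from RFC 1951: lengths (3, 3, 3, 3, 3, 2, 4, 4) for symbols 0..7
# yield the codewords 010, 011, 100, 101, 110, 00, 1110, 1111.
print(canonical_codes([3, 3, 3, 3, 3, 2, 4, 4]))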