FM24: 26TH INTERNATIONAL SYMPOSIUM ON FORMAL METHODS
PROGRAM FOR FRIDAY, SEPTEMBER 13TH


09:00-10:20 Session 25: Keynote 3
09:00
Formal Methods for Robotics and Human-Robot Interaction
10:00
RoboWorld: Verification of Robotic Systems with Environment in the Loop (Journal first)
PRESENTER: James Baxter

ABSTRACT. A robot affects and is affected by its environment, so that typically its behaviour depends on properties of that environment. For verification, we need to formalise those properties. Modelling the environment is very challenging, if not impossible, but we can capture assumptions. Here, we present RoboWorld, a domain-specific controlled natural language with a process algebraic semantics that can be used to define (a) operational requirements, and (b) environment interactions of a robot. RoboWorld is part of the RoboStar framework for verification of robotic systems. In this article, we define RoboWorld’s syntax and hybrid semantics, and illustrate its use for capturing operational requirements, for automatic test generation, and for proof. We also present a tool that supports the writing of RoboWorld documents. Since RoboWorld is a controlled natural language, it complements the other RoboStar notations in being accessible to roboticists, while at the same time benefitting from a formal semantics to support rigorous verification (via testing and proof).

https://doi.org/10.1145/3625563

10:20-10:50 Coffee Break
10:50-12:30 Session 26A: Programming Languages 1
Location: Room 3.0.3
10:50
Extending Isabelle/HOL’s Code Generator with support for the Go programming language

ABSTRACT. The Isabelle proof assistant includes a small functional language, which allows users to write and reason about programs. So far, these programs could be extracted into a number of functional languages: Standard ML, OCaml, Scala, and Haskell. This work adds support for Go as a fifth target language for the Code Generator. Unlike the previous targets, Go is not a functional language and encourages an imperative style of code, so many features of Isabelle’s language (particularly data types, pattern matching, and type classes) have to be emulated using imperative language constructs in Go. The developed code generator is provided as an add-on library that can simply be imported into existing theories.

11:10
Proving Functional Program Equivalence via Directed Lemma Synthesis

ABSTRACT. Proving equivalence between functional programs is a fundamental problem in program verification, which often amounts to reasoning about algebraic data types (ADTs) and compositions of structural recursions. Modern theorem provers address this problem by applying structural induction, which is insufficient for proving many equivalence theorems. In such cases, one has to invent a set of lemmas, prove these lemmas by additional induction, and use these lemmas to prove the original theorem. There is, however, a lack of systematic understanding of what lemmas are needed for inductive proofs and how these lemmas can be synthesized automatically. This paper presents directed lemma synthesis, an effective approach to automating equivalence proofs by discovering critical lemmas using program synthesis techniques. We first identify two induction-friendly forms of propositions that give formal guarantees to the progress of the proof. We then propose two tactics that synthesize and apply lemmas, thereby transforming the proof goal into induction-friendly forms. Both tactics reduce lemma synthesis to a specialized class of program synthesis problems with efficient algorithms. Experimental results demonstrate the effectiveness of our approach: Compared to state-of-the-art equivalence checkers employing heuristic-based lemma enumeration, directed lemma synthesis saves 95.47% runtime on average and solves 38 more tasks over an extended version of the standard benchmark set.

11:30
Staged Specification Logic for Verifying Higher-Order Imperative Programs

ABSTRACT. Higher-order functions and imperative states are language features supported by many mainstream languages. Their combination is expressive and useful, but complicates specification and reasoning, due to the use of yet-to-be-instantiated function parameters. One inherent limitation of existing specification mechanisms is their reliance on only two stages: an initial stage to denote the precondition at the start of the method and a final stage to capture the postcondition. Such two-stage specifications force abstract properties to be imposed on unknown function parameters, leading to less precise specifications for higher-order methods. To overcome this limitation, we introduce a novel extension to Hoare logic that supports multiple stages for a call-by-value higher-order language with ML-like local references. Multiple stages allow the behavior of unknown function-type parameters to be captured abstractly as uninterpreted relations, and can also model the repetitive behavior of each recursion as a separate stage. In this paper, we define our staged logic with its semantics, prove its soundness, and develop a new automated higher-order verifier, called Heifer, for a core ML-like language.

11:50
Chamelon: a delta-debugger for OCaml

ABSTRACT. Tools that manipulate OCaml code can sometimes fail even on correct programs. Identifying and understanding the cause of the error usually involves manually reducing the size of the program in question, so as to obtain a shorter program causing the same error - a long, sometimes complex, and rarely interesting task. Our work automates this task with a minimiser, or delta-debugger. To do so, we propose a list of unitary heuristics, i.e., small-scale reductions applied to the program, combined with a state-of-the-art dichotomy-based algorithm. These proposals are implemented in the free Chamelon tool. Although designed to assist the development of an OCaml compiler, Chamelon can be adapted to all kinds of projects that manipulate OCaml code. It can analyse multifile projects and efficiently minimise real programs, reducing their size by one to several orders of magnitude. It is currently used to assist the industrial development of the flambda2 optimising compiler.
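
As a hedged illustration of the dichotomy-based reduction the abstract refers to, the sketch below implements the classic ddmin delta-debugging loop due to Zeller and Hildebrandt. It is a generic textbook version, not Chamelon's OCaml-specific heuristics, and the `fails` oracle is a hypothetical stand-in for an invocation of the failing tool:

```python
def ddmin(items, fails):
    """Classic ddmin: shrink `items` to a 1-minimal subset for which the
    oracle `fails` still returns True (i.e. the error still occurs)."""
    assert fails(items), "the full input must reproduce the failure"
    n = 2  # current granularity: number of chunks
    while len(items) >= 2:
        chunk = len(items) // n
        subsets = [items[i:i + chunk] for i in range(0, len(items), chunk)]
        reduced = False
        for i, subset in enumerate(subsets):
            complement = [x for s in subsets[:i] + subsets[i + 1:] for x in s]
            if fails(subset):
                # The failure persists in a single chunk: focus on it.
                items, n, reduced = subset, 2, True
                break
            if n > 2 and fails(complement):
                # The failure persists without this chunk: drop it.
                items, n, reduced = complement, max(n - 1, 2), True
                break
        if not reduced:
            if n >= len(items):
                break  # already at single-element granularity
            n = min(len(items), n * 2)  # refine the partition
    return items
```

For instance, if a failure requires both elements 3 and 7 of an 8-element input, `ddmin(list(range(1, 9)), lambda s: 3 in s and 7 in s)` reduces the input to `[3, 7]`.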

12:10
Discourje: Run-Time Verification of Communication Protocols in Clojure -- Live at Last

ABSTRACT. Multiparty session typing (MPST) is a formal method to make concurrent programming simpler. The idea is to use type checking to automatically prove safety (protocol compliance) and liveness (communication deadlock freedom) of implementations relative to specifications. Discourje is an existing run-time verification library for communication protocols in Clojure, based on dynamic MPST. However, while the original version---presented in papers at TACAS'20, ESEC/FSE'21, ISoLA'21---can check for safety, it cannot check for liveness. In this paper, we present a major extension of Discourje to also check for liveness.

10:50-12:30 Session 26B: Embedded Systems
10:50
CauMon: An Informative Online Monitor for Signal Temporal Logic

ABSTRACT. In this paper, we present a tool for monitoring the traces of cyber-physical systems (CPS) at runtime, with respect to Signal Temporal Logic (STL) specifications. Our tool is based on the recent advances of causation monitoring, which reports not only whether an executing trace violates the specification, but also how relevant the increment of the trace at each instant is to the specification violation. In this way, it can deliver more information about system evolution than classic online robust monitors. Moreover, by adapting two dynamic programming strategies, our implementation significantly improves the efficiency of causation monitoring, allowing its deployment in practice. The tool is implemented as a C++ executable and can be easily adapted to monitor CPS in different formalisms. We evaluate the efficiency of the proposed monitoring tool, and demonstrate its superiority over existing robust monitors in terms of the information it can deliver about system evolution.
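
For context, classic online robust monitors build on STL's quantitative (robustness) semantics, which the causation monitoring above refines with per-instant relevance information. A minimal discrete-time sketch of that baseline follows; the function names are illustrative choices, not CauMon's API:

```python
def rob_gt(trace, c):
    """Robustness of the atomic predicate x > c at each sample: x - c.
    Positive means satisfied with margin, negative means violated."""
    return [x - c for x in trace]

def rob_globally(rho, a, b):
    """Robustness of G_[a,b] phi at time 0 over a discrete-time trace:
    the worst (minimum) robustness of phi in the window [a, b]."""
    return min(rho[a:b + 1])

def rob_eventually(rho, a, b):
    """Robustness of F_[a,b] phi at time 0: the best (maximum) in [a, b]."""
    return max(rho[a:b + 1])
```

On the trace `[0.2, 0.5, -0.1, 0.4]`, the formula G_[0,3](x > 0) has robustness -0.1 (violated, with the sample at index 2 as witness), while F_[0,3](x > 0) has robustness 0.5.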

11:10
Switching Controller Synthesis for Hybrid Systems Against STL Formulas

ABSTRACT. Switching controllers play a pivotal role in directing hybrid systems (HSs) towards the desired objective, embodying a "correct-by-construction" approach to HS design. Identifying these objectives is thus crucial for the synthesis of effective switching controllers. While most existing work focuses on safety and liveness, few approaches consider timing constraints. In this paper, we delve into the synthesis of switching controllers for HSs that meet system objectives given by a fragment of STL, which can specify safety, reach-avoid, and liveness properties with timing constraints. Our approach involves iteratively computing the state sets that can be driven to satisfy the STL specification. This technique supports the creation of switching controllers for both constant and non-constant HSs. We validate our method's soundness and confirm its relative completeness for a certain subclass of HSs. Experimental results affirm the efficacy of our approach.

11:30
On Completeness of SDP-Based Barrier Certificate Synthesis over Unbounded Domains

ABSTRACT. Barrier certificates, serving as differential invariants that witness system safety, play a crucial role in the verification of cyber-physical systems (CPS). Prevailing computational methods for synthesizing barrier certificates are based on semidefinite programming (SDP), exploiting Putinar's Positivstellensatz. Consequently, these approaches are limited by the Archimedean condition, which requires all variables to be bounded, i.e., systems are defined over bounded domains. For systems over unbounded domains, unfortunately, existing methods become incomplete and may fail to identify potential barrier certificates.

In this paper, we address this limitation for the unbounded cases. We first give a complete characterization of polynomial barrier certificates by using homogenization, a recent technique in the optimization community to reduce an unbounded optimization problem to a bounded one. Furthermore, motivated by this formulation, we introduce the definition of homogenized systems and propose a complete characterization of a family of non-polynomial barrier certificates with more expressive power. Experimental results demonstrate that our two approaches are more effective while maintaining a comparable level of efficiency.
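
The core homogenization step on a single polynomial can be sketched as follows. The representation (a dict from exponent tuples to coefficients) and the function name are illustrative choices, and the paper's construction for systems and barrier certificates is more involved than this basic transformation:

```python
def homogenize(poly):
    """Homogenize a polynomial p, given as {exponent_tuple: coefficient},
    by prepending a fresh variable x0: each monomial is padded with a power
    of x0 up to the total degree d of p, i.e. p becomes x0^d * p(x / x0)."""
    d = max(sum(exps) for exps in poly)  # total degree of p
    return {(d - sum(exps),) + exps: coeff for exps, coeff in poly.items()}
```

For p(x) = x^2 + 3x + 2 this yields the homogeneous polynomial x^2 + 3*x*x0 + 2*x0^2; restricting the homogenized problem to the unit sphere then turns the original unbounded domain into a bounded one.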

11:50
Reusable Specification Patterns for Verification of Resilience in Autonomous Hybrid Systems

ABSTRACT. Autonomous hybrid systems are systems that combine discrete and continuous behavior with autonomous decision-making, e.g., using reinforcement learning. Such systems are increasingly used in safety-critical applications such as self-driving cars, autonomous robots or water supply systems. Thus, it is crucial to ensure their safety and resilience, i.e. that they function correctly even in the presence of dynamic changes and disruptions. In this paper, we present an approach to obtain formal resilience guarantees for autonomous hybrid systems using the interactive theorem prover KeYmaera X. Our key ideas are threefold: First, we present a formalization of resilience that is tailored to autonomous hybrid systems. Second, we present reusable patterns for modeling stressors, detecting disruptions, and specifying resilience as a service level response in the differential dynamic logic (dL). Third, we combine these concepts with an existing approach for the safe integration of learning components using hybrid contracts, and extend it towards dynamic adaptations to stressors. By combining reusable patterns for stressors, observers, and adaptation contracts for learning components, we provide a systematic approach for the deductive verification of resilience of autonomous hybrid systems with reduced specification effort. We demonstrate the applicability of our approach with two case studies, an autonomous robot and an intelligent water distribution system.

12:10
Tolerance of Reinforcement Learning Controllers against Deviations in Cyber Physical Systems

ABSTRACT. Cyber-physical systems (CPS) with reinforcement learning (RL)-based controllers are increasingly being deployed in complex physical environments such as autonomous vehicles, the Internet-of-Things (IoT), and smart cities. An important property of a CPS is tolerance; i.e., its ability to function safely under possible disturbances and uncertainties in the actual operation. In this paper, we introduce a new, expressive notion of tolerance that describes how well a controller is capable of satisfying a desired system requirement, specified using Signal Temporal Logic (STL), under possible deviations in the system. Based on this definition, we propose a novel analysis problem, called the tolerance falsification problem, which involves finding small deviations that result in a violation of the given requirement. We present a novel, two-layer simulation-based analysis framework and a novel search heuristic for finding small tolerance violations. To evaluate our approach, we construct a set of benchmark problems where system parameters can be configured to represent different types of uncertainties and disturbances in the system. Our evaluation shows that our falsification approach and heuristic can effectively find tolerance violations.

12:30-14:00 Lunch Break
14:00-15:20 Session 27A: Programming Languages 2
Location: Room 3.0.3
14:00
Reachability Analysis for Multiloop Programs Using Transition Power Abstraction

ABSTRACT. A wide variety of algorithms is employed for the reachability analysis of programs with loops, but most of them are restricted to single-loop programs. Recently, a new technique called Transition Power Abstraction (TPA) showed promising results for safety checks of software. In contrast to many other techniques, TPA efficiently handles loops with a large number of iterations. This paper introduces an algorithm that enables the effective use of TPA for the analysis of multiloop programs. The TPA-enabled loop analysis reduces the dependency on the number of possible iterations. Our approach analyses loops in a modular manner and both computes and uses transition invariants incrementally, making program analysis efficient. The new algorithm is implemented in the Golem solver. Conducted experiments demonstrate that this approach outperforms the previous implementation of TPA and other competing tools on a wide range of multiloop benchmarks.

14:20
Rigorous Floating-Point Round-Off Error Analysis in PRECiSA 4.0

ABSTRACT. Small round-off errors in safety-critical systems can lead to catastrophic consequences. In this context, determining if the result computed by a floating-point program is accurate enough with respect to its ideal real-number counterpart is essential. This paper presents PRECiSA 4.0, a tool that rigorously estimates the accumulated round-off error of a floating-point program. PRECiSA 4.0 combines static analysis, optimization techniques, and theorem proving to provide a modular approach for computing a provably correct round-off error estimation. PRECiSA 4.0 adds several features to previous versions of the tool that enhance its applicability and performance. These features include support for data collections such as lists, records, and tuples; support for bounded loops and recursion schemas; an updated floating-point formalization that closely characterizes the IEEE-754 standard; an efficient and modular analysis of function calls that improves the performances for large programs; and a new user interface integrated into Visual Studio Code.

14:40
Accurate Static Data Race Detection for C

ABSTRACT. Data races are a particular kind of subtle, unintended program behaviour arising from thread interference in shared-memory concurrency. In this paper, we propose an automated technique for static detection of data races in multi-threaded C programs with POSIX threads. The key element of our technique is a reduction to reachability. Our prototype implementation combines such reduction with context-bounded analysis. The approach proves competitive against state-of-the-art tools, finding new issues in the implementation of well-known lock-free data structures, and shows a considerably superior accuracy of analysis in the presence of complex shared-memory access patterns.

15:00
Formal Semantics and Analysis of Multitask PLC ST Programs with Preemption

ABSTRACT. Programmable logic controllers (PLCs) are widely used in industrial applications. Ensuring the correctness of PLC programs is important due to their safety-critical nature. Structured text (ST) is an imperative programming language for PLC. Despite recent advances in executable semantics of PLC ST, existing methods neglect complex multitasking and preemption features. This paper presents an executable semantics of PLC ST with preemptive multitasking. Formal analysis of multitasking programs experiences the state explosion problem. To mitigate this problem, this paper also proposes state space reduction techniques for model checking multitask PLC ST programs.

14:00-15:20 Session 27B: Security
14:00
Compositional Verification of Cryptographic Circuits against Fault Injection Attacks

ABSTRACT. Fault injection attacks are a class of active, physical attacks against cryptographic circuits. The design and implementation of countermeasures against such attacks are intricate, error-prone, and laborious, necessitating formal verification to guarantee their correctness. However, existing formal verification approaches are limited in efficiency and scalability. In this paper, we propose the first compositional verification approach for round-based hardware implementations of cryptographic algorithms. Our approach decomposes a circuit into a set of single-round sub-circuits and verifies them individually by leveraging either SAT/SMT- or BDD-based approaches. Our approach is implemented as an open-source tool CLEAVE, which is evaluated extensively on realistic cryptographic circuit benchmarks. The experimental results show that our approach is significantly more effective and efficient than the state-of-the-art.

14:20
FVF-AKA: A Formal Verification Framework of AKA Protocols for Multi-server IoT (Journal First)
PRESENTER: Yuan Fei

ABSTRACT. As IoT in a multi-server environment increases resource utilization, more and more problems of IoT authentication and key agreement are being revealed. The Authentication and Key Agreement (AKA) protocol plays an important role in solving these problems. Many AKA protocols have been proposed, and some of them support their own verifications. However, a unifying verification framework for multi-server IoT is lacking. In this article, we propose a formal verification framework of AKA protocols for multi-server IoT (FVF-AKA). It supports the construction of CSP models for the AKA protocol, the implementation of the CSP models in PAT with C#, and the verification of formal models. With the help of C#, many complex functions in the AKA protocol can be implemented. We also design an algorithm to support automatic conversion from the CSP model to the PAT model. FVF-AKA can verify four fundamental properties (deadlock freedom, entity legitimacy, timeout delay, and session key consistency). It also supports the verification of security properties for AKA protocols suffering from four different attacks (relay attacks, denial of service attacks, server spoofing attacks, and session key attacks). Our approach can be applied to most AKA protocols for multi-server IoT. By applying FVF-AKA to two AKA protocols, we verify whether they satisfy the fundamental properties and analyze their security properties in vulnerable environments. Our work helps to analyze AKA protocols for multi-server IoT and provides a foundation for enhancing their security and robustness.

https://doi.org/10.1145/3599731

14:40
Fast Attack Graph Defense Localization via Bisimulation

ABSTRACT. System administrators, network engineers, and IT managers can learn much about the vulnerabilities of an organization's cyber system by constructing and analyzing analytical attack graphs (AAGs). An AAG consists of logical rule nodes, fact nodes, and derived fact nodes. It provides a graph-based representation that describes ways by which an attacker can achieve progress towards a desired goal, a.k.a. a crown jewel. Given an AAG, different types of analyses can be performed to identify attacks on a target goal, measure the vulnerability of the network, and gain insights on how to make it more secure. However, as the AAGs representing real-world systems may be very large, existing analyses are slow or practically impossible. In this paper, we introduce and show how to compute an AAG's defense core: a locally minimal subset of the AAG's rules whose removal will prevent an attacker from reaching a crown jewel. Most importantly, in order to scale up the detection of a defense core, we introduce a novel application of the well-known notion of bisimulation to AAGs. Our experiments show that the use of bisimulation results in significantly smaller graphs and in faster detection of defense cores, making them practical.
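
To illustrate the underlying notion, here is a textbook partition-refinement sketch that computes the bisimulation equivalence classes of a node-labelled graph. This is a naive fixpoint version (efficient tools use Paige-Tarjan-style refinement), and the tiny graph encoding below is a hypothetical stand-in for an AAG, not the paper's algorithm:

```python
def bisimulation_classes(nodes, succ, label):
    """Naive partition refinement: compute bisimulation equivalence classes
    of a node-labelled graph. Start from the partition induced by labels and
    split blocks until two nodes share a block iff they carry the same label
    and their successors reach the same set of blocks."""
    part = {v: label(v) for v in nodes}
    nblocks = len(set(part.values()))
    while True:
        # Signature: a node's own block plus the set of its successors' blocks.
        sig = {v: (part[v], frozenset(part[w] for w in succ(v)))
               for v in nodes}
        ids = {s: i for i, s in enumerate(sorted(set(sig.values()), key=repr))}
        part = {v: ids[sig[v]] for v in nodes}
        if len(ids) == nblocks:  # no block was split: partition is stable
            return part
        nblocks = len(ids)
```

For instance, two rule nodes with the same label and no successors end up in one block, which in turn merges the fact nodes that depend on them; the quotient graph over these blocks can then be analysed in place of the original.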

15:00
Detecting speculative execution vulnerabilities on weak memory models

ABSTRACT. Speculative execution attacks affect all modern processors, and much work has been done to develop techniques for detecting the associated vulnerabilities. Modern processors also operate on weak memory models, which allow out-of-order execution of code. Despite this, there is little work on the interplay between speculative execution and weak memory models.

In this paper, we provide an information flow logic for detecting speculative execution vulnerabilities on weak memory models. The logic is general enough to be used with any modern processor, and designed to be extensible to allow detection of gadgets required for specific attacks. The logic has been proven sound with respect to an abstract model of speculative execution in Isabelle/HOL.

15:20-15:50 Coffee Break
15:50-16:50 Session 28A: Distributed and Concurrent Systems
15:50
Unifying Weak Memory Verification using Potentials

ABSTRACT. Concurrency verification for weak memory models is inherently complex. Several deductive techniques based on proof calculi have recently been developed, but these are typically tailored towards a single memory model through specialised assertions and associated proof rules. In this paper, we propose an extension to the logic Piccolo to generalise reasoning across different memory models. Piccolo is interpreted on the semantic domain of thread potentials. By deriving potentials from weak memory model states, we can define the validity of Piccolo formulae for multiple memory models. We moreover propose unified proof rules for verification on top of Piccolo. Once (a set of) such rules has been shown to be sound with respect to a memory model MM, all correctness proofs employing this rule set are valid for MM. We exemplify our approach on the memory models SC, TSO and SRA using a number of standard litmus tests.

16:10
Parameterized Verification of Round-based Distributed Algorithms via Extended Threshold Automata

ABSTRACT. Threshold automata are a computational model that has proven to be versatile in modeling threshold-based distributed algorithms and enabling completely automatic parameterized verification. We present novel techniques for the verification of threshold automata, based on well-structured transition systems, that allow us to extend the expressiveness of both the computational model and the specifications that can be verified. In particular, we extend the model to allow decrements and resets of shared variables, possibly on cycles, and the specifications to general coverability. While the extensions of the model in general lead to undecidability, our algorithms provide a semi-decision procedure. We demonstrate the benefit of our extensions by showing that our implementation can verify complex round-based algorithms that use resets of shared variables, going beyond what can be modeled in classical threshold automata.

16:30
Automated Static Analysis of Quality of Service Properties of Communicating Systems

ABSTRACT. We present MoCheQoS, a bounded [mo]del [che]cking tool to analyse Quality of Service ([QoS]) properties of message-passing systems. Building on the dynamic temporal logic, the choreographic model, and the bounded model checking algorithm defined in our ICTAC 2023 paper, MoCheQoS enables the static analysis of QoS properties of systems built out from the composition of services. We consider QoS properties on measurable application-level attributes as well as resource consumption metrics such as those relating a monetary cost to memory usage. The implementation of the tool is accompanied by an experimental evaluation. More precisely, we present two case studies meant to evaluate the applicability of MoCheQoS; the first is based on the AWS cloud while the second analyses a communicating system automatically extracted from code. Additionally, we consider synthetically generated experiments to assess the scalability of MoCheQoS. These experiments showed that our model can faithfully capture and effectively analyse QoS properties in industrial-strength scenarios.

15:50-17:10 Session 28B: Formal Methods in Practice
Location: Room 3.0.3
15:50
Stochastic Games for User Journeys

ABSTRACT. Industry is shifting towards service-based business models, for which user satisfaction is crucial. User satisfaction can be analyzed with user journeys, which model services from the user’s perspective. Today, these models are created manually and lack both formalization and tool-supported analysis. This limits their applicability to complex services with many users. Our goal is to overcome these limitations by automated model generation and formal analyses, enabling the analysis of user journeys for complex services and thousands of users. In this paper, we use stochastic games to model and analyze user journeys. Stochastic games can be automatically constructed from event logs and model checked to, e.g., identify interactions that most effectively help users reach their goal. Since the learned models may get large, we use property-preserving model reduction to visualize users’ pain points and convey this information to business stakeholders. The applicability of the proposed method is demonstrated here on two complementary case studies.

16:10
Introducing SWIRL: An Intermediate Representation Language for Scientific Workflows

ABSTRACT. In the ever-evolving landscape of scientific computing, properly supporting the modularity and complexity of modern scientific applications requires new approaches to workflow execution, like seamless interoperability between different workflow systems, distributed-by-design workflow models, and automatic optimisation of data movements. In order to address this need, this article introduces SWIRL, an intermediate representation language for scientific workflows. In contrast with other product-agnostic workflow languages, SWIRL is not designed for human interaction but to serve as a low-level compilation target for distributed workflow execution plans.

The main advantages of SWIRL are its low-level primitives, based on the send/receive programming model, and a formal framework that ensures the consistency of the semantics and specifies the translation of workflow models represented by Directed Acyclic Graphs (DAGs) into SWIRL workflow descriptions. Additionally, SWIRL offers rewriting rules designed to optimise execution traces, accompanied by corresponding equivalence results. An open-source SWIRL compiler toolchain has been developed using the ANTLR Python3 bindings.

16:30
fm-weck: Containerized Execution of Formal Methods Tools

ABSTRACT. Reliable software is an important foundation of the digital world. Next to testing and syntactic analysis, software verification has emerged as a promising technique for tackling this issue. However, most successful software verifiers are research projects that pose several challenges to users unfamiliar with the tool when they try to execute it. We present fm-weck, a command-line utility that makes it easy to run, develop, and experiment with formal methods tools. Our tool relies on a standardized data format, called fm-data, to extract all the information necessary to containerize and run a given verifier. With fm-weck, anyone gets fast and easy access to state-of-the-art formal verifiers, no expertise required.

16:50
HyGaViz: Visualizing Game-Based Certificates for Hyperproperty Verification

ABSTRACT. Hyperproperties relate multiple executions of a system and are commonly used to specify security and information-flow policies. While many verification approaches for hyperproperties exist, providing a convincing certificate that the system satisfies a given property is still a major challenge. In this paper, we propose strategies as a suitable form of certificate for hyperproperties specified in the temporal logic HyperLTL. Concretely, we interpret the verification of a HyperLTL property as a game between universal and existential quantification, allowing us to leverage strategies for the existential quantifiers as certificates. We present HyGaViz, a browser-based visualization tool that lets the user interactively explore an (automatically synthesized) witness strategy by taking control over universally quantified executions.