Free Facts: An Alternative to Inefficient Axioms in Dafny
ABSTRACT. Formal software verification relies on properties of functions and built-in operators. Unless these properties are handled directly by decision procedures, an automated verifier includes them in verification conditions by supplying them as universally quantified axioms or theorems. The use of quantifiers sometimes leads to bad performance, especially if automation causes the quantifiers to be instantiated many times.
This paper proposes free facts as an alternative to some axioms. A free fact is a pre-instantiated axiom that is generated alongside those formulas in a verification condition that can benefit from it. Replacing an axiom with free facts thus reduces the number of quantifiers in verification conditions. Free facts are triggered statically, by syntactic occurrences of certain patterns in the proof terms. This is less powerful than the dynamically triggered patterns used during proof construction, but the paper shows that free facts perform well in practice.
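To make the contrast concrete, here is a minimal sketch using the z3 Python API; the function f, its axiom, and the goal are hypothetical illustrations, not Dafny's actual encoding.

```python
# Hypothetical illustration of "free facts" vs. a quantified axiom,
# using the z3 Python API (not Dafny's actual encoding).
from z3 import Function, IntSort, Ints, ForAll, Solver

f = Function('f', IntSort(), IntSort())
x, a, b = Ints('x a b')

# Variant 1: quantified axiom; the solver must instantiate it itself.
s1 = Solver()
s1.add(ForAll([x], f(x) >= x))     # universally quantified axiom
s1.add(f(a) < a)                   # negated proof goal
print(s1.check())                  # unsat, found via quantifier instantiation

# Variant 2: "free facts"; the axiom is pre-instantiated only for the
# terms f(a) and f(b) that occur in the verification condition.
s2 = Solver()
s2.add(f(a) >= a, f(b) >= b)       # quantifier-free instances
s2.add(f(a) < a)
print(s2.check())                  # unsat, with no quantifier reasoning
```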
Understanding Synthesized Reactive Systems Through Invariants
ABSTRACT. In many applications for which reactive synthesis is attractive, the computed implementations need to exhibit understandable behavior. While some existing synthesis approaches compute finite-state machines whose structure supports understandability, such approaches do not scale to specifications that can only be realized with a large number of states. Moreover, it can be argued that asking the engineer to understand the internal structure of the implementation is unnecessary when only the behavior of the implementation needs to be understood.
In this paper, we present an approach to computing understandable safety invariants that every implementation satisfying a generalized reactivity(1) specification needs to fulfill. Together with the safety part of the specification, the invariants completely define which transitions between input and output proposition valuations any correct implementation can take.
We show with two case studies that the approach is suitable for computing understandable invariants in generalized reactivity(1) synthesis. We also demonstrate that the invariants highlight the strategic decisions that implementations for the given specification need to make, which not only helps the system designer to understand what the specification entails, but also supports specification debugging.
Efficient Formally Verified Maximal End Component Decomposition for MDPs
ABSTRACT. Identifying a Markov decision process's maximal end components (MECs) is a prerequisite for applying sound probabilistic model checking algorithms. In this paper, we present the first mechanized correctness proof of a MEC decomposition algorithm, using the Isabelle/HOL theorem prover. We iteratively refine the high-level algorithm and proof into an imperative LLVM bytecode implementation that we integrate into the Modest Toolset's existing mcsta model checker. By reducing the trusted code base of a popular probabilistic model checker, we bring the benefits of interactive theorem proving into practice, and we show experimentally that our new verified MEC decomposition component in mcsta performs on par with the tool's previous unverified implementation.
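For readers new to the problem, the following is a compact sketch of the textbook iterative MEC decomposition fixpoint (our illustration, not the paper's verified Isabelle/LLVM implementation); the MDP encoding via a succ dictionary is an assumption made for this sketch.

```python
# Textbook iterative MEC decomposition (illustrative sketch, not the
# paper's verified implementation). The MDP is encoded as
# succ[s][a] = set of states reachable from s via action a with prob > 0.
import networkx as nx

def mec_decomposition(states, succ):
    act = {s: set(succ[s]) for s in states}   # actions still under consideration
    while True:
        # Build the graph induced by the remaining state-action pairs.
        g = nx.DiGraph()
        g.add_nodes_from(states)
        for s in states:
            for a in act[s]:
                for t in succ[s][a]:
                    g.add_edge(s, t)
        sccs = list(nx.strongly_connected_components(g))
        scc_of = {s: i for i, c in enumerate(sccs) for s in c}
        # Remove actions that may leave their state's SCC.
        changed = False
        for s in states:
            for a in list(act[s]):
                if any(scc_of[t] != scc_of[s] for t in succ[s][a]):
                    act[s].discard(a)
                    changed = True
        if not changed:
            # MECs are the SCCs in which every state keeps an action.
            return [c for c in sccs if all(act[s] for s in c)]
```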
Combining Classical and Probabilistic Independence Reasoning to Verify the Security of Oblivious Algorithms
ABSTRACT. We consider the problem of how to verify the security of probabilistic oblivious algorithms formally and systematically. Unfortunately, prior program logics fail to support a number of complexities that feature in the semantics and invariants needed to verify the security of many practical probabilistic oblivious algorithms. We propose an approach based on reasoning over perfectly oblivious approximations, using a program logic that combines classical Hoare logic reasoning with probabilistic independence reasoning to support all the needed features. We formalise our new logic and prove it sound in Isabelle/HOL, and we apply our approach to formally verify the security of several challenging case studies beyond the reach of prior methods for proving obliviousness.
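The flavor of fact that probabilistic independence reasoning manipulates can be seen in a classical one-time-pad-style lemma (a standard fact, not a result of the paper):

```latex
% A classical independence lemma of the kind such logics track:
If $r$ is uniformly distributed on $\{0,1\}^n$ and independent of a secret
$s \in \{0,1\}^n$, then $r \oplus s$ is also uniform and independent of $s$;
hence an observer of $r \oplus s$ learns nothing about $s$.
```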
The nonexistence of unicorns and many-sorted Löwenheim–Skolem theorems
ABSTRACT. Stable infiniteness, strong finite witnessability, and smoothness are model-theoretic properties relevant to theory combination in satisfiability modulo theories. Theories that are strongly finitely witnessable and smooth are called strongly polite, and such theories can be combined with other theories while preserving decidability. Toledo, Zohar, and Barrett conjectured that stably infinite and strongly finitely witnessable theories are smooth and therefore strongly polite. They called counterexamples to this conjecture unicorn theories on the grounds that their existence seemed unlikely. We prove that, indeed, unicorns do not exist. We additionally prove versions of the Löwenheim–Skolem theorem and the Łoś–Vaught test for many-sorted logic.
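For orientation, the classical single-sorted statements that the paper generalizes to many-sorted logic read as follows:

```latex
% Classical single-sorted statements, for orientation:
\textbf{L\"owenheim--Skolem.} If a first-order theory in a countable
language has an infinite model, then it has a model of every infinite
cardinality.

\textbf{\L{}o\'s--Vaught test.} If a satisfiable theory in a countable
language has no finite models and is categorical in some infinite
cardinality, then it is complete.
```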
Partially Observable Stochastic Games with Neural Perception Mechanisms
ABSTRACT. Stochastic games are a well-established model for multi-agent sequential decision making under uncertainty. In practical applications, though, agents often have only partial observability of their environment. Furthermore, agents increasingly perceive their environment using data-driven approaches such as neural networks trained on continuous data. We propose the model of neuro-symbolic partially observable stochastic games (NS-POSGs), a variant of continuous-space concurrent stochastic games that explicitly incorporates neural perception mechanisms. We focus on a one-sided setting with a partially informed agent, which uses discrete, data-driven observations, and a fully informed agent. We present a new method, called one-sided NS-HSVI, for the approximate solution of one-sided NS-POSGs, which exploits the piecewise-constant structure of the model. Using neural network pre-image analysis to construct finite polyhedral representations, together with particle-based representations of beliefs, we implement our approach and illustrate its practical applicability to the analysis of pedestrian-vehicle and pursuit-evasion scenarios.
VeriQR: A Robustness Verification Tool for Quantum Machine Learning Models
ABSTRACT. Adversarial noise attacks present a significant threat to quantum machine learning (QML) models, similar to their classical counterparts. This is especially true in the current Noisy Intermediate-Scale Quantum era, where noise is unavoidable. It is therefore essential to ensure the robustness of QML models before their deployment. To address this challenge, we introduce VeriQR, to the best of our knowledge the first tool designed specifically for formally verifying and improving the robustness of QML models. The tool mimics the noisy effects of real-world quantum hardware by injecting random noise in order to formally validate a QML model's robustness. VeriQR supports exact (sound and complete) algorithms for both local and global robustness verification. For greater efficiency, it also implements an under-approximate (complete) algorithm and a tensor-network-based algorithm to verify local and global robustness, respectively. As a formal verification tool, VeriQR can detect adversarial examples and use them for further analysis and for improving local robustness through adversarial training, as demonstrated by experiments on real-world QML models. Moreover, it allows users to incorporate customized noise. Based on this feature, we assess VeriQR on a variety of real-world examples, and the experimental outcomes confirm that adding specific quantum noise can enhance the global robustness of QML models. All of these capabilities are accessible through a user-friendly graphical interface, catering to general users without requiring a deep understanding of the counter-intuitive probabilistic nature of quantum computing.
Bridging Dimensions: Confident Reachability for High-Dimensional Controllers
ABSTRACT. Autonomous systems are increasingly implemented using end-to-end learning-based controllers. Such controllers make decisions that are executed on the real system, with images as one of the primary sensing modalities. Deep neural networks form a fundamental building block of such controllers. Unfortunately, the existing neural-network verification tools do not scale to inputs with thousands of dimensions --- especially when the individual inputs (such as pixels) are devoid of clear physical meaning. This paper takes a step towards connecting exhaustive closed-loop verification with high-dimensional controllers. Our key insight is that the behavior of a high-dimensional controller can be approximated with several low-dimensional controllers.
To balance the approximation accuracy and verifiability of our low-dimensional controllers, we leverage the latest verification-aware knowledge distillation. Then, we inflate the low-dimensional reachability results with statistical approximation errors, yielding a high-confidence reachability guarantee for the high-dimensional controller. We investigate two inflation techniques --- based on trajectories and on control actions --- both of which show convincing performance in three OpenAI Gym benchmarks.
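As a rough illustration of the inflation step, the sketch below computes a distribution-free, high-confidence error margin from sampled deviations between the two controllers and inflates an interval reachable set by it; the conformal-style bound and all names are our assumptions, not necessarily the paper's exact procedure.

```python
# Illustrative inflation step (hypothetical names; the paper's exact
# statistical procedure may differ): derive a distribution-free,
# high-confidence bound on the low- vs. high-dimensional controller
# deviation from samples, then inflate an interval reachable set by it.
import math

def inflation_margin(errors, confidence=0.95):
    # Split-conformal bound: with n i.i.d. error samples, the k-th order
    # statistic with k = ceil((n + 1) * confidence) upper-bounds a fresh
    # error with probability at least `confidence`.
    n = len(errors)
    k = math.ceil((n + 1) * confidence)
    assert k <= n, "not enough samples for the requested confidence"
    return sorted(errors)[k - 1]

def inflate_box(lo, hi, margin):
    # Inflate an axis-aligned reachable box componentwise.
    return [l - margin for l in lo], [h + margin for h in hi]

# Usage with made-up deviation samples from held-out trajectories:
errors = [0.012, 0.031, 0.022, 0.047, 0.018] * 40   # 200 samples
margin = inflation_margin(errors)
lo, hi = inflate_box([0.0, 1.0], [1.0, 2.0], margin)
```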
Certified Quantization Strategy Synthesis for Neural Networks
ABSTRACT. Quantization plays an important role in deploying neural networks on embedded, real-time systems with limited computing and storage resources (e.g., edge devices). It significantly reduces model storage cost and improves inference efficiency by using fewer bits to represent the parameters. However, it was recently shown that critical properties such as robustness and backdoor-freeness may be broken by quantization. In this work, we introduce the first method for synthesizing quantization strategies that verifiably maintain desired properties after quantization, leveraging the key insight that quantization causes a data distribution shift in each layer. We propose to compute a preimage for each layer, based on which the preceding layer is quantized, ensuring that the quantized reachable region of the preceding layer remains within the preimage. To tackle the challenge of computing the exact preimage, we propose an MILP-based method for computing an under-approximation of it. We implement our method in a tool named Quadapter and demonstrate its effectiveness and efficiency by providing certified quantization that successfully preserves model robustness and backdoor-freeness.
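A toy version of the per-layer containment check conveys the idea (our illustration, not Quadapter itself; the axis-aligned preimage box is a simplifying assumption, whereas the paper under-approximates preimages via MILP):

```python
# Toy containment check behind the idea (our illustration, not Quadapter):
# quantize one layer's weights and verify, with interval arithmetic, that
# the quantized layer's output region stays inside a given axis-aligned
# preimage box.
import numpy as np

def quantize(w, bits=8):
    # Symmetric uniform quantization to the given bit width.
    scale = np.max(np.abs(w)) / (2 ** (bits - 1) - 1)
    return np.round(w / scale) * scale

def interval_affine(W, b, lo, hi):
    # Exact interval propagation through y = W @ x + b for x in [lo, hi].
    center, radius = (lo + hi) / 2, (hi - lo) / 2
    c = W @ center + b
    r = np.abs(W) @ radius
    return c - r, c + r

def stays_in_preimage(W, b, lo, hi, pre_lo, pre_hi, bits=8):
    q_lo, q_hi = interval_affine(quantize(W, bits), b, lo, hi)
    return bool(np.all(q_lo >= pre_lo) and np.all(q_hi <= pre_hi))
```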
A Zonotopic Dempster-Shafer Approach to the Quantitative Verification of Neural Networks
ABSTRACT. The reliability and usefulness of verification depend on the ability to represent uncertainty appropriately. Most existing work on neural network verification assumes either set-based or probabilistic information on the inputs. In this work, we rely on the framework of imprecise probabilities, specifically p-boxes, to propose a quantitative verification of ReLU neural networks that can account for both probabilistic information and epistemic uncertainty on the inputs. On classical benchmarks, including the ACAS Xu examples, we demonstrate that our approach improves the tradeoff between tightness and efficiency compared to related work on probabilistic network verification, while handling much more general classes of uncertainties on the inputs and providing fully guaranteed results.
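For reference, the standard notion from imprecise probability used here:

```latex
\textbf{Definition (p-box).}
A probability box is a pair of CDFs $(\underline{F}, \overline{F})$ with
$\underline{F}(x) \le \overline{F}(x)$ for all $x$; it represents the set
of all distributions $F$ with
$\underline{F}(x) \le F(x) \le \overline{F}(x)$ everywhere, capturing
probabilistic (aleatoric) and epistemic uncertainty at once.
```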
Integrating Loop Acceleration into Bounded Model Checking
ABSTRACT. Bounded Model Checking (BMC) is a powerful technique for proving unsafety. However, finding deep counterexamples that require a large bound is challenging for BMC. Acceleration techniques, on the other hand, compute "shortcuts" that "compress" many execution steps into a single one. In this paper, we tightly integrate acceleration techniques into SMT-based bounded model checking. By adding suitable "shortcuts" to the SMT problem on the fly, our approach can quickly detect deep counterexamples, even with small bounds. Moreover, using so-called blocking clauses, our approach can prove the safety of examples where BMC diverges. An empirical comparison with other state-of-the-art techniques shows that our approach is highly competitive for proving unsafety, and orthogonal to existing techniques for proving safety.
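A toy example with the z3 Python API shows why such shortcuts help (our illustration, not the paper's implementation): plain BMC needs 500 unrollings of the loop body x := x + 2 to reach x = 1000, whereas a single accelerated step suffices.

```python
# Toy illustration with the z3 Python API (not the paper's tool): one
# accelerated "shortcut" step x' = x + 2*n replaces 500 BMC unrollings.
from z3 import Ints, Solver

x0, x1, n = Ints('x0 x1 n')
s = Solver()
s.add(x0 == 0)                    # initial state
s.add(n >= 0, x1 == x0 + 2 * n)   # acceleration of the loop x := x + 2
s.add(x1 == 1000)                 # "deep" error condition
print(s.check())                  # sat
print(s.model())                  # n = 500, found without any unrolling
```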
ABSTRACT. MaxSAT modulo theories (MaxSMT) is an important generalization of satisfiability modulo theories (SMT) with various applications. In this paper, we focus on MaxSMT with the background theory of Linear Integer Arithmetic, denoted MaxSMT(LIA). We design the first local search algorithm for MaxSMT(LIA), called PairLS, based on the following novel ideas. First, we propose a novel operator for integer variables, the pairwise operator, which extends the original local search operator by operating on two variables simultaneously, enriching the search space. Moreover, we propose a compensation-based picking heuristic to determine and distinguish the pairwise operations. We evaluate our algorithm on an extensive set of benchmarks; the results show that our solver is competitive with state-of-the-art MaxSMT solvers. Furthermore, we also apply the pairwise operation to enhance a local search algorithm for SMT, demonstrating its extensibility.
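A minimal sketch of what a pairwise move buys (our reading of the idea, with hypothetical names, not the PairLS implementation):

```python
# Illustrative pairwise move: shifting two integer variables in opposite
# directions can repair one constraint without breaking another, which no
# single-variable move can achieve.
def pairwise_move(assignment, i, j, delta):
    # Shift variable i by +delta and j by -delta simultaneously,
    # leaving any constraint over x_i + x_j invariant.
    new = dict(assignment)
    new[i] += delta
    new[j] -= delta
    return new

# With constraints x + y == 10 and x - y == 4, every single-variable move
# from {x: 5, y: 5} violates the first constraint, while the pairwise
# move (+2, -2) satisfies both.
print(pairwise_move({'x': 5, 'y': 5}, 'x', 'y', 2))   # {'x': 7, 'y': 3}
```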
ABSTRACT. We present a new SAT backend for the B-Method to enable new applications of formal methods.
The new backend interleaves low-level SAT solving with high-level constraint solving.
It provides ``bare metal'' access to SAT solving, while pre- and post-calculations can be done in the full B language, with access to higher-order or even infinite data values.
The backend is integrated into ProB, not as a general purpose backend, but as a dedicated backend for solving hard constraint satisfaction and optimisation problems on complex data.
In this article we present the approach and its origin in the proof of Cook's theorem, and we illustrate and evaluate it on a few novel applications of formal methods, ranging from biology to railway applications.
Practical Approximate Quantifier Elimination for Non-linear Real Arithmetic
ABSTRACT. Quantifier Elimination (QE) concerns finding a quantifier-free formula that is semantically equivalent to a given quantified formula in a given logic. For the theory of non-linear arithmetic over the reals (NRA), QE is known to be computationally challenging. In this paper, we show how QE over NRA can be solved approximately and efficiently in practice using a Boolean combination of constraints in linear arithmetic over the reals (LRA). Our approach works by approximating the solution space of a set of NRA constraints when all real variables are bounded. It combines adaptive dynamic gridding with an application of Handelman's theorem to obtain the approximation efficiently via a sequence of linear programs (LPs). We provide rigorous approximation guarantees, as well as proofs of soundness and completeness (under mild assumptions) of our algorithm. Interestingly, our work allows us to bootstrap on earlier work (viz. Garcia-Contreras et al. '23) and solve quantified SMT problems over a combination of NRA and other theories that are beyond the reach of state-of-the-art solvers. We have implemented our approach in a preprocessor for Z3 called POQER. Our experiments show that POQER+Z3EG outperforms state-of-the-art SMT solvers on non-trivial problems adapted from a suite of benchmarks.
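For reference, Handelman's theorem in its standard form, which explains why the approximation reduces to linear programming on each grid cell:

```latex
% Handelman's theorem (standard form):
\textbf{Theorem (Handelman).}
Let $P = \{x \in \mathbb{R}^n \mid g_1(x) \ge 0, \dots, g_m(x) \ge 0\}$ be a
compact polytope with linear $g_1, \dots, g_m$. If a polynomial $f$ is
strictly positive on $P$, then
\[
  f \;=\; \sum_{\alpha \in \mathbb{N}^m} \lambda_\alpha\,
          g_1^{\alpha_1} \cdots g_m^{\alpha_m}
\]
for finitely many nonnegative coefficients $\lambda_\alpha$. Fixing a degree
bound on $\alpha$ turns the search for the $\lambda_\alpha$ into a linear
program.
```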
A Divide-and-Conquer Approach to Variable Elimination in Linear Real Arithmetic
ABSTRACT. We introduce a novel variable elimination method for conjunctions of linear real arithmetic constraints. In prior work, we derived a variant of Fourier-Motzkin elimination that uses case splitting to reduce the procedure's complexity from doubly to singly exponential. This variant, which we call FMplex, was originally developed for satisfiability checking, and it essentially performs a depth-first search in a tree of sub-problems. It can be adapted straightforwardly for the task of quantifier elimination, but yields disjunctions as output.
Our main contribution is to show how to efficiently extract an equivalent conjunction from the search tree. Besides the theoretical foundations, we explain how the procedure relates to other methods for quantifier elimination and polyhedron projection. An experimental evaluation demonstrates that our implementation is competitive with established tools.
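A textbook-sized example of the underlying projection:

```latex
% Eliminating x from a conjunction of bounds:
\[
  \exists x.\ (y \le x \wedge z \le x \wedge x \le 5)
  \;\equiv\; (y \le 5 \wedge z \le 5).
\]
```

Fourier-Motzkin combines every lower bound on the eliminated variable with every upper bound, which causes the doubly exponential growth over several eliminations; as we read the prior work, FMplex instead case-splits on which lower bound is the largest and explores one sub-problem per case, depth-first.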
Nonlinear Craig Interpolant Generation over Unbounded Domains by Separating Semialgebraic Sets
ABSTRACT. Interpolation-based techniques have become popular in recent years, as they can improve the scalability of existing verification techniques thanks to their inherent modularity and local reasoning capabilities. Synthesizing Craig interpolants is the cornerstone of these techniques. However, existing approaches for synthesizing nonlinear Craig interpolants, which essentially amounts to the mathematical problem of separating two disjoint semialgebraic sets, are subject to various restrictions, such as the Archimedean condition or concavity conditions. In this paper, we investigate nonlinear Craig interpolant synthesis over unbounded domains using homogenization techniques, which allow us to remove the Archimedean condition. The synthesis problem is reduced to semidefinite programs that can be solved efficiently. Examples are provided to demonstrate the effectiveness and efficiency of our approach.
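Recall the standard definition underlying the synthesis problem:

```latex
\textbf{Definition (Craig interpolant).}
Given formulas $\phi$ and $\psi$ with $\phi \wedge \psi$ unsatisfiable, an
interpolant is a formula $I$ over the symbols common to $\phi$ and $\psi$
such that $\phi \models I$ and $I \wedge \psi$ is unsatisfiable. When
$\phi$ and $\psi$ define disjoint semialgebraic sets, $I$ describes a set
separating them.
```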
Code-level Safety Verification for Automated Driving: A Case Study
ABSTRACT. The formal safety analysis of automated driving vehicles poses unique challenges due to their dynamic operating conditions and significant complexity.
This paper presents a case study of applying formal safety verification to adaptive cruise controllers. Unlike the majority of existing verification approaches in the automotive domain, which only analyze (potentially imperfect) controller models, employ simulation to find counterexamples, or use online monitors for runtime verification, our method verifies controllers at the code level using bounded model checking. Verification is performed against an invariant set derived from formal specifications and an analytical model of the required behavior. For neural network controllers, we propose a scalable three-step decomposition, which additionally uses a neural network verifier. We show that both traditionally implemented and neural network controllers are verified within minutes. The dual focus on formal safety and implementation verification provides a comprehensive framework applicable to similar cyber-physical systems.
AGVTS: Automated Generation and Verification of Temporal Specifications for Aeronautics SCADE Models
ABSTRACT. SCADE is both a formal language and a model-based development environment, widely used to build and verify models of safety-critical systems (SCS). The SCADE Design Verifier (DV) provides SAT-based verification. However, DV cannot adequately express complex temporal specifications, and it may fail due to complexity issues such as the floating-point numbers that are often used in the aeronautics domain. In addition, manually writing temporal specifications is not only time-consuming but also error-prone. To address these challenges, we propose AGVTS, a method that automates the generation of temporal specifications and the verification of aeronautics SCADE models. First, we define a modular pattern language for precisely expressing Chinese natural-language requirements. Then, we present a rule-based translation, augmented with BERT, which translates restricted requirements into LTL and CTL. In addition, SCADE model verification is achieved by translating models into nuXmv, which supports both SMT-based and SAT-based verification. Finally, we illustrate a successful application of our methodology to an ejection seat control system, which convinced our industrial partners of the usefulness of formal methods for industrial systems.
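A generic example conveys the kind of pattern-based translation involved (illustrative only; the paper's pattern language targets Chinese natural-language requirements):

```latex
% A generic response-pattern translation (illustrative only):
\[
  \text{``whenever the ejection handle is pulled, the seat eventually ejects''}
  \;\leadsto\;
  \mathbf{G}(\mathit{pulled} \rightarrow \mathbf{F}\,\mathit{eject})
  \ \text{(LTL)},
  \qquad
  \mathbf{AG}(\mathit{pulled} \rightarrow \mathbf{AF}\,\mathit{eject})
  \ \text{(CTL)}.
\]
```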
UnsafeCop: Towards Memory Safety for Real-World Unsafe Rust Code with Practical Bounded Model Checking
ABSTRACT. Rust has gained popularity as a safer alternative to C/C++ for low-level programming due to its memory safety features and minimal runtime overhead. However, the "unsafe" keyword allows developers to bypass these safety guarantees, posing memory safety risks. Bounded Model Checking (BMC) is commonly used to detect memory safety problems, but it has limitations for large-scale programs, as it can only detect bugs within a bounded number of execution steps.
In this paper, we introduce UnsafeCop, an enhancement of BMC tailored for analyzing memory safety in real-world unsafe Rust code. Our methodology incorporates harness design, loop bound inference, and both loop and function stubbing for comprehensive analysis. We optimize verification efficiency through a strategic function verification order, leveraging both types of stubbing. UnsafeCop was evaluated on TECC (Trusted-Environment-based Cryptographic Computing), a proprietary framework consisting of 30,174 lines of code, including 3,019 lines of unsafe code, developed by Ant Group. Experimental results demonstrate that UnsafeCop effectively detects and verifies dozens of memory safety issues, reducing verification time by 73.71% compared to the traditional non-stubbing approach, highlighting its practical effectiveness.
Systematic Test Case Generation for Distributed Redundant Controllers using Model Checking (EXTENDED ABSTRACT)
ABSTRACT. In testing fault-tolerant systems, it is crucial to encompass the system's state space as comprehensively as possible.
Using a model checker to make informed decisions about which tests to pick from the test space can narrow the scope of testing and increase confidence in test coverage.
We use Timed Rebeca to model and verify an industry-suggested algorithm for role selection in distributed control systems with redundancy.
The algorithm prioritizes consistency over availability in trade-off situations.
During model checking, we designed scenarios to simulate the environment and possible faults. These scenarios are based on considering all possible events in the system. In this paper, we show how these scenarios can be used for systematic test case generation. We then use the coordination language Lingua Franca to ensure the alignment of the model and the implementation. Lingua Franca provides an implementation for the nodes. It can also simulate network switches, which enables setting up test scenarios that include network degradation such as switch failures, packet losses, and excessive latency. This can be set up as a hardware-in-the-loop simulation, in which the actual node implementations interact with simulated switches and the network.
Employing Formal Equivalence Methods to Ensure Correspondence between a High-Level C/C++ Model and its RTL Design
ABSTRACT. In the field of communication system products, engineers often start by modeling Digital Signal Processing (DSP) components in high-level languages such as MATLAB® or C/C++. Design engineers then convert these models into Register Transfer Level (RTL) designs. How can one verify that the RTL is equivalent to the high-level model? The conventional approach involves extensive Universal Verification Methodology (UVM) dynamic simulations that can take several months, and even with this thorough process, some elusive errors may go undetected.
To enhance the verification process, Formal Equivalence Verification (FEV) has become a powerful complement to dynamic simulation. With recent advancements in formal solver technology, FEV provides a distinct benefit by using mathematical methods to ensure that the (timed) RTL implementation matches the original (untimed) high-level C/C++ model. This drastically reduces verification time and ensures exhaustive coverage of the design state space.
This paper presents an in-depth exploration of the verification of complex Finite State Machines (FSMs) with datapaths, specifically focusing on Multiply-Accumulate (MAC), Tone Generator, and Automatic Gain Control (AGC) blocks, employing the formal equivalence methodology. Although these signal processing blocks had previously been validated through UVM dynamic simulations, FEV identified hard-to-find bugs in just a few weeks using the new workflow, thereby streamlining the verification process.
Beyond the Bottleneck: Enhancing High-Concurrency Systems with Lock Tuning
ABSTRACT. High-concurrency systems often suffer from performance bottlenecks, frequently caused by the waiting and context switching that result from fierce competition between threads for locks. As a cloud computing company, we place great emphasis on maximizing performance. To this end, we adapt the lightweight spin lock and propose a concise parameter fine-tuning strategy that can break through system performance bottlenecks with minimal risk. The strategy has been validated in Apache RocketMQ, a high-throughput message queue system, where it achieved performance improvements of up to 37.58% on x86 CPUs and up to 32.82% on ARM CPUs. Furthermore, we confirmed the method's consistent effectiveness across different code versions and I/O flush strategies, demonstrating its broad applicability in real-world settings. This work offers not only a practical tool for addressing performance issues in high-concurrency systems but also highlights the practical value of formal techniques in solving engineering problems.