Convergence Between Model- and Data-driven Design for Cyber-Physical Systems
ABSTRACT. Cyber-Physical Systems (CPS) give rise to a heterogeneous but tightly coupled engineering design domain. CPS design requires engineering processes that span multiple design disciplines, complex design flows, and extensive tool suites. One of the challenges of model-based design automation for CPS is that design trade-offs across traditionally isolated design domains require the deep integration of models, design flows, and tool chains. The first part of the talk covers the evaluation of model-based methods, drawing on experience from the implementation of an experimental design automation tool suite, OpenMETA, developed for DARPA's Adaptive Vehicle Make (AVM) program. Experience with OpenMETA showed fundamental benefits as well as practical limitations of model-based design. The cost of developing component models and reusable component model libraries, the semantic complexity of compositional design with heterogeneous components, and the scalability of design space exploration represent challenges that slow down progress. Recent advances in data-driven methods, inspired by the successes of machine learning and AI applications, offer a partial answer to these challenges while introducing others. The second part of the talk presents recent results on introducing Learning-Enabled Components (LECs) into CPS design and new methods for design space exploration using surrogate models. Progress in the assurance of CPS designs incorporating LECs and in developing surrogate models that merge symbolic and data-driven elements shows that the convergence of model- and data-driven design is a promising direction with the potential to accelerate industrial impact. The talk will conclude with the impact of this convergence on tool suites supporting the design of mission- and safety-critical CPS.
Conformal Quantitative Predictive Monitoring of STL Requirements for Stochastic Processes
ABSTRACT. We consider the problem of predictive monitoring (PM), i.e., predicting at runtime the satisfaction of a desired property from the system's current state. Due to its relevance for runtime safety assurance and online control, PM methods need to be efficient to enable timely interventions against predicted violations, while providing correctness guarantees.
We introduce quantitative predictive monitoring (QPM), the first PM method to support stochastic processes and rich specifications given in Signal Temporal Logic (STL). Unlike most existing PM techniques, which predict whether or not some property $\phi$ is satisfied, QPM provides a quantitative measure of satisfaction by predicting the quantitative (aka robust) STL semantics of $\phi$. QPM derives prediction intervals that are highly efficient to compute and come with probabilistic guarantees, in that the intervals cover, with arbitrary probability, the STL robustness values relative to the stochastic evolution of the system. To do so, we take a machine-learning approach and leverage recent advances in conformal inference for quantile regression, thereby avoiding expensive Monte-Carlo simulations at runtime to estimate the intervals.
We also show how our monitors can be combined in a compositional manner to handle composite formulas, without retraining the predictors nor sacrificing the guarantees.
We demonstrate the effectiveness and scalability of QPM over a benchmark of four discrete-time stochastic processes with varying degrees of complexity.
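To illustrate the conformal step, here is a minimal numpy sketch of split-conformal calibration for quantile regression (CQR-style), assuming pretrained lower/upper quantile predictors of STL robustness; the function names and the max-based conformity score are illustrative conventions, not the paper's implementation.

```python
import numpy as np

def conformalize(cal_lo, cal_hi, cal_y, alpha=0.1):
    """Split-conformal calibration for quantile regression (CQR-style).

    cal_lo, cal_hi: lower/upper quantile predictions on a held-out
                    calibration set (e.g., predicted robustness bounds)
    cal_y:          observed STL robustness values on that set
    Returns a margin q such that [lo - q, hi + q] covers a fresh
    robustness value with probability >= 1 - alpha (marginally).
    """
    scores = np.maximum(cal_lo - cal_y, cal_y - cal_hi)   # conformity scores
    n = len(scores)
    k = int(np.ceil((n + 1) * (1 - alpha)))               # finite-sample rank
    if k > n:                       # calibration set too small for this alpha
        return np.inf
    return np.sort(scores)[k - 1]

def predict_interval(lo, hi, q):
    """Conformal prediction interval for the robustness of a new run."""
    return lo - q, hi + q
```

Note that the margin is computed once offline; at runtime each interval costs only two additions, which is what makes the monitor cheap.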
An STL-based Approach to Resilient Control for Cyber-Physical Systems
ABSTRACT. We present ResilienC, a framework for resilient control of Cyber-Physical Systems subject to STL-based requirements. ResilienC utilizes a recently developed formalism for specifying CPS resiliency in terms of sets of (rec,dur) real-valued pairs, where rec represents the system's capability to rapidly recover from a property violation (recoverability), and dur is reflective of its ability to avoid violations post-recovery (durability). We define the resilient STL control problem as one of multi-objective optimization, where the recoverability and durability of the desired STL specification are maximized. When neither objective is prioritized over the other, the solution to the problem is a set of Pareto-optimal system trajectories. We present a precise solution method to the resilient STL control problem using a mixed-integer linear programming encoding and an a posteriori epsilon-constraint approach for efficiently retrieving the complete set of optimally resilient solutions. In our ResilienC framework, at each time-step, the optimal control action selected from the set of Pareto-optimal solutions by a Decision Maker strategy is used in a Model Predictive Control setting. We demonstrate the practical utility of the ResilienC framework on two significant case studies: autonomous vehicle lane keeping and deadline-driven, multi-region package delivery.
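The a posteriori epsilon-constraint idea can be sketched over a finite set of candidate trajectories, each scored by a (rec, dur) pair with rec minimized and dur maximized; this toy enumeration stands in for the paper's MILP encoding and is only meant to show how the full Pareto front is retrieved.

```python
def pareto_front(cands):
    """Retrieve the Pareto-optimal (rec, dur) pairs by an epsilon-constraint
    sweep: minimize rec subject to dur >= eps, for every achievable eps.
    rec (recovery time) is minimized; dur (durability) is maximized.
    cands is a list of (rec, dur) scores standing in for trajectories."""
    front = []
    for eps in sorted({d for _, d in cands}):
        feasible = [(r, d) for r, d in cands if d >= eps]
        # lexicographic tie-break: minimize rec first, then maximize dur
        best = min(feasible, key=lambda p: (p[0], -p[1]))
        if best not in front:
            front.append(best)
    return front
```

Sweeping eps over all achievable durability values, with the lexicographic tie-break, returns exactly the non-dominated pairs, which a Decision Maker strategy can then choose among.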
Mixed Integer Linear Programming Approach for Control Synthesis with Weighted Signal Temporal Logic
ABSTRACT. This work presents an optimization-based control synthesis approach for an extension of Signal Temporal Logic (STL) called weighted Signal Temporal Logic (wSTL). wSTL was proposed to accommodate user preferences for importance and priorities over concurrent and sequential tasks, as well as satisfaction times, denoted by weights over the logical and temporal operators, respectively. We propose a Mixed Integer Linear Programming (MILP) based approach for synthesis with wSTL specifications. These specifications have the same qualitative semantics as STL but differ in their quantitative semantics, which is recursively modulated with weights. Additionally, we extend the formal definition of wSTL to include the semantics of the until and release temporal operators and present an efficient encoding of these operators in the MILP formulation. As opposed to the original implementation of wSTL, where the arithmetic-geometric mean robustness was used with gradient-based methods prone to local optima, our encoding allows the use of a weighted version of traditional robustness and efficient global MILP solvers. We demonstrate the performance of the proposed formulation in multiple case studies, showcasing the distinct functionalities of weights over Boolean and temporal operators. Moreover, we elaborate on case studies synthesizing controllers for an agent navigating a non-convex environment under different constraints, highlighting the difference between the control plans synthesized for STL and for wSTL. Finally, we compare the time and complexity performance of the encodings for STL and wSTL.
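To make the weighted quantitative semantics concrete, here is a small recursive evaluator for a wSTL fragment over a discrete-time signal. Scaling each operand's robustness by its weight before taking the min/max is one simple weighted variant of traditional robustness, used here for illustration; it is not necessarily the exact definition adopted in the paper.

```python
def rho(phi, sig, t=0):
    """Weighted quantitative semantics for a tiny wSTL fragment.

    Formulas are tuples:
      ("pred", f)              predicate, f maps a state to a real
      ("and"/"or", [(w, sub), ...])   weighted Boolean operators
      ("G"/"F", a, b, ws, sub)        weighted always/eventually on [a, b]
    """
    op = phi[0]
    if op == "pred":
        return phi[1](sig[t])
    if op == "and":    # importance weights over conjuncts
        return min(w * rho(sub, sig, t) for w, sub in phi[1])
    if op == "or":
        return max(w * rho(sub, sig, t) for w, sub in phi[1])
    if op == "G":      # satisfaction-time weights over the horizon
        _, a, b, ws, sub = phi
        return min(ws[k - a] * rho(sub, sig, t + k) for k in range(a, b + 1))
    if op == "F":
        _, a, b, ws, sub = phi
        return max(ws[k - a] * rho(sub, sig, t + k) for k in range(a, b + 1))
    raise ValueError(f"unknown operator: {op}")
```

Because every weight multiplies a linear term, each case of this recursion admits a standard big-M MILP encoding, which is what enables the use of global solvers in place of gradient methods.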
Quantitative Verification for Neural Networks using ProbStars
ABSTRACT. Most deep neural network (DNN) verification research focuses on qualitative verification, which answers whether or not a DNN violates a safety/robustness property. This paper proposes an approach to convert qualitative verification into quantitative verification for neural networks. The resulting quantitative verification method can not only answer YES or NO questions but also compute the probability of a property being violated. To do that, we introduce the concept of a probabilistic star (ProbStar for short), a new variant of the well-known star set, in which the predicate variables belong to a Gaussian distribution, and propose an approach to compute the probability of a probabilistic star in high-dimensional space. Unlike existing works dealing with constrained input sets, our work considers the input set as a truncated multivariate normal (Gaussian) distribution, i.e., besides the constraints on the input variables, the input set has a probability of the constraints being satisfied. The input distribution is represented as a probabilistic star set and propagated through the network to construct the output reachable set, which consists of multiple ProbStars and is used to verify the safety or robustness properties of the network. In case a property is violated, the violation probability can be computed precisely by an exact verification algorithm or approximately by an overapproximate verification algorithm. The proposed approach is implemented in a tool named StarV and is evaluated on the well-known ACASXu networks and a recent rocket landing controller benchmark.
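The core quantity here, the probability mass a Gaussian assigns to the linear predicate constraints of a ProbStar, can be approximated by plain Monte Carlo sampling; the paper computes it with dedicated exact and overapproximate algorithms, so this numpy sketch only shows what is being estimated.

```python
import numpy as np

def halfspace_prob(C, d, mean, cov, n=200_000, seed=0):
    """Estimate P[C x <= d] for x ~ N(mean, cov) by Monte Carlo.

    C, d define the polytopic predicate constraints; the returned value
    is the probability that a Gaussian sample satisfies all of them
    (i.e., the normalizing mass of the truncated Gaussian input set).
    """
    rng = np.random.default_rng(seed)
    xs = rng.multivariate_normal(mean, cov, size=n)
    return float(np.mean(np.all(xs @ C.T <= d, axis=1)))
```

For a single halfspace through the mean of a standard Gaussian, the estimate should be close to 0.5; in high dimensions such sampling becomes expensive, which motivates the analytic computation in the paper.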
Fully-Automated Verification of Linear Systems Using Reachability Analysis with Support Functions
ABSTRACT. While reachability analysis is one of the major techniques for formal verification of dynamical systems, the requirement to adequately tune algorithm parameters often prevents its widespread use in practical applications. In this work, we fully automate the verification process for linear time-invariant systems: based on the computation of tight upper and lower bounds for the support function of the reachable set along a given direction, we present a fully automated verification algorithm that iteratively refines these bounds and thus always returns the correct result in decidable cases. While this verification algorithm is particularly well suited for cases where the specifications are represented by halfspace constraints, we extend it to arbitrary convex unsafe sets using the Gilbert-Johnson-Keerthi algorithm. In summary, our automated verifier is applicable to arbitrary convex initial sets, input sets, and unsafe sets, can handle time-varying inputs, automatically returns a counterexample in case of a safety violation, and scales to previously unanalyzable high-dimensional state spaces. Our evaluation on several challenging benchmarks shows significant improvements in computational efficiency compared to verification using other state-of-the-art reachability tools.
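For intuition, evaluating a support function against a halfspace specification is cheap when the reachable set is represented, say, as a zonotope. The sketch below shows only that basic check; the actual verifier's set representations, bound computations, and refinement loop are considerably more involved.

```python
import numpy as np

def support_zonotope(c, G, ell):
    """Support function h(ell) = max_{x in Z} <ell, x> of the zonotope
    Z = {c + G a : ||a||_inf <= 1} with center c and generator matrix G."""
    return float(ell @ c + np.abs(G.T @ ell).sum())

def is_safe(c, G, ell, offset):
    """Check Z against the unsafe halfspace {x : <ell, x> >= offset}:
    the set is safe iff its support value stays strictly below offset."""
    return support_zonotope(c, G, ell) < offset
```

Tightening the gap between such an upper bound and a corresponding lower bound (e.g., from a point contained in the reachable set) is what drives the iterative refinement until the specification is decided.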
Verification of Recurrent Neural Networks with Star Reachability
ABSTRACT. The paper extends the recent star reachability method to verify the robustness of recurrent neural networks (RNNs) for use in safety-critical applications.
RNNs are a popular machine learning method for various applications, but they are vulnerable to adversarial attacks, where slightly perturbing the input sequence can lead to an unexpected result.
Recent notable techniques for verifying RNNs include unrolling and invariant inference. The first method has scaling issues, since unrolling an RNN creates a large feedforward neural network. The second method, using invariant sets, has better scalability but can produce unknown results due to the accumulation of overapproximation errors over time.
This paper introduces a complementary verification method for RNNs that is both sound and complete. A relaxation parameter can be used to convert the method into a fast overapproximation method that still provides soundness guarantees. The method is designed to be used with NNV, a tool for verifying deep neural networks and learning-enabled cyber-physical systems. Compared to state-of-the-art methods, the extended exact reachability method is $10\times$ faster, and the overapproximation method is $100\times$ to $5000\times$ faster.
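As a point of contrast with the star-based method, the crudest reachability scheme for an RNN propagates intervals through each cell, step by step. This sketch (a ReLU cell with hypothetical shapes, not NNV's algorithm) makes visible why overapproximation errors accumulate across time steps, which the exact star method avoids.

```python
import numpy as np

def relu_rnn_step_interval(lo_h, hi_h, lo_x, hi_x, Wh, Wx, b):
    """One step of coarse interval reachability through a ReLU RNN cell
    h' = relu(Wh h + Wx x + b), given elementwise bounds on the hidden
    state h and the input x. Star-based methods are far tighter; the
    interval hull computed here widens at every step."""
    def affine_bounds(W, lo, hi):
        # split W into positive/negative parts to get sound interval bounds
        Wp, Wn = np.maximum(W, 0.0), np.minimum(W, 0.0)
        return Wp @ lo + Wn @ hi, Wp @ hi + Wn @ lo
    lo1, hi1 = affine_bounds(Wh, lo_h, hi_h)
    lo2, hi2 = affine_bounds(Wx, lo_x, hi_x)
    return np.maximum(lo1 + lo2 + b, 0.0), np.maximum(hi1 + hi2 + b, 0.0)
```

Iterating this over an input sequence yields bounds that grow looser with the horizon, illustrating the "unknown result" failure mode of overapproximate analyses that the sound-and-complete star method eliminates.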