Efficiently Enabling Rich and Trustworthy Inferences at the Extreme Edge
ABSTRACT. Computing systems intelligently performing perception-cognition-action (PCA) loops are essential to interfacing our digitized society with the analog world it is embedded in. They employ distributed edge-cloud computing hierarchies and deep learning methods to make sophisticated inferences and decisions from high-dimensional unstructured sensory data in our personal, social, and physical spaces. While the adoption of deep learning has yielded considerable advances in accuracy and richness, it has also introduced challenges such as generalizing to novel situations, assuring robustness in the face of uncertainty, engendering trust in opaque models, reasoning about complex spatiotemporal events, and implementation on ultra-resource-constrained edge devices. This talk presents ideas for addressing these challenges with physics-aware neuro-symbolic models, automatic platform-aware architecture search, and sharing of edge resources, and describes our experience applying them in varied application domains such as mobile health and agricultural robotics.
Pattern Matching and Parameter Identification for Parametric Timed Regular Expressions
ABSTRACT. Timed formalisms such as Timed Automata (TA), Signal Temporal Logic (STL), and Timed Regular Expressions (TRE) have previously been applied as behaviour specifications for monitoring and runtime verification, in particular in the form of pattern matching, i.e., computing the set of all segments of a given system run that satisfy the specification.
In this work, we consider timed regular expressions with parameters (for timing delays and for signal values). We define several classes of parametric expressions (based on Boolean or real-valued signals and discrete events) and tackle the problem of computing a parametric match-set, i.e., the parameter values and time segments of the data that give a match for a given expression. We propose efficient data structures for representing match-sets (combining zones and polytopes) and devise pattern-matching algorithms. All of these expression classes and algorithms are combined into a single implementation, a tool named parameTRE. We illustrate the approach on several examples, from electrocardiograms to driving patterns.
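
To make the matching problem concrete, the following is a minimal Python sketch, not the parameTRE algorithm or its zone/polytope representation: it brute-forces the match-set of a toy parametric expression, "the signal stays above threshold p for at least duration d", over a piecewise-constant trace. The signal format, the candidate threshold grid, and the duration parameter are illustrative assumptions.

    # Minimal sketch (not parameTRE): brute-force parametric pattern matching for
    # "value >= p for at least min_duration" over a piecewise-constant signal.
    def match_set(signal, thresholds, min_duration):
        """Return (p, t_start, t_end) triples: maximal segments where value >= p
        lasting at least min_duration. `signal` is a list of (time, value) samples,
        each holding until the next sample; the last sample closes the trace."""
        matches = []
        for p in thresholds:
            start = None
            for t, v in signal:
                if v >= p and start is None:
                    start = t
                elif v < p and start is not None:
                    if t - start >= min_duration:
                        matches.append((p, start, t))
                    start = None
            if start is not None and signal[-1][0] - start >= min_duration:
                matches.append((p, start, signal[-1][0]))
        return matches

    trace = [(0.0, 0.2), (1.0, 0.8), (2.5, 0.9), (4.0, 0.1), (5.0, 0.1)]
    print(match_set(trace, thresholds=[0.5, 0.75], min_duration=1.0))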
Exploring Signal Temporal Logic and specification robustness for the analysis of CPS security against stealthy attacks
ABSTRACT. We propose a framework for security vulnerability analysis of Cyber-Physical Systems (CPS). Our framework imposes only minimal assumptions on the structure of the CPS.
Namely, we consider CPS with feedback control loops, state observers, and anomaly detection algorithms. Moreover, our framework does not require any knowledge of the dynamics or the algorithms used in the CPS. Under this common CPS architecture, we develop tools that can identify vulnerabilities in the system and assess their impact on the functionality of the CPS. We pose the CPS security problem as a falsification, or Search-Based Test Generation (SBTG), problem guided by security requirements expressed in Signal Temporal Logic (STL). We propose two categories of security requirements encoded in STL: (1) detectability (stealthiness) and (2) effectiveness (impact on the CPS function). Finally, we demonstrate in simulation, on an inverted pendulum and on an Unmanned Aerial Vehicle (UAV), that both classes of specifications are falsifiable using our SBTG techniques.
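
As an illustration of robustness-guided falsification (a sketch only, not the authors' SBTG setup or STL monitor), the following Python loop randomly searches for a sensor-spoofing signal that minimizes the robustness of a toy safety requirement; the plant, controller, attack bound, and safety limit are invented for the example.

    # Sketch of robustness-guided falsification on a toy closed loop; a real setup
    # would use an STL monitor and a dedicated falsifier (e.g., CMA-ES) instead.
    import random

    def simulate(attack):
        """Toy plant with a spoofed sensor offset attack[k] added at each step."""
        x, trace = 0.0, []
        for a in attack:
            measured = x + a            # compromised sensor reading
            u = -0.5 * measured         # naive feedback controller
            x = 0.9 * x + u + 1.0       # constant disturbance to reject
            trace.append(x)
        return trace

    def robustness(trace, limit=2.0):
        """Robustness of G(|x| <= limit): positive iff always safe."""
        return min(limit - abs(x) for x in trace)

    best_rob = float("inf")
    for _ in range(2000):                       # random-search falsification loop
        attack = [random.uniform(-1.0, 1.0) for _ in range(50)]
        best_rob = min(best_rob, robustness(simulate(attack)))
    print("lowest robustness found:", best_rob)  # negative => requirement falsified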
ABSTRACT. Sampling timed words from a timed language described by a timed automaton may seem a simple task: start from the initial state, choose a transition and a delay, and repeat until an accepting state is reached. Unfortunately, such a simple approach based on local, on-the-fly rules produces timed words from distributions that are biased in unpredictable ways. For this reason, approaches have been developed to guarantee that the sampling follows a more desirable distribution defined over the timed language rather than over the automaton. One such distribution is the maximal-entropy distribution, whose implementation requires several non-trivial computational steps. In this paper, we present Wordgen, which combines these necessary steps into a lightweight standalone tool. The resulting timed words can be mapped to signals used for model-based testing and falsification of cyber-physical systems thanks to a simple interface with the Breach tool.
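
The bias of the naive strategy is easy to see on a toy example. The sketch below (illustrative only, not Wordgen's maximal-entropy sampler; the two-state automaton is an invented assumption) samples timed words by picking transitions and delays uniformly at each step, so word lengths follow a geometric law dictated by the automaton's branching rather than by the timed language itself.

    # Naive local sampling: uniform choice of transition and delay at every step.
    import random

    # transitions: state -> list of (label, max_delay, next_state); "acc" is accepting
    AUTOMATON = {
        "q0": [("a", 2.0, "q0"), ("b", 1.0, "acc")],
        "acc": [],
    }

    def sample_naive(max_steps=20):
        state, word = "q0", []
        for _ in range(max_steps):
            options = AUTOMATON[state]
            if not options:
                break
            label, max_delay, nxt = random.choice(options)   # local, uniform choice
            word.append((round(random.uniform(0.0, max_delay), 3), label))
            state = nxt
        return word

    # Here P(length k) is about 2^-k, independent of the "volume" of timed runs of
    # each length -- exactly the kind of bias a maximal-entropy sampler avoids.
    print(sample_naive())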
Reachability Analysis for Linear Systems with Uncertain Parameters using Polynomial Zonotopes
ABSTRACT. In real-world applications, uncertain parameters are the rule rather than the exception. We present a reachability algorithm for linear systems with uncertain parameters and inputs, using set propagation of polynomial zonotopes. In contrast to previous methods, our approach is able to tightly capture the non-convexity of the reachable set. Building on our main result, we show how our reachability algorithm can be extended to handle linear time-varying systems as well as linear systems with time-varying parameters. Moreover, our approach opens up new possibilities for reachability analysis of linear time-invariant systems, nonlinear systems, and hybrid systems. We compare our approach to other state-of-the-art methods and demonstrate superior tightness on several benchmarks, including a 9-dimensional vehicle platooning system.
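
For intuition, the sketch below propagates an ordinary zonotope (a convex special case of the polynomial zonotopes used here) through a discretized linear system with bounded inputs; the system matrix, input bound, and initial set are illustrative assumptions, and the sketch omits the uncertain-parameter handling that is the paper's contribution.

    # Zonotope propagation for x+ = A x + B u with |u| <= u_bound at every step.
    import numpy as np

    A = np.array([[0.95, 0.10],
                  [-0.10, 0.95]])          # stable rotation-like dynamics
    B = np.array([[0.0], [0.05]])
    u_bound = 0.2

    center = np.array([1.0, 0.0])
    generators = 0.05 * np.eye(2)          # initial set: box of radius 0.05

    for _ in range(50):
        center = A @ center                               # map the center
        generators = A @ generators                       # map existing generators
        generators = np.hstack([generators, B * u_bound]) # Minkowski sum with inputs

    radius = np.abs(generators).sum(axis=1)               # interval hull of the zonotope
    print("reachable box after 50 steps:", center - radius, "to", center + radius)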
Automatic Abstraction Refinement in Neural Network Verification using Sensitivity Analysis
ABSTRACT. The formal verification of neural networks is essential for their application in safety-critical environments. However, set-based verification of neural networks using linear approximations often yields overly conservative results, while nonlinear approximations quickly become computationally infeasible for deep neural networks. We address this issue for the first time by automatically balancing precision and computation time without splitting the propagated set. Our work introduces a novel automatic abstraction refinement approach that uses sensitivity analysis to iteratively reduce the abstraction error at the neuron level until either the specifications are met or a maximum number of iterations is reached. Our evaluation shows that we can tightly over-approximate the output sets of deep neural networks and are up to a thousand times faster than a naive approach. We further demonstrate the applicability of our approach in closed-loop settings.
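
The sketch below illustrates the flavor of the idea, not the paper's algorithm: it propagates interval bounds through a tiny ReLU network and then ranks hidden neurons by the product of their interval width and a gradient-based output-sensitivity estimate, the kind of score one could use to decide where refinement pays off. The weights, network size, and scoring rule are illustrative assumptions.

    # Interval bound propagation plus a sensitivity-based neuron ranking (sketch).
    import numpy as np

    rng = np.random.default_rng(0)
    W1, b1 = rng.normal(size=(4, 2)), rng.normal(size=4)
    W2, b2 = rng.normal(size=(1, 4)), rng.normal(size=1)

    def interval_affine(lo, hi, W, b):
        """Sound interval propagation through x -> W x + b."""
        center, radius = (lo + hi) / 2, (hi - lo) / 2
        c, r = W @ center + b, np.abs(W) @ radius
        return c - r, c + r

    x_lo, x_hi = np.array([-1.0, -1.0]), np.array([1.0, 1.0])
    h_lo, h_hi = interval_affine(x_lo, x_hi, W1, b1)          # pre-activation bounds
    y_lo, y_hi = interval_affine(np.maximum(h_lo, 0), np.maximum(h_hi, 0), W2, b2)
    print("coarse output bounds:", y_lo.item(), y_hi.item())

    # Sensitivity of the output to each hidden pre-activation at the box center,
    # used to rank neurons whose abstraction error most affects the output bounds.
    pre = W1 @ ((x_lo + x_hi) / 2) + b1
    sensitivity = np.abs(W2[0] * (pre > 0))                   # d y / d pre-activation
    scores = sensitivity * (h_hi - h_lo)                       # error impact per neuron
    print("refine neurons in this order:", np.argsort(-scores))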
BERN-NN: Tight Bound Propagation For Neural Networks Using Bernstein Polynomial Interval Arithmetic
ABSTRACT. In this paper, we present BERN-NN, an efficient tool for performing bound propagation for Neural Networks (NNs). Bound propagation is a critical step in a wide range of NN model checkers and reachability analysis tools. Given a bounded input set, bound propagation algorithms aim to compute tight bounds on the output of the NN. So far, linear and convex optimizations have been used to perform bound propagation. Since neural networks are highly non-convex, state-of-the-art bound propagation techniques suffer from large approximation errors. To circumvent this drawback, BERN-NN approximates the bounds of each neuron using a class of polynomials called Bernstein polynomials. Bernstein polynomials enjoy several interesting properties that allow BERN-NN to obtain tighter bounds than those relying on linear and convex approximations. BERN-NN is efficiently parallelized on graphics processing units (GPUs). Extensive numerical results show that the bounds obtained by BERN-NN are orders of magnitude tighter than those obtained by state-of-the-art verifiers based on linear programming and linear interval arithmetic. Moreover, BERN-NN is both faster and produces tighter outputs compared to convex-programming approaches such as alpha-CROWN.
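
The key property BERN-NN builds on can be shown in a few lines: on [0, 1], a polynomial is bounded by the minimum and maximum of its Bernstein coefficients, and this enclosure is typically much tighter than plain interval arithmetic. The univariate example below is an illustrative sketch, not the tool's multivariate, GPU-parallelized implementation.

    # Bernstein range enclosure for a univariate polynomial on [0, 1] (sketch).
    from math import comb

    def bernstein_coeffs(power_coeffs):
        """Convert p(x) = sum a_k x^k on [0, 1] to degree-n Bernstein coefficients."""
        n = len(power_coeffs) - 1
        return [sum(power_coeffs[k] * comb(i, k) / comb(n, k) for k in range(i + 1))
                for i in range(n + 1)]

    # p(x) = x - x^2, true range on [0, 1] is [0, 0.25]
    a = [0.0, 1.0, -1.0]
    b = bernstein_coeffs(a)
    print("Bernstein enclosure:", (min(b), max(b)))        # (0.0, 0.5)
    print("Interval arithmetic:", (0.0 - 1.0, 1.0 - 0.0))  # (-1.0, 1.0), much looser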
Quantitative Robustness Analysis of Sensor Attacks on Cyber-Physical Systems
ABSTRACT. This paper contributes a formal framework for the quantitative analysis of bounded sensor attacks on cyber-physical systems, using the formalism of differential dynamic logic. Given a precondition and postcondition of a system, we formalize two quantitative safety notions, quantitative forward and backward safety, which respectively express (1) how strong the strongest postcondition of the system is with respect to the specified postcondition, and (2) how strong the specified precondition is with respect to the weakest precondition the system needs to ensure the specified postcondition holds. We introduce two notions, forward and backward robustness, to characterize the robustness of a system against sensor attacks as the loss of safety. To reason about robustness, we introduce two simulation distances, forward and backward simulation distances, defined in terms of the behavioral distances between the original system and the system with compromised sensors. The forward and backward simulation distances, respectively, characterize upper bounds on the degree of forward and backward safety loss caused by the sensor attacks. We verify the two simulation distances by expressing them as formulas of differential dynamic logic and prove the formulas with existing tool support. We showcase our framework on an example of an autonomous vehicle that needs to avoid collision.
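
As a rough illustration of the simulation-distance intuition (a sketch under invented dynamics, not the paper's differential dynamic logic formalization), the code below compares the nominal run of a toy closed-loop system with its run under a bounded sensor bias and reports the sup-norm distance between the two trajectories, which bounds how much safety margin the attack can consume at any instant.

    # Behavioral distance between nominal and sensor-attacked runs (sketch).
    def run(attack_offset, steps=100):
        """Toy closed loop: x+ = x + 0.1 * u with u = -(x + attack_offset)."""
        x, trace = 5.0, []
        for _ in range(steps):
            u = -(x + attack_offset)       # controller acts on the compromised reading
            x = x + 0.1 * u
            trace.append(x)
        return trace

    nominal = run(attack_offset=0.0)
    attacked = run(attack_offset=0.5)      # sensor biased by at most 0.5

    distance = max(abs(a - b) for a, b in zip(nominal, attacked))
    print("behavioral distance (sup norm):", round(distance, 3))
    print("worst-case loss of safety margin:", round(distance, 3))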