Characterizations of Safety in Hybrid Inclusions via Barrier Functions
ABSTRACT. This paper investigates characterizations of safety in terms of
barrier functions for hybrid systems modeled by hybrid inclusions.
After introducing an adequate definition of safety for hybrid inclusions,
sufficient conditions using continuously differentiable as well
as lower semicontinuous barrier functions are proposed. Furthermore,
the possible nonexistence of autonomous and continuous barrier
functions certifying safety guides us to propose, building on
converse Lyapunov theory for stability alone, nonautonomous barrier
functions and conditions that are shown to be both necessary and
sufficient, provided that a mild regularity condition on the system's
dynamics holds.
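To fix ideas, continuously differentiable barrier conditions for a hybrid inclusion typically have the following flavor (a sketch in our own notation, not the paper's exact statements):

```latex
% Sketch (our notation, not the paper's exact statements) of C^1
% barrier conditions for the hybrid inclusion
%   \dot{x} \in F(x), x \in C;   x^+ \in G(x), x \in D,
% with initial set X_0 and unsafe set X_u:
B(x) \le 0 \quad \forall x \in X_0, \qquad B(x) > 0 \quad \forall x \in X_u,
\\
\langle \nabla B(x), f \rangle \le 0 \quad \forall x \in C,\ \forall f \in F(x),
\\
B(g) \le 0 \quad \forall x \in D \text{ with } B(x) \le 0,\ \forall g \in G(x).
```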
On topological entropy and stability of switched linear systems
ABSTRACT. This paper studies topological entropy and stability properties of switched linear systems. First, we show that the exponential growth rates of solutions of a switched linear system are essentially upper bounded by its topological entropy. Second, we estimate the topological entropy of a switched linear system by decomposing it into a part that is generated by scalar multiples of the identity matrix and a part that has zero entropy, and proving that the overall topological entropy is upper bounded by that of the former. Third, we prove that a switched linear system is globally exponentially stable if its topological entropy remains zero under a destabilizing perturbation. Finally, the entropy estimation via decomposition and the entropy-based stability condition are applied to three classes of switched linear systems to construct novel upper bounds for topological entropy and sufficient conditions for global exponential stability.
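For orientation (a standard fact, not a contribution of this paper), the topological entropy of a single linear system has a closed form, and the scalar-times-identity part appearing in the decomposition is its simplest instance:

```latex
% Standard closed form (Bowen) for a single linear system \dot{x} = Ax:
h(A) = \sum_{i \,:\, \operatorname{Re}\lambda_i(A) > 0} \operatorname{Re}\lambda_i(A),
% and for the scalar-times-identity part, A = aI \in \mathbb{R}^{n \times n}:
h(aI) = n \max\{a, 0\}.
```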
Robust Invariant Sets Generation for State-Constrained Perturbed Polynomial Systems
ABSTRACT. In this paper we study the (maximal) robust invariant set generation problem for perturbed polynomial continuous-time systems under semi-algebraic state constraints, where a (maximal) robust invariant set is a set of (all) states such that every possible trajectory starting from it always respects the specified state constraints, regardless of the actions of perturbations. We explore the characterization of the maximal robust invariant set within the Hamilton-Jacobi reachability analysis framework. In this framework, the maximal robust invariant set is described mathematically as the zero sublevel set of either the minimal lower semi-continuous or the unique Lipschitz-continuous viscosity solution to a Hamilton-Jacobi-Bellman (HJB) partial differential equation, depending on whether an introduced parameter is equal to or larger than zero. This leads to two types of numerical methods for synthesizing robust invariant sets: grid-based methods and semi-definite programming based methods. The former, which performs computations by constructing a grid over the continuous state space, is capable of generating the maximal robust invariant set; the latter, which solves a single semi-definite program and is more efficient in computation time, targets the computation of (not necessarily maximal) robust invariant sets. These two numerical methods are demonstrated and compared on illustrative examples.
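For orientation, value-function characterizations of this kind typically take the following shape (a sketch in our own notation; the paper's formulation involves the extra parameter mentioned above):

```latex
% Sketch in our own notation: dynamics \dot{x} = f(x, d) with
% disturbances d(\cdot) \in \mathcal{D} and constraints \{x : h(x) \le 0\}.
V(x) = \sup_{d(\cdot) \in \mathcal{D}} \, \sup_{t \ge 0} \, h\big(\phi(t; x, d)\big),
\qquad \mathcal{I}_{\max} = \{ x : V(x) \le 0 \},
% where V solves, in the viscosity sense, a stationary HJB equation:
\max\Big\{ h(x) - V(x),\ \sup_{d} \langle \nabla V(x), f(x, d) \rangle \Big\} = 0.
```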
A complete characterization of the ordering of path-complete methods
ABSTRACT. We study criteria that allow one to compare stability certificates for switching systems in terms of their conservativeness.
The stability certificates under consideration are Path-Complete Lyapunov functions (PCLFs), which are multiple Lyapunov functions with an underlying combinatorial structure.
Several criteria exist to compare these certificates, and we focus here on the recently introduced concept of simulation.
This criterion, which is borrowed from automata theory, relies only on the combinatorial structures that underlie the PCLFs.
We show that the concept of simulation provides a complete characterization of the ordering relation between PCLFs, independently of the algebraic properties of the dynamical system studied and of the template of Lyapunov functions used. We summarize the state of the art on the comparison of PCLFs and outline directions for future research.
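For reference, in the discrete-time switched linear setting a PCLF amounts to one Lyapunov inequality per edge of a labeled graph:

```latex
% Standard definition, for reference, in the discrete-time switched
% linear case x^+ = A_\sigma x: given a directed graph G = (S, E)
% with edges labeled by modes, a PCLF is a family \{V_s\}_{s \in S} with
V_t(A_\sigma x) \le V_s(x) \quad \forall (s, \sigma, t) \in E,\ \forall x \in \mathbb{R}^n.
% Path-completeness of G guarantees every switching sequence is covered
% by a path, so these inequalities certify stability.
```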
ABSTRACT. Deep neural networks have emerged as a widely used and effective means for tackling complex, real-world problems. However, a major obstacle in incorporating them as controllers in safety-critical systems is the great difficulty in providing formal guarantees about their behavior. In recent years, attempts have been made to address this obstacle by formally verifying neural networks. However, neural network verification is a computationally difficult task, and traditional verification techniques have often had only limited success - leading to the development of novel techniques. In this talk we will survey the state of the art in neural network verification, focusing on Satisfiability Modulo Theories (SMT) based approaches, and discuss some of the open challenges in the field.
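As a toy illustration of the SMT-based flavor (our own example, not from the talk; production verifiers use dedicated ReLU reasoning rather than this naive encoding), a property of a one-neuron network can be posed as a satisfiability query:

```python
# Toy SMT query in the spirit of SMT-based NN verification (our
# illustration; real tools handle ReLUs with specialized procedures).
from z3 import Real, If, Solver, unsat

x = Real('x')                        # network input
h = If(2*x + 1 >= 0, 2*x + 1, 0)     # hidden ReLU neuron: relu(2x + 1)
y = 3*h - 2                          # linear output layer

s = Solver()
s.add(x >= -1, x <= 1)               # input domain
s.add(y > 7)                         # negate the property "y <= 7"
res = s.check()
print("property verified" if res == unsat else f"counterexample: {s.model()}")
```

Here unsatisfiability of the negated property certifies that y <= 7 holds for every input in the domain.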
Formal Verification of Neural Network Controlled Autonomous Systems
ABSTRACT. In this paper, we consider the problem of formally verifying the safety of an autonomous robot equipped with a Neural Network (NN) controller that processes LiDAR images to produce control actions. Given a workspace that is characterized by a set of polytopic obstacles, our objective is to compute the set of safe initial conditions such that a robot trajectory starting from these initial conditions is guaranteed to avoid the obstacles. Our approach is to construct a finite state abstraction of the system and use standard reachability analysis over this abstraction to compute the set of safe initial states. The first technical problem in computing the finite state abstraction is to mathematically model the imaging function that maps the robot position to the LiDAR image. To that end, we introduce the notion of imaging-adapted sets as partitions of the workspace in which the imaging function is guaranteed to be affine. Based on this notion, and building on efficient algorithms from the computational geometry literature, we develop a polynomial-time algorithm that partitions the workspace into imaging-adapted sets and computes the corresponding affine imaging functions. Given this workspace partitioning, the discrete-time linear dynamics of the robot, and a pre-trained NN controller with Rectified Linear Unit (ReLU) nonlinearity, the second technical challenge is to analyze the behavior of the neural network. To that end, and thanks to the ReLU functions being piecewise linear, we utilize a Satisfiability Modulo Convex (SMC) encoding to enumerate all the possible segments of the different ReLUs. SMC solvers combine a Boolean satisfiability solver with a convex programming solver, decomposing the problem into smaller subproblems. At each iteration, the Boolean satisfiability solver searches for a candidate assignment for the different ReLU segments while completely abstracting the robot dynamics. Convex programming is then used to check the feasibility of the proposed ReLU phases against the dynamics and imaging constraints, or to generate succinct explanations for their infeasibility in order to reduce the search space. To accelerate this process, we develop a pre-processing algorithm that rapidly prunes the space of feasible ReLU segments. Finally, we demonstrate the efficiency of the proposed algorithms using numerical simulations with increasing complexity of the neural network controller.
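The enumeration rests on the standard two-segment view of each ReLU, which can be encoded with one Boolean selector per neuron (sketched below in our own notation):

```latex
% Standard two-phase encoding of a ReLU neuron y = \max(0, z), with
% pre-activation z = w^\top x + b and Boolean selector \beta:
(\beta = 1) \;\Rightarrow\; (y = z \,\wedge\, z \ge 0), \qquad
(\beta = 0) \;\Rightarrow\; (y = 0 \,\wedge\, z \le 0).
% The SAT solver proposes assignments to the \beta's; the convex solver
% checks them against the dynamics and imaging constraints.
```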
Reachability Analysis for Neural Feedback Systems using Regressive Polynomial Rule Inference
ABSTRACT. We present an approach to constructing reachable set overapproximations for continuous-time dynamical systems controlled by neural network feedback. Feedforward deep neural networks are now widely used as a means of learning control laws through techniques such as reinforcement learning and data-driven predictive control. However, the learning algorithms for these networks do not guarantee correctness properties of the resulting closed-loop systems. Our approach constructs overapproximate reachable sets by integrating a Taylor model-based flowpipe construction scheme for continuous differential equations with an approach that replaces the neural network feedback law, over a small subset of inputs, by a polynomial mapping. We generate the polynomial mapping using regression from input-output samples. To ensure soundness, we rigorously quantify the gap between the output of the network and that of the polynomial model. We demonstrate the effectiveness of our approach on a suite of benchmark examples ranging from 2 to 17 state variables, comparing it with alternative ideas based on range analysis.
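As a small illustration of the regress-and-bound idea (our own sketch, not the paper's algorithm; in particular, the paper bounds the network-polynomial gap rigorously, whereas this sketch only estimates it empirically on samples):

```python
# Minimal sketch of the regress-and-bound idea (our own illustration).
# NOTE: a sound tool must bound the approximation error rigorously;
# here we only estimate it empirically over a dense sample grid.
import numpy as np

def network(x):                          # stand-in for a trained NN
    return np.tanh(1.7 * x) - 0.3 * x

xs = np.linspace(-0.5, 0.5, 201)         # samples over a small input box
coeffs = np.polyfit(xs, network(xs), deg=3)   # degree-3 regression
poly = np.poly1d(coeffs)

gap = np.max(np.abs(network(xs) - poly(xs)))  # empirical error estimate
print(f"polynomial rule: {coeffs}, empirical gap: {gap:.2e}")
# A flowpipe tool would then propagate poly(x) + [-gap, gap] through
# a Taylor-model integrator in place of the network.
```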
Verisig: verifying safety properties of hybrid systems with neural network controllers
ABSTRACT. This paper presents Verisig, a hybrid system approach to verifying
safety properties of closed-loop systems using neural networks as
controllers. Although techniques exist for verifying input/output
properties of the neural network itself, these methods cannot be used
to verify properties of the closed-loop system (since they work with
piecewise-linear constraints that do not capture non-linear plant
dynamics). To overcome this challenge, we focus on sigmoid-based
networks and exploit the fact that the sigmoid is the solution to a
quadratic differential equation, which allows us to transform the
neural network into an equivalent hybrid system. By composing the
network's hybrid system with the plant's, we transform the problem
into a hybrid system verification problem which can be solved using
state-of-the-art reachability tools. We show that reachability is
decidable for networks with one hidden layer and decidable for general
networks if Schanuel's conjecture is true. We evaluate the
applicability and scalability of Verisig in two case studies, one from
reinforcement learning and one in which the neural network is used to
approximate a model predictive controller.
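The identity at the heart of this transformation is worth recording explicitly:

```latex
% The logistic sigmoid \sigma(x) = 1/(1 + e^{-x}) satisfies the
% quadratic differential equation
\frac{d\sigma}{dx} = \sigma(x) \big( 1 - \sigma(x) \big),
% so, treating x as a virtual time variable, each sigmoid layer can be
% evaluated by integrating this ODE within a hybrid automaton mode.
```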
Gray-box Adversarial Testing for Control Systems with Machine Learning Components
ABSTRACT. Neural Networks (NN) have been proposed in the past as an effective means for both modeling and control of systems with very complex dynamics.
However, despite the extensive research, NN-based controllers have not been adopted by industry for safety-critical systems. The primary reason is that systems with learning-based controllers are notoriously hard to test and verify.
Even harder is the analysis of such systems against system-level specifications.
In this paper, we provide a gradient-based method for searching the input space of a closed-loop control system in order to find adversarial samples against system-level requirements. Our experimental results show that, combined with randomized search, our method outperforms Simulated Annealing optimization.
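A minimal version of such a gradient-based falsification loop might look as follows (our own sketch of the general idea, with a stand-in plant, controller, and requirement; the gradient is estimated from simulations, as befits a gray-box setting):

```python
# Sketch of gradient-based falsification (our illustration, not the
# paper's exact method): descend on a robustness value computed from
# closed-loop simulations, using finite-difference gradients.
import numpy as np

def robustness(x0):
    """Simulate the closed loop from x0; a negative return value means
    the requirement (trajectory stays in the unit ball) is violated."""
    x = x0.copy()
    worst = 1.0 - np.linalg.norm(x)
    for _ in range(100):
        u = -0.5 * np.tanh(x)            # stand-in for an NN controller
        x = x + 0.1 * (0.5 * x + u)      # stand-in plant, Euler step
        worst = min(worst, 1.0 - np.linalg.norm(x))
    return worst

def falsify(x0, step=0.1, iters=100, eps=1e-3):
    """Finite-difference gradient descent on the robustness value."""
    x = x0.copy()
    for _ in range(iters):
        g = np.zeros_like(x)
        for i in range(len(x)):          # gray-box gradient estimate
            e = np.zeros_like(x)
            e[i] = eps
            g[i] = (robustness(x + e) - robustness(x - e)) / (2 * eps)
        x = x - step * g                 # push robustness downward
        if robustness(x) < 0:
            return x                     # adversarial initial condition
    return None

print(falsify(np.array([0.4, -0.2])))
```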