HSCC'17: 20TH INTERNATIONAL CONFERENCE ON HYBRID SYSTEMS: COMPUTATION AND CONTROL (PART OF CPS WEEK)
PROGRAM FOR TUESDAY, APRIL 18TH

10:00-10:30 Coffee Break
10:30-12:10 Session 1: Verification I
10:30
Forward inner-approximated reachability of non-linear continuous systems

ABSTRACT. We propose an approach for computing inner-approximations (also called under-approximations) of reachable sets of dynamical systems defined by non-linear, uncertain, ordinary differential equations. This is a notoriously difficult problem, much more intricate than outer-approximation, for which there exist well-known solutions, mostly based on Taylor models. The few methods developed recently for inner-approximation mostly rely on backward flowmaps and require extra ingredients, coming either from optimisation or from topological criteria. Our solution, in comparison, builds on rather inexpensive set-based methods, namely a generalized mean-value theorem combined with Taylor-model outer-approximations (also called over-approximations) of the flow and of its Jacobian with respect to the uncertain inputs and parameters. We show with our prototype Matlab implementation that the method is both efficient and precise on classical examples. The combination of such forward inner and outer Taylor-model-based approximations can be used as a basis for the verification and falsification of properties of cyber-physical systems.
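
[Illustration, not part of the abstract.] The mean-value idea behind such inner-approximations can be sketched in one dimension, under the simplifying assumption of a scalar function with a sign-definite derivative bound. Suppose f is continuously differentiable on [x_0 - r, x_0 + r] and an outer-approximation of the derivative gives f'(x) ∈ [m, M] with m > 0 on that interval. By the mean-value theorem, f(x_0 + r) ≥ f(x_0) + m·r and f(x_0 - r) ≤ f(x_0) - m·r, so by continuity the range of f over [x_0 - r, x_0 + r] contains the interval [f(x_0) - m·r, f(x_0) + m·r]: a guaranteed inner-approximation, whereas a plain interval evaluation of f would only give an outer-approximation. In the paper's setting, f plays the role of (a component of) the flow map, the interval is the set of uncertain initial conditions or parameters, and the derivative bounds come from Taylor-model outer-approximations of the Jacobian; the generalized mean-value theorem extends this reasoning to the multivariate case.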

10:55
On the Polytope Escape Problem for Linear Dynamical Systems

ABSTRACT. The Polytope Escape Problem for continuous linear dynamical systems consists of deciding, given an affine operator f on R^d and a convex polyhedron P in R^d with rational descriptions, whether there exists an initial point x_0 in P such that the trajectory of the unique solution to the differential equation dx/dt = f(x), with x(0)=x_0, is entirely contained in P. We place this problem in the existential first-order theory of the reals, which lies between NP and PSPACE, by reducing it in polynomial time to the decision version of linear programming with real algebraic coefficients. Our algorithm makes use of spectral techniques and relies, among other tools, on results from Diophantine approximation.
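
[Illustration, not part of the abstract.] A minimal one-dimensional instance shows why spectral information matters. Take d = 1 and f(x) = x, so dx/dt = x and x(t) = x_0 e^t. For P = [0, 2], the point x_0 = 0 is an equilibrium whose trajectory stays in P forever, so a non-escaping initial point exists; for P = [1, 2], every trajectory starting in P grows without bound and eventually leaves, so no such point exists. In general the asymptotic behaviour of the trajectories is governed by the eigenstructure of the dynamics, which is why the reduction produces linear programs with real algebraic (eigenvalue-related) coefficients.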

11:20
SMC: Satisfiability modulo Convex Optimization

ABSTRACT. We address the problem of determining the satisfiability of a Boolean combination of convex constraints over the real numbers, which is ubiquitous in the context of hybrid system verification and control. We first show that a special type of logic formulas, termed monotone Satisfiability Modulo Convex (SMC) formulas, is the most general class of formulas over Boolean and nonlinear real predicates that reduce to convex programs for any satisfying assignment of the Boolean variables. For this class of formulas, we develop a new satisfiability modulo convex procedure that uses a lazy combination of SAT solving and convex programming to provide a satisfying assignment or determine that the formula is unsatisfiable. Our approach can then leverage the efficiency and the formal guarantees of state-of-the-art algorithms in both the Boolean and convex analysis domains. A key step in lazy satisfiability solving is the generation of succinct infeasibility proofs that can support conflict-driven learning and decrease the number of iterations between the SAT and the theory solver. For this purpose, we propose a suite of algorithms that can trade off complexity against the minimality of the generated infeasibility certificates. Remarkably, we show that a minimal infeasibility certificate can be generated by simply solving one convex program for a sub-class of SMC formulas, namely ordered positive unate SMC formulas, that have additional monotonicity properties. Perhaps surprisingly, ordered positive unate formulas themselves appear very frequently in a variety of practical applications. By exploiting the properties of SMC formulas, we can then build and demonstrate effective and scalable decision procedures for several problems in hybrid system verification and control, including secure state estimation and robotic motion planning.
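
[Illustration, not part of the abstract.] The lazy SAT-plus-convex loop can be sketched as follows in Python, here using z3 for the Boolean layer and cvxpy for the convex layer; the variable names, the toy constraints, and the coarse blocking-clause strategy are illustrative only, whereas the paper's procedure additionally extracts small infeasibility certificates rather than blocking full assignments.

    # Naive lazy SAT + convex-programming loop (sketch, not the paper's algorithm).
    import z3
    import cvxpy as cp

    # Real decision variable and a hypothetical map from Boolean "guards"
    # to the convex constraints they activate when assigned true.
    x = cp.Variable(2)
    b1, b2, b3 = z3.Bools("b1 b2 b3")
    convex_of = {
        "b1": [cp.sum_squares(x) <= 1.0],   # x inside the unit ball
        "b2": [x[0] >= 2.0],                # conflicts with b1
        "b3": [x[0] + x[1] >= 0.5],
    }
    bool_abstraction = z3.Or(z3.And(b1, b2), z3.And(b1, b3))  # Boolean structure

    sat_solver = z3.Solver()
    sat_solver.add(bool_abstraction)

    while sat_solver.check() == z3.sat:
        model = sat_solver.model()
        active = [b for b in (b1, b2, b3) if z3.is_true(model[b])]
        # Collect the convex constraints selected by the Boolean assignment
        # and check their joint feasibility with a convex solver.
        constraints = [c for b in active for c in convex_of[str(b)]]
        prob = cp.Problem(cp.Minimize(0), constraints)
        prob.solve()
        if prob.status in (cp.OPTIMAL, cp.OPTIMAL_INACCURATE):
            print("SAT with", [str(b) for b in active], "and x =", x.value)
            break
        # Infeasible: learn a (coarse) conflict clause ruling out this assignment.
        sat_solver.add(z3.Or([z3.Not(b) for b in active]))
    else:
        print("UNSAT")

On this toy instance the solver may first propose b1 and b2, discover that the unit-ball and half-plane constraints are jointly infeasible, learn the conflict, and then succeed with b1 and b3; shrinking the learned clause to a minimal infeasible subset is exactly where the paper's certificate-generation algorithms come in.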

11:45
Sapo: A Tool for the Reachability Computation and Parameter Synthesis of Polynomial Dynamical Systems

ABSTRACT. Sapo is a tool for the formal analysis of polynomial dynamical systems. Its main features are: 1) Reachability computation, i.e., the calculation of the set of states reachable from a set of initial conditions, and 2) Parameter synthesis, i.e., the refinement of a set of parameters so that the system satisfies a given specification. Sapo can represent reachable sets as unions of boxes, parallelotopes, or parallelotope bundles (symbolic representation of polytopes). Sets of parameters are represented with polytopes while specifications are formalized as Signal Temporal Logic (STL) formulas.
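
[Illustration, not part of the abstract.] The following Python fragment is only a crude box-based analogue of what "reachability computation" means here: it propagates an axis-aligned box through a toy discrete-time polynomial map using interval arithmetic. It is not Sapo's algorithm, which uses parallelotope bundles and tighter polynomial bounds; the dynamics and numbers are made up.

    # Crude interval (box) reachability for a toy discrete-time polynomial map.
    def interval_add(a, b):
        return (a[0] + b[0], a[1] + b[1])

    def interval_scale(k, a):
        lo, hi = k * a[0], k * a[1]
        return (min(lo, hi), max(lo, hi))

    def interval_mul(a, b):
        products = [a[0]*b[0], a[0]*b[1], a[1]*b[0], a[1]*b[1]]
        return (min(products), max(products))

    def step(box):
        # Toy polynomial dynamics: x' = x + 0.1*x*y,  y' = y - 0.1*x*y
        x, y = box
        xy = interval_mul(x, y)
        return (interval_add(x, interval_scale(0.1, xy)),
                interval_add(y, interval_scale(-0.1, xy)))

    box = ((0.9, 1.0), (0.4, 0.5))   # initial set of states
    reach = [box]
    for _ in range(10):
        box = step(box)
        reach.append(box)            # the union of boxes covers the reach set
    print(reach[-1])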

12:10-13:30 Lunch Break
13:30-15:20 Session 2: Probabilities
13:30
Forward stochastic reachability analysis for uncontrolled linear systems using Fourier transforms

ABSTRACT. We propose a scalable method for forward stochastic reachability analysis for uncontrolled linear systems with affine disturbance. Our method uses Fourier transforms to efficiently compute the forward stochastic reach probability measure (density) and the forward stochastic reach set. This method is applicable to systems with bounded or unbounded disturbance sets. We also examine the properties of the forward stochastic reach set and its probability density. Motivated by the problem of a robot attempting to capture a stochastically moving, non-adversarial target, we demonstrate our method on two simple examples. Where traditional approaches provide approximations, our method provides exact analytical expressions for the densities.
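
[Illustration, not part of the abstract.] For the simplest case of a linear recursion with i.i.d. additive disturbance, the core computation can be sketched as follows. For x_{k+1} = A x_k + w_k with known x_0, unrolling gives x_k = A^k x_0 + Σ_{j=0}^{k-1} A^{k-1-j} w_j. Since the summands are independent, the characteristic function (the Fourier transform of the density) factors:

    Ψ_{x_k}(α) = exp(i α^T A^k x_0) · ∏_{m=0}^{k-1} Ψ_w((A^m)^T α),

so the forward stochastic reach density is obtained by a single inverse Fourier transform rather than by propagating the density step by step. The paper develops this idea in general and also treats unbounded disturbance sets.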

13:55
Controller Synthesis for Reward Collecting Markov Processes in Continuous Space

ABSTRACT. We propose and analyze a generic mathematical model for optimizing rewards in continuous-space, dynamic environments, called Reward Collecting Markov Processes. Our model is motivated by request-serving applications in robotics, where the objective is to control a dynamical system to respond to environment requests that are generated stochastically, while minimizing wait times. Our model departs from usual discounted reward Markov decision processes in that the reward function is not determined by the current state and action. Instead, a background process generates rewards whose values depend on the number of steps between generation and collection. For example, a reward is generated whenever a new request for the robot arrives, and the robot collects a higher reward the sooner it serves that request. A policy in this setting is a sequence of control actions which determines a (random) trajectory over the continuous state space. The reward achieved by the trajectory is the cumulative sum of all rewards obtained along the way in the finite horizon case and the long run average of all rewards in the infinite horizon case.

We study both the finite horizon and infinite horizon problems for maximizing the expected (respectively, the long run average expected) collected reward. Our first result characterizes these problems as solutions to dynamic programs over an augmented hybrid space, which gives history-dependent optimal policies. Second, we provide a computational method for these problems which abstracts the continuous-space problem into a discrete-state reward collecting Markov decision process. Under assumptions of Lipschitz continuity of the Markov process and uniform bounds on the discounting, we show that we can bound the error in computing optimal solutions on the finite-state approximation. Finally, we provide a fixed point characterization of the optimal expected collected reward in the infinite horizon case, and show how the fixed point can be obtained by value iteration. We illustrate our results with a simple request-serving robot example.
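
[Illustration, not part of the abstract.] Once the continuous problem has been abstracted to a finite-state model, the fixed point is computed by standard value iteration. The sketch below shows generic discounted value iteration on a random finite MDP; it does not include the paper's reward-augmented state space, and all sizes and numbers are illustrative.

    import numpy as np

    # Generic discounted value iteration on a finite abstraction (sketch only).
    n_states, n_actions, gamma = 4, 2, 0.9
    rng = np.random.default_rng(0)
    P = rng.dirichlet(np.ones(n_states), size=(n_states, n_actions))  # P[s,a,:] = transition probs
    R = rng.random((n_states, n_actions))                             # expected one-step reward

    V = np.zeros(n_states)
    for _ in range(500):
        Q = R + gamma * P @ V          # Q[s,a] = R[s,a] + gamma * sum_s' P[s,a,s'] V[s']
        V_new = Q.max(axis=1)          # Bellman backup
        if np.max(np.abs(V_new - V)) < 1e-8:
            break
        V = V_new
    policy = Q.argmax(axis=1)
    print(V, policy)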

14:20
Reachability Computation for Switching Diffusions: Finite Abstractions of Continuous Models

ABSTRACT. We consider continuous time stochastic hybrid systems with no resets and continuous dynamics described by a linear stochastic differential equation. We show that for this class of systems reachability (and, dually, safety) properties can be studied on an abstraction defined in terms of a discrete time, discrete space Markov chain (DTMC), with provable error bounds. The technical contribution of the paper is a characterization of the uniform convergence of the time discretization of such stochastic processes with respect to safety properties. This yields, for the first time, a sound and complete numerical procedure for reachability and safety analysis of switching diffusions.
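
[Illustration, not part of the abstract.] For a scalar linear SDE without switching, the grid-based DTMC abstraction can be sketched directly from the exact Gaussian transition kernel: for dx = a·x dt + σ dW, the state after a step Δ, started from x, is Gaussian with mean x·e^{aΔ} and variance σ²(e^{2aΔ} − 1)/(2a). The Python fragment below computes cell-to-cell transition probabilities on a grid over a safe interval; the switching dynamics and the error bounds of the paper are not reproduced here, and all constants are illustrative.

    import numpy as np
    from scipy.stats import norm

    # DTMC abstraction of the scalar linear SDE dx = a*x dt + sigma dW (sketch).
    a, sigma, dt = -0.5, 0.3, 0.1
    edges = np.linspace(-2.0, 2.0, 41)           # grid over the safe set [-2, 2]
    centers = 0.5 * (edges[:-1] + edges[1:])

    mean = centers * np.exp(a * dt)              # exact one-step conditional mean
    var = sigma**2 * (np.exp(2 * a * dt) - 1) / (2 * a)
    std = np.sqrt(var)

    # P[i, j] = probability of moving from cell i to cell j in one time step.
    P = norm.cdf(edges[None, 1:], mean[:, None], std) - \
        norm.cdf(edges[None, :-1], mean[:, None], std)
    print(P.sum(axis=1))   # each row sums to <= 1; the remainder is "leave the grid"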

14:45
Statistical Verification of the Toyota Powertrain Control Verification Benchmark

ABSTRACT. The Toyota Powertrain Control Verification Benchmark has recently been proposed as a set of challenge problems that capture features of realistic automotive designs. In this paper we statistically verify the most complicated of the proposed powertrain control models, which includes features such as delay differential and difference equations, look-up tables, and highly non-linear dynamics, by simulating the C++ code generated from the Simulink model of the design. Our results show that for at least 98% of the possible initial operating conditions the desired properties hold. These are the first verification results for this model, statistical or otherwise.
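
[Illustration, not part of the abstract.] The kind of guarantee behind such statistical claims can be illustrated with the Chernoff-Hoeffding bound (the specific statistical procedure used in the paper may differ). If N simulations are run from independently sampled initial conditions and p̂ is the fraction that satisfies the property, then P(|p̂ − p| ≥ ε) ≤ 2·exp(−2Nε²), so estimating the satisfaction probability p within ε = 0.01 at confidence 1 − δ = 0.99 requires N ≥ ln(2/δ)/(2ε²) ≈ 26,500 simulations.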

15:20-15:50 Coffee Break
15:50-17:30 Session 3: Invariance and Entropy
15:50
Towards Optimal State Estimation of Switched Nonlinear Systems with Finite-Data-Rate Measurements

ABSTRACT. State estimation is a fundamental problem for monitoring and controlling systems. Engineering systems interconnect sensing and computing devices over shared bandwidth-limited channels, and therefore estimation algorithms should strive to use bandwidth optimally. We present a notion of entropy, an upper bound on it, and an algorithm for state estimation of switched nonlinear dynamical systems with unknown switching signal and deterministic (worst-case) error bounds. Our approach relies on the notion of topological entropy and uses techniques from the theory of control under limited information. We show that the average bit rate used is optimal in the sense that the efficiency gap of the algorithm is within an additive constant of the gap between the estimation entropy of the system and its known upper bound. We apply the algorithm to several systems and discuss the impact of the choice of a key parameter: the number of modes tracked at each time step.
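
[Background intuition, not part of the abstract.] The counting argument underlying such entropy notions is the following: to report a state known only to lie in a set of diameter D in R^n up to accuracy ε, a quantizer needs on the order of (D/ε)^n cells, i.e., roughly n·log2(D/ε) bits. Estimation entropy measures the average rate, in bits per unit time, at which such information must be refreshed so that the estimation error can be kept within a prescribed (possibly exponentially shrinking) bound along all trajectories; the algorithm in the paper aims to operate at an average bit rate close to this quantity.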

16:15
Path-Complete Graphs and Common Lyapunov Functions

ABSTRACT. A Path-Complete Lyapunov Function is an algebraic criterion composed of a finite number of functions, called its pieces, and a directed, labeled graph defining Lyapunov inequalities between these pieces. It provides a stability certificate for discrete-time switching systems under arbitrary switching. In this paper, we prove that the satisfiability of such a criterion implies the existence of a Common Lyapunov Function, expressed as the composition of minima and maxima of the pieces of the Path-Complete Lyapunov function. The converse, however, is not true even for discrete-time linear systems: we present such a system where a max-of-2 quadratics Lyapunov function exists while no corresponding Path-Complete Lyapunov function with 2 quadratic pieces exists. In light of this, we investigate when it is possible to decide if a Path-Complete Lyapunov function is less conservative than another. By analyzing the combinatorial and algebraic structure of the graph and the pieces respectively, we provide simple tools to decide when the existence of such a Lyapunov function implies that of another.
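
[Illustration, not part of the abstract.] One simple instance of the min/max composition is the following. Consider two modes with matrices A and B and the path-complete graph with two nodes in which every edge labelled A points to node 1 and every edge labelled B points to node 2, giving the inequalities V_1(Ax) ≤ V_1(x), V_1(Ax) ≤ V_2(x), V_2(Bx) ≤ V_1(x), V_2(Bx) ≤ V_2(x). Then V(x) := min(V_1(x), V_2(x)) is a common Lyapunov function: V(Ax) ≤ V_1(Ax) ≤ min(V_1(x), V_2(x)) = V(x), and symmetrically V(Bx) ≤ V_2(Bx) ≤ V(x). The paper shows how this kind of composition extends to general path-complete graphs, and exhibits systems for which the converse direction fails.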

16:40
Invariance Feedback Entropy of Non-deterministic Control Systems

ABSTRACT. We introduce a notion of invariance feedback entropy for discrete-time, non-deterministic control systems as a measure of the state information necessary to render a given subset of the state space invariant. We provide conditions that guarantee finiteness and show that the well-known notion of invariance entropy for deterministic systems is recovered in the deterministic case. We establish the data rate theorem, which shows that the entropy equals the largest lower bound on the data rate of any coder-controller that achieves invariance. For finite systems, the invariance feedback entropy is characterized by the value function of an appropriately designed mean-payoff game. We use several examples throughout the paper to instantiate the various definitions and results.
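
[Illustration, not part of the abstract.] A classical deterministic special case of such a data rate theorem: for the scalar system x_{k+1} = a·x_k + u_k with |a| > 1, any coder-controller that keeps a compact interval invariant must supply at least log2|a| bits per step, since each step stretches the set of states consistent with the received information by a factor |a| and the transmitted bits must shrink it back. The invariance feedback entropy extends this kind of threshold to non-deterministic transition systems, where the paper's data rate theorem shows it is again exactly the minimal achievable rate.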