HSCC 2022: 25TH ACM INTERNATIONAL CONFERENCE ON HYBRID SYSTEMS: COMPUTATION AND CONTROL
PROGRAM FOR WEDNESDAY, MAY 4TH

13:45-14:00 Break & Social
14:15-15:15 Session 12: Temporal Logics & Monitoring
14:15
Multi-Requirement Testing Using Focused Falsification

ABSTRACT. Testing of Cyber-Physical Systems (CPS) deals with the problem of finding input traces to a system such that given requirements do not hold. Requirements can be formalized in many different ways; in this work, requirements are modeled using Signal Temporal Logic (STL), for which a quantitative measure, or robustness value, can be computed given a requirement together with input and output traces. This value measures how far the requirement is from not holding and is used to guide falsification procedures in deciding which input traces to simulate next. When the system under test has multiple requirements, the standard approaches are to falsify them one by one, or as a conjunction of all requirements, but these approaches do not scale well to industrial-sized problems. In this work we consider testing of systems with multiple requirements by proposing focused multi-requirement falsification. This is a multi-stage approach where the solver tries to falsify the requirements sequentially, one by one, but for every simulation also evaluates the robustness values of all requirements. After one requirement has been in focus long enough, the next requirement to focus on is selected by considering the robustness values and trajectory history calculated thus far. Each falsification attempt makes use of a prior sensitivity analysis, which for each requirement estimates the parameters that are unlikely to affect the robustness value, in order to reduce the number of parameters used by the optimization solver. The proposed approach is evaluated on a public benchmark containing a large number of requirements, and includes a comparison of the proposed algorithm against a newly suggested baseline method.
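
As a point of reference, the sketch below shows the basic robustness-guided falsification loop that focused falsification builds on: every simulation is scored against all requirements, and a negative robustness value falsifies the focused one. The simulator, the requirement functions, and the plain random search are illustrative stand-ins, not the paper's implementation (which drives an optimization solver and adds sensitivity analysis).

```python
import random

def simulate(params):
    # Hypothetical system under test: maps input parameters to an output trace.
    gain = sum(params)
    return [gain * t for t in range(10)]

def falsify(requirements, n_params, budget=1000):
    """requirements: list of robustness functions; negative value = falsified.
    Focus requirements one by one, but score every trace on all of them."""
    history = []                                   # (params, robustness vector)
    per_focus = budget // len(requirements)
    for focused in range(len(requirements)):
        for _ in range(per_focus):
            params = [random.uniform(-1.0, 1.0) for _ in range(n_params)]
            trace = simulate(params)
            robs = [req(trace) for req in requirements]
            history.append((params, robs))
            if robs[focused] < 0:                  # focused requirement falsified
                return focused, params, history
    return None, None, history

# Example with two hypothetical requirements on the trace's final value.
reqs = [lambda tr: 5.0 - abs(tr[-1]),              # |final value| must stay below 5
        lambda tr: tr[-1] + 20.0]                  # final value must stay above -20
print(falsify(reqs, n_params=3)[0])
```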

14:35
Linear Time Monitoring for One Variable TPTL

ABSTRACT. Timed Propositional Temporal Logic (TPTL) extends LTL with freeze quantifiers in order to express timing constraints, and is strictly more expressive than Metric Temporal Logic (MTL). The monitoring problem is to check whether a given timed trace satisfies a given temporal logic specification, and monitoring procedures form core subroutines of testing and falsification approaches for Cyber-Physical Systems. In this work, we develop an efficient linear time algorithm (in the length of the trace) for one-variable TPTL in the pointwise semantics. This one-variable fragment is known to be already more expressive than MTL and thus allows specification of richer timed properties. Our algorithm carefully combines a divide-and-conquer approach with dynamic programming in order to achieve linear running time. As a plus, our algorithm uses only a simple two-dimensional table and a syntax tree of the formula as its data structures, and hence can be easily implemented on various platforms. We demonstrate the tractability of our approach with our prototype tool implementation in Matlab; our experiments show the tool scales easily to long trace lengths.
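
The full algorithm handles arbitrary one-variable TPTL formulas; as a hedged, self-contained illustration of linear-time monitoring in the pointwise semantics, the sketch below checks just the representative formula x. F (p ∧ x ≤ c) ("p occurs within c time units of now") with a single backward pass over the trace. This is not the paper's algorithm, only the flavor of the table-filling idea.

```python
import math

def monitor_freeze_eventually(times, p, c):
    """times[i]: timestamp of sample i; p[i]: truth of proposition p at i.
    Returns sat[i] = True iff 'p within c time units' holds at position i."""
    n = len(times)
    sat = [False] * n
    next_p = math.inf                      # earliest future time at which p holds
    for i in range(n - 1, -1, -1):         # single backward pass: O(n)
        if p[i]:
            next_p = times[i]
        sat[i] = (next_p - times[i]) <= c
    return sat

# p holds only at time 7; with c = 3, the formula holds at times 4 and 7.
print(monitor_freeze_eventually([0, 2, 4, 7, 9],
                                [False, False, False, True, False], 3))
```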

14:55
A Multiresolution Analysis of Temporal Logic

ABSTRACT. Is it possible to determine whether a signal violates a formula in Signal Temporal Logic (STL) if the monitor only has access to a low-resolution version of the signal? To answer this question, this paper demonstrates that temporal logic has a multiresolution structure, which parallels the multiresolution structure of signals. A formula in discrete-time Signal Temporal Logic (STL) is equivalently defined via the set of signals that satisfy it, known as its language. If a wavelet decomposition $x=y+d$ is performed on each signal $x$ in the language, we end up with two signal sets $Y$ and $D$, where $Y$ contains the low-resolution approximation signals $y$, and $D$ contains the detail signals $d$ needed to reconstruct the $x$'s. This paper provides a complete computational characterization of both $Y$ and $D$ using a novel constraint-set encoding of STL, such that $x$ satisfies a formula if and only if its decomposition signals satisfy their respective encoding constraints. A conservative logical approximation of $Y$ is also provided: namely, we show that a formula $\varphi$ in STL can be approximated by a lower-resolution formula $\varphi_{-1}$ such that if $x$ satisfies $\varphi$, then $y$ satisfies $\varphi_{-1}$. By iterating the decomposition, we obtain a sequence of lower-resolution formulas $\varphi_{-1},\varphi_{-2},\varphi_{-3},\ldots$ which thus constitutes a multiresolution analysis of $\varphi$. We propose two potential applications of these results to runtime verification in distributed systems: the first is a multiresolution monitor that can detect specification violations early by simply observing a low-resolution version of the signal to be monitored. The second proposed application is a runtime monitor that operates directly in the wavelet domain, without needing to reconstruct the signal from its received wavelet coefficients.
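
For concreteness, here is a minimal sketch of the one-level decomposition $x=y+d$ using the Haar wavelet: $y$ holds the pairwise averages (the low-resolution approximation) and $d$ the details. The relaxed property in the final comment is a loose illustration of the idea only; the paper derives the precise lower-resolution formula $\varphi_{-1}$.

```python
def haar_decompose(x):
    """One-level Haar decomposition of an even-length signal: x = y + d."""
    assert len(x) % 2 == 0
    y, d = [], []
    for a, b in zip(x[0::2], x[1::2]):
        avg = (a + b) / 2
        y += [avg, avg]            # low-resolution approximation signal
        d += [a - avg, b - avg]    # detail signal needed to reconstruct x
    return y, d

x = [1.0, 3.0, 2.0, 6.0]
y, d = haar_decompose(x)
assert all(abs(xi - (yi + di)) < 1e-12 for xi, yi, di in zip(x, y, d))
# A monitor observing only y can already check a coarser property: e.g. if
# x satisfies "always x < 5", then y satisfies "always y < 5 + max|d|".
```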

15:15-15:30 Break & Social
15:30-16:30 Session 13: Synthesis
15:30
Decoding Output Sequences for Discrete-Time Linear Hybrid Systems

ABSTRACT. In this paper, we study the "decoding" problem for discrete-time hybrid systems with linear dynamics in each mode. Given an output trace of the system, the decoding problem seeks to construct a sequence of modes and states that yields a trace "as close as possible" to the original output trace. The decoding problem generalizes the state estimation problem and is applicable to hybrid systems with non-determinism. We show that the decoding problem is NP-complete and can be reduced to solving a mixed-integer linear program (MILP). In this paper, we decompose the decoding problem into two parts: (a) finding a sequence of discrete modes and transitions; and (b) finding corresponding continuous states for the mode/transition sequence. In particular, once a sequence of modes/transitions is fixed, the problem of "filling in" the continuous states reduces to a linear program. In order to support the decomposition, we "cover" the set of all possible mode/transition sequences by a finite subset. We use well-known probabilistic arguments to justify a choice of cover with high confidence and design randomized algorithms for finding such covers. Our approach is demonstrated on a series of benchmarks, wherein we observe that a relatively small fraction of the possible mode/transition sequences can be used as a cover. Furthermore, we show that the resulting linear programs can be solved rapidly by exploiting the tree structure of the set cover.
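
As a flavor of step (b), the sketch below "fills in" continuous states for a fixed mode sequence by minimizing the L1 distance between predicted and observed outputs, which keeps the problem a linear program. It uses cvxpy for brevity; the dynamics and output matrices are illustrative stand-ins, and the paper's formulation may differ in the details.

```python
import numpy as np
import cvxpy as cp

A = {0: np.array([[1.0, 0.1], [0.0, 1.0]]),     # illustrative per-mode dynamics
     1: np.array([[0.9, 0.0], [0.1, 0.9]])}     # x_{t+1} = A[mode] @ x_t
C = np.array([[1.0, 0.0]])                       # output map y_t = C @ x_t

def fill_in_states(modes, y_obs):
    """modes: fixed mode sequence (length T); y_obs: T+1 observed outputs.
    Minimizes the L1 output mismatch, which keeps the problem an LP."""
    T = len(modes)
    x = cp.Variable((T + 1, 2))
    cost = sum(cp.norm1(C @ x[t] - y_obs[t]) for t in range(T + 1))
    dynamics = [x[t + 1] == A[modes[t]] @ x[t] for t in range(T)]
    cp.Problem(cp.Minimize(cost), dynamics).solve()
    return x.value

print(fill_in_states([0, 0, 1], [0.0, 0.1, 0.3, 0.35]))
```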

15:50
A Simpler Alternative: Minimizing Transition Systems Modulo Alternating Simulation Equivalence

ABSTRACT. This paper studies the reduction (abstraction) of finite-state transition systems for control synthesis problems. We introduce the notion of alternating simulation equivalence (ASE), a more relaxed condition than alternating bisimulation, to relate systems and their abstractions. As with alternating bisimulations, ASE preserves the property that the existence of a controller for the abstraction is necessary and sufficient for a controller to exist for the original system. Moreover, being a less stringent condition, ASE can reduce systems further to produce smaller abstractions. We provide an algorithm that produces minimal AS-equivalent abstractions. The theoretical results are then applied to obtain (un)schedulability certificates of periodic event-triggered control systems sharing a communication channel. A numerical example illustrates the results.
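
ASE is defined through a two-player (controller vs. environment) game and is beyond a short sketch, but the quotient-based minimization machinery has a classic, simpler analogue: partition refinement for ordinary bisimulation, the stricter non-alternating relation that alternating (bi)simulation notions relax. A hedged sketch of that analogue:

```python
def bisimulation_classes(states, labels, post):
    """labels[s]: observation at state s; post[s]: set of successors of s.
    Returns a dict mapping each state to its bisimulation-class id."""
    block = {s: labels[s] for s in states}      # initial split by observation
    while True:
        sig = {s: (block[s], frozenset(block[t] for t in post[s]))
               for s in states}
        if len(set(sig.values())) == len(set(block.values())):
            return block                        # partition is stable
        ids = {v: i for i, v in enumerate(set(sig.values()))}
        block = {s: ids[sig[s]] for s in states}

# Example: states 0 and 1 carry the same label and behave identically,
# so they end up in the same class and can be merged in the quotient.
print(bisimulation_classes([0, 1, 2],
                           {0: 'a', 1: 'a', 2: 'b'},
                           {0: {2}, 1: {2}, 2: {2}}))
```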

16:10
Correct-By-Construction Exploration and Exploitation for Unknown Linear Systems Using Bilinear Optimization

ABSTRACT. This paper addresses the problem of controlling an unknown dynamical system to safely reach a target set. We assume a priori access to a finite set of uncertain linear systems to which the unknown system belongs. This set can contain models for different failure or operational modes or potential environmental conditions. Given a desired exploration-exploitation profile, we provide a solution to this control synthesis problem based on bilinear optimization. Our approach provides a family of controllers that enable adaptation based on data observed at run-time to automatically trade off model detection and reachability objectives while maintaining safety. We demonstrate the approach with several examples.
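
A hedged sketch of the run-time adaptation idea: keep a finite set of candidate linear models and discard those inconsistent with observed data up to a disturbance bound. The matrices and the bound below are illustrative only; the paper's controllers themselves are synthesized offline via bilinear optimization.

```python
import numpy as np

def prune_models(models, x, u, x_next, w_bound):
    """models: list of (A, B) pairs; keep those that can explain the step
    x -> x_next under input u with disturbance norm at most w_bound."""
    consistent = []
    for A, B in models:
        residual = x_next - (A @ x + B @ u)
        if np.linalg.norm(residual, ord=np.inf) <= w_bound:
            consistent.append((A, B))
    return consistent

# Example: only the first candidate model is consistent with the observed step.
models = [(np.eye(2), np.array([[1.0], [0.0]])),
          (0.5 * np.eye(2), np.array([[0.0], [1.0]]))]
x, u, x_next = np.array([1.0, 0.0]), np.array([0.2]), np.array([1.2, 0.0])
print(len(prune_models(models, x, u, x_next, w_bound=0.05)))
```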

16:30-16:45 Break & Social
16:45-17:45 Session 14: Probabilistic and Stochastic Hybrid Systems
16:45
Temporal Robustness of Stochastic Signals

ABSTRACT. We study the temporal robustness of stochastic signals. This topic is of particular interest in interleaving processes such as multi-agent systems, where communication and individual agents induce timing uncertainty. For a deterministic signal and a given specification, we first introduce the synchronous and the asynchronous temporal robustness to quantify the signal's robustness with respect to synchronous and asynchronous time shifts in its sub-signals. We then define the temporal robustness risk by investigating the temporal robustness of the realizations of a stochastic signal. This definition can be interpreted as the risk that a stochastic signal does not satisfy a specification robustly in time. The definition permits general forms of specifications, such as signal temporal logic specifications. We show how the temporal robustness risk can be estimated from data in the case of the value-at-risk. The usefulness of the temporal robustness risk is underlined by both theoretical and empirical evidence. In particular, we provide various numerical case studies, including a T-intersection scenario in autonomous driving.
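
As a small concrete piece, the sketch below estimates the value-at-risk of the temporal robustness from i.i.d. samples as an empirical quantile of the negated robustness. The robustness samples are synthetic stand-ins, and the paper's estimator comes with guarantees this sketch omits.

```python
import numpy as np

def value_at_risk(robustness_samples, beta=0.95):
    """Empirical VaR_beta of the loss -robustness: the level that the loss
    stays below with probability at least beta (a plain sample quantile)."""
    losses = -np.asarray(robustness_samples, dtype=float)
    return np.quantile(losses, beta)

# Synthetic robustness samples; a positive VaR at level 0.95 means more than
# 5% of realizations have negative temporal robustness, i.e. fail to satisfy
# the specification robustly in time.
rng = np.random.default_rng(0)
samples = rng.normal(loc=0.5, scale=1.0, size=1000)
print(value_at_risk(samples, beta=0.95))
```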

17:05
Elliptical Slice Sampling for Probabilistic Verification of Stochastic Systems with Signal Temporal Logic Specifications

ABSTRACT. Autonomous robots typically incorporate complex sensors in their decision-making and control loops. These sensors, such as cameras and lidars, have imperfections in their sensing and are influenced by environmental conditions. In this work, we present a method for probabilistic verification of linearizable systems with Gaussian and Gaussian-mixture noise models (e.g., from perception modules or machine learning components). We compute the probabilities of task satisfaction under Signal Temporal Logic (STL) specifications, using its robustness semantics, with a Markov chain Monte Carlo slice sampler. As opposed to other techniques, our method avoids over-approximations and double-counting of failure events. Central to our approach is a method for efficient and rejection-free sampling of signals from a Gaussian distribution that satisfy or violate a given STL formula. We show illustrative examples of applications in robot motion planning.
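
For intuition, here is a hedged sketch of elliptical slice sampling (Murray, Adams & MacKay, 2010) specialized to an indicator likelihood, i.e. sampling from a Gaussian restricted to a set of signals that satisfy (or violate) a formula. The bracket-shrinking loop always terminates because the current state lies at angle zero, which is what makes the sampler rejection-free; the region below is a toy stand-in for an STL constraint.

```python
import numpy as np

def ess_step(f, chol_cov, in_region, rng):
    """One elliptical slice sampling transition. f: current sample (already
    in the region); chol_cov: Cholesky factor of the Gaussian covariance;
    in_region(z) -> bool: indicator of the conditioning set."""
    nu = chol_cov @ rng.standard_normal(f.shape)   # auxiliary Gaussian draw
    theta = rng.uniform(0.0, 2.0 * np.pi)
    lo, hi = theta - 2.0 * np.pi, theta
    while True:
        proposal = f * np.cos(theta) + nu * np.sin(theta)
        if in_region(proposal):
            return proposal
        # Shrink the bracket toward theta = 0 (the current state) and retry.
        if theta < 0.0:
            lo = theta
        else:
            hi = theta
        theta = rng.uniform(lo, hi)

# Toy region standing in for "the signal violates the STL formula".
rng = np.random.default_rng(1)
f = np.array([1.5, 0.0])                 # current sample, already in the region
for _ in range(5):
    f = ess_step(f, np.eye(2), lambda z: z[0] > 1.0, rng)
print(f)
```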

17:25
k-Inductive Barrier Certificates for Stochastic Systems

ABSTRACT. Barrier certificates are inductive invariants that provide guarantees on the safety and reachability behaviors of continuous-space dynamical systems. For stochastic dynamical systems, barrier certificates take the form of inductive "expectation" invariants. In this context, a barrier certificate is a non-negative real-valued function over the state space of the system satisfying a strong supermartingale condition: it decreases in expectation as the system evolves. The existence of barrier certificates then provides lower bounds on the probability of satisfaction of safety or reachability specifications over unbounded time horizons. Unfortunately, establishing supermartingale conditions on barrier certificates can often be prohibitively restrictive. In practice, we strive to overcome this challenge by utilizing a weaker condition called a c-martingale, which permits a bounded increase in expectation at every time step; unfortunately, this only guarantees the property of interest over a bounded time horizon. The idea of k-inductive invariants, often utilized in software verification, relaxes the need for the invariant to be inductive with every transition of the system by instead requiring that the invariant hold in the next step if it has held for the last k steps. This paper synthesizes the idea of k-inductive invariants with barrier certificates. These refinements, which we dub k-inductive barrier certificates, relax the supermartingale requirement at each time step to a supermartingale requirement over k steps, with potential c-martingale requirements at each step, while still providing unbounded-time-horizon probabilistic guarantees. We characterize a notion of k-inductive barrier certificates for safety and two distinct notions of k-inductive barrier certificates for reachability. Correspondingly, utilizing such k-inductive barrier certificates, we obtain probabilistic lower bounds on the satisfaction of safety and reachability specifications, respectively. We present a computational method based on sum-of-squares (SOS) programming to synthesize suitable k-inductive barrier certificates and demonstrate the effectiveness of the proposed methods via case studies.
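
The paper certifies the relaxed conditions over the whole state space with SOS programming; as a hedged, purely numerical illustration, the sketch below only spot-checks the per-step c-martingale bound and the k-step supermartingale condition by Monte Carlo at a single state, for a toy barrier candidate and toy dynamics.

```python
import numpy as np

def B(x):                            # toy barrier candidate: squared norm
    return np.sum(x**2, axis=-1)

def step(x, rng):                    # toy stochastic dynamics: contraction + noise
    return 0.8 * x + 0.1 * rng.standard_normal(x.shape)

def spot_check(x0, k=3, c=0.05, n_mc=20000):
    rng = np.random.default_rng(0)
    xs = np.repeat(x0[None, :], n_mc, axis=0)
    b0 = B(x0[None, :])[0]
    ok_c = True
    for i in range(k):
        xs = step(xs, rng)
        # c-martingale consequence: expected B grows by at most c per step
        ok_c = ok_c and (B(xs).mean() <= b0 + (i + 1) * c)
    ok_k = B(xs).mean() <= b0        # k-step supermartingale condition
    return ok_c, ok_k

print(spot_check(np.array([1.0, 0.0])))   # expect (True, True) for this toy system
```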

17:45-18:00 Break & Social
18:00-19:00 Session 15: HSCC Keynote
18:00
Autonomous systems in the intersection of formal methods, learning, and controls

ABSTRACT. Autonomous systems are emerging as a driving technology for countless applications. Numerous disciplines tackle the challenges toward making these systems trustworthy, adaptable, user-friendly, and economical. On the other hand, the existing disciplinary boundaries delay and possibly even obstruct progress. I argue that the nonconventional problems that arise in designing and verifying autonomous systems require hybrid solutions at the intersection of learning, formal methods, and controls. I will present examples of such hybrid solutions in the context of learning in sequential decision-making processes: physics-informed neural networks for the modeling of unknown dynamical systems, and joint task inference and reinforcement learning. These results offer novel means for effectively integrating physics-based, contextual, or structural prior knowledge into data-driven learning algorithms. They improve data efficiency by several orders of magnitude and improve generalizability to environments and tasks that the system had not previously experienced.