
10:30-12:30 Session 6: Time and Probability
Conservative Time Discretization: A Comparative Study

ABSTRACT. We present the first review of methods to overapproximate the set of reachable states of linear time-invariant systems subject to uncertain initial states and input signals for short time horizons. These methods are fundamental to state-of-the-art reachability algorithms for long time horizons, which proceed in two steps: First they use such a method to discretize the system for a short time horizon, and then they efficiently obtain a solution of the new discrete system for the long time horizon. Traditionally, both qualitative and quantitative comparison between different reachability algorithms has only considered the combination of both steps. In this paper we study the first step in isolation. We perform a variety of numerical experiments for six fundamental discretization methods from the literature. As we show, these methods have different trade-offs regarding accuracy and computational cost and, depending on the characteristics of the system, some methods may be preferred over others. We also discuss preprocessing steps to improve the results and efficient implementation strategies.
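A typical member of the family of discretization methods the abstract surveys computes a one-step overapproximation as the convex hull of the initial set and its image under the matrix exponential, bloated by a first-order error term. The sketch below illustrates this idea for a 2-dimensional autonomous system with box-shaped sets; it is a minimal illustration under those assumptions (all function names are mine, and a production implementation would also bound the Taylor truncation error), not one of the six methods compared in the paper.

```python
import math

def mat_vec(A, x):
    """Multiply a 2x2 matrix by a 2-vector."""
    return [A[0][0]*x[0] + A[0][1]*x[1],
            A[1][0]*x[0] + A[1][1]*x[1]]

def mat_exp(A, delta, terms=25):
    """Truncated Taylor series for exp(A*delta) (illustrative only)."""
    E = [[1.0, 0.0], [0.0, 1.0]]
    T = [[1.0, 0.0], [0.0, 1.0]]  # running term (A*delta)^k / k!
    for k in range(1, terms):
        T = [[sum(T[i][m] * A[m][j] * delta for m in range(2)) / k
              for j in range(2)] for i in range(2)]
        E = [[E[i][j] + T[i][j] for j in range(2)] for i in range(2)]
    return E

def discretize_box(A, box, delta):
    """Overapproximate the states reachable from the box X0 within [0, delta]
    for x' = A x (no input) by the bounding box of X0 and exp(A*delta)*X0,
    bloated by the classical first-order error term
        alpha = (e^{delta*||A||} - 1 - delta*||A||) * max_{x in X0} ||x||
    (infinity norm throughout)."""
    norm_A = max(abs(A[0][0]) + abs(A[0][1]), abs(A[1][0]) + abs(A[1][1]))
    sup_x = max(abs(box[0][0]), abs(box[0][1]), abs(box[1][0]), abs(box[1][1]))
    alpha = (math.exp(delta * norm_A) - 1 - delta * norm_A) * sup_x
    E = mat_exp(A, delta)
    corners = [mat_vec(E, [x, y]) for x in box[0] for y in box[1]]
    lo = [min(min(c[i] for c in corners), box[i][0]) - alpha for i in range(2)]
    hi = [max(max(c[i] for c in corners), box[i][1]) + alpha for i in range(2)]
    return [(lo[0], hi[0]), (lo[1], hi[1])]

# Example: a harmonic oscillator x'' = -x, initial box around (1, 0)
A = [[0.0, 1.0], [-1.0, 0.0]]
omega0 = discretize_box(A, [(0.9, 1.1), (-0.1, 0.1)], 0.1)
```

The trade-offs the paper studies show up even here: a smaller `delta` shrinks the bloating term `alpha` but increases the number of steps needed for a long horizon.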

Untangling the Graphs of Timed Automata to Decrease the Number of Clocks

ABSTRACT. For timed automata, the question of whether the number of clocks can be decreased without violating the semantics is known to be undecidable. It is, however, possible to obtain a number of clocks that is optimal, in a well-defined sense, for a timed automaton with a given graph and set of constraints. Such an optimal allocation of clocks can be improved further by changing the automaton's graph or its set of constraints. We address the first kind of change, and develop a method that may allow us to convert the automaton to one that requires fewer clocks, without changing its semantics.

Probabilistic Model Checking of BPMN Processes at Runtime

ABSTRACT. Business Process Model and Notation (BPMN) is a standard business process modelling language that allows users to describe a set of structured tasks, which results in a service or product. Before running a BPMN process, the user has no way of knowing the probability of executing a given task or combination of tasks. This is, however, of prime importance for adjusting resources associated with tasks and thus optimising costs. In this paper, we define an approach to perform probabilistic model checking on BPMN models at runtime. To do so, we first transform the BPMN model into a Labelled Transition System (LTS). Then, by analysing the execution traces obtained when running multiple instances of the process, we can compute the probability of executing each transition in the LTS model, and thus generate a Probabilistic Transition System (PTS). Finally, we perform probabilistic model checking for verifying that the PTS model satisfies a given probabilistic property. This verification loop is applied periodically to update the results according to the execution of the process instances. All these ideas are implemented in a tool chain, which was applied successfully to several realistic BPMN processes.
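The counting step that turns execution traces into a PTS can be sketched as follows. This is a minimal illustration of frequency-based estimation; the function names and the `(state, action, next_state)` trace format are my assumptions, not the authors' tool chain.

```python
from collections import Counter, defaultdict

def estimate_pts(traces):
    """Estimate a probabilistic transition system from execution traces.

    Each trace is a list of (state, action, next_state) triples observed
    while running one process instance. The probability of a transition
    is its frequency among all transitions leaving the same state.
    """
    outgoing = Counter()      # how often each state was left
    transitions = Counter()   # how often each (s, a, s') was taken
    for trace in traces:
        for s, a, t in trace:
            outgoing[s] += 1
            transitions[(s, a, t)] += 1
    pts = defaultdict(dict)
    for (s, a, t), n in transitions.items():
        pts[s][(a, t)] = n / outgoing[s]
    return dict(pts)

# Example: two process instances taking different branches after 'choice'
traces = [
    [("init", "start", "choice"), ("choice", "pay", "done")],
    [("init", "start", "choice"), ("choice", "cancel", "done")],
]
pts = estimate_pts(traces)  # pts["choice"] assigns 0.5 to each branch
```

Re-running this estimation periodically on fresh traces, as the abstract describes, updates the PTS that the probabilistic model checker is then applied to.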

HyperPCTL Model Checking by Probabilistic Decomposition

ABSTRACT. Probabilistic hyperproperties describe system properties involving probability measures over multiple runs and have numerous applications in information-flow security. However, the poor scalability of the existing model checking algorithms to verify probabilistic hyperproperties limits their use to small models. In this paper, we propose a model checking algorithm for verification of discrete-time Markov chains (DTMC) against HyperPCTL formulas by integrating numerical solution techniques. Our algorithm is based on a probabilistic decomposition of the self-composed DTMC into variants of the underlying DTMC. Experimentally, we show that our algorithm significantly outperforms both a symbolic approach and the original approach based on brute-force self-composition.
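The self-composition mentioned in the abstract pairs two copies of the DTMC; on the level of transition matrices this product chain is the Kronecker product of the matrix with itself, which is what makes brute-force self-composition blow up quadratically in the number of states. A minimal pure-Python sketch (function name mine):

```python
def kron(P, Q):
    """Kronecker product of two square row-stochastic matrices.
    When Q is P, the result is the transition matrix of the
    self-composed Markov chain over pairs of states (s, s')."""
    n, m = len(P), len(Q)
    R = [[0.0] * (n * m) for _ in range(n * m)]
    for i in range(n):
        for j in range(n):
            for k in range(m):
                for l in range(m):
                    R[i * m + k][j * m + l] = P[i][j] * Q[k][l]
    return R

# A two-state DTMC and its 4-state self-composition
P = [[0.7, 0.3],
     [0.4, 0.6]]
P2 = kron(P, P)
```

The paper's decomposition avoids ever materializing a matrix like `P2` for large chains, analyzing variants of the underlying DTMC instead.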

14:30-15:30 Session 7: Learning and Synthesis
Learning Finite State Models from Recurrent Neural Networks

ABSTRACT. Explaining and verifying the behavior of recurrent neural networks (RNNs) is an important step towards achieving confidence in machine learning. The extraction of finite state models, like deterministic automata, has been shown to be a promising concept for analyzing RNNs. In this paper, we apply a black-box approach based on active automata learning combined with model-guided conformance testing to learn finite state machines (FSMs) from RNNs. The technique efficiently infers a formal model of an RNN classifier’s input-output behavior, regardless of its inner structure. In several experiments, we compare this approach to other state-of-the-art FSM extraction methods. By detecting imprecise generalizations in RNNs that other techniques miss, model-guided conformance testing learns FSMs that more accurately model the RNNs under examination. We demonstrate this by identifying counterexamples with this testing approach that falsify incorrect hypothesis models learned by other techniques. This entails that testing guided by learned automata can be a useful method for finding adversarial inputs, that is, inputs incorrectly classified due to improper generalization.
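The core of the conformance-testing step can be illustrated as follows: feed test words to both the black box and the hypothesis FSM, and report the first disagreement as a counterexample. This is a simplified sketch with exhaustive breadth-first testing standing in for the paper's model-guided test generation; the black-box lambda stands in for the RNN classifier, and all names and data structures are my assumptions.

```python
import itertools

def run_dfa(dfa, word):
    """Run a DFA given as (initial_state, transitions, accepting) on a word."""
    init, delta, accepting = dfa
    state = init
    for sym in word:
        state = delta[(state, sym)]
    return state in accepting

def find_counterexample(black_box, hypothesis, alphabet, max_len=4):
    """Breadth-first conformance check: return the first word on which
    the black box and the hypothesis DFA disagree, or None."""
    for length in range(max_len + 1):
        for word in itertools.product(alphabet, repeat=length):
            if black_box(word) != run_dfa(hypothesis, word):
                return word
    return None

# Example: the black box accepts words containing the substring "ab";
# the (deliberately wrong) hypothesis accepts any word containing 'b'.
black_box = lambda w: "ab" in "".join(w)
hypothesis = (0, {(0, 'a'): 0, (0, 'b'): 1,
                  (1, 'a'): 1, (1, 'b'): 1}, {1})
cex = find_counterexample(black_box, hypothesis, ('a', 'b'))  # ('b',)
```

In an active-learning loop, such a counterexample is handed back to the learner to refine the hypothesis; as the abstract notes, when the black box is an RNN, the same disagreements expose improper generalizations.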

Kaki: Concurrent Update Synthesis for Regular Policies via Petri Games

ABSTRACT. Modern computer networks are becoming increasingly complex and require frequent configuration changes in order to increase their dependability. It is essential that forwarding policies are preserved not only before and after the configuration update but also at any moment during the inherently distributed execution of the update. We present Kaki, a Petri-game-based approach for automatic synthesis of switch batches that can be updated in parallel without violating a given (regular) forwarding policy like waypointing and service chaining. Kaki guarantees to find the minimum number of concurrent batches and it supports both splittable and nonsplittable flow forwarding. In order to achieve optimal performance, we introduce two novel optimization techniques: decomposition into independent subproblems and identification of switches that can always be updated in the same batch. These techniques considerably improve the performance of our tool, which relies on TAPAAL's verification engine for Petri games as its backend. Experiments on a large benchmark of real networks from the Topology Zoo database demonstrate that Kaki outperforms the state-of-the-art tool FLIP in terms of runtime on the largest instances of the update problems, while at the same time providing shorter concurrent update sequences.