QEST 2020: QUANTITATIVE EVALUATION OF SYSTEMS
PROGRAM FOR TUESDAY, SEPTEMBER 1ST

11:00-12:00 Session 1: Plenary Talk by Annabelle McIver
11:00
On Privacy and Accuracy in Data Releases

ABSTRACT. In this talk we study the relationship between privacy and accuracy in the context of correlated datasets. We use a model of quantitative information flow to describe the trade-off between the privacy of individuals' data and the utility of queries to that data, by modelling the effectiveness of adversaries attempting to make inferences after a data release. We show that, where correlations exist in datasets, it is not possible to implement optimal noise-adding mechanisms that give the best possible accuracy or the best possible privacy in all situations. Finally, we illustrate the trade-off between accuracy and privacy for local and oblivious differentially private mechanisms in terms of inference attacks on medium-scale datasets.
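For readers outside the area, it may help to recall the standard definition that such noise-adding mechanisms target (a background sketch, not taken from the talk itself):

```latex
% Standard differential privacy, for reference (background only,
% not from the talk): a mechanism M is \epsilon-differentially
% private if, for all neighbouring datasets D, D' and output sets S,
\[
  \Pr[M(D) \in S] \;\le\; e^{\epsilon} \cdot \Pr[M(D') \in S].
\]
% The canonical noise-adding mechanism calibrates Laplace noise to
% the query sensitivity \Delta f, coupling privacy (small \epsilon)
% directly to accuracy (large noise):
\[
  M(D) = f(D) + \mathrm{Lap}(\Delta f / \epsilon),
  \qquad
  \Delta f = \max_{D \sim D'} \lvert f(D) - f(D') \rvert .
\]
```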

12:00-13:00 Social Lunch: Best Paper Awards

During this lunch Zoom session, the best-paper awards for CONCUR, FORMATS and QEST are announced.

[Video recording].

13:00-14:15 Session QES2: Model Checking I
13:00
Multi-player Equilibria Verification for Concurrent Stochastic Games
PRESENTER: Gabriel Santos

ABSTRACT. Concurrent stochastic games (CSGs) are an ideal formalism for modelling probabilistic systems that feature multiple players or components with distinct objectives making concurrent, rational decisions. Examples include communication or security protocols and multi-robot navigation. Verification methods for CSGs exist but are limited to scenarios where agents or players are grouped into two coalitions, with those in the same coalition sharing an identical objective. In this paper, we propose multi-coalitional verification techniques for CSGs. We use subgame-perfect social welfare (or social cost) optimal Nash equilibria, which are strategies where there is no incentive for any coalition to unilaterally change its strategy in any game state, and where the total combined objectives are maximised (or minimised). We present an extension of the temporal logic rPATL (probabilistic alternating-time temporal logic with rewards) and a corresponding model checking algorithm to synthesise such equilibria for any number of distinct coalitions. We implement our techniques in the PRISM-games tool and apply them to several case studies, including a secret sharing protocol and a public good game.

[presentation video]. [preprint].
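To give a flavour of the extended logic (our sketch in approximate PRISM-games notation; the exact grammar of the rPATL extension is defined in the paper), a social-welfare equilibrium query for three coalitions could look like:

```
// Approximate PRISM-games-style syntax, for illustration only; the
// precise grammar of the multi-coalitional rPATL extension is in the
// paper, and the coalition and label names here are hypothetical.
// "Synthesise a subgame-perfect social-welfare optimal Nash equilibrium
//  maximising the sum of the three coalitions' reachability objectives":
<<robot1:robot2:robot3>>max=? (P[ F "goal1" ] + P[ F "goal2" ] + P[ F "goal3" ])
```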

13:25
Automatic Pre- and Postconditions for Partial Differential Equations

ABSTRACT. Based on a simple automata-theoretic and algebraic framework, we study equational reasoning for Initial Value Problems (IVPs) of polynomial Partial Differential Equations (PDEs). In order to represent IVPs in their full generality, we introduce stratified systems, where function definitions can be decomposed into distinct subsystems, each focusing on a different subset of the independent variables. Under a certain coherence condition, we prove existence and uniqueness of formal power series solutions for such stratified systems, which conservatively extend the classical analytic ones. We then give an algorithm, complete in a precise sense, to compute weakest preconditions and strongest postconditions for such systems. To some extent, this result reduces equational reasoning on PDE initial value (and boundary) problems to algebraic reasoning. We describe experiments conducted with a proof-of-concept implementation of the method.

[presentation video]. [preprint].
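As a toy instance of the kind of equational reasoning targeted here (our illustration, not an example from the paper), consider the heat equation posed as a polynomial PDE IVP:

```latex
% Toy IVP (illustration only, not from the paper):
%   \partial_t u = \partial_x^2 u ,  with  u(0, x) = x^2 .
% Matching coefficients of the formal power series
%   u(t, x) = \sum_{i,j} a_{ij}\, t^i x^j
% determines a unique solution, which here closes after one step:
\[
  u(t, x) = x^2 + 2t,
  \qquad \text{since } \partial_t u = 2 = \partial_x^2 u .
\]
% A strongest postcondition computed from the precondition
% u(0, x) = x^2 would then entail polynomial identities such as
% u - x^2 - 2t = 0.
```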

13:50
Compact and Explainable Strategy Representations using dtControl

ABSTRACT. Recent advances have shown that decision trees are apt data structures for concisely representing strategies arising from both model checking and controller synthesis for cyber-physical systems. Moreover, they make the strategy explainable and help boost understanding and trust. This tool demonstration presents dtControl, a tool that can represent strategies synthesised by tools like PRISM, Storm, Uppaal Stratego, and SCOTS. We demonstrate the ease of use both when employing dtControl as a black box and when controlling all hyper-parameters. We compare the decision tree representation to BDDs and also demonstrate the possibility of obtaining even smaller decision trees using the specialized algorithms available in the tool.

[presentation video]. [tool website].
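For intuition about why such representations are compact and explainable (a generic scikit-learn sketch of the underlying idea, not dtControl's own API or algorithms):

```python
# Generic illustration of decision trees as strategy representations;
# this uses scikit-learn, NOT dtControl's API. Data is hypothetical.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

# Hypothetical controller table: states are (position, velocity) pairs,
# actions are 0 = brake, 1 = accelerate.
states = np.array([[0.1, 0.5], [0.9, 0.4], [0.2, -0.3], [0.8, -0.6]])
actions = np.array([1, 0, 1, 0])

tree = DecisionTreeClassifier(max_depth=2).fit(states, actions)

# The printed tree IS the whole strategy -- a few human-readable
# predicates over state variables, in contrast to an equivalent BDD
# over bit-blasted encodings of the same states.
print(export_text(tree, feature_names=["position", "velocity"]))
```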

14:45-16:00 Session QES4: Machine Learning Tools
14:45
Loss-size and Reliability Trade-offs Amongst Diverse Redundant Binary Classifiers

ABSTRACT. Many applications involve the use of binary classifiers, including applications where safety/security are critical. The quantitative assessment of such classifiers typically involves receiver operating characteristic (ROC) methods and the estimation of sensitivity/specificity. But such techniques have their limitations. For safety/security critical applications, more relevant measures of reliability and risk should be estimated. Moreover, ROC techniques do not explicitly account for: 1) the inherent uncertainties one faces during assessments, 2) reliability evidence other than the observed failure behaviour of the classifier, and 3) how this observed failure behaviour alters one's uncertainty about classifier reliability. We address these limitations using conservative Bayesian inference (CBI) methods, producing statistically principled, conservative values for risk/reliability measures of interest. Our analyses reveal a trade-off amongst all binary classifiers with the same expected loss: the most reliable classifiers are those most likely to experience high-impact failures. This trade-off can be harnessed using diverse redundant binary classifiers.

[presentation video]. [preprint].
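A minimal numeric illustration of this trade-off (our toy example, not the paper's CBI analysis):

```python
# Toy illustration of the loss-size trade-off (NOT the paper's CBI
# method): two binary classifiers with identical expected loss but very
# different exposure to the high-impact failure mode. Numbers invented.
L_FN = 100.0   # loss of a false negative (high-impact, safety-relevant)
L_FP = 1.0     # loss of a false positive (low-impact nuisance)

classifiers = {
    # name: (P(false negative), P(false positive))
    "A (reliable overall, but risky)": (0.0099, 0.01),
    "B (fails often, but safely)":     (0.0009, 0.91),
}

for name, (p_fn, p_fp) in classifiers.items():
    expected_loss = L_FN * p_fn + L_FP * p_fp      # both come out to 1.0
    print(f"{name}: expected loss = {expected_loss:.3f}, "
          f"P(high-impact failure) = {p_fn:.4f}")
```

Classifier A fails far less often than B, yet is roughly eleven times more likely to incur the high-impact failure, mirroring the trade-off the abstract describes.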

15:10
Tracking the Race Between Deep Reinforcement Learning and Imitation Learning
PRESENTER: Timo P. Gros

ABSTRACT. Learning-based approaches for solving large sequential decision making problems have become popular in recent years. The resulting agents perform differently, and their characteristics depend on those of the underlying learning approach. Here, we consider a benchmark planning problem from the reinforcement learning domain, the Racetrack, to investigate the properties of agents derived from different deep (reinforcement) learning approaches. We compare the performance of deep supervised learning, in particular imitation learning, against reinforcement learning for the Racetrack model. We find that imitation learning yields agents that follow riskier paths. In contrast, the decisions of deep reinforcement learning are more foresighted, i.e., they avoid states in which fatal decisions are more likely. Our evaluations show that for this sequential decision making problem, deep reinforcement learning performs best in many aspects, even though the imitation learning agents are trained on optimal decisions.

[presentation video]. [preprint].

15:25
SafePILCO: a Software Tool for Safe and Data-Efficient Policy Synthesis

ABSTRACT. SafePILCO is a software tool for safe and data-efficient policy search. It extends the well-known PILCO algorithm for data-efficient reinforcement learning, natively written in MATLAB, towards safe learning and policy synthesis. We provide a full Python implementation and leverage existing libraries that allow the codebase to remain short and modular, which is appropriate for wider use by the reinforcement learning, verification, and control communities.

[presentation video]. [preprint].

15:40
StochNetV2: a Tool for Automated Deep Abstractions for Stochastic Reaction Networks
PRESENTER: Denis Repin

ABSTRACT. Predicting stochastic cellular dynamics as emerging from mechanistic models of molecular interactions is a long-standing challenge in systems biology: low-level chemical reaction network (CRN) models give rise to a high-dimensional continuous-time Markov chain (CTMC), which is computationally demanding and often prohibitive to analyse in practice. A recently proposed abstraction method uses deep learning to replace this CTMC with a discrete-time process, by training a neural network called a Mixture Density Network (MDN) with traces sampled at regular time intervals. The major advantage of deep abstractions is that they produce a computational model which is dramatically cheaper to execute, while preserving the statistical features of the training data. In general, the abstraction accuracy improves with the amount of training data. However, depending on the CRN, the overall quality of the method – the efficiency gain and abstraction accuracy – also depends on the choice of neural network architecture, given by hyper-parameters such as the layer types and the connections between them. As a consequence, in practice, the modeller has to find a suitable architecture manually, for each given CRN, through a tedious and time-consuming trial-and-error cycle.

We present a toolbox for stochastic simulations with CRN models and their (automated) deep abstractions. The optimal neural network architecture is learnt jointly with the transition kernel of the abstract process. Automated architecture search makes the method applicable directly to any given CRN, which is time-saving for deep learning experts and crucial for non-specialists. The tool was primarily designed to efficiently reproduce simulation traces of given complex stochastic CRNs in systems biology research, with possibly multi-modal emergent phenotypes. It is at the same time applicable in any broader context where time-series data of a stochastic process is measured experimentally, or synthesised through a modelling language other than CRNs (e.g. rule-based).

[presentation video]. [tool repository].
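To sketch the key ingredient (a generic mixture density network in PyTorch; this is not StochNetV2's actual architecture or API), an MDN maps the current species counts to a Gaussian mixture over the state one time step later:

```python
# Generic mixture density network sketch (NOT StochNetV2's code): maps
# current species counts to a Gaussian mixture over the next state.
import torch
import torch.nn as nn

class MDN(nn.Module):
    def __init__(self, n_species: int, n_components: int = 4, hidden: int = 64):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(n_species, hidden), nn.ReLU())
        self.logits = nn.Linear(hidden, n_components)             # mixture weights
        self.means = nn.Linear(hidden, n_components * n_species)  # component means
        self.log_std = nn.Linear(hidden, n_components * n_species)

    def forward(self, x):
        h = self.body(x)
        k, d = self.logits.out_features, x.shape[-1]
        return (self.logits(h),
                self.means(h).view(-1, k, d),
                self.log_std(h).view(-1, k, d))

def mdn_nll(logits, means, log_std, target):
    # Negative log-likelihood of `target` under the predicted mixture.
    comp = torch.distributions.Normal(means, log_std.exp())
    log_prob = comp.log_prob(target.unsqueeze(1)).sum(-1)   # per component
    return -(torch.logsumexp(torch.log_softmax(logits, -1) + log_prob, -1)).mean()

# Training on (state_t, state_{t+dt}) pairs sampled from SSA traces would
# minimise mdn_nll; sampling the fitted mixture then replaces one CTMC step.
```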