FLOC 2022: FEDERATED LOGIC CONFERENCE 2022
VERIPROP ON FRIDAY, AUGUST 12TH


08:30-09:00 Coffee & Refreshments
09:00-10:30 Session 134E
Location: Taub 9
09:00
Probabilistic Hyperproperties

ABSTRACT. Four decades ago, Lamport used the notion of trace properties as a means to specify the correctness of individual executions of concurrent programs. This notion was later formalized and classified by Alpern and Schneider into safety and liveness properties. Temporal logics like LTL and CTL were built on these efforts to give formal syntax and semantics to requirements on trace properties. Subsequently, verification algorithms were developed to reason about individual executions of a system.

However, it turns out that many interesting requirements are not trace properties. For example, important information-flow security policies (e.g. noninterference, observational determinism) or service level agreements (e.g. mean response time, percentage uptime) cannot be expressed as properties of individual execution traces of a system. Rather, they are properties of sets of execution traces, also known as hyperproperties. Temporal logics such as HyperLTL and HyperCTL∗ have been proposed to provide a unifying framework to express and reason about hyperproperties.

This talk is devoted to a special class of hyperproperties: we ask what hyperproperties are in the context of systems with random behavior. We will discuss which probabilistic relations between independent executions of a system are relevant, how we can formally express them in a temporal logic, and how we can decide the truth of such statements.
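
As a concrete illustration of a hyperproperty (a sketch in standard HyperLTL notation; the precise atomic propositions depend on the system model), observational determinism can be written as

    \forall \pi.\ \forall \pi'.\ (I_\pi = I_{\pi'}) \rightarrow \Box\, (O_\pi = O_{\pi'})

stating that any two executions \pi and \pi' that agree on their (low-security) inputs must agree on their observable outputs at every step. No single trace can witness a violation, which is why this is a property of sets of traces rather than of individual traces.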

09:50
Widest Paths and Global Propagation in Bounded Value Iteration for Stochastic Games
PRESENTER: Ichiro Hasuo

ABSTRACT. Solving stochastic games with the reachability objective is a fundamental problem, especially in quantitative verification and synthesis. For this purpose, bounded value iteration (BVI) attracts attention as an efficient iterative method. However, BVI's performance is often impeded by costly end component (EC) computation that is needed to ensure convergence. Our contribution is a novel BVI algorithm that conducts, in addition to local propagation by the Bellman update that is typical of BVI, global propagation of upper bounds that is not hindered by ECs. To conduct global propagation in a computationally tractable manner, we construct a weighted graph and solve the widest path problem in it. Our experiments show the algorithm's performance advantage over the previous BVI algorithms that rely on EC computation.
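
For intuition, the widest path (maximum-bottleneck path) problem at the core of the global propagation step can be solved by a Dijkstra-style maximisation. The following Python sketch is illustrative only and is not the authors' implementation; in the paper, the weighted graph is derived from the game's transition structure and the current upper-bound values.

    import heapq

    def widest_paths(graph, source):
        """graph: dict mapping every node -> list of (neighbor, weight) pairs.
        Returns, for each node, the largest bottleneck weight of any path
        from `source` to that node."""
        width = {node: 0.0 for node in graph}
        width[source] = float("inf")
        heap = [(-width[source], source)]    # max-heap via negated widths
        while heap:
            w, node = heapq.heappop(heap)
            w = -w
            if w < width[node]:
                continue                     # stale heap entry
            for neighbor, weight in graph[node]:
                bottleneck = min(w, weight)  # width of the path through `node`
                if bottleneck > width[neighbor]:
                    width[neighbor] = bottleneck
                    heapq.heappush(heap, (-bottleneck, neighbor))
        return width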

10:10
Safe Sampling-Based Planning for Stochastic Systems with Uncertain Dynamics
PRESENTER: Thom Badings

ABSTRACT. Autonomous systems that operate in safety-critical settings must be able to behave safely under stochastic disturbances and imprecise model parameters. Due to their uncertain and stochastic nature, such autonomous systems may be captured by probabilistic programs. A typical reach-avoid planning problem is to reach a desirable goal state while avoiding certain unsafe states. However, due to stochastic disturbances and imprecise model parameters, the outcome of an action by the controller of the autonomous system may be uncertain. Moreover, the set of safe states is typically non-convex, making it infeasible to find global solutions to reach-avoid problems in general.

In this research, we employ abstraction-based methods to solve reach-avoid problems for continuous-state systems with partially unknown and/or stochastic dynamics. Given a partial model specification and a reach-avoid problem, we abstract the system into a discrete-state model that captures the uncertainty about the dynamics by probabilistic transitions between discrete states. We resort to sampling techniques to compute probably approximately correct (PAC) bounds on these transition probabilities, which we capture in a so-called interval Markov decision process (iMDP). We synthesize policies using a tailored version of the probabilistic model checker PRISM for iMDPs. We then convert the policy into a feedback controller for the continuous system, for which we give high-confidence guarantees on the probability that the reach-avoid problem is satisfied. With a diverse set of benchmarks, we show the robustness and practical applicability of our method for solving realistic reach-avoid problems.
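
As a generic illustration of how PAC bounds on an unknown transition probability can be obtained from samples (a Hoeffding-style sketch; not necessarily the exact construction used in this work), one may proceed as follows:

    import math

    def pac_probability_interval(successes, samples, delta):
        """With probability at least 1 - delta, the unknown probability lies
        in the returned interval [p_low, p_high] (Hoeffding's inequality)."""
        p_hat = successes / samples
        eps = math.sqrt(math.log(2.0 / delta) / (2.0 * samples))
        return max(0.0, p_hat - eps), min(1.0, p_hat + eps)

    # Hypothetical example: 870 of 1000 sampled successors landed in region R.
    low, high = pac_probability_interval(870, 1000, delta=0.01)

Such intervals, one per abstract transition, are exactly the kind of data an interval MDP takes as input.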

10:30-11:00 Coffee Break
11:00-12:30 Session 137G
Location: Taub 9
11:00
Distribution Testing and Probabilistic Programming: A Match made in Heaven

ABSTRACT. The field of distribution testing seeks to test whether a given distribution satisfies a property of interest. While there have been deep technical developments in the field, distribution testing has so far been largely confined to theoretical studies. I will discuss our work on developing practical algorithmic frameworks based on distribution testing, and its relationship to probabilistic program verification. In particular, I will show that the problem of measuring the distance between straight-line probabilistic programs is #P-hard. This theoretical hardness should not deter us, however, and I will discuss our success in using distribution-testing-based frameworks to design efficient samplers.
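
To make the notion of "distance between programs" concrete, one natural measure is the total variation distance; the Python sketch below estimates it empirically for two discrete samplers (the sampler functions are hypothetical placeholders, and the estimate carries no formal guarantee, which is precisely where the hardness bites).

    from collections import Counter

    def empirical_tv_distance(sampler_a, sampler_b, n=100_000):
        """Monte Carlo estimate of the total variation distance between the
        output distributions of two zero-argument discrete samplers."""
        counts_a = Counter(sampler_a() for _ in range(n))
        counts_b = Counter(sampler_b() for _ in range(n))
        support = set(counts_a) | set(counts_b)
        return 0.5 * sum(abs(counts_a[x] / n - counts_b[x] / n) for x in support)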

11:50
A Quantitative Verification Infrastructure

ABSTRACT. Following the success of deductive verification infrastructures for non-probabilistic programs such as Viper and Boogie, we present our proposal for a deductive verification infrastructure for probabilistic programs. Our quantitative intermediate verification language enables modular reasoning about lower and upper bounds on expected values of programs. In our framework, we can encode and verify different proof rules for weakest pre-expectation semantics. As a proof of concept, we have built a deductive verifier for probabilistic programs on top of our framework. It supports reasoning about lower bounds on weakest liberal pre-expectations and about upper bounds on weakest pre-expectations. For loops, we use encodings of the Park induction and k-induction proof rules.
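
For readers unfamiliar with the terminology: the weakest pre-expectation of a loop is a least fixed point, and Park induction is the standard rule for certifying upper bounds on it. In the usual notation (a sketch of the textbook rule, not the concrete syntax of the intermediate language),

    \mathrm{wp}[\mathtt{while}(G)\{P\}](f) = \mathrm{lfp}\ X.\ [G] \cdot \mathrm{wp}[P](X) + [\neg G] \cdot f,

and Park induction states that any candidate I with

    [G] \cdot \mathrm{wp}[P](I) + [\neg G] \cdot f \le I

is an upper bound, i.e. \mathrm{wp}[\mathtt{while}(G)\{P\}](f) \le I. Latticed k-induction relaxes this by only requiring the candidate to be re-established after several applications of the loop's characteristic function.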

12:10
Inferring Expected Runtimes and Sizes
PRESENTER: Eleanore Meyer

ABSTRACT. We present a modular approach to automatically infer upper bounds on the expected runtimes of probabilistic integer programs. To this end, our approach computes bounds on the runtimes of program parts and on the sizes of their variables in an alternating way. To evaluate its power, we implemented our approach in a new version of our open-source tool KoAT.
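
As a tiny worked example of the kind of bound involved (an illustration, not one of the tool's benchmarks): for the loop "while (x > 0) { x := x - 1 [1/2] skip }", each iteration decrements x with probability 1/2, so reaching x = 0 from x > 0 requires x successful decrements and, in expectation, 2x iterations. Hence 2 * max(x, 0) is an upper bound on the expected runtime measured in loop iterations.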

12:30-14:00 Lunch Break

Lunches will be held in Taub hall.

14:00-15:30 Session 138G
Location: Taub 9
14:00
Quantifying Leakage in Privacy Risk Analysis using Probabilistic Programming
14:50
Deterministic stream-sampling for probabilistic programming: semantics and verification
PRESENTER: William Smith

ABSTRACT. Probabilistic programming languages rely fundamentally on some notion of sampling, and this is doubly true for probabilistic programming languages which perform Bayesian inference using Monte Carlo techniques. Verifying samplers---proving that they generate samples from the correct distribution---is crucial to the use of probabilistic programming languages for statistical modelling and inference. However, the typical denotational semantics of probabilistic programs is incompatible with deterministic notions of sampling, which is problematic given that most statistical inference is performed using pseudorandom number generators.

We present an operational and denotational semantics for probabilistic programming, in terms of operations on samplers, that allows for deterministic generation of random variables, and which distinguishes between samplers---programs---and the probability distributions which they target. We also introduce a sound calculus for demonstrating that samplers target a desired distribution, justifying common approaches to constructing samplers, which can be applied to produce samplers that are correct-by-construction.
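
To make the sampler/distribution distinction concrete, the following Python sketch (an illustration, not the semantics proposed in the talk) builds an exponential sampler by applying the inverse CDF to a deterministic stream of uniforms produced by a fixed-seed generator; the program is entirely deterministic, yet the distribution it targets is Exp(rate).

    import math
    import random

    def uniform_stream(seed):
        """Deterministic stream of uniform numbers from a fixed-seed PRNG."""
        rng = random.Random(seed)
        while True:
            yield rng.random()

    def exponential_sampler(stream, rate):
        """Sampler targeting Exp(rate): inverse-CDF transform of the stream."""
        for u in stream:
            yield -math.log(1.0 - u) / rate

    # The sampler (a program) is a different object from its target distribution.
    samples = exponential_sampler(uniform_stream(seed=42), rate=2.0)
    first_three = [next(samples) for _ in range(3)]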

15:10
Probabilistic Guarded Kleene Algebra with Tests

ABSTRACT. We extend Guarded Kleene Algebra with Tests (GKAT) with probabilistic branching and loops to reason about imperative programs with probabilistic control flow. We introduce a coalgebraic operational semantics and a sound Salomaa-style axiomatisation of bisimulation equivalence. As an example, we show the correctness of a variant of the Knuth-Yao algorithm that simulates a three-sided die with a series of fair coin tosses.
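
For reference, the three-sided-die construction used as the example can be sketched as follows (a minimal Python illustration of the classical rejection scheme, not the GKAT encoding from the talk): two fair coin flips give four equally likely outcomes; three of them are mapped to die faces and the fourth restarts, so each face is produced with probability 1/3 and the loop terminates almost surely.

    import random

    def fair_coin():
        """A fair coin flip: True (heads) with probability 1/2."""
        return random.random() < 0.5

    def three_sided_die():
        """Simulate a fair three-sided die using only fair coin flips."""
        while True:
            first, second = fair_coin(), fair_coin()
            if not first and not second:
                return 1
            if not first and second:
                return 2
            if first and not second:
                return 3
            # two heads: reject this round and flip again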

15:30-16:00 Coffee Break
16:00-17:15 Session 139E
Location: Taub 9
16:00
Uncertainty propagation in discrete-time systems using probabilistic forms

ABSTRACT. We consider the problem of rigorously quantifying the uncertainty in the states of a dynamical system due to the influence of initial state uncertainties and stochastic disturbance inputs. We use ideas from interval arithmetic and its variations, such as affine arithmetic or Taylor expansions, to rigorously represent the state variables at time t as a function of the initial state variables and noise symbols that model the random exogenous inputs. We also show how concentration of measure inequalities can be employed to prove rigorous bounds on the tail probabilities of these state variables.
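
A minimal sketch of the flavour of representation involved (illustrative only; the probabilistic forms in the talk are more general): each state variable is kept as an affine combination of bounded, zero-mean noise symbols, this representation is closed under linear updates, and Hoeffding's inequality then yields rigorous tail bounds.

    import math

    class AffineForm:
        """x = center + sum_i coeffs[i] * eps_i, where the eps_i are
        independent, zero-mean noise symbols with values in [-1, 1]."""
        def __init__(self, center, coeffs=None):
            self.center = center
            self.coeffs = dict(coeffs or {})

        def scale(self, a):
            return AffineForm(a * self.center,
                              {k: a * v for k, v in self.coeffs.items()})

        def add(self, other):
            coeffs = dict(self.coeffs)
            for k, v in other.coeffs.items():
                coeffs[k] = coeffs.get(k, 0.0) + v
            return AffineForm(self.center + other.center, coeffs)

        def tail_bound(self, t):
            """Hoeffding: P(x - center >= t) <= exp(-t^2 / (2 * sum_i coeffs[i]^2))."""
            s = sum(v * v for v in self.coeffs.values())
            return 1.0 if s == 0.0 else math.exp(-t * t / (2.0 * s))

    # One step x' = 0.9 * x + w of a scalar system, with w a fresh noise symbol.
    x0 = AffineForm(1.0, {"eps0": 0.1})
    x1 = x0.scale(0.9).add(AffineForm(0.0, {"w1": 0.05}))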

16:50
Verifying Probabilistic Programs using Generating Functions

ABSTRACT. We study discrete probabilistic programs with potentially unbounded looping behaviors over an infinite state space. We present, to the best of our knowledge, the first decidability result for the problem of determining whether such a program generates exactly a specified distribution over its outputs (provided the program terminates almost-surely). The class of distributions that can be specified in our formalism consists of standard distributions (geometric, uniform, etc.) and finite convolutions thereof. Our method relies on representing these (possibly infinite-support) distributions as probability generating functions which admit effective arithmetic operations. We have automated our techniques in a tool called Prodigy, which supports automatic invariance checking, compositional reasoning of nested loops, and efficient queries to the output distribution, as demonstrated by experiments.
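
As a small example of the representation (a textbook illustration, not necessarily one of the tool's benchmarks): the program "c := 0; while (flip(1/2)) { c := c + 1 }" terminates almost surely and outputs a geometric distribution with P(c = k) = (1/2)^(k+1), whose probability generating function is the closed form

    G(z) = \sum_{k \ge 0} (1/2)^{k+1} z^k = \frac{1/2}{1 - z/2} = \frac{1}{2 - z},

and equality with a specified output distribution can then be checked by effective arithmetic on such rational functions.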