09:00 | PRESENTER: Raven Beutner ABSTRACT. Temporal hyperproperties are system properties that relate multiple execution traces. For (finite-state) hardware, temporal hyperproperties are supported by model checking algorithms, and tools for general temporal logics like HyperLTL exist. For (infinite-state) software, the analysis of temporal hyperproperties has, so far, been limited to $k$-safety properties, i.e., properties that stipulate the absence of a bad interaction between any $k$ traces. In this paper, we present an automated method for the verification of $\forall^k\exists^l$-safety properties in infinite-state systems. A $\forall^k\exists^l$-safety property stipulates that for any $k$ traces, there \emph{exist} $l$ traces such that the resulting $k+l$ traces do not interact badly. This combination of universal and existential quantification enables us to express many properties beyond $k$-safety, including, for example, generalized non-interference or program refinement. Our method is based on a strategy-based instantiation of existential trace quantification combined with a program reduction, both in the context of a fixed predicate abstraction. Notably, our framework allows for mutual dependence of strategy and reduction. |
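A canonical example of a forall^2-exists^1 property mentioned in the abstract is generalized non-interference. The HyperLTL-style formulation below is one standard way to write it and is included only for orientation; the atomic propositions h (high inputs) and o (low observables) are placeholders, not the paper's notation.

```latex
% Generalized non-interference as a forall^2-exists^1 hyperproperty
% (standard formulation, not quoted from the paper): for any two traces,
% some third trace combines the high inputs of the first with the
% low-observable behaviour of the second.
\forall \pi_1.\, \forall \pi_2.\, \exists \pi_3.\;
  \square \, (h_{\pi_3} = h_{\pi_1}) \;\wedge\; \square \, (o_{\pi_3} = o_{\pi_2})
```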
09:20 | PRESENTER: Kuldeep S. Meel ABSTRACT. Quantified information flow (QIF) has emerged as a rigorous approach to quantitatively measure confidentiality; the information-theoretic underpinning of QIF allows end-users to link the computed quantities with the computational effort required on the part of the adversary to gain access to desired confidential information. In this work, we focus on the estimation of Shannon entropy for a given program P. As a first step, we focus on the case wherein a Boolean formula F(X,Y) captures the relationship between inputs X and output Y of P. Such formulas F(X,Y) have the property that, for every valuation of X, there exists exactly one valuation of Y such that F is satisfied. The existing techniques require O(2^m) model counting queries, where m = |Y|. We propose the first efficient algorithmic technique, called EntropyEstimation, to estimate the Shannon entropy of F(X,Y) with PAC-style guarantees, i.e., the computed estimate is guaranteed to lie within a (1 ± epsilon)-factor of the ground truth with confidence at least 1-delta. Furthermore, EntropyEstimation makes only O(min(m,n)/epsilon^2) counting and sampling queries, where m = |Y| and n = |X|, thereby achieving a significant reduction in the number of model counting queries. We demonstrate the practical efficiency of our algorithmic framework via a detailed experimental evaluation. Our evaluation shows that the proposed framework scales to formulas beyond the reach of previously known approaches. |
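For orientation, the quantity being estimated is the Shannon entropy of the output distribution induced by uniformly random inputs. The brute-force Python sketch below (hypothetical toy circuit, not from the paper) computes it exactly by enumerating all 2^n inputs, which is precisely the blow-up EntropyEstimation avoids by using counting and sampling queries instead.

```python
from itertools import product
from math import log2

# Toy functional relation F(X, Y): Y = (x0 AND x1, x0 XOR x2).
# For each input x there is exactly one output y with F(x, y) satisfied.
def output(x):
    x0, x1, x2 = x
    return (x0 & x1, x0 ^ x2)

n = 3  # |X|
counts = {}
for x in product([0, 1], repeat=n):
    y = output(x)
    counts[y] = counts.get(y, 0) + 1

# Shannon entropy of the output distribution under uniform inputs:
# H = -sum_y p_y * log2(p_y), where p_y = |{x : F(x, y)}| / 2^n.
entropy = -sum((c / 2**n) * log2(c / 2**n) for c in counts.values())
print(f"H = {entropy:.3f} bits")
```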
09:40 | PRESENTER: Yuxin Fan ABSTRACT. Secure multi-party computation (MPC) is a promising technique for privacy-preserving applications. A number of MPC frameworks have been proposed to reduce the burden of designing customized protocols, allowing non-experts to quickly develop and deploy MPC applications. To improve performance, recent MPC frameworks allow users to declare secret variables so that only these variables are protected. However, in practice, it can be highly non-trivial for non-experts to specify secret variables: declaring too many degrades performance, while declaring too few compromises privacy. To address this problem, we propose an automated security policy synthesis approach that declares as few secret variables as possible without compromising security. Our approach is a synergistic integration of type inference and symbolic reasoning. The former is able to quickly infer a sound, but sometimes conservative, security policy, whereas the latter makes it possible to precisely identify secret variables in a security policy that can be declassified. Moreover, the results from the symbolic reasoning are fed back to type inference to refine the security types even further. We implement our approach in a tool, named PoS4MPC. Experimental results on five typical MPC applications confirm the efficacy of our approach. |
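As a rough illustration of the declassification question (a hypothetical toy, not an example from the paper): in the classic two-party millionaires comparison, the intermediate comparison result coincides with the public output, so protecting it as a secret variable adds protocol cost without adding security, and a symbolic analysis can justify declassifying it.

```python
# Hypothetical toy, illustrating a candidate secret variable that a
# precise analysis could declassify (not the paper's example).
def millionaires(a_wealth_secret, b_wealth_secret):
    richer = a_wealth_secret > b_wealth_secret  # candidate secret variable...
    return richer                               # ...but its value is the public output anyway
```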
10:00 | PRESENTER: Julian Siber ABSTRACT. Hyperproperties relate multiple computation traces to each other. When a system model violates the specification, model checkers for hyperproperties therefore return a set of traces as a counterexample. Fixing the erroneous relations between traces in the system that led to the counterexample is a difficult manual effort that benefits greatly from additional explanations. In this paper, we present an explanation method for counterexamples to hyperproperties described in the specification logic HyperLTL. We extend Halpern and Pearl's definition of actual causality to sets of traces witnessing the violation of a HyperLTL formula, which allows us to identify the events that caused the violation. We report on the implementation of our method and show that it significantly improves on previous approaches for analyzing counterexamples returned by HyperLTL model checkers. |
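For context, a typical HyperLTL specification whose counterexamples are sets (here, pairs) of traces is observational determinism. The formulation below is a standard textbook variant with placeholder atomic propositions and is not taken from the paper.

```latex
% Observational determinism (illustrative): any two traces that agree on
% the low inputs must agree on the low outputs; a violation is witnessed
% by a pair of traces, which the explanation method then analyses.
\forall \pi.\, \forall \pi'.\;
  \square \, (i^{\mathit{low}}_{\pi} = i^{\mathit{low}}_{\pi'})
  \;\rightarrow\;
  \square \, (o^{\mathit{low}}_{\pi} = o^{\mathit{low}}_{\pi'})
```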
10:20 | PRESENTER: Miguel Isabel ABSTRACT. The most widely used Zero-Knowledge (ZK) protocols require provers to prove they know a solution to a computational problem expressed as a Rank-1 Constraint System (R1CS). An R1CS is essentially a system of non-linear arithmetic constraints over a set of signals, whose security level depends only on its non-linear part, as the linear (additive) constraints can be easily solved by an attacker. Distilling the essential constraints from an R1CS by removing the part that does not contribute to its security is important, not only to reduce the costs (time and space) of producing ZK proofs, but also to reveal to cryptographic programmers the real hardness of their proofs. In this paper, we formulate the problem of distilling constraints from an R1CS as the (hard) problem of simplifying constraints in the realm of non-linearity. To the best of our knowledge, this is the first time that constraint-based techniques developed in the context of formal methods have been applied to the challenging problem of analysing and optimizing ZK protocols. |
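A minimal illustration of an R1CS, not drawn from the paper: the statement x^3 + x + 5 = out flattened into rank-1 constraints over the signal vector z = (1, x, s_1, s_2, out). Only the multiplicative constraints carry the hardness; the purely linear one is the kind of constraint that distilling can discharge.

```latex
% Toy R1CS (illustrative). Each row has the rank-1 shape (a.z)(b.z) = c.z.
% The first two constraints are non-linear; the last is linear (additive)
% and could be eliminated by substitution.
\begin{aligned}
  x \cdot x               &= s_1 \\
  s_1 \cdot x             &= s_2 \\
  (s_2 + x + 5) \cdot 1   &= \mathit{out}
\end{aligned}
```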
Lunches will be held in Taub lobby (CAV, CSF) and in The Grand Water Research Institute (DL, NMR, IJCAR, ITP).
14:00 | PRESENTER: Ryotaro Banno ABSTRACT. In many Internet of Things (IoT) applications, data sensed by an IoT device are continuously sent to the server and monitored against a specification. Since the data often contain sensitive information, and the monitored specification is usually proprietary, both must be kept private from the other end. We propose a protocol to conduct oblivious online monitoring---online monitoring conducted without revealing the private information of each party to the other---against a safety LTL specification. In our protocol, we first convert a safety LTL formula into a DFA and conduct online monitoring with the DFA. Based on fully homomorphic encryption (FHE), we propose two online algorithms (ReverseStream and BlockStream) to run a DFA obliviously. We prove the correctness and security of our entire protocol. We also show the scalability of our algorithms theoretically and empirically. Our case study shows that our algorithms are fast enough to monitor blood glucose levels online, demonstrating our protocol's practical relevance. |
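The core step that the protocol performs under encryption is an ordinary DFA transition on each streamed symbol. The plaintext Python sketch below shows that step for a hypothetical toy safety monitor; the FHE-based algorithms ReverseStream and BlockStream evaluate the same kind of transition function obliviously, which this sketch does not attempt to model.

```python
# Plaintext sketch of online DFA monitoring (toy DFA, not the paper's
# construction). DFA for the safety property "once 'alarm' (1) is seen,
# reject"; state 0 is accepting, state 1 is the rejecting sink.
delta = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 1}
accepting = {0}

def monitor(stream):
    state = 0
    for symbol in stream:                 # in the protocol, each symbol arrives
        state = delta[(state, symbol)]    # encrypted and the transition is
        yield state in accepting          # evaluated homomorphically

print(list(monitor([0, 0, 1, 0])))  # [True, True, False, False]
```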
14:20 | PRESENTER: Anna Becchi ABSTRACT. The analysis of legacy systems requires the automated extraction of high-level specifications. We propose a framework, called Abstraction Modulo Stability, for the analysis of transition systems operating in stable states, and responding with run-to-completion transactions to external stimuli. The abstraction captures the effects of external stimuli on the system state, and describes them in the form of a finite state machine. This approach is parametric in a set of predicates of interest and in the definition of stability. We consider some possible stability definitions which yield different practically relevant abstractions, and propose a parametric algorithm for abstraction computation. The obtained FSM is extended with guards and effects on a given set of variables of interest. The framework is evaluated in terms of expressivity and adequacy within an industrial project with the Italian Railway Network, on reverse engineering tasks for relay-based interlocking circuits, to extract specifications for a computer-based reimplementation. |
14:40 | Reachability of Koopman Linearized Systems Using Random Fourier Feature Observables and Polynomial Zonotope Refinement PRESENTER: Ethan Lew ABSTRACT. Koopman operator linearization approximates nonlinear systems of differential equations with higher-dimensional linear systems. For formal verification using reachability analysis, this is an attractive conversion, as highly scalable methods exist to compute reachable sets for linear systems. However, two main challenges are present with this approach, both of which are addressed in this work. First, the approximation must be sufficiently accurate for the result to be meaningful, which is controlled by the choice of observable functions during Koopman operator linearization. By using random Fourier features as observable functions, the process becomes more systematic than in earlier work, while providing a higher-accuracy approximation. Second, although the higher-dimensional system is linear, simple convex initial sets in the original space can become complex non-convex initial sets in the linear system. We overcome this using a combination of Taylor model arithmetic and polynomial zonotope refinement. Compared with prior work, the result is more efficient, more systematic and more accurate. |
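A minimal sketch of the lifting idea, assuming an EDMD-style least-squares fit and an illustrative two-dimensional system (neither the dynamics nor the dimensions are the paper's benchmarks): states are mapped through random Fourier feature observables g(x) = cos(Wx + b), and a linear operator A is fitted so that g(x') is approximately A g(x).

```python
import numpy as np

# Sketch of Koopman linearization with random Fourier feature observables
# (illustrative dynamics and sizes, not the paper's setup).
rng = np.random.default_rng(0)
d, k = 2, 100                              # state dimension, number of observables
W = rng.normal(size=(k, d))
b = rng.uniform(0, 2 * np.pi, size=k)

def lift(x):                               # observables g(x) = cos(Wx + b)
    return np.cos(W @ x + b)

def step(x, dt=0.05):                      # some nonlinear discrete-time dynamics
    return x + dt * np.array([x[1], -np.sin(x[0])])

X = rng.uniform(-1, 1, size=(1000, d))            # sampled states
G = np.stack([lift(x) for x in X])                # lifted states
Gp = np.stack([lift(step(x)) for x in X])         # lifted successors
A = np.linalg.lstsq(G, Gp, rcond=None)[0].T       # linear model: g(x') ~ A g(x)
```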
15:00 | PRESENTER: Sylvie Putot ABSTRACT. We present a unified approach, implemented in the RINO tool, for the computation of inner and outer approximations of reachable sets of discrete-time and continuous-time dynamical systems, possibly controlled by neural networks with differentiable activation functions. RINO combines a zonotopic set representation with generalized mean-value AE extensions to compute under- and over-approximations of the robust range of differentiable functions, and applies these techniques to the particular case of learning-enabled dynamical systems. The AE extensions require an efficient and accurate evaluation of the function and its Jacobian with respect to the inputs and initial conditions. For continuous-time dynamical systems, possibly controlled by neural networks, the function to evaluate is the solution of the dynamical system. It is over-approximated in RINO using Taylor methods in time coupled with a set-based evaluation with zonotopes. We demonstrate the good performance of RINO compared to the state-of-the-art tools Verisig 2.0 and ReachNN* on a set of classical benchmark examples of neural-network-controlled closed-loop systems. For generally comparable precision to Verisig 2.0 and higher precision than ReachNN*, RINO is always at least one order of magnitude faster, while also computing the more involved inner-approximations that the other tools do not compute. |
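For orientation, the mean-value form below is the textbook idea behind such outer approximations; RINO's generalized AE extensions refine it (and additionally produce inner approximations), so this is a schematic, not the tool's actual formula.

```latex
% Mean-value enclosure (schematic): for a set X with centre x_0 and an
% interval/zonotopic enclosure [J] of the Jacobian of f over X,
f(x) \;\in\; f(x_0) + [J]\,(x - x_0) \qquad \text{for all } x \in X
```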
15:10 | PRESENTER: Kyungmin Bae ABSTRACT. We present the STLmc model checker for signal temporal logic (STL) properties of hybrid systems. STLmc can perform STL model checking up to a robustness threshold for a wide range of nonlinear hybrid systems with ordinary differential equations. Our tool utilizes the refutation-complete SMT-based model checking algorithm with various SMT solvers by reducing the robust STL model checking problem into the Boolean STL model checking problem. If STLmc does not find a counterexample, the system is guaranteed to be correct up to the given bounds and robustness threshold. We demonstrate the effectiveness of STLmc on a number of hybrid system benchmarks. |
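For reference, the robustness value that the threshold refers to is given by the standard quantitative semantics of STL; the fragment below (an atomic predicate and the bounded "always" operator only) uses generic notation and is not quoted from the paper.

```latex
% Standard STL robustness semantics (illustrative fragment):
\rho(x > c,\ \sigma,\ t) = \sigma_x(t) - c, \qquad
\rho(\square_{[a,b]}\varphi,\ \sigma,\ t) = \inf_{t' \in [t+a,\, t+b]} \rho(\varphi,\ \sigma,\ t')
```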
15:20 | PRESENTER: Elizabeth Polgreen ABSTRACT. UCLID5 is a tool for the multi-modal formal modeling, verification, and synthesis of systems. It enables one to tackle verification problems for heterogeneous systems such as combinations of hardware and software, or those that have multiple, varied specifications, or systems that require hybrid modes of modeling. A novel aspect of UCLID5 is an emphasis on the use of syntax-guided and inductive synthesis to automate steps in modeling and verification. This tool paper presents new developments in the UCLID5 tool including new language features, integration with new techniques for syntax-guided synthesis and satisfiability solving, support for hyperproperties and combinations of axiomatic and operational modeling, demonstrations on new problem classes, and a robust implementation. |
16:00 | PRESENTER: Chaitanya Agarwal ABSTRACT. Markov decision processes (MDPs) and continuous-time MDPs (CTMDPs) are the fundamental models for non-deterministic systems with probabilistic uncertainty. Mean payoff (a.k.a. long-run average reward) is one of the most classic objectives considered in their context. We provide the first algorithm to compute mean payoff probably approximately correctly in unknown MDPs; further, we extend it to unknown CTMDPs. We do not require any knowledge of the state space, only a lower bound on the minimum transition probability, which has been advocated in the literature. In addition to providing PAC bounds for our algorithm, we also demonstrate its practical nature by running experiments on standard benchmarks. |
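For reference, the mean-payoff value that the algorithm estimates PAC-style is the standard long-run average reward; the definition below uses generic notation, not necessarily the paper's.

```latex
% Mean payoff of a strategy \sigma from state s, and the optimal value:
v^{\sigma}(s) = \liminf_{n \to \infty} \mathbb{E}^{\sigma}_{s}\!
  \Big[\tfrac{1}{n} \sum_{i=0}^{n-1} r(S_i, A_i)\Big],
\qquad
v^{*}(s) = \sup_{\sigma} v^{\sigma}(s)
```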
16:20 | PRESENTER: Thom Badings ABSTRACT. We employ uncertain parametric CTMCs with parametric transition rates and a prior on the parameter values. The prior encodes uncertainty about the actual transition rates, while the parameters allow dependencies between transition rates. Sampling the parameter values from the prior distribution then yields a standard CTMC, for which we may compute relevant reachability probabilities. We provide a principled solution, based on a technique called scenario optimization, to the following problem: From a finite set of parameter samples and a user-specified confidence level, compute confidence regions on the reachability probabilities. The confidence regions should (with high probability) contain the reachability probabilities of a CTMC induced by any additional sample. To boost the scalability of the approach, we employ standard abstraction techniques and adapt our methodology to support approximate reachability probabilities. Experiments with various well-known benchmarks show the applicability of the approach. |
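A minimal sketch of the sampling step under strong simplifying assumptions: a hypothetical three-state CTMC with a closed-form time-bounded reachability probability, a made-up prior in which both rates depend on one parameter, and the naive interval spanned by the samples as the region. The paper's method, abstractions, and confidence computation are considerably more refined than this.

```python
import numpy as np

# Toy CTMC: s0 --lam--> goal, s0 --mu--> sink. Reaching goal within time T
# has the closed form p = lam/(lam+mu) * (1 - exp(-(lam+mu)*T)).
rng = np.random.default_rng(1)
T, N = 2.0, 1000                       # time bound, number of parameter samples

def reach_prob(lam, mu):
    return lam / (lam + mu) * (1 - np.exp(-(lam + mu) * T))

theta = rng.gamma(shape=4.0, scale=0.5, size=N)  # prior over the parameter
lam, mu = 2.0 * theta, theta + 0.1               # both rates depend on theta
probs = reach_prob(lam, mu)

# Naive scenario-style region: the interval spanned by the N sampled
# reachability probabilities; scenario optimization bounds the probability
# that a fresh sample falls outside such a region.
print(f"region ~ [{probs.min():.3f}, {probs.max():.3f}]")
```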
16:40 | PRESENTER: Pablo Castro ABSTRACT. We investigate zero-sum turn-based two-player stochastic games in which the objective of one player is to maximize the amount of rewards obtained during a play, while the other aims at minimizing it. We focus on games in which the minimizer plays in a fair way. We believe that these kinds of games enjoy interesting applications in software verification, where the maximizer plays the role of a system intending to maximize the number of “milestones” achieved, and the minimizer represents the behavior of some uncooperative yet fair environment. Normally, to study total reward properties, games are required to be stopping (i.e., they reach a terminal state with probability 1). We relax this requirement and only ask that the game be stopping under a fair minimizing player. We prove that these games are determined, i.e., each state of the game has a well-defined value. Furthermore, we show that both players have memoryless and deterministic optimal strategies, and that the game value can be computed by approximating the greatest fixed point of a set of functional equations. We implemented our approach in a prototype tool and evaluated it on an illustrative example and an Unmanned Aerial Vehicle case study. |
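For orientation, the functional equations have the familiar Bellman shape shown below; this schematic omits the fairness condition on the minimizer, which is the paper's key ingredient, and uses generic notation.

```latex
% Schematic functional equations for the total-reward value V (fairness
% side conditions omitted; notation is illustrative, not the paper's):
V(s) =
\begin{cases}
  \displaystyle\max_{a \in A(s)} \Big( r(s,a) + \sum_{s'} \delta(s,a)(s')\, V(s') \Big)
    & \text{if } s \text{ is a maximizer state},\\[4pt]
  \displaystyle\min_{a \in A(s)} \Big( r(s,a) + \sum_{s'} \delta(s,a)(s')\, V(s') \Big)
    & \text{if } s \text{ is a minimizer state}.
\end{cases}
```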
17:00 | PRESENTER: Lorenz Leutgeb ABSTRACT. In this paper, we present the first fully automated expected amortised cost analysis of self-adjusting data structures, that is, of randomised splay trees, randomised splay heaps and randomised meldable heaps, which so far have only been analysed (semi-)manually in the literature. We generalise a type-and-effect system for logarithmic amortised resource analysis, recently proposed by Hofmann et al., to probabilistic programs. Further, we provide a prototype implementation able to fully automatically analyse the aforementioned case studies. |
17:20 | PRESENTER: Aina Niemetz ABSTRACT. State-of-the-art SMT solvers are highly complex pieces of software with performance, robustness, and correctness as key requirements. Complementing traditional testing techniques for these solvers with randomized stress testing has been shown to be quite effective. Recent work has showcased the value of input fuzzing for finding issues, but typically does not comprehensively test a solver’s API. Previous work on model-based API fuzzing was tailored to a single solver and a small subset of SMT-LIB. We present Murxla, a comprehensive, modular, and highly extensible model-based API fuzzer for SMT solvers. Murxla randomly generates valid sequences of solver API calls based on a customizable API model, with full support for the semantics and features of SMT-LIB. It is solver-agnostic but extensible to allow for solver-specific testing, and supports option fuzzing, cross-checking with other solvers, translation to SMT-LIBv2, and SMT-LIBv2 input fuzzing. Our evaluation confirms its efficacy in finding issues in multiple state-of-the-art SMT solvers. |
Pickup at 18:00 from the Technion (tour of Haifa; no food will be provided).