09:00 | Symbolic Execution and Probabilistic Reasoning SPEAKER: Corina Pasareanu ABSTRACT. Symbolic execution is a systematic program analysis technique which explores multiple program behaviors all at once by collecting and solving symbolic path conditions over program paths. The technique has recently been extended with probabilistic reasoning. This approach computes the conditions to reach target program events of interest and uses model counting to quantify the fraction of the input domain satisfying these conditions, thus computing the probability of event occurrence. This probabilistic information can be used, for example, to compute the reliability of an aircraft controller under different wind conditions (modeled probabilistically) or to quantify the leakage of sensitive data in a software system, using information-theoretic metrics such as Shannon entropy. In this talk we review recent advances in symbolic execution and probabilistic reasoning and we discuss how they can be used to ensure the safety and security of software systems. |
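As a rough illustration of the model-counting step described in this abstract, the sketch below computes the probability of reaching an event by exhaustively counting the inputs that satisfy a path condition over a small uniform domain. The function name, the toy condition, and the brute-force enumeration are ours for illustration only; real tools use dedicated model counters over much larger symbolic domains.

```python
from itertools import product
from fractions import Fraction

def event_probability(path_condition, domains):
    """Probability of reaching a program event, by exact model counting:
    enumerate the (assumed uniform, independent) inputs, count those that
    satisfy the symbolic path condition, and divide by the domain size."""
    total = 0
    satisfying = 0
    for values in product(*domains.values()):
        total += 1
        if path_condition(dict(zip(domains, values))):
            satisfying += 1
    return Fraction(satisfying, total)

# Toy example: the target event is reached exactly when x + y >= 15,
# with x and y drawn uniformly from {0, ..., 9}.
cond = lambda env: env["x"] + env["y"] >= 15
print(event_probability(cond, {"x": range(10), "y": range(10)}))  # prints 1/10
```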
10:30 | A Weakest Pre-Expectation Semantics for Mixed-Sign Expectations SPEAKER: unknown ABSTRACT. This paper presents a weakest-precondition-style calculus for reasoning about the expected values (pre-expectations) of mixed-sign unbounded random variables after execution of a probabilistic program. The semantics of a while-loop is well-defined as the limit of iteratively applying a functional to a zero-element, just as in the traditional weakest pre-expectation calculus, even though a standard least fixed point argument is not applicable in this context. A striking feature of our semantics is that it is always well-defined, even if the expected values do not exist. We show that the calculus is sound and present an invariant-based approach for reasoning about pre-expectations of loops. |
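For context, here is a sketch of the loop rule in the traditional (non-negative) weakest pre-expectation calculus that the abstract takes as its starting point; the notation is the usual one from the literature, not necessarily the paper's. For a loop while (G) {C} and post-expectation $f$:

\[
  \mathrm{wp}\big[\texttt{while}\,(G)\,\{C\}\big](f) \;=\; \sup_{n \in \mathbb{N}} \Phi_f^{\,n}(0),
  \qquad
  \Phi_f(X) \;=\; [G] \cdot \mathrm{wp}[C](X) \;+\; [\neg G] \cdot f,
\]

where $[G]$ is the indicator (Iverson bracket) of the guard. For non-negative $f$ this supremum is the least fixed point of $\Phi_f$; the paper's contribution is to make sense of the corresponding limit when $f$ may take both signs and be unbounded.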
10:55 | Verification of randomized security protocols SPEAKER: unknown ABSTRACT. We consider the problem of verifying the security of finitely many sessions of a protocol that tosses coins in addition to using standard cryptographic primitives, against a Dolev-Yao adversary. Two security properties are investigated: secrecy, which asks whether no attacker interacting with a protocol $P$ can determine a secret with probability > 1-p; and indistinguishability, which asks whether the probability of observing any sequence o in protocol P1 is the same as that of observing o in protocol P2, under the same attacker. Both secrecy and indistinguishability are known to be coNP-complete for non-randomized protocols. In contrast, we show that, for randomized protocols, secrecy and indistinguishability are both decidable in coNEXPTIME. In addition, we prove a matching lower bound for the secrecy problem by a reduction from the non-satisfiability problem of monadic first-order logic without equality. |
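Written out in ad hoc notation (not the paper's), the two properties of the abstract read: for every Dolev-Yao attacker $\mathcal{A}$ and every observable sequence $o$,

\[
  \textit{secrecy:}\quad \Pr\big[\mathcal{A}\,\|\,P \text{ determines the secret}\big] \;\le\; 1 - p,
  \qquad
  \textit{indistinguishability:}\quad \Pr\big[\mathcal{A}\,\|\,P_1 \text{ produces } o\big] \;=\; \Pr\big[\mathcal{A}\,\|\,P_2 \text{ produces } o\big].
\]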
11:20 | MDPs with Energy-Parity Objectives SPEAKER: unknown ABSTRACT. Energy-parity objectives combine $\omega$-regular with quantitative objectives of reward MDPs. The controller needs to avoid running out of energy while satisfying a parity objective. We refute the common belief that if an energy-parity objective holds almost-surely, then this can be realised by some finite-memory strategy. We provide a surprisingly simple counterexample that only uses coBüchi conditions. We introduce the new class of bounded (energy) storage objectives that, when combined with parity objectives, preserve the finite-memory property. Based on these, we show that almost-sure and limit-sure energy-parity objectives, as well as almost-sure and limit-sure storage objectives, are in $\mathrm{NP} \cap \mathrm{coNP}$ and can be solved in pseudo-polynomial time for energy-parity MDPs. |
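For orientation, one common way to state the two objectives being combined, in generic notation that is not necessarily the paper's: a run with initial credit $c_0$, step rewards $r_0 r_1 r_2 \ldots$ and priorities (colours) $\chi_0 \chi_1 \chi_2 \ldots$ satisfies

\[
  \textit{energy}(c_0):\;\; \forall n.\;\; c_0 + \sum_{i<n} r_i \;\ge\; 0,
  \qquad
  \textit{parity}:\;\; \min\{\, c \mid c = \chi_i \text{ for infinitely many } i \,\} \text{ is even},
\]

and an energy-parity objective requires both simultaneously.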
11:45 | Computing Quantiles in Markov Chains with Multi-Dimensional Costs SPEAKER: Christoph Haase ABSTRACT. Probabilistic systems that accumulate quantities such as energy or cost are naturally modelled by cost chains, which are Markov chains whose transitions are labelled with a vector of numerical costs. Computing information about the probability distribution of the total accumulated cost is a fundamental problem in this model. In this paper, we study the so-called cost problem, which is to compute quantiles of the total cost, such as the median cost or the probability of large costs. While it is an open problem whether such probabilities are always computable or even rational, we present an algorithm that approximates them with arbitrary precision. The algorithm is simple to state and implement, and exploits strong results from graph theory such as the so-called BEST theorem for efficiently computing the number of Eulerian circuits in a directed graph. Moreover, our algorithm enables us to show that a decision version of the cost problem lies in the counting hierarchy, a counting analogue of the polynomial-time hierarchy that contains the latter and is included in PSPACE. Finally, we demonstrate the applicability of our algorithm by evaluating it experimentally. |
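As a small, self-contained illustration of the BEST theorem mentioned in the abstract, the sketch below counts the Eulerian circuits of a tiny directed multigraph by combining the directed matrix-tree theorem with a product of factorials. The graph and the function names are ours, chosen for illustration, and are not taken from the paper.

```python
from math import factorial, prod

def det(m):
    """Determinant by cofactor expansion along the first row
    (adequate for the small matrices used here)."""
    n = len(m)
    if n == 0:
        return 1
    if n == 1:
        return m[0][0]
    total = 0
    for j in range(n):
        minor = [row[:j] + row[j + 1:] for row in m[1:]]
        total += (-1) ** j * m[0][j] * det(minor)
    return total

def eulerian_circuits(n, edges):
    """Number of Eulerian circuits of a connected Eulerian digraph on
    vertices 0..n-1, via the BEST theorem:
        ec(G) = tw(G, r) * prod_v (deg(v) - 1)!
    where tw(G, r) is the number of arborescences oriented towards an
    arbitrary root r, computed with the directed matrix-tree theorem."""
    outdeg = [0] * n
    adj = [[0] * n for _ in range(n)]
    for u, v in edges:
        outdeg[u] += 1
        adj[u][v] += 1
    # Laplacian L = D_out - A; delete the row and column of root 0.
    lap = [[(outdeg[i] if i == j else 0) - adj[i][j] for j in range(n)]
           for i in range(n)]
    minor = [row[1:] for row in lap[1:]]
    tw = det(minor)
    return tw * prod(factorial(d - 1) for d in outdeg)

# Two vertices with two parallel edges in each direction: two Eulerian circuits.
print(eulerian_circuits(2, [(0, 1), (0, 1), (1, 0), (1, 0)]))  # prints 2
```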
13:40 | Constraint Satisfaction Problems over semilattice block Mal'tsev algebras SPEAKER: Andrei Bulatov ABSTRACT. There are two well-known types of algorithms for solving CSPs: local propagation and generating a basis of the solution space. For several years the focus of CSP research has been on `hybrid' algorithms that somehow combine the two approaches. In this paper we present a new method of such hybridization that allows us to solve certain CSPs that have been out of reach for quite a while. We consider this method on a fairly restricted class of CSPs given by algebras we will call semilattice block Mal'tsev. An algebra A is called semilattice block Mal'tsev if it has a binary operation f, a ternary operation m, and a congruence s such that the quotient A/s with operation f is a semilattice, f is a projection on every block of s, and every block of s is a Mal'tsev algebra with Mal'tsev operation m. We show that the constraint satisfaction problem over a semilattice block Mal'tsev algebra is solvable in polynomial time. |
14:05 | The Limits of SDP Relaxations for General-Valued CSPs SPEAKER: unknown ABSTRACT. It has been shown that for a general-valued constraint language \Gamma the following statements are equivalent: (1) any instance of VCSP(\Gamma) can be solved to optimality using a constant level of the Sherali-Adams LP hierarchy; (2) any instance of VCSP(\Gamma) can be solved to optimality using the third level of the Sherali-Adams LP hierarchy; (3) the support of \Gamma satisfies the bounded width condition, i.e., it contains weak near-unanimity operations of all arities. We show that if the support of \Gamma violates the bounded width condition then not only is VCSP(\Gamma) not solved by a constant level of the Sherali-Adams LP hierarchy, but it is also not solved by \Omega(n) levels of the Lasserre SDP hierarchy (also known as the sum-of-squares SDP hierarchy). For \Gamma corresponding to linear equations in an Abelian group, this result follows from existing work on the inapproximability of Max-CSPs. By a breakthrough result of Lee, Raghavendra, and Steurer [STOC'15], our result implies that for any \Gamma whose support violates the bounded width condition no SDP relaxation of polynomial size solves VCSP(\Gamma). We establish our result by proving that various reductions preserve exact solvability by the Lasserre SDP hierarchy (up to a constant factor in the level of the hierarchy). Our results hold for general-valued constraint languages, i.e., sets of functions on a fixed finite domain that take on rational or infinite values, and thus also hold in the notable special cases of {0,\infty}-valued languages (CSPs), {0,1}-valued languages (Min-CSPs/Max-CSPs), and \mathbb{Q}-valued languages (finite-valued CSPs). |
14:30 | The Complexity of Minimal Inference Problem for Conservative Constraint Languages SPEAKER: Michał Wrona ABSTRACT. We study the complexity of the inference problem for propositional circumscription (the minimal inference problem) over arbitrary finite domains. The problem is of fundamental importance in nonmonotonic logics and commonsense reasoning. The complexity of the problem for the two-element domain has been completely classified [Durand, Hermann, and Nordh, Trichotomy in the complexity of minimal inference, LICS 2009]. In this paper, we classify the complexity of the problem over all conservative languages. We show that the problem is either $\Pi^P_2$-complete, coNP-complete, or in P. The classification is based on a coNP-hardness proof for a new class of languages and a new general polynomial-time algorithm solving the minimal inference problem for a large class of languages. |
14:55 | The Two Dichotomy Conjectures for Infinite Domain Constraint Satisfaction Problems Are Equivalent SPEAKER: Michael Pinsker ABSTRACT. There exist two conjectures for constraint satisfaction problems (CSPs) of reducts of finitely bounded homogeneous structures: the first one states that tractability of the CSP of such a structure is, when the structure is a model-complete core, equivalent to its polymorphism clone satisfying a certain non-trivial linear identity modulo outer embeddings. The second conjecture, challenging the approach via model-complete cores by reflections, states that tractability is equivalent to the linear identities (without outer embeddings) satisfied by its polymorphism clone, together with the natural uniformity on it, being non-trivial. We prove that the identities satisfied in the polymorphism clone of a structure allow for conclusions about the orbit growth of its automorphism group, and apply this to show that the two conjectures are equivalent. We contrast this with a counterexample showing that $\omega$-categoricity alone is insufficient to imply the equivalence of the two conditions above in a model-complete core. Taking a different approach, we then show how the Ramsey property of a homogeneous structure can be utilized for obtaining a similar equivalence under different conditions. We then prove that any polymorphism of sufficiently large arity which is totally symmetric modulo outer embeddings of a finitely bounded structure can be turned into a non-trivial system of linear identities, and obtain non-trivial linear identities for all tractable cases of reducts of the rational order, the random graph, and the random poset. Finally, we provide a new and short proof, in the language of monoids, of the existence and uniqueness of the model-complete core of an $\omega$-categorical structure. |
13:40 | An Effectful Way to Eliminate Addiction to Dependence SPEAKER: unknown ABSTRACT. We define a monadic translation of type theory, called the weaning translation, that allows for a large range of effects in dependent type theory, such as exceptions, non-termination, non-determinism, or writing operations. Through the lens of a call-by-push-value decomposition, we explain why the traditional approach fails with type dependency and justify our approach. Crucially, the construction requires that the universe of algebras of the monad itself forms an algebra, a property that we call self-algebraic monad. The weaning translation applies to a version of the Calculus of Inductive Constructions (CIC) with a restricted version of dependent elimination, coined Baclofen Type Theory. Finally, we show how to recover a translation of full CIC by mixing the parametricity technique with the weaning translation. This provides the first effectful version of CIC. |
14:05 | An interpretation of system F through bar recursion SPEAKER: Valentin Blot ABSTRACT. There are two possible computational interpretations of second-order arithmetic: Girard's system F and Spector's bar recursion. While the logic is the same, the programs obtained from these two interpretations have fundamentally different computational behavior and their relationship is not well understood. We take a step towards a comparison by defining the first translation of system F into a simply-typed total language with bar recursion. This translation relies on a realizability interpretation of second-order arithmetic. Due to Gödel's incompleteness theorem, there is no proof of termination of system F within second-order arithmetic. However, for each individual term of system F there is a proof in second-order arithmetic that it terminates, with its realizability interpretation providing a bound on the number of reduction steps needed to reach a normal form. Using this bound, we compute the normal form through primitive recursion. Moreover, since the normalization proof of system F proceeds by induction on typing derivations, the translation is compositional. The flexibility of our method opens the possibility of getting a more direct translation that will provide an alternative approach to the study of polymorphism, namely through bar recursion. |
14:30 | The Clocks Are Ticking: No More Delays! Reduction Semantics for Type Theory with Guarded Recursion SPEAKER: unknown ABSTRACT. Guarded recursion in the sense of Nakano allows general recursive types and terms to be added to type theory without breaking consistency. Recent work has demonstrated applications of guarded recursion such as programming with codata, reasoning about coinductive types, as well as constructing and reasoning about denotational models of general recursive types. Guarded recursion can also be used as an abstract form of step-indexing for reasoning about programming languages with advanced features. Ultimately, we would like to have an implementation of a type theory with guarded recursion in which all these applications can be carried out and machine-checked. In this paper, we take a step towards this goal by devising a suitable reduction semantics. We present Clocked Type Theory, a new type theory for guarded recursion that is more suitable for reduction semantics than the existing ones. We prove confluence, strong normalisation and canonicity for its reduction semantics, constructing the theoretical basis for a future implementation. We show how coinductive types as exemplified by streams can be encoded, and derive that the type system ensures productivity of stream definitions. |
14:55 | Bar Induction: The Good, the Bad, and the Ugly SPEAKER: unknown ABSTRACT. We present an extension of the computation system and logic of the Nuprl proof assistant with intuitionistic principles, namely versions of Brouwer's bar induction principle, which is equivalent to transfinite induction. We have substantially extended the formalization of Nuprl's type theory within the Coq proof assistant to show that two such bar induction principles are valid w.r.t. Nuprl's semantics (the Good): one for sequences of numbers, which involved only minor changes to the system, and a more general one for sequences of name-free (the Ugly) closed terms, which involved adding a limit constructor to Nuprl's term syntax in our model of Nuprl's logic. We have proved that these additions preserve Nuprl's key metatheoretical properties, such as consistency. Finally, we present some new insights regarding bar induction, such as the fact that the non-truncated version of bar induction on monotone bars is intuitionistically false (the Bad). |
Award Session and LICS Business Meeting
Conference Banquet