LICS 2017: THIRTY-SECOND ANNUAL ACM/IEEE SYMPOSIUM ON LOGIC IN COMPUTER SCIENCE
PROGRAM FOR THURSDAY, JUNE 22ND

09:00-10:00 Session 10: Invited Talk
Location: M1.01
09:00
Symbolic Execution and Probabilistic Reasoning

ABSTRACT. Symbolic execution is a systematic program analysis technique which explores multiple program behaviors at once by collecting and solving symbolic path conditions over program paths. The technique has recently been extended with probabilistic reasoning. This approach computes the conditions to reach target program events of interest and uses model counting to quantify the fraction of the input domain satisfying these conditions, thus computing the probability of event occurrence. This probabilistic information can be used, for example, to compute the reliability of an aircraft controller under different wind conditions (modeled probabilistically) or to quantify the leakage of sensitive data in a software system, using information-theoretic metrics such as Shannon entropy. In this talk we review recent advances in symbolic execution and probabilistic reasoning and discuss how they can be used to ensure the safety and security of software systems.
 

10:30-12:10 Session 11A
Location: M1.01
10:30
A Weakest Pre-Expectation Semantics for Mixed-Sign Expectations
SPEAKER: unknown

ABSTRACT. This paper presents a weakest-precondition-style calculus for reasoning about the expected values (pre-expectations) of mixed-sign unbounded random variables after execution of a probabilistic program. The semantics of a while-loop is well-defined as the limit of iteratively applying a functional to a zero-element, just as in the traditional weakest pre-expectation calculus, even though a standard least fixed point argument is not applicable in this context. A striking feature of our semantics is that it is always well-defined, even if the expected values do not exist. We show that the calculus is sound and present an invariant-based approach for reasoning about pre-expectations of loops.
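As a toy illustration of the iteration mentioned above (a sketch of the classical nonnegative case, not the paper's mixed-sign calculus; the function name and the example program are ours), consider the program `while flip(1/2): x += 1` with post-expectation f(x) = x. Iterating the loop functional from the zero expectation converges to the pre-expectation x + 1:

```python
def wp_loop(post, steps=60, domain=range(200)):
    """Iterate the loop functional Phi(X)(x) = 1/2*post(x) + 1/2*X(x+1)
    for `while flip(1/2): x += 1`, starting from the zero expectation,
    as in the classical weakest pre-expectation calculus."""
    X = {x: 0.0 for x in domain}  # X_0 = 0, the "zero element"
    for _ in range(steps):        # X_{n+1} = Phi(X_n)
        X = {x: 0.5 * post(x) + 0.5 * X.get(x + 1, 0.0) for x in domain}
    return X

X = wp_loop(lambda x: float(x))
# X[x] approximates x + 1: one expected increment before the coin fails.
```

With a fair coin the number of increments is geometric with mean 1, which matches the limit of the iterates.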

10:55
Verification of randomized security protocols
SPEAKER: unknown

ABSTRACT. We consider the problem of verifying the security of finitely many sessions of a protocol that tosses coins in addition to using standard cryptographic primitives, against a Dolev-Yao adversary. Two security properties are investigated: secrecy, which asks whether no attacker interacting with a protocol P can determine a secret with probability > 1-p; and indistinguishability, which asks whether the probability of observing any sequence o in protocol P1 is the same as that of observing o in protocol P2, under the same attacker. Both secrecy and indistinguishability are known to be coNP-complete for non-randomized protocols. In contrast, we show that, for randomized protocols, secrecy and indistinguishability are both decidable in coNEXPTIME. In addition we prove a matching lower bound for the secrecy problem by a reduction from the non-satisfiability problem for monadic first-order logic without equality.

11:20
MDPs with Energy-Parity Objectives
SPEAKER: unknown

ABSTRACT. Energy-parity objectives combine $\omega$-regular objectives with quantitative objectives of reward MDPs. The controller needs to avoid running out of energy while satisfying a parity objective.

We refute the common belief that if an energy-parity objective holds almost-surely, then it can be realised by some finite-memory strategy. We provide a surprisingly simple counterexample that only uses co-Büchi conditions.

We introduce the new class of bounded (energy) storage objectives that, when combined with parity objectives, preserve the finite-memory property. Based on these, we show that almost-sure and limit-sure energy-parity objectives, as well as almost-sure and limit-sure storage objectives, are in NP ∩ coNP and can be solved in pseudo-polynomial time for energy-parity MDPs.

11:45
Computing Quantiles in Markov Chains with Multi-Dimensional Costs

ABSTRACT. Probabilistic systems that accumulate quantities such as energy or cost are naturally modelled by cost chains, which are Markov chains whose transitions are labelled with a vector of numerical costs. Computing information on the probability distribution of the total accumulated cost is a fundamental problem in this model. In this paper, we study the so-called cost problem, which is to compute quantiles of the total cost, such as the median cost or the probability of large costs. While it is an open problem whether such probabilities are always computable or even rational, we present an algorithm that allows us to approximate the probabilities with arbitrary precision. The algorithm is simple to state and implement, and exploits strong results from graph theory such as the so-called BEST theorem for efficiently computing the number of Eulerian circuits in a directed graph. Moreover, our algorithm enables us to show that a decision version of the cost problem lies in the counting hierarchy, a counting analogue of the polynomial-time hierarchy that contains the latter and is included in PSPACE. Finally, we demonstrate the applicability of our algorithm by evaluating it experimentally.
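The counting step mentioned above can be made concrete. As a sketch (the function name and determinant routine are ours, not the paper's), the BEST theorem counts the Eulerian circuits of a connected Eulerian digraph as ec(G) = t_w(G) · ∏_v (deg(v) − 1)!, where t_w(G) is the number of arborescences oriented towards an arbitrary root w, itself computable via the Matrix-Tree theorem as a principal minor of the out-degree Laplacian:

```python
from fractions import Fraction
from math import factorial, prod
from collections import defaultdict

def eulerian_circuits(edges):
    """Count Eulerian circuits of a connected Eulerian digraph (edge list of
    (u, v) pairs) using the BEST theorem: ec(G) = t_w(G) * prod_v (deg(v)-1)!."""
    verts = sorted({v for e in edges for v in e})
    idx = {v: i for i, v in enumerate(verts)}
    n = len(verts)
    outdeg = defaultdict(int)
    # Out-degree Laplacian L = D_out - A (multi-edges accumulate).
    L = [[0] * n for _ in range(n)]
    for u, v in edges:
        outdeg[u] += 1
        L[idx[u]][idx[u]] += 1
        L[idx[u]][idx[v]] -= 1
    # t_w = determinant of L with the row and column of root w = verts[0]
    # removed (Matrix-Tree theorem for arborescences oriented towards w).
    M = [[Fraction(L[i][j]) for j in range(1, n)] for i in range(1, n)]
    det = Fraction(1)
    for c in range(n - 1):
        pivot = next((r for r in range(c, n - 1) if M[r][c]), None)
        if pivot is None:
            return 0
        if pivot != c:
            M[c], M[pivot] = M[pivot], M[c]
            det = -det
        det *= M[c][c]
        for r in range(c + 1, n - 1):
            f = M[r][c] / M[c][c]
            for k in range(c, n - 1):
                M[r][k] -= f * M[c][k]
    return int(det) * prod(factorial(outdeg[v] - 1) for v in verts)

# Complete digraph on 3 vertices (both directions between every pair):
print(eulerian_circuits([(0,1),(1,0),(0,2),(2,0),(1,2),(2,1)]))  # -> 3
```

Exact rational arithmetic (`Fraction`) keeps the determinant free of floating-point error, which matters since the result feeds an exact count.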

10:30-12:10 Session 11B
Location: V1.01
10:30
Differentiation in Logical Form
SPEAKER: unknown

ABSTRACT. We introduce a logical theory of differentiation for a real-valued function on a finite dimensional real Euclidean space. A real-valued continuous function is represented by a localic approximable mapping between two semi-strong proximity lattices, representing the two coherent Euclidean spaces for the domain and the range of the function. Similarly, the Clarke gradient, equivalently the L-derivative, of a locally Lipschitz map, which is non-empty, compact and convex valued, is represented by an approximable mapping. Approximable mappings of the latter type form a bounded complete domain isomorphic with the function space of Scott continuous functions of a real variable into the domain of non-empty compact and convex subsets of the finite dimensional Euclidean space partially ordered with reverse inclusion. Corresponding to the notion of a single-tie of a locally Lipschitz function, used to derive the domain-theoretic L-derivative of the function, we introduce the dual notion of a single-knot of approximable mappings, which gives rise to Lipschitzian approximable mappings. We then develop the notion of a strong tie and that of a strong knot, leading to a Stone duality result for locally Lipschitz maps and Lipschitzian approximable mappings: a locally Lipschitz map belongs to a strong single-tie associated with a compact and convex set with non-empty interior if and only if the corresponding approximable mapping belongs to the strong single-knot associated with the interior of the compact convex set. The strong single-knots to which a Lipschitzian approximable mapping belongs are employed to define the Lipschitzian derivative of the approximable mapping. The latter is dual to the Clarke subgradient of the corresponding locally Lipschitz map, defined domain-theoretically using strong single-ties.
A stricter notion of strong knots is subsequently developed which captures approximable mappings of continuously differentiable maps, providing a gradient Stone duality for these maps. Finally, we derive a calculus for the Lipschitzian derivative of approximable mappings for some basic constructors and show that it is dual to the calculus satisfied by the Clarke gradient.

10:55
On the extension of computable real functions
SPEAKER: unknown

ABSTRACT. We investigate interrelationships among different notions from mathematical analysis, effective topology, and classical computability theory. Our main object of study is the class of computable functions defined over a bounded domain whose boundary is a left-c.e. number. We investigate necessary and sufficient conditions under which such a function can be computably extended. It turns out that this depends on the behavior of the function near the boundary as well as on the class of left-c.e. numbers to which the boundary belongs, that is, on how it can be constructed. A class of functions of particular interest is investigated: sawtooth functions constructed from computable enumerations of c.e. sets.

11:20
Bounded time computation on metric spaces and Banach spaces
SPEAKER: unknown

ABSTRACT. We extend Kawamura and Cook's framework for computational complexity for operators in analysis. This model is based on second-order complexity theory for functionals on the Baire space, which is lifted to metric spaces via representations. Time is measured in the length of the input encodings and the output precision.

We propose the notions of complete and regular representations. Completeness is proven to ensure that any computable function has a time bound. Regularity relaxes Kawamura and Cook's notion of a second-order representation, while still guaranteeing fast computability of the length of encodings.

We apply these notions to investigate relationships between metric properties of a space and existence of representations that render the metric bounded-time computable. We show that time bounds for the metric can straightforwardly be translated into size bounds of compact subsets of the space. Conversely, for compact spaces and for Banach spaces we construct admissible complete regular representations admitting fast computation of the metric and short encodings. Here it is necessary to trade time bounds off against length of encodings.

11:45
The Continuity of Monadic Stream Functions
SPEAKER: unknown

ABSTRACT. We study the continuity properties of monadic stream functions. Brouwer's continuity principle states that all functions from infinite sequences of naturals to naturals are continuous, that is, for every sequence the result depends only on a finite initial segment. It is an intuitionistic axiom that is incompatible with classical mathematics. Recently Martín Escardó proved that it is also inconsistent in type theory. We propose a reformulation of the continuity principle that may be more faithful to Brouwer's original meaning. It applies to monadic streams: potentially unending sequences of values produced by steps triggered by a monadic action, possibly involving side effects. We consider functions on them that are uniform, in the sense that they operate in the same way independently of the particular monad that provides the specific side effects. Formally this is done by requiring a form of naturality in the monad. Functions on monadic streams not only have foundational importance, but also have practical applications in signal processing and reactive programming. We give algorithms to determine the modulus of continuity of monadic stream functions and to generate strategy trees for them (trees whose nodes and branches describe the interaction of the process with the environment).
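One classical way to compute a modulus of continuity, which the effectful setting above generalizes, is to instrument the input stream with a state effect that records the largest index inspected. A minimal sketch under our own naming, with streams modelled as functions from indices to values:

```python
def modulus(f, alpha):
    """Return a modulus of continuity of f at the stream alpha: any stream
    agreeing with alpha on indices below the returned value yields the same
    result. Found by instrumenting alpha with a counting (state) effect."""
    max_index = -1
    def instrumented(n):
        nonlocal max_index
        max_index = max(max_index, n)  # record every index f inspects
        return alpha(n)
    f(instrumented)
    return max_index + 1

f = lambda a: a(3) + a(7)              # inspects indices 3 and 7 only
print(modulus(f, lambda n: n))         # -> 8
```

This works for deterministic functions that interact with the stream only by querying it, which is exactly the situation the uniformity (naturality) condition above is designed to capture abstractly.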

13:40-15:20 Session 12A
Location: M1.01
13:40
Constraint Satisfaction Problems over semilattice block Mal'tsev algebras

ABSTRACT. There are two well-known types of algorithms for solving CSPs: local propagation and generating a basis of the solution space. For several years the focus of CSP research has been on `hybrid' algorithms that somehow combine the two approaches. In this paper we present a new method of such hybridization that allows us to solve certain CSPs that have been out of reach for quite a while. We apply this method to a fairly restricted class of CSPs given by algebras we call semilattice block Mal'tsev. An algebra A is called semilattice block Mal'tsev if it has a binary operation f, a ternary operation m, and a congruence s such that the quotient A/s with operation f is a semilattice, f is a projection on every block of s, and every block of s is a Mal'tsev algebra with Mal'tsev operation m. We show that the constraint satisfaction problem over a semilattice block Mal'tsev algebra is solvable in polynomial time.

14:05
The Limits of SDP Relaxations for General-Valued CSPs
SPEAKER: unknown

ABSTRACT. It has been shown that for a general-valued constraint language \Gamma the following statements are equivalent: (1) any instance of VCSP(\Gamma) can be solved to optimality using a constant level of the Sherali-Adams LP hierarchy; (2) any instance of VCSP(\Gamma) can be solved to optimality using the third level of the Sherali-Adams LP hierarchy; (3) the support of \Gamma satisfies the bounded width condition, i.e., it contains weak near-unanimity operations of all arities.

We show that if the support of \Gamma violates the bounded width condition then not only is VCSP(\Gamma) not solved by a constant level of the Sherali-Adams LP hierarchy, but it is also not solved by \Omega(n) levels of the Lasserre SDP hierarchy (also known as the sum-of-squares SDP hierarchy). For \Gamma corresponding to linear equations in an Abelian group, this result follows from existing work on inapproximability of Max-CSPs. By a breakthrough result of Lee, Raghavendra, and Steurer [STOC'15], our result implies that for any \Gamma whose support violates the bounded width condition no SDP relaxation of polynomial size solves VCSP(\Gamma).

We establish our result by proving that various reductions preserve exact solvability by the Lasserre SDP hierarchy (up to a constant factor in the level of the hierarchy). Our results hold for general-valued constraint languages, i.e., sets of functions on a fixed finite domain that take on rational or infinite values, and thus also hold in notable special cases of {0,\infty}-valued languages (CSPs), {0,1}-valued languages (Min-CSPs/Max-CSPs), and \mathbb{Q}-valued languages (finite-valued CSPs).

14:30
The Complexity of Minimal Inference Problem for Conservative Constraint Languages
SPEAKER: Michał Wrona

ABSTRACT. We study the complexity of the inference problem for propositional circumscription (the minimal inference problem) over arbitrary finite domains. The problem is of fundamental importance in nonmonotonic logics and commonsense reasoning. The complexity of the problem for the two-element domain has been completely classified [Durand, Hermann, and Nordh, Trichotomy in the complexity of minimal inference, LICS 2009]. In this paper, we classify the complexity of the problem over all conservative languages. We show that the problem is either $\Pi^P_2$-complete, coNP-complete, or in P. The classification is based on a coNP-hardness proof for a new class of languages and a new general polynomial-time algorithm solving the minimal inference problem for a large class of languages.

14:55
The Two Dichotomy Conjectures for Infinite Domain Constraint Satisfaction Problems Are Equivalent

ABSTRACT. There exist two conjectures for constraint satisfaction problems (CSPs) of reducts of finitely bounded homogeneous structures: the first one states that tractability of the CSP of such a structure is, when the structure is a model-complete core, equivalent to its polymorphism clone satisfying a certain non-trivial linear identity modulo outer embeddings. The second conjecture, challenging the approach via model-complete cores by reflections, states that tractability is equivalent to the linear identities (without outer embeddings) satisfied by its polymorphism clone, together with the natural uniformity on it, being non-trivial.

We prove that the identities satisfied in the polymorphism clone of a structure allow for conclusions about the orbit growth of its automorphism group, and apply this to show that the two conjectures are equivalent. We contrast this with a counterexample showing that $\omega$-categoricity alone is insufficient to imply the equivalence of the two conditions above in a model-complete core.

Taking a different approach, we then show how the Ramsey property of a homogeneous structure can be utilized for obtaining a similar equivalence under different conditions.

We then prove that any polymorphism of sufficiently large arity which is totally symmetric modulo outer embeddings of a finitely bounded structure can be turned into a non-trivial system of linear identities, and obtain non-trivial linear identities for all tractable cases of reducts of the rational order, the random graph, and the random poset.

Finally, we provide a new and short proof, in the language of monoids, of the existence and uniqueness of the model-complete core of an $\omega$-categorical structure.

13:40-15:20 Session 12B
Location: V1.01
13:40
An Effectful Way to Eliminate Addiction to Dependence
SPEAKER: unknown

ABSTRACT. We define a monadic translation of type theory, called the weaning translation, that allows for a large range of effects in dependent type theory, such as exceptions, non-termination, non-determinism, or writing operations. Through the lens of a call-by-push-value decomposition, we explain why the traditional approach fails with type dependency and justify our approach. Crucially, the construction requires that the universe of algebras of the monad itself forms an algebra, a property we call being a self-algebraic monad. The weaning translation applies to a version of the Calculus of Inductive Constructions (CIC) with a restricted version of dependent elimination, coined Baclofen Type Theory. Finally, we show how to recover a translation of full CIC by mixing the parametricity technique with the weaning translation. This provides the first effectful version of CIC.

14:05
An interpretation of system F through bar recursion
SPEAKER: Valentin Blot

ABSTRACT. There are two possible computational interpretations of second-order arithmetic: Girard's system F and Spector's bar recursion. While the logic is the same, the programs obtained from these two interpretations have a fundamentally different computational behavior and their relationship is not well understood. We take a step towards a comparison by defining the first translation of system F into a simply-typed total language with bar recursion. This translation relies on a realizability interpretation of second-order arithmetic. Due to Gödel's incompleteness theorem there is no proof of termination of system F within second-order arithmetic. However, for each individual term of system F there is a proof in second-order arithmetic that it terminates, with its realizability interpretation providing a bound on the number of reduction steps to reach a normal form. Using this bound, we compute the normal form through primitive recursion. Moreover, since the normalization proof of system F proceeds by induction on typing derivations, the translation is compositional. The flexibility of our method opens the possibility of getting a more direct translation that will provide an alternative approach to the study of polymorphism, namely through bar recursion.

14:30
The Clocks Are Ticking: No More Delays! Reduction Semantics for Type Theory with Guarded Recursion
SPEAKER: unknown

ABSTRACT. Guarded recursion in the sense of Nakano allows general recursive types and terms to be added to type theory without breaking consistency. Recent work has demonstrated applications of guarded recursion such as programming with codata, reasoning about coinductive types, as well as constructing and reasoning about denotational models of general recursive types. Guarded recursion can also be used as an abstract form of step-indexing for reasoning about programming languages with advanced features.

Ultimately, we would like to have an implementation of a type theory with guarded recursion in which all these applications can be carried out and machine-checked. In this paper, we take a step towards this goal by devising a suitable reduction semantics.

We present Clocked Type Theory, a new type theory for guarded recursion that is more suitable for reduction semantics than the existing ones. We prove confluence, strong normalisation and canonicity for its reduction semantics, constructing the theoretical basis for a future implementation. We show how coinductive types as exemplified by streams can be encoded, and derive that the type system ensures productivity of stream definitions.

14:55
Bar Induction: The Good, the Bad, and the Ugly
SPEAKER: unknown

ABSTRACT. We present an extension of the computation system and logic of the Nuprl proof assistant with intuitionistic principles, namely versions of Brouwer's bar induction principle, which is equivalent to transfinite induction. We have substantially extended the formalization of Nuprl's type theory within the Coq proof assistant to show that two such bar induction principles are valid w.r.t. Nuprl's semantics (the Good): one for sequences of numbers, which involved only minor changes to the system, and a more general one for sequences of name-free closed terms (the Ugly), which involved adding a limit constructor to Nuprl's term syntax in our model of Nuprl's logic. We have proved that these additions preserve Nuprl's key metatheoretical properties such as consistency. Finally, we show some new insights regarding bar induction, such as the fact that the non-truncated version of bar induction on monotone bars is intuitionistically false (the Bad).

15:50-17:50 Session 13: Award Session and LICS Business Meeting


Location: M1.01