
09:00-10:00 Session 12: Keynote
Location: H 3010
Constant Delay Enumeration of Query Results

ABSTRACT. Query evaluation is one of the central tasks of a database system. The theoretical foundations of query evaluation rely on a close connection between database theory and mathematical logic, as relational databases correspond to finite relational structures, and queries can be formulated by logical formulae. In the past decade, starting with Durand and Grandjean's 2007 paper, the fields of logic in computer science and database theory have seen a number of contributions that deal with the efficient enumeration of query results. In this scenario, the objective is as follows: given a structure (i.e., a database) and a logical formula (i.e., a query), after a short precomputation phase, the query results shall be generated one by one, without repetition, with guarantees on the maximum delay time between the output of two tuples. In this vein, the best that one can hope for is constant delay (i.e., the delay may depend on the size of the query but not on that of the database) and linear preprocessing time. By now, quite a number of query evaluation problems are known to admit constant delay algorithms preceded by linear or pseudo-linear time preprocessing. In this keynote, I will focus on results and proof techniques concerning first-order queries evaluated on relational structures.
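As a minimal illustration of this regime (a toy example of our own, not taken from the keynote): for the query Q(x, y) ← R(x) ∧ S(x, y), a linear-time preprocessing pass builds an index on S, after which the answers can be emitted one by one with constant delay between consecutive outputs.

```python
from collections import defaultdict

# Toy sketch (our own example): constant-delay enumeration for the
# conjunctive query Q(x, y) :- R(x), S(x, y).
def preprocess(R, S):
    """Linear-time preprocessing: index S by its first column and keep
    only the x-values that occur in R. (Sorting below is merely to make
    the output deterministic; it is not needed for constant delay.)"""
    index = defaultdict(list)
    for x, y in S:
        index[x].append(y)
    order = sorted(x for x in R if x in index)
    return order, index

def enumerate_answers(order, index):
    """After preprocessing, each answer is produced with constant delay:
    between two outputs, only a constant amount of work happens."""
    for x in order:
        for y in index[x]:
            yield (x, y)

R = {1, 2, 3}
S = [(1, 'a'), (1, 'b'), (3, 'c'), (4, 'd')]
order, index = preprocess(R, S)
print(list(enumerate_answers(order, index)))  # [(1, 'a'), (1, 'b'), (3, 'c')]
```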

10:00-10:30 Coffee break
10:30-11:10 Session 13A

Probability and Privacy

Location: H 3010
Continuous-Time Markov Decisions based on Partial Exploration

ABSTRACT. We provide a framework for speeding up algorithms for time-bounded reachability analysis of continuous-time Markov decision processes. The principle is to find a small, but almost equivalent subsystem of the original system and only analyse the subsystem. Candidates for the subsystem are identified through simulations and iteratively enlarged until runs are represented in the subsystem with high enough probability. The framework is thus dual to that of abstraction refinement. We instantiate the framework in several ways with several traditional algorithms and experimentally confirm orders-of-magnitude speed ups in many cases.

A paper based on this work is currently under submission.

Conditional Value-at-Risk for Reachability and Mean Payoff in Markov Decision Processes

ABSTRACT. We present the conditional value-at-risk (CVaR) in the context of Markov chains and Markov decision processes with reachability and mean-payoff objectives. CVaR quantifies risk by means of the expectation of the worst p-quantile. As such it can be used to design risk-averse systems. We consider not only CVaR constraints, but also introduce their conjunction with expectation constraints and quantile constraints (value-at-risk, VaR). We derive lower and upper bounds on the computational complexity of the respective decision problems and characterize the structure of the strategies in terms of memory and randomization.

This work was presented at LICS'18.
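For a finite payoff distribution, the CVaR at level p can be computed directly as the expectation of the worst p-fraction of the probability mass (a hedged sketch; the function name and input format are our own, not from the paper).

```python
def cvar(outcomes, p):
    """Conditional value-at-risk at level p for a finite distribution:
    the expectation of the worst p-fraction of the probability mass.
    `outcomes` is a list of (value, probability) pairs."""
    assert 0 < p <= 1
    worst_first = sorted(outcomes)        # ascending value = worst outcomes first
    mass, total = 0.0, 0.0
    for value, prob in worst_first:
        take = min(prob, p - mass)        # clip at the quantile boundary
        total += take * value
        mass += take
        if mass >= p:
            break
    return total / p

# Fair coin paying 0 or 10: at p = 0.5 the CVaR averages only the worst
# half of the mass (all 0s); at p = 1 it is the plain expectation.
print(cvar([(0, 0.5), (10, 0.5)], 0.5))  # 0.0
print(cvar([(0, 0.5), (10, 0.5)], 1.0))  # 5.0
```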

Threshold Constraints with Guarantees for Parity Objectives in Markov Decision Processes

ABSTRACT. The beyond worst-case synthesis problem was introduced recently by Bruyère et al.: it aims at building system controllers that provide strict worst-case performance guarantees against an antagonistic environment while ensuring higher expected performance against a stochastic model of the environment. This talk presents an extension of this framework, which previously focused on quantitative objectives, to the case of omega-regular conditions encoded as parity objectives, a natural way to represent functional requirements of systems.

We build strategies that satisfy a main parity objective on all plays, while ensuring a secondary one with sufficient probability. This setting raises new challenges in comparison to quantitative objectives, as one cannot easily mix different strategies without endangering the functional properties of the system. We establish that, for all variants of this problem, deciding the existence of a strategy lies in NP \cap coNP, the same complexity class as classical parity games. Hence, our framework provides additional modeling power while staying in the same complexity class.

Bisimilarity Distances for Approximate Differential Privacy

ABSTRACT. Differential privacy is a widely studied notion of privacy for various models of computation. Technically, it is based on measuring differences between generated probability distributions. We study (epsilon,delta)-differential privacy in the setting of Labelled Markov Chains. While the exact differences relevant to (epsilon,delta)-differential privacy are not computable in this framework, we propose a computable bisimilarity distance that yields a sound technique for measuring delta, the parameter that quantifies deviation from pure differential privacy. We show that the distance is always rational, the associated threshold problem is in NP, and the distance can be computed exactly with polynomially many calls to an NP oracle.

10:30-11:10 Session 13B

Algebra and Automata

Location: H 2013
Pointlike sets for varieties determined by groups

ABSTRACT. The separation problem for a variety V of regular languages asks to decide whether two disjoint regular languages can be separated by a V-language. The separation problem is a special case of the covering problem, which, in turn, is the language-theoretic reformulation of the problem of computing the pointlike subsets of a finite semigroup with respect to the variety of finite semigroups corresponding to the variety V of regular languages.

Henckell computed the pointlike sets with respect to aperiodic semigroups, i.e., those semigroups in which all subgroups are trivial. In language-theoretic terms, Henckell's theorem shows that the covering, and hence also separation, problem for first-order-definable languages is decidable. I will present a sweeping generalization of Henckell's theorem, recently obtained in joint work with B. Steinberg: we show that, in the definition of aperiodic semigroup, one may replace "trivial" by any decidable variety of finite groups, and still obtain computability of pointlike sets.

Making this statement more precise, for any variety of finite groups H, let H-bar denote the variety of finite semigroups whose subgroups are in H. Our main result is that the H-bar-pointlike sets are computable whenever membership in H is decidable. The main tools we use in the proof are Schützenberger groups, a variant of the Rhodes expansion, and asynchronous transducers. For a new language-theoretic application, taking H to be the variety of finite solvable groups, our result shows that the covering and separation problems for languages definable in first order logic with modular quantifiers are decidable.

Rational, Recognizable, and Aperiodic Sets in the Partially Lossy Queue Monoid

ABSTRACT. Partially lossy queue monoids (or plq monoids) model the behavior of queues that can forget some parts of their content. Partially lossy queues are a model between reliable and lossy queues. So, these monoids generalize the (reliable) queue monoid [HusKZ17] and the lossy queue monoid [Koe16].

While many decision problems like universality, equivalence, or emptiness of intersection on recognizable subsets in the plq monoid are decidable, most of them are undecidable if the sets are rational. In particular, in this monoid the classes of rational and recognizable subsets do not coincide.

To give a Kleene-type characterization of the recognizable sets in the plq monoid, we use an approach like the one by Ochmański [Och85]: we restrict multiplication and iteration in the construction of rational sets. By these restrictions, and by allowing complementation, we obtain precisely the class of recognizable sets. From these special rational expressions we can also obtain an MSO logic describing the recognizable subsets, as in [Bue60, Die95] for regular languages and recognizable trace languages, respectively.

Moreover, we show connections between star-free sets, aperiodic sets, and an FO logic in the plq monoid, similar to the known results for aperiodic word and trace languages [Sch65, McNP71, DieR95].

A new hierarchy for automaton semigroups

ABSTRACT. We define a new strict and computable hierarchy for the family of automaton semigroups, which reflects the various asymptotic behaviours of the state-activity growth. This hierarchy extends that given by Sidki for automaton groups, and also gives new insights into the latter. Its exponential part coincides with a notion of entropy for some associated automata. We prove that the Order Problem is decidable when the state-activity is bounded. The Order Problem remains open for the next level of this hierarchy, that is, when the state-activity is linear. Gillibert showed that it is undecidable in the whole family.

A Ramsey Theorem for semigroups

ABSTRACT. In automata theory, it is useful to expose patterns that can be iterated. Depending on the model considered, having a loop (repetition of the same state) along a computation might not be sufficient to ensure good properties once the loop is iterated. This is sometimes solved by requiring an idempotent structure of either the corresponding element of the transition monoid of the automaton, or, in the case of transducers, of the output produced along the loop.

In this talk, we will consider the following problem. Given a semigroup S and an integer λ, we would like to exhibit a bound B such that every sequence of elements of S of length greater than B admits λ consecutive factors corresponding to the same idempotent element of S.

This question can be answered by applying Ramsey's theorem, or Simon's factorisation forest theorem. However, this yields bounds that are exponential with respect to the size of S. We improve this result by exhibiting a new bound that is exponential with respect to the size of the maximal J-chain of S. For some semigroups this matches the previous bounds, but we will see that there exist semigroups where the new bound is exponentially better (groups, transformation monoids, substitution monoids).

Finally, we show how this result can be used to obtain pumping Lemmas for regular relations, i.e., relations definable by non-deterministic streaming string transducers.
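The property that the bound B guarantees can be checked by brute force on small instances (a hypothetical illustration of our own; the function names and encoding are not from the talk). A finite semigroup is given by its multiplication table, and we search for λ consecutive factors that all multiply to the same idempotent.

```python
from itertools import combinations

# Toy sketch (our own encoding): a finite semigroup as a multiplication
# table, and a brute-force check for lam consecutive idempotent factors.
def semigroup_product(table, factor):
    result = factor[0]
    for x in factor[1:]:
        result = table[result][x]
    return result

def has_idempotent_run(table, seq, lam):
    """True iff some window of seq splits into lam consecutive factors
    that all multiply to the same idempotent element e (i.e. e*e = e)."""
    n = len(seq)
    for cuts in combinations(range(n + 1), lam + 1):
        factors = [seq[cuts[i]:cuts[i + 1]] for i in range(lam)]
        values = {semigroup_product(table, f) for f in factors}
        if len(values) == 1:
            e = values.pop()
            if table[e][e] == e:
                return True
    return False

semilattice = [[0, 0], [0, 1]]   # {0, 1} under min; every element idempotent
print(has_idempotent_run(semilattice, [1, 1, 1], 2))   # True
```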

10:30-11:10 Session 13C

Vector Addition Systems

Location: H 2032
The Reachability Problem for Vector Addition Systems is Not Elementary

ABSTRACT. The reachability problem for Vector Addition Systems is currently known to be decidable (the best upper bound is cubic-Ackermann) and ExpSpace-hard. We provide a better lower bound, showing that the problem is in fact not elementary. The construction is based on the observation that certain kinds of fraction equations can have surprisingly involved solutions, and that Vector Addition Systems are able to implement that phenomenon.

Polynomial Vector Addition Systems With States

ABSTRACT. The reachability problem for vector addition systems is one of the most difficult and central problems in theoretical computer science. The problem is known to be decidable, but despite intense investigation during the last four decades, the exact complexity is still open. For some sub-classes, the complexity of the reachability problem is known. Structurally bounded vector addition systems, the class of vector addition systems with finite reachability sets from any initial configuration, is one of those classes. In fact, the reachability problem was shown to be polynomial-space complete for that class by Praveen and Lodaya in 2008. Surprisingly, extending this property to vector addition systems with states is open. In fact, there exist vector addition systems with states that are structurally bounded but with Ackermannian-size sets of reachable configurations. It follows that the reachability problem for that class is between exponential space and Ackermannian. In this paper we introduce the class of polynomial vector addition systems with states, defined as the class of vector addition systems with states in which the size of every reachable configuration is bounded polynomially in the size of the initial one. We prove that the reachability problem for polynomial vector addition systems is exponential-space complete. Additionally, we show that we can decide in polynomial time whether a vector addition system with states is polynomial. This characterization introduces the notion of iteration scheme, with potential applications to the reachability problem for general vector addition systems.

Efficient Algorithms for Asymptotic Bounds on Termination Time in VASS

ABSTRACT. Vector Addition Systems with States (VASS) provide a well-known and fundamental model for the analysis of concurrent processes, parameterized systems, and are also used as abstract models of programs in resource bound analysis. In this paper we study the problem of obtaining asymptotic bounds on the termination time of a given VASS. In particular, we focus on the practically important case of obtaining polynomial bounds on termination time. Our main contributions are as follows: First, we present a polynomial-time algorithm for deciding whether a given VASS has a linear asymptotic complexity. We also show that if the complexity of a VASS is not linear, it is at least quadratic. Second, we classify VASS according to quantitative properties of their cycles. We show that certain singularities in these properties are the key reason for non-polynomial asymptotic complexity of VASS. In absence of singularities, we show that the asymptotic complexity is always polynomial and of the form Θ(n^k), for some integer k ≤ d, where d is the dimension of the VASS. We present a polynomial-time algorithm computing the optimal k. For general VASS, the same algorithm, which is based on a complete technique for the construction of ranking functions in VASS, produces a valid lower bound, i.e., a k such that the termination complexity is Ω(n^k). Our results are based on new insights into the geometry of VASS dynamics, which hold the potential for further applicability to VASS analysis.

Unboundedness problems for languages of vector addition systems

ABSTRACT. A vector addition system (VAS) with an initial and a final marking and transition labels induces a language. In part because the reachability problem in VAS remains far from being well-understood, it is difficult to devise decision procedures for such languages. This is especially true for checking properties that state the existence of infinitely many words of a particular shape. Informally, we call these unboundedness properties.

We present a simple set of axioms for predicates that can express unboundedness properties. Our main result is that such a predicate is decidable for VAS languages as soon as it is decidable for regular languages. Among other results, this allows us to show decidability of (i) separability by bounded regular languages, (ii) unboundedness of occurring factors from a language K with mild conditions on K, and (iii) universality of the set of factors.

This is joint work with Wojciech Czerwiński and Piotr Hofman and has appeared in the proceedings of ICALP 2018.

11:20-12:00 Session 14: Spotlight
Location: H 3010
Nonarchimedean Convex Programming and Its Relation to Mean-Payoff Games

ABSTRACT. Linear programming, and more generally convex semialgebraic programming, makes sense over any ordered nonarchimedean field, like a field of real Puiseux series. Nonarchimedean instances encode classical parametric instances of convex programs with a suitably large parameter. Tropical geometry allows one to study such instances by combinatorial means. In particular, it reveals that, under a genericity condition, solving a nonarchimedean feasibility problem is equivalent to deciding the winner of a mean-payoff game. Indeed, nonarchimedean linear programs correspond to deterministic mean-payoff games, whereas nonarchimedean semidefinite programs correspond to perfect-information stochastic mean-payoff games. In this way, one can apply methods from convex programming to mean-payoff games, and vice versa. We will present here the main ideas and tools from tropical geometry developed in this approach, and illustrate them with two results: a counterexample consisting of a family of linear programs with large coefficients on which log-barrier interior point methods are not strongly polynomial (they make a number of iterations exponential in the number of constraints and variables); and a theorem transferring complexity results concerning pivoting methods in linear programming to mean-payoff games.

This is based on a series of works with Allamigeon, Benchimol and Joswig, concerning the tropicalization of the central path and of pivoting methods, and with Allamigeon and Skomra, for the tropicalization of semidefinite programming.

12:00-13:30 Lunch break
13:30-14:50 Session 15: Invited Session

Multi-Objective Reasoning in Probabilistic Models

Location: H 3010
Efficient analysis of probabilistic systems that accumulate quantities

ABSTRACT. Probabilistic systems that accumulate quantities such as energy and/or cost are naturally modelled by Markov chains whose transitions are labelled with vectors of numbers. Computing information on the probability distribution of the total accumulated cost is a fundamental problem in this model. The naive way of solving this problem is to encode the accumulated quantities in the state space of the model, incurring an exponential blowup. One can improve on this approach. I will explain how results on Presburger arithmetic, language theory and graph theory lead to more efficient algorithms and verification tools.

Joint work with Christoph Haase and Markus Lohrey.
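The naive state-space encoding mentioned above can be made concrete as follows (a toy example with hypothetical transition data, not from the talk): propagate probability mass over pairs (state, accumulated cost), which makes the blowup in the accumulated component explicit.

```python
from collections import defaultdict

# Toy sketch of the naive approach: track the accumulated cost inside
# the state, here for a tiny acyclic Markov chain with cost labels.
# transitions[s] = list of (probability, cost, successor); 'done' absorbs.
transitions = {
    's':     [(0.5, 1, 's_mid'), (0.5, 2, 'done')],
    's_mid': [(1.0, 3, 'done')],
}

dist = defaultdict(float)            # distribution of the total cost
frontier = {('s', 0): 1.0}           # mass per (state, accumulated cost)
while frontier:
    nxt = defaultdict(float)
    for (state, acc), p in frontier.items():
        if state == 'done':
            dist[acc] += p           # absorbed: record the total cost
            continue
        for q, c, t in transitions[state]:
            nxt[(t, acc + c)] += p * q
    frontier = nxt

print(dict(dist))   # {2: 0.5, 4: 0.5}
```

The (state, cost) pairs are exactly the exponentially many states the abstract warns about; the talk's point is that Presburger, language, and graph techniques avoid materialising them.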

Probabilistic Model Checking: Advances and Applications

ABSTRACT. Probabilistic model checking is a technique for formally verifying quantitative properties of systems that exhibit stochastic behavior. It can provide formal guarantees on a wide range of system properties, such as safety, reliability, energy efficiency or security, and can be used to investigate trade-offs between them. This talk will give an overview of the probabilistic model checking tool PRISM, and describe some of the current advances being made in its development. This includes techniques for controller synthesis and multi-objective verification, as well as support for new probabilistic models such as stochastic games and partially observable Markov decision processes. These will be illustrated with applications from various domains including robotics, energy management and task scheduling.

14:50-15:20 Coffee break
15:20-16:10 Session 16A

Register Automata and Weighted Automata

Location: H 3010
The Containment Problem for Unambiguous Register Automata

ABSTRACT. We study the containment problem for unambiguous register automata.
Register automata, introduced by Kaminski and Francez in 1994, extend finite automata with a finite set of registers.
Register automata can process data words, i.e., finite words over an infinite data domain; the registers are used to store data coming from the input word for later comparisons. A register automaton is unambiguous if it has at most one accepting run for every input data word. We prove that the class of languages recognized by these automata is not closed under complement, thereby preventing a classical reduction of the containment problem to the emptiness problem for register automata. We nonetheless prove that the containment problem for unambiguous register automata is decidable, based on a reduction to a reachability problem on some finite graph.
(This is joint work with Antoine Mottet.)

Polynomial-time equivalence testing for deterministic fresh-register automata

ABSTRACT. Register automata are one of the most studied automata models over infinite alphabets. The complexity of language equivalence for register automata is quite subtle. In general, the problem is undecidable but, in the deterministic case, it is known to be decidable and in NP. Here we propose a polynomial-time algorithm building upon automata- and group-theoretic techniques. The algorithm is applicable to standard register automata with a fixed number of registers as well as their variants with a variable number of registers and ability to generate fresh data values (fresh-register automata). To complement our findings, we also investigate the associated inclusion problem and show that it is PSPACE-complete.

This is joint work with Andrzej Murawski and Steven Ramsay.

Weak Cost Register Automata are Still Powerful

ABSTRACT. We consider one of the weakest variants of cost register automata over a tropical semiring, namely copyless cost register automata over N with updates using min and increments. We show that this model can simulate, in some sense, the runs of counter machines with zero-tests. We deduce that a number of problems pertaining to that model are undecidable, in particular equivalence, disproving a conjecture of Alur et al. from 2012. To emphasize how weak these machines are, we also show that they can be expressed as a restricted form of linearly-ambiguous weighted automata.

Joint work with S. Almagor, F. Mazowiecki and G. Perez.

Pumping lemmas for weighted automata

ABSTRACT. We present three pumping lemmas for three classes of functions definable by fragments of weighted automata over the min-plus semiring and the semiring of natural numbers. As a corollary we show that the hierarchy of functions definable by unambiguous, finitely-ambiguous, polynomially-ambiguous weighted automata, and the full class of weighted automata is strict for the min-plus semiring.

This is joint work with Cristian Riveros.

What can you expect of a weighted automaton?

ABSTRACT. We present the first study of non-deterministic weighted automata under probabilistic semantics. In this semantics, words are random events and the functions computed by weighted automata are random variables. We consider the probabilistic questions of computing the expected value and the cumulative distribution of such random variables.

The exact answers to the probabilistic questions for non-deterministic automata can be irrational and are uncomputable in general. To overcome this limitation, we propose an approximation algorithm for the probabilistic questions, which works in exponential time in the automaton and polynomial time in the Markov chain. We apply this result to show that non-deterministic automata can be effectively determinised with respect to the standard deviation metric.

This is joint work with Jan Otop.

15:20-16:10 Session 16B

Logic and Games

Location: H 2013
Reachability for Branching Concurrent Stochastic Games

ABSTRACT. We give polynomial time algorithms for deciding almost-sure and limit-sure reachability in Branching Concurrent Stochastic Games (BCSGs). These are a class of infinite-state imperfect-information stochastic games that generalize both finite-state concurrent stochastic reachability games, as well as branching simple stochastic reachability games.

(Joint work with Kousha Etessami, Alistair Stewart, and Mihalis Yannakakis.)

Towards Partial Order Reductions for Strategic Ability

ABSTRACT. Alternating-time temporal logic ATL* and its fragment ATL extend temporal logic with the notion of strategic ability. However, there are two caveats. First, many tools and algorithmic solutions focus on agents with perfect information. This is clearly unrealistic in all but the simplest multi-agent scenarios (though somewhat easy to understand, since model checking of ATL variants with imperfect information is both theoretically and practically hard). Secondly, the semantics of strategic logics are almost exclusively based on synchronous concurrent game models. Still, many real-life systems are inherently asynchronous. No less importantly, many systems that are synchronous at the implementation level can be more conveniently modeled as asynchronous on a more abstract level.

In this paper, we make the first step towards strategic analysis of asynchronous systems with imperfect information. Our contribution is threefold. First, we define a semantics of strategic abilities for agents in asynchronous systems, with and without perfect information. Secondly, we present some general complexity results for verification of strategic abilities in such systems. Thirdly, and most importantly, we adapt *partial order reduction (POR)* to model checking of strategic abilities for agents with imperfect information. We also demonstrate that POR allows us to significantly reduce the size of the model, and thus to make the verification more feasible. In fact, we show that the most efficient known variant of POR, defined for linear time logic LTL, can be applied almost directly. The (nontrivial) proof that the LTL reductions work also for the more expressive strategic operators is the main contribution of this paper.

Interestingly, it turns out that the scheme does *not* work for ATL with perfect information strategies. Until now, virtually all the results have suggested that verification of strategic abilities is significantly easier for agents with perfect information. Thus, we identify an aspect of verification that might be in favor of imperfect information strategies in some contexts.

(This is joint work with Wojciech Penczek, Piotr Dembiński, and Antoni Mazurkiewicz, to appear soon at AAMAS 2018)

Beyond Admissibility: Rationality in Quantitative Games

ABSTRACT. When modelling interactive scenarios with games played on finite graphs, one can face the situation that a player has no winning strategy, that is, no behaviour that allows her to attain her objective regardless of the behaviours of the other players. In such cases, one may wonder what constitutes a rational behaviour. In the boolean setting, admissibility has been shown to be a good criterion for rationality: indeed, an admissible strategy is a strategy that is not dominated: no other strategy would yield a better outcome against every behaviour of the other players. Furthermore, each strategy is either admissible or dominated by an admissible strategy. This property is fundamental in the sense that for any behaviour, there always exists a corresponding rational behaviour. This fundamental property does not hold anymore in the quantitative setting. We show that by switching from judging each individual strategy as rational or not to considering families of strategies that share a common behaviour pattern, we can recover a satisfactory rationality notion. Finally, we exhibit a sufficient condition for a game to satisfy a similar fundamental property lifted to families of strategies.

Reasoning about Knowledge and Strategies

ABSTRACT. Two distinct semantics have been considered for knowledge in the context of strategic reasoning, depending on whether players know each other’s strategy or not. In the former case, that we call the informed semantics, distributed synthesis for epistemic temporal specifications is undecidable, already on systems with hierarchical information. However, for the other, uninformed semantics, the problem is decidable on such systems. In this work we generalise this result by introducing an epistemic extension of Strategy Logic with imperfect information. The semantics of knowledge operators is uninformed, and captures agents that can change observation power when they change strategies. We solve the model-checking problem on a class of "hierarchical instances", which provides a solution to a vast class of strategic problems with epistemic temporal specifications, such as distributed or rational synthesis, on hierarchical systems.

Constrained existence problem for weak subgame perfect equilibria with omega-regular Boolean objectives

ABSTRACT. We study multiplayer turn-based games played on a finite directed graph such that each player aims at satisfying an omega-regular Boolean objective. Instead of the well-known notions of Nash equilibrium (NE) and subgame perfect equilibrium (SPE), we focus on the recent notion of weak subgame perfect equilibrium (weak SPE), a refinement of SPE. In this setting, players who deviate can only use the subclass of strategies that differ from the original one on a finite number of histories. We are interested in the constrained existence problem for weak SPEs. We provide a complete characterization of the computational complexity of this problem: it is P-complete for Explicit Muller objectives, NP-complete for Co-Büchi, Parity, Muller, Rabin, and Streett objectives, and PSPACE-complete for Reachability and Safety objectives (we only prove NP-membership for Büchi objectives). We also show that the constrained existence problem is fixed parameter tractable and is polynomial when the number of players is fixed. All these results are based on a fine analysis of a fixpoint algorithm that computes the set of possible payoff profiles underlying weak SPEs.

This is joint work with Thomas Brihaye, Véronique Bruyère, and Jean-François Raskin.

15:20-16:10 Session 16C

Databases and Complexity

Location: H 2032
Reachability and Distances under Multiple Changes

ABSTRACT. Recently it was shown that the transitive closure of a directed graph can be updated using first-order formulas after insertions and deletions of single edges in the dynamic descriptive complexity framework by Dong, Su, and Topor, and Patnaik and Immerman. In other words, Reachability is in DynFO.

In this talk we extend the framework to changes of multiple edges at a time, and study the Reachability and Distance queries under these changes. We show that the former problem can be maintained in DynFO(+,×) under changes affecting O(log n / log log n) nodes, for graphs with n nodes. If the update formulas may use a majority quantifier, then both Reachability and Distance can be maintained under changes that affect O(log^c n) nodes, for any fixed natural number c.

This talk is based on a paper with the same title, presented at ICALP 2018, which is joint work with Samir Datta, Anish Mukherjee and Thomas Zeume.
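As a reminder of the single-change case that the talk generalises (a toy sketch, not the talk's multi-change construction): after inserting an edge (u, v), the reflexive transitive closure T is updated by the first-order formula T'(x, y) ≡ T(x, y) ∨ (T(x, u) ∧ T(v, y)). Edge deletions are the hard part and need the full DynFO machinery, so only insertion is sketched here.

```python
# Toy sketch: first-order update of the (reflexive) transitive closure
# under a single edge insertion (u, v):
#   T'(x, y) = T(x, y) or (T(x, u) and T(v, y)).
def insert_edge(T, nodes, u, v):
    return {(x, y) for x in nodes for y in nodes
            if (x, y) in T or ((x, u) in T and (v, y) in T)}

nodes = range(4)
T = {(x, x) for x in nodes}           # reflexive closure of the empty graph
T = insert_edge(T, nodes, 0, 1)
T = insert_edge(T, nodes, 1, 2)
print((0, 2) in T)   # True: 0 -> 1 -> 2
print((2, 0) in T)   # False
```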

Counting Triangles under Updates in Worst-Case Optimal Time

ABSTRACT. We consider the problem of incrementally maintaining the triangle count query under single-tuple updates to the input relations. We introduce an approach that exhibits a space-time tradeoff such that the space-time product is quadratic in the size of the input database and the update time can be as low as the square root of this size. This lowest update time is worst-case optimal conditioned on the Online Matrix-Vector Multiplication conjecture.

The classical and factorized incremental view maintenance approaches are recovered as special cases of our approach within the space-time tradeoff. In particular, they require linear-time update maintenance, which is suboptimal. Our approach can also be used to count all triangles in a static database in the worst-case optimal time to enumerate the triangles.

This is joint work with Hung Q. Ngo, Milos Nikolic, Dan Olteanu, and Haozhe Zhang, and will appear at ICDT 2019.
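The classical incremental baseline that the abstract calls suboptimal (linear update time) is easy to sketch: on each single-edge update, intersect the neighbourhoods of the edge's endpoints. This is our own illustrative code, not the paper's worst-case-optimal data structure.

```python
# Toy sketch of the classical linear-update-time baseline: maintain the
# triangle count of an undirected graph under single-edge updates.
class TriangleCounter:
    def __init__(self):
        self.adj = {}        # vertex -> set of neighbours
        self.count = 0

    def _nbrs(self, v):
        return self.adj.setdefault(v, set())

    def insert(self, u, v):
        # every common neighbour of u and v closes a new triangle
        self.count += len(self._nbrs(u) & self._nbrs(v))
        self._nbrs(u).add(v)
        self._nbrs(v).add(u)

    def delete(self, u, v):
        self._nbrs(u).discard(v)
        self._nbrs(v).discard(u)
        # every remaining common neighbour witnessed a destroyed triangle
        self.count -= len(self._nbrs(u) & self._nbrs(v))

tc = TriangleCounter()
for e in [(1, 2), (2, 3), (1, 3), (3, 4), (1, 4)]:
    tc.insert(*e)
print(tc.count)   # 2 triangles: {1,2,3} and {1,3,4}
tc.delete(1, 3)
print(tc.count)   # 0
```

The intersection costs time linear in the degree, hence linear in the database size in the worst case; the paper's tradeoff gets the update time down to roughly the square root of that.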

Answering UCQs under updates

ABSTRACT. We investigate the query evaluation problem for fixed queries over fully dynamic databases where tuples can be inserted or deleted. The task is to design a dynamic data structure that can immediately report the new result of a fixed query after every database update. We consider unions of conjunctive queries (UCQs) and focus on the query evaluation tasks testing (decide whether an input tuple a belongs to the query result), enumeration (enumerate, without repetition, all tuples in the query result), and counting (output the number of tuples in the query result).

We identify three increasingly restrictive classes of UCQs which we call t-hierarchical, q-hierarchical, and exhaustively q-hierarchical UCQs. Our main results provide the following dichotomies: If the query's homomorphic core is t-hierarchical (q-hierarchical, exhaustively q-hierarchical), then the testing (enumeration, counting) problem can be solved with constant update time and constant testing time (delay, counting time). Otherwise, it cannot be solved with sublinear update time and sublinear testing time (delay, counting time), unless the OV-conjecture and/or the OMv-conjecture fails.

This is joint work with Christoph Berkholz and Nicole Schweikardt (published in Proc. ICDT 2018).

Characterization of non-associative circuits using automata, and applications

ABSTRACT. Arithmetic circuits are the algebraic analogue of Boolean circuits. As a natural model for computing multivariate polynomials, they have been extensively studied. The most important open question in the field of algebraic complexity theory is that of separating the classes VP and VNP, the analogues of P and NP. More precisely, can one obtain super-polynomial lower bounds for circuits computing a given explicit polynomial?

Despite decades of effort, this question still seems out of reach, the best general lower bound being only slightly super-linear. The most common approach is to prove lower bounds for restricted classes of circuits, such as monotone or constant-depth circuits. Another approach is to remove relations arising from the mathematical structure underlying the computations, making it harder for circuits to compute polynomials and thus conceivably easier to obtain lower bounds. In this line of thought, Nisan (1991) obtained breakthrough results in the context of non-commutative computations, separating circuits and formulas and characterizing the minimal size of Algebraic Branching Programs (ABP).

Likewise, circuits for which the multiplication is assumed to be non-associative, meaning that (xy)x and x(yx) are different monomials, have been considered. General exponential lower bounds can be proved in this setting. We highlight a syntactical equivalence between non-associative circuits and acyclic Multiplicity Tree Automata (MTA), which leads to a general algebraic characterization of the size of the smallest non-associative circuit computing a given non-associative polynomial.
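The distinction between (xy)x and x(yx) can be made concrete by representing non-associative monomials as binary trees; the encoding below is a hypothetical illustration, not taken from the work itself.

```python
# Hypothetical encoding of non-associative monomials as binary trees (nested
# tuples): (xy)x and x(yx) are distinct objects, even though they flatten to
# the same associative word xyx.
def flatten(m):
    """Forget the bracketing, recovering the underlying associative word."""
    return m if isinstance(m, str) else flatten(m[0]) + flatten(m[1])

m1 = (("x", "y"), "x")   # (xy)x
m2 = ("x", ("y", "x"))   # x(yx)
print(m1 == m2)                    # False: distinct non-associative monomials
print(flatten(m1) == flatten(m2))  # True: both flatten to "xyx"
```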

As a direct consequence of this characterization, we unify several recent circuit lower bounds in the non-commutative setting. Deepening our understanding of this new algebraic tool, we considerably extend a known lower bound to a class of circuits that is very close to general.

The Linear Space Hypothesis and Its Close Connection to Two-Way Finite Automata

ABSTRACT. The linear space hypothesis (LSH) is a new, practical working hypothesis that asserts the following: the parameterized satisfiability problem for 2CNF Boolean formulas \phi in which each variable appears at most 3 times in the form of literals (called 2SAT_3) cannot be solved simultaneously in time polynomial in the size |\phi| of the binary encoding of \phi and in space O(n^{\epsilon} polylog(|\phi|)) for a constant \epsilon < 1, where n is the number of variables in \phi. From this hypothesis, it immediately follows that NL differs from L, where L and NL denote the log-space deterministic and nondeterministic complexity classes, respectively.
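To make the 2SAT_3 restriction concrete, the following sketch (a made-up illustration, not part of the talk) checks the syntactic restriction and decides satisfiability of a tiny instance by brute force; clauses are pairs of signed integers, with +i / -i denoting the positive / negative literal of variable i.

```python
# Hypothetical illustration of the 2SAT_3 restriction: a 2CNF formula in which
# each variable occurs at most 3 times as a literal, checked by brute force.
from itertools import product
from collections import Counter

def is_2sat3(clauses):
    """Every clause has exactly 2 literals; every variable occurs <= 3 times."""
    occ = Counter(abs(l) for cl in clauses for l in cl)
    return all(len(cl) == 2 for cl in clauses) and all(c <= 3 for c in occ.values())

def satisfiable(clauses):
    """Brute-force satisfiability check (exponential; for illustration only)."""
    vars_ = sorted({abs(l) for cl in clauses for l in cl})
    for bits in product([False, True], repeat=len(vars_)):
        assign = dict(zip(vars_, bits))
        if all(any(assign[abs(l)] == (l > 0) for l in cl) for cl in clauses):
            return True
    return False

# (x1 or x2) and (not x1 or x3) and (not x2 or not x3) and (x1 or not x3)
phi = [(1, 2), (-1, 3), (-2, -3), (1, -3)]
print(is_2sat3(phi))     # True: each variable occurs at most 3 times
print(satisfiable(phi))  # True, e.g. x1=T, x2=F, x3=T
```

The hypothesis concerns the simultaneous time and space needed to decide such instances; the brute-force check above obviously meets neither bound.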

The highlights of our results are listed below.

1. We show various consequences of LSH in the field of NL search problems and NL optimization problems.

2. We further present a new characterization of this hypothesis in terms of the state complexity of transforming certain restricted 2-way nondeterministic finite automata (or 2nfa's) to restricted 2-way alternating finite automata (or 2afa's).

3. We also discuss a non-uniform version of LSH and give another characterization in terms of the state complexity of transforming a non-uniform family of restricted 2nfa's into a certain non-uniform family of restricted 2afa's.

16:20-17:10 Session 17A


Location: H 3010
Synthesizing Optimally Resilient Controllers

ABSTRACT. Recently, Dallal, Neider, and Tabuada studied a generalization of the classical game-theoretic model used in program synthesis, which additionally accounts for unmodeled intermittent disturbances. In this extended framework, one is interested in computing optimally resilient strategies, i.e., strategies that are resilient against as many disturbances as possible. Dallal, Neider, and Tabuada showed how to compute such strategies for safety specifications. In this work, we compute optimally resilient strategies for a much wider range of winning conditions and show that they do not require more memory than winning strategies in the classical model. Our algorithms only have a polynomial overhead in comparison to the ones computing winning strategies. In particular, for parity conditions optimally resilient strategies are positional and can be computed in quasipolynomial time.

Joint work with Daniel Neider and Alexander Weinert.

Strix: Explicit Reactive Synthesis Strikes Back!

ABSTRACT. Strix is a new tool for reactive LTL synthesis combining a direct translation of LTL formulas into deterministic parity automata (DPA) and an efficient, multi-threaded explicit state solver for parity games. In brief, Strix (1) decomposes the given formula into simpler formulas, (2) identifies isomorphic formulas and groups them into equivalence classes, (3) translates one formula for each class on-the-fly into DPAs based on the queries of the parity game solver, (4) composes the DPAs into a parity game, and at the same time already solves the intermediate games using strategy iteration, (5) finally translates the winning strategy, if it exists, into a Mealy machine or an AIGER circuit, and (6) optionally minimizes the resulting Mealy machine or AIGER circuit. We experimentally demonstrate the applicability of our approach by a comparison with Party, BoSy, and ltlsynt using the SYNTCOMP 2017 benchmarks. In these experiments, our tool can compete with all other approaches, solving more instances for realizability and often producing smaller solutions. In particular, our tool successfully synthesizes the full and unmodified LTL specification of the AMBA protocol for n=2 masters. We will also report on the results of SYNTCOMP 2018 for which we entered with Strix this year.

This is joint work with Salomon Sickert and Michael Luttenberger. It has been accepted at CAV 2018 as a tool paper. This work was partially funded and supported by the German Research Foundation (DFG) projects "Game-based Synthesis for Industrial Automation" (253384115) and "Verified Model Checkers" (317422601).

Automatic Synthesis of Systems with Data

ABSTRACT. System synthesis aims at automatically generating a system from its high level specification. An interesting setting is the one of reactive systems, which are systems that continuously react, in a timely fashion, to the input stimuli of the environment in which they are executed. Their interaction yields a pair of input and output words. Then, the specification, provided as a logical formula or as an automaton, describes the set of admissible behaviours, which can be seen as a relation between input and output words.

Most work on synthesis has been conducted under the assumption that the input and output alphabets are finite. However, many applications require an infinite set, to model programs manipulating integers, reals, timestamps, etc. Thus, we consider specifications over data words, which are finite sequences of pairs in A x D, where A is a finite alphabet and D is an infinite set (of data). From such specifications, our aim is to determine, for different classes of implementations, whether the synthesis problem is decidable and, if so, what its complexity is.

When the synthesis problem only constrains admissible behaviours, without imposing additional requirements, it can be formulated as a uniformisation problem, following set-theoretic terminology. For a function f and a relation R, we say that f uniformises R when f and R have the same domain and f (understood as a relation) is a subrelation of R.

Then, the uniformisation problem from a class of relations C to a class of functions F can be defined as: given a relation R in C, provide a function f in F which uniformises R if it exists, and answer No otherwise. In this work, we consider instances of the uniformisation problem.
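The set-theoretic definition can be illustrated on a finite relation (a made-up toy example; the talk concerns relations over infinite sets of data words, realised by transducers rather than by enumeration):

```python
# Illustrative sketch of uniformisation on a finite relation. A function f
# uniformises R if dom(f) = dom(R) and every pair (x, f(x)) belongs to R.
def uniformise(R):
    """Pick, for each input in dom(R), one admissible output."""
    f = {}
    for x, y in sorted(R):
        f.setdefault(x, y)  # keep the first admissible output per input
    return f

def check_uniformises(f, R):
    dom_R = {x for x, _ in R}
    return set(f) == dom_R and all((x, y) in R for x, y in f.items())

R = {("in1", "outA"), ("in1", "outB"), ("in2", "outC")}
f = uniformise(R)
print(check_uniformises(f, R))  # True
```

The difficulty in the transducer setting is precisely that such a choice must be made on the fly by a machine of the target class, not by inspecting the whole relation.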

To represent relations over data words, we introduce two models, namely Nondeterministic Data Transducers and Deterministic Register Transducers, which are generalisations of Nondeterministic Finite Transducers to data words. To handle data, we equip the machines with registers, in which they can store data to output later on. The difference between the two models is that the first is not allowed to conduct any test on the data it reads, whereas the second can test equality between the data and the content of one register. For both models, we provide an algorithm solving the uniformisation problem into their input-deterministic versions, i.e. into machines which, given an input, deterministically produce an output.

Observation synthesis for games with imperfect information

ABSTRACT. We consider games over a finite arena with set of vertices V. A history is a word over V, and the information of a player is an equivalence relation over histories.

Given such an equivalence relation over histories, we want to synthesize an observation function V*->O*, which maps equivalent histories to identical observations and non-equivalent histories to different observations.

Given a sequential observation function, we are able to define a new arena V' with an equivalence relation ~ over V' such that two histories are equivalent iff they are letter-to-letter equivalent under ~.

We show that the observation synthesis problem is decidable when the target function is letter-to-letter, and in that case we can effectively synthesize an observation function. We also show that the problem is undecidable in a slightly more general case.

Joint work with Paulin Fournier and Bruno Guillon.

Graph Planning with Expected Finite Horizon

ABSTRACT. A classical problem in discrete planning is to consider a weighted graph and construct a path that maximizes the sum of weights for a given time horizon T. However, in many scenarios, the time horizon is not fixed; rather, the stopping time is chosen according to some distribution such that the expected stopping time is T. If the stopping time distribution is not known, then to ensure robustness, the distribution is chosen by an adversary, to represent the worst-case scenario.

A stationary plan always chooses the same outgoing edge at each vertex. For fixed horizon T, stationary plans are not sufficient for optimality. Quite surprisingly, we show that when an adversary chooses the stopping time distribution with expected stopping time T, then stationary plans are sufficient. While computing optimal stationary plans for fixed horizon is NP-complete, we show that computing optimal stationary plans under adversarial stopping time distribution can be achieved in polynomial time. Consequently, our polynomial-time algorithm for adversarial stopping time also computes an optimal plan among all possible plans.
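The claim that stationary plans can be suboptimal for a fixed horizon is easy to see on a toy graph (hypothetical example, not from the talk): with horizon 3, a plan that uses a self-loop once before heading for a high-weight edge beats both stationary plans.

```python
# Hypothetical toy showing stationary plans are suboptimal for fixed horizon T.
# edges[v] lists outgoing (target, weight) pairs of vertex v.
edges = {"a": [("a", 1), ("b", 0)], "b": [("a", 5)]}

def run(plan, start, T):
    """Total weight collected by following `plan` (a map (vertex, step) -> edge)."""
    total, v = 0, start
    for t in range(T):
        nxt, w = plan(v, t)
        total, v = total + w, nxt
    return total

stationary_loop = lambda v, t: edges[v][0]    # at "a": always take the self-loop
stationary_move = lambda v, t: edges[v][-1]   # at "a": always head to "b"

def nonstationary(v, t):
    # loop once at "a" to spend a step, then head for the weight-5 edge
    if v == "a":
        return edges["a"][0] if t == 0 else edges["a"][1]
    return edges["b"][0]

print(run(stationary_loop, "a", 3))  # 1 + 1 + 1 = 3
print(run(stationary_move, "a", 3))  # 0 + 5 + 0 = 5
print(run(nonstationary, "a", 3))    # 1 + 0 + 5 = 6, beating both stationary plans
```

The surprising result above is that this gap vanishes once the horizon is an adversarially chosen distribution with expectation T.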

16:20-17:10 Session 17B


Location: H 2013
Safe and Optimal Scheduling of Hard and Soft Tasks

ABSTRACT. We consider a stochastic scheduling problem with both hard and soft tasks. Each task is described by a discrete probability distribution over possible execution times, a deadline, and a distribution over inter-arrival times. Our scheduling problem is non-clairvoyant in the sense that the execution and inter-arrival times are subject to uncertainty modeled by stochastic distributions. Given an instance of our problem, we want to synthesize a schedule that is safe and efficient: safety imposes that deadlines of hard tasks are never violated, while efficiency means that soft tasks should be completed as often as possible. To enforce the latter, we impose a cost when a soft task's deadline is violated, and we look for a schedule that minimizes the expected value of the limit average of this cost.

First, we show that the dynamics of such a system can be modeled as a finite Markov Decision Process (MDP). Second, we show that the problem of deciding the existence of a safe and efficient scheduler is PP-hard and in EXPTIME. Third, we have implemented our synthesis algorithm, which partly relies on the probabilistic model checker Storm to analyze the underlying MDP. Given a set of both hard and soft tasks, we first compute the set of safe schedules, i.e., the ones in which the hard tasks always finish their execution, and then compute the scheduler that minimizes the limit average cost among all safe schedules. Finally, we compare the optimal solutions obtained with our procedure against an adaptation of the earliest deadline first (EDF) algorithm that accounts for the soft tasks. We show that this EDF strategy can be arbitrarily worse, in terms of the expected limit average cost for missing the deadlines of the soft tasks, than our optimal procedure.

Symbolic Approximation of Weighted Timed Games

ABSTRACT. Weighted timed games are zero-sum games played by two players on a timed automaton equipped with weights, where one player wants to minimise the accumulated weight while reaching a target. Used in a reactive synthesis perspective, this quantitative extension of timed games allows one to measure the quality of controllers in real-time systems. Weighted timed games are notoriously difficult and quickly undecidable, even when restricted to non-negative weights.

For non-negative weights, the largest class that can be analyzed has been introduced in a paper by Bouyer, Jaziri and Markey in 2015. Though the value problem is undecidable, the authors show how to approximate the value by considering regions with a refined granularity.

In this work, we extend this class to incorporate negative weights, allowing one to model energy for instance, and prove that the value can still be approximated. In addition, we show that a symbolic algorithm, relying on the paradigm of value iteration, can be used for this approximation scheme.

Costs and Rewards in Priced Timed Automata

ABSTRACT. We consider Pareto analysis of reachable states of multi-priced timed automata (MPTA): timed automata equipped with multiple observers that keep track of costs (to be minimised) and rewards (to be maximised) along a computation. Each observer has a constant non-negative derivative which may depend on the location of the MPTA.

We study the Pareto Domination Problem, which asks whether it is possible to reach a target location via a run in which the accumulated costs and rewards Pareto dominate a given objective vector. We show that this problem is undecidable in general, but decidable for MPTA with at most three observers. For MPTA whose observers are all costs or all rewards, we show that the Pareto Domination Problem is PSPACE-complete. We also consider an epsilon-approximate Pareto Domination Problem that is decidable without restricting the number and types of observers.

We develop connections between MPTA and Diophantine equations. Undecidability of the Pareto Domination Problem is shown by reduction from Hilbert's 10th Problem, while decidability for three observers is shown by a translation to a fragment of arithmetic involving quadratic forms.
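The domination order on mixed cost/reward vectors can be stated in a few lines; the encoding below is a hypothetical illustration of the definition, not of the decision procedure.

```python
# Hypothetical sketch of Pareto domination for mixed cost/reward vectors:
# v dominates an objective vector `goal` if every cost coordinate of v is
# <= the corresponding goal coordinate (costs are minimised) and every reward
# coordinate is >= it (rewards are maximised).
def pareto_dominates(v, goal, kinds):
    """kinds[i] is 'cost' (minimise) or 'reward' (maximise)."""
    return all(
        vi <= gi if k == "cost" else vi >= gi
        for vi, gi, k in zip(v, goal, kinds)
    )

kinds = ("cost", "cost", "reward")
print(pareto_dominates((3.0, 1.5, 7.0), (4.0, 2.0, 6.0), kinds))  # True
print(pareto_dominates((5.0, 1.5, 7.0), (4.0, 2.0, 6.0), kinds))  # False: first cost too high
```

The hardness in the MPTA setting comes not from this comparison but from deciding whether some run accumulates such a vector at all.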

The talk is based on a paper to be presented at ICALP 2018.

Reachability in timed automata with diagonal constraints

ABSTRACT. This is a presentation of a work with the same title accepted at CONCUR 2018. Joint work with Paul Gastin (ENS Cachan) and Sayan Mukherjee (Chennai Mathematical Institute).

We study the reachability problem for timed automata having diagonal constraints (like x - y <= 5, w - x > 2, etc.) as guards in transitions. Timed automata restricted to constraints of the form x <= 5, y >= 2 (that is, not comparing the difference between clocks with a constant) are called diagonal-free automata. It is known that diagonal constraints do not add expressive power, but they can be exponentially succinct: there is a family of timed languages L_n given by timed automata with a single state and n transitions having diagonal constraints, whereas any diagonal-free timed automaton for L_n needs at least 2^n states. This observation motivates the use of diagonal constraints in the modelling of real systems, with the expectation of obtaining more succinct models. However, existing algorithms for timed automata analysis are tuned for diagonal-free constraints, and efficient methods to handle diagonal constraints are not known. In this work, we revisit this problem and extend recent algorithms studied for diagonal-free automata to the case of diagonal constraints.

In the talk, we will sketch the role of diagonal constraints in timed automata, the idea of the generic reachability algorithm, and then mention our results.
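The two kinds of guards can be illustrated by evaluating them on a concrete clock valuation (a hypothetical encoding for illustration; the actual algorithms work symbolically on zones, not on single valuations):

```python
# Minimal sketch: evaluating timed-automaton guards on a clock valuation.
# Diagonal constraints compare the *difference* of two clocks with a constant;
# diagonal-free constraints compare a single clock with a constant.
import operator

OPS = {"<": operator.lt, "<=": operator.le, ">": operator.gt, ">=": operator.ge}

def holds(guard, valuation):
    """guard: list of atoms (x, op, c), or (x, y, op, c) meaning x - y op c."""
    for atom in guard:
        if len(atom) == 3:            # diagonal-free atom: x op c
            x, op, c = atom
            value = valuation[x]
        else:                         # diagonal atom: x - y op c
            x, y, op, c = atom
            value = valuation[x] - valuation[y]
        if not OPS[op](value, c):
            return False
    return True

v = {"x": 7.5, "y": 3.0, "w": 6.0}
print(holds([("x", "y", "<=", 5)], v))                      # True: x - y = 4.5
print(holds([("x", "y", "<=", 5), ("w", "x", ">", 2)], v))  # False: w - x = -1.5
```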

Universal Safety for Timed Petri Nets is PSPACE-complete

ABSTRACT. Timed-arc Petri nets (TPN) are an extension of Petri nets where each token carries one real-valued clock and transitions are guarded by integer inequality constraints on clocks. We consider the model in which newly created tokens can either be reset, or inherit values of input tokens of the transition.

The Existential Coverability problem for TPN asks, for a given place p and transition t, whether there exists a number m such that the marking M(m) = m·{(p, 0)} ultimately enables t. Here, M(m) contains exactly m tokens on place p with all clocks set to zero and no other tokens. This problem corresponds to checking safety properties in distributed networks of arbitrarily many (namely m) initially identical timed processes that communicate by handshake. A negative answer certifies that the 'bad event' of transition t can never happen regardless of the number m of processes, i.e., the network is safe for any size. Thus by checking existential coverability, one solves the dual problem of Universal Safety.

We show that both problems are decidable and PSPACE-complete. The lower bound is shown by a reduction from the iterated monotone Boolean circuit problem. The upper bound is shown by an acceleration technique for a suitable notion of (infinite) region automata. This moreover leads to a symbolic representation of the set of coverable configurations, which can be computed in exponential space.

16:20-17:10 Session 17C

Petri Nets and Security

Location: H 2032
Regular separability of WSTS

ABSTRACT. The goal of the talk is to present a result on languages recognized by transition-labeled well-structured transition systems (WSTS) with upward compatibility. We have shown that, under very mild assumptions, every two disjoint WSTS languages are regularly separable: there is a regular language containing one of them and disjoint from the other. As a consequence, if a language as well as its complement are both recognized by WSTS, then they are necessarily regular. In particular, no subclass of WSTS languages beyond the regular languages is closed under complement.

The first step of the proof links inductive invariants from verification to separability in formal languages: We show that any inductive invariant (of the product of the given systems) gives rise to a regular separator - provided it can be finitely represented. In a second step, we show that finitely-represented invariants always exist using ideal completions.

Decidability in data Petri nets — a conjecture.

ABSTRACT. We investigate data-enriched models, like Petri nets with data, where executability of a transition is conditioned by a relation between the data values involved. The decidability status of various decision problems in such models may depend on the structure of the data domain. According to the WQO Dichotomy Conjecture, if a data domain is homogeneous then either it exhibits a well quasi-order (in which case decidability follows by standard arguments), or essentially all decision problems are undecidable for Petri nets over that data domain.

During the talk, we will present the context and state the conjecture. Our results concerning resolved special cases of the conjecture will be presented on the poster.

DEEPSEC: Deciding Equivalence Properties in Security Protocols Theory and Practice

ABSTRACT. Automated verification has become an essential part in the security evaluation of cryptographic protocols. Recently, there has been a considerable effort to lift the theory and tool support that existed for reachability properties to the more complex case of equivalence properties. In this paper we contribute both to the theory and practice of this verification problem. We establish new complexity results for static equivalence, trace equivalence and labelled bisimilarity and provide a decision procedure for these equivalences in the case of a bounded number of sessions. Our procedure is the first to decide trace equivalence and labelled bisimilarity exactly for a large variety of cryptographic primitives, namely those that can be represented by a subterm convergent destructor rewrite system. We implemented the procedure in a new tool, DEEPSEC. We show through extensive experiments that it is significantly more efficient than other similar tools, while at the same time extending the scope of the protocols that can be analysed.

This paper received the Distinguished Paper Award at the 2018 IEEE Symposium on Security and Privacy.

18:00-19:00 Football game (in Tiergarten)