08:30-09:00 Coffee & Refreshments
09:00-10:30 Session 63G: Linear Logic and Proof Systems

"Linear Logic and Proof Systems": 6 papers (12 min presentation + 2-3 min Q&A)

Location: Taub 1
Linear-Algebraic Models of Linear Logic as Categories of Modules over Sigma-Semirings
PRESENTER: Takeshi Tsukada

ABSTRACT. A number of models of linear logic are based on or closely related to linear algebra, in the sense that morphisms are ``matrices'' over appropriate coefficient sets. Examples include models based on coherence spaces, finiteness spaces and probabilistic coherence spaces, as well as the relational and weighted relational models. This paper introduces a unified framework based on module theory, making the linear algebraic aspect of the above models more explicit. Specifically, we consider modules over \emph{\( \Sigma \)-semirings} \( R \), which are ring-like structures with partially-defined countable sums, and show that morphisms in the above models are actually \( R \)-linear maps in the standard algebraic sense for appropriate \( R \). An advantage of our algebraic treatment is that the category of \( R \)-modules is locally presentable, from which it easily follows that this category becomes a model of intuitionistic linear logic with the cofree exponential. We then discuss constructions of classical models and show that the above-mentioned models are examples of our constructions.
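The abstract's central point, that morphisms in these models are matrices over a coefficient structure, can be made concrete with a small sketch. The code below is illustrative only (it is not the paper's \( \Sigma \)-semiring construction): it multiplies matrices over an arbitrary semiring, and instantiating with the Boolean semiring recovers relational composition, i.e. the relational model.

```python
from itertools import product

def mat_mul(A, B, add, mul, zero):
    """Multiply matrices A (m x k) and B (k x n) over a semiring
    given by its addition, multiplication, and zero."""
    m, k, n = len(A), len(B), len(B[0])
    C = [[zero] * n for _ in range(m)]
    for i, j in product(range(m), range(n)):
        acc = zero
        for t in range(k):
            acc = add(acc, mul(A[i][t], B[t][j]))
        C[i][j] = acc
    return C

# Boolean semiring (or, and): matrices are relations on {0, 1},
# and matrix multiplication is relational composition.
I = [[True, False], [False, True]]   # identity relation
S = [[False, True], [True, False]]   # the swap relation
assert mat_mul(I, S, lambda a, b: a or b, lambda a, b: a and b, False) == S
```

Swapping in the natural-number semiring (+, *) instead counts paths, in the spirit of the weighted relational models mentioned above.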

Bouncing threads for circular and non-wellfounded proofs
PRESENTER: David Baelde

ABSTRACT. Given that (co)inductive types are naturally modelled as fixed points, it is unsurprising that fixed-point logics are of interest in the study of programming languages, via the Curry-Howard (or proofs-as-programs) correspondence. This motivates investigations of the structural proof-theory of fixed-point logics and of their cut-elimination procedures.

Among the various approaches to proofs in fixed-point logics, circular -- or cyclic -- proofs are of interest in this regard but suffer from a number of limitations, most notably from a quite restricted use of cuts. Indeed, the validity condition which ensures soundness of non-wellfounded derivations and productivity of their cut-elimination prevents some computationally-relevant patterns of cuts. As a result, traditional circular proofs cannot serve as a basis for a theory of (co)recursive programming, for lack of compositionality: there are not enough circular proofs and they compose badly.

The present paper addresses some of these limitations by developing the circular and non-wellfounded proof-theory of multiplicative additive linear logic with fixed points (muMALL) beyond the scope of the seminal works of Santocanale and Fortier and of Baelde et al. We make the following contributions: (1) We define bouncing-validity: a new, generalized, validity criterion for muMALL, which takes axioms and cuts into account. (2) We show soundness and cut elimination theorems for bouncing-valid non-wellfounded proofs. As a result, bouncing-validity proves the same sequents (or judgments) as before, but we have many more valid proofs at our disposal. (3) We illustrate the relevance of bouncing-validity on a number of examples motivated by the Curry-Howard correspondence. (4) Finally, we study the decidability of the criterion in the circular case: we prove it is undecidable in general. (5) However, we identify a hierarchy of decidable sub-criteria, such that any bouncing-valid circular proof is validated by some criterion of the hierarchy.

On the strength of Sherali-Adams and Nullstellensatz as propositional proof systems
PRESENTER: Ilario Bonacina

ABSTRACT. We characterize the strength of the algebraic proof systems Sherali-Adams (SA) and Nullstellensatz (NS) in terms of Frege-style proof systems. Unlike bounded-depth Frege, SA has polynomial-size proofs of the pigeonhole principle (PHP). A natural question is whether adding PHP to bounded-depth Frege is enough to simulate SA.

We show that SA with unary integer coefficients lies strictly between tree-like depth-1 Frege + PHP and tree-like Resolution. We introduce a weighted version of PHP (wPHP) and we show that SA with integer coefficients lies strictly between tree-like depth-1 Frege + wPHP and Resolution.

Analogous results are shown for NS using the bijective (i.e. onto and functional) pigeonhole principle and a weighted version of it.
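The pigeonhole principle (PHP) that separates these systems has a standard propositional encoding, sketched below for concreteness (our own toy generator, not the paper's weighted variant): variables x_{p,h} say "pigeon p sits in hole h", and the resulting CNF for n+1 pigeons and n holes is unsatisfiable.

```python
def php_clauses(n):
    """CNF for PHP with n+1 pigeons and n holes.  A literal is a pair
    (sign, (p, h)) for the variable x_{p,h}."""
    clauses = []
    # Every pigeon occupies at least one hole.
    for p in range(n + 1):
        clauses.append([(True, (p, h)) for h in range(n)])
    # No hole hosts two distinct pigeons.
    for h in range(n):
        for p in range(n + 1):
            for q in range(p + 1, n + 1):
                clauses.append([(False, (p, h)), (False, (q, h))])
    return clauses

def satisfies(assignment, clauses):
    """Check a total assignment (dict from variables to bools)."""
    return all(any(assignment[v] == sign for sign, v in c) for c in clauses)
```

As the abstract notes, SA refutes this family in polynomial size while bounded-depth Frege does not, which is what motivates asking whether bounded-depth Frege plus PHP simulates SA.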

A functorial excursion between algebraic geometry and linear logic

ABSTRACT. The language of Algebraic Geometry combines two complementary and dependent levels of discourse: on the geometric side, schemes define spaces of the same cohesive nature as manifolds; on the vectorial side, every scheme X comes equipped with a symmetric monoidal category of quasicoherent modules, which may be seen as generalised vector bundles on the scheme X. In this paper, we use the functor of points approach to Algebraic Geometry developed by Grothendieck in the 1970s to establish that every covariant presheaf X on the category of commutative rings --- and in particular every scheme X --- comes equipped ``above it'' with a symmetric monoidal closed category Mod_X of presheaves of modules. This category Mod_X defines moreover a model of intuitionistic linear logic, whose exponential modality is obtained by glueing together in an appropriate way the Sweedler dual construction on ring algebras. The purpose of this work is to establish on firm mathematical foundations the idea that linear logic should be understood as a logic of generalised vector bundles, in the same way as dependent type theory is understood today as a logic of spaces up to homotopy.

Logical Foundations of Quantitative Equality

ABSTRACT. In quantitative reasoning one compares objects by distances, instead of equivalence relations, so that one can measure how much they are similar, rather than just saying whether they are equivalent or not. In this paper we aim at providing a solid logical ground to quantitative reasoning with distances, using the categorical language of Lawvere's hyperdoctrines. The key idea is to see distances as equality predicates in Linear Logic. We use graded modalities to write a resource sensitive substitution rule for equality, which allows us to give it a quantitative meaning by distances. We introduce a deductive calculus for (Graded) Linear Logic with quantitative equality and the notion of Lipschitz doctrine to define a sound and complete categorical semantics of it. We also describe a universal construction of Lipschitz doctrines, which generates examples based, for instance, on metric spaces and quantitative realisability.

Exponentials as Substitutions and the Cost of Cut Elimination in Linear Logic

ABSTRACT. This paper introduces the exponential substitution calculus (ESC), a new presentation of cut elimination for IMELL, based on proof terms and building on the idea that exponentials can be seen as explicit substitutions. The idea in itself is not new, but here it is pushed to a new level, inspired by Accattoli and Kesner's linear substitution calculus (LSC).

One of the key properties of the LSC is that it naturally models the sub-term property of abstract machines, which is the key ingredient for the study of reasonable time cost models for the lambda-calculus. The new ESC is then used to design a cut elimination strategy with the sub-term property, providing the first polynomial cost model for cut elimination with unconstrained exponentials.

For the ESC, we also prove untyped confluence and typed strong normalization, showing that it is an alternative to proof nets for an advanced study of cut elimination.

10:30-11:00 Coffee Break
11:00-12:30 Session 65F: Graphs, Behavioural Equivalences and Learning

"Graphs, Behavioural Equivalences and Learning": 6 papers (12 min presentation + 2-3 min Q&A)

Location: Taub 1
Treelike decompositions for transductions of sparse graphs
PRESENTER: Sandra Kiefer

ABSTRACT. We give new decomposition theorems for classes of graphs that can be transduced in first-order logic from classes of sparse graphs --- more precisely, from classes of bounded expansion and from nowhere dense classes. In both cases, the decomposition takes the form of a single colored rooted tree of bounded depth where, in addition, there can be links between nodes that are not related in the tree. The constraint is that the structure formed by the tree and the links has to be sparse. Using the decomposition theorem for transductions of nowhere dense classes, we show that they admit low-shrubdepth covers of size O(n^ε), where n is the vertex count and ε>0 is any fixed real. This solves an open problem posed by Gajarský et al. (ACM TOCL '20) and also by Briański et al. (SIDMA '21).

Stable graphs of bounded twin-width
PRESENTER: Jakub Gajarský

ABSTRACT. We prove that every class of graphs C that is monadically stable and has bounded twin-width can be transduced from some class with bounded sparse twin-width. This generalizes analogous results for classes of bounded linear cliquewidth [Nešetřil et al., SODA 2020] and of bounded cliquewidth [Nešetřil et al., SODA 2021]. It also implies that monadically stable classes of bounded twin-width are linearly chi-bounded.

Model Checking on Interpretations of Classes of Bounded Local Cliquewidth

ABSTRACT. An interpretation is an operation that maps an input graph to an output graph by redefining its edge relation using a first-order formula. This rich framework includes operations such as taking the complement or a fixed power of a graph as (very) special cases.

We prove that there is an FPT algorithm for the first-order model checking problem on classes of graphs which are interpretable in classes of graphs with bounded local cliquewidth. Notably, this includes interpretations of planar graphs and, more generally, of classes of bounded genus.

To obtain this result we develop a new tool which works in a very general setting of NIP classes and which we believe can be an important ingredient in obtaining similar results in the future.

Size measures and alphabetic equivalence in the mu-calculus
PRESENTER: Johannes Marti

ABSTRACT. Algorithms for solving computational problems related to the modal mu-calculus generally do not take the formulas themselves as input, but operate on some kind of representation of formulas. This representation is usually based on a graph structure that one may associate with a mu-calculus formula. Recent work by Kupke, Marti and Venema showed that the operation of renaming bound variables may incur an exponential blow-up of the size of such a graph representation. Their example revealed the undesirable situation that standard constructions, on which algorithms for model checking and satisfiability depend, are sensitive to the specific choice of bound variables used in a formula.

Our work discusses how the notion of alphabetic equivalence interacts with the construction of graph representations of mu-calculus formulas, and with the induced size measures of formulas. We introduce the condition of alpha-invariance on such constructions, requiring that alphabetically equivalent formulas are given the same (or isomorphic) graph representations.

Our main results are the following. First we show that if two mu-calculus formulas are alpha-equivalent, then their respective Fischer-Ladner closures have the same cardinality, up to alpha-equivalence. We then continue with the definition of an alpha-invariant construction which represents an arbitrary mu-calculus formula by a graph that has exactly the size of the quotient of the closure of the formula, up to alpha-equivalence. This definition, which is itself based on a renaming of variables, solves the above-mentioned problem discovered by Kupke et al.

Milner's Proof System for Regular Expressions Modulo Bisimilarity is Complete

ABSTRACT. Milner (1984) defined a process semantics for regular expressions. He formulated a sound proof system for bisimilarity of process interpretations of regular expressions, and asked whether this system is complete.

We report conceptually on a proof that establishes that Milner's system is complete, by motivating and describing all of its main steps. We substantially refine the completeness proof by Grabmayer and Fokkink (2020) for the restriction of Milner's system to `1-free' regular expressions. As a crucial complication we recognize that process graphs with empty-step transitions that satisfy the layered loop-existence/elimination property LLEE are not closed under bisimulation collapse (unlike process graphs with LLEE that only have proper-step transitions). We circumnavigate this obstacle by defining a LLEE-preserving `crystallization procedure' for such process graphs. By that we obtain `near-collapsed' process graphs with LLEE whose strongly connected components are either collapsed or of `twin-crystal' shape. Such near-collapsed process graphs guarantee provable solutions for bisimulation collapses of process interpretations of regular expressions.

Computable PAC Learning of Continuous Features
PRESENTER: Cameron Freer

ABSTRACT. We introduce definitions of computable PAC learning for binary classification over computable metric spaces. We provide sufficient conditions on a hypothesis class to ensure than an empirical risk minimizer (ERM) be computable, and bound the strong Weihrauch degree of an ERM under more general conditions. We also give a presentation of a hypothesis class that does not admit any proper computable PAC learner with computable sample function, despite the underlying class being PAC learnable.
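For readers unfamiliar with the ERM terminology, here is a minimal toy (ours, not the paper's construction): empirical risk minimization over a finite class of threshold classifiers. Over a finite grid of thresholds the minimizer is trivially computable; the paper's question is what happens over genuinely continuous hypothesis spaces.

```python
def erm(hypotheses, sample):
    """Return a hypothesis minimizing empirical 0-1 risk on `sample`,
    a list of (point, label) pairs."""
    def risk(h):
        return sum(1 for x, y in sample if h(x) != y) / len(sample)
    return min(hypotheses, key=risk)

# Threshold classifiers x -> [x >= t] on a finite grid of thresholds.
thresholds = [i / 10 for i in range(11)]
hyps = [(lambda x, t=t: int(x >= t)) for t in thresholds]
sample = [(0.1, 0), (0.3, 0), (0.6, 1), (0.9, 1)]
best = erm(hyps, sample)
```

Here `best` fits the sample perfectly, since some grid threshold separates the labels.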

12:30-14:00 Lunch Break

Lunch will be held in Taub lobby (CP, LICS, ICLP) and in The Grand Water Research Institute (KR, FSCD, SAT).

14:00-15:30 Session 67G: FOL, SOL and Model Theory

"FOL, SOL and Model Theory": 6 papers (12 min presentation + 2-3 min Q&A)

Location: Taub 1
The amazing mixed polynomial closure and its applications to two-variable first-order logic

ABSTRACT. Polynomial closure is a standard operator which is applied to a class of regular languages. In the paper, we investigate three restrictions called left (LPol), right (RPol) and mixed polynomial closure (MPol). The first two were known while MPol is new. We look at two decision problems that are defined for every class C. Membership takes a regular language as input and asks if it belongs to C. Separation takes two regular languages as input and asks if there exists a third language in C including the first one and disjoint from the second. We prove that LPol, RPol and MPol preserve the decidability of membership under mild hypotheses on the input class, and the decidability of separation under much stronger hypotheses. We apply these results to natural hierarchies.

First, we look at several language theoretic hierarchies that are built by applying LPol, RPol and MPol recursively to a single input class. We prove that these hierarchies can actually be defined using almost exclusively MPol. We also consider quantifier alternation hierarchies for two-variable first-order logic (FO2) and prove that one can climb them using MPol. The result is generic in the sense that it holds for most standard choices of signatures. We use it to prove that for most of these choices, membership is decidable for all levels in the hierarchy. Finally, we prove that separation is decidable for the hierarchy of two-variable first-order logic equipped with only the linear order (FO2(<)).

The Regular Languages of First-Order Logic with One Alternation
PRESENTER: Corentin Barloy

ABSTRACT. The regular languages with a neutral letter expressible in first-order logic with one alternation are characterized. Specifically, it is shown that if an arbitrary \(\Sigma_2\) formula defines a regular language with a neutral letter, then there is an equivalent \(\Sigma_2\) formula that only uses the order predicate. This shows that the so-called Central Conjecture of Straubing holds for \(\Sigma_2\) over languages with a neutral letter, the first progress on the Conjecture in more than 20 years. To show the characterization, lower bounds against polynomial-size depth-3 Boolean circuits with constant top fan-in are developed. The heart of the combinatorial argument resides in studying how positions within a language are determined from one another, a technique of independent interest.

Zero-One Laws and Almost Sure Valuations of First-Order Logic in Semiring Semantics
PRESENTER: Matthias Naaf

ABSTRACT. Semiring semantics evaluates logical statements by values in some commutative semiring K. Random semiring interpretations, induced by a probability distribution on K, generalise random structures, and we investigate here the question of how classical results on first-order logic on random structures, most importantly the 0-1 laws of Glebskii et al. and Fagin, generalise to semiring semantics. For positive semirings, the classical 0-1 law implies that every first-order sentence is, asymptotically, either almost surely evaluated to 0 by random semiring interpretations, or almost surely takes only values different from 0. However, by means of a more sophisticated analysis, based on appropriate extension properties and on algebraic representations of first-order formulae, we can prove much stronger results.

For many semirings K, the first-order sentences can be partitioned into classes F(k), such that for each element k of K, every sentence in F(k) evaluates almost surely to k under random semiring interpretations. Further, for finite or infinite lattice semirings, this partition actually collapses to just three classes F(0), F(1) and F(e) of sentences that, respectively, almost surely evaluate to 0, 1, and to the smallest positive value e. For all other values k in K we have that F(k) is empty. The problem of computing the almost sure valuation of a first-order sentence on finite lattice semirings is PSPACE-complete.
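The basic mechanism of semiring semantics can be sketched in a few lines (a hedged toy for quantifier-free formulas, far simpler than the first-order setting of the paper): atoms take values in a commutative semiring, with disjunction interpreted by semiring addition and conjunction by multiplication.

```python
def value(formula, val, add, mul):
    """Evaluate a formula, given as nested tuples ('atom', name),
    ('or', f, g), ('and', f, g), over a semiring (add, mul)."""
    op = formula[0]
    if op == 'atom':
        return val[formula[1]]
    if op == 'or':
        return add(value(formula[1], val, add, mul),
                   value(formula[2], val, add, mul))
    if op == 'and':
        return mul(value(formula[1], val, add, mul),
                   value(formula[2], val, add, mul))
    raise ValueError(op)

phi = ('or', ('and', ('atom', 'p'), ('atom', 'q')), ('atom', 'r'))
# Lattice semiring ([0,1], max, min), as in the collapse result above.
lattice = value(phi, {'p': 0.7, 'q': 0.4, 'r': 0.2}, max, min)
# Counting semiring (N, +, *), tracking how often a formula is "proved".
counting = value(phi, {'p': 2, 'q': 3, 'r': 1},
                 lambda a, b: a + b, lambda a, b: a * b)
```

Drawing the atom values at random, as in the abstract, then makes the formula's value a random variable, which is what the 0-1 law analysis is about.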

Geometric decision procedures and the VC dimension of linear arithmetic theories
PRESENTER: Alessio Mansutti

ABSTRACT. This paper resolves two open problems on linear integer arithmetic (LIA), also known as Presburger arithmetic. First, we give a triply exponential geometric decision procedure for LIA, i.e., a procedure based on manipulating semilinear sets. This matches the running time of the best quantifier elimination and automata-based procedures. Second, building upon our first result, we give a doubly exponential upper bound on the Vapnik–Chervonenkis (VC) dimension of sets definable in LIA, proving a conjecture of D. Nguyen and I. Pak [Combinatorica 39, pp. 923–932, 2019].

These results partially rely on an analysis of sets definable in linear real arithmetic (LRA), and analogous results for LRA are also obtained. At the core of these developments are new decomposition results for semilinear and R-semilinear sets, the latter being the sets definable in LRA. These results yield new algorithms to compute the complement of (R-)semilinear sets that do not cause a non-elementary blowup when repeatedly combined with procedures for other Boolean operations and projection. The existence of such an algorithm for semilinear sets has been a long-standing open problem.
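To fix intuitions about the VC dimension being bounded here, the following brute-force sketch (illustrative only, on finite toy families rather than LIA-definable sets) spells out what "shattering" means:

```python
from itertools import combinations

def shatters(family, pts):
    """family shatters pts if every subset of pts is the trace of
    some set in the family on pts."""
    traces = {frozenset(S & set(pts)) for S in family}
    return len(traces) == 2 ** len(pts)

def vc_dimension(family, universe):
    """Largest k such that some k-subset of `universe` is shattered."""
    dim = 0
    for k in range(1, len(universe) + 1):
        if any(shatters(family, c) for c in combinations(universe, k)):
            dim = k
    return dim

# Half-lines {x : x <= t} over a small universe: VC dimension 1,
# since the trace on {a, b} with a < b can never be {b} alone.
universe = [0, 1, 2, 3]
halflines = [set(x for x in universe if x <= t) for t in range(-1, 4)]
```

The paper's doubly exponential bound is of course obtained structurally, via the decomposition of semilinear sets, not by enumeration.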

A direct computational interpretation of second-order arithmetic via update recursion

ABSTRACT. Second-order arithmetic has two kinds of computational interpretations: via Spector's bar recursion or via Girard's polymorphic lambda-calculus. Bar recursion interprets the negative translation of the axiom of choice which, combined with an interpretation of the negative translation of the excluded middle, gives a computational interpretation of the negative translation of the axiom scheme of specification. It is then possible to instantiate universally quantified sets with arbitrary formulas (second-order elimination). On the other hand, the polymorphic lambda-calculus directly interprets second-order elimination by means of polymorphic types. The present work aims at bridging the gap between these two interpretations by directly interpreting second-order elimination through update recursion, which is a variant of bar recursion.

When Locality Meets Preservation

ABSTRACT. This paper investigates the expressiveness of a fragment of first-order sentences in Gaifman normal form, namely the positive Boolean combinations of basic local sentences. We show that they match exactly the first-order sentences preserved under local elementary embeddings, thus providing a new general preservation theorem and extending the Łoś-Tarski Theorem. This full preservation result fails as usual in the finite, and we show furthermore that the naturally related decision problems are undecidable. In the more restricted case of preservation under extensions, it nevertheless yields new well-behaved classes of finite structures: we show that preservation under extensions holds if and only if it holds locally.

15:30-16:00 Coffee Break
16:00-17:00 Session 70: Plenary
Complexity Measures for Reactive Systems

ABSTRACT. A reactive system maintains an on-going interaction with its environment. At each moment in time, the system receives from the environment an assignment to the input signals and responds with an assignment to the output signals. The system is correct with respect to a given specification if, for all environments, the infinite computation that is generated by the interaction between the system and the environment satisfies the specification. Reactive systems are modeled by transducers: finite state machines whose transitions are labeled by assignments to the input signals and whose states are labeled by assignments to the output signals. In the synthesis problem, we automatically transform a given specification into a correct transducer.

While complexity measures receive much attention in the design of algorithms, they are less explored in the context of synthesis. This is a serious drawback: just as we cannot imagine an algorithm designer that accepts a correct algorithm without checking its complexity, we cannot expect designers of reactive systems to accept synthesized systems that are only guaranteed to be correct. It is not clear, however, how to define the complexity of a transducer. Unlike the case of algorithms (or Turing machines in general), there is no "size of input" to relate to, and measures like time and space complexity are irrelevant. Indeed, we care for on-going interactions, along which the transducer reacts instantly according to its transition function. One immediate complexity measure for a transducer is its size, but many more complexity measures are of interest. The talk discusses such measures and describes how the search for optimal reactive systems affects the synthesis problem.
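The transducer model from the abstract can be sketched in a few lines (a hedged toy of our own, not from the talk): states carry output assignments, and each input assignment drives one transition. The example is a one-bit "arbiter" that raises its `grant` output in the step after seeing a `request` input.

```python
class Transducer:
    """Finite-state transducer: transitions labeled by input
    assignments, states labeled by output assignments."""

    def __init__(self, init, delta, out):
        self.state, self.delta, self.out = init, delta, out

    def react(self, inputs):
        """Consume one input assignment, take the transition, and
        emit the new state's output assignment."""
        self.state = self.delta[self.state, inputs]
        return self.out[self.state]

# Hypothetical example: grant follows request with one step of delay.
delta = {('idle', False): 'idle', ('idle', True): 'busy',
         ('busy', False): 'idle', ('busy', True): 'busy'}
out = {'idle': {'grant': False}, 'busy': {'grant': True}}
arbiter = Transducer('idle', delta, out)
trace = [arbiter.react(req)['grant'] for req in [False, True, True, False]]
```

A size-based complexity measure would count the two states here; the talk's point is that many further measures of such machines are worth optimizing during synthesis.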