MFCS 2025: 50TH INTERNATIONAL SYMPOSIUM ON MATHEMATICAL FOUNDATIONS OF COMPUTER SCIENCE
PROGRAM FOR MONDAY, AUGUST 25TH

09:00-10:00 Session 1
Location: 316 (Main room)
09:00
Lambdas, Transducers and MSO

ABSTRACT. This talk will revolve around the classical question: What kinds of maps from words to words or from trees to trees can be considered “well behaved” from an automata-theoretic point of view?

10:00-10:30 Coffee Break
10:30-11:45 Session 2A
Location: 316 (Main room)
10:30
Computational complexity of covering regular trees

ABSTRACT. A graph covering projection, also referred to as a locally bijective homomorphism, is a mapping between the vertices and edges of two graphs that preserves incidences and is a local bijection. This concept originates in topological graph theory but has also found applications in combinatorics and theoretical computer science.

We prove that any graph obtained by adding semi-edges to the vertices of a tree with maximum degree $\Delta \ge 3$, making it a $d$-regular graph for $d \geq \Delta$, defines an NP-complete graph covering problem.
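
To make the covering condition concrete, here is a minimal sketch (ours, ignoring the semi-edges that the abstract adds on top) that checks whether a vertex map between two simple graphs, given as adjacency lists, is a locally bijective homomorphism; the example graphs and map are hypothetical.

    # Minimal sketch: check that a vertex map f from graph G to graph H is a
    # locally bijective homomorphism (a covering projection for simple graphs).
    # Graphs are adjacency lists; semi-edges, as used in the paper, are not modelled.
    def is_locally_bijective(G, H, f):
        for u, nbrs in G.items():
            image = [f[v] for v in nbrs]
            # homomorphism: every edge of G must map to an edge of H
            if any(w not in H[f[u]] for w in image):
                return False
            # local bijection: N_G(u) must map onto N_H(f(u)) one-to-one
            if sorted(image) != sorted(H[f[u]]):
                return False
        return True

    # Hypothetical example: the 6-cycle covers the triangle (vertices taken mod 3).
    G = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
    H = {i: [(i - 1) % 3, (i + 1) % 3] for i in range(3)}
    print(is_locally_bijective(G, H, {i: i % 3 for i in range(6)}))  # True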

10:55
Isometric-Universal Graphs for Trees

ABSTRACT. We consider the problem of finding the smallest graph that contains two input trees, each with at most $n$ vertices, while preserving their distances. In other words, we look for an isometric-universal graph with the minimum number of vertices for two given trees. We prove that this problem can be solved in time $O(n^{5/2}\log{n})$. We extend this result to forests instead of trees, and propose an algorithm with running time $O(n^{7/2}\log{n})$. As a key ingredient, we show that a smallest isometric-universal graph of two trees is essentially a tree. Furthermore, we prove that these results cannot be extended. Firstly, we show that deciding whether there exists an isometric-universal graph with $t$ vertices for three forests is NP-complete. Secondly, we show that any smallest isometric-universal graph cannot be a tree for some families of three trees. This latter result has implications for greedy strategies solving the smallest isometric-universal graph problem.
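
A minimal sketch of the isometry condition (our illustration): a map embeds a tree, or any graph, isometrically into a host graph exactly when it preserves all pairwise distances, which BFS can verify on unweighted graphs; a graph is then isometric-universal for two trees when both admit such embeddings. The adjacency-list encoding is a hypothetical choice.

    # Check that a map f embeds graph T isometrically into graph U, i.e.
    # dist_U(f(x), f(y)) == dist_T(x, y) for all vertex pairs x, y of T.
    from collections import deque

    def bfs_dist(G, s):
        dist = {s: 0}
        q = deque([s])
        while q:
            u = q.popleft()
            for v in G[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        return dist

    def is_isometric_embedding(T, U, f):
        for x in T:
            dT, dU = bfs_dist(T, x), bfs_dist(U, f[x])
            if any(dU.get(f[y]) != d for y, d in dT.items()):
                return False
        return True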

11:20
Supercritical Size-Width Tree-Like Resolution Trade-Offs for Graph Isomorphism

ABSTRACT. We study the refutation complexity of graph isomorphism in the tree-like resolution calculus. Torán and Wörz showed that there is a resolution refutation of narrow width k for two graphs if and only if they can be distinguished in (k+1)-variable first-order logic (FO^{k+1}). While DAG-like narrow width $k$ resolution refutations have size at most n^k, tree-like refutations may be much larger. We show that there are graphs of order n whose isomorphism can be refuted in narrow width k but only in tree-like size 2^{\Omega(n^{k/2})}. This is a \emph{supercritical} trade-off where bounding one parameter (the narrow width) causes the other parameter (the size) to grow above its worst case. The size lower bound is super-exponential in the formula size and improves a related supercritical trade-off by Razborov. To prove our result, we develop a new variant of the k-pebble EF-game for FO^k to reason about tree-like refutation size in a similar way to the Prover-Delayer games in proof complexity. We analyze this game on the compressed CFI graphs introduced by Grohe, Lichter, Neuen, and Schweitzer. Using a recent improved robust compressed CFI construction of de Rezende, Fleming, Janett, Nordström, and Pang, we obtain a similar bound for width k (instead of the stronger but less common narrow width) and make the result more robust.

10:30-11:45 Session 2B
10:30
On Expansions of Monadic Second-Order Logic with Dynamical Predicates

ABSTRACT. Expansions of the monadic second-order (MSO) theory of the structure [N;<] have been a fertile and active area of research ever since the publication of the seminal papers of Büchi and Elgot & Rabin on the subject in the 1960s. In the present paper, we establish decidability of the MSO theory of [N;<,P], where P ranges over a large class of unary "dynamical" predicates, i.e., sets of non-negative values assumed by certain integer linear recurrence sequences. One of our key technical tools is the novel concept of *(effective) weak pronormality*, which we expect may also find independent applications further afield.

10:55
On Large Zeros of Linear Recurrence Sequences

ABSTRACT. The Skolem Problem asks to determine whether a given integer linear recurrence sequence (LRS) has a zero term. This problem, whose decidability has been open for many decades, arises across a wide range of topics in computer science, including loop termination, formal languages, automata theory, and probabilistic model checking, amongst many others.

In the present paper, we introduce a notion of "large" zeros of linear recurrence sequences, i.e., zeros occurring at an index larger than a sixth-fold exponential of the size of the data defining the given LRS. We establish two main results. First, we show that large zeros are very sparse: the set of positive integers that can possibly arise as large zeros of some LRS has null density. This in turn immediately yields a Universal Skolem Set of density one, answering a question left open in the literature. Second, we define an infinite set of prime numbers, termed "good", having density one amongst all prime numbers, with the following property: for any large zero of a given LRS, there is an interval around the large zero together with an upper bound on the number of good primes possibly present in that interval. The bound in question is much lower than one would expect if good primes were similarly distributed as ordinary prime numbers, as per the Cramér model in number theory. We therefore conjecture that large zeros do not exist, which would entail decidability of the Skolem Problem.

11:20
Deciding Termination of Simple Randomized Loops

ABSTRACT. We show that universal positive almost sure termination (UPAST) is decidable for a class of simple randomized programs, i.e., it is decidable whether the expected runtime of such a program is finite for all inputs. Our class contains all programs that consist of a single loop, with a linear loop guard and a loop body composed of two linear commuting and diagonalizable updates. In each iteration of the loop, the update to be carried out is picked at random, according to a fixed probability. We show the decidability of UPAST for this class of programs, where the program’s variables and inputs may range over various sub-semirings of the real numbers. In this way, we extend a line of research initiated by Tiwari in 2004 into the realm of randomized programs.
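
A toy simulation of the program shape described above may help fix intuition: a single loop with a linear guard whose body applies one of two commuting, diagonalizable linear updates, chosen at random in each iteration. The guard, matrices, and probability below are hypothetical illustrations, not taken from the paper.

    # Simulate a simple randomized loop: while a linear guard holds, apply one of
    # two commuting, diagonalizable linear updates, chosen at random each iteration.
    import random

    A = [[0.5, 0.0], [0.0, 0.8]]   # diagonal, hence diagonalizable
    B = [[0.9, 0.0], [0.0, 0.5]]   # diagonal matrices commute with each other
    p = 0.5                        # probability of picking update A

    def apply_update(M, x):
        return [sum(M[i][j] * x[j] for j in range(len(x))) for i in range(len(x))]

    def run(x, max_steps=10_000):
        steps = 0
        while x[0] + x[1] > 1 and steps < max_steps:   # linear loop guard
            x = apply_update(A if random.random() < p else B, x)
            steps += 1
        return steps

    print(run([10.0, 10.0]))   # this toy instance terminates after a handful of steps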

11:45-14:15 Lunch Break
14:15-15:30 Session 3A
Location: 316 (Main room)
14:15
Kernelization in Almost Linear Time for Clustering into Bounded Vertex Cover Components

ABSTRACT. Motivated by the growing interest in graph clustering and the framework proposed during the Dagstuhl Seminar 23331, we consider a natural specialization of this general approach (as also suggested during the seminar). The seminar introduced a broad perspective on clustering, where the goal is to partition a graph into connected components (or ``clusters'') that satisfy simple structural integrity constraints—not necessarily limited to cliques.

In our work, we focus on the case where each cluster is required to have bounded vertex cover number. Specifically, a connected component \( C \) satisfies this condition if there exists a set \( S \subseteq V(C) \) with \( |S| \leq d \) such that \( C - S \) is an independent set. We study this within the framework of the following vertex-deletion problem: given a graph \( G \) and an integer \( k \), the task is to determine whether there exists a vertex set \( S \subseteq V(G) \) of size at most \( k \) such that every connected component of \( G - S \) has vertex cover number at most \( d \). We also examine the edge-deletion variant, where the goal is to delete at most \( k \) edges so that each connected component of the resulting graph has vertex cover number at most \( d \). We obtain the following results: (1) the vertex-deletion problem admits a kernel with \( \mathcal{O}(d^6k^3) \) vertices and \( \mathcal{O}(d^7k^4) \) edges; (2) the edge-deletion variant admits a kernel with \( \mathcal{O}(d^4k) \) vertices and \( \mathcal{O}(d^6k) \) edges. Both of our kernelization algorithms run in time \( \mathcal{O}\left(1.253^d \cdot (kd)^{\mathcal{O}(1)} \cdot n \log n\right) \). It is important to note that, unless the Exponential Time Hypothesis (ETH) fails, the dependence on \( d \) cannot be improved to \( 2^{o(d)} \), as the case \( k = 0 \) reduces to solving the classical \textsc{Vertex Cover} problem, which is known to require \( 2^{\Omega(d)} \) time under ETH.

A key ingredient in our kernelization algorithms is a structural result about the hereditary graph class \( \mathcal{G}_d \), consisting of graphs in which every connected component has vertex cover number at most \( d \). We show that \( \mathcal{G}_d \) admits a finite obstruction set (with respect to the induced subgraph relation) of size \( 2^{\mathcal{O}(d^2)} \), where each obstruction graph has at most \( 3d + 2 \) vertices. This combinatorial result may be of independent interest.
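
A brute-force sketch of the cluster condition defining \( \mathcal{G}_d \) (our illustration, exponential in \( d \) and intended only for intuition): every connected component must admit a vertex cover of size at most \( d \). The adjacency-dict encoding is a hypothetical choice.

    # Check that every connected component of G has a vertex cover of size <= d.
    from itertools import combinations

    def components(G):
        seen, comps = set(), []
        for s in G:
            if s in seen:
                continue
            comp, stack = set(), [s]
            while stack:
                u = stack.pop()
                if u not in comp:
                    comp.add(u)
                    stack.extend(G[u])
            seen |= comp
            comps.append(comp)
        return comps

    def has_vc_at_most(G, comp, d):
        edges = [(u, v) for u in comp for v in G[u] if u < v]
        return any(all(u in set(S) or v in set(S) for u, v in edges)
                   for r in range(d + 1) for S in combinations(comp, r))

    def in_G_d(G, d):
        return all(has_vc_at_most(G, c, d) for c in components(G))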

14:40
Quasipolynomial-Time Deterministic Kernelization and Gammoid Representation

ABSTRACT. In this paper, we suggest extending the notion of a kernel by permitting the kernelization algorithm to run in quasi-polynomial rather than polynomial time. So far, we are aware of only one work that addressed this question, and it did so negatively, showing that some lower bounds on kernel sizes proved for kernelization also hold when quasi-polynomial time complexity is allowed. Since we are dealing with an NP-hard problem anyway, sacrificing polynomial-time preprocessing for quasi-polynomial time may often not be a big deal, but, of course, the question is: does it give us more power? The only known work, mentioned above, seems to suggest that the answer is ``no''. In this paper, we show that this is not the case --- in particular, we show that this notion is extremely powerful for derandomization. Some of the most basic kernelization algorithms in the field are based on inherently randomized tools whose derandomization is a major problem that has remained (and may still remain) open for many decades. Still, some breakthrough advances for derandomization in quasi-polynomial time have been made. Can we harness these advancements to design quasi-polynomial deterministic kernelization algorithms for basic problems in the field?

To this end, we revisit the question of deterministic polynomial-time computation of a linear representation of transversal matroids and gammoids, which is a longstanding open problem. We present a deterministic computation of a representation matrix of a transversal matroid in time quasipolynomial in the rank of the matroid, where each entry of the matrix can be represented in quasipolynomial (in the rank of the matroid) bits. As a corollary, we obtain a linear representation of a gammoid in deterministic quasipolynomial time and quasipolynomial bits in the size of the underlying ground set of the gammoid. In turn, as applications of our results, we present deterministic quasi-polynomial time kernels of polynomial size for several central problems in the field.
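
For context, the classical randomized tool alluded to above can be sketched as follows (the standard construction, not the paper's deterministic one): a transversal matroid given by a bipartite graph between "slots" and ground-set elements is represented, with high probability over a sufficiently large field, by the matrix carrying an independent random entry at every (slot, element) edge and zero elsewhere. The prime and the example graph below are hypothetical.

    # Randomized representation matrix of a transversal matroid: rows are slots,
    # columns are ground-set elements, with a random nonzero field element wherever
    # the bipartite graph has an edge and zero elsewhere.
    import random

    def random_representation(num_slots, edges, prime=(1 << 31) - 1):
        num_elems = 1 + max(e for _, e in edges)
        return [[random.randrange(1, prime) if (s, e) in edges else 0
                 for e in range(num_elems)]
                for s in range(num_slots)]

    # Hypothetical example: element 1 is adjacent to both slots, elements 0 and 2 to one each.
    M = random_representation(2, {(0, 0), (0, 1), (1, 1), (1, 2)})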

15:05
Elimination Distance to Dominated Clusters

ABSTRACT. In the Dominated Cluster Deletion problem we are given an undirected graph G and integers k and d and the question is to decide whether there exists a set of at most k vertices whose removal results in a graph in which each connected component has a dominating set of size at most d. In the Elimination Distance to Dominated Clusters problem we are again given an undirected graph G and integers k and d and the question is to decide whether we can recursively delete vertices up to depth k such that each remaining connected component has a dominating set of size at most d.

Bentert et al. [Bentert et al., MFCS 2024] recently provided an almost complete classification of the parameterized complexity of Dominated Cluster Deletion with respect to the parameters k, d, c, and \Delta, where c and \Delta are the degeneracy and the maximum degree of the input graph, respectively. In particular, they provided a non-uniform algorithm with running time f(k,d) · n^O(d). They left as an open problem whether the problem is fixed-parameter tractable with respect to the parameter (k+d+c). We provide a uniform algorithm running in time f(k,d) · n^O(d) for both Dominated Cluster Deletion and Elimination Distance to Dominated Clusters. We furthermore show that both problems are FPT when parameterized by (k+d+l), where l is the semi-ladder index of the input graph, a parameter that is upper bounded by, and may be much smaller than, the degeneracy c, positively answering the open question of Bentert et al. We almost complete the picture by providing an almost full classification of the parameterized complexity and kernelization complexity of Elimination Distance to Dominated Clusters. The one difficult base case that remains open is whether Treedepth (the case d=0) is NP-hard on graphs of bounded maximum degree.
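
A brute-force sketch of the recursive condition behind Elimination Distance to Dominated Clusters (our illustration, exponential time, for intuition only): every connected component either has a dominating set of size at most d, or some single vertex can be deleted so that what remains is handled within depth k - 1.

    # Recursive brute-force check of elimination distance to dominated clusters.
    from itertools import combinations

    def components(G):
        seen, comps = set(), []
        for s in G:
            if s in seen:
                continue
            comp, stack = set(), [s]
            while stack:
                u = stack.pop()
                if u not in comp:
                    comp.add(u)
                    stack.extend(G[u])
            seen |= comp
            comps.append(comp)
        return comps

    def has_small_domset(G, comp, d):
        def dominates(S):
            return all(v in S or any(u in S for u in G[v]) for v in comp)
        return any(dominates(set(S)) for r in range(d + 1) for S in combinations(comp, r))

    def induced(G, keep):
        return {v: [u for u in G[v] if u in keep] for v in keep}

    def elim_depth_ok(G, k, d):
        for comp in components(G):
            if has_small_domset(G, comp, d):
                continue
            if k == 0:
                return False
            sub = induced(G, comp)
            if not any(elim_depth_ok(induced(sub, comp - {v}), k - 1, d) for v in comp):
                return False
        return True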

14:15-15:30 Session 3B
14:15
Temporal Valued Constraint Satisfaction Problems

ABSTRACT. We study the computational complexity of the \emph{valued} constraint satisfaction problem (VCSP) for every valued structure over ${\mathbb Q}$ that is preserved by all order-preserving bijections. Such VCSPs will be called \emph{temporal}, in analogy to the (classical) constraint satisfaction problem: a relational structure is preserved by all order-preserving bijections if and only if all its relations have a first-order definition in $({\mathbb Q};<)$, and the CSPs for such structures are called \emph{temporal} CSPs. Many optimization problems that have been studied intensively in the literature can be phrased as a temporal VCSP. We prove that every temporal VCSP is in P or is NP-complete. Our analysis uses the concept of \emph{fractional polymorphisms}. This is the first dichotomy result for VCSPs over infinite domains which is complete in the sense that it treats \emph{all} valued structures that contain a given automorphism group.
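
One illustrative example of a temporal valued structure (our example, not necessarily one from the paper) is the cost function over ${\mathbb Q}$ that charges one unit for every violated precedence constraint:
\[
f_<(x,y) \;=\; \begin{cases} 0 & \text{if } x < y,\\ 1 & \text{otherwise.} \end{cases}
\]
Its values depend only on the order type of $(x,y)$, so $f_<$ is preserved by all order-preserving bijections of ${\mathbb Q}$, and the associated VCSP asks for an assignment over ${\mathbb Q}$ minimizing the number of violated order constraints.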

14:40
Three Fundamental Questions in Modern Infinite-Domain Constraint Satisfaction

ABSTRACT. The Feder-Vardi dichotomy conjecture for Constraint Satisfaction Problems (CSPs) with finite templates, confirmed independently by Bulatov and Zhuk, has an extension to certain well-behaved infinite templates due to Bodirsky and Pinsker which remains wide open. We provide answers to three fundamental questions on the scope of the Bodirsky-Pinsker conjecture. Our first two main results provide two simplifications of this scope, one structural and the other algebraic in nature. The former simplification implies that the conjecture is equivalent to its restriction to templates without algebraicity, a crucial assumption in the most powerful classification methods. The latter yields that the higher-arity invariants of any template within its scope can be assumed to be essentially injective, and any algebraic condition characterizing any complexity class within the conjecture closed under Datalog reductions must be satisfiable by injections, thus resolving the mystery of why certain conditions apply better than others. Our third main result uses the first one to show that any non-trivially tractable template within the scope serves, up to a Datalog-computable modification of it, as the witness of the tractability of a non-finitely tractable finite-domain Promise Constraint Satisfaction Problem (PCSP) by the so-called sandwich method. This generalizes a recent result of Mottet and provides a strong hitherto unknown connection between the Bodirsky-Pinsker conjecture and finite-domain PCSPs.

15:05
Quantum Relaxations of CSP and Structure Isomorphism

ABSTRACT. We investigate quantum relaxations of two key decision problems in computer science: the constraint satisfaction problem (CSP) and the structure isomorphism problem. CSP asks whether a homomorphism exists between two relational structures, while structure isomorphism seeks an isomorphism between them. In recent years, it has become increasingly apparent that many special cases of CSP can be reformulated in terms of the existence of perfect classical strategies in non-local games, a key topic of study in quantum information theory. These games have allowed us to study quantum advantage in relation to many important decision problems, such as the $k$-colouring problem, and the problem of solving binary constraint systems. Abramsky et al. (2017) have shown that all of these games can be seen as special instances of a non-local CSP game. Moreover, they show that perfect quantum strategies in this CSP game can be viewed as Kleisli morphisms of a graded monad on the category of relational structures, which they dub the quantum monad. In this way, the quantum monad provides a categorical characterisation of quantum advantage for the non-local CSP game.

In this work we solidify and expand the results of Abramsky et al., answering several of their open questions. Firstly, we compare the definition of quantum graph homomorphisms arising from this work with an earlier definition of the concept due to Mančinska and Roberson and show that there are graphs which exhibit quantum advantage under one definition but not the other. Our second contribution is to extend the results of Abramsky et al. which only hold in the tensor product framework of quantum mechanics to the commuting operator framework. Next, we study a non-local structure isomorphism game, which generalises the well-studied graph isomorphism game. We show how the construction of the quantum monad can be refined to provide categorical semantics for quantum strategies in this game. This results in a category where morphisms coincide with quantum homomorphisms and isomorphisms coincide with quantum isomorphisms.

15:30-16:00 Coffee Break
16:00-17:15 Session 4A
Location: 316 (Main room)
16:00
Solving Partial Dominating Set and Related Problems Using Twin-Width

ABSTRACT. Partial vertex cover and partial dominating set are two well-investigated optimization problems. While they are $\rm W[1]$-hard on general graphs, they have been shown to be fixed-parameter tractable on many sparse graph classes, including nowhere-dense classes. In this paper, we demonstrate that these problems are also fixed-parameter tractable with respect to the twin-width of a graph. Indeed, we establish a more general result: every graph property that can be expressed by a logical formula of the form $\phi\equiv\exists x_1\ldots \exists x_k \#y\,\psi(x_1,\ldots,x_k,y)\ge t$, where $\psi$ is a quantifier-free formula, $t$ is an arbitrary number, and $\#y$ is a counting quantifier, can be evaluated in time $f(d,k)n$, where $n$ is the number of vertices and $d$ is the width of a contraction sequence that is part of the input. Notably, this includes problems such as connected partial dominating set and independent partial dominating set.
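
As an illustration of this fragment (our rendering of a problem the abstract itself mentions), partial dominating set with solution size $k$ and coverage target $t$ can be expressed as
\[
\exists x_1 \ldots \exists x_k\;\#y\,\Bigl(\bigvee_{i=1}^{k} \bigl(y = x_i \lor E(y,x_i)\bigr)\Bigr) \;\ge\; t,
\]
which asks for $k$ vertices that together dominate at least $t$ vertices; the formula after the counting quantifier is quantifier-free, as required.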

16:25
A Note on the Complexity of Defensive Domination

ABSTRACT. In a graph G, a k-attack A is any set of at most k vertices, and an ℓ-defense D is any set of at most ℓ vertices. We say that defense D counters attack A if each a ∈ A can be matched to a distinct defender d ∈ D with a equal to d or a adjacent to d in G. In the defensive domination problem, we are interested in deciding, for a graph G and positive integers k and ℓ given on input, whether there exists an ℓ-defense that counters every possible k-attack on G. Defensive domination is a natural resource allocation problem and can be used to model network robustness and security, disaster response strategies, and redundancy designs.

The defensive domination problem is naturally in the complexity class $\Sigma_2^P$. The problem was known to be NP-hard in general, and polynomial-time algorithms were found for some restricted graph classes. In this note we prove that the defensive domination problem is $\Sigma_2^P$-complete. We also introduce a natural variant of the defensive domination problem in which the defense is allowed to be a multiset of vertices. This variant is also $\Sigma_2^P$-complete, but we show that it admits a polynomial-time algorithm in the class of interval graphs. A similar result was known for the original setting in the class of proper interval graphs.
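
A small sketch of the countering condition from the first paragraph (our rendering): the attackers must be matched to distinct defenders, each attacker to itself or to a neighbour, which a standard augmenting-path bipartite matching decides. The example graph and sets are hypothetical.

    # Does defense D counter attack A? Match each attacker to a distinct defender
    # that equals it or is adjacent to it, via augmenting paths (Kuhn's algorithm).
    def counters(G, attack, defense):
        def ok(a, d):
            return a == d or d in G[a]
        match = {}                     # defender -> attacker currently assigned to it

        def try_assign(a, seen):
            for d in defense:
                if ok(a, d) and d not in seen:
                    seen.add(d)
                    if d not in match or try_assign(match[d], seen):
                        match[d] = a
                        return True
            return False

        return all(try_assign(a, set()) for a in attack)

    # Hypothetical example: on the path 0-1-2, defender {1} counters any single attack,
    # but not the 2-attack {0, 2}.
    G = {0: [1], 1: [0, 2], 2: [1]}
    print(counters(G, {0}, {1}), counters(G, {0, 2}, {1}))  # True False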

16:50
Parameterized Spanning Tree Congestion

ABSTRACT. In this paper we study the \textsc{Spanning Tree Congestion} problem, where we are given a graph $G=(V,E)$ and are asked to find a spanning tree $T$ of minimum maximum \emph{congestion}. Here, the congestion of an edge $e\in T$ is the number of edges $uv\in E$ such that the (unique) path from $u$ to $v$ in $T$ traverses $e$. We consider this well-studied NP-hard problem from the point of view of (structural) parameterized complexity and obtain the following results:

- We resolve a natural open problem by showing that \textsc{Spanning Tree Congestion} is not FPT parameterized by treewidth (under standard assumptions). More strongly, we present a generic reduction which applies to (almost) any parameter of the form ``vertex-deletion distance to class $\mathcal{C}$'', thus obtaining W[1]-hardness for parameters more restricted than treewidth, including tree-depth plus feedback vertex set, or incomparable to treewidth, such as twin cover. Via a slight tweak of the same reduction we also show that the problem is NP-complete on graphs of modular-width $4$.

- Even though it is known that \textsc{Spanning Tree Congestion} remains NP-hard on instances with only one vertex of unbounded degree, it was previously open whether the problem remains hard on bounded-degree graphs. We resolve this question by showing NP-hardness on graphs of maximum degree $8$.

- Complementing the problem's W[1]-hardness for treewidth, we formulate an algorithm that runs in time roughly ${(k+w)}^{\mathcal{O}(w)}$, where $k$ is the desired congestion and $w$ the treewidth, improving a previous argument for parameter $k+w$ that was based on Courcelle's theorem. This explicit algorithm pays off in two ways: it allows us to obtain an FPT approximation scheme for parameter treewidth, that is, a $(1+\varepsilon)$-approximation running in time roughly ${(w/\varepsilon)}^{\mathcal{O}(w)}$; and it leads to an exact FPT algorithm for parameter clique-width plus $k$ via a Win/Win argument.

- Finally, motivated by the problem's hardness for most standard structural parameters, we present FPT algorithms for several more restricted cases, namely for the parameters vertex-deletion distance to clique, vertex integrity, and feedback edge set, in the latter case also achieving a single-exponential running time dependence on the parameter.
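
A short sketch of the congestion measure defined above (our illustration): every edge of $G$ loads each tree edge on the unique path in $T$ between its endpoints, and the congestion of $T$ is the maximum load over tree edges. Adjacency-dict graphs and a set of tree edges are hypothetical encoding choices.

    # Compute the congestion of a spanning tree T (given by its edge set) of G.
    from collections import deque

    def congestion(G, tree_edges):
        T = {v: [] for v in G}
        for u, v in tree_edges:
            T[u].append(v)
            T[v].append(u)
        root = next(iter(G))
        parent, depth = {root: None}, {root: 0}
        q = deque([root])
        while q:                                   # BFS to orient the tree
            u = q.popleft()
            for v in T[u]:
                if v not in parent:
                    parent[v], depth[v] = u, depth[u] + 1
                    q.append(v)
        load = {frozenset(e): 0 for e in tree_edges}
        for u in G:
            for v in G[u]:
                if u < v:                          # consider each edge of G once
                    a, b = u, v
                    while a != b:                  # walk both endpoints up to their LCA
                        if depth[a] < depth[b]:
                            a, b = b, a
                        load[frozenset((a, parent[a]))] += 1
                        a = parent[a]
        return max(load.values())

    # Hypothetical example: a 4-cycle with a chord, spanned by the star at vertex 0.
    G = {0: [1, 2, 3], 1: [0, 2], 2: [0, 1, 3], 3: [0, 2]}
    print(congestion(G, {(0, 1), (0, 2), (0, 3)}))  # 3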

16:00-17:15 Session 4B
16:00
Positional-Player Games

ABSTRACT. In reactive synthesis, we transform a specification to a system that satisfies the specification in all environments. For specifications in linear-temporal logic, research on bounded synthesis, where the sizes of the system and the environment are bounded, captures realistic settings and has led to algorithms of improved complexity and implementability. In the game-based approach to synthesis, the system and its environment are modeled by strategies in a two-player game with an ω-regular objective, induced by the specification. There, bounded synthesis corresponds to bounding the memory of the strategies of the players. The memory requirement for various objectives has been extensively studied. In particular, researchers have identified positional objectives, where the winning player can follow a memoryless strategy -- one that needs no memory.

In this work we study bounded synthesis in the game setting. Specifically, we define and study positional-player games, in which one or both players are restricted to memoryless strategies, which correspond to non-intrusive control in various applications. We study positional-player games with Rabin, Streett, and Muller objectives, as well as with weighted multiple Büchi and reachability objectives. Our contribution covers their theoretical properties as well as a complete picture of the complexity of deciding the game in the various settings.

16:25
Games with ω-Automatic Preference Relations

ABSTRACT. This paper investigates Nash equilibria (NEs) in multi-player turn-based games on graphs, where player preferences are modeled as ω-automatic relations via deterministic parity automata. Unlike much of the existing literature, which focuses on specific reward functions, our results apply to any preference relation definable by an ω-automatic relation. We analyze the computational complexity of determining the existence of an NE (possibly under some constraints), verifying whether a given strategy profile forms an NE, and checking whether a specific outcome can be realized by an NE. When a (constrained) NE exists, we show that there always exists one with finite-memory strategies. Finally, we explore fundamental properties of ω-automatic relations and their implications in the existence of equilibria.

16:50
Deciding regular games: a playground for exponential time algorithms

ABSTRACT. Regular games form a well-established class of games for analysis and synthesis of reactive systems. They include colored Muller games, McNaughton games, Muller games, Rabin games, and Streett games. These games are played on directed graphs $\mathcal G$ where Player 0 and Player 1 play by generating an infinite path $\rho$ through the graph. The winner is determined by specifications put on the set $X$ of vertices in $\rho$ that occur infinitely often. These games are determined, enabling the partitioning of $\mathcal G$ into two sets $Win_0$ and $Win_1$ of winning positions for Player 0 and Player 1, respectively. Numerous algorithms exist that decide instances of regular games, e.g., Muller games, by computing $Win_0$ and $Win_1$. In this paper we aim to find general principles for designing uniform algorithms that decide all regular games. For this we utilize various recursive and dynamic programming algorithms that leverage standard notions such as subgames and traps. Importantly, we show that our techniques improve or match the performances of existing algorithms for many instances of regular games.
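
As a hedged illustration of one of the standard notions mentioned above, the following sketch computes an attractor: the set of positions from which a given player can force the play into a target set; its complement is a trap for that player. The graph encoding and the treatment of dead ends are hypothetical choices.

    # Attractor of `player` to `target` in a game graph: owner[v] says who moves at v,
    # edges[v] lists the successors of v.
    def attractor(vertices, edges, owner, player, target):
        attr = set(target)
        changed = True
        while changed:
            changed = False
            for v in vertices:
                if v in attr:
                    continue
                succs = edges[v]
                if (owner[v] == player and any(w in attr for w in succs)) or \
                   (owner[v] != player and succs and all(w in attr for w in succs)):
                    attr.add(v)
                    changed = True
        return attr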

17:30-20:00 Welcome reception in the University of Warsaw Library.