## Editors: Marc Vinyals, Olaf Beyersdorff and Jan Johannsen

Authors, Title and Abstract | Paper | Talk |
---|---|---|

ABSTRACT. CSP (the constraint satisfaction problem) is a class of problems of deciding whether there exists a homomorphism from an instance relational structure to a target one. The CSP dichotomy is a profound result recently proved independently by Zhuk and Bulatov. It says that for each fixed target structure, CSP is either NP-complete or solvable in polynomial time. Zhuk's algorithm solves CSP in polynomial time for constraint languages having a weak near-unanimity polymorphism. For negative instances of p-time CSPs it is reasonable to explore their proof complexity. We show that the soundness of Zhuk's algorithm can be proved in a theory of bounded arithmetic, namely in the theory $V^1$ augmented by three special universal algebra axioms. | Jul 31 14:30 | |
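The homomorphism view of CSP can be made concrete with a small brute-force sketch (the encoding and names below are illustrative, not from the paper). With the target structure $K_2$, a single symmetric binary relation on two elements, the CSP is exactly graph 2-colourability:

```python
from itertools import product

def has_homomorphism(instance_edges, instance_verts, target_edges, target_verts):
    """Brute-force test: is there a map h from the instance vertices to the
    target vertices such that every instance edge (u, v) lands on a target edge?"""
    for image in product(target_verts, repeat=len(instance_verts)):
        h = dict(zip(instance_verts, image))
        if all((h[u], h[v]) in target_edges for (u, v) in instance_edges):
            return True
    return False

# CSP with target K_2 (one symmetric edge relation) is graph 2-colouring.
K2 = {(0, 1), (1, 0)}
path = [("a", "b"), ("b", "c")]                   # a path: 2-colourable
triangle = [("a", "b"), ("b", "c"), ("c", "a")]   # an odd cycle: not 2-colourable

print(has_homomorphism(path, ["a", "b", "c"], K2, [0, 1]))      # True
print(has_homomorphism(triangle, ["a", "b", "c"], K2, [0, 1]))  # False
```

The dichotomy concerns exactly this parameterisation: the target ($K_2$ here) is fixed, and only the instance varies.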

ABSTRACT. We study the complexity of proof systems augmenting resolution with inference rules that allow, given a formula F in conjunctive normal form, deriving clauses that are not necessarily logically implied by F but whose addition to F preserves satisfiability. When the derived clauses are allowed to introduce variables not occurring in F, the systems we consider become equivalent to extended resolution. We are concerned with the versions of these systems "without new variables". They are called BC-, RAT-, SBC- and GER-, denoting respectively blocked clauses, resolution asymmetric tautologies, set-blocked clauses, and generalized extended resolution. Except for SBC-, these systems are known to be exponentially weaker than extended resolution. They are, however, all equivalent to it under a relaxed notion of simulation that allows the translation of the formula along with the proof when moving between proof systems. By taking advantage of this fact, we construct formulas that exponentially separate RAT- from GER- and vice versa. With the same strategy, we also separate SBC- from RAT-. Additionally, we give polynomial-size SBC- proofs of the pigeonhole principle, which separates SBC- from GER- by a previously known lower bound. These results also separate the three systems from BC- since they all simulate it. We thus give an almost complete picture of their relative strengths. Along the way, we prove a partial simulation of RAT- by BC- that is to our knowledge the only example of a nontrivial simulation in proof complexity that cannot necessarily be carried out in time polynomial in the size of the produced proof, which highlights the semantic nature of these systems. | Aug 01 17:00 | |
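As a point of reference for the BC- system, here is a minimal Python sketch of the standard blocked-clause condition (the integer-literal encoding is my own choice, not the paper's): a clause $C$ is blocked on a literal $l \in C$ if every resolvent of $C$ on $l$ with a clause of the formula containing $\lnot l$ is a tautology, which is why adding or removing $C$ preserves satisfiability:

```python
def is_tautology(clause):
    """A clause is a tautology if it contains a literal and its negation."""
    return any(-lit in clause for lit in clause)

def is_blocked(clause, lit, formula):
    """Is `clause` blocked on `lit` with respect to CNF `formula`?
    Literals are nonzero ints; -x denotes the negation of x.
    Clauses are frozensets of literals."""
    assert lit in clause
    for other in formula:
        if other is clause or -lit not in other:
            continue
        resolvent = (clause - {lit}) | (other - {-lit})
        if not is_tautology(resolvent):
            return False
    return True

# In F = {{x1, x2}, {-x1, -x2}}, the clause {x1, x2} is blocked on x1:
# its only resolvent on x1 is {x2, -x2}, a tautology.
F = [frozenset({1, 2}), frozenset({-1, -2})]
print(is_blocked(frozenset({1, 2}), 1, F))  # True
```

RAT and set-blocked clauses generalise this check (RAT weakens the tautology condition to asymmetric tautology), but the "no new variables" restriction studied in the abstract applies uniformly.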

ABSTRACT. We consolidate two widely believed conjectures about tautologies---no optimal proof system exists, and most require superpolynomial size proofs in any system---into a $p$-isomorphism-invariant condition satisfied by all paddable $\textbf{coNP}$-complete languages or none. The condition is: for any Turing machine (TM) $M$ accepting the language, $\textbf{P}$-uniform input families requiring superpolynomial time by $M$ exist (equivalent to the first conjecture) and appear with positive upper density in an enumeration of input families (implies the second). In that case, no such language is easy on average (in $\textbf{AvgP}$) for a distribution applying non-negligible weight to the hard families. The hardness of proving tautologies and theorems is likely related. Motivated by the fact that arithmetic sentences encoding ``string $x$ is Kolmogorov random'' are true but unprovable with positive density in a finitely axiomatized theory $\mathcal{T}$ (Calude and J{\"u}rgensen), we conjecture that any propositional proof system requires superpolynomial size proofs for a dense set of $\textbf{P}$-uniform families of tautologies encoding ``there is no $\mathcal{T}$ proof of size $\leq t$ showing that string $x$ is Kolmogorov random''. This implies the above condition. The conjecture suggests that there is no optimal proof system because undecidable theories help prove tautologies and do so more efficiently as axioms are added, and that constructing hard tautologies seems difficult because it is impossible to construct Kolmogorov random strings. Similar conjectures that computational blind spots are manifestations of noncomputability would resolve other open problems. | Aug 01 14:00 | |

ABSTRACT. A canonical communication problem $\mathrm{Search}_{\phi}$ is defined for every unsatisfiable CNF $\phi$: an assignment to the variables of $\phi$ is distributed among the communicating parties, who must find a clause of $\phi$ falsified by this assignment. Lower bounds on the randomized $k$-party communication complexity of $\mathrm{Search}_{\phi}$ in the number-on-forehead (NOF) model imply tree-size lower bounds, rank lower bounds, and size-space tradeoffs for the formula $\phi$ in the semantic proof system $\mathrm{T}^{cc}(k,c)$ that operates with proof lines that can be computed by a $k$-party randomized communication protocol using at most $c$ bits of communication [Göös, Pitassi, 2014]. All known lower bounds on $\mathrm{Search}_{\phi}$ (e.g. [Beame, Pitassi, Segerlind, 2007], [Göös, Pitassi, 2014], [Impagliazzo, Pitassi, Urquhart, 1994]) are realized on ad-hoc formulas $\phi$ (i.e. formulas introduced specifically for these lower bounds). We introduce a new communication complexity approach that allows establishing proof complexity lower bounds for natural formulas. First, we demonstrate our approach for two-party communication and apply it to the proof system $\mathrm{Res}(\oplus)$ that operates with disjunctions of linear equalities over $\mathbb{F}_2$ [Itsykson, Sokolov, 2014]. Let a formula $\mathrm{PM}_G$ encode that a graph $G$ has a perfect matching. If $G$ has an odd number of vertices, then $\mathrm{PM}_{G}$ has a tree-like $\mathrm{Res}(\oplus)$-refutation of polynomial size [Itsykson, Sokolov, 2014]. It was unknown whether this is the case for graphs with an even number of vertices. Using our approach we resolve this question and show a $2^{\Omega(n)}$ lower bound on the size of tree-like $\mathrm{Res}(\oplus)$-refutations of $\mathrm{PM}_{K_{n+2,n}}$.
Then we apply our approach to $k$-party communication complexity in the NOF model and obtain an $\Omega\left(\frac{1}{k} 2^{n/2k - 3k/2}\right)$ lower bound on the randomized $k$-party communication complexity of $\mathrm{Search}_{\mathrm{BPHP}^{M}_{2^n}}$ w.r.t. some natural partition of the variables, where $\mathrm{BPHP}^{M}_{2^n}$ is the bit pigeonhole principle and $M=2^n+2^{n(1-1/k)}$. In particular, our result implies that the bit pigeonhole principle requires exponential-size tree-like $\mathrm{Th}(k)$ proofs, where $\mathrm{Th}(k)$ is the semantic proof system operating with polynomial inequalities of degree at most $k$ and $k = O(\log^{1-\epsilon} n)$ for some $\epsilon > 0$. We also show that $\mathrm{BPHP}^{2^n+1}_{2^n}$ superpolynomially separates tree-like $\mathrm{Th}(\log^{1-\epsilon} m)$ from tree-like $\mathrm{Th}(\log m)$, where $m$ is the number of variables in the refuted formula. The talk is based on the paper published in Proceedings of CCC-2021. | Jul 31 15:00 | |
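Setting the communication aspect aside, the underlying search problem is easy to state centrally; a minimal sketch of $\mathrm{Search}_{\phi}$ for a fully known assignment (integer-literal encoding and the toy formula are my own illustration; in the communication setting the assignment is split among the parties):

```python
def search_phi(formula, assignment):
    """Falsified-clause search: given a total assignment to an
    unsatisfiable CNF, return some clause it falsifies.
    Literals are nonzero ints; assignment maps variable -> bool."""
    def value(lit):
        v = assignment[abs(lit)]
        return v if lit > 0 else not v
    for clause in formula:
        if not any(value(lit) for lit in clause):
            return clause
    raise ValueError("assignment satisfies the formula")

# A tiny pigeonhole-style example: x_i = "pigeon i sits in the single hole".
phi = [[1], [2], [-1, -2]]      # both pigeons placed, but no collision allowed
print(search_phi(phi, {1: True, 2: True}))   # [-1, -2] is falsified
```

For an unsatisfiable $\phi$ some clause is always falsified, so the function never raises; the proof-complexity connection comes from how few bits of communication suffice to locate that clause when no single party sees the whole assignment.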

ABSTRACT. The problem of the existence of a p-optimal propositional proof system is one of the central open problems in proof complexity. The goal of this work is to study a restricted case of this problem, namely the case of strong reductions. We also introduce the notion of a bounded proof system and study the connection between optimality and automatizability for bounded proof systems. | Aug 01 14:30 | |

ABSTRACT. Quantified conflict-driven clause learning (QCDCL) is one of the main approaches for solving quantified Boolean formulas (QBF). We formalise and investigate several versions of QCDCL that include cube learning and/or pure-literal elimination, and formally compare the resulting solving models via proof complexity techniques. Our results show that almost all of the QCDCL models are exponentially incomparable with respect to proof size (and hence solver running time), pointing towards different orthogonal ways how to practically implement QCDCL. | Aug 01 10:00 | |
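For orientation, here is a sketch of one round of plain propositional pure-literal elimination (encoding my own; note that the QBF variant used in QCDCL additionally handles universal pure literals, which are set so as to *falsify* them, dually to the existential case shown here):

```python
def pure_literal_eliminate(formula):
    """One round of pure-literal elimination on a propositional CNF.
    A literal is pure if its negation occurs in no clause; every clause
    containing a pure literal can be deleted, preserving satisfiability.
    Literals are nonzero ints; -x denotes the negation of x."""
    lits = {lit for clause in formula for lit in clause}
    pure = {lit for lit in lits if -lit not in lits}
    reduced = [clause for clause in formula if not pure & set(clause)]
    return reduced, pure

F = [[1, -2], [2, 3], [-1, 2]]
reduced, pure = pure_literal_eliminate(F)
print(pure)      # {3}: the literal 3 occurs, -3 never does
print(reduced)   # [[1, -2], [-1, 2]]
```

Whether a QCDCL model includes this rule (and/or cube learning) is exactly the kind of design choice the abstract's proof-size separations distinguish.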

On vanishing sums of roots of unity in polynomial calculus and sum-of-squares (an extended abstract) ABSTRACT. Vanishing sums of roots of unity can be seen as a natural generalization of knapsack from Boolean variables to variables taking values over the roots of unity. We show that these sums are hard to prove for polynomial calculus and for sum-of-squares, both in terms of degree and size. | Aug 01 16:00 | |
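A quick numeric sanity check of the objects in question (my own illustration): the sum of all $n$-th roots of unity vanishes, as do sums over cosets of a proper subgroup, and the hardness results concern certifying such identities in polynomial calculus and sum-of-squares:

```python
import cmath

def root_of_unity(k, n):
    """The k-th power of the primitive n-th root of unity e^{2*pi*i/n}."""
    return cmath.exp(2j * cmath.pi * k / n)

# The full sum of all n-th roots of unity vanishes ...
n = 6
total = sum(root_of_unity(k, n) for k in range(n))
print(abs(total) < 1e-9)   # True

# ... as does a sum over a proper subgroup, e.g. the cube roots of unity
# sitting inside the 6th roots: 1 + w^2 + w^4 = 0 for w = e^{2*pi*i/6}.
subset = sum(root_of_unity(k, 6) for k in (0, 2, 4))
print(abs(subset) < 1e-9)  # True
```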

ABSTRACT. Hitting formulas are a peculiar class of propositional CNF formulas. Not only is their satisfiability decidable in polynomial time, but their models can even be counted in closed form. In contrast, other tractable classes, like 2-SAT and Horn-SAT, usually have algorithms based on resolution, and their model counting remains hard. On the flip side, those resolution-based algorithms usually yield an easy upper bound on resolution complexity, which is missing for hitting formulas. Are hitting formulas hard for resolution? In this paper we take the first steps towards answering this question. We show that the resolution complexity of hitting formulas is dominated by so-called irreducible hitting formulas. However, constructing large irreducible formulas is non-trivial, and it is not even known whether infinitely many exist. Building upon our theoretical results, we implement an efficient algorithm on top of the Nauty software package to enumerate all irreducible unsatisfiable hitting formulas with up to 14 clauses, and we determine the exact resolution complexity of those with up to 13 clauses by extending an existing SAT encoding. | Aug 01 15:00 | |
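The closed-form model count mentioned above follows from the defining property of hitting formulas: every two clauses clash in some variable, so the sets of assignments falsifying distinct clauses are disjoint. A small sketch with a brute-force cross-check (integer-literal encoding and the example formula are my own; the example happens to be an unsatisfiable hitting formula, matching the objects enumerated in the paper):

```python
from itertools import product

def hitting_model_count(formula, n):
    """Closed-form model count for a hitting CNF over variables 1..n:
    since any two clauses clash, their falsifying assignment sets are
    disjoint, so #models = 2^n - sum over clauses of 2^(n - |clause|)."""
    return 2**n - sum(2**(n - len(clause)) for clause in formula)

def brute_force_count(formula, n):
    """Reference count over all 2^n assignments, for cross-checking."""
    count = 0
    for bits in product([False, True], repeat=n):
        a = {i + 1: bits[i] for i in range(n)}
        if all(any(a[abs(l)] == (l > 0) for l in c) for c in formula):
            count += 1
    return count

# A hitting formula over x1..x3: check that every pair of clauses clashes.
H = [[1, 2], [-1, 3], [-1, -3], [1, -2]]
print(hitting_model_count(H, 3), brute_force_count(H, 3))  # 0 0 (unsatisfiable)
```

For a satisfiable hitting formula such as `[[1, 2], [-1, -2]]` over two variables, the same formula gives $2^2 - 2^1 - 2^1 = 2$ models, again matching brute force.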