Interactions Between Proof Complexity and Finite Model Theory (part 1)

ABSTRACT. The main objects of study in proof complexity are formal proofs of the unsatisfiability of propositional formulas, whereas finite model theory focuses on the expressibility and complexity of logics on finite structures.

Although their objects of study differ, several connections between the two areas have been unveiled, and techniques from one area have been fruitfully applied to make progress in the other.

In this tutorial, I will give an introduction to propositional proof complexity, discuss some of the connections to finite model theory that have been established so far, and present results that have been obtained using these interactions.
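The central proof system in an introduction to propositional proof complexity is resolution. As a small illustration (my own sketch, not material from the tutorial), the snippet below represents clauses as frozensets of integer literals, with a negative integer denoting a negated variable, and derives the empty clause from a tiny unsatisfiable CNF:

```python
# Minimal sketch of the resolution proof system. A clause is a frozenset
# of integer literals (negative = negated variable). Resolving two
# clauses on variable v merges them and removes the complementary pair.

def resolve(c1, c2, v):
    """Resolve c1 (containing v) with c2 (containing -v)."""
    assert v in c1 and -v in c2
    return (c1 - {v}) | (c2 - {-v})

# Unsatisfiable CNF: (x) AND (NOT x OR y) AND (NOT y), with x=1, y=2.
c1, c2, c3 = frozenset({1}), frozenset({-1, 2}), frozenset({-2})

step1 = resolve(c1, c2, 1)     # derives {y}
step2 = resolve(step1, c3, 2)  # derives the empty clause: a refutation
print(step1, step2)
```

A resolution refutation ends in the empty clause; proof complexity asks how long such refutations must be for families like the pigeonhole formulas.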

Interactions Between Proof Complexity and Finite Model Theory (part 2)


ABSTRACT. Probabilistic programming is in vogue. It is used to describe complex Bayesian networks, quantum programs, security protocols and biological systems. Programming languages like C, C#, Java, Prolog, Scala, etc. all have their probabilistic versions. Key features are random sampling and means to adjust distributions based on evidence from measurements and system observations. Through richer control-flow constructs, they allow for representing probabilistic graphical models far beyond the capabilities of Bayesian networks.
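The two key features named above, random sampling and conditioning on evidence, can be illustrated by a toy rejection sampler (a hypothetical example of mine, not tied to any particular probabilistic language): estimate the probability that the first of two fair coins is heads, given the evidence that at least one coin is heads.

```python
import random

# Sketch of sampling + conditioning via rejection sampling. The "observe"
# step discards program runs that violate the evidence; the surviving
# runs are distributed according to the conditional distribution.

def model():
    c1 = random.random() < 0.5   # sample coin 1
    c2 = random.random() < 0.5   # sample coin 2
    if not (c1 or c2):           # observe(c1 or c2): reject otherwise
        return None
    return c1

random.seed(0)
samples = [r for r in (model() for _ in range(100_000)) if r is not None]
print(sum(samples) / len(samples))   # close to 2/3
```

By Bayes' rule the conditional probability is 0.5 / 0.75 = 2/3, which the empirical estimate approaches.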

This tutorial focuses on elementary questions: semantics, termination, run-time analysis, and formal verification.
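Termination is one of the elementary questions listed above that becomes subtle in the probabilistic setting: a loop may have non-terminating runs and still terminate with probability 1, with a finite expected runtime. A small simulation sketch of my own (not from the tutorial):

```python
import random

# The loop below does not terminate on the all-heads coin-flip sequence,
# yet it terminates almost surely and its expected number of flips is 2
# (a geometric distribution with success probability 1/2).

def flips_until_tails():
    n = 0
    while random.random() < 0.5:   # keep looping on "heads"
        n += 1
    return n + 1                   # count the final "tails" flip too

random.seed(1)
runs = [flips_until_tails() for _ in range(100_000)]
print(sum(runs) / len(runs))       # close to 2
```

Formal verification of such programs replaces simulation with proof rules, e.g. for bounding expected runtimes.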

ABSTRACT. The celebrated theory of NP-hardness classifies problems into “easy” (in P) and “NP-hard” (probably not in P). However, while the easy problems do have polynomial-time algorithms, in many cases a polynomial runtime like O(n^2) or O(n^3) can be too slow. Many fundamental “easy” problems have resisted decades of attempts at truly fast algorithms, and in practice are typically solved by fast but potentially inaccurate algorithms. Such problems include Sequence Alignment, Parsing, Distance Computation in Graphs, and many other problems we encounter in basic algorithms courses. Can we formally explain the lack of faster algorithms and justify the use of heuristics, even for “easy” problems?
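Sequence Alignment is a concrete instance of an "easy" problem whose polynomial runtime can be too slow. As an illustration of mine (not part of the talk), the textbook dynamic program below computes edit distance in O(nm) time, i.e. quadratic for strings of similar length; fine-grained complexity gives evidence (under the Strong Exponential Time Hypothesis) that no strongly subquadratic algorithm exists.

```python
# Classic O(n*m) edit-distance dynamic program, using one rolling row.

def edit_distance(a: str, b: str) -> int:
    n, m = len(a), len(b)
    dp = list(range(m + 1))                 # distances from "" to b[:j]
    for i in range(1, n + 1):
        prev, dp[0] = dp[0], i              # prev holds dp[i-1][j-1]
        for j in range(1, m + 1):
            cur = dp[j]
            dp[j] = min(dp[j] + 1,          # delete a[i-1]
                        dp[j - 1] + 1,      # insert b[j-1]
                        prev + (a[i - 1] != b[j - 1]))  # match/substitute
            prev = cur
    return dp[m]

print(edit_distance("kitten", "sitting"))   # 3
```

On strings of length 10^5 this already takes billions of table updates, which is why faster-in-practice heuristics are used despite the problem being "easy".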

In this talk, I will give an overview of a theoretical framework for proving hardness for problems in P. Inspired by NP-hardness, this approach is based on reductions, which have uncovered fascinating structure among the different problems in P. We can now point to a small set of core problems and say: for dozens of problems in P, there is no hope of getting faster algorithms until we know how to solve the core problems faster.