09:00  All Questions Answered ABSTRACT. ZOOM link: https://technion.zoom.us/j/97111314813
During the past two years, the speaker has been drafting Section 7.2.2.3 of The Art of Computer Programming, which is intended to be a solid introduction to techniques for solving Constraint Satisfaction Problems. The CP 2022 conference is an excellent opportunity for him to get feedback from the leading experts on the subject, and so he was delighted to learn that the organizers were also interested in hearing a few words from him. Rather than giving a canned lecture, he much prefers to let the audience choose the topics, and for all questions to be kept a secret from him until the lecture is actually in progress. (He believes that people often learn more from answers that are spontaneously fumbled than from responses that are carefully preplanned.) Questions related to constraints will naturally be quite welcome, but questions on any subject whatsoever will not be ducked! He'll try to answer them all as best he can, without spending a great deal of time on any one topic, unless there is special interest to go into more depth. Meanwhile he hopes to have drafted some notes for circulation before the conference begins, in case some attendees might wish to focus some of their questions on expository material related to his forthcoming book, either during this session or informally afterwards. Warning: His least favorite questions have the form "What is your favorite X?" If you want to ask such questions, please try to do it cleverly so that he doesn't have to choose between different things that he loves in different ways. How to download the current draft of Section 7.2.2.3? You may have a look at the current draft at http://cs.stanford.edu/~knuth/fasc7a.ps.gz. Please note that this draft will hopefully be extended by a dozen or more pages by the time CP begins (the same link will be used). Don also points out that a reward of $2.56 per bug detected will be given to first responders, plus 32 cents per excellent suggestion for improvement. How to ask questions? 
If you wish to ask a question, please send it by email to christine.solnon@insa-lyon.fr before July 31, 2022.

09:00  PRESENTER: Marc Hermes ABSTRACT. Tennenbaum's theorem states that the only countable model of Peano arithmetic (PA) with computable arithmetical operations is the standard model of natural numbers. In this paper, we use constructive type theory as a framework to revisit and generalize this result. The chosen framework allows for a synthetic approach to computability theory, by exploiting the fact that, externally, all functions definable in constructive type theory can be shown computable. We internalize this fact by assuming a version of Church's thesis expressing that any function on natural numbers is representable by a formula in PA. This assumption allows for a conveniently abstract setup to carry out rigorous computability arguments and feasible mechanization. Concretely, we constructivize several classical proofs and present one inherently constructive rendering of Tennenbaum's theorem, all following arguments from the literature. Concerning the classical proofs in particular, the constructive setting allows us to highlight differences in their assumptions and conclusions which are not visible classically. All versions are accompanied by a unified mechanization in the Coq proof assistant. 
09:30  PRESENTER: Vincent Rahli ABSTRACT. Time-progressing expressions, i.e., expressions that compute to different values over time such as Brouwerian choice sequences or reference cells, are a common feature in many frameworks. For type theories to support such elements, they usually employ sheaf models. In this paper, we provide a general framework in the form of an extensional type theory incorporating various time-progressing elements along with a general possible-worlds forcing interpretation parameterized by modalities. The modalities can, in turn, be instantiated with topological spaces of bars, leading to a general sheaf model. This parameterized construction allows us to capture a distinction between theories that are “agnostic”, i.e., compatible with classical reasoning in the sense that classical axioms can be validated, and those that are “intuitionistic”, i.e., incompatible with classical reasoning in the sense that classical axioms can be proven false. This distinction is made via properties of the modalities selected to model the theory and consequently via the space of bars instantiating the modalities. We further identify a class of time-progressing elements that allows deriving “intuitionistic” theories that include not only choice sequences but also simpler operators, namely reference cells. 
10:00  PRESENTER: Samuel Mimram ABSTRACT. Natural numbers are isomorphism classes of finite sets, and one can look for operations on sets which, after quotienting, allow recovering traditional arithmetic operations. Moreover, from a constructivist perspective, it is interesting to study whether those operations can be performed without resorting to the axiom of choice (the use of classical logic is usually necessary). Following the work of Bernstein, Sierpiński, Doyle and Conway, we study here "division by two" (or, rather, regularity of multiplication by two). We provide a full formalization of this operation on sets, using the cubical variant of Agda, which is an implementation of the homotopy type theory setting, thus revealing some interesting points in the proof. Moreover, we show that this construction extends to general types, as opposed to sets. 
09:00  On Syntactic Forgetting under Uniform Equivalence PRESENTER: Matthias Knorr ABSTRACT. Forgetting in Answer Set Programming (ASP) aims at reducing the language of a logic program without affecting the consequences over the remaining language. It has recently gained interest in the context of modular ASP where it allows simplifying a program of a module, making it more declarative, by omitting auxiliary atoms or hiding certain atoms/parts of the program not to be disclosed. Unlike for arbitrary programs, it has been shown that forgetting for modular ASP can always be applied, for input, output and hidden atoms, and preserve all dependencies over the remaining language (in line with uniform equivalence). However, the definition of the result is based solely on a semantic characterization in terms of HT-models. Thus, computing an actual result is a complicated process and the result commonly bears no resemblance to the original program, i.e., we are lacking a corresponding syntactic operator. In this paper, we show that there is no forgetting operator that preserves uniform equivalence (modulo the forgotten atoms) between the given program and its forgetting result by only manipulating the rules of the original program that contain the atoms to be forgotten. We then present a forgetting operator that preserves uniform equivalence and is syntactic whenever this is suitable. We also introduce a special class of programs, where syntactic forgetting is always possible, and as a complementary result, establish it as the largest known class where forgetting while preserving all dependencies is always possible. 
09:15  PRESENTER: Fabrizio Riguzzi ABSTRACT. In Probabilistic Abductive Logic Programming we are given a probabilistic logic program, a set of abducible facts, and a set of constraints. Inference in probabilistic abductive logic programs aims to find a subset of the abducible facts that is compatible with the constraints and that maximizes the joint probability of the query and the constraints. In this paper, we extend the PITA reasoner with an algorithm to perform abduction on probabilistic abductive logic programs exploiting Binary Decision Diagrams. Tests on several synthetic datasets show the effectiveness of our approach. 
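The core of BDD-based probabilistic inference can be sketched in a few lines: the probability of a query is obtained by Shannon expansion over the independent probabilistic facts, which is exactly the computation a traversal of the BDD performs. The Python sketch below is illustrative only (it is not the PITA implementation, and the example program and probabilities are invented):

```python
# Illustrative sketch (not the PITA implementation): probability of a query
# over independent probabilistic facts, computed by Shannon expansion.
# A BDD performs the same recursion while sharing identical subproblems.

def prob(formula, probs):
    """Probability that `formula` (a predicate on a dict var -> bool) holds,
    given independent fact probabilities `probs`."""
    def go(assignment, remaining):
        if not remaining:
            return 1.0 if formula(assignment) else 0.0
        v, rest = remaining[0], remaining[1:]
        # Shannon expansion on v: weight each branch by its probability.
        return (probs[v] * go({**assignment, v: True}, rest)
                + (1 - probs[v]) * go({**assignment, v: False}, rest))
    return go({}, list(probs))

# query :- a.   query :- b.   with P(a)=0.4, P(b)=0.3  (invented example)
p_query = prob(lambda m: m["a"] or m["b"], {"a": 0.4, "b": 0.3})  # 1 - 0.6*0.7
```

A real BDD gains efficiency by merging isomorphic subgraphs and caching node probabilities; the recursion above is the same computation without sharing.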
09:30  DeepStochLog: Neural Stochastic Logic Programming PRESENTER: Giuseppe Marra ABSTRACT. Recent advances in neural-symbolic learning, such as DeepProbLog, extend probabilistic logic programs with neural predicates. Like graphical models, these probabilistic logic programs define a probability distribution over possible worlds, for which inference is computationally hard. We propose DeepStochLog, an alternative neural-symbolic framework based on stochastic definite clause grammars, a kind of stochastic logic program. More specifically, we introduce neural grammar rules into stochastic definite clause grammars to create a framework that can be trained end-to-end. We show that inference and learning in neural stochastic logic programming scale much better than for neural probabilistic logic programs. Furthermore, the experimental evaluation shows that DeepStochLog achieves state-of-the-art results on challenging neural-symbolic learning tasks. 
09:45  Transforming Gringo Rules into Formulas in a Natural Way ABSTRACT. Research on the input language of the ASP grounder gringo uses a translation that converts rules in that language into first-order formulas. That translation often transforms short rules into formulas that are syntactically complex. In this note we identify a class of rules that can be transformed into formulas in a simpler, more natural way. The new translation contributes to our understanding of the relationship between the language of gringo and first-order languages. 
10:00  PRESENTER: Nicolas Schwind ABSTRACT. Belief revision aims at incorporating a new piece of information into the beliefs of an agent. This process can be modelled by the standard AGM/KM rationality postulates. However, these postulates suppose a classical setting and do not govern the case where (some parts of) the beliefs of the agent are not consistent. In this work we discuss how to adapt these postulates when the underlying logic is Priest’s Logic of Paradox, in order to model a rational change, while being a conservative extension of AGM/KM belief revision. This paper is a summary of [5]. 
10:15  Model Reconciliation in Logic Programs PRESENTER: Tran Cao Son ABSTRACT. Inspired by recent research in explainable planning, we investigate the model reconciliation problem between two logic programs π_a and π_h, which represent the knowledge bases of an agent and a human, respectively. Given π_a, π_h, and a query q such that π_a entails q and π_h does not entail q (or π_a does not entail q and π_h entails q), the model reconciliation problem focuses on the question of how to modify π_h, by adding ε⁺ ⊆ π_a to π_h and removing ε⁻ ⊆ π_h from π_h, such that the resulting program π̂_h = (π_h ∖ ε⁻) ∪ ε⁺ has an answer set containing q (or has no answer set containing q). The pair (ε⁺, ε⁻) is referred to as a solution for the model reconciliation problem (π_a, π_h, q) (or (π_a, π_h, ¬q)). We prove that, for a reasonably selected set of rules ε⁺ ⊆ π_a, there exists a way to modify π_h such that π̂_h is guaranteed to credulously entail q (or skeptically entail ¬q). Given that there are potentially several solutions, we discuss different characterizations of solutions and algorithms for computing solutions for model reconciliation problems. 
"Quantitative Algebra, Probabilities and Monads": 6 papers (12 min presentation + 2–3 min Q&A)
09:00  Varieties of quantitative algebras and their monads ABSTRACT. Quantitative algebras, for a given signature with countable arities, are algebras equipped with a metric making all operations nonexpanding. They have been studied by Mardare, Panangaden and Plotkin, who also introduced $c$-basic quantitative equations for regular cardinals $c$. Categories of quantitative algebras that can be presented by such equations for $c=\aleph_1$ are called $\omega_1$-varieties. We prove that they are precisely the monadic categories $\Met^\T$, where $\T$ is a countably basic monad on the category $\Met$ of metric spaces. For finitary signatures one speaks about $\omega$-varieties in case $c=\aleph_0$. If all metrics used are restricted to $\bf{UMet}$, the category of ultrametric spaces, then $\omega$-varieties are precisely the monadic categories $\bf{UMet}^\T$, where $\T$ is a finitely basic monad. 
09:15  Beyond Nonexpansive Operations in Quantitative Algebraic Reasoning PRESENTER: Matteo Mio ABSTRACT. The framework of quantitative equational logic has been successfully applied to reason about algebras whose carriers are metric spaces and operations are nonexpansive. We extend this framework in two orthogonal directions: algebras endowed with generalised metric space structures, and operations being nonexpansive up to a lifting. We apply our results to the algebraic axiomatisation of the Łukaszyk–Karmowski distance on probability distributions, which has recently found application in the field of representation learning on Markov processes. 
09:30  Concrete categories and higher-order recursion (with applications including probability, differentiability, and full abstraction) PRESENTER: Cristina Matache ABSTRACT. We study concrete sheaf models for a call-by-value higher-order language with recursion. Our family of sheaf models is a generalization of many examples from the literature, such as models for probabilistic and differentiable programming, and fully abstract logical relations models. We treat recursion in the spirit of synthetic domain theory. We provide a general construction of a lifting monad starting from a class of admissible monomorphisms in the site of the sheaf category. In this way, we obtain a family of models parametrized by a concrete site and a class of monomorphisms, for which we prove a general computational adequacy theorem. 
09:45  Probability monads with submonads of deterministic states PRESENTER: Sean Moss ABSTRACT. Probability theory can be studied synthetically as the computational effect embodied by a commutative monad. In the recently proposed Markov categories, one works with an abstraction of the Kleisli category and then defines deterministic morphisms equationally in terms of copying and discarding. The resulting difference between "pure" and "deterministic" leads us to investigate the "sober" objects for a probability monad, for which the two concepts coincide. We propose natural conditions on a probability monad which allow us to identify the sober objects and define an idempotent sobrification functor. Our framework applies to many examples of interest, including the Giry monad on measurable spaces, and allows us to sharpen a previously given version of de Finetti's theorem for Markov categories. 
10:00  Monoidal Streams for Dataflow Programming PRESENTER: Mario Román ABSTRACT. We introduce monoidal streams: a generalization of synchronous stream processes to monoidal categories. Monoidal streams over a symmetric monoidal category form a feedback monoidal category. In the same way that classical streams provide semantics to dataflow programming with pure functions, monoidal streams provide semantics to dataflow programming where theories of processes are represented by a symmetric monoidal category. As an example, we study a stochastic dataflow language. 
10:15  Partitions and Ewens Distributions in element-free Probability Theory ABSTRACT. This article redevelops and deepens the probability theory of Ewens and others from the 1970s in population biology. At the heart of this theory are the so-called Ewens distributions describing biological mutations. These distributions have a particularly rich (and beautiful) mathematical structure. The original work is formulated in terms of partitions, which are special multisets on natural numbers. The current redevelopment starts from multisets on arbitrary sets, with partitions as a special form that captures only the multiplicities of multiplicities, without naming the elements themselves. This "element-free" approach will be developed in parallel to the usual element-based theory. Ewens' famous sampling formula describes a cone of (parametrised) distributions on partitions. Another cone for this chain is described in terms of new (element-free) multinomials. They are well-defined because of a novel "partitions multinomial theorem" that extends the familiar multinomial theorem. This is based on a new concept of "division", as element-free distribution, in terms of multisets of probabilities. 
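For readers who want to experiment, Ewens' sampling formula is concrete enough to check numerically: for a partition of $n$ with multiplicities $a_j$ (the number of parts of size $j$), the probability is $n!/\theta^{(n)} \cdot \prod_j \theta^{a_j}/(j^{a_j} a_j!)$, where $\theta^{(n)} = \theta(\theta+1)\cdots(\theta+n-1)$ is the rising factorial. The following sketch is ours, not the paper's element-free formalism; it simply enumerates partitions and evaluates the classical formula.

```python
from math import factorial, prod

def partitions(n, max_part=None):
    """Yield the integer partitions of n as weakly decreasing lists."""
    max_part = n if max_part is None else max_part
    if n == 0:
        yield []
        return
    for k in range(min(n, max_part), 0, -1):
        for rest in partitions(n - k, k):
            yield [k] + rest

def ewens(parts, n, theta):
    """Ewens sampling formula: probability of the partition whose
    multiplicities are a_j = #{parts of size j}, under parameter theta."""
    a = {j: parts.count(j) for j in set(parts)}
    rising = prod(theta + i for i in range(n))   # theta (theta+1) ... (theta+n-1)
    weight = prod(theta**aj / (j**aj * factorial(aj)) for j, aj in a.items())
    return factorial(n) * weight / rising
```

Summing `ewens(p, n, theta)` over all partitions of `n` gives 1, and for `theta = 1` the formula reduces to the cycle-type distribution of a uniformly random permutation.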
09:00  Quantified CDCL with Universal Resolution ABSTRACT. Quantified Conflict-Driven Clause Learning (QCDCL) solvers for QBF generate Q-resolution proofs. Pivot variables in Q-resolution must be existentially quantified. Allowing resolution on universally quantified variables leads to a more powerful proof system called QU-resolution, but so far, QBF solvers have used QU-resolution only in very limited ways. We present a modified version of QCDCL that generates proofs in QU-resolution by leveraging propositional unit propagation. Experiments with crafted instances and benchmarks from recent QBF evaluations demonstrate the viability of this approach. 
09:30  Relating existing powerful proof systems for QBF PRESENTER: Leroy Chew ABSTRACT. We advance the theory of QBF proof systems by showing the first simulation of the universal checking format QRAT by a theory-friendly system. We show that the sequent system G fully p-simulates QRAT, including the Extended Universal Reduction (EUR) rule which was recently used to show QRAT does not have strategy extraction. Because EUR heavily uses resolution paths, our technique also brings resolution path dependency and sequent systems closer together. While we do not recommend G for practical applications, this work can potentially show what features are needed for a new QBF checking format stronger than QRAT. 
10:00  Quantifier Elimination in Stochastic Boolean Satisfiability PRESENTER: Hao-Ren Wang ABSTRACT. Stochastic Boolean Satisfiability (SSAT) generalizes quantified Boolean formulas (QBFs) by allowing quantification over random variables. Its generality makes SSAT powerful for modeling decision or optimization problems under uncertainty. On the other hand, this generalization complicates computation due to its counting nature. In this work, we address the following two questions: 1) Is there an analogy of quantifier elimination in SSAT, similar to QBF? 2) If quantifier elimination is possible for SSAT, can it be effective for SSAT solving? The answers are affirmative. We develop an SSAT decision procedure based on quantifier elimination. Experimental results demonstrate the unique benefits of the new method compared to the state-of-the-art solvers. 
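The analogy with QBF can be made concrete: eliminating an innermost existential variable replaces it by the maximum over its two branches, while eliminating a randomized variable replaces it by the probability-weighted average. The naive expansion-based evaluator below is our illustration of that idea, not the paper's procedure; real solvers avoid the exponential blow-up.

```python
def ssat(prefix, formula, assignment=None):
    """Evaluate an SSAT instance by eliminating quantifiers left to right.
    prefix: list of ('e', var) for existential or ('r', var, p) for a
    randomized variable that is True with probability p.
    formula: predicate on a dict var -> bool."""
    assignment = assignment or {}
    if not prefix:
        return 1.0 if formula(assignment) else 0.0
    q, rest = prefix[0], prefix[1:]
    branches = [ssat(rest, formula, {**assignment, q[1]: b})
                for b in (False, True)]
    if q[0] == 'e':                       # existential: pick the best branch
        return max(branches)
    p = q[2]                              # randomized: average by probability
    return (1 - p) * branches[0] + p * branches[1]

# Invented example: choose x, then y matches x with probability 0.5.
val = ssat([('e', 'x'), ('r', 'y', 0.5)], lambda m: m['x'] == m['y'])  # 0.5
```

The result of an SSAT instance is thus a satisfaction probability in [0, 1] rather than a truth value, which is the counting flavour the abstract refers to.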
11:00–12:00: Modelling
12:00–12:30: Robust Solutions
11:00  Computing relaxations for the three-dimensional stable matching problem with cyclic preferences PRESENTER: Luis Quesada ABSTRACT. Constraint programming has proven to be a successful framework for determining whether a given instance of the three-dimensional stable matching problem with cyclic preferences (3DSM-CYC) admits a solution. If such an instance is satisfiable, constraint models can even compute its optimal solution for several different objective functions. On the other hand, the only existing output for unsatisfiable 3DSM-CYC instances is a simple declaration of impossibility. In this paper, we explore four ways to adapt constraint models designed for 3DSM-CYC to the maximum relaxation version of the problem, that is, the computation of the smallest part of an instance whose modification leads to satisfiability. We also extend our models to support the presence of costs on elements in the instance, and to return the relaxation with lowest total cost for each of the four types of relaxation. Empirical results reveal that our relaxation models are efficient, as in most cases, they show little overhead compared to the satisfaction version. 
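As background, in the cyclic variant each A-agent ranks the B-agents, each B-agent ranks the C-agents, and each C-agent ranks the A-agents; a matching of triples is weakly stable when no triple would make all three of its members strictly better off. The brute-force verifier below, with invented agent names, is our illustration of that stability condition, not one of the paper's constraint models.

```python
def is_weakly_stable(triples, prefA, prefB, prefC):
    """Check weak stability of a 3D cyclic matching.
    triples: list of (a, b, c). prefA[a] ranks B-agents, prefB[b] ranks
    C-agents, prefC[c] ranks A-agents (earlier in the list = preferred).
    A blocking triple is one where every member strictly improves."""
    partnerB = {a: b for a, b, c in triples}   # a's current B-partner
    partnerC = {b: c for a, b, c in triples}   # b's current C-partner
    partnerA = {c: a for a, b, c in triples}   # c's current A-partner
    rank = lambda pref, x: pref.index(x)
    for a in partnerB:
        for b in partnerC:
            for c in partnerA:
                if (rank(prefA[a], b) < rank(prefA[a], partnerB[a]) and
                        rank(prefB[b], c) < rank(prefB[b], partnerC[b]) and
                        rank(prefC[c], a) < rank(prefC[c], partnerA[c])):
                    return False               # blocking triple found
    return True

# Invented instance: everyone is matched to a top choice, so no triple
# can make all three members strictly better off.
M = [("a1", "b1", "c1"), ("a2", "b2", "c2"), ("a3", "b3", "c3")]
prefA = {"a1": ["b1", "b2", "b3"], "a2": ["b2", "b1", "b3"], "a3": ["b3", "b1", "b2"]}
prefB = {"b1": ["c1", "c2", "c3"], "b2": ["c2", "c1", "c3"], "b3": ["c3", "c1", "c2"]}
prefC = {"c1": ["a1", "a2", "a3"], "c2": ["a2", "a1", "a3"], "c3": ["a3", "a1", "a2"]}
stable = is_weakly_stable(M, prefA, prefB, prefC)  # True: all have top choices
```

A maximum relaxation, in these terms, asks for the smallest change to the preference data that makes some matching pass this kind of check.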
11:30  PRESENTER: Ruth Hoffmann ABSTRACT. Research in constraint programming typically focuses on problem solving efficiency. However, the way users conceptualise problems and communicate with constraint programming tools is often sidelined. How humans think about constraint problems can be important for the development of efficient tools that are useful to a broader audience. For example, a system incorporating knowledge on how people think about constraint problems can provide explanations to users and improve the communication between the human and the solver. We present an initial step towards a better understanding of the human side of the constraint solving process. To our knowledge, this is the first human-centred study addressing how people approach constraint modelling and solving. We observed three sets of ten users each (constraint programmers, computer scientists and non-computer scientists) and analysed how they find solutions for well-known constraint problems. We found regularities offering clues about how to design systems that are more intelligible to humans. 
12:00  Large Neighborhood Search for Robust Solutions for Constraint Satisfaction Problems with Ordered Domains PRESENTER: Jheisson Lopez ABSTRACT. In real applications, there is typically a lack of information about the uncertainty/dynamism associated with them. Efforts have been made in the literature to find robust solutions for the associated Constraint Satisfaction Problems (CSPs). Here, we analyze a previous exact/complete approach from the state of the art that focuses on CSPs with ordered domains and dynamic bounds. However, this approach has low performance on large-scale CSPs. For this reason, in this paper, we present an inexact/incomplete approach that is faster at finding robust solutions for large-scale CSPs. This is especially handy when the computation time available for finding a solution is limited and/or a new solution must be recomputed online (because the original one is lost due to the dynamism). Specifically, we present a Large Neighbourhood Search (LNS) algorithm combined with Constraint Programming (CP) and Branch-and-Bound (B&B) that searches for robust solutions. Furthermore, we also present a robust value-selection heuristic that guides the search towards more promising branches. We evaluate our approach on large-scale CSP instances, including a case study of scheduling problems. The evaluation shows a considerable improvement in the robustness of the solutions achieved by our algorithm for large-scale CSPs. 
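As a reminder of the general LNS scheme (this is not the paper's algorithm, which combines LNS with CP and branch-and-bound plus a robustness-aware value heuristic), each iteration destroys part of the incumbent by freeing a few variables, repairs it, and keeps the result if it improves. A minimal sketch on an invented separable toy objective:

```python
import random

def lns_minimize(cost, domains, relax_k=2, iters=200, seed=0):
    """Minimal Large Neighbourhood Search sketch: repeatedly free relax_k
    variables of the best solution (destroy), reassign each greedily
    (repair), and keep improvements. Illustrative only."""
    rng = random.Random(seed)
    n = len(domains)
    best = [d[0] for d in domains]             # naive initial assignment
    best_cost = cost(best)
    for _ in range(iters):
        freed = rng.sample(range(n), min(relax_k, n))
        cand = best[:]
        for i in freed:                        # greedy repair of freed vars
            cand[i] = min(domains[i],
                          key=lambda v: cost(cand[:i] + [v] + cand[i + 1:]))
        c = cost(cand)
        if c < best_cost:
            best, best_cost = cand[:], c
    return best, best_cost

# Invented toy objective: distance of each variable to a target value.
toy_cost = lambda s: sum(abs(v - t) for v, t in zip(s, [3, 1, 2]))
best, bc = lns_minimize(toy_cost, [list(range(5))] * 3)
```

In the paper's setting, the repair step would be a CP/B&B search and the objective would measure robustness rather than a simple distance.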
11:00  Scheduling the Equipment Maintenance of an Electric Power Transmission Network using Constraint Programming PRESENTER: Louis Popovic ABSTRACT. Modern electrical power utilities must maintain their electrical equipment and replace it as it reaches the end of its useful life. The Transmission Maintenance Scheduling (TMS) problem consists in generating an annual maintenance plan for electric power transportation equipment while maintaining the stability of the network and ensuring a continuous power flow for customers. Each year, a list of equipment that needs to be maintained or replaced is available and the goal is to generate an optimal maintenance plan. This paper proposes a constraint-based scheduling approach to solve the TMS problem. The model considers two types of constraints: (1) constraints that can be naturally formalized inside a constraint programming model, and (2) complex constraints that do not have a proper formalization from the field specialists. The latter cannot be integrated inside the model due to their complexity. Their satisfaction is thus verified by a black box tool, which is a simulator mimicking the impact of a maintenance schedule on the real power network. The simulator is based on complex differential power-flow equations. Experiments are carried out at five strategic points of the Hydro-Québec power grid infrastructure, which involves more than 200 pieces of electrical equipment and 300 withdrawal requests. Results show that our model is able to comply with most of the formalized and unformalized constraints. It also generates maintenance schedules within an execution time of a few minutes. The generated schedules are similar to the ones proposed by a field specialist and can be used to simulate maintenance programs for the next 10 years. 
11:30  Solving the Constrained Single-Row Facility Layout Problem with Decision Diagrams PRESENTER: Vianney Coppé ABSTRACT. The Single-Row Facility Layout Problem is an NP-hard problem dealing with the ordering of departments with given lengths and pairwise traffic intensities in a facility. In this context, one seeks to minimize the sum of the distances between department pairs, weighted by the corresponding traffic intensities. Practical applications of this problem include the arrangement of rooms on a corridor in hospitals or offices, airplanes and gates in an airport, or machines in a manufacturing plant. This paper presents two novel exact models for the Constrained Single-Row Facility Layout Problem, a recent variant of the problem including positioning, ordering and adjacency constraints. On the one hand, the state-of-the-art mixed-integer programming model for the unconstrained problem is extended to incorporate the constraints. On the other hand, a decision diagram-based approach is described, based on an existing dynamic programming model for the unconstrained problem. Computational experiments show that both models outperform the only existing mixed-integer programming model in the literature, to the best of our knowledge. While the two models have execution times of the same order of magnitude, the decision diagram-based approach handles positioning constraints much better, but the mixed-integer programming model has the advantage for ordering constraints. 
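The (unconstrained) SRFLP objective described above is easy to state in code: place the departments left to right, take centre-to-centre distances, and weight each pair by its traffic intensity. The evaluator below, with invented lengths and traffic, is an illustrative sketch for sanity-checking orderings, not one of the paper's models.

```python
def srflp_cost(order, length, traffic):
    """Sum over department pairs of (centre-to-centre distance) * traffic,
    for a given left-to-right ordering of departments 0..n-1."""
    pos, x = {}, 0.0
    for d in order:                 # centre position of each department
        pos[d] = x + length[d] / 2
        x += length[d]
    return sum(traffic[i][j] * abs(pos[i] - pos[j])
               for i in range(len(order))
               for j in range(i + 1, len(order)))

# Invented data: three departments, heavy traffic between 0 and 2.
length = {0: 2, 1: 4, 2: 2}
traffic = [[0, 1, 3], [1, 0, 1], [3, 1, 0]]
cost_a = srflp_cost([0, 1, 2], length, traffic)
cost_b = srflp_cost([1, 0, 2], length, traffic)
```

Placing the heavy-traffic pair (0, 2) adjacent gives the lower objective, which is the intuition the exact models optimize over all orderings subject to the additional constraints.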
12:00  Modeling and Solving Parallel Machine Scheduling with Contamination Constraints in the Agricultural Industry PRESENTER: Felix Winter ABSTRACT. Modern-day factories of the agricultural industry need to produce and distribute large amounts of compound feed to handle the daily demands of livestock farming. As a highly automated production process is utilized to fulfill the large-scale requirements in this domain, finding efficient machine schedules is a challenging task which requires the consideration of complex constraints and the execution of optional cleaning jobs to prevent contamination of the final products. Furthermore, it is critical to minimize job tardiness in the schedule, since the truck routes which are used to distribute the products to customers are sensitive to delays. Thus, there is a strong need for efficient automated methods which are able to produce optimized schedules in this domain. This paper formally introduces a novel real-life problem in this area and investigates constraint-modeling techniques as well as a metaheuristic approach to efficiently solve practical scenarios. In particular, we investigate two innovative constraint programming model variants as well as an integer programming formulation to model the contamination constraints, which require an efficient utilization of variables with a real-valued domain. To tackle large-scale instances, we additionally provide a local search approach based on simulated annealing that utilizes problem-specific neighborhood operators. We provide a set of new real-life problem instances that we use in an extensive experimental evaluation of all proposed approaches. Computational results show that our models can be successfully used together with state-of-the-art constraint solvers to provide several optimal results as well as high-quality bounds for many real-life instances. Additionally, the proposed metaheuristic approach reaches many optimal results and delivers the best upper bounds on many of the large practical instances in our experiments. 
11:00  PRESENTER: Siva Somayyajula ABSTRACT. In sequential functional languages, sized types enable termination checking of programs with complex patterns of recursion in the presence of mixed inductive-coinductive types. In this paper, we adapt sized types and their metatheory to the concurrent setting. We extend the semi-axiomatic sequent calculus, a subsuming paradigm for futures-based functional concurrency, and its underlying operational semantics with recursion and arithmetic refinements. The latter enables a new and highly general sized type scheme we call sized type refinements. As a widely applicable technical device, we type recursive programs with infinitely deep typing derivations that unfold all recursive calls. Then, we observe that certain such derivations can be made infinitely wide but finitely deep. The resulting trees serve as the induction target of our strong normalization result, which we develop via a novel logical relations argument. 
11:30  PRESENTER: Margarita Veshchezerova ABSTRACT. The ZX-calculus is a powerful framework for reasoning in quantum computing. In particular, it provides a compact representation of matrices of interest. A peculiar property of the ZX-calculus is the absence of a formal sum allowing the linear combination of arbitrary ZX-diagrams. The universality of the formalism guarantees, however, that for any two ZX-diagrams, the sum of their interpretations can be represented by a ZX-diagram. We introduce the first general, inductive definition of the summation of ZX-diagrams, relying on the construction of controlled diagrams. Based on this summation technique, we provide an inductive differentiation of ZX-diagrams. Indeed, given a ZX-diagram with variables in the description of its angles, one can differentiate the diagram according to one of these variables. Differentiation is ubiquitous in quantum mechanics and quantum computing (e.g. for solving optimisation problems). Technically, differentiation of ZX-diagrams is strongly related to summation, as witnessed by the product rules. We also introduce an alternative, non-inductive differentiation technique based instead on the isolation of the variables. Finally, we apply our results to deduce a diagram for an Ising Hamiltonian. 
11:00  Strong Equivalence of Logic Programs with Ordered Disjunction: a Logical Perspective PRESENTER: Panos Rondogiannis ABSTRACT. Logic Programs with Ordered Disjunction (LPODs) extend classical logic programs with the capability of expressing preferential disjunctions in the heads of program rules. The initial semantics of LPODs (Brewka 2002; Brewka et al. 2004), although simple and quite intuitive, is not purely model-theoretic. A consequence of this is that certain properties of programs appear non-trivial to formalize in purely logical terms. An example of this state of affairs is the characterization of the notion of strong equivalence for LPODs (Faber et al. 2008). Although the results of (Faber et al. 2008) are accurately developed, they fall short of characterizing strong equivalence of LPODs as logical equivalence in some specific logic. This comes in sharp contrast with the well-known characterization of strong equivalence for classical logic programs, which, as proved in (Lifschitz et al. 2001), coincides with logical equivalence in the logic of here-and-there. In this paper we obtain a purely logical characterization of strong equivalence of LPODs as logical equivalence in a four-valued logic. Moreover, we provide a new proof of the coNP-completeness of strong equivalence for LPODs, which has an interest in its own right since it relies on the special structure of such programs. Our results are based on the recent logical semantics of LPODs introduced in (Charalambidis et al. 2021), a fact which we believe indicates that this new semantics may prove to be a useful tool in the further study of LPODs. 
11:22  Strong Equivalence of Logic Programs with Counting ABSTRACT. Under some conditions, a rule in the input language of the ASP grounder gringo can be transformed into a first-order sentence that has the same meaning under the stable model semantics. Such translations can be used for proving strong equivalence of logic programs and for verifying their correctness with respect to specifications expressed by first-order formulas. This paper defines a translation of this kind that is applicable to some rules containing arithmetic operations and aggregate expressions. 
11:44  Analyzing Semantics of Aggregate Answer Set Programming Using Approximation Fixpoint Theory PRESENTER: Linde Vanbesien ABSTRACT. Aggregates provide a concise way to express complex knowledge. The problem of selecting an appropriate formalisation of aggregates for answer set programming (ASP) remains unsettled. This paper revisits the problem from the viewpoint of Approximation Fixpoint Theory (AFT). We introduce an AFT formalisation equivalent to the Gelfond-Lifschitz reduct for basic ASP programs and we extend it to handle aggregates. We analyse how existing approaches relate to our framework. We hope this work sheds some new light on the issue of a proper formalisation of aggregates. 
12:06  PRESENTER: Francesco Ricca ABSTRACT. Answer Set Programming with Quantifiers ASP(Q) is a recent extension of Answer Set Programming (ASP) that allows one to model problems from the entire polynomial hierarchy. Earlier work focused on demonstrating the modeling capabilities of ASP(Q). However, further evidence of the modeling efficacy of ASP(Q) is needed and, even more importantly, effective solvers for ASP(Q) must be developed. In this paper, we address both these areas. First, we provide elegant quantified ASP encodings for additional relevant AI problems, such as Argumentation Coherence ($\Pi_2^P$-complete) and Semi-stable Semantics for ASP ($\Sigma_2^P$-complete). Second, we provide a modular ASP(Q) solver that translates a quantified ASP program together with a given data instance into a QBF to be solved by any QBF solver. We evaluate the performance of our solver on several instances and with different back-end QBF solvers, demonstrating its efficacy as a tool for rapid modeling and solving of complex combinatorial problems. 
11:00  PRESENTER: Arseny Skryagin ABSTRACT. The goal of combining the robustness of neural networks and the expressivity of symbolic methods has rekindled the interest in Neuro-Symbolic AI. One specifically interesting branch of research is deep probabilistic programming languages (DPPLs), which carry out probabilistic logical programming via the probability estimations of deep neural networks. However, recent SOTA DPPL approaches allow only for limited conditional probabilistic queries and do not offer the power of true joint probability estimation. In our work, we propose an easy integration of tractable probabilistic inference within a DPPL. To this end, we introduce SLASH, a novel DPPL that consists of Neural-Probabilistic Predicates (NPPs) and a logical program, united via answer set programming. NPPs are a novel design principle allowing for the unification of all deep model types and combinations thereof to be represented as a single probabilistic predicate. In this context, we introduce a novel $+/-$ notation for answering various types of probabilistic queries by adjusting the atom notations of a predicate. We evaluate SLASH on the benchmark task of MNIST addition as well as novel tasks for DPPLs such as missing data prediction, generative learning and set prediction with state-of-the-art performance, thereby showing the effectiveness and generality of our method. 
11:30  PRESENTER: David Tena Cucala ABSTRACT. Rule learning involves developing machine learning models that can be applied to a set of logical facts to predict additional facts, as well as providing methods for extracting from the learned model a set of logical rules that explain symbolically the model's predictions. Existing approaches of this kind, however, do not formally describe the relationship between the model's predictions and the derivations of the extracted rules; rather, it is often claimed without justification that the extracted rules `approximate' or `explain' the model, and rule quality is evaluated by manual inspection. In this paper, we study the formal properties of Neural-LP, a prominent rule learning approach. We show that the rules extracted from Neural-LP models can be both unsound and incomplete: on the same input dataset, the extracted rules can derive facts not predicted by the model, and the model can make predictions not derived by the extracted rules. We also propose a modification to the Neural-LP model that ensures that the extracted rules are always sound and complete. Finally, we show that, on several prominent benchmarks, the classification performance of our modified model is comparable to that of the standard Neural-LP model. Thus, faithful learning of rules is feasible from both a theoretical and practical point of view. 
12:00  Looking Inside the Black-Box: Logic-based Explanations for Neural Networks PRESENTER: João Ferreira ABSTRACT. Deep neural network-based methods have recently enjoyed great popularity due to their effectiveness in solving difficult tasks. Requiring minimal human effort, they have turned into an almost ubiquitous solution in multiple domains. However, due to the size and complexity of typical neural network models' architectures, as well as the sub-symbolic nature of the representations generated by their neuronal activations, neural networks are essentially opaque, making it nearly impossible to explain to humans the reasoning behind their decisions. We address this issue by developing a procedure to induce human-understandable logic-based theories that attempt to represent the classification process of a given neural network model. Exploring the setting of a synthetic image classification task, we provide empirical results to assess the quality of the developed theories for different neural network models, compare them to existing theories on that task, and give evidence that the theories developed through our method are faithful to the representations learned by the neural networks that they are built to describe. 
11:00  PRESENTER: Nicolas Schwind ABSTRACT. Belief revision aims at incorporating a new piece of information into the beliefs of an agent. This process can be modeled by Katsuno and Mendelzon's (KM) standard rationality postulates. However, these postulates suppose a classical setting and do not govern the case where (some parts of) the beliefs of the agent are not consistent. In this work we discuss how to adapt these postulates when the underlying logic is Priest's Logic of Paradox, in order to model a rational change while being a conservative extension of KM belief revision. This paper is a summary of (Schwind, Konieczny, and Pino Pérez 2022). 
11:10  A conditional, a fuzzy and a probabilistic interpretation of self-organising maps (Abstract) PRESENTER: Laura Giordano ABSTRACT. This is an extended abstract of the paper: Giordano L.; Gliozzi V.; and Theseider Dupré D.; A conditional, a fuzzy and a probabilistic interpretation of self-organising maps. In Journal of Logic and Computation, Volume 32, Issue 2, March 2022, Pages 178-205. DOI: https://doi.org/10.1093/logcom/exab082 In this paper we establish a link between a preferential and a fuzzy semantics for description logics and self-organizing maps (SOMs), which have been proposed as possible candidates to explain the psychological mechanisms underlying category generalization. In particular, we show that the input/output behavior of a SOM after training can be described by a fuzzy description logic interpretation as well as by a preferential interpretation, based on a concept-wise multi-preferential semantics, which takes into account preferences with respect to different concepts and has been recently proposed for ranked and for weighted defeasible description logics. Properties of the network can be proven by model checking both on the preferential and on the fuzzy interpretations. Starting from the fuzzy interpretation, we also provide a probabilistic account for this neural network model based on Zadeh's probability of fuzzy events. 
11:20  Influence-Driven Explanations for Bayesian Network Classifiers PRESENTER: Francesca Toni ABSTRACT. We propose a novel approach to building influence-driven explanations (IDXs) for (discrete) Bayesian network classifiers (BCs). IDXs feature two main advantages with respect to other commonly adopted explanation methods. First, IDXs may be generated using the (causal) influences between intermediate, in addition to merely input and output, variables within BCs, thus providing a deep, rather than shallow, account of the BCs’ behaviour. Second, IDXs are generated according to a configurable set of properties, specifying which influences between variables count towards explanations. Our approach is thus flexible and can be tailored to the requirements of particular contexts or users. Leveraging this flexibility, we propose novel IDX instances as well as IDX instances capturing existing approaches. We demonstrate IDXs’ capability to explain various forms of BCs, and assess the advantages of our proposed IDX instances with both theoretical and empirical analyses. 
11:30  PRESENTER: Alice Tarzariol ABSTRACT. Our work addresses the generation of first-order constraints to reduce symmetries and improve the solving performance for classes of instances of a given combinatorial problem. To this end, we devise a model-oriented approach obtaining positive and negative examples for an Inductive Logic Programming task by analyzing instance-specific symmetries for a training set of instances. The learned first-order constraints are interpretable and can be used to augment a general problem encoding in Answer Set Programming. This extended abstract introduces the context of our work, our contributions, and results. 
11:40  Abstract Argumentation with Markov Networks (Extended Abstract) ABSTRACT. Extended abstract for the Recently Published Research Track 
11:50  Interpreting Neural Networks as Quantitative Argumentation Frameworks (Extended Abstract) ABSTRACT. Extended abstract for the Recently Published Research Track 
12:00  An Axiomatic Approach to Revising Preferences (Extended Abstract) PRESENTER: Adrian Haret ABSTRACT. We study a model of preference revision in which a prior preference over a set of alternatives is adjusted in order to accommodate input from an authoritative source, while maintaining certain structural constraints (e.g., transitivity, completeness), and without giving up more information than strictly necessary. We analyze this model under two aspects: the first allows us to capture natural distance-based operators, at the cost of a mismatch between the input and output formats of the revision operator. Requiring the input and output to be aligned yields a second type of operator, which we characterize using preferences on the comparisons in the prior preference. Preference revision is set in a logic-based framework and uses the formal machinery of belief change, along the lines of the well-known AGM approach: we propose rationality postulates for each of the two versions of our model and derive representation results, thus situating preference revision within the larger family of belief change operators. 
12:10  PRESENTER: Julian Chingoma ABSTRACT. We simulate voting rules for multi-winner elections in a model of judgment aggregation that distinguishes between rationality and feasibility constraints. These constraints restrict the structure of the individual judgments and of the collective outcome computed by the rule, respectively. We extend known results regarding the simulation of single-winner voting rules to the multi-winner setting, both for elections with ordinal preferences and for elections with approval-based preferences. This not only provides us with a new tool to analyse multi-winner elections, but it also suggests the definition of new judgment aggregation rules, by generalising some of the principles at the core of well-known multi-winner voting rules to this richer setting. We explore this opportunity with regard to the principle of proportionality. Finally, in view of the computational difficulty associated with many judgment aggregation rules, we investigate the computational complexity of our embeddings and of the new judgment aggregation rules we put forward. 
"Automata, Transducers and Games": 6 papers (12 min presentation + 23 min Q&A)
11:00  Abstractions for the local-time semantics of timed automata: a foundation for partial-order methods PRESENTER: Igor Walukiewicz ABSTRACT. A timed network is a parallel composition of timed automata synchronizing on common actions. We develop a methodology that allows one to use partial-order methods for the reachability problem in timed networks. It is based on a local-time semantics [Bengtsson et al. (1998)]. A new simulation-based abstraction of local-time zones is proposed. An efficient algorithm for testing subsumption with respect to this abstraction is developed. The abstraction is not finite in general. We show that, under some weak assumptions, there is no finite abstraction for local-time zones that works for arbitrary networks. We introduce a notion of bounded spread networks, where it is possible to use subsumption and partial-order methods at the same time. 
11:15  The boundedness and zero isolation problems for weighted automata over nonnegative rationals PRESENTER: David Purser ABSTRACT. We consider linear cost-register automata (equivalently, weighted automata) over the semiring of nonnegative rationals, which generalise probabilistic automata. The two problems, boundedness and zero isolation, ask whether there is a sequence of words that converge to infinity and zero, respectively. In the general model both problems are undecidable, so we focus on the linear copyless restriction. There, we show that the boundedness problem is decidable. As for the zero isolation problem, we need to further restrict the class. We obtain a model where zero isolation becomes equivalent to universal coverability of orthant vector addition systems (OVAS), a new model in the VAS family interesting on its own. In standard VAS, runs are considered only in the positive orthant, while in OVAS every orthant has its own set of vectors that can be applied in that orthant. Assuming Schanuel’s conjecture is true, we prove decidability of universal coverability for three-dimensional OVAS, which implies decidability of zero isolation in a model with at most three independent registers. 
11:30  Efficient Construction of Reversible Transducers from Regular Transducer Expressions PRESENTER: Govind R ABSTRACT. The class of regular transformations has several equivalent characterizations such as functional MSO transductions, deterministic two-way transducers, streaming string transducers, as well as regular transducer expressions (RTE). For algorithmic applications, it is very common and useful to transform a specification, here an RTE, into a machine, here a transducer. In this paper, we give an efficient construction of a two-way reversible transducer (2RFT) equivalent to a given RTE. 2RFTs are a well-behaved class of transducers which are deterministic and co-deterministic (hence allowing evaluation in linear time with respect to the input word), and where composition has only polynomial complexity. For full RTE, the constructed 2RFT has size doubly exponential in the size of the expression. If the RTE does not use the Hadamard product or chained-star, the constructed 2RFT has size exponential, which is a tight bound. 
11:45  Active learning for sound negotiations PRESENTER: Igor Walukiewicz ABSTRACT. We present two active learning algorithms for sound deterministic negotiations. Sound deterministic negotiations are models of distributed systems, a kind of Petri nets or Zielonka automata with additional structure. We show that this additional structure allows us to minimize such negotiations. The two active learning algorithms differ in the type of membership queries they use. Both have similar complexity to Angluin’s L* algorithm; in particular, the number of queries is polynomial in the size of the negotiation, and not in the number of configurations. 
12:00  Stochastic Games with Synchronizing Objectives ABSTRACT. We consider two-player stochastic games played on a finite graph for infinitely many rounds. Stochastic games generalize both Markov decision processes (MDP), by adding an adversary player, and two-player deterministic games, by adding stochasticity. The outcome of the game is a sequence of distributions over the states of the game graph. We consider synchronizing objectives, which require the probability mass to accumulate in a set of target states, either always, once, infinitely often, or always after some point in the outcome sequence; and the winning modes of sure winning (if the accumulated probability is equal to 1) and almost-sure winning (if the accumulated probability is arbitrarily close to 1). We present algorithms to compute the set of winning distributions for each of these synchronizing modes, showing that the corresponding decision problem is PSPACE-complete for synchronizing once and infinitely often, and PTIME-complete for synchronizing always and always after some point. These bounds are remarkably in line with the special case of MDPs, while the algorithmic solution and proof technique are considerably more involved, even for deterministic games. This is because those games have a flavour of imperfect information; in particular, they are not determined and randomized strategies need to be considered, even if there is no stochastic choice in the game graph. Moreover, in combination with stochasticity in the game graph, finite-memory strategies are not sufficient in general (for synchronizing infinitely often). 
12:15  Characterizing Positionality in Games of Infinite Duration over Infinite Graphs ABSTRACT. We study turn-based quantitative games of infinite duration opposing two antagonistic players and played over graphs. This model is widely accepted as providing the adequate framework for formalizing the synthesis question for reactive systems. This important application motivates the question of strategy complexity: which valuations (or payoff functions) admit optimal positional strategies (without memory)? Valuations for which both players have optimal positional strategies have been characterized by Gimbert and Zielonka for finite graphs and by Colcombet and Niwiński for infinite graphs. However, for reactive synthesis, the existence of optimal positional strategies for the opponent (which models an antagonistic environment) is irrelevant. Despite this fact, not much is known about valuations for which the protagonist admits optimal positional strategies, regardless of the opponent. In this work, we characterize valuations which admit such strategies over infinite graphs. Our characterization uses the vocabulary of universal graphs, which has also proved useful in understanding recent breakthrough results regarding the complexity of parity games. More precisely, we show that a valuation admitting universal graphs which are monotonic and well-ordered is positional over all game graphs, and – more surprisingly – that the converse is also true for valuations admitting neutral letters. We prove the applicability and elegance of the framework by unifying a number of known positionality results, establishing stronger ones, and providing a new closure property. 
11:00  A SAT Attack on Rota's Basis Conjecture PRESENTER: Markus Kirchweger ABSTRACT. SAT modulo Symmetries (SMS) is a recently introduced framework for dynamic symmetry breaking in SAT instances. It combines a CDCL SAT solver with an external lexicographic minimality checking algorithm. We extend SMS from graphs to matroids and use it to make progress on Rota's Basis Conjecture (1989), which states that one can always decompose a collection of r disjoint bases of a rank-r matroid into r disjoint rainbow bases. Through SMS, we establish that the conjecture holds for all matroids of rank 4 and certain special cases of matroids of rank 5. Furthermore, we extend SMS with the facility to produce DRAT proofs. External tools can then be used to verify the validity of additional axioms produced by the lexicographic minimality check. As a byproduct, we have utilized our framework to enumerate matroids modulo isomorphism and to support the investigation of various other problems on matroids. 
11:30  The packing chromatic number of the infinite square grid is at least 14 PRESENTER: Bernardo Subercaseaux ABSTRACT. A k-packing coloring of a graph G = (V, E) is a mapping from V to {1, ..., k} such that any pair of vertices u, v that receive the same color i must be at distance greater than i in G. Arguably the most fundamental problem regarding packing colorings is to determine the packing chromatic number of the infinite square grid. A sequence of previous works has proved this number to be between 13 and 15. Our work improves the lower bound to 14. Moreover, we present a new encoding that is asymptotically more compact than the previously used ones. 
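The k-packing condition above is easy to check mechanically on a finite patch of the grid. As a small sketch (the checker and the example coloring are ours; the paper's actual SAT encoding is far more sophisticated), the following verifies a candidate coloring of a 3x3 grid under the taxicab metric:

```python
from itertools import product

def is_packing_coloring(colors, width, height):
    """Check the k-packing condition on a width x height grid graph:
    two cells with the same color i must be at L1 (taxicab)
    distance strictly greater than i."""
    cells = list(product(range(width), range(height)))
    for (x1, y1), (x2, y2) in product(cells, repeat=2):
        if (x1, y1) >= (x2, y2):
            continue  # consider each unordered pair once
        c1, c2 = colors[(x1, y1)], colors[(x2, y2)]
        if c1 == c2 and abs(x1 - x2) + abs(y1 - y2) <= c1:
            return False
    return True

# A valid example: color 1 on a checkerboard (pairwise distance >= 2),
# each higher color used only once.
coloring = {
    (0, 0): 1, (1, 0): 2, (2, 0): 1,
    (0, 1): 3, (1, 1): 1, (2, 1): 4,
    (0, 2): 1, (1, 2): 5, (2, 2): 1,
}
print(is_packing_coloring(coloring, 3, 3))  # True
```

Changing any color-2 cell to color 1 would place two color-1 cells at distance 1 and make the check fail.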
12:00  PRESENTER: Benjamin Kiesl ABSTRACT. We present approaches to store and restore the state of a SAT solver, allowing us to migrate the state between different compute resources, or even between different solvers. This can be used in many ways, e.g., to improve the fault tolerance of solvers, to schedule SAT problems on a restricted number of cores, or to use dedicated preprocessing tools for in-processing. We identify a minimum viable subset of the solver state to migrate such that the loss of performance is small. We then present and implement two different approaches to state migration: one approach stores the state at the end of a solver run, whereas the other approach stores the state continuously as part of the proof trace. We show that our approaches enable the generation of correct models and valid unsatisfiability proofs. Experimental results confirm that the overhead is reasonable and that in several cases solver performance actually improves. 
Lunch will be held in Taub lobby (CP, LICS, ICLP) and in The Grand Water Research Institute (KR, FSCD, SAT).
14:00-15:00: Best paper awards
15:00-15:30: Dissertation award
14:00  PRESENTER: Isaac Rudich ABSTRACT. Decision diagrams are an increasingly important tool in cutting-edge solvers for discrete optimization. However, the field of decision diagrams is relatively new, and is still incorporating the library of techniques that conventional solvers have had decades to build. We drew inspiration from the warm-start technique used in conventional solvers to address one of the major challenges faced by decision-diagram-based methods. Decision diagrams become more useful the wider they are allowed to be, but also become more costly to generate, especially with large numbers of variables. We present a method of peeling off a subgraph of previously constructed diagrams and using it as the initial diagram for subsequent iterations, which we call peel-and-bound. We test the method on the sequence ordering problem, and our results indicate that our peel-and-bound scheme generates stronger bounds than a branch-and-bound scheme using the same propagators, and at significantly less computational cost. 
14:30  Exploiting Functional Constraints in Generating Dominance Breaking Nogoods for Constraint Optimization PRESENTER: Allen Z. Zhong ABSTRACT. Dominance breaking is an effective technique to reduce the time for solving constraint optimization problems. Lee and Zhong propose an automated dominance breaking framework for a class of constraint optimization problems based on specific forms of objectives and constraints. In this paper, we propose to enhance the framework for problems with nested function calls which can be flattened to functional constraints. In particular, we focus on aggregation functions and exploit such properties as monotonicity, commutativity and associativity to give an efficient procedure for generating effective dominance breaking nogoods. Experimentation also shows orders-of-magnitude runtime speedups using the generated dominance breaking nogoods and demonstrates the ability to discover new dominance relations on problems with ineffective or no known dominance breaking constraints in the literature. 
15:00  Doctoral Research Award 
14:00  Restricting tree grammars with term rewriting PRESENTER: Felix Laarmann ABSTRACT. We investigate the problem of enumerating all terms generated by a tree grammar which are also in normal form with respect to a set of directed equations (a rewriting relation). To this end we show that deciding emptiness and finiteness of the resulting set is EXPTIME-complete. The emptiness result is inspired by a prior result by Comon and Jacquemard on ground reducibility. The finiteness result is based on a modification of the pumping arguments used by Comon and Jacquemard. We highlight practical applications and limitations. We provide and evaluate a prototype implementation. The limitations are somewhat surprising in that, while deciding emptiness and finiteness is EXPTIME-complete for both linear and non-linear rewrite relations, the linear case is practically feasible while the non-linear case is infeasible, even for a trivially small example. The algorithms provided for the linear case also improve on prior practical results by Kallat et al. 
14:30  PRESENTER: Nariyoshi Chida ABSTRACT. Many modern regular expression engines employ various extensions to give more expressive support for real-world usage. Among the major extensions employed by many of the modern regular expression engines are backreferences and lookaheads. A question of interest about these extended regular expressions is their expressive power. Previous works have shown that (i) the extension by lookaheads does not enhance the expressive power, i.e., the expressive power of regular expressions with lookaheads is still regular, and that (ii) the extension by backreferences enhances the expressive power, i.e., the expressive power of regular expressions with backreferences (abbreviated as rewb) is no longer regular. This raises the following natural question: does the extension of regular expressions with backreferences by lookaheads enhance the expressive power of regular expressions with backreferences? This paper answers the question positively by proving that adding either positive lookaheads or negative lookaheads increases the expressive power of rewb (the former abbreviated as rewblp and the latter as rewbln). In particular, we show that, unlike rewb, rewbln is closed under intersection. A consequence of our result is that neither the class of finite state automata nor that of memory automata (MFA) of Schmid (or any other known class of automata, to our knowledge) corresponds to rewblp or rewbln. To fill the void, as a first step toward building such automata, we propose a new class of automata called memory automata with positive lookaheads (PLMFA) that corresponds to rewblp. The key idea of PLMFA is to extend MFA with a new kind of memory, called positive-lookahead memory, that is used to simulate the backtracking behavior of positive lookaheads. 
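The two extensions discussed here can be tried out directly in Python's `re` engine, which supports both backreferences and lookaheads. These snippets only illustrate the features themselves, not the paper's constructions:

```python
import re

# Backreferences exceed the regular languages: (a*b)\1 matches ww
# for w of the form a^n b, and { a^n b a^n b } is not regular.
assert re.fullmatch(r"(a*b)\1", "aaabaaab")          # w = aaab
assert re.fullmatch(r"(a*b)\1", "aabab") is None     # halves differ

# A positive lookahead (?=...) tests a pattern without consuming
# input, so several constraints can apply to the same string.
has_digit_and_letter = re.compile(r"(?=.*\d)(?=.*[a-z])\w+")
assert has_digit_and_letter.fullmatch("ab1")
assert has_digit_and_letter.fullmatch("abc") is None

# A negative lookahead (?!...) forbids a pattern at that position.
no_foo_prefix = re.compile(r"(?!foo)\w+")
assert no_foo_prefix.fullmatch("bar")
assert no_foo_prefix.fullmatch("food") is None

print("all regex examples behave as expected")
```

Combining a backreference with a lookahead, as in `(?=(a*b)\1)\w*`, is exactly the rewblp-style mixture whose expressive power the paper analyzes.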
15:00  ABSTRACT. Two-counter machines, pioneered by Minsky in the 1960s, constitute a particularly simple, universal model of computation. Universality of reversible two-counter machines (having a right-unique step relation) was shown by Morita in the 1990s. Therefore, the halting problem for reversible two-counter machines is undecidable. Surprisingly, this statement is specific to certain instruction sets of the underlying machine model. In the present work we consider two-counter machines (CM2) with instructions inc(c) (increment counter c, go to next instruction) and dec(c,q) (if counter c is zero, then go to next instruction, otherwise decrement counter c and go to instruction q). While the halting problem for CM2 is undecidable, we give a decision procedure for the halting problem for reversible CM2, contrasting Morita's result. We supplement our result with decision procedures for uniform boundedness (is there a uniform bound on the number of reachable configurations?) and uniform mortality (is there a uniform bound on the number of steps in any run?) for CM2. Termination and correctness of each presented decision procedure are certified using the Coq proof assistant. In fact, both the implementation and certification are carried out simultaneously using the tactic language of the Coq proof assistant. Building upon existing infrastructure, the mechanized decision procedures are contributed to the Coq library of undecidability proofs. 
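The inc/dec instruction semantics stated in the abstract are complete enough to interpret directly. A minimal sketch (our own illustrative interpreter, unrelated to the paper's Coq mechanization):

```python
def run_cm2(program, max_steps=1000):
    """Interpret a two-counter machine (CM2) per the abstract's semantics.
    program: list of instructions, each ("inc", c) or ("dec", c, q),
    with counter index c in {0, 1} and q a 0-based instruction index.
    inc(c): increment counter c, go to the next instruction.
    dec(c,q): if counter c is zero, go to the next instruction;
    otherwise decrement counter c and go to instruction q.
    Returns (pc, counters) if the machine halts (pc runs past the
    program) within max_steps, else None."""
    counters = [0, 0]
    pc = 0
    for _ in range(max_steps):
        if pc >= len(program):
            return pc, tuple(counters)  # halted
        instr = program[pc]
        if instr[0] == "inc":
            counters[instr[1]] += 1
            pc += 1
        else:  # ("dec", c, q)
            _, c, q = instr
            if counters[c] == 0:
                pc += 1
            else:
                counters[c] -= 1
                pc = q
    return None  # step budget exhausted, possibly non-terminating

# Count down counter 0 while counting up counter 1.
program = [
    ("inc", 0), ("inc", 0), ("inc", 0),  # set counter 0 to 3
    ("inc", 1),                          # 3: loop body: counter 1 += 1
    ("dec", 0, 3),                       # 4: if c0 > 0: c0 -= 1, goto 3
]
print(run_cm2(program))  # (5, (0, 4)): c1 ends at initial c0 + 1
```

The bounded `max_steps` loop is only a safeguard for the demo; deciding halting, boundedness, or mortality in general is exactly what the paper's procedures address for the reversible and uniform variants.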
14:00  PRESENTER: Bart Bogaerts ABSTRACT. Justification theory is an abstract unifying formalism that captures the semantics of various non-monotonic logics. One intriguing problem that has received significant attention is the consistency problem: under which conditions are justifications for a fact and justifications for its negation suitably related? Two variants of justification theory exist: one in which justifications are trees and one in which they are graphs. In this work we resolve the consistency problem once and for all for the tree-like setting by showing that all reasonable tree-like justification systems are consistent. 
14:22  PRESENTER: Bart Bogaerts ABSTRACT. Justification theory is a general framework for the definition of semantics of rule-based languages that has a high explanatory potential. Nested justification systems, first introduced by Denecker et al. (2015), allow for the composition of justification systems. This notion of nesting thus enables the modular definition of semantics of rule-based languages, and increases the representational capacities of justification theory. As we show in this paper, the original semantics for nested justification systems lead to the loss of information relevant for explanations. In view of this problem, we provide an alternative characterization of the semantics of nested justification systems and show that this characterization is equivalent to the original semantics. Furthermore, we show how nested justification systems allow representing fixpoint definitions (Hou and Denecker 2009). 
14:44  PRESENTER: Fabrizio Riguzzi ABSTRACT. Hybrid Knowledge Bases based on Lifschitz's logic of Minimal Knowledge with Negation as Failure are a successful approach to combining the expressivity of Description Logics and Logic Programming in a single language. Their syntax, defined by Motik and Rosati, disallows function symbols. In order to define a well-founded semantics for MKNF HKBs, Knorr et al. define a partition of the modal atoms occurring in it, called the alternating fixpoint partition. In this paper, we propose an iterated fixpoint semantics for HKBs with function symbols. We prove that our semantics extends Knorr et al.'s, in that, for a function-free HKB, it coincides with its alternating fixpoint partition. The proposed semantics lends itself well to a probabilistic extension with a distribution semantics approach, which is the subject of future work. 
15:06  A Fixpoint Characterization of Three-Valued Disjunctive Hybrid MKNF Knowledge Bases PRESENTER: Spencer Killen ABSTRACT. The logic of hybrid MKNF (minimal knowledge and negation as failure) is a powerful knowledge representation language that elegantly pairs ASP (answer set programming) with ontologies. One may handle a disjunctive rule by inducing a collection of normal rules, each with the same body and a single atom in its head. In this work, we refer to a set of such normal rules as a head-cut. The question arises as to whether the semantics of disjunctive hybrid MKNF knowledge bases can be characterized using fixpoint constructions with head-cuts. Earlier, we have shown that head-cuts can be paired with fixpoint operators to capture the two-valued MKNF models of disjunctive hybrid MKNF knowledge bases. Three-valued semantics extend two-valued semantics with the ability to express partial information. In this work, we present a fixpoint construction that leverages head-cuts using an operator that iteratively captures three-valued models of hybrid MKNF knowledge bases with disjunctive rules. This characterization also captures partial models of disjunctive logic programs, since a program can be expressed as a disjunctive hybrid MKNF knowledge base with an empty ontology. We elaborate on a relationship between this characterization and approximators in AFT (approximation fixpoint theory) for normal hybrid MKNF knowledge bases. 
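The device of splitting disjunctive rules into normal rules is concrete enough to sketch. Under one natural reading of the abstract (keep each rule's body and pick a single head atom per rule; the encoding and names are ours, and the ontology component is ignored entirely):

```python
from itertools import product

# A disjunctive rule as (head_atoms, body_atoms), e.g. a ; b :- c.
rules = [
    (("a", "b"), ("c",)),  # a ; b :- c.
    (("d",), ()),          # d.   (already a normal rule)
]

def head_cuts(rules):
    """Enumerate sets of normal rules obtained by keeping, for each
    disjunctive rule, its body and a single atom chosen from its head
    (one reading of the head-cut device sketched in the abstract)."""
    per_rule = [[(head, body) for head in heads] for heads, body in rules]
    return [list(choice) for choice in product(*per_rule)]

for cut in head_cuts(rules):
    print(cut)
# Two head-cuts: { a :- c.  d. } and { b :- c.  d. }
```

Each enumerated set is a normal program, which is what lets the fixpoint operators for normal rules be brought to bear on the disjunctive case.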
14:00  Sum-Product Loop Programming: From Probabilistic Circuits to Loop Programming PRESENTER: Viktor Pfanschilling ABSTRACT. Recently, Probabilistic Circuits such as Sum-Product Networks have received growing attention, as they can represent complex features but still provide tractable inference. Although quite successful, they unfortunately lack the capability of handling control structures such as for and while loops. In this work, we introduce the Sum-Product Loop Language (SPLL), a novel programming language that is capable of tractable inference on complex probabilistic code that includes loops. SPLL has dual semantics: every program has generative semantics familiar to most programmers and probabilistic semantics that assign a probability to each possible result. This way, the programmer can describe how to generate samples almost as in any standard programming language. The language takes care of computing the probability values of all results for free at run time. We demonstrate that SPLL inherits the beneficial properties of PCs, namely tractability and differentiability, while generalizing to other distributions and programs. 
14:30  PRESENTER: Blai Bonet ABSTRACT. We consider the problem of learning generalized policies for classical planning domains, without supervision and using graph neural networks (GNNs), from small instances represented in lifted STRIPS. The problem has been considered before, but the proposed neural architectures are complex and the results are often mixed. In this work, we use a simple and general GNN architecture and aim at crisp experimental results and understanding: either the policy greedy in the learned value function achieves close to 100% generalization over instances larger than those in training, or the failure must be understood, and possibly fixed, logically. For this, we exploit the relation established between the expressive power of GNNs and the C2 fragment of first-order logic (namely, FOL with two variables and counting quantifiers). We find, for example, that domains with general policies that require more expressive features can be solved with GNNs once the states are extended with suitable "derived atoms", encoding role compositions and transitive closures that do not fit into C2. The work is most closely related to the GNN approach for learning optimal general policies in a supervised fashion by Ståhlberg, Bonet and Geffner (2021); yet the learned policies are no longer required to be optimal (which expands the scope, as many planning domains do not have general optimal policies) and are learned without supervision. Interestingly, value-based reinforcement learning methods that aim to produce optimal policies do not yield policies that generalize, as the goals of optimality and generality are often in conflict. 
15:00  PRESENTER: Yaniv Aspis ABSTRACT. Neuro-symbolic reasoning approaches proposed in recent years combine a neural perception component with a symbolic reasoning component to solve a downstream task. By doing so, these approaches can provide neural networks with symbolic reasoning capabilities, improve their interpretability, and enable generalization beyond the training task. However, this often comes at the cost of poor training time, with potential scalability issues. In this paper, we propose a scalable neuro-symbolic approach, called Embed2Sym. We complement a two-stage (perception and reasoning) neural network architecture, designed to solve a downstream task end-to-end, with a symbolic optimisation method for extracting learned latent concepts. Specifically, the trained perception network generates clusters in embedding space that are identified and labelled using symbolic knowledge and a symbolic solver. With the latent concepts identified, a neuro-symbolic model is constructed by combining the perception network with the symbolic knowledge of the downstream task, resulting in a model that is interpretable and transferable. Our evaluation shows that Embed2Sym outperforms state-of-the-art neuro-symbolic systems on benchmark tasks in terms of training time by several orders of magnitude, while providing similar if not better accuracy. 
14:00  PRESENTER: Curtis Bright ABSTRACT. Lam's problem is to prove the nonexistence of a type of geometric structure that has intrigued mathematicians for centuries. In this work we reduce Lam's problem to a Boolean satisfiability (SAT) problem and generate certificates which, for the first time, can be used to verify the resolution of Lam's problem. The search space underlying the problem has many nontrivial symmetries, necessitating an approach that detects and removes symmetries as early as possible. We couple a method of isomorph-free exhaustive generation with a SAT solver and show that this significantly improves the performance of the solver when symmetries are present. 
14:10  Rushing and Strolling among Answer Sets - Navigation Made Easy (Extended Abstract) PRESENTER: Dominik Rusovac ABSTRACT. Answer set programming (ASP) is a popular declarative programming paradigm with a wide range of applications in artificial intelligence. Oftentimes, when modeling an AI problem with ASP, and in particular when we are interested in more than the search for optimal solutions, what matters is an actual solution, the differences between solutions, or the number of solutions of the ASP program. For example, a user may aim to identify a specific answer set according to her needs, or may require the total number of diverging solutions to comprehend probabilistic applications such as reasoning in medical domains. At present, only certain problem-specific and hand-crafted encoding techniques are available for navigating the solution space of ASP programs, which is oftentimes not enough. In this paper, we propose a formal and general framework for interactive navigation toward desired subsets of answer sets, analogous to faceted browsing. Our approach enables the user to explore the solution space by consciously zooming in or out of subspaces of solutions at a configurable pace. We illustrate that weighted faceted navigation is computationally hard. Finally, we provide an implementation of our approach that demonstrates the feasibility of our framework for incomprehensible solution spaces. 
14:20  A General Framework for Stable Roommates Problems using Answer Set Programming PRESENTER: Esra Erdem ABSTRACT. This is an extended abstract of our paper entitled "A General Framework for Stable Roommates Problems using Answer Set Programming", published in Theory and Practice of Logic Programming (https://doi.org/10.1017/S1471068420000277). The Stable Roommates problem (SR) is characterized by the preferences of agents over other agents as roommates: each agent ranks all others in strict order of preference. A solution to SR is then a partition of the agents into pairs so that each pair shares a room, and no pair of agents blocks this matching (i.e., no two agents each prefer the other to their roommate in the matching). There are interesting variations of SR that are motivated by applications (e.g., the preference lists may be incomplete (SRI) and involve ties (SRTI)), and that try to find a fairer solution (e.g., Egalitarian SR). Unlike the Stable Marriage problem, not every SR instance is guaranteed to have a solution. For that reason, there are also variations of SR that try to find a good-enough solution (e.g., Almost SR). Most of these variations are NP-hard. We introduce a formal framework, called SRTI-ASP, utilizing the logic programming paradigm Answer Set Programming, that is provably general enough to solve many such variations of SR. Our empirical analysis shows that SRTI-ASP is also promising for applications. 
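To make the stability condition concrete, here is a minimal brute-force sketch in Python (a hypothetical four-agent instance, not the SRTI-ASP encoding from the paper): a matching is stable exactly when no two agents each prefer the other to their assigned roommates.

```python
# Strict preference lists for four agents (toy instance, chosen so that
# agents 1 and 2, and agents 3 and 4, are mutual first choices).
prefs = {
    1: [2, 3, 4],
    2: [1, 3, 4],
    3: [4, 1, 2],
    4: [3, 1, 2],
}

def rank(agent, other):
    """Position of `other` in `agent`'s preference list (lower = better)."""
    return prefs[agent].index(other)

def all_matchings(agents):
    """Yield all perfect matchings (partitions into pairs) of the agents."""
    if not agents:
        yield []
        return
    first, rest = agents[0], agents[1:]
    for i, partner in enumerate(rest):
        for sub in all_matchings(rest[:i] + rest[i + 1:]):
            yield [(first, partner)] + sub

def is_stable(matching):
    """Stable iff there is no blocking pair: two agents who each prefer
    the other to their assigned roommate."""
    partner = {a: b for pair in matching for a, b in (pair, pair[::-1])}
    for a in partner:
        for b in partner:
            if a != b and partner[a] != b:
                if rank(a, b) < rank(a, partner[a]) and rank(b, a) < rank(b, partner[b]):
                    return False
    return True

stable = [m for m in all_matchings([1, 2, 3, 4]) if is_stable(m)]
```

For this instance [(1, 2), (3, 4)] is the unique stable matching: the other two matchings both contain the blocking pair (1, 2).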
14:30  Reasoning about Cardinal Directions between 3-Dimensional Extended Objects using Answer Set Programming PRESENTER: Yusuf Izmirlioglu ABSTRACT. We propose a novel formal framework (called 3D-NCDC-ASP) to represent and reason about cardinal directions between extended objects in 3-dimensional (3D) space, using Answer Set Programming (ASP). 3D-NCDC-ASP extends the Cardinal Directional Calculus (CDC) with a new type of default constraints, and NCDC-ASP to 3D. 3D-NCDC-ASP provides a flexible platform offering different types of reasoning: non-monotonic reasoning with defaults, checking consistency of a set of constraints on 3D cardinal directions between objects, explaining inconsistencies, and inferring missing CDC relations. We prove the soundness of 3D-NCDC-ASP, and illustrate its usefulness with applications. 
14:40  PRESENTER: Esra Erdem ABSTRACT. Hybrid classical planning, where classical task planning is integrated with low-level feasibility checks (e.g., motion planning that utilizes collision checks), has received attention in the AI and Robotics communities [ErdemHPPU11, HertleDKN12, Plaku12, KaelblingL13, SrivastavaFRCRA14, LagriffoulDBSK14, HadfieldMenellGCA15, ErdemPS16], due to the cognitive skills required by autonomous robots in the real world. Since these studies rely on classical planning methods, they assume complete knowledge about the initial states and full observability of the environment, while computing a hybrid classical plan (i.e., a sequence of actuation actions) offline. During the execution of these plans, discrepancies might occur between the expected states and the observed states, due to contingencies that were not or could not be considered during offline planning. For instance, a robot may assume that initially all utensils in a cabinet are clean, and compute a hybrid classical plan to set up a table accordingly. While executing this plan, the robot may detect (e.g., by its sensors) that the plate it picks from the cabinet is not clean. To cope with such contingencies, robots are usually equipped with a plan execution monitoring algorithm: when a discrepancy is detected, the robot tries to recover from it, generally by replanning from that point on. As an alternative to computing offline hybrid classical plans and then dealing with discrepancies/surprises online during plan execution by monitoring and replanning, we propose a parallel offline hybrid method, called HCPlan. This method extends hybrid planning beyond classical planning by inheriting the advantages of conditional planning to deal with contingencies due to incomplete knowledge and partial observability. 
In conditional planning, in addition to actuation actions with deterministic outcomes, sensing actions with nondeterministic outcomes are considered as part of planning, in order to gather the relevant missing knowledge. Every possible outcome of a sensing action leads to a possibly different conditional plan. Therefore, a conditional plan looks like a tree of actions, where branching occurs at vertices that characterize sensing actions, and the other vertices denote actuation actions. Each branch of such a tree, from the root to a leaf, essentially represents a possible execution of actuation and sensing actions to reach a goal state from the initial state. HCPlan utilizes this advantageous aspect of conditional planning while computing offline plans: it plans for nondeterministic sensing actions to gather missing knowledge when needed, while planning for deterministic actuation actions. Moreover, HCPlan integrates feasibility checks not only for the executability of actuation actions, but also for the executability of sensing actions. HCPlan is a parallel algorithm: it computes a hybrid conditional plan by intelligently orchestrating the computation of its branches in parallel. According to HCPlan, the computation of every branch starting from a vertex that characterizes a sensing action is viewed as a computation task with a priority (e.g., defined relative to the depth of that vertex), and batches of computation tasks are solved in parallel with respect to their priorities, so as to use the computational resources more effectively. Furthermore, HCPlan avoids recomputation of the same task when it appears at different parts of the tree. Each computation task handled by HCPlan takes as input an incomplete initial state, and returns a hybrid sequential plan (i.e., a sequence of deterministic actuation actions and nondeterministic sensing actions) that describes a branch of the tree. 
A hybrid sequential plan is not a hybrid classical plan, since it may involve nondeterministic sensing actions as well. Therefore, solving a computation task (i.e., computing a branch of a hybrid conditional plan) also requires cognitive abilities beyond hybrid classical planning. For that, HCPlan adapts some advantages of causality-based nonmonotonic logics [Turner08, GelfondL98] for a novel solution to these challenges. In particular, HCPlan utilizes defaults to express assumptions (e.g., the location of an object remains the same unless changed by an actuation action), exogeneity of actions to express that sensing actions may occur at any time when possible, and nondeterministic causal laws to choose an outcome of a sensing action included in the computation of a branch. Defaults are useful for formalizing assumptions in that, when an exception occurs contrary to the assumptions, it does not lead to an inconsistency; this is an important aspect of defaults, providing a solution to the famous frame problem. Exogeneity of actions is useful in that the order of nondeterministic sensing actions need not be specified in advance while solving a task. HCPlan uses the nonmonotonic reasoning system CCalc [McCainT97] to compute each branch of a hybrid conditional plan. We show a real-world application of HCPlan where a mobile bimanual service robot sets up a table for lunch. While computing a hybrid conditional plan, the robot has to consider various contingencies that are not known in advance. Furthermore, while planning for its actions, the robot has to consider different types of feasibility checks (e.g., reachability, graspability, collisions), as well as commonsense knowledge. Once a hybrid conditional plan is computed by HCPlan, we also illustrate possible executions of this plan. 
To evaluate the strengths and weaknesses of HCPlan experimentally, we construct a benchmark suite over the kitchen table setup domain. We evaluate HCPlan from the perspectives of computation time and plan quality, and compare it with closely related planners and with an execution monitoring algorithm that utilizes hybrid planning and guided replanning. We refer the reader to our journal paper [NoumanPE2021] for further information. 
14:50  LACE: A Logical Approach to Collective Entity Resolution PRESENTER: Meghyn Bienvenu ABSTRACT. In this paper, we revisit the problem of entity resolution and propose a novel, logical framework, LACE, which mixes declarative and procedural elements to achieve a number of desirable properties. Our approach is fundamentally declarative in nature: it utilizes hard and soft rules to specify conditions under which pairs of entity references must or may be merged, together with denial constraints that enforce consistency of the resulting instance. Importantly, however, rule bodies are evaluated on the instance resulting from applying the already ‘derived’ merges. It is the dynamic nature of our semantics that enables us to capture collective entity resolution scenarios, where merges can trigger further merges, while at the same time ensuring that every merge can be justified. As the denial constraints restrict which merges can be performed together, we obtain a space of (maximal) solutions, from which we can naturally define notions of certain and possible merges and query answers. We explore the computational properties of our framework and determine the precise computational complexity of the relevant decision problems. Furthermore, as a first step towards implementing our approach, we demonstrate how we can encode the various reasoning tasks using answer set programming. 
15:00  Rule Learning over Knowledge Graphs with Genetic Logic Programming PRESENTER: Emanuel Sallinger ABSTRACT. Using logical rules plays an important role in Knowledge Graph (KG) construction and completion. Such rules not only encode expert background knowledge and the relational patterns among the data, but also infer new knowledge and insights from them. Due to the exponential search space, current approaches resort to exhaustive search with a number of heuristics and syntactic restrictions on the rule language, which limits the quality of the resulting rules. In this paper, we extend the rule hypothesis space to the general Datalog rule space by proposing a novel genetic logic programming algorithm named Evoda. It uses an iterative process to learn high-quality rules over large-scale KGs in a matter of seconds. We performed experiments over multiple real-world KGs with various evaluation metrics to show its capability of mining higher-quality rules and making more precise predictions in the extended rule space. Additionally, we applied it to KG completion tasks to illustrate its competitiveness with several state-of-the-art embedding models. 
Semantic Intermediate Representations for the Working Metatheoretician:
Abstract: Designers of typed programming languages commonly prove metatheoretic properties such as type soundness for at least a core of their language. But any practical language implementation must provide some way of interoperating with code written in other languages, usually via a foreign-function interface (FFI), which opens the door to new, potentially unsafe, behaviors that aren't accounted for in the original type soundness proof. Despite the prevalence of interoperability in practical software, principled foundations for the end-to-end design, implementation, and verification of interoperability mechanisms have been largely neglected. In this talk, I'll advocate a proof technique for _working metatheoreticians_ seeking to ensure soundness or security properties of real languages, which implement interoperability using glue code that mediates interaction between languages after compilation to a common lower-level intermediate representation (IR). This proof technique involves building a _semantic intermediate representation_: a semantic model of source-language types as relations on terms of the lower-level IR. Semantic IRs can be used to guide the design and implementation of sound FFIs and to verify that the IR glue code used to implement conversions ensures type soundness. More interestingly, semantic IRs provide a basis for numerous avenues of future work on the principled design of language interoperability: how to support the inclusion of libraries whose behavior is foreign to the original language, how to prove soundness and security properties that are robust to behaviors (attackers) outside of the semantic IR, and how to develop a semantic-IR compiler backend that makes it easier to implement and verify sound interoperability for a wide array of source languages.
14:00  Tight Bounds for Tseitin Formulas PRESENTER: Petr Smirnov ABSTRACT. We show that for any connected graph G the size of any regular resolution or OBDD(∧, reordering) refutation of a Tseitin formula based on G is at least 2^Ω(tw(G)), where tw(G) is the treewidth of G. These lower bounds improve upon the previously known bounds and, moreover, they are tight. For both of the proof systems, there are constructive upper bounds that almost match the obtained lower bounds; hence the class of Tseitin formulas is almost automatable for regular resolution and for OBDD(∧, reordering). 
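For readers unfamiliar with Tseitin formulas, the following toy Python sketch (an illustrative construction, not the refutation machinery of the paper) builds the CNF for a triangle graph and confirms the classical parity fact by brute force.

```python
from itertools import product

def tseitin_cnf(edges, charge):
    """Tseitin formula of a graph: one Boolean variable per edge; for each
    vertex v, the XOR of its incident edge variables must equal charge[v].
    Each clause forbids one incident-edge assignment of the wrong parity.
    Literals are (edge_index, required_value) pairs."""
    clauses = []
    for v, c in charge.items():
        inc = [i for i, e in enumerate(edges) if v in e]
        for bits in product([0, 1], repeat=len(inc)):
            if sum(bits) % 2 != c:
                clauses.append([(i, 1 - b) for i, b in zip(inc, bits)])
    return clauses

def satisfiable(n_vars, clauses):
    """Brute-force SAT check over all 2^n assignments."""
    return any(
        all(any(assign[i] == val for i, val in cl) for cl in clauses)
        for assign in product([0, 1], repeat=n_vars)
    )

triangle = [(0, 1), (1, 2), (0, 2)]
# Summing the parity constraints counts every edge twice, so the total
# charge must be even for satisfiability: odd total charge means UNSAT.
odd_sat = satisfiable(3, tseitin_cnf(triangle, {0: 1, 1: 0, 2: 0}))
even_sat = satisfiable(3, tseitin_cnf(triangle, {0: 0, 1: 0, 2: 0}))
```

Here odd_sat comes out False and even_sat True, matching the parity argument; the hard instances in the paper arise from expander-like graphs of high treewidth with odd total charge.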
14:30  A generalization of the Satisfiability Coding Lemma and its applications PRESENTER: Harry Sha ABSTRACT. The seminal Satisfiability Coding Lemma of Paturi, Pudlák, and Zane is a coding scheme for satisfying assignments of $k$-CNF formulas. We generalize it to give a coding scheme for implicants and use this generalized scheme to establish new structural and algorithmic properties of prime implicants of $k$-CNF formulas. Our first application is a near-optimal bound of $n \cdot 3^{n(1-\Omega(1/k))}$ on the number of prime implicants of any $n$-variable $k$-CNF formula. This resolves an open problem from the Ph.D. thesis of Talebanfard, who proved such a bound for the special case of constant-read $k$-CNF formulas. Our proof is algorithmic in nature, yielding an algorithm for computing the set of all prime implicants (the Blake Canonical Form) of a given $k$-CNF formula. The problem of computing the Blake Canonical Form of a given function is a classic one, dating back to Quine, and our work gives the first nontrivial algorithm for $k$-CNF formulas. 
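As a concrete reminder of the objects involved, this small Python sketch enumerates the prime implicants (the Blake Canonical Form) of a tiny 2-CNF formula by exhaustive search; the formula and all names are illustrative, and the paper's algorithm is of course far more refined than brute force.

```python
from itertools import combinations, product

VARS = ["x1", "x2", "x3"]
# Toy formula (hypothetical example): (x1 OR x2) AND (x1 OR x3).
CNF = [[("x1", True), ("x2", True)], [("x1", True), ("x3", True)]]

def satisfies(assign, cnf):
    """True iff the total assignment makes every clause true."""
    return all(any(assign[v] == val for v, val in cl) for cl in cnf)

def is_implicant(term):
    """A term (partial assignment) is an implicant iff every total
    extension of it satisfies the formula."""
    free = [v for v in VARS if v not in term]
    return all(
        satisfies({**term, **dict(zip(free, bits))}, CNF)
        for bits in product([False, True], repeat=len(free))
    )

def blake_canonical_form():
    """All prime implicants: implicants none of whose proper sub-terms
    is itself an implicant."""
    primes = []
    for r in range(len(VARS) + 1):
        for subset in combinations(VARS, r):
            for bits in product([False, True], repeat=r):
                term = dict(zip(subset, bits))
                if is_implicant(term) and not any(
                    is_implicant({v: term[v] for v in sub})
                    for rr in range(r)
                    for sub in combinations(subset, rr)
                ):
                    primes.append(term)
    return primes
```

For this formula the prime implicants are x1 alone and x2 AND x3, so the Blake Canonical Form is x1 OR (x2 AND x3).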
15:00  On the Parallel Parameterized Complexity of MaxSAT Variants PRESENTER: Max Bannach ABSTRACT. In the maximum satisfiability problem (MaxSAT) we are given a propositional formula in conjunctive normal form and have to find an assignment that satisfies as many clauses as possible. We study the parallel parameterized complexity of various versions of MaxSAT and provide the first constant-time algorithms parameterized either by the solution size or by the allowed excess relative to some guarantee ("above guarantee" versions). For the dual parameterized version, where the parameter is the number of clauses we are allowed to leave unsatisfied, we present the first parallel algorithm for Max2SAT (known as Almost-2SAT). The difficulty in solving Almost-2SAT in parallel comes from the fact that the iterative compression method, originally developed to prove that the problem is fixed-parameter tractable at all, is inherently sequential. We observe that a graph flow whose value is a parameter can be computed in parallel, and use this fact to develop a parallel algorithm for the vertex cover problem parameterized above the size of a given matching. Finally, we study the parallel complexity of MaxSAT parameterized by the vertex cover number, the treedepth, the feedback vertex set number, and the treewidth of the input's incidence graph. While MaxSAT is fixed-parameter tractable for all of these parameters, we show that they allow different degrees of possible parallelization. For all four we develop dedicated parallel algorithms that are constructive, meaning that they output an optimal assignment, in contrast to results that can be obtained by parallel meta-theorems, which often only solve the decision version. 
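As a minimal illustration of the problem being parameterized (a hypothetical toy instance, not one of the paper's algorithms), exhaustive search suffices for tiny formulas and makes the dual parameter visible:

```python
from itertools import product

# Toy formula with an unavoidable conflict (hypothetical example):
# (x) AND (NOT x) AND (x OR y) -- at most 2 of the 3 clauses can hold.
CLAUSES = [[("x", True)], [("x", False)], [("x", True), ("y", True)]]
VARIABLES = ["x", "y"]

def max_sat(clauses, variables):
    """Maximum number of simultaneously satisfiable clauses, found by
    exhaustive search over all 2^n assignments."""
    best = 0
    for bits in product([False, True], repeat=len(variables)):
        assign = dict(zip(variables, bits))
        sat = sum(any(assign[v] == val for v, val in cl) for cl in clauses)
        best = max(best, sat)
    return best

opt = max_sat(CLAUSES, VARIABLES)
# The dual parameter is the number of clauses left unsatisfied.
dual = len(CLAUSES) - opt
```

Here opt is 2 and dual is 1: any assignment must violate (x) or (NOT x), and the dual parameterization asks whether at most k such violations suffice.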
Linear recurrences and Skolem problem (rescheduled talks from Tuesday)
Edon Kelmendi. Computing the Density of the Positivity Set for Linear Recurrence Sequences (15 min)
Richard Lipton, Florian Luca, Joris Nieuwveld, Joel Ouaknine, David Purser and James Worrell. On the Skolem Problem and the Skolem Conjecture (15 min)
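As background for these talks: for a linear recurrence sequence (LRS), the positivity set is {n : u_n > 0}, and one asks how dense it is among the naturals. A toy Python sketch with a periodic LRS, purely illustrative and unrelated to the talks' actual results:

```python
def lrs_terms(coeffs, initial, n):
    """First n terms of the linear recurrence
    u_{k+d} = coeffs[0]*u_{k+d-1} + ... + coeffs[d-1]*u_k."""
    u = list(initial)
    d = len(coeffs)
    while len(u) < n:
        u.append(sum(c * v for c, v in zip(coeffs, reversed(u[-d:]))))
    return u[:n]

# u_{k+2} = u_{k+1} - u_k with u_0 = 0, u_1 = 1 is periodic with period 6:
# 0, 1, 1, 0, -1, -1, ...  It is positive exactly when n = 1, 2 (mod 6),
# so the positivity set has density 1/3.
terms = lrs_terms([1, -1], [0, 1], 6000)
density = sum(t > 0 for t in terms) / len(terms)
```

For this sequence the empirical density over any whole number of periods is exactly 1/3; for general LRS, computing (or even deciding positivity of) such sets is the deep problem the talks address.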
16:00-16:30: Early Career Award
16:30-17:30: ACP General Assembly
16:00  Early Career Award 
16:00  Strategies for Asymptotic Normalization PRESENTER: Giulio Guerrieri ABSTRACT. We introduce a technique to study normalizing strategies when termination is asymptotic, that is, when the result appears as a limit, as opposed to being reached as a normal form in a finite number of steps. Asymptotic termination occurs in several settings, such as effectful, and in particular probabilistic, computation, where the limits are distributions over the possible outputs, or infinitary lambda-calculi, where the limits are infinitary normal forms such as Böhm trees. As a concrete application, we obtain a result which is of independent interest: a normalization theorem for Call-by-Value (and, in a uniform way, for Call-by-Name) probabilistic lambda-calculus. 
16:30  PRESENTER: Loïc Peyrot ABSTRACT. Solvability is a key notion in the theory of the call-by-name λ-calculus, used in particular to identify meaningful terms. However, adapting this notion to other call-by-name calculi, or extending it to different models of computation, such as call-by-value, is not straightforward. In this paper, we study solvability for call-by-name and call-by-value λ-calculi with generalized applications, both variants inspired by von Plato's natural deduction with generalized elimination rules. We develop an operational as well as a logical theory of solvability for each of them. The operational characterization relies on a notion of solvable reduction for generalized applications, and the logical characterization is given in terms of typability in an appropriate non-idempotent intersection type system. Finally, we show that solvability in generalized applications and solvability in the λ-calculus are equivalent notions. 
17:00  PRESENTER: Willem Heijltjes ABSTRACT. We present normalization for intuitionistic combinatorial proofs (ICPs) and relate it to the simply-typed lambda-calculus. We prove confluence and strong normalization. Combinatorial proofs, or "proofs without syntax", form a graphical semantics of proof in various logics that is canonical yet complexity-aware: they are a polynomial-sized representation of sequent proofs that factors out exactly the non-duplicating permutations. Our approach to normalization aligns with these characteristics: it is canonical (free of permutations) and generic (readily applied to other logics). Our reduction mechanism is a canonical representation of reduction in sequent calculus with closed cuts (no abstraction is allowed below a cut), and relates to closed reduction in lambda-calculus and supercombinators. While we use ICPs concretely, the notion of reduction is completely abstract, and can be specialized to give a reduction mechanism for any representation of typed normal forms. 
16:00  A Neuro-Symbolic ASP Pipeline for Visual Question Answering PRESENTER: Nelson Higuera ABSTRACT. We present a neuro-symbolic visual question answering (VQA) pipeline for CLEVR, a well-known dataset that consists of pictures showing scenes with objects and questions related to them. Our pipeline covers (i) training neural networks for object classification and bounding-box prediction of the CLEVR scenes, (ii) statistical analysis of the distribution of prediction values of the neural networks to determine a threshold for high-confidence predictions, and (iii) a translation of CLEVR questions and network predictions that pass confidence thresholds into logic programs, so that we can compute the answers using an ASP solver. By exploiting choice rules, we consider deterministic and non-deterministic scene encodings. Our experiments show that the non-deterministic scene encoding achieves good results even if the neural networks are trained rather poorly, in comparison with the deterministic approach. This is important for building robust VQA systems whose network predictions are less than perfect. Furthermore, we show that restricting non-determinism to reasonable choices allows for more efficient implementations than related neuro-symbolic approaches, without losing much accuracy. 
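Step (ii) of such a pipeline can be pictured with a small, purely illustrative Python sketch (the function names and the threshold value are hypothetical, not taken from the paper): only network outputs whose top softmax probability clears a threshold are turned into facts for the ASP program.

```python
import math

def softmax(logits):
    """Numerically stable softmax over a list of raw scores."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def confident_classes(batch_logits, threshold):
    """Predicted class index for each input whose top softmax probability
    reaches the threshold; low-confidence predictions are dropped rather
    than asserted as logic-program facts."""
    kept = []
    for logits in batch_logits:
        probs = softmax(logits)
        top = max(probs)
        if top >= threshold:
            kept.append(probs.index(top))
    return kept

# A sharp prediction passes the filter; a near-uniform one is dropped.
facts = confident_classes([[5.0, 0.0, 0.0], [1.0, 0.9, 0.8]], threshold=0.9)
```

In the non-deterministic encoding described in the abstract, the dropped predictions would instead become ASP choice rules over the plausible alternatives, rather than being discarded outright.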
16:22  Knowledge Authoring with Factual English PRESENTER: Paul Fodor ABSTRACT. Knowledge representation and reasoning (KRR) systems represent knowledge as collections of facts and rules. Like databases, KRR systems contain information about domains of human activity such as enterprises, science, and business. In addition to the capabilities of databases, KRR systems can represent complex concepts and relations, and they can query and manipulate information in much more sophisticated ways. Unfortunately, KRR technology has been hindered by the fact that specifying the requisite knowledge requires skills that most domain experts do not have, while knowledge engineering is a rare skill. One solution could be to extract knowledge from English text, and a number of works have attempted to do so (OpenSesame, etc.). Unfortunately, at present, extraction of logical facts from unrestricted natural language is still too inaccurate to be used for reasoning, while restricting the grammar of the language (so-called controlled natural language, or CNL) is hard for users to learn and use. Nevertheless, some recent CNL-based approaches, such as the Knowledge Authoring Logic Machine (KALM), have been shown to have very high accuracy compared to others, and a natural question is to what extent the CNL restrictions can be lifted. In this paper, we address this issue by transplanting the KALM framework to a neural natural language parser, MultiStanza. Here we limit our attention to authoring facts and queries, and therefore our focus is what we call factual English statements. Authoring other types of knowledge, such as rules, will be considered in follow-up work. As it turns out, neural-network-based parsers have problems of their own, and the mistakes they make range from part-of-speech tagging to lemmatization to dependency errors. We present a number of techniques for combating these problems and test the new system, KALM^FL (i.e., KALM for factual language), on a number of benchmarks, which show that KALM^FL achieves correctness in excess of 95%. 
16:44  PRESENTER: Kristian Reale ABSTRACT. In the last few years, we have been witnessing the spread of computing devices getting smaller and smaller (e.g., smartphones, smart devices, Raspberry Pi, etc.), and the production and availability of data getting bigger and bigger. In this work we introduce DLV Large Scale (DLV-LS), a framework based on Answer Set Programming (ASP) for performing declarative reasoning tasks over data-intensive applications, possibly on smart devices. The framework encompasses DLV Mobile Edition (DLV-ME), an ASP-based solver for Android systems and Raspberry Pi devices, and DLV Enterprise Edition (DLV-EE), an ASP-based platform, accessible through REST interfaces, for large-scale reasoning over Big Data, classical relational database systems, and NoSQL databases. DLV-LS enables smart devices both to locally perform reasoning over data generated by their own sensors and to properly interact with DLV-EE when more computational power is needed for harder tasks, possibly over bigger centralized data. We also present a real-world application of DLV-LS: the use case consists of a tourist navigator that calculates the best routes and optimizes a tour of a tourist under custom-defined time constraints. 
16:59  Modeling and Reasoning in Event Calculus using Goal-Directed Constraint Answer Set Programming (Extended Abstract) PRESENTER: Joaquin Arias ABSTRACT. Automated commonsense reasoning is essential for building human-like AI systems featuring, for example, explainable AI. Event Calculus (EC) is a family of formalisms that model commonsense reasoning with a sound, logical basis. Previous attempts to mechanize reasoning using EC faced difficulties in the treatment of continuous change in dense domains (e.g., time and other physical quantities), constraints among variables, default negation, and the uniform application of different inference methods, among others. We propose the use of s(CASP), a query-driven, top-down execution model for Predicate Answer Set Programming with Constraints, to model and reason using EC. We show how EC scenarios can be elegantly modeled in s(CASP) and how its expressiveness makes it possible to perform deductive and abductive reasoning tasks in domains featuring, for example, constraints involving dense time and fluents. 
17:14  A Multi-shot ASP Encoding for the Aircraft Routing and Maintenance Planning Problem PRESENTER: Pierre Tassel ABSTRACT. Aircraft routing and maintenance planning are integral parts of the airline scheduling process. We model and solve these combinatorial optimization problems with Answer Set Programming (ASP), contrasting traditional single-shot ASP solving methods with multi-shot solving techniques geared to the rapid discovery of near-optimal solutions to subproblems of gradually increasing granularity. 
Mario Roman. Monoidal Streams for Dataflow Programming
Paolo Pistone. Curry and Howard Meet Borel
Pascal Schweitzer. Choiceless Polynomial Time with Witnessed Symmetric Choice
Yoàv Montacute. The Pebble-Relation Comonad in Finite Model Theory
Felipe Ferreira Santos. Separating LREC from LFP
Nikolas Mählmann. Model Checking on Interpretations of Classes of Bounded Local Clique-Width
The FLoC Olympic Games 2022 follow the successful first edition held in 2014 in conjunction with FLoC, in the spirit of the ancient Olympic Games. Every four years, as part of the Federated Logic Conference, the Games gather together the most challenging disciplines from across computational logic in the form of solver competitions.
At this Olympic Games Session, the organizers of the following competitions will present their competitions to the public and give away special prizes to their successful competitors: Confluence Competition, LP/CP Programming Contest, XCSP3 Competition, MiniZinc Challenge, SAT Competition, MaxSAT Evaluation, Model Counting Competition, and QBF Evaluation.
16:00  Proofs for Propositional Model Counting PRESENTER: Markus Hecher ABSTRACT. Although propositional model counting (#SAT) was long considered too hard to be practical, today's highly efficient solvers facilitate applications in probabilistic reasoning, reliability estimation, quantitative design space exploration, and more. The current trend of solvers growing more capable every year is likely to continue as a diverse range of algorithms is explored in the field. However, to establish model counters as reliable tools like SAT solvers, correctness is as critical as speed. As is the nature of complex systems, bugs emerge as soon as the tools are widely used. To identify and avoid bugs, explain decisions, and provide trustworthy results, we need verifiable results. In this paper, we propose a novel proof system for certifying model counts. We show how proof traces can be generated efficiently for exact model counters based on dynamic programming, counting CDCL with component caching, as well as knowledge compilation to d-DNNF. These approaches are the most widely used techniques in exact implementations. Furthermore, we provide proof-of-concept software for emitting proofs from model counters and a parallel trace checker. Based on this, we show the feasibility of using certified model counting in an empirical experiment. 
16:30  PRESENTER: Marie Miceli ABSTRACT. We present and evaluate d4Max, an exact approach for solving the Weighted Max#SAT problem. The Max#SAT problem extends the model counting problem (#SAT) by considering a tripartition of the variables {X, Y, Z}, and consists in maximizing over the variables X the number of assignments to Y that can be extended to a solution with some assignment to Z. The Weighted Max#SAT problem is an extension of the Max#SAT problem with weights associated with each interpretation. We test and compare our approach with other state-of-the-art solvers on the challenging task in probabilistic inference of finding the marginal maximum a posteriori probability (MMAP) of a given subset of the variables in a Bayesian network, and on exist-random quantified SSAT benchmarks. The results clearly show the overall superiority of d4Max in terms of speed and number of instances solved. Moreover, we experimentally show that, in general, d4Max is able to quickly spot a solution that is close to optimal, thereby opening the door to an efficient anytime approach. 
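The Max#SAT objective described in the abstract above can be made concrete with a tiny brute-force sketch (the function and the toy formula are hypothetical illustrations; d4Max itself relies on knowledge compilation, not enumeration):

```python
from itertools import product

def max_count_sat(formula, X, Y, Z):
    """Brute-force Max#SAT: maximize over assignments to X the number of
    assignments to Y that some assignment to Z extends to a model.
    `formula` is a predicate over a dict mapping variable name -> bool."""
    best = 0
    for xs in product([False, True], repeat=len(X)):
        a = dict(zip(X, xs))
        count = 0
        for ys in product([False, True], repeat=len(Y)):
            a.update(zip(Y, ys))
            # Is this (X, Y) assignment extendable by some Z assignment?
            if any(formula({**a, **dict(zip(Z, zs))})
                   for zs in product([False, True], repeat=len(Z))):
                count += 1
        best = max(best, count)
    return best

# Toy formula (x or y) and (y or z): with x = True, both y-values extend
f = lambda m: (m['x'] or m['y']) and (m['y'] or m['z'])
print(max_count_sat(f, ['x'], ['y'], ['z']))  # → 2
```

The doubly exponential loop makes the semantics explicit; the point of solvers like d4Max is precisely to avoid this enumeration.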
17:00  Weighted Model Counting with Twin-Width PRESENTER: Stefan Szeider ABSTRACT. Bonnet et al. (FOCS 2020) introduced the graph invariant twin-width and showed that many NP-hard problems are tractable for graphs of bounded twin-width, generalizing similar results for other width measures including treewidth and clique-width. In this paper, we investigate the use of twin-width for solving the propositional satisfiability problem (SAT) and model counting. We particularly focus on Bounded-ones Weighted Model Counting (BWMC), which takes as input a CNF formula F along with a bound k and asks for the weighted sum of all models with at most k positive literals. BWMC generalizes not only SAT but also (weighted) model counting. We develop the notion of "signed" twin-width of CNF formulas, and establish that BWMC is fixed-parameter tractable when parameterized by the certified signed twin-width of F plus k. We show that this result is tight: it is neither possible to drop the bound k nor to use the vanilla twin-width if one wishes to retain fixed-parameter tractability, even for the easier problem SAT. Our theoretical results are complemented with an empirical evaluation and comparison of signed twin-width on various classes of CNF formulas. 
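The BWMC problem defined in the abstract above admits a short brute-force sketch that pins down the semantics (the function name and the DIMACS-style encoding are illustrative assumptions, not part of the paper's algorithm, which exploits bounded signed twin-width):

```python
from itertools import product

def bwmc(clauses, n, weight, k):
    """Brute-force Bounded-ones Weighted Model Counting (BWMC): sum the
    weights of all models of a CNF over variables 1..n that set at most
    k variables to true. Clauses are lists of signed integers (DIMACS
    style); `weight` maps a model (tuple of bools) to a number."""
    total = 0
    for bits in product([False, True], repeat=n):
        if sum(bits) > k:
            continue  # more than k positive literals: outside the bound
        m = {i + 1: b for i, b in enumerate(bits)}
        if all(any(m[abs(l)] == (l > 0) for l in c) for c in clauses):
            total += weight(bits)
    return total

# (x1 or x2) and (not x1 or x3), uniform weight 1
cnf = [[1, 2], [-1, 3]]
print(bwmc(cnf, 3, lambda b: 1, 1))  # only {x2} satisfies it with <= 1 one
```

With k = n and uniform weights this reduces to plain model counting, and with k = n plus a satisfiability check it reduces to SAT, matching the generalizations stated in the abstract.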
17:00  PRESENTER: Tommaso Flaminio ABSTRACT. Conditionals play a key role in different areas of logic and probabilistic reasoning, and they have been studied and formalised from different angles. In this paper we focus on de Finetti's notion of a conditional as a three-valued object, with betting-based semantics, and the related approach to conditionals as random quantities, as mainly developed by two of the authors. Compound conditionals have been studied in the literature, but not in full generality. In this paper we provide a natural procedure to explicitly attach conditional random quantities to arbitrary compound conditionals that also allows us to compute their previsions. By studying the properties of these random quantities, we show that, in fact, the set of compound conditionals can be endowed with a Boolean algebraic structure. In doing so, we pave the way to building a bridge between the long-standing tradition of three-valued conditionals and a more recent proposal of looking at conditionals as elements of suitable Boolean algebras. 
17:30  A General Framework for Modelling Conditional Reasoning - Preliminary Report PRESENTER: Giovanni Casini ABSTRACT. We introduce and investigate a formalisation for conditionals that allows the definition of a broad class of reasoning systems. This framework covers the most popular kinds of conditional reasoning in logic-based KR: the semantics we propose is appropriate for a structural analysis of those conditionals that do not satisfy the closure properties associated with classical logics. 
17:00  Act for Your Duties but Maintain Your Rights PRESENTER: Shufang Zhu ABSTRACT. Most of the synthesis literature has focused on studying how to synthesize a strategy to fulfill a task. This task is a duty for the agent. In this paper, we argue that intelligent agents should also be equipped with rights, that is, tasks that the agent itself can choose to fulfill (e.g., the right of recharging the battery). The agent should be able to maintain these rights while acting for its duties. We study this issue in the context of LTLf synthesis: we give duties and rights in terms of LTLf specifications, and synthesize a suitable strategy to achieve the duties that can be modified on the fly to also achieve the rights, if the agent chooses to do so while executing it. We show that handling rights does not make synthesis substantially more difficult, although it requires a more sophisticated solution concept than standard LTLf synthesis. We also extend our results to the case in which further duties and rights are given to the agent during execution. 
17:30  PRESENTER: Francois Schwarzentruber ABSTRACT. We compare the syntactic approach to the representation of epistemic states in the multi-agent domain with the possible-world semantic approach. The syntactic approach exploits belief bases and models belief change by means of dynamic operators for belief base expansion. The semantic approach relies on multi-relational Kripke models and represents belief change through so-called action models of Dynamic Epistemic Logic (DEL). We first show how to translate a formula of the belief base approach into DEL: in particular, we provide a specific action model scheme corresponding to the process of expanding an agent's belief base with a formula. Conversely, we identify a fragment of DEL that can be translated into the multi-agent dynamic epistemic language interpreted on belief bases. 
Pickup at 18:00 from the Technion (tour of Haifa; no food will be provided)