
08:30-09:00 Coffee & Refreshments
09:00-10:00 Session 56A: Invited Talk (zoom link: https://technion.zoom.us/j/97111314813) (CP)
All Questions Answered

ABSTRACT. ZOOM link: https://technion.zoom.us/j/97111314813


During the past two years, the speaker has been drafting a new section of The Art of Computer Programming, which is intended to be a solid introduction to techniques for solving Constraint Satisfaction Problems. The CP 2022 conference is an excellent opportunity for him to get feedback from the leading experts on the subject, and so he was delighted to learn that the organizers were also interested in hearing a few words from him. Rather than giving a canned lecture, he much prefers to let the audience choose the topics, and for all questions to be kept a secret from him until the lecture is actually in progress. (He believes that people often learn more from answers that are spontaneously fumbled than from responses that are carefully preplanned.) Questions related to constraints will naturally be quite welcome, but questions on any subject whatsoever will not be ducked! He'll try to answer them all as best he can, without spending a great deal of time on any one topic, unless there is special interest to go into more depth. Meanwhile he hopes to have drafted some notes for circulation before the conference begins, in case some attendees might wish to focus some of their questions on expository material related to his forthcoming book, either during this session or informally afterwards. Warning: His least favorite questions have the form "What is your favorite X?" If you want to ask such questions, please try to do it cleverly so that he doesn't have to choose between different things that he loves in different ways.

How to download the current draft

You may have a look at the current draft at http://cs.stanford.edu/~knuth/fasc7a.ps.gz. Please note that this draft will hopefully be extended by a dozen or more pages by the time CP begins (the same link will be used). Don also points out that a reward of $2.56 per bug detected will be given to first responders, plus 32 cents per excellent suggestion for improvement.

How to ask questions?

If you wish to ask a question, please send it by e-mail to christine.solnon@insa-lyon.fr before July 31, 2022.





09:00-10:30 Session 56B: Type Theory and Formalization (FSCD)
An Analysis of Tennenbaum's Theorem in Constructive Type Theory
PRESENTER: Marc Hermes

ABSTRACT. Tennenbaum's theorem states that the only countable model of Peano arithmetic (PA) with computable arithmetical operations is the standard model of natural numbers. In this paper, we use constructive type theory as a framework to revisit and generalize this result.

The chosen framework allows for a synthetic approach to computability theory, by exploiting the fact that, externally, all functions definable in constructive type theory can be shown computable. We internalize this fact by assuming a version of Church's thesis expressing that any function on natural numbers is representable by a formula in PA. This assumption allows for a conveniently abstract setup to carry out rigorous computability arguments and feasible mechanization.

Concretely, we constructivize several classical proofs and present one inherently constructive rendering of Tennenbaum's theorem, all following arguments from the literature. Concerning the classical proofs in particular, the constructive setting allows us to highlight differences in their assumptions and conclusions which are not visible classically. All versions are accompanied by a unified mechanization in the Coq proof assistant.

Constructing Unprejudiced Extensional Type Theories with Choices via Modalities
PRESENTER: Vincent Rahli

ABSTRACT. Time-progressing expressions, i.e., expressions that compute to different values over time such as Brouwerian choice sequences or reference cells, are a common feature in many frameworks. For type theories to support such elements, they usually employ sheaf models. In this paper, we provide a general framework in the form of an extensional type theory incorporating various time-progressing elements along with a general possible-worlds forcing interpretation parameterized by modalities. The modalities can, in turn, be instantiated with topological spaces of bars, leading to a general sheaf model. This parameterized construction allows us to capture a distinction between theories that are “agnostic”, i.e., compatible with classical reasoning in the sense that classical axioms can be validated, and those that are “intuitionistic”, i.e., incompatible with classical reasoning in the sense that classical axioms can be proven false. This distinction is made via properties of the modalities selected to model the theory and consequently via the space of bars instantiating the modalities. We further identify a class of time-progressing elements that allows deriving “intuitionistic” theories that include not only choice sequences but also simpler operators, namely reference cells.

Division by two, in homotopy type theory
PRESENTER: Samuel Mimram

ABSTRACT. Natural numbers are isomorphism classes of finite sets and one can look for operations on sets which, after quotienting, allow recovering traditional arithmetic operations. Moreover, from a constructivist perspective, it is interesting to study whether those operations can be performed without resorting to the axiom of choice (the use of classical logic is usually necessary). Following the work of Bernstein, Sierpiński, Doyle and Conway, we study "division by two" (or, rather, regularity of multiplication by two). We provide a full formalization of this operation on sets, using the cubical variant of Agda, which is an implementation of the homotopy type theory setting, thus revealing some interesting points in the proof. Moreover, we show that this construction extends to general types, as opposed to sets.

09:00-10:30 Session 56C: Semantics (ICLP)
Location: Taub 9
On Syntactic Forgetting under Uniform Equivalence
PRESENTER: Matthias Knorr

ABSTRACT. Forgetting in Answer Set Programming (ASP) aims at reducing the language of a logic program without affecting the consequences over the remaining language. It has recently gained interest in the context of modular ASP where it allows simplifying a program of a module, making it more declarative, by omitting auxiliary atoms or hiding certain atoms/parts of the program not to be disclosed. Unlike for arbitrary programs, it has been shown that forgetting for modular ASP can always be applied, for input, output and hidden atoms, and preserve all dependencies over the remaining language (in line with uniform equivalence). However, the definition of the result is based solely on a semantic characterization in terms of HT-models. Thus, computing an actual result is a complicated process and the result commonly bears no resemblance to the original program, i.e., we are lacking a corresponding syntactic operator. In this paper, we show that there is no forgetting operator that preserves uniform equivalence (modulo the forgotten atoms) between the given program and its forgetting result by only manipulating the rules of the original program that contain the atoms to be forgotten. We then present a forgetting operator that preserves uniform equivalence and is syntactic whenever this is suitable. We also introduce a special class of programs, where syntactic forgetting is always possible, and as a complementary result, establish it as the largest known class where forgetting while preserving all dependencies is always possible.

Abduction in Probabilistic Logic Programs
PRESENTER: Fabrizio Riguzzi

ABSTRACT. In Probabilistic Abductive Logic Programming we are given a probabilistic logic program, a set of abducible facts, and a set of constraints. Inference in probabilistic abductive logic programs aims to find a subset of the abducible facts that is compatible with the constraints and that maximizes the joint probability of the query and the constraints. In this paper, we extend the PITA reasoner with an algorithm to perform abduction on probabilistic abductive logic programs exploiting Binary Decision Diagrams. Tests on several synthetic datasets show the effectiveness of our approach.
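The optimization at the heart of this inference task can be illustrated with a brute-force sketch. This is purely illustrative and hypothetical: the PITA extension described in the paper uses Binary Decision Diagrams rather than enumeration, and the `consistent` and `explains` callbacks below stand in for constraint checking and query entailment; abducible facts are assumed probabilistically independent.

```python
from itertools import combinations

def best_abduction(abducibles, probs, consistent, explains):
    """Brute-force probabilistic abduction: pick the subset of abducible
    facts that satisfies the constraints, explains the query, and
    maximizes the joint probability (facts assumed independent)."""
    best, best_p = None, -1.0
    for k in range(len(abducibles) + 1):
        for subset in combinations(abducibles, k):
            s = set(subset)
            if not consistent(s) or not explains(s):
                continue
            # Joint probability: chosen facts are true, the rest false.
            p = 1.0
            for a in abducibles:
                p *= probs[a] if a in s else (1 - probs[a])
            if p > best_p:
                best, best_p = s, p
    return best, best_p
```

A BDD-based implementation avoids this exponential enumeration by sharing common sub-explanations, which is what makes the approach scale in practice.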

DeepStochLog: Neural Stochastic Logic Programming
PRESENTER: Giuseppe Marra

ABSTRACT. Recent advances in neural-symbolic learning, such as DeepProbLog, extend probabilistic logic programs with neural predicates. Like graphical models, these probabilistic logic programs define a probability distribution over possible worlds, for which inference is computationally hard. We propose DeepStochLog, an alternative neural-symbolic framework based on stochastic definite clause grammars, a kind of stochastic logic program. More specifically, we introduce neural grammar rules into stochastic definite clause grammars to create a framework that can be trained end-to-end. We show that inference and learning in neural stochastic logic programming scale much better than for neural probabilistic logic programs. Furthermore, the experimental evaluation shows that DeepStochLog achieves state-of-the-art results on challenging neural-symbolic learning tasks.

Transforming Gringo Rules into Formulas in a Natural Way

ABSTRACT. Research on the input language of the ASP grounder gringo uses a translation that converts rules in that language into first-order formulas. That translation often transforms short rules into formulas that are syntactically complex. In this note we identify a class of rules that can be transformed into formulas in a simpler, more natural way. The new translation contributes to our understanding of the relationship between the language of gringo and first-order languages.

On Paraconsistent Belief Revision: the Case of Priest’s Logic of Paradox
PRESENTER: Nicolas Schwind

ABSTRACT. Belief revision aims at incorporating a new piece of information into the beliefs of an agent. This process can be modelled by the standard AGM/KM rationality postulates. However, these postulates suppose a classical setting and do not govern the case where (some parts of) the beliefs of the agent are not consistent. In this work we discuss how to adapt these postulates when the underlying logic is Priest’s Logic of Paradox, in order to model a rational change, while being a conservative extension of AGM/KM belief revision. This paper is a summary of [5].

Model Reconciliation in Logic Programs

ABSTRACT. Inspired by recent research in explainable planning, we investigate the model reconciliation problem between two logic programs π_a and π_h, which represent the knowledge bases of an agent and a human, respectively. Given π_a, π_h, and a query q such that π_a entails q and π_h does not entail q (or π_a does not entail q and π_h entails q), the model reconciliation problem focuses on the question of how to modify π_h, by adding ε⁺ ⊆ π_a to π_h and removing ε⁻ ⊆ π_h from π_h, such that the resulting program π̂_h = (π_h ∖ ε⁻) ∪ ε⁺ has an answer set containing q (or has no answer set containing q). The pair (ε⁺, ε⁻) is referred to as a solution for the model reconciliation problem (π_a, π_h, q) (or (π_a, π_h, ¬q)). We prove that, for a reasonably selected set of rules ε⁺ ⊆ π_a, there exists a way to modify π_h such that π̂_h is guaranteed to credulously entail q (or skeptically entail ¬q). Given that there are potentially several solutions, we discuss different characterizations of solutions and algorithms for computing solutions for model reconciliation problems.
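The search for a solution pair can be sketched for the special case of definite propositional programs, where entailment reduces to membership in the least model (a hypothetical brute-force illustration; the paper works with general answer set programs, where credulous and skeptical entailment differ):

```python
from itertools import combinations

def least_model(program):
    """Forward chaining for a definite propositional program.
    Each rule is a pair (head, frozenset_of_body_atoms)."""
    model, changed = set(), True
    while changed:
        changed = False
        for head, body in program:
            if body <= model and head not in model:
                model.add(head)
                changed = True
    return model

def reconcile(pi_a, pi_h, q):
    """Return a pair (eps_plus, eps_minus) with eps_plus taken from the
    agent's program and eps_minus from the human's, such that the
    modified human program entails q (first hit in a small-first
    enumeration; not necessarily globally minimal)."""
    pi_a, pi_h = list(pi_a), list(pi_h)
    for k_add in range(len(pi_a) + 1):
        for eps_plus in combinations(pi_a, k_add):
            for k_del in range(len(pi_h) + 1):
                for eps_minus in combinations(pi_h, k_del):
                    new = (set(pi_h) - set(eps_minus)) | set(eps_plus)
                    if q in least_model(new):
                        return set(eps_plus), set(eps_minus)
    return None
```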

09:00-10:30 Session 56E: Quantitative Algebra, Probabilities and Monads (LICS)

"Quantitative Algebra, Probabilities and Monads": 6 papers (12 min presentation + 2-3 min Q&A)

Location: Taub 1
Varieties of quantitative algebras and their monads

ABSTRACT. Quantitative algebras, for a given signature with countable arities, are algebras equipped with a metric making all operations nonexpanding. They have been studied by Mardare, Panangaden and Plotkin who also introduced $c$-basic quantitative equations for regular cardinals $c$. Categories of quantitative algebras that can be presented by such equations for $c=\aleph_1$ are called $\omega_1$-varieties. We prove that they are precisely the monadic categories $\Met^\T$, where $\T$ is a countably basic monad on the category $\Met$ of metric spaces.

For finitary signatures one speaks about $\omega$-varieties in case $c= \aleph_0$. If all metrics used are restricted to $\bf{UMet}$, the category of ultrametric spaces, then $\omega$-varieties are precisely the monadic categories $\bf{UMet}^\T$, where $\T$ is a finitely basic monad.

Beyond Nonexpansive Operations in Quantitative Algebraic Reasoning

ABSTRACT. The framework of quantitative equational logic has been successfully applied to reason about algebras whose carriers are metric spaces and operations are nonexpansive. We extend this framework in two orthogonal directions: algebras endowed with generalised metric space structures, and operations being nonexpansive up to a lifting. We apply our results to the algebraic axiomatisation of the Łukaszyk--Karmowski distance on probability distributions, which has recently found application in the field of representation learning on Markov processes.
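For discrete distributions, the Łukaszyk-Karmowski distance is, to our understanding, the expected absolute difference between independent samples; unlike a metric, it is generally nonzero on identical distributions, which is part of what makes operations "nonexpansive up to a lifting" interesting. A minimal sketch (assuming that definition):

```python
def lk_distance(p, q):
    """Łukaszyk-Karmowski distance between two discrete distributions,
    each given as a dict {value: probability}: the expected absolute
    difference |X - Y| of independent samples X ~ p, Y ~ q."""
    return sum(px * qy * abs(x - y)
               for x, px in p.items()
               for y, qy in q.items())
```

Note that `lk_distance(p, p)` is positive for any non-degenerate `p`, so this is a distance in a generalised sense only.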

Concrete categories and higher-order recursion (with applications including probability, differentiability, and full abstraction)
PRESENTER: Cristina Matache

ABSTRACT. We study concrete sheaf models for a call-by-value higher-order language with recursion. Our family of sheaf models is a generalization of many examples from the literature, such as models for probabilistic and differentiable programming, and fully abstract logical relations models. We treat recursion in the spirit of synthetic domain theory. We provide a general construction of a lifting monad starting from a class of admissible monomorphisms in the site of the sheaf category. In this way, we obtain a family of models parametrized by a concrete site and a class of monomorphisms, for which we prove a general computational adequacy theorem.

Probability monads with submonads of deterministic states

ABSTRACT. Probability theory can be studied synthetically as the computational effect embodied by a commutative monad. In the recently proposed Markov categories, one works with an abstraction of the Kleisli category and then defines deterministic morphisms equationally in terms of copying and discarding. The resulting difference between 'pure' and 'deterministic' leads us to investigate the 'sober' objects for a probability monad, for which the two concepts coincide. We propose natural conditions on a probability monad which allow us to identify the sober objects and define an idempotent sobrification functor. Our framework applies to many examples of interest, including the Giry monad on measurable spaces, and allows us to sharpen a previously given version of de Finetti's theorem for Markov categories.

Monoidal Streams for Dataflow Programming
PRESENTER: Mario Román

ABSTRACT. We introduce monoidal streams: a generalization of synchronous stream processes to monoidal categories. Monoidal streams over a symmetric monoidal category form a feedback monoidal category. In the same way that classical streams provide semantics to dataflow programming with pure functions, monoidal streams provide semantics to dataflow programming where theories of processes are represented by a symmetric monoidal category. As an example, we study a stochastic dataflow language.

Partitions and Ewens Distributions in element-free Probability Theory

ABSTRACT. This article redevelops and deepens the probability theory of Ewens and others from the 1970s in population biology. At the heart of this theory are the so-called Ewens distributions describing biological mutations. These distributions have a particularly rich (and beautiful) mathematical structure. The original work is formulated in terms of partitions, which are special multisets on natural numbers. The current redevelopment starts from multisets on arbitrary sets, with partitions as a special form that captures only the multiplicities of multiplicities, without naming the elements themselves. This 'element-free' approach will be developed in parallel to the usual element-based theory. Ewens' famous sampling formula describes a cone of (parametrised) distributions on partitions. Another cone for this chain is described in terms of new (element-free) multinomials. They are well-defined because of a novel 'partitions multinomial theorem' that extends the familiar multinomial theorem. This is based on a new concept of 'division', as an element-free distribution, in terms of multisets of probabilities.
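The Ewens sampling formula referred to above can be stated concretely: a partition of n with a_j blocks of size j (so n = Σ_j j·a_j) has probability n!/θ^(n) · Π_j θ^{a_j}/(j^{a_j} · a_j!), where θ^(n) = θ(θ+1)···(θ+n−1) is the rising factorial. A direct sketch of this standard formula:

```python
from math import factorial, prod

def ewens_probability(a, theta):
    """Ewens sampling formula. `a` encodes a partition as block-size
    multiplicities: a[j-1] blocks of size j; theta > 0 is the mutation
    parameter."""
    n = sum(j * a_j for j, a_j in enumerate(a, start=1))
    rising = prod(theta + k for k in range(n))   # theta^(n), rising factorial
    return (factorial(n) / rising) * prod(
        theta ** a_j / (j ** a_j * factorial(a_j))
        for j, a_j in enumerate(a, start=1))
```

Summed over all partitions of a fixed n, these probabilities total 1, which is the "cone of distributions on partitions" the abstract mentions, one distribution per value of θ.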

09:00-10:30 Session 56F: QBF-2 + Stochastic Boolean Satisfiability (SAT)
Quantified CDCL with Universal Resolution

ABSTRACT. Quantified Conflict-Driven Clause Learning (QCDCL) solvers for QBF generate Q-resolution proofs. Pivot variables in Q-resolution must be existentially quantified. Allowing resolution on universally quantified variables leads to a more powerful proof system called QU-resolution, but so far, QBF solvers have used QU-resolution only in very limited ways. We present a modified version of QCDCL that generates proofs in QU-resolution by leveraging propositional unit propagation. Experiments with crafted instances and benchmarks from recent QBF evaluations demonstrate the viability of this approach.
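The two inference steps involved can be sketched directly: the resolution rule (Q-resolution restricts the pivot to existential variables; QU-resolution lifts that restriction) and universal reduction. A hypothetical minimal encoding with clauses as sets of signed integers:

```python
def resolve(c1, c2, pivot):
    """Resolution step on `pivot` (positive in c1, negative in c2).
    QU-resolution additionally allows `pivot` to be universal."""
    assert pivot in c1 and -pivot in c2
    resolvent = (set(c1) - {pivot}) | (set(c2) - {-pivot})
    # A resolvent with complementary literals is tautological: reject it.
    if any(-l in resolvent for l in resolvent):
        return None
    return resolvent

def universal_reduction(clause, level, is_universal):
    """Drop universal literals quantified inside every existential
    literal of the clause, i.e., whose quantifier level exceeds that of
    every existential literal (the standard reduction rule)."""
    max_exist = max((level[abs(l)] for l in clause
                     if not is_universal[abs(l)]), default=-1)
    return {l for l in clause
            if not is_universal[abs(l)] or level[abs(l)] < max_exist}
```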

Relating existing powerful proof systems for QBF

ABSTRACT. We advance the theory of QBF proof systems by showing the first simulation of the universal checking format QRAT by a theory-friendly system. We show that the sequent system G fully p-simulates QRAT, including the Extended Universal Reduction (EUR) rule which was recently used to show QRAT does not have strategy extraction. Because EUR heavily uses resolution paths, our technique also brings resolution-path dependency and sequent systems closer together. While we do not recommend G for practical applications, this work can potentially show what features are needed for a new QBF checking format stronger than QRAT.

Quantifier Elimination in Stochastic Boolean Satisfiability

ABSTRACT. Stochastic Boolean Satisfiability (SSAT) generalizes quantified Boolean formulas (QBFs) by allowing quantification over random variables. Its generality makes SSAT a powerful language for modeling decision and optimization problems under uncertainty. On the other hand, this generalization complicates computation due to its counting nature. In this work, we address the following two questions: 1) Is there an analogue of quantifier elimination in SSAT, similar to QBF? 2) If quantifier elimination is possible for SSAT, can it be effective for SSAT solving? The answers are affirmative. We develop an SSAT decision procedure based on quantifier elimination. Experimental results demonstrate the unique benefits of the new method compared to the state-of-the-art solvers.
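Quantifier elimination here generalizes QBF expansion: eliminating an existential variable takes the maximum over its two cofactors, while eliminating a randomized variable takes the probability-weighted average. A naive top-down sketch of this semantics (real solvers eliminate quantifiers over compact clause representations rather than by branching; the encoding below is hypothetical):

```python
def ssat_value(prefix, clauses):
    """Value of an SSAT instance.
    prefix: list of ('e', var) or ('r', var, p) entries, outermost first.
    clauses: list of sets of signed integer literals."""
    if any(len(c) == 0 for c in clauses):
        return 0.0                       # falsified clause
    if not clauses:
        return 1.0                       # all clauses satisfied
    if not prefix:
        return 0.0                       # well-formed inputs never reach here

    q, var = prefix[0], prefix[0][1]

    def branch(value):
        lit = var if value else -var
        new = [c - {-lit} for c in clauses if lit not in c]
        return ssat_value(prefix[1:], new)

    v_true, v_false = branch(True), branch(False)
    if q[0] == 'e':                      # existential: best choice
        return max(v_true, v_false)
    p = q[2]                             # randomized: weighted average
    return p * v_true + (1 - p) * v_false
```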

10:30-11:00 Coffee Break
11:00-12:30 Session 58A: Modelling + Robust solutions (CP)

11:00-12:00: Modelling

12:00-12:30: Robust Solutions

Location: Taub 7
Computing relaxations for the three-dimensional stable matching problem with cyclic preferences
PRESENTER: Luis Quesada

ABSTRACT. Constraint programming has proven to be a successful framework for determining whether a given instance of the three-dimensional stable matching problem with cyclic preferences (3DSM-CYC) admits a solution. If such an instance is satisfiable, constraint models can even compute its optimal solution for several different objective functions. On the other hand, the only existing output for unsatisfiable 3DSM-CYC instances is a simple declaration of impossibility.

In this paper, we explore four ways to adapt constraint models designed for 3DSM-CYC to the maximum relaxation version of the problem, that is, the computation of the smallest part of an instance whose modification leads to satisfiability. We also extend our models to support the presence of costs on elements in the instance, and to return the relaxation with lowest total cost for each of the four types of relaxation. Empirical results reveal that our relaxation models are efficient, as in most cases, they show little overhead compared to the satisfaction version.

Understanding how people approach constraint modelling and solving
PRESENTER: Ruth Hoffmann

ABSTRACT. Research in constraint programming typically focuses on problem solving efficiency. However, the way users conceptualise problems and communicate with constraint programming tools is often sidelined. How humans think about constraint problems can be important for the development of efficient tools that are useful to a broader audience. For example, a system incorporating knowledge on how people think about constraint problems can provide explanations to users and improve the communication between the human and the solver.

We present an initial step towards a better understanding of the human side of the constraint solving process. To our knowledge, this is the first human-centred study addressing how people approach constraint modelling and solving. We observed three sets of ten users each (constraint programmers, computer scientists and non-computer scientists) and analysed how they find solutions for well-known constraint problems. We found regularities offering clues about how to design systems that are more intelligible to humans.

Large Neighborhood Search for Robust Solutions for Constraint Satisfaction Problems with Ordered Domains
PRESENTER: Jheisson Lopez

ABSTRACT. In real applications, there is typically a lack of information about the uncertainty/dynamism associated with them. Efforts have been made in the literature to find robust solutions for the associated Constraint Satisfaction Problems (CSPs). Here, we analyze a previous exact/complete approach from the state of the art that focuses on CSPs with ordered domains and dynamic bounds. However, this approach performs poorly on large-scale CSPs. For this reason, in this paper, we present an inexact/incomplete approach that is faster at finding robust solutions for large-scale CSPs. This is especially handy when the computation time available for finding a solution is limited and/or a new solution must be re-computed on-line (because the original one is lost due to the dynamism). Specifically, we present a Large Neighbourhood Search (LNS) algorithm combined with Constraint Programming (CP) and Branch-and-Bound (B&B) that searches for robust solutions. Furthermore, we also present a robust-value selection heuristic that guides the search towards more promising branches. We evaluate our approach on large-scale CSP instances, including a case study of scheduling problems. The evaluation shows a considerable improvement in the robustness of the solutions achieved by our algorithm for large-scale CSPs.
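The LNS scheme alternates a destroy step, which relaxes part of the incumbent solution, with a repair step that re-optimizes the freed part (in the paper, via CP and B&B). A generic, purely illustrative sketch with caller-supplied destroy/repair functions:

```python
import random

def large_neighborhood_search(solution, cost, destroy, repair,
                              iterations=200, seed=0):
    """Generic LNS loop: repeatedly relax a fragment of the incumbent
    and rebuild it, keeping strict improvements."""
    rng = random.Random(seed)
    best, best_cost = solution, cost(solution)
    for _ in range(iterations):
        partial = destroy(best, rng)       # free a fragment of the solution
        candidate = repair(partial, rng)   # rebuild it (e.g. with CP/B&B)
        if candidate is not None and cost(candidate) < best_cost:
            best, best_cost = candidate, cost(candidate)
    return best, best_cost
```

In the paper's setting, `repair` would optimize for robustness rather than a plain objective, guided by the robust-value selection heuristic.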

11:00-12:30 Session 58B: Applications (CP)
Location: Taub 4
Scheduling the Equipment Maintenance of an Electric Power Transmission Network using Constraint Programming
PRESENTER: Louis Popovic

ABSTRACT. Modern electrical power utilities must maintain their electrical equipment and replace it as it reaches the end of its useful life. The Transmission Maintenance Scheduling (TMS) problem consists in generating an annual maintenance plan for electric power transportation equipment while maintaining the stability of the network and ensuring a continuous power flow for customers. Each year, a list of equipment that needs to be maintained or replaced is available and the goal is to generate an optimal maintenance plan. This paper proposes a constraint-based scheduling approach to solve the TMS problem. The model considers two types of constraints: (1) constraints that can be naturally formalized inside a constraint programming model, and (2) complex constraints that have no proper formalization from the field specialists. The latter cannot be integrated inside the model due to their complexity. Their satisfaction is thus verified by a black-box tool: a simulator, based on complex differential power-flow equations, that mimics the impact of a maintenance schedule on the real power network. Experiments are carried out at five strategic points of Hydro-Québec's power grid infrastructure, involving more than 200 pieces of electrical equipment and 300 withdrawal requests. Results show that our model is able to comply with most of the formalized and unformalized constraints. It also generates maintenance schedules within an execution time of a few minutes. The generated schedules are similar to the ones proposed by a field specialist and can be used to simulate maintenance programs for the next 10 years.

Solving the Constrained Single-Row Facility Layout Problem with Decision Diagrams
PRESENTER: Vianney Coppé

ABSTRACT. The Single-Row Facility Layout Problem is an NP-hard problem dealing with the ordering of departments with given lengths and pairwise traffic intensities in a facility. In this context, one seeks to minimize the sum of the distances between department pairs, weighted by the corresponding traffic intensities. Practical applications of this problem include the arrangement of rooms on a corridor in hospitals or offices, airplanes and gates in an airport, or machines in a manufacturing plant. This paper presents two novel exact models for the Constrained Single-Row Facility Layout Problem, a recent variant of the problem including positioning, ordering and adjacency constraints. On the one hand, the state-of-the-art mixed-integer programming model for the unconstrained problem is extended to incorporate the constraints. On the other hand, a decision diagram-based approach is described, based on an existing dynamic programming model for the unconstrained problem. Computational experiments show that both models outperform the only mixed-integer programming model in the literature, to the best of our knowledge. While the two models have execution times of the same order of magnitude, the decision diagram-based approach handles positioning constraints much better but the mixed-integer programming model has the advantage for ordering constraints.
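The objective being minimized is the traffic-weighted sum of center-to-center distances induced by an ordering of the departments. A sketch of its evaluation (function and parameter names are hypothetical):

```python
def srflp_cost(order, lengths, traffic):
    """SRFLP objective for a given left-to-right ordering.
    order: permutation of department indices.
    lengths: lengths[i] is the length of department i.
    traffic: symmetric matrix traffic[i][j] of pairwise intensities."""
    # Center position of each department along the row.
    pos, x = {}, 0.0
    for d in order:
        pos[d] = x + lengths[d] / 2.0
        x += lengths[d]
    # Traffic-weighted sum of pairwise center distances.
    total = 0.0
    for a in range(len(order)):
        for b in range(a + 1, len(order)):
            i, j = order[a], order[b]
            total += traffic[i][j] * (pos[j] - pos[i])
    return total
```

The constrained variant then restricts which permutations `order` are feasible (fixed positions, precedences, required adjacencies) without changing this objective.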

Modeling and Solving Parallel Machine Scheduling with Contamination Constraints in the Agricultural Industry
PRESENTER: Felix Winter

ABSTRACT. Modern-day factories of the agricultural industry need to produce and distribute large amounts of compound feed to handle the daily demands of livestock farming. As a highly automated production process is utilized to fulfill the large-scale requirements in this domain, finding efficient machine schedules is a challenging task which requires the consideration of complex constraints and the execution of optional cleaning jobs to prevent contamination of the final products. Furthermore, it is critical to minimize job tardiness in the schedule, since the truck routes which are used to distribute the products to customers are sensitive to delays. Thus, there is a strong need for efficient automated methods that are able to produce optimized schedules in this domain.

This paper formally introduces a novel real-life problem in this area and investigates constraint-modeling techniques as well as a metaheuristic approach to efficiently solve practical scenarios. In particular, we investigate two innovative constraint programming model variants as well as an integer programming formulation to model the contamination constraints, which require an efficient utilization of variables with a real-valued domain. To tackle large-scale instances, we additionally provide a local search approach based on simulated annealing that utilizes problem-specific neighborhood operators.

We provide a set of new real-life problem instances that we use in an extensive experimental evaluation of all proposed approaches. Computational results show that our models can be successfully used together with state-of-the-art constraint solvers to provide several optimal results as well as high-quality bounds for many real-life instances. Additionally, the proposed metaheuristic approach reaches many optimal results and delivers the best upper bounds on many of the large practical instances in our experiments.
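The simulated annealing component can be sketched generically; the illustration below minimizes total tardiness of a toy job permutation, standing in for the paper's problem-specific neighborhood operators (all names are hypothetical):

```python
import math
import random

def simulated_annealing(initial, cost, neighbor, t0=10.0, cooling=0.995,
                        steps=5000, seed=0):
    """Generic simulated annealing minimizer: accept improving moves
    always, worsening moves with probability exp(-delta / t)."""
    rng = random.Random(seed)
    cur, cur_cost = initial, cost(initial)
    best, best_cost = cur, cur_cost
    t = t0
    for _ in range(steps):
        cand = neighbor(cur, rng)
        cand_cost = cost(cand)
        if (cand_cost <= cur_cost
                or rng.random() < math.exp((cur_cost - cand_cost) / t)):
            cur, cur_cost = cand, cand_cost
        if cur_cost < best_cost:
            best, best_cost = cur, cur_cost
        t *= cooling          # geometric cooling schedule
    return best, best_cost
```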

11:00-12:30 Session 58C: Applications: Concurrency, Quantum Computation (FSCD)
Type-Based Termination for Futures
PRESENTER: Siva Somayyajula

ABSTRACT. In sequential functional languages, sized types enable termination checking of programs with complex patterns of recursion in the presence of mixed inductive-coinductive types. In this paper, we adapt sized types and their metatheory to the concurrent setting. We extend the semi-axiomatic sequent calculus, a subsuming paradigm for futures-based functional concurrency, and its underlying operational semantics with recursion and arithmetic refinements. The latter enables a new and highly general sized type scheme we call sized type refinements. As a widely applicable technical device, we type recursive programs with infinitely deep typing derivations that unfold all recursive calls. Then, we observe that certain such derivations can be made infinitely wide but finitely deep. The resulting trees serve as the induction target of our strong normalization result, which we develop via a novel logical relations argument.

Addition and differentiation of ZX-diagrams

ABSTRACT. The ZX-calculus is a powerful framework for reasoning in quantum computing. In particular, it provides a compact representation of matrices of interest. A peculiar property of the ZX-calculus is the absence of a formal sum allowing the linear combination of arbitrary ZX-diagrams. The universality of the formalism guarantees, however, that for any two ZX-diagrams, the sum of their interpretations can be represented by a ZX-diagram. We introduce the first general, inductive definition of the summation of ZX-diagrams, relying on the construction of controlled diagrams. Based on this summation technique, we provide an inductive differentiation of ZX-diagrams.

Indeed, given a ZX-diagram with variables in the description of its angles, one can differentiate the diagram according to one of these variables. Differentiation is ubiquitous in quantum mechanics and quantum computing (e.g. for solving optimisation problems). Technically, differentiation of ZX-diagrams is strongly related to summation as witnessed by the product rules.

We also introduce an alternative, non-inductive differentiation technique based instead on the isolation of the variables. Finally, we apply our results to deduce a diagram for an Ising Hamiltonian.

11:00-12:30 Session 58D: Semantics (ICLP)
Location: Taub 9
Strong Equivalence of Logic Programs with Ordered Disjunction: a Logical Perspective

ABSTRACT. Logic Programs with Ordered Disjunction (LPODs) extend classical logic programs with the capability of expressing preferential disjunctions in the heads of program rules. The initial semantics of LPODs (Brewka 2002; Brewka et al. 2004), although simple and quite intuitive, is not purely model-theoretic. A consequence of this is that certain properties of programs appear non-trivial to formalize in purely logical terms. An example of this state of affairs is the characterization of the notion of strong equivalence for LPODs (Faber et al. 2008). Although the results of (Faber et al. 2008) are accurately developed, they fall short of characterizing strong equivalence of LPODs as logical equivalence in some specific logic. This comes in sharp contrast with the well-known characterization of strong equivalence for classical logic programs, which, as proved in (Lifschitz et al. 2001), coincides with logical equivalence in the logic of here-and-there. In this paper we obtain a purely logical characterization of strong equivalence of LPODs as logical equivalence in a four-valued logic. Moreover, we provide a new proof of the coNP-completeness of strong equivalence for LPODs, which is of interest in its own right since it relies on the special structure of such programs. Our results are based on the recent logical semantics of LPODs introduced in (Charalambidis et al. 2021), a fact which we believe indicates that this new semantics may prove to be a useful tool in the further study of LPODs.

Strong Equivalence of Logic Programs with Counting

ABSTRACT. Under some conditions, a rule in the input language of the ASP grounder gringo can be transformed into a first-order sentence that has the same meaning under the stable model semantics. Such translations can be used for proving strong equivalence of logic programs and for verifying their correctness with respect to specifications expressed by first-order formulas. This paper defines a translation of this kind that is applicable to some rules containing arithmetic operations and aggregate expressions.

Analyzing Semantics of Aggregate Answer Set Programming Using Approximation Fixpoint Theory
PRESENTER: Linde Vanbesien

ABSTRACT. Aggregates provide a concise way to express complex knowledge. The problem of selecting an appropriate formalisation of aggregates for answer set programming (ASP) remains unsettled. This paper revisits the problem from the viewpoint of Approximation Fixpoint Theory (AFT). We introduce an AFT formalisation equivalent to the Gelfond-Lifschitz reduct for basic ASP programs and extend it to handle aggregates. We analyse how existing approaches relate to our framework. We hope this work sheds new light on the issue of a proper formalisation of aggregates.

Solving Problems in PH with ASP(Q)
PRESENTER: Francesco Ricca

ABSTRACT. Answer Set Programming with Quantifiers (ASP(Q)) is a recent extension of Answer Set Programming (ASP) that allows one to model problems from the entire polynomial hierarchy. Earlier work focused on demonstrating the modeling capabilities of ASP(Q). However, further evidence of the modeling efficacy of ASP(Q) is needed and, even more importantly, effective solvers for ASP(Q) must be developed. In this paper, we address both areas. First, we provide elegant quantified ASP encodings for additional relevant AI problems, such as Argumentation Coherence (Π₂ᵖ-complete) and Semi-stable Semantics for ASP (Σ₂ᵖ-complete). Second, we provide a modular ASP(Q) solver that translates a quantified ASP program together with a given data instance into a QBF to be solved by any QBF solver. We evaluate the performance of our solver on several instances and with different back-end QBF solvers, demonstrating its efficacy as a tool for rapid modeling and solving of complex combinatorial problems.

11:00-12:30 Session 58E: KR & Machine Learning (KR)
Location: Taub 2
Neural-Probabilistic Answer Set Programming
PRESENTER: Arseny Skryagin

ABSTRACT. The goal of combining the robustness of neural networks and the expressivity of symbolic methods has rekindled the interest in Neuro-Symbolic AI. One specifically interesting branch of research is deep probabilistic programming languages (DPPLs), which carry out probabilistic logical programming via the probability estimations of deep neural networks. However, recent state-of-the-art DPPL approaches allow only for limited conditional probabilistic queries and do not offer the power of true joint probability estimation. In our work, we propose an easy integration of tractable probabilistic inference within a DPPL. To this end, we introduce SLASH, a novel DPPL that consists of Neural-Probabilistic Predicates (NPPs) and a logical program, united via answer set programming. NPPs are a novel design principle allowing all deep model types, and combinations thereof, to be represented as a single probabilistic predicate. In this context, we introduce a novel +/- notation for answering various types of probabilistic queries by adjusting the atom notations of a predicate. We evaluate SLASH on the benchmark task of MNIST addition as well as on novel tasks for DPPLs, such as missing data prediction, generative learning, and set prediction, achieving state-of-the-art performance and thereby showing the effectiveness and generality of our method.

Faithful Approaches to Rule Learning

ABSTRACT. Rule learning involves developing machine learning models that can be applied to a set of logical facts to predict additional facts, as well as providing methods for extracting from the learned model a set of logical rules that explain symbolically the model's predictions. Existing approaches of this kind, however, do not describe formally the relationship between the model's predictions and the derivations of the extracted rules; rather, it is often claimed without justification that the extracted rules "approximate" or "explain" the model, and rule quality is evaluated by manual inspection. In this paper, we study the formal properties of Neural-LP, a prominent rule learning approach. We show that the rules extracted from Neural-LP models can be both unsound and incomplete: on the same input dataset, the extracted rules can derive facts not predicted by the model, and the model can make predictions not derived by the extracted rules. We also propose a modification to the Neural-LP model that ensures that the extracted rules are always sound and complete. Finally, we show that, on several prominent benchmarks, the classification performance of our modified model is comparable to that of the standard Neural-LP model. Thus, faithful learning of rules is feasible from both a theoretical and a practical point of view.

Looking Inside the Black-Box: Logic-based Explanations for Neural Networks
PRESENTER: João Ferreira

ABSTRACT. Deep neural network-based methods have recently enjoyed great popularity due to their effectiveness in solving difficult tasks. Requiring minimal human effort, they have turned into an almost ubiquitous solution in multiple domains. However, due to the size and complexity of typical neural network architectures, as well as the sub-symbolic nature of the representations generated by their neuronal activations, neural networks are essentially opaque, making it nearly impossible to explain to humans the reasoning behind their decisions. We address this issue by developing a procedure to induce human-understandable logic-based theories that attempt to represent the classification process of a given neural network model. Exploring the setting of a synthetic image classification task, we provide empirical results to assess the quality of the developed theories for different neural network models, compare them to existing theories on that task, and give evidence that the theories developed through our method are faithful to the representations learned by the neural networks that they are built to describe.

11:00-12:30 Session 58F: Recently Published Research (KR)
Location: Taub 3
On Paraconsistent Belief Revision in LP (Extended Abstract)
PRESENTER: Nicolas Schwind

ABSTRACT. Belief revision aims at incorporating a new piece of information into the beliefs of an agent. This process can be modeled by the standard rationality postulates of Katsuno and Mendelzon (KM). However, these postulates assume a classical setting and do not govern the case where (some parts of) the beliefs of the agent are inconsistent. In this work we discuss how to adapt these postulates when the underlying logic is Priest's Logic of Paradox, in order to model a rational change while remaining a conservative extension of KM belief revision. This paper is a summary of (Schwind, Konieczny, and Pino Pérez 2022).

A conditional, a fuzzy and a probabilistic interpretation of self-organising maps (Abstract)
PRESENTER: Laura Giordano

ABSTRACT. This is an extended abstract of the paper:

Giordano L.; Gliozzi V.; and Theseider Dupré D. A conditional, a fuzzy and a probabilistic interpretation of self-organising maps. In Journal of Logic and Computation, Volume 32, Issue 2, March 2022, Pages 178-205. DOI: https://doi.org/10.1093/logcom/exab082

In this paper we establish a link between a preferential and a fuzzy semantics for description logics and self-organizing maps (SOMs), which have been proposed as possible candidates to explain the psychological mechanisms underlying category generalization. In particular, we show that the input/output behavior of a SOM after training can be described by a fuzzy description logic interpretation as well as by a preferential interpretation, based on a concept-wise multi-preferential semantics, which takes into account preferences with respect to different concepts and has been recently proposed for ranked and for weighted defeasible description logics. Properties of the network can be proven by model checking both on the preferential and on the fuzzy interpretations. Starting from the fuzzy interpretation, we also provide a probabilistic account for this neural network model based on Zadeh's probability of fuzzy events.

Influence-Driven Explanations for Bayesian Network Classifiers
PRESENTER: Francesca Toni

ABSTRACT. We propose a novel approach to building influence-driven explanations (IDXs) for (discrete) Bayesian network classifiers (BCs). IDXs feature two main advantages with respect to other commonly adopted explanation methods. First, IDXs may be generated using the (causal) influences between intermediate, in addition to merely input and output, variables within BCs, thus providing a deep, rather than shallow, account of the BCs' behaviour. Second, IDXs are generated according to a configurable set of properties, specifying which influences between variables count towards explanations. Our approach is thus flexible and can be tailored to the requirements of particular contexts or users. Leveraging this flexibility, we propose novel IDX instances as well as IDX instances capturing existing approaches. We demonstrate IDXs' capability to explain various forms of BCs, and assess the advantages of our proposed IDX instances with both theoretical and empirical analyses.

Lifting Symmetry Breaking Constraints with Inductive Logic Programming
PRESENTER: Alice Tarzariol

ABSTRACT. Our work addresses the generation of first-order constraints to reduce symmetries and improve the solving performance for classes of instances of a given combinatorial problem. To this end, we devise a model-oriented approach that obtains positive and negative examples for an Inductive Logic Programming task by analyzing instance-specific symmetries for a training set of instances. The learned first-order constraints are interpretable and can be used to augment a general problem encoding in Answer Set Programming. This extended abstract introduces the context of our work, our contributions, and our results.

Abstract Argumentation with Markov Networks (Extended Abstract)

ABSTRACT. Extended abstract for the Recently Published Research Track

Interpreting Neural Networks as Quantitative Argumentation Frameworks (Extended Abstract)

ABSTRACT. Extended abstract for the Recently Published Research Track

An Axiomatic Approach to Revising Preferences (Extended Abstract)
PRESENTER: Adrian Haret

ABSTRACT. We study a model of preference revision in which a prior preference over a set of alternatives is adjusted in order to accommodate input from an authoritative source, while maintaining certain structural constraints (e.g., transitivity, completeness), and without giving up more information than strictly necessary. We analyze this model under two aspects: the first allows us to capture natural distance-based operators, at the cost of a mismatch between the input and output formats of the revision operator. Requiring the input and output to be aligned yields a second type of operator, which we characterize using preferences on the comparisons in the prior preference. Preference revision is set in a logic-based framework and using the formal machinery of belief change, along the lines of the well-known AGM approach: we propose rationality postulates for each of the two versions of our model and derive representation results, thus situating preference revision within the larger family of belief change operators.

Simulating Multiwinner Voting Rules in Judgment Aggregation
PRESENTER: Julian Chingoma

ABSTRACT. We simulate voting rules for multiwinner elections in a model of judgment aggregation that distinguishes between rationality and feasibility constraints. These constraints restrict the structure of the individual judgments and of the collective outcome computed by the rule, respectively. We extend known results regarding the simulation of single-winner voting rules to the multiwinner setting, both for elections with ordinal preferences and for elections with approval-based preferences. This not only provides us with a new tool to analyse multiwinner elections, but it also suggests the definition of new judgment aggregation rules, by generalising some of the principles at the core of well-known multiwinner voting rules to this richer setting. We explore this opportunity with regards to the principle of proportionality. Finally, in view of the computational difficulty associated with many judgment aggregation rules, we investigate the computational complexity of our embeddings and of the new judgment aggregation rules we put forward.

11:00-12:30 Session 58G: Automata, Transducers and Games (LICS)

"Automata, Transducers and Games": 6 papers (12 min presentation + 2-3 min Q&A)

Location: Taub 1
Abstractions for the local-time semantics of timed automata: a foundation for partial-order methods
PRESENTER: Igor Walukiewicz

ABSTRACT. A timed network is a parallel composition of timed automata synchronizing on common actions. We develop a methodology that allows the use of partial-order methods for the reachability problem in timed networks. It is based on a local-time semantics [Bengtsson et al. (1998)]. We propose a new simulation-based abstraction of local-time zones and develop an efficient algorithm for testing subsumption with respect to this abstraction. The abstraction is not finite in general; we show that, under some weak assumptions, there is no finite abstraction for local-time zones that works for arbitrary networks. We therefore introduce a notion of bounded spread networks, where it is possible to use subsumption and partial-order methods at the same time.

The boundedness and zero isolation problems for weighted automata over nonnegative rationals
PRESENTER: David Purser

ABSTRACT. We consider linear cost-register automata (equivalently, weighted automata) over the semiring of nonnegative rationals, which generalise probabilistic automata. The two problems, boundedness and zero isolation, ask whether there is a sequence of words whose values converge to infinity or to zero, respectively. In the general model both problems are undecidable, so we focus on the linear copyless restriction. There, we show that the boundedness problem is decidable.

As for the zero isolation problem we need to further restrict the class. We obtain a model, where zero isolation becomes equivalent to universal coverability of orthant vector addition systems (OVAS), a new model in the VAS family interesting on its own. In standard VAS runs are considered only in the positive orthant, while in OVAS every orthant has its own set of vectors that can be applied in that orthant. Assuming Schanuel’s conjecture is true, we prove decidability of universal coverability for three-dimensional OVAS, which implies decidability of zero isolation in a model with at most three independent registers.

Efficient Construction of Reversible Transducers from Regular Transducer Expressions

ABSTRACT. The class of regular transformations has several equivalent characterizations such as functional MSO transductions, deterministic two-way transducers, streaming string transducers, as well as regular transducer expressions (RTE).

For algorithmic applications, it is very common and useful to transform a specification, here an RTE, into a machine, here a transducer. In this paper, we give an efficient construction of a two-way reversible transducer (2RFT) equivalent to a given RTE. 2RFTs are a well-behaved class of transducers which are deterministic and co-deterministic (hence allowing evaluation in linear time with respect to the input word), and for which composition has only polynomial complexity.

For full RTE, the constructed 2RFT has size doubly exponential in the size of the expression. If the RTE does not use Hadamard product or chained-star, the constructed 2RFT has size exponential, which is a tight bound.

Active learning for sound negotiations
PRESENTER: Igor Walukiewicz

ABSTRACT. We present two active learning algorithms for sound deterministic negotiations. Sound deterministic negotiations are models of distributed systems, a kind of Petri nets or Zielonka automata with additional structure. We show that this additional structure allows us to minimize such negotiations. The two active learning algorithms differ in the type of membership queries they use. Both have complexity similar to that of Angluin's L* algorithm; in particular, the number of queries is polynomial in the size of the negotiation, not in the number of configurations.

Stochastic Games with Synchronizing Objectives

ABSTRACT. We consider two-player stochastic games played on a finite graph for infinitely many rounds. Stochastic games generalize both Markov decision processes (MDP) by adding an adversary player, and two-player deterministic games by adding stochasticity. The outcome of the game is a sequence of distributions over the states of the game graph. We consider synchronizing objectives, which require the probability mass to accumulate in a set of target states, either always, once, infinitely often, or always after some point in the outcome sequence; and the winning modes of sure winning (if the accumulated probability is equal to 1) and almost-sure winning (if the accumulated probability is arbitrarily close to 1).

We present algorithms to compute the set of winning distributions for each of these synchronizing modes, showing that the corresponding decision problem is PSPACE-complete for synchronizing once and infinitely often, and PTIME-complete for synchronizing always and always after some point. These bounds are remarkably in line with the special case of MDPs, while the algorithmic solution and proof technique are considerably more involved, even for deterministic games. This is because those games have a flavour of imperfect information, in particular they are not determined and randomized strategies need to be considered, even if there is no stochastic choice in the game graph. Moreover, in combination with stochasticity in the game graph, finite-memory strategies are not sufficient in general (for synchronizing infinitely often).

Characterizing Positionality in Games of Infinite Duration over Infinite Graphs

ABSTRACT. We study turn-based quantitative games of infinite duration opposing two antagonistic players and played over graphs. This model is widely accepted as providing the adequate framework for formalizing the synthesis question for reactive systems. This important application motivates the question of strategy complexity: which valuations (or payoff functions) admit optimal positional strategies (without memory)? Valuations for which both players have optimal positional strategies have been characterized by Gimbert and Zielonka for finite graphs and by Colcombet and Niwiński for infinite graphs. However, for reactive synthesis, the existence of optimal positional strategies for the opponent (which models an antagonistic environment) is irrelevant. Despite this fact, not much is known about valuations for which the protagonist admits optimal positional strategies, regardless of the opponent. In this work, we characterize valuations which admit such strategies over infinite graphs. Our characterization uses the vocabulary of universal graphs, which has also proved useful in understanding recent breakthrough results regarding the complexity of parity games. More precisely, we show that a valuation admitting universal graphs which are monotonic and well-ordered is positional over all game graphs, and, more surprisingly, that the converse is also true for valuations admitting neutral letters. We prove the applicability and elegance of the framework by unifying a number of known positionality results, establishing stronger ones, and providing a new closure property.

11:00-12:30 Session 58H: Applications + pragmatics (SAT)
A SAT Attack on Rota's Basis Conjecture

ABSTRACT. SAT modulo Symmetries (SMS) is a recently introduced framework for dynamic symmetry breaking in SAT instances. It combines a CDCL SAT solver with an external lexicographic minimality checking algorithm.

We extend SMS from graphs to matroids and use it to make progress on Rota's Basis Conjecture (1989), which states that one can always decompose a collection of r disjoint bases of a rank-r matroid into r disjoint rainbow bases. Through SMS, we establish that the conjecture holds for all matroids of rank 4 and for certain special cases of matroids of rank 5. Furthermore, we extend SMS with the facility to produce DRAT proofs. External tools can then be used to verify the validity of the additional axioms produced by the lexicographic minimality check.

As a byproduct, we have utilized our framework to enumerate matroids modulo isomorphism and to support the investigation of various other problems on matroids.

The packing chromatic number of the infinite square grid is at least 14

ABSTRACT. A k-packing coloring of a graph G = (V, E) is a mapping from V to {1, ..., k} such that any pair of vertices u, v that receive the same color i must be at distance greater than i in G. Arguably the most fundamental problem regarding packing colorings is to determine the packing chromatic number of the infinite square grid. A sequence of previous works has proved this number to be between 13 and 15. Our work improves the lower bound to 14. Moreover, we present a new encoding that is asymptotically more compact than the previously used ones.
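The packing condition in the abstract above is easy to check by brute force on a finite patch of the grid. The following Python sketch (our own illustration; the helper names are not from the paper, and the paper's actual encoding is a SAT formula, not this check) verifies that no two equally colored cells are too close:

```python
from itertools import combinations

def grid_distance(u, v):
    """Manhattan distance between two cells of the square grid."""
    return abs(u[0] - v[0]) + abs(u[1] - v[1])

def is_packing_coloring(coloring):
    """Check the k-packing condition on a finite patch of the grid.

    `coloring` maps cells (x, y) to colors in {1, ..., k}; two cells
    with the same color i must be at distance greater than i.
    """
    for u, v in combinations(coloring, 2):
        if coloring[u] == coloring[v] and grid_distance(u, v) <= coloring[u]:
            return False
    return True

# A tiny 2x2 patch: the two cells of color 1 sit on a diagonal
# (distance 2 > 1), and colors 2 and 3 each occur only once.
patch = {(0, 0): 1, (1, 1): 1, (0, 1): 2, (1, 0): 3}
print(is_packing_coloring(patch))  # True
```

Note that higher colors are increasingly "spread out", which is what makes covering the infinite grid with finitely many colors so delicate.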

Migrating Solver State
PRESENTER: Benjamin Kiesl

ABSTRACT. We present approaches to store and restore the state of a SAT solver, allowing us to migrate the state between different compute resources, or even between different solvers. This can be used in many ways, e.g., to improve the fault tolerance of solvers, to schedule SAT problems on a restricted number of cores, or to use dedicated preprocessing tools for inprocessing. We identify a minimum viable subset of the solver state to migrate such that the loss of performance is small. We then present and implement two different approaches to state migration: one approach stores the state at the end of a solver run whereas the other approach stores the state continuously as part of the proof trace. We show that our approaches enable the generation of correct models and valid unsatisfiability proofs. Experimental results confirm that the overhead is reasonable and that in several cases solver performance actually improves.

12:30-14:00Lunch Break

Lunch will be held in Taub lobby (CP, LICS, ICLP) and in The Grand Water Research Institute (KR, FSCD, SAT).

14:00-15:30 Session 59A: Awards (CP)

14:00-15:00: Best paper awards

15:00-15:30: Dissertation award

Location: Taub 7
Peel-and-Bound: Generating Stronger Relaxed Bounds with Multivalued Decision Diagrams
PRESENTER: Isaac Rudich

ABSTRACT. Decision diagrams are an increasingly important tool in cutting-edge solvers for discrete optimization. However, the field of decision diagrams is relatively new and is still incorporating the library of techniques that conventional solvers have had decades to build. We draw inspiration from the warm-start technique used in conventional solvers to address one of the major challenges faced by decision-diagram-based methods. Decision diagrams become more useful the wider they are allowed to be, but they also become more costly to generate, especially with large numbers of variables. We present a method, which we call peel-and-bound, of peeling off a sub-graph of a previously constructed diagram and using it as the initial diagram for subsequent iterations. We test the method on the sequence ordering problem, and our results indicate that our peel-and-bound scheme generates stronger bounds than a branch-and-bound scheme using the same propagators, at significantly lower computational cost.

Exploiting Functional Constraints in Generating Dominance Breaking Nogoods for Constraint Optimization
PRESENTER: Allen Z. Zhong

ABSTRACT. Dominance breaking is an effective technique to reduce the time for solving constraint optimization problems. Lee and Zhong propose an automated dominance breaking framework for a class of constraint optimization problems based on specific forms of objectives and constraints. In this paper, we enhance the framework for problems with nested function calls that can be flattened to functional constraints. In particular, we focus on aggregation functions and exploit properties such as monotonicity, commutativity and associativity to give an efficient procedure for generating effective dominance breaking nogoods. Experimentation shows orders-of-magnitude runtime speedups using the generated dominance breaking nogoods and demonstrates the ability to discover new dominance relations for problems with ineffective or no known dominance breaking constraints in the literature.

Doctoral Research Award
14:00-15:30 Session 59B: Automata and Computability (FSCD)
Restricting tree grammars with term rewriting
PRESENTER: Felix Laarmann

ABSTRACT. We investigate the problem of enumerating all terms generated by a tree grammar that are also in normal form with respect to a set of directed equations (a rewriting relation). To this end we show that deciding emptiness and finiteness of the resulting set is EXPTIME-complete. The emptiness result is inspired by a prior result of Comon and Jacquemard on ground reducibility, and the finiteness result is based on a modification of their pumping arguments. We highlight practical applications and limitations, and we provide and evaluate a prototype implementation. The limitations are somewhat surprising: while deciding emptiness and finiteness is EXPTIME-complete for both linear and nonlinear rewrite relations, the linear case is practically feasible whereas the nonlinear case is infeasible, even for a trivially small example. The algorithms provided for the linear case also improve on prior practical results by Kallat et al.

On Lookaheads in Regular Expressions with Backreferences
PRESENTER: Nariyoshi Chida

ABSTRACT. Many modern regular expression engines employ various extensions to give more expressive support for real-world usages. Among the major extensions employed by many of the modern regular expression engines are backreferences and lookaheads. A question of interest about these extended regular expressions is their expressive power. Previous works have shown that (i) the extension by lookaheads does not enhance the expressive power, i.e., the expressive power of regular expressions with lookaheads is still regular, and that (ii) the extension by backreferences enhances the expressive power, i.e., the expressive power of regular expressions with backreferences (abbreviated as rewb) is no longer regular. This raises the following natural question: Does the extension of regular expressions with backreferences by lookaheads enhance the expressive power of regular expressions with backreferences? This paper answers the question positively by proving that adding either positive lookaheads or negative lookaheads increases the expressive power of rewb (the former abbreviated as rewblp and the latter as rewbln). In particular, we show that, unlike rewb, rewbln is closed under intersection. A consequence of our result is that neither the class of finite state automata nor that of memory automata (MFA) of Schmid (or any other known class of automata, to our knowledge) corresponds to rewblp or rewbln. To fill the void, as a first step toward building such automata, we propose a new class of automata called memory automata with positive lookaheads (PLMFA) that corresponds to rewblp. The key idea of PLMFA is to extend MFA with a new kind of memories, called positive-lookahead memory, that is used to simulate the backtracking behavior of positive lookaheads.
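The features discussed above can be tried out in any backtracking engine that supports them, such as Python's re module. The patterns below are illustrative examples of backreferences and lookaheads, not constructions from the paper (which concerns their formal expressive power):

```python
import re

# A backreference: (a+) captures a run of a's and \1 requires the same
# run to repeat, so this matches the non-regular language { a^n b a^n }.
rewb = re.compile(r"^(a+)b\1$")
print(bool(rewb.match("aabaa")))   # True
print(bool(rewb.match("aabaaa")))  # False

# Adding a negative lookahead (?!...) constrains matches further without
# consuming input: this keeps the a^n b a^n shape but rejects n >= 2.
rewbln = re.compile(r"^(?!aa)(a+)b\1$")
print(bool(rewbln.match("aba")))    # True
print(bool(rewbln.match("aabaa")))  # False
```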

Certified Decision Procedures for Two-Counter Machines

ABSTRACT. Two-counter machines, pioneered by Minsky in the 1960s, constitute a particularly simple, universal model of computation. Universality of reversible two-counter machines (having a right-unique step relation) has been shown by Morita in the 1990s. Therefore, the halting problem for reversible two-counter machines is undecidable. Surprisingly, this statement is specific to certain instruction sets of the underlying machine model.

In the present work we consider two-counter machines (CM2) with the instructions inc(c) (increment counter c, go to the next instruction) and dec(c,q) (if counter c is zero, go to the next instruction; otherwise decrement counter c and go to instruction q). While the halting problem for CM2 is undecidable, we give a decision procedure for the halting problem for reversible CM2, in contrast to Morita's result.

We supplement our result with decision procedures for uniform boundedness (is there a uniform bound on the number of reachable configurations?) and uniform mortality (is there a uniform bound on the number of steps in any run?) for CM2.

Termination and correctness of each presented decision procedure are certified using the Coq proof assistant. In fact, both the implementation and the certification are carried out simultaneously using the tactic language of the Coq proof assistant. Building upon existing infrastructure, the mechanized decision procedures are contributed to the Coq library of undecidability proofs.
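The CM2 instruction semantics described in the abstract above can be sketched as a small interpreter. This Python encoding is our own illustration (the paper's development is in Coq), and the example program and its tuple encoding are hypothetical:

```python
def run_cm2(program, counters=(0, 0), fuel=10_000):
    """Interpreter for the CM2 instruction set:
    ("inc", c): increment counter c, go to the next instruction;
    ("dec", c, q): if counter c is zero go to the next instruction,
    otherwise decrement c and jump to instruction q.
    The machine halts when the program counter leaves the program.
    Returns (pc, counters) on halting, or None if fuel runs out.
    """
    pc, cs = 0, list(counters)
    for _ in range(fuel):
        if not (0 <= pc < len(program)):
            return pc, tuple(cs)
        op = program[pc]
        if op[0] == "inc":
            cs[op[1]] += 1
            pc += 1
        else:
            _, c, q = op
            if cs[c] == 0:
                pc += 1
            else:
                cs[c] -= 1
                pc = q
    return None

# Example: transfer counter 0 into counter 1, then halt.
# An unconditional jump is encoded as inc(c) followed by dec(c, target).
transfer = [
    ("dec", 0, 3),  # 0: c0 == 0 ? fall through to halt-jump : dec, go to body
    ("inc", 0),     # 1: \ unconditional jump to 6 (past the end, i.e. halt),
    ("dec", 0, 6),  # 2: / leaving c0 unchanged
    ("inc", 1),     # 3: loop body: c1 += 1
    ("inc", 0),     # 4: \ unconditional jump
    ("dec", 0, 0),  # 5: / back to the test at 0
]
print(run_cm2(transfer, (3, 0)))  # (6, (0, 3))
```

Note that this machine is not reversible: several configurations step to the same successor, which is exactly the structure the paper's decidability result rules out.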

14:00-15:30 Session 59C: Justification & Hybrid Knowledge Bases (ICLP)
Location: Taub 9
Tree-Like Justification Systems are Consistent
PRESENTER: Bart Bogaerts

ABSTRACT. Justification theory is an abstract unifying formalism that captures the semantics of various non-monotonic logics. One intriguing problem that has received significant attention is the consistency problem: under which conditions are justifications for a fact and justifications for its negation suitably related? Two variants of justification theory exist: one in which justifications are trees and one in which they are graphs. In this work we resolve the consistency problem once and for all for the tree-like setting by showing that all reasonable tree-like justification systems are consistent.

Nested Justification Systems
PRESENTER: Bart Bogaerts

ABSTRACT. Justification theory is a general framework for the definition of semantics of rule-based languages that has a high explanatory potential. Nested justification systems, first introduced by Denecker et al. (2015), allow for the composition of justification systems. This notion of nesting thus enables the modular definition of semantics of rule-based languages, and increases the representational capacities of justification theory. As we show in this paper, the original semantics for nested justification systems lead to the loss of information relevant for explanations. In view of this problem, we provide an alternative characterization of semantics of nested justification systems and show that this characterization is equivalent to the original semantics. Furthermore, we show how nested justification systems allow representing fixpoint definitions (Hou and Denecker 2009).

An Iterative Fixpoint Semantics for MKNF Hybrid Knowledge Bases with Function Symbols
PRESENTER: Fabrizio Riguzzi

ABSTRACT. Hybrid Knowledge Bases based on Lifschitz's logic of Minimal Knowledge with Negation as Failure are a successful approach to combining the expressivity of Description Logics and Logic Programming in a single language. Their syntax, defined by Motik and Rosati, disallows function symbols. In order to define a well-founded semantics for MKNF HKBs, Knorr et al. define a partition of the modal atoms occurring in a knowledge base, called the alternating fixpoint partition. In this paper, we propose an iterated fixpoint semantics for HKBs with function symbols. We prove that our semantics extends Knorr et al.'s in that, for function-free HKBs, it coincides with the alternating fixpoint partition. The proposed semantics lends itself well to a probabilistic extension with a distribution semantics approach, which is the subject of future work.

A Fixpoint Characterization of Three-Valued Disjunctive Hybrid MKNF Knowledge Bases
PRESENTER: Spencer Killen

ABSTRACT. The logic of hybrid MKNF (minimal knowledge and negation as failure) is a powerful knowledge representation language that elegantly pairs ASP (answer set programming) with ontologies. One may handle a disjunctive rule by inducing a collection of normal rules, each with the same body and a single atom in its head. In this work, we refer to a set of such normal rules as a head-cut. The question arises as to whether the semantics of disjunctive hybrid MKNF knowledge bases can be characterized using fixpoint constructions with head-cuts. Earlier, we showed that head-cuts can be paired with fixpoint operators to capture the two-valued MKNF models of disjunctive hybrid MKNF knowledge bases. Three-valued semantics extend two-valued semantics with the ability to express partial information. In this work, we present a fixpoint construction that leverages head-cuts using an operator that iteratively captures three-valued models of hybrid MKNF knowledge bases with disjunctive rules. This characterization also captures partial models of disjunctive logic programs since a program can be expressed as a disjunctive hybrid MKNF knowledge base with an empty ontology. We elaborate on a relationship between this characterization and approximators in AFT (approximation fixpoint theory) for normal hybrid MKNF knowledge bases.
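The head-cut construction described above can be sketched in a few lines. This is a simplified illustration, not the paper's formal definitions: rules are represented as (head, body) tuples, and only head-cuts that pick exactly one induced normal rule per disjunctive rule are enumerated (the paper's head-cuts are sets of induced rules in general).

```python
import itertools

def induced_rules(rule):
    """Normal rules induced by a disjunctive rule: each keeps the
    same body and selects a single head atom."""
    head, body = rule
    return [(atom, body) for atom in head]

def head_cuts(program):
    """Enumerate head-cuts that choose one induced normal rule per
    disjunctive rule (a natural special case of the general notion)."""
    return [set(choice) for choice in
            itertools.product(*(induced_rules(r) for r in program))]

# Illustrative program:  a | b :- c.    d | e :- a.
program = [(("a", "b"), ("c",)), (("d", "e"), ("a",))]
print(induced_rules(program[0]))  # [('a', ('c',)), ('b', ('c',))]
print(len(head_cuts(program)))    # 4
```

Each of the four head-cuts is then a normal program whose fixpoint models can be inspected in place of the original disjunctive rules.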

14:00-15:30 Session 59D: KR & Machine Learning (KR)
Location: Taub 2
Sum-Product Loop Programming: From Probabilistic Circuits to Loop Programming

ABSTRACT. Recently, Probabilistic Circuits such as Sum-Product Networks have received growing attention, as they can represent complex features but still provide tractable inference. Although quite successful, unfortunately, they lack the capability of handling control structures, such as for and while loops. In this work, we introduce Sum-Product Loop Language (SPLL), a novel programming language that is capable of tractable inference on complex probabilistic code that includes loops. SPLL has dual semantics: every program has generative semantics familiar to most programmers and probabilistic semantics that assign a probability to each possible result. This way, the programmer can describe how to generate samples almost like in any standard programming language. The language takes care of computing the probability values of all results for free at run time. We demonstrate that SPLL inherits the beneficial properties of PCs, namely tractability and differentiability while generalizing to other distributions and programs.
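The dual semantics can be illustrated on a toy probabilistic program (the number of heads in two fair coin flips). This is a hand-rolled Python sketch of the idea, not SPLL itself: the same program is read once generatively (sampling) and once probabilistically (exact enumeration).

```python
import itertools
import random
from fractions import Fraction

def generative(rng=random):
    # Generative semantics: run the program once and return one sample
    # (here: the number of heads in two fair coin flips).
    return sum(rng.random() < 0.5 for _ in range(2))

def probabilistic():
    # Probabilistic semantics: the same program read as a map from
    # possible results to their probabilities, by exhaustive enumeration.
    dist = {}
    for flips in itertools.product([0, 1], repeat=2):
        result = sum(flips)
        dist[result] = dist.get(result, Fraction(0)) + Fraction(1, 4)
    return dist

print(probabilistic())
# {0: Fraction(1, 4), 1: Fraction(1, 2), 2: Fraction(1, 4)}
```

SPLL's point is that the second reading comes for free and stays tractable; here it is only tractable because the toy program is enumerable.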

Learning generalized policies without supervision using GNNs

ABSTRACT. We consider the problem of learning generalized policies for classical planning domains without supervision and using graph neural networks (GNNs) from small instances represented in lifted STRIPS. The problem has been considered before but the proposed neural architectures are complex and the results are often mixed. In this work, we use a simple and general GNN architecture and aim at crisp experimental results and understanding: either the policy greedy in the learned value function achieves close to 100% generalization over larger instances than those in training, or the failure must be understood, and possibly fixed, logically. For this, we exploit the relation established between the expressive power of GNNs and the C2 fragment of first-order logic (namely, FOL with 2 variables and counting quantifiers). We find for example that domains with general policies that require more expressive features can be solved with GNNs once the states are extended with suitable "derived atoms", encoding role compositions and transitive closures that do not fit into C2. The work is most closely related to the GNN approach for learning optimal general policies in a supervised fashion by Ståhlberg, Bonet and Geffner 2021; yet the learned policies are no longer required to be optimal (which expands the scope as many planning domains do not have general optimal policies) and are learned without supervision. Interestingly, value-based reinforcement learning methods that aim to produce optimal policies, do not yield policies that generalize, as the goals of optimality and generality are indeed often in conflict.

Embed2Sym - Scalable Neuro-Symbolic Reasoning via Clustered Embeddings
PRESENTER: Yaniv Aspis

ABSTRACT. Neuro-symbolic reasoning approaches proposed in recent years combine a neural perception component with a symbolic reasoning component to solve a downstream task. By doing so, these approaches can provide neural networks with symbolic reasoning capabilities, improve their interpretability and enable generalization beyond the training task. However, this often comes at the cost of long training times, with potential scalability issues. In this paper, we propose a scalable neuro-symbolic approach, called Embed2Sym. We complement a two-stage (perception and reasoning) neural network architecture designed to solve a downstream task end-to-end with a symbolic optimisation method for extracting learned latent concepts. Specifically, the trained perception network generates clusters in embedding space that are identified and labelled using symbolic knowledge and a symbolic solver. With the latent concepts identified, a neuro-symbolic model is constructed by combining the perception network with the symbolic knowledge of the downstream task, resulting in a model that is interpretable and transferable. Our evaluation shows that Embed2Sym outperforms state-of-the-art neuro-symbolic systems on benchmark tasks in terms of training time by several orders of magnitude while providing similar, if not better, accuracy.

14:00-15:30 Session 59E: Recently Published Research (KR)
Location: Taub 3
Solving Lam's Problem via SAT and Isomorph-Free Exhaustive Generation
PRESENTER: Curtis Bright

ABSTRACT. Lam's problem is to prove the nonexistence of a type of geometric structure that has intrigued mathematicians for centuries. In this work we reduce Lam's problem to a Boolean satisfiability (SAT) problem and generate certificates which for the first time can be used to verify the resolution of Lam's problem. The search space underlying the problem has many nontrivial symmetries, necessitating an approach that detects and removes symmetries as early as possible. We couple a method of isomorph-free exhaustive generation with a SAT solver and show that this significantly improves the performance of the solver when symmetries are present.

Rushing and Strolling among Answer Sets -- Navigation Made Easy (Extended Abstract)
PRESENTER: Dominik Rusovac

ABSTRACT. Answer set programming (ASP) is a popular declarative programming paradigm with a wide range of applications in artificial intelligence. Oftentimes, when modeling an AI problem with ASP, and in particular when we are interested in more than a simple search for optimal solutions, an actual solution, the differences between solutions, or the number of solutions of the ASP program matter. For example, a user may aim to identify a specific answer set according to her needs, or may require the total number of diverging solutions to comprehend probabilistic applications such as reasoning in medical domains. At present, only problem-specific and handcrafted encoding techniques are available for navigating the solution space of ASP programs, which is oftentimes not enough. In this paper, we propose a formal and general framework for interactive navigation toward desired subsets of answer sets, analogous to faceted browsing. Our approach enables the user to explore the solution space by consciously zooming in or out of sub-spaces of solutions at a certain configurable pace. We show that weighted faceted navigation is computationally hard. Finally, we provide an implementation of our approach that demonstrates the feasibility of our framework for incomprehensible solution spaces.
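A minimal sketch of the faceted-navigation idea, assuming answer sets are given as plain Python sets of atoms; the weighting and pacing machinery of the paper is omitted.

```python
def facets(answer_sets):
    """Atoms true in some but not all answer sets; these are the
    'facets' a user can activate to navigate the solution space
    (a simplified reading of the framework in the abstract)."""
    atoms = set().union(*answer_sets)
    return {a for a in atoms if any(a not in s for s in answer_sets)}

def zoom(answer_sets, facet):
    # Zooming in on a facet keeps exactly the answer sets containing it;
    # the fraction of solutions pruned suggests a notion of facet weight.
    return [s for s in answer_sets if facet in s]

sols = [{"a", "b"}, {"a", "c"}, {"b", "c"}]
print(sorted(facets(sols)))   # ['a', 'b', 'c']
print(len(zoom(sols, "a")))   # 2
```

Iterating zoom steps shrinks the solution space until a single desired answer set remains, which is the interactive navigation loop the abstract describes.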

A General Framework for Stable Roommates Problems using Answer Set Programming

ABSTRACT. This is an extended abstract of our paper entitled "A General Framework for Stable Roommates Problems using Answer Set Programming", which is published in Theory and Practice of Logic Programming (https://doi.org/10.1017/S1471068420000277).

The Stable Roommates problem (SR) is characterized by the preferences of agents over other agents as roommates: each agent ranks all others in strict order of preference. A solution to SR is then a partition of the agents into pairs so that each pair shares a room, and no pair of agents would block this matching (i.e., no two agents each prefer the other to their roommate in the matching). There are interesting variations of SR that are motivated by applications (e.g., the preference lists may be incomplete (SRI) and involve ties (SRTI)), and that try to find a more fair solution (e.g., Egalitarian SR). Unlike the Stable Marriage problem, not every SR instance is guaranteed to have a solution. For that reason, there are also variations of SR that try to find a good-enough solution (e.g., Almost SR). Most of these variations are NP-hard. We introduce a formal framework, called SRTI-ASP, utilizing the logic programming paradigm Answer Set Programming, that is provably correct and general enough to solve many such variations of SR. Our empirical analysis shows that SRTI-ASP is also promising for applications.
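The blocking-pair condition that defines stability can be checked directly. A small sketch, with an illustrative 4-agent instance (the instance and the dict-based representation are our own, not from the paper):

```python
def blocking_pairs(prefs, matching):
    """Return the pairs (a, b) that block `matching`: each prefers
    the other to their assigned roommate. `prefs[a]` ranks all other
    agents in strict order; `matching[a]` is a's roommate."""
    def prefers(agent, x, y):          # agent ranks x above y
        return prefs[agent].index(x) < prefs[agent].index(y)
    agents = sorted(prefs)
    return [(a, b) for a in agents for b in agents
            if a < b and matching[a] != b
            and prefers(a, b, matching[a])
            and prefers(b, a, matching[b])]

# Everyone is matched with their top choice, so the matching is stable.
prefs = {1: [3, 4, 2], 2: [4, 3, 1], 3: [1, 2, 4], 4: [2, 1, 3]}
matching = {1: 3, 3: 1, 2: 4, 4: 2}
print(blocking_pairs(prefs, matching))                 # []
print(blocking_pairs(prefs, {1: 2, 2: 1, 3: 4, 4: 3}))
# [(1, 3), (1, 4), (2, 3), (2, 4)]
```

A matching is stable exactly when this list is empty; the SRTI-ASP framework instead encodes this condition declaratively as ASP constraints.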

Reasoning about Cardinal Directions between 3-Dimensional Extended Objects using Answer Set Programming

ABSTRACT. We propose a novel formal framework (called 3D-NCDC-ASP) to represent and reason about cardinal directions between extended objects in 3-dimensional (3D) space, using Answer Set Programming (ASP). 3D-NCDC-ASP extends Cardinal Directional Calculus (CDC) with a new type of default constraints, and NCDC-ASP to 3D. 3D-NCDC-ASP provides a flexible platform offering different types of reasoning: Nonmonotonic reasoning with defaults, checking consistency of a set of constraints on 3D cardinal directions between objects, explaining inconsistencies, and inferring missing CDC relations. We prove the soundness of 3D-NCDC-ASP, and illustrate its usefulness with applications.

Hybrid Conditional Planning for Robotic Applications (Extended Abstract)

ABSTRACT. Hybrid classical planning, where classical task planning is integrated with low-level feasibility checks (e.g., motion planning that utilizes collision checks), has received attention in the AI and Robotics communities [ErdemHPPU11, HertleDKN12, Plaku12, KaelblingL13, SrivastavaFRCRA14, LagriffoulDBSK14, Hadfield-MenellGCA15, ErdemPS16], due to the cognitive skills required by autonomous robots in the real world. Since these studies rely on classical planning methods, they assume complete knowledge about the initial states, and full observability of the environment, while computing a hybrid classical plan (i.e., a sequence of actuation actions) offline. During the execution of these plans, discrepancies might occur between the expected states and the observed states, due to contingencies that were not or could not be considered during offline planning. For instance, a robot may assume that initially all utensils in a cabinet are clean, and compute a hybrid classical plan to set up a table accordingly. While executing this plan, the robot may detect (e.g., by its sensors) that the plate that it picks from the cabinet is not clean. To cope with such contingencies, robots are usually equipped with a plan execution monitoring algorithm. According to such an algorithm, when a discrepancy is detected, the robot tries to recover from it, generally by replanning from that point on.

As an alternative approach to computing offline hybrid classical plans and then dealing with discrepancies/surprises online during plan execution by monitoring and replanning, we propose a parallel offline hybrid method, called \hcplan. This method suggests extending hybrid planning beyond classical planning, by inheriting advantages of conditional planning to deal with contingencies due to incomplete knowledge and partial observability. In conditional planning, in addition to actuation actions with deterministic outcomes, sensing actions with nondeterministic outcomes are considered as part of planning in order to gather the relevant missing knowledge. Every possible outcome of sensing actions leads to a possibly different conditional plan. Therefore, a conditional plan looks like a tree of actions, where the branching occurs at vertices that characterize sensing actions, and the other vertices denote actuation actions. Each branch of such a tree, from the root to a leaf, essentially represents a possible execution of actuation actions and sensing actions to reach a goal state from the initial state. \hcplan utilizes this advantageous aspect of conditional planning while computing offline plans: planning for nondeterministic sensing actions to gather missing knowledge when needed, while planning for deterministic actuation actions. Moreover, \hcplan integrates feasibility checks not only for executability of actuation actions, but also for executability of sensing actions.

\hcplan is a parallel algorithm: it computes a hybrid conditional plan by intelligently orchestrating the computation of its branches in parallel. According to \hcplan, computation of every branch starting from a vertex that characterizes a sensing action is viewed as a computation task with a priority (e.g., defined relative to the depth of that vertex), and the batches of computation tasks are solved in parallel with respect to their priorities so as to consume the computational resources more effectively. Furthermore, \hcplan avoids re-computation of the same task that may appear at different parts of the tree.

Each computation task handled by \hcplan takes as input an incomplete initial state, and returns a hybrid sequential plan (i.e., a sequence of deterministic actuation actions and nondeterministic sensing actions) that describes a branch of the tree. A hybrid sequential plan is not a hybrid classical plan, since it may involve nondeterministic sensing actions as well. Therefore, solving a computation task (i.e., the computation of each branch of a hybrid conditional plan) also requires cognitive abilities beyond hybrid classical planning. For that, \hcplan adapts some advantages of causality-based non-monotonic logics [Turner08, GelfondL98] for a novel solution to deal with these challenges. In particular, \hcplan utilizes defaults to express assumptions (e.g., the location of an object remains the same unless changed by an actuation action), exogeneity of actions to express that sensing actions may occur at any time when possible, and nondeterministic causal laws to choose an outcome of a sensing action included in the computation of a branch. The defaults are useful for formalizing assumptions in that, when an exception occurs contrary to the assumptions, it does not lead to an inconsistency. This is an important aspect of defaults, providing a solution to the famous frame problem. Exogeneity of actions is useful in that it is not required to specify the order of nondeterministic sensing actions in advance while solving a task. \hcplan uses the nonmonotonic reasoning system CCalc [McCainT97] to compute each branch of a hybrid conditional plan.

We show a real-world application of \hcplan where a mobile bi-manual service robot sets up a table for lunch. While computing a hybrid conditional plan, the robot has to consider various contingencies that are not known in advance. Furthermore, while planning for its actions, the robot has to consider different types of feasibility checks (e.g., reachability, graspability, collisions), as well as commonsense knowledge. Once a hybrid conditional plan is computed by \hcplan, we also illustrate possible executions of this plan.

To evaluate the strengths and weaknesses of \hcplan by means of experimental evaluations, we construct a benchmark suite over the kitchen table setup domain. We evaluate \hcplan from the perspectives of computation time and plan quality, and compare it with closely related planners, as well as with an execution monitoring algorithm that utilizes hybrid planning and guided re-planning.

We refer the reader to our journal paper [NoumanPE2021] for further information.

LACE: A Logical Approach to Collective Entity Resolution
PRESENTER: Meghyn Bienvenu

ABSTRACT. In this paper, we revisit the problem of entity resolution and propose a novel, logical framework, LACE, which mixes declarative and procedural elements to achieve a number of desirable properties. Our approach is fundamentally declarative in nature: it utilizes hard and soft rules to specify conditions under which pairs of entity references must or may be merged, together with denial constraints that enforce consistency of the resulting instance. Importantly, however, rule bodies are evaluated on the instance resulting from applying the already ‘derived’ merges. It is the dynamic nature of our semantics that enables us to capture collective entity resolution scenarios, where merges can trigger further merges, while at the same time ensuring that every merge can be justified. As the denial constraints restrict which merges can be performed together, we obtain a space of (maximal) solutions, from which we can naturally define notions of certain and possible merges and query answers. We explore the computational properties of our framework and determine the precise computational complexity of the relevant decision problems. Furthermore, as a first step towards implementing our approach, we demonstrate how we can encode the various reasoning tasks using answer set programming.
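The dynamic semantics described above, where rule bodies are evaluated on the instance with the already-derived merges applied, can be mimicked with a union-find fixpoint. This is a simplified sketch, not the LACE language itself; the rule interface and names are illustrative.

```python
def collective_merge(n, rule):
    """Fixpoint application of a hard merge rule over entity
    references 0..n-1. `rule(find)` returns the pairs its body
    currently derives, evaluated on the instance *with* the already
    derived merges applied, so a merge can trigger further merges."""
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            x = parent[x]
        return x

    changed = True
    while changed:
        changed = False
        for a, b in rule(find):
            ra, rb = find(a), find(b)
            if ra != rb:
                parent[max(ra, rb)] = min(ra, rb)
                changed = True
    return [find(x) for x in range(n)]

# The merge of 1 and 2 only becomes derivable *after* 0 and 1 merge:
rule = lambda find: [(0, 1)] + ([(1, 2)] if find(0) == find(1) else [])
print(collective_merge(3, rule))  # [0, 0, 0]
```

The second merge is triggered by the first, which is the collective behavior the abstract highlights; LACE additionally handles soft rules and denial constraints, which this sketch ignores.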

Rule Learning over Knowledge Graphs with Genetic Logic Programming

ABSTRACT. Using logical rules plays an important role in Knowledge Graph (KG) construction and completion. Such rules not only encode expert background knowledge and the relational patterns among the data, but also infer new knowledge and insights from them. Due to the exponential search space, current approaches resort to exhaustive search with a number of heuristics and syntactic restrictions on the rule language, which limits the quality of the resulting rules. In this paper, we extend the rule hypothesis space to the general Datalog rule space by proposing a novel genetic logic programming algorithm named Evoda. It uses an iterative process to learn high-quality rules over large-scale KGs in a matter of seconds. We performed experiments over multiple real-world KGs with various evaluation metrics to show its capability of mining higher-quality rules and making more precise predictions in the extended rule space. Additionally, we applied it to KG completion tasks to illustrate its competitiveness with several state-of-the-art embedding models.

14:00-15:00 Session 59F: Invited Talk by Amal Ahmed: Semantic Intermediate Representations for the Working Metatheoretician (LICS)

Semantic Intermediate Representations for the Working Metatheoretician:

Abstract: Designers of typed programming languages commonly prove meta-theoretic properties such as type soundness for at least a core of their language. But any practical language implementation must provide some way of interoperating with code written in other languages -- usually via a foreign-function interface (FFI) -- which opens the door to new, potentially unsafe, behaviors that aren't accounted for in the original type soundness proof. Despite the prevalence of interoperability in practical software, principled foundations for the end-to-end design, implementation, and verification of interoperability mechanisms have been largely neglected. In this talk, I'll advocate a proof technique for _working metatheoreticians_ seeking to ensure soundness or security properties of real languages, which implement interoperability using glue code that mediates interaction between languages after compilation to a common lower-level intermediate representation (IR). This proof technique involves building a _semantic intermediate representation_: a semantic model of source-language types as relations on terms of the lower-level IR. Semantic IRs can be used to guide the design and implementation of sound FFIs and to verify that the IR glue code used to implement conversions ensures type soundness. More interestingly, semantic IRs provide a basis for numerous avenues of future work on the principled design of language interoperability: how to support the inclusion of libraries whose behavior is foreign to the original language, how to prove soundness and security properties that are robust to behaviors (attackers) outside of the semantic IR, and how to develop a semantic-IR compiler backend that makes it easier to implement and verify sound interoperability for a wide array of source languages.

Location: Taub 1
14:00-15:30 Session 59G: Theory (SAT)
Tight Bounds for Tseitin Formulas
PRESENTER: Petr Smirnov

ABSTRACT. We show that for any connected graph G the size of any regular resolution or OBDD(and, reordering) refutation of a Tseitin formula based on G is at least 2^Omega(tw(G)), where tw(G) is the treewidth of G. These lower bounds improve upon the previously known bounds and, moreover, they are tight.

For both of the proof systems, there are constructive upper bounds that almost match the obtained lower bounds, hence the class of Tseitin formulas is almost automatable for regular resolution and for OBDD(and, reordering).
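For intuition, a Tseitin formula assigns a Boolean variable to each edge of a graph and a parity constraint to each vertex. A small sketch that builds the clauses and checks the classic fact that an odd total charge makes the formula unsatisfiable; the resolution and OBDD machinery of the paper is of course not modeled, and the encoding (without auxiliary variables) is only practical for tiny graphs.

```python
import itertools

def tseitin_cnf(edges, charge):
    """CNF clauses of a Tseitin formula: one parity constraint per
    vertex v, requiring the XOR of v's incident edge variables to
    equal charge[v]. Edge variables are numbered 1..m."""
    var = {e: i + 1 for i, e in enumerate(edges)}
    clauses = []
    for v in sorted(charge):
        inc = [var[e] for e in edges if v in e]
        # Forbid every local assignment with the wrong parity.
        for signs in itertools.product([1, -1], repeat=len(inc)):
            if sum(s == 1 for s in signs) % 2 != charge[v]:
                clauses.append([-s * x for s, x in zip(signs, inc)])
    return clauses

def brute_sat(clauses, num_vars):
    # Tiny brute-force SAT check; assignments are bitmasks.
    return any(all(any((lit > 0) == bool(a >> (abs(lit) - 1) & 1)
                       for lit in c) for c in clauses)
               for a in range(2 ** num_vars))

# Triangle: odd total charge -> unsatisfiable, even -> satisfiable.
edges = [(1, 2), (2, 3), (1, 3)]
print(brute_sat(tseitin_cnf(edges, {1: 1, 2: 0, 3: 0}), 3))  # False
print(brute_sat(tseitin_cnf(edges, {1: 0, 2: 0, 3: 0}), 3))  # True
```

Summing the parity constraints counts every edge twice, so satisfiability requires the total charge to be even; refuting the odd-charge case is exactly what the lower bounds above measure.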

A generalization of the Satisfiability Coding Lemma and its applications

ABSTRACT. The seminal Satisfiability Coding Lemma of Paturi, Pudlák, and Zane is a coding scheme for satisfying assignments of $k$-CNF formulas. We generalize it to give a coding scheme for implicants and use this generalized scheme to establish new structural and algorithmic properties of prime implicants of $k$-CNF formulas.

Our first application is a near-optimal bound of $n \cdot 3^{n(1-\Omega(1/k))}$ on the number of prime implicants of any $n$-variable $k$-CNF formula. This resolves an open problem from the Ph.D. thesis of Talebanfard, who proved such a bound for the special case of constant-read $k$-CNF formulas. Our proof is algorithmic in nature, yielding an algorithm for computing the set of all prime implicants---the Blake Canonical Form---of a given $k$-CNF formula. The problem of computing the Blake Canonical Form of a given function is a classic one, dating back to Quine, and our work gives the first non-trivial algorithm for $k$-CNF formulas.
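For intuition, prime implicants can be enumerated by brute force on tiny formulas. A naive sketch (clauses as lists of signed integers, terms as partial assignments), far from the coding-lemma-based algorithm of the paper:

```python
import itertools

def vars_of(cnf):
    return sorted({abs(l) for c in cnf for l in c})

def is_implicant(cnf, term):
    # `term` maps variables to truth values. It is an implicant if
    # every total extension of it satisfies the CNF.
    free = [v for v in vars_of(cnf) if v not in term]
    for bits in itertools.product([False, True], repeat=len(free)):
        assign = {**term, **dict(zip(free, bits))}
        if not all(any(assign[abs(l)] == (l > 0) for l in c) for c in cnf):
            return False
    return True

def prime_implicants(cnf):
    """Brute-force enumeration of the prime implicants (the Blake
    Canonical Form) of a small CNF, smallest terms first, keeping
    only implicants that extend no previously found prime."""
    vs = vars_of(cnf)
    primes = []
    for k in range(1, len(vs) + 1):
        for subset in itertools.combinations(vs, k):
            for bits in itertools.product([False, True], repeat=k):
                term = dict(zip(subset, bits))
                if is_implicant(cnf, term) and not any(
                        p.items() <= term.items() for p in primes):
                    primes.append(term)
    return primes

print(prime_implicants([[1, 2]]))  # [{1: True}, {2: True}]
```

This visits all 3^n partial assignments, which is exactly the regime where the paper's n * 3^(n(1-Omega(1/k))) bound and algorithm improve on naive enumeration.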

On the Parallel Parameterized Complexity of MaxSAT Variants
PRESENTER: Max Bannach

ABSTRACT. In the maximum satisfiability problem (MAX-SAT) we are given a propositional formula in conjunctive normal form and have to find an assignment that satisfies as many clauses as possible. We study the parallel parameterized complexity of various versions of MAX-SAT and provide the first constant-time algorithms parameterized either by the solution size or by the allowed excess relative to some guarantee ("above guarantee" versions). For the dual parameterized version, where the parameter is the number of clauses we are allowed to leave unsatisfied, we present the first parallel algorithm for MAX-2SAT (known as ALMOST-2SAT). The difficulty in solving ALMOST-2SAT in parallel comes from the fact that the iterative compression method, originally developed to prove that the problem is fixed-parameter tractable at all, is inherently sequential. We observe that a graph flow whose value is a parameter can be computed in parallel, and use this fact to develop a parallel algorithm for the vertex cover problem parameterized above the size of a given matching. Finally, we study the parallel complexity of MAX-SAT parameterized by the vertex cover number, the treedepth, the feedback vertex set number, and the treewidth of the input's incidence graph. While MAX-SAT is fixed-parameter tractable for all of these parameters, we show that they allow different degrees of possible parallelization. For all four we develop dedicated parallel algorithms that are constructive, meaning that they output an optimal assignment, in contrast to results that can be obtained by parallel meta-theorems, which often only solve the decision version.

15:00-15:30 Session 60: Linear Recurrences and Skolem Problem (LICS)

Linear recurrences and Skolem problem (rescheduled talks from Tuesday)

Edon Kelmendi. Computing the Density of the Positivity Set for Linear Recurrence Sequences (15 min)

Richard Lipton, Florian Luca, Joris Nieuwveld, Joel Ouaknine, David Purser and James Worrell. On the Skolem Problem and the Skolem Conjecture (15 min)

Location: Taub 1
15:30-16:00Coffee Break
16:00-17:30 Session 61A: Awards + ACP General Assembly (CP)

16:00-16:30: Early Career Award

16:30-17:30: ACP General Assembly

Location: Taub 7
Early Career Award
16:00-17:30 Session 61B: Lambda Calculus and Normalization/Reduction (FSCD)
Strategies for Asymptotic Normalization
PRESENTER: Giulio Guerrieri

ABSTRACT. We introduce a technique to study normalizing strategies when termination is asymptotic, that is, when the result appears as a limit, as opposed to being reached in a finite number of steps. Asymptotic termination occurs in several settings, such as effectful, and in particular probabilistic, computation, where the limits are distributions over the possible outputs, or infinitary lambda-calculi, where the limits are infinitary normal forms such as Böhm trees.

As a concrete application, we obtain a result which is of independent interest: a normalization theorem for Call-by-Value (and---in a uniform way---for Call-by-Name) probabilistic lambda-calculus.

Solvability for Generalized Applications
PRESENTER: Loïc Peyrot

ABSTRACT. Solvability is a key notion in the theory of call-by-name λ-calculus, used in particular to identify meaningful terms. However, adapting this notion to other call-by-name calculi, or extending it to different models of computation, such as call-by-value, is not straightforward. In this paper, we study solvability for call-by-name and call-by-value λ-calculi with generalized applications, both variants inspired by von Plato's natural deduction with generalized elimination rules. We develop an operational as well as a logical theory of solvability for each of them. The operational characterization relies on a notion of solvable reduction for generalized applications, and the logical characterization is given in terms of typability in an appropriate non-idempotent intersection type system. Finally, we show that solvability in generalized applications and solvability in the λ-calculus are equivalent notions.

Normalization without syntax
PRESENTER: Willem Heijltjes

ABSTRACT. We present normalization for intuitionistic combinatorial proofs (ICPs) and relate it to the simply-typed lambda-calculus. We prove confluence and strong normalization.

Combinatorial proofs, or "proofs without syntax", form a graphical semantics of proof in various logics that is canonical yet complexity-aware: they are a polynomial-sized representation of sequent proofs that factors out exactly the non-duplicating permutations. Our approach to normalization aligns with these characteristics: it is canonical (free of permutations) and generic (readily applied to other logics).

Our reduction mechanism is a canonical representation of reduction in sequent calculus with closed cuts (no abstraction is allowed below a cut), and relates to closed reduction in lambda-calculus and supercombinators. While we will use ICPs concretely, the notion of reduction is completely abstract, and can be specialized to give a reduction mechanism for any representation of typed normal forms.

16:00-17:30 Session 61C: Applications (ICLP)
Location: Taub 9
A Neuro-Symbolic ASP Pipeline for Visual Question Answering
PRESENTER: Nelson Higuera

ABSTRACT. We present a neuro-symbolic visual question answering (VQA) pipeline for CLEVR, which is a well-known dataset that consists of pictures showing scenes with objects and questions related to them. Our pipeline covers (i) training neural networks for object classification and bounding-box prediction of the CLEVR scenes, (ii) statistical analysis on the distribution of prediction values of the neural networks to determine a threshold for high-confidence predictions, and (iii) a translation of CLEVR questions and network predictions that pass confidence thresholds into logic programs so that we can compute the answers using an ASP solver. By exploiting choice rules, we consider deterministic and non-deterministic scene encodings. Our experiments show that the non-deterministic scene encoding achieves good results even if the neural networks are trained rather poorly in comparison with the deterministic approach. This is important for building robust VQA systems if network predictions are less than perfect. Furthermore, we show that restricting non-determinism to reasonable choices allows for more efficient implementations in comparison with related neuro-symbolic approaches without losing much accuracy.

Knowledge Authoring with Factual English

ABSTRACT. Knowledge representation and reasoning (KRR) systems represent knowledge as collections of facts and rules. Like databases, KRR systems contain information about domains of human activity such as enterprises, science, and business. In addition to the capabilities of databases, KRR systems can represent complex concepts and relations, and they can query and manipulate information in much more sophisticated ways. Unfortunately, KRR technology has been hindered by the fact that specifying the requisite knowledge requires skills that most domain experts do not have, while knowledge engineering is a rare skill. One solution could be to extract knowledge from English text, and a number of works have attempted to do so (OpenSesame, etc.). Unfortunately, at present, extraction of logical facts from unrestricted natural language is still too inaccurate to be used for reasoning, while restricting the grammar of the language (so-called controlled natural language, or CNL) is hard for users to learn and use. Nevertheless, some recent CNL-based approaches, such as the Knowledge Authoring Logic Machine (KALM), have been shown to have very high accuracy compared to others, and a natural question is to what extent the CNL restrictions can be lifted. In this paper, we address this issue by transplanting the KALM framework to a neural natural language parser, Multi-Stanza. Here we limit our attention to authoring facts and queries, and therefore our focus is what we call factual English statements. Authoring other types of knowledge, such as rules, will be considered in our follow-up work. As it turns out, neural-network-based parsers have problems of their own, and the mistakes they make range from part-of-speech tagging errors to lemmatization errors to dependency errors. We present a number of techniques for combating these problems and test the new system, KALM^FL (i.e., KALM for factual language), on a number of benchmarks, which show that KALM^FL achieves correctness in excess of 95%.

Applications of Answer Set Programming to Smart Devices and Large Scale Reasoning
PRESENTER: Kristian Reale

ABSTRACT. In the last few years, we have been witnessing computing devices getting smaller and smaller (e.g., smartphones, smart devices, Raspberry Pis) and the production and availability of data getting bigger and bigger. In this work we introduce DLV Large Scale (DLV-LS), a framework based on Answer Set Programming (ASP) for performing declarative reasoning tasks over data-intensive applications, possibly on smart devices. The framework encompasses DLV Mobile Edition (DLV-ME), an ASP-based solver for Android systems and Raspberry devices, and DLV Enterprise Edition (DLV-EE), an ASP-based platform, accessible through REST interfaces, for large-scale reasoning over Big Data, classical relational database systems, and NoSQL databases. DLV-LS enables smart devices both to locally perform reasoning over data generated by their own sensors and to properly interact with DLV-EE when more computational power is needed for harder tasks, possibly over bigger centralized data. We also present a real-world application of DLV-LS: the use case consists of a tourist navigator that calculates the best routes and optimizes a tourist's tour under custom-defined time constraints.

Modeling and Reasoning in Event Calculus using Goal-Directed Constraint Answer Set Programming (Extended Abstract)
PRESENTER: Joaquin Arias

ABSTRACT. Automated commonsense reasoning is essential for building human-like AI systems featuring, for example, explainable AI. Event Calculus (EC) is a family of formalisms that model commonsense reasoning with a sound, logical basis. Previous attempts to mechanize reasoning using EC faced difficulties in the treatment of continuous change in dense domains (e.g., time and other physical quantities), constraints among variables, default negation, and the uniform application of different inference methods, among others. We propose the use of s(CASP), a query-driven, top-down execution model for Predicate Answer Set Programming with Constraints, to model and reason using EC. We show how EC scenarios can be elegantly modeled in s(CASP) and how its expressiveness makes it possible to perform deductive and abductive reasoning tasks in domains featuring, for example, constraints involving dense time and fluents.

A Multi-shot ASP Encoding for the Aircraft Routing and Maintenance Planning Problem
PRESENTER: Pierre Tassel

ABSTRACT. Aircraft routing and maintenance planning are integral parts of the airline scheduling process. We model and solve these combinatorial optimization problems with Answer Set Programming (ASP), contrasting traditional single-shot ASP solving with multi-shot solving techniques geared to the rapid discovery of near-optimal solutions to sub-problems of gradually increasing granularity.

16:00-17:00 Session 61D: Invited Talk (KR)
Location: Taub 2
Probabilities, Reasoning, and Argumentation

ABSTRACT. The talk will review the different ways in which the study of human cognition has invoked probabilities, and examine the extent to which laypeople match up to probabilistic norms. This both provides evidence on the breadth of the challenge faced by computational systems seeking to tackle real-world reasoning problems and indicates where computational systems could most usefully provide support.

16:00-17:30 Session 61E: Poster Session (LICS)

Mario Roman. Monoidal Streams for Dataflow Programming

Paolo Pistone. Curry and Howard Meet Borel

Pascal Schweitzer. Choiceless Polynomial Time with Witnessed Symmetric Choice

Yoàv Montacute. The Pebble-Related Comonad in Finite Model Theory

Felipe Ferreira Santos. Separating LREC from LFP

Nikolas Mählmann. Model Checking on Interpretations of Classes of Bounded Local Cliquewidth

16:00-17:30 Session 61F (LICS)

LICS SC meeting (on invitation only)

Location: Taub 201
16:00-17:30 Session 61G: The Olympic Games Session (Block 1) (Olympic Games)

The FLoC Olympic Games 2022 follows the successful editions started in 2014 in conjunction with FLoC, in the spirit of the ancient Olympic Games. Every four years, as part of the Federated Logic Conference, the Games gather the challenging disciplines of computational logic in the form of solver competitions.

At this Olympic Games session, the organizers of the following competitions will present their competitions to the public and award special prizes to their successful competitors: Confluence Competition, LP/CP Programming Contest, XCSP3, MiniZinc, SAT Competition, MaxSAT Evaluation, Model Counting Competition, and Quantified Boolean Formula Evaluation.

16:00-17:30 Session 61H: Model Counting (SAT)
Proofs for Propositional Model Counting
PRESENTER: Markus Hecher

ABSTRACT. Although propositional model counting (#SAT) was long considered too hard to be practical, today's highly efficient solvers facilitate applications in probabilistic reasoning, reliability estimation, quantitative design space exploration, and more. The current trend of solvers growing more capable every year is likely to continue as a diverse range of algorithms is explored in the field. However, to establish model counters as tools as reliable as SAT solvers, correctness is as critical as speed. As is the nature of complex systems, bugs emerge as soon as the tools are widely used. To identify and avoid bugs, explain decisions, and provide trustworthy results, we need verifiable results. In this paper, we propose a novel proof system for certifying model counts. We show how proof traces can be generated efficiently for exact model counters based on dynamic programming, counting CDCL with component caching, and knowledge compilation to d-DNNF; these are the most widely used techniques in exact implementations. Furthermore, we provide proof-of-concept software for emitting proofs from model counters, along with a parallel trace checker. On this basis, we demonstrate the feasibility of certified model counting in an empirical experiment.

A New Exact Solver for (Weighted) Max#SAT
PRESENTER: Marie Miceli

ABSTRACT. We present and evaluate d4Max, an exact approach for solving the weighted Max#SAT problem. The Max#SAT problem extends the model counting problem (#SAT) by considering a tripartition of the variables {X, Y, Z}, and consists in maximizing over the variables X the number of assignments to Y that can be extended to a solution with some assignment to Z. The weighted Max#SAT problem further extends Max#SAT with weights associated with each interpretation. We test and compare our approach with other state-of-the-art solvers on the challenging probabilistic inference task of finding the marginal maximum a posteriori probability (MMAP) of a given subset of the variables in a Bayesian network, and on exist-random quantified SSAT benchmarks. The results clearly show the overall superiority of d4Max in terms of speed and number of instances solved. Moreover, we show experimentally that, in general, d4Max is able to quickly spot a solution that is close to optimal, thereby opening the door to an efficient anytime approach.
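The tripartition in the abstract above can be made concrete with a tiny brute-force sketch: maximize over assignments to X the number of assignments to Y that some assignment to Z extends to a model. All names here are illustrative; this is not the d4Max algorithm, which solves the problem exactly but far more efficiently.

```python
# Brute-force illustration of (unweighted) Max#SAT over a tripartition
# {X, Y, Z} of the variables of a CNF formula.
from itertools import product

def satisfies(cnf, assignment):
    """CNF is a list of clauses; a clause is a list of signed ints (DIMACS style)."""
    return all(any(assignment[abs(l)] == (l > 0) for l in clause) for clause in cnf)

def max_count_sat(cnf, xs, ys, zs):
    """max over X-assignments of |{y : some z makes (x, y, z) a model}|."""
    best = 0
    for xvals in product([False, True], repeat=len(xs)):
        count = 0
        for yvals in product([False, True], repeat=len(ys)):
            # does some assignment to Z extend (x, y) to a model?
            if any(satisfies(cnf, dict(zip(xs + ys + zs, xvals + yvals + zvals)))
                   for zvals in product([False, True], repeat=len(zs))):
                count += 1
        best = max(best, count)
    return best

# F = (x1 or y1) and (not x1 or z1), with X = {x1}, Y = {y1}, Z = {z1}
cnf = [[1, 2], [-1, 3]]
print(max_count_sat(cnf, xs=(1,), ys=(2,), zs=(3,)))  # -> 2 (choose x1 = True)
```

Setting x1 = True satisfies the first clause outright and z1 = True handles the second, so both assignments to y1 are counted; x1 = False admits only y1 = True.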

Weighted Model Counting with Twin-Width
PRESENTER: Stefan Szeider

ABSTRACT. Bonnet et al. (FOCS 2020) introduced the graph invariant twin-width and showed that many NP-hard problems are tractable for graphs of bounded twin-width, generalizing similar results for other width measures including treewidth and clique-width. In this paper, we investigate the use of twin-width for solving the propositional satisfiability problem (SAT) and model counting. We particularly focus on Bounded-ones Weighted Model Counting (BWMC), which takes as input a CNF formula F along with a bound k and asks for the weighted sum of all models with at most k positive literals. BWMC generalizes not only SAT, but also (weighted) model counting.

We develop the notion of "signed" twin-width of CNF formulas, and establish that BWMC is fixed-parameter tractable when parameterized by the certified signed twin-width of F plus k. We show that this result is tight: it is possible neither to drop the bound k nor to use vanilla twin-width instead if one wishes to retain fixed-parameter tractability, even for the easier problem SAT. Our theoretical results are complemented with an empirical evaluation and comparison of signed twin-width on various classes of CNF formulas.
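The BWMC problem defined in the abstract above can be illustrated with a brute-force sketch. The weights and the convention that a model's weight is the product of its literals' weights are our illustrative assumptions, not details from the paper.

```python
# Brute-force sketch of Bounded-ones Weighted Model Counting (BWMC):
# the weighted sum of all models of a CNF formula F with at most k
# positive literals, assuming per-literal weights multiplied per model.
from itertools import product
from math import prod

def satisfies(cnf, assignment):
    """CNF is a list of clauses; a clause is a list of signed ints (DIMACS style)."""
    return all(any(assignment[abs(l)] == (l > 0) for l in clause) for clause in cnf)

def bwmc(cnf, variables, weight, k):
    """Sum of weights of models that set at most k variables to True."""
    total = 0.0
    for values in product([False, True], repeat=len(variables)):
        assignment = dict(zip(variables, values))
        if sum(values) <= k and satisfies(cnf, assignment):
            total += prod(weight[v] if assignment[v] else weight[-v]
                          for v in variables)
    return total

# F = (x1 or x2); every literal gets weight 0.5
cnf = [[1, 2]]
w = {1: 0.5, -1: 0.5, 2: 0.5, -2: 0.5}
print(bwmc(cnf, [1, 2], w, k=1))  # -> 0.5 (two models with at most 1 positive literal)
```

With k = 1 only the models {x1} and {x2} qualify, each of weight 0.25; with k = 2 the model {x1, x2} is also counted, giving 0.75. Setting k = |variables| recovers ordinary weighted model counting, matching the abstract's remark that BWMC generalizes (weighted) model counting.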

17:00-18:00 Session 62A: Conditionals (KR)
Location: Taub 2
Compound conditionals as random quantities and Boolean algebras
PRESENTER: Tommaso Flaminio

ABSTRACT. Conditionals play a key role in different areas of logic and probabilistic reasoning, and they have been studied and formalised from different angles. In this paper we focus on de Finetti's notion of conditional as a three-valued object with a betting-based semantics, and on the related approach to conditionals as random quantities, mainly developed by two of the authors. Compound conditionals have been studied in the literature, but not in full generality. In this paper we provide a natural procedure to explicitly attach conditional random quantities to arbitrary compound conditionals, which also allows us to compute their previsions. By studying the properties of these random quantities, we show that the set of compound conditionals can, in fact, be endowed with a Boolean algebraic structure. In doing so, we pave the way towards a bridge between the long-standing tradition of three-valued conditionals and a more recent proposal of viewing conditionals as elements of suitable Boolean algebras.

A General Framework for Modelling Conditional Reasoning - Preliminary Report
PRESENTER: Giovanni Casini

ABSTRACT. We introduce and investigate a formalisation of conditionals that allows the definition of a broad class of reasoning systems. This framework covers the most popular kinds of conditional reasoning in logic-based KR: the semantics we propose is appropriate for a structural analysis of those conditionals that do not satisfy the closure properties associated with classical logics.

17:00-18:00 Session 62B: Actions (KR)
Location: Taub 3
Act for Your Duties but Maintain Your Rights
PRESENTER: Shufang Zhu

ABSTRACT. Most of the synthesis literature has focused on how to synthesize a strategy to fulfill a task. Such a task is a duty for the agent. In this paper, we argue that intelligent agents should also be equipped with rights, that is, tasks that the agent itself can choose to fulfill (e.g., the right to recharge its battery). The agent should be able to maintain these rights while acting for its duties. We study this issue in the context of LTLf synthesis: we give duties and rights in terms of LTLf specifications, and synthesize a suitable strategy to achieve the duties that can be modified on-the-fly to also achieve the rights, should the agent choose to exercise them during execution. We show that handling rights does not make synthesis substantially more difficult, although it requires a more sophisticated solution concept than standard LTLf synthesis. We also extend our results to the case in which further duties and rights are given to the agent while it is already executing.

Epistemic Actions: Comparing Multi-agent Belief Bases with Action Models

ABSTRACT. We compare the syntactic approach to the representation of epistemic states in the multi-agent domain with the possible-world semantic approach. The syntactic approach exploits belief bases and models belief change by means of dynamic operators for belief base expansion. The semantic approach relies on multi-relational Kripke models and represents belief change through so-called action models of Dynamic Epistemic Logic (DEL). We first show how to translate a formula of the belief base approach into DEL: in particular, we provide a specific action model scheme corresponding to the process of expanding an agent's belief base with a formula. Conversely, we identify a fragment of DEL that can be translated into the multi-agent dynamic epistemic language interpreted on belief bases.

18:30-20:30 Walking tour (at Haifa) (FLoC)

pickup at 18:00 from the Technion (Tour at Haifa, no food will be provided)