
08:30-09:00 Coffee & Refreshments
09:00-10:30 Session 73A: DEI Special Event (CP)

9:00 Introduction (Chris Beck)

9:05 "Diversity, Equity and Inclusion in CP: Presenting survey results and recent findings" (presenter: Maria Garcia de la Banda)

9:45 Panel on DEI in CP. Panelists: Noa Agmon, Luis Quesada, Hélène Verhaeghe. Panel facilitators: Chris Beck, Maria Garcia de la Banda

Location: Taub 7
09:00-10:30 Session 73B: Reasoning & Solving (ICLP)
Location: Taub 9
Stream Reasoning with Deadlines

ABSTRACT. The specifications of various domains include properties which persist, at the latest, by some future time. In multi-agent voting protocols, e.g., a proposed motion may be seconded at the latest by some specified time. Contemporary applications additionally require efficient temporal pattern matching over high-velocity streams, for computing the values of such properties with minimal latency. To address these issues, we present a formal computational framework, based on the Event Calculus, that effectively handles streams that include events with temporally constrained effects. We present the syntax, semantics, reasoning algorithms and complexity of our proposed framework. Furthermore, we present an empirical analysis on large synthetic and real data streams.

Problem Decomposition and Multi-shot ASP Solving for Job-shop Scheduling

ABSTRACT. Scheduling methods are important for effective production and logistics management, where tasks need to be allocated and performed by limited resources. In particular, the Job-shop Scheduling Problem (JSP) is a well-known and challenging combinatorial optimization problem in which tasks sharing a machine are to be arranged in sequence such that encompassing jobs can be completed as early as possible. Given that already moderately sized JSP instances can turn out to be highly combinatorial, so that neither optimal schedules nor the runtime to termination of complete optimization methods is known, efficient approaches to approximate good-quality schedules are of interest. In this paper, we propose problem decomposition into time windows whose operations can be successively scheduled and optimized by means of multi-shot ASP solving. From a computational perspective, decomposition aims to split highly complex scheduling tasks into more manageable subproblems with a balanced number of operations, so that good-quality or even optimal partial solutions can be reliably found in a small fraction of the runtime. Regarding the feasibility and quality of solutions, problem decomposition must respect the precedence of operations within their jobs, and partial schedules optimized by time windows should yield better global solutions than obtainable in similar runtime on the full problem. We devise and investigate a variety of decomposition strategies in terms of the number and size of time windows as well as heuristics for choosing their operations. Moreover, we incorporate time window overlapping and compression techniques into the iterative scheduling process in order to counteract limitations of window-wise optimization restricted to partial schedules. 
Our experiments on JSP benchmark sets of several sizes show that successive optimization by multi-shot ASP solving leads to substantially better schedules within the runtime limit than global optimization on the full problem, where the gap increases with the number of operations to schedule.
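The window-wise decomposition described above can be illustrated with a small sketch (a hypothetical illustration, not the authors' ASP implementation; the operation encoding and the balancing rule are assumptions made for this example):

```python
# Hypothetical sketch: split job-shop operations into time windows with a
# balanced number of operations per window. Operations are (job, position)
# pairs; sorting by position index first keeps each job's operations in
# nondecreasing window order, respecting precedence within jobs.

def decompose(operations, num_windows):
    """Return a list of windows, each a list of (job, position) pairs."""
    ordered = sorted(operations, key=lambda op: (op[1], op[0]))
    size = -(-len(ordered) // num_windows)  # ceiling division for balance
    return [ordered[i:i + size] for i in range(0, len(ordered), size)]

ops = [(j, p) for j in range(3) for p in range(4)]  # 3 jobs, 4 operations each
windows = decompose(ops, 3)
```

Each window could then be handed to a solver in turn, with earlier windows' schedules fixed, in the spirit of the multi-shot approach the abstract describes.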

Efficient Knowledge Compilation Beyond Weighted Model Counting (Best Student Paper Award)
PRESENTER: Rafael Kiesel

ABSTRACT. Quantitative extensions of logic programming often require the solution of so called *second level* inference tasks, i.e., problems that involve a third operation, such as maximization or normalization, on top of addition and multiplication, and thus go beyond the well-known weighted or algebraic model counting setting of probabilistic logic programming under the distribution semantics. We introduce Second Level Algebraic Model Counting (2AMC) as a generic framework for this kind of problem. As 2AMC is to (algebraic) model counting what forall-exists-SAT is to propositional satisfiability, it is notoriously hard to solve. First level techniques based on Knowledge Compilation (KC) have been adapted for specific 2AMC instances by imposing variable order constraints on the resulting circuit. However, those constraints can severely increase the circuit size and thus decrease the efficiency of such approaches. We show that we can exploit the logical structure of a 2AMC problem to omit parts of these constraints, thus limiting the negative effect. Furthermore, we introduce and implement a strategy to generate a sufficient set of constraints statically, with a priori guarantees for the performance of KC. Our empirical evaluation on several benchmarks and tasks confirms that our theoretical results can translate into more efficient solving in practice.

Verifying Catamorphism-Based Contracts using Constrained Horn Clauses
PRESENTER: Fabio Fioravanti

ABSTRACT. We address the problem of verifying that the functions of a program meet their contracts, specified by pre/postconditions. We follow an approach based on Constrained Horn Clauses (CHCs) by which the verification problem is reduced to the problem of checking satisfiability of a set of clauses derived from the given program and contracts. We consider programs that manipulate Algebraic Data Types (ADTs) and a class of contracts specified by using catamorphisms, that is, functions defined by a simple recursion schema on the given ADT. We show by several examples that state-of-the-art CHC satisfiability tools are not effective at solving the satisfiability problems obtained by direct translation of the contracts into CHCs. Then, we propose a transformation technique that removes the ADT terms from CHCs, thereby deriving new sets of clauses that work on basic sorts, such as integers and booleans. We prove that the transformation is sound, in the sense that if the transformed set of CHCs is satisfiable, then so is the original set. We also prove that the transformation always terminates for the class of contracts specified by catamorphisms. Finally, we present the experimental results obtained by an implementation of our technique when verifying many non-trivial contracts for list-sorting and tree-manipulating programs.

09:00-10:30 Session 73C: Description Logics (KR)
Location: Taub 2
Interpolants and Explicit Definitions in Extensions of the Description Logic EL
PRESENTER: Marie Fortin

ABSTRACT. We show that the vast majority of extensions of the description logic EL enjoy neither the Craig interpolation property nor the projective Beth definability property. This is the case, for example, for EL with nominals, EL with the universal role, EL with role hierarchies and transitive roles, and for ELI. It follows, in particular, that the existence of an explicit definition of a concept or individual name cannot be reduced to subsumption checking via implicit definability. We show that nevertheless the existence of interpolants and explicit definitions can be decided in polynomial time for standard tractable extensions of EL (such as EL++) and in ExpTime for ELI and various extensions. It follows that these existence problems are not harder than subsumption, which is in sharp contrast to the situation for expressive DLs. We also obtain tight bounds for the size of interpolants and explicit definitions and the complexity of computing them: single exponential for tractable standard extensions of EL and double exponential for ELI and extensions. We close with a discussion of Horn-DLs such as Horn-ALCI.

Sticky Policies in OWL2: Extending PL with fixpoints and Transitive Closure
PRESENTER: Luigi Sauro

ABSTRACT. PL is a low-complexity profile of OWL2, expressly designed in the H2020 project SPECIAL to encode data usage policies and personal data protection regulations - such as the GDPR - in a machine understandable way. With PL, the compliance of privacy policies with the GDPR and with the data subjects' consent to processing can be checked automatically and in real time. The follow-up H2020 project TRAPEZE is extending PL to support richer policies, including "sticky policies". These are a sort of license that applies to data transfers and specifies how the recipient can use the data. Sticky policies may be "recursive", i.e. they may apply not only to the first data transfer, but also to all subsequent transfer operations that the (direct or indirect) recipients may execute in the future. Such recursive sticky policies may be encoded with fixpoints or transitive role closure. In this paper we prove that such extensions make compliance checking intractable. Since the scalability of compliance checking is a major requirement in this area, these results justify an ad-hoc, low complexity approach to encoding sticky policies.

Pushing Optimal ABox Repair from EL Towards More Expressive Horn-DLs

ABSTRACT. Ontologies based on Description Logic (DL) represent general background knowledge in a terminology (TBox) and the actual data in an ABox. DL systems can then be used to compute consequences (such as answers to certain queries) from an ontology consisting of a TBox and an ABox. Since both human-made and machine-learned data sets may contain errors, which manifest themselves as unintuitive or obviously incorrect consequences, repairing DL-based ontologies in the sense of removing such unwanted consequences is an important topic in DL research. Most of the repair approaches described in the literature produce repairs that are not optimal, in the sense that they do not guarantee that only a minimal set of consequences is removed. In a series of papers, we have developed an approach for computing optimal repairs, starting with the restricted setting of an EL instance store, extending this to the more general setting of a quantified ABox (where some individuals may be anonymous), and then adding a static EL TBox.

Here, we extend the expressivity of the underlying DL considerably, by adding nominals, inverse roles, regular role inclusions and the bottom concept to EL, which yields a fragment of the well-known DL Horn-SROIQ. The ideas underlying our repair approach still apply to this DL, though several non-trivial extensions are needed to deal with the new constructors and axioms. The developed repair approach can also be used to treat unwanted consequences expressed by certain conjunctive queries or regular path queries, and to handle Horn-ALCOI TBoxes with regular role inclusions.

09:00-10:30 Session 73D: Systems & Robotics / Existential Rules (KR)
Location: Taub 3
Querying Inconsistent Prioritized Data with ORBITS: Algorithms, Implementation, and Experiments
PRESENTER: Meghyn Bienvenu

ABSTRACT. We investigate practical algorithms for inconsistency-tolerant query answering over prioritized knowledge bases, which consist of a logical theory, a set of facts, and a priority relation between conflicting facts. We consider three well-known semantics (AR, IAR and brave) based upon two notions of optimal repairs (Pareto and completion). Deciding whether a query answer holds under these semantics is (co)NP-complete in data complexity for a large class of logical theories, and SAT-based procedures have been devised for repair-based semantics when there is no priority relation, or the relation has a special structure. The present paper introduces the first SAT encodings for Pareto- and completion-optimal repairs w.r.t. general priority relations and proposes several ways of employing existing and new encodings to compute answers under (optimal) repair-based semantics, by exploiting different reasoning modes of SAT solvers. The comprehensive experimental evaluation of our implementation compares both (i) the impact of adopting semantics based on different kinds of repairs, and (ii) the relative performances of alternative procedures for the same semantics.

Forecasting Argumentation Frameworks
PRESENTER: Francesca Toni

ABSTRACT. We introduce Forecasting Argumentation Frameworks (FAFs), a novel argumentation-based methodology for forecasting informed by recent judgmental forecasting research. FAFs comprise update frameworks which empower (human or artificial) agents to argue over time about the probability of outcomes, e.g. the winner of an election or a fluctuation in inflation rates, whilst flagging perceived irrationality in the agents' behaviour with a view to improving their forecasting accuracy. FAFs include five argument types, amounting to standard pro/con arguments, as in bipolar argumentation, as well as novel proposal arguments and increase/decrease amendment arguments. We adapt an existing gradual semantics for bipolar argumentation to determine the aggregated dialectical strength of proposal arguments and define irrational behaviour. We then give a simple aggregation function which produces a final group forecast from rational agents' individual forecasts. We identify and study properties of FAFs, and conduct an empirical evaluation which signals FAFs' potential to increase the forecasting accuracy of participants.

ALASPO: An Adaptive Large-Neighbourhood ASP Optimiser
PRESENTER: Tobias Geibinger

ABSTRACT. We present the system ALASPO which implements Adaptive Large-neighbourhood search for Answer Set Programming (ASP) Optimisation. Large-neighbourhood search (LNS) is a meta heuristic where parts of a solution are destroyed and reconstructed in an attempt to improve an overall objective. ALASPO currently supports the ASP solver clingo, as well as its extensions clingo-dl and clingcon for difference and full integer constraints, and multi-shot solving for an efficient implementation of the LNS loop. Neighbourhoods can be defined in code or declaratively as part of the ASP encoding, and they can be used in a portfolio together with different search configurations. Switching can be done on the fly during search, or the system can be run in a fully self-adaptive mode. To demonstrate the benefits of adaptive LNS for ASP, we evaluate ALASPO on different optimisation benchmarks, thereby complementing previous work.

On the Relationship between Shy and Warded Datalog+/-
PRESENTER: Teodoro Baldazzi

ABSTRACT. Datalog^E is the extension of Datalog with existential quantification. While its high expressive power, underpinned by a simple syntax and the support for full recursion, renders it particularly suitable for modern applications on Knowledge Graphs, query answering (QA) over such a language is known to be undecidable in general. For this reason, different fragments have emerged, introducing syntactic limitations to Datalog^E that strike a balance between its expressive power and the computational complexity of QA, to achieve decidability. In this short paper, we focus on two promising tractable candidates, namely Shy and Warded Datalog+/-. Reacting to an explicit interest from the community, we shed light on the relationship between these fragments. Moreover, we carry out an experimental analysis of the systems implementing Shy and Warded, respectively DLV^E and Vadalog.

Chasing streams with existential rules
PRESENTER: Thomas Eiter

ABSTRACT. We study the problem of performing reasoning with existential rules on streams of data for query answering about the present and future. Although reasoning with existential rules is a problem that has been widely studied (e.g., see chasing algorithms), current work has mainly focused on static data, and we are not aware of any extension to streams of data. To cover this gap, we considered LARS, currently one of the most prominent frameworks for rule-based stream reasoning. LARS is an ideal starting point because it offers many stream operators (e.g., windowing), but it does not support value invention. To remove this limitation, we show how we can extend LARS with existentially quantified variables, introduce a procedure to translate LARS reasoning into a set of existential rules, and describe how we can leverage the temporal nature of the stream to implement stronger acyclicity notions. Our contribution also includes a preliminary experimental evaluation over artificial streams.

09:00-10:00 Session 73E: Invited talk by Mikolaj Bojanczyk (LICS)

Transducers of polynomial growth

Abstract: The polyregular functions are a class of string-to-string functions that have polynomial size outputs, and which can be defined using finite state models. There are many equivalent definitions of this class, with roots in automata theory, programming  languages and logic. The talk surveys recent results on polyregular functions. It presents five of the equivalent definitions, and gives self-contained proofs for most of the equivalences. Decision problems as well as restricted subclasses of the polyregular functions are also discussed.

Location: Taub 1
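A standard example of a string-to-string function with polynomial-size output, in the spirit of the talk, is the "squaring" function that maps a word w to |w| concatenated copies of w; the sketch below is illustrative and not taken from the talk itself:

```python
# "Squaring" maps a word w to w repeated |w| times, so the output length
# is |w|^2: a simple instance of polynomial (here quadratic) output growth,
# computable by a two-pass finite-state device.

def squaring(w: str) -> str:
    return "".join(w for _ in w)  # one copy of w per input position
```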
09:15-10:30 Session 74 (Mentoring Workshop)
Location: Taub 6
Formal methods for Machine Learning
PRESENTER: Marijana Lazić
10:00-10:30 Session 75: Poster Session (LICS)

Daniel Gratzer. Normalization for multimodal type theory

Clemens Grabmayer. Milner's Proof System for Regular Expressions Modulo Bisimilarity is Complete

Soumodev Mal. On the Satisfiability of Context-free String Constraints with Subword Ordering

Matthias Naaf. Zero-One Laws and Almost Sure Valuations of First-Order Logic in Semiring Semantics

Jana Hofmann. Deciding Hyperproperties combined with Functional Specifications


10:30-11:00 Coffee Break
11:00-12:30 Session 76A: Theory + TDM (CP)

11:00-12:00: Theory

12:00-12:30: Trustworthy Decision Making

Location: Taub 7
Complexity of Minimum-Size Arc-Inconsistency Explanations
PRESENTER: Emmanuel Hebrard

ABSTRACT. Explaining the outcome of programs has become one of the main concerns in AI research. In constraint programming, a user may want the system to explain why a given variable assignment is not feasible or how it came to the conclusion that the problem does not have any solution. One solution to the latter is to return to the user a sequence of simple reasoning steps that lead to inconsistency. Arc consistency is a well-known form of reasoning that can be understood by a human. We consider explanations as sequences of propagation steps of a constraint on a variable (i.e. the ubiquitous revise function in arc consistency algorithms). We characterize, on binary CSPs, cases for which providing a shortest such explanation is easy: when domains are Boolean or when the constraint graph has maximum degree two. However, it is NP-hard in general. Surprisingly, it remains NP-hard for as few as four variables. It also remains NP-hard if the problem has a tree structure, despite the fact that arc consistency is a decision procedure on trees.
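The "ubiquitous revise function" mentioned above can be sketched in a few lines (a minimal illustration with an assumed representation of binary constraints as sets of allowed pairs, not the paper's formalization):

```python
# Minimal sketch of the classic `revise` step of arc consistency: remove
# values of variable x that have no support in y's domain under a binary
# constraint given as a set of allowed (x, y) value pairs.

def revise(domains, constraint, x, y):
    """Prune domains[x] against domains[y]; return True iff a value was removed."""
    removed = False
    for vx in list(domains[x]):  # copy, since we mutate the set
        if not any((vx, vy) in constraint for vy in domains[y]):
            domains[x].remove(vx)
            removed = True
    return removed

doms = {"x": {1, 2, 3}, "y": {2, 3}}
lt = {(a, b) for a in range(1, 4) for b in range(1, 4) if a < b}  # x < y
revise(doms, lt, "x", "y")  # prunes 3 from x's domain
```

An explanation in the paper's sense is a sequence of such revise calls leading to an empty domain; the hardness results concern finding a shortest such sequence.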

Weisfeiler-Leman Invariant Promise Valued CSPs
PRESENTER: Silvia Butti

ABSTRACT. In a recent line of work, Butti and Dalmau have shown that a fixed-template Constraint Satisfaction Problem is solvable by a certain natural linear programming relaxation (equivalent to the basic linear programming relaxation) if and only if it is solvable on a certain distributed network, and this happens if and only if its set of Yes instances is closed under Weisfeiler-Leman equivalence. We generalize this result to the much broader framework of fixed-template Promise Valued Constraint Satisfaction Problems. Moreover, we show that two commonly used linear programming relaxations are no longer equivalent in this broader framework.

An Auditable Constraint Programming Solver
PRESENTER: Ciaran McCreesh

ABSTRACT. We describe the design and implementation of a new constraint programming solver that can produce an auditable record of what problem was solved and how the solution was reached. As well as a solution, this solver provides an independently verifiable proof log demonstrating that the solution is correct. This proof log uses the VeriPB proof system, which is based upon cutting planes reasoning with extension variables. We explain how this system can support proof logging when using global constraints, variables with large domains, and reformulation, despite not natively understanding any of these concepts.

11:00-12:30 Session 76B: Propagation + Heuristics (CP)
Location: Taub 4
A Portfolio-Based Approach to Select Efficient Variable Ordering Heuristics for Constraint Satisfaction Problems

ABSTRACT. Variable ordering heuristics (VOH) play a central role in solving Constraint Satisfaction Problems (CSP). The performance of different VOHs may vary greatly in solving the same CSP instance. In this paper, we propose an approach to select efficient VOHs for solving different CSP instances. The approach contains two phases. The first phase is a probing procedure that runs a simple portfolio strategy containing several different VOHs. The portfolio tries to use each of the candidate VOHs to guide backtracking search to solve the CSP instance within a limited number of failures. If the CSP is not solved by the portfolio, one of the candidates is selected for it by analysing the information attached to the search trees generated by the candidates. The second phase uses the selected VOH to guide backtracking search to solve the CSP. The experiments are run on the MiniZinc benchmark suite with four different VOHs that are considered the state of the art. The results show that the proposed approach finds the best VOH for more than 67% of the instances, and that it solves more instances than each of the candidate VOHs as well as an adaptive VOH based on Multi-Armed Bandits. It could be an effective adaptive search strategy for black-box CSP solvers.

Explaining Propagation for Gini and Spread with Variable Mean
PRESENTER: Alexander Ek

ABSTRACT. In optimisation problems involving multiple agents (stakeholders) we often want to make sure that the solution is balanced and fair. That is, to maximise total utility subject to an upper bound on the statistical dispersion (e.g., spread or the Gini coefficient) of the utility given to different agents, or minimise dispersion subject to some lower bounds on utility. These needs arise in, for example, balancing tardiness in scheduling, unwanted shifts in rostering, and desired resources in resource allocation, or minimising deviation from a baseline in schedule repair, to name a few. These problems are often quite challenging. To solve them efficiently we want to effectively reason about dispersion. Previous work has studied the case where the mean is fixed, but this may not be possible for many problems, e.g. scheduling where total utility depends on the final schedule. In this paper we introduce two log-linear-time dispersion propagators—(a) spread (variance, and indirectly standard deviation) and (b) the Gini coefficient—capable of explaining their propagations, thus allowing effective learning solvers to be applied to these problems. Propagators for (a) exist in the literature but do not explain themselves, while propagators for (b) have not been previously studied. We avoid introducing floating point variables, which are usually not supported by learning solvers, by reasoning on integer or scaled versions. We show that learning can substantially improve the solving of problems where we want to bound dispersion and optimise total utility and vice versa.

Structured Set Variable Domains in Bayesian Network Structure Learning

ABSTRACT. Constraint programming is a state of the art technique for learning the structure of Bayesian Networks from data (Bayesian Network Structure Learning -- BNSL). However, scalability both for CP and other combinatorial optimization techniques for this problem is limited by the fact that the basic decision variables are set variables with domain sizes Omega(n^log n) and potentially even larger, where n is the number of random variables. Usual techniques for handling set variables in CP are not useful, as they lead to poor bounds. In this paper, we propose using data structures for storing sets of sets to represent set variable domains. We show that relatively simple operations are sufficient to implement all propagation and bounding algorithms, and that the use of these data structures improves scalability of a state of the art CP-based solver for BNSL.

11:00-12:30 Session 76C: Rewriting (FSCD)
Polynomial Termination over N is Undecidable

ABSTRACT. In this paper we prove via a reduction from Hilbert's 10th problem that the problem whether the termination of a given rewrite system can be shown by a polynomial interpretation in the natural numbers is undecidable, even for rewrite systems that are incrementally polynomially terminating. We also prove that incremental polynomial termination is an undecidable property of terminating rewrite systems.

Compositional Confluence Criteria
PRESENTER: Nao Hirokawa

ABSTRACT. We show how confluence criteria based on decreasing diagrams are generalized to ones composable with other criteria. To demonstrate this, the methods of rule labeling and critical pair systems for term rewriting are recast as composable criteria. In addition, we prove that Toyama's parallel closedness result based on parallel critical pairs subsumes his almost parallel closedness theorem.

Rewriting for monoidal closed categories
PRESENTER: David Sprunger

ABSTRACT. This paper develops a formal string diagram language for monoidal closed categories. Previous work has shown that string diagrams for freely generated symmetric monoidal categories can be viewed as hypergraphs with interfaces, and the axioms of these categories can be realized by rewriting systems. This work extends this slightly to general symmetric monoidal categories, then proposes hierarchical hypergraphs as a suitable formalization of string diagrams for monoidal closed categories. We then show double pushout rewriting captures the axioms of these closed categories.

11:00-12:00 Session 76D: Invited talk by Fabrizio Riguzzi: Probabilistic Logic Programming: Semantics, Inference and Learning (ICLP)

Probabilistic Logic Programming: Semantics, Inference and Learning

Abstract: Probabilistic Logic Programming is now more than 30 years old and has become an active field of research at the intersection of Logic Programming and Uncertainty in AI. Among the various semantics, the Distribution Semantics (DS) has emerged as one of the most intuitive and versatile. The talk will introduce the DS and its various extensions to deal with function symbols, continuous random variables and multiple stable models. Computing the probability of queries is the task of inference, which can be solved either by exact or approximate algorithms. Exact inference is usually performed by means of knowledge compilation, while approximate inference is performed by means of Monte Carlo methods. Inference is at the basis of learning programs from data, which has recently received much attention. The talk will provide an overview of algorithms for learning the parameters and for learning the structure of programs, discussing the various Inductive Logic Programming techniques that have been adopted and the various tradeoffs between quality of the model and speed of learning.

Location: Taub 9
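The Monte Carlo inference mentioned in the talk abstract can be sketched on a toy program under the distribution semantics (an illustrative sketch with an assumed two-fact program, not connected to any particular system): sample a world by flipping each probabilistic fact independently, then check whether the query holds in that world.

```python
# Hedged sketch of Monte Carlo inference under the distribution semantics.
# Toy program:  0.6::edge(a,b).  0.5::edge(b,c).  path(X,Z) :- edge(X,Y), edge(Y,Z).

import random

facts = {("a", "b"): 0.6, ("b", "c"): 0.5}

def query_path_ac(world):
    # path(a,c) holds iff both edges are present in the sampled world.
    return ("a", "b") in world and ("b", "c") in world

def monte_carlo(prob_facts, query, samples=50000, seed=1):
    rng = random.Random(seed)
    hits = 0
    for _ in range(samples):
        world = {f for f, p in prob_facts.items() if rng.random() < p}
        hits += query(world)
    return hits / samples

estimate = monte_carlo(facts, query_path_ac)  # should approach 0.6 * 0.5 = 0.3
```

Exact inference via knowledge compilation would instead build a circuit over the two facts and evaluate it, which here gives exactly 0.3.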
11:00-12:30 Session 76E: Belief Merging/Revision (KR)
Location: Taub 2
The More the Worst-Case-Merrier: A Generalized Condorcet Jury Theorem for Belief Fusion
PRESENTER: Jonas Karge

ABSTRACT. In multi-agent belief fusion, there is increasing interest in results and methods from social choice theory. As a theoretical cornerstone, the Condorcet Jury Theorem (CJT) states that given a number of equally competent, independent agents where each is more likely than not to identify the true one of two alternatives, the chances of determining this objective truth by majority voting increase with the number of participating agents, approaching certainty. Past generalizations of the CJT have shown that some of its underlying assumptions can be weakened. Motivated by requirements from practical belief fusion scenarios, we provide a significant further generalization that subsumes several of the previous ones. Our considered setting simultaneously allows for heterogeneous competence levels across the agents (even tolerating entirely incompetent or even malicious voters), and voting for any number of alternatives from a finite set. We derive practical lower bounds for the numbers of agents needed to give probabilistic guarantees for determining the true state through approval voting. We also demonstrate that the non-asymptotic part of the CJT fails in our setting for arbitrarily high numbers of voters.
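The classical CJT setting described above is easy to check empirically (a small simulation of the homogeneous two-alternative case only, illustrating the theorem the paper generalizes, not the paper's own setting):

```python
# Monte Carlo check of the classical Condorcet Jury Theorem: with per-agent
# competence p > 1/2, the probability that a strict majority votes for the
# true alternative grows with the number of agents.

import random

def majority_correct_prob(n_agents, p, trials=20000, seed=0):
    rng = random.Random(seed)
    correct = 0
    for _ in range(trials):
        votes = sum(rng.random() < p for _ in range(n_agents))
        if votes > n_agents / 2:  # strict majority for the true alternative
            correct += 1
    return correct / trials

small = majority_correct_prob(5, 0.6)
large = majority_correct_prob(51, 0.6)
# With p = 0.6, 51 agents should clearly outperform 5 agents.
```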

Region-Based Merging of Open-Domain Terminological Knowledge

ABSTRACT. This paper introduces a novel method for merging open-domain terminological knowledge. It takes advantage of the Region Connection Calculus (RCC5), a formalism used to represent regions in a topological space and to reason about their set-theoretic relationships. To this end, we first propose a faithful translation of terminological knowledge provided by several and potentially conflicting sources into region spaces. The merging is then performed on these spaces, and the result is translated back into the underlying language of the input sources. Our approach allows us to benefit from the expressivity and the flexibility of RCC5 while dealing with conflicting knowledge in a principled way.

Projection of Belief in the Presence of Nondeterministic Actions and Fallible Sensing
PRESENTER: Jens Classen

ABSTRACT. In a recent paper, we presented a Situation Calculus-based framework for modelling an agent that has incomplete or inaccurate knowledge about its environment, whose actions are non-deterministic, and whose sensor might give incorrect results. Generalizing earlier proposals, the presented approach represented the agent's epistemic state by a set of situations ranked by their respective plausibility, which would then be updated by modifying the plausibility ranks accordingly. Two notions of belief were distinguished, an extensional "bird's eye" view on the one hand, and an intensional one on the other hand, where beliefs are represented from the agent's perspective. In this short paper, we extend our earlier work by considering the problem of projection in this framework, i.e. the question whether a certain (epistemic) formula will hold after a given sequence of actions. We present results on both regression, where the query is transformed into an equivalent one about the initial situation, as well as progression, where the knowledge base is updated to reflect the situation after executing the action sequence in question.

Inference with System W Satisfies Syntax Splitting
PRESENTER: Jonas Haldimann

ABSTRACT. In this paper, we investigate inductive inference with system W from conditional belief bases with respect to syntax splitting. The concept of syntax splitting for inductive inference states that inferences about independent parts of the signature should not affect each other. This was captured in work by Kern-Isberner, Beierle, and Brewka in the form of postulates for inductive inference operators expressing syntax splitting as a combination of relevance and independence; it was also shown that c-inference fulfils syntax splitting, while system P inference and system Z both fail to satisfy it. System W is a recently introduced inference system for nonmonotonic reasoning that captures and properly extends system Z as well as c-inference. We show that system W fulfils the syntax splitting postulates for inductive inference operators by showing that it satisfies the required properties of relevance and independence. This makes system W another inference operator besides c-inference that fully complies with syntax splitting, while in contrast to c-inference, also extending rational closure.

Iterated Belief Change, Computationally
PRESENTER: Kai Sauerwald

ABSTRACT. Iterated Belief Change is the research area that investigates principles for the dynamics of beliefs over (possibly unboundedly) many subsequent belief changes. In this paper, we demonstrate how iterated belief change is connected to computation. In particular, we show that iterated belief revision is Turing complete, even under the condition that broadly accepted principles like the Darwiche-Pearl postulates for iterated revision hold.

11:00-12:30 Session 76F: Datalog & Existential Rules (KR)
Location: Taub 3
Conservative Extensions for Existential Rules
PRESENTER: Carsten Lutz

ABSTRACT. We study the problem of deciding, given sets T1, T2 of TGDs (also called existential rules), whether T2 is a conservative extension of T1. We consider two natural notions of conservative extension, one pertaining to answers to conjunctive queries over databases and one to homomorphisms between chased databases. Our main results are that these problems are undecidable for linear TGDs, undecidable for guarded TGDs even when T1 is empty, and decidable for frontier-one TGDs.

Revisiting Semiring Provenance for Datalog
PRESENTER: Liat Peterfreund

ABSTRACT. Data provenance consists in recording meta-information during query evaluation, in order to enrich query results with their trust level, likelihood, evaluation cost, and more. The framework of semiring provenance abstracts from the specific kind of meta-information that annotates the data. While the definition of semiring provenance is uncontroversial for unions of conjunctive queries, the picture is less clear for Datalog. Indeed, the original definition might include infinite computations, and is not consistent with other proposals for Datalog semantics over annotated data. In this work, we propose and investigate several provenance semantics, based on different approaches for defining classical Datalog semantics. We study the relationship between these semantics, and introduce properties that allow us to analyze and compare them.
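As a minimal illustration of the semiring idea (for a single conjunctive query, not the harder Datalog case the paper addresses; all names here are illustrative), annotations from a commutative semiring can be propagated through a join by multiplying along a derivation and summing over alternative derivations:

```python
# Sketch of semiring provenance for the query Q(x,z) :- R(x,y), S(y,z).
# R and S map tuples to annotations from a commutative semiring.

def eval_join(R, S, plus, times, zero):
    out = {}
    for (x, y), a in R.items():
        for (y2, z), b in S.items():
            if y == y2:                       # join condition
                key = (x, z)
                # times combines the annotations of joined facts;
                # plus merges alternative derivations of the same tuple.
                out[key] = plus(out.get(key, zero), times(a, b))
    return out

# Counting semiring (N, +, *, 0, 1): annotations count derivations.
R = {("a", "b"): 2, ("a", "c"): 1}
S = {("b", "d"): 3, ("c", "d"): 1}
prov = eval_join(R, S, plus=lambda u, v: u + v,
                 times=lambda u, v: u * v, zero=0)
print(prov)  # {('a', 'd'): 7}: 2*3 derivations via b, plus 1*1 via c
```

Swapping in another semiring (e.g. min/+ for cost, or polynomials for full provenance) changes only the `plus`/`times`/`zero` arguments.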

Normalisations of Existential Rules: Not so Innocuous!

ABSTRACT. Existential rules are an expressive knowledge representation language mainly developed to query data. In the literature, they are often supposed to be in some normal form that simplifies technical developments. For instance, a common assumption is that rule heads are atomic, i.e., restricted to a single atom. Such assumptions are considered to be made without loss of generality as long as all sets of rules can be normalised while preserving entailment. However, an important question is whether the properties that ensure the decidability of reasoning are preserved as well. We provide a systematic study of the impact of these procedures on the different chase variants with respect to chase (non-)termination and FO-rewritability. This also leads us to solve open problems related to chase termination of independent interest.

11:00-12:30 Session 76G: Verification (LICS)

"Verification": 6 papers (12 min presentation + 2-3 min Q&A)

Location: Taub 1
On the Satisfiability of Context-free String Constraints with Subword Ordering
PRESENTER: Soumodev Mal

ABSTRACT. We consider a variant of string constraints given by membership constraints in context-free languages and the subword relation between variables. The satisfiability problem for this variant turns out to be undecidable. We consider a fragment in which the subword order constraints do not impose any cyclic dependency between variables. We show that this fragment is NEXPTIME-complete. As an application of our result, we settle the complexity of control state reachability in acyclic lossy channel pushdown systems, an important distributed system model. The problem was shown to be decidable by Atig et al. in 2008. However, no elementary upper bound was known. We show that this problem is NEXPTIME-complete.

The complexity of soundness in workflow nets

ABSTRACT. Workflow nets are a popular variant of Petri nets that allow for the algorithmic formal analysis of business processes. The central decision problems concerning workflow nets deal with soundness, where the initial and final configurations are specified. Intuitively, soundness states that from every reachable configuration one can reach the final configuration. We settle the long-open complexity of the three main variants of soundness: classical, structural and generalised soundness. The first two are EXPSPACE-complete, and, surprisingly, the latter is PSPACE-complete, thus computationally simpler.

Lower Bounds for the Reachability Problem in Fixed Dimensional VASSes

ABSTRACT. We study the complexity of the reachability problem for Vector Addition Systems with States (VASSes) in fixed dimensions. We provide four lower bounds improving the currently known state-of-art: 1) NP-hardness for unary flat 4-VASSes (VASSes in dimension 4), 2) PSpace-hardness for unary 5-VASSes, 3) ExpSpace-hardness for binary 6-VASSes and 4) Tower-hardness for unary 8-VASSes.

Probabilistic Verification Beyond Context-Freeness
PRESENTER: Andrzej Murawski

ABSTRACT. Probabilistic pushdown automata (recursive state machines) are a widely known model of probabilistic computation associated with many decidable problems about termination (time) and linear-time model checking. Higher-order recursion schemes (HORS) are a prominent formalism for the analysis of higher-order computation. Recent studies showed that, for the probabilistic variant of HORS, even the basic problem of determining whether a scheme terminates almost surely is undecidable. Moreover, the undecidability already holds for order-2 schemes (order-1 schemes are known to correspond to pushdown automata). Motivated by these results, we study restricted probabilistic tree-stack automata (rPTSA), which in the nondeterministic setting are known to characterise a proper extension of context-free languages, namely, the multiple context-free languages. We show that several verification problems, such as almost-sure termination, positive almost-sure termination and omega-regular model checking are decidable for this class. At the level of higher-order recursion schemes, this corresponds to being able to verify a probabilistic version of the so-called affine-additive higher-order recursion schemes (PAHORS). PAHORS extend order-1 recursion schemes and are incomparable with order-2 schemes.

Ramsey Quantifiers over Automatic Structures: Complexity and Applications to Verification

ABSTRACT. Automatic structures are infinite structures that are finitely represented by synchronized finite-state automata. We investigate Ramsey quantifiers over automatic structures, which express the existence of infinite cliques in an edge relation, and observe connections to two problems in verification: (1) reachability with (generalized) Büchi conditions in regular model checking can be seen as Ramsey quantification over transitive automatic graphs, (2) checking monadic decomposability of automatic relations can be viewed as Ramsey quantification over co-transitive automatic graphs. We provide a comprehensive complexity landscape of Ramsey quantifiers, all between NL and EXP, which yields a wealth of new results with precise complexity.

The complexity of bidirected reachability in valence systems
PRESENTER: Georg Zetzsche

ABSTRACT. An infinite-state system is bidirected (sometimes called reversible) if for every transition, there exists another with opposite effect. We study reachability in bidirected systems in the framework of valence systems, an abstract model featuring finitely many control states and an infinite-state storage that is specified by a finite graph. Picking suitable graphs yields counters as in vector addition systems, pushdowns, integer counters, and combinations. We provide a comprehensive complexity analysis. For example, the complexity of bidirected reachability drops substantially (often to polynomial time) from that of general reachability, for almost every storage mechanism where general reachability is known to be decidable.

11:00-12:30 Session 76H (Mentoring Workshop)
Location: Taub 6
How to be an Abstraction Engineer

ABSTRACT. Both computing and mathematics are devoted to studying abstraction. Mathematics is the science of abstraction, and computing is abstraction engineering. Abstractions in mathematics include sets, graphs, functions, algebras, and spaces. In computing, abstractions typically include floating-point numbers, stacks, lists, channels, processes, protocols, instruction sets, type systems, and programming languages. The "computer" is itself a key abstraction since we use it to represent and manipulate other abstractions. We outline how the perspective of computing as abstraction engineering yields concrete problem-solving wisdom.

How to Give a Good (Research) Talk
11:00-12:30 Session 76I: Bryant Discoveries Day (BDD) event + Tool papers (SAT)
(BDD event)
Pedant: A Certifying DQBF Solver

ABSTRACT. Pedant is a solver for Dependency Quantified Boolean Formulas (DQBF) that combines propositional definition extraction with Counterexample-Guided Inductive Synthesis (CEGIS) to compute a model of a given formula. Pedant 2 improves upon an earlier version in several ways. We extend the notion of dependencies by allowing existential variables to depend on other existential variables. This leads to more and smaller definitions, as well as more concise repairs for counterexamples. Additionally, we reduce counterexamples by determining minimal separators in a conflict graph, and use decision tree learning to obtain default functions for undetermined variables. An experimental evaluation on standard benchmarks showed a significant increase in the number of solved instances compared to the previous version of our solver.

QBF Programming with the Modeling Language Bule

ABSTRACT. We introduce Bule, a modeling language for problems from the complexity class PSPACE via quantified Boolean formulas (QBF), that is, propositional formulas in which the variables might be existentially or universally quantified. The language features a natural minimal models semantics and a subsequent grounding-based generation of a quantified Boolean formula in conjunctive normal form. This makes Bule easy and intuitive to learn, and an ideal language for getting started with modeling PSPACE problems in terms of QBF. We implemented a tool of the same name that provides an interface to arbitrary QBF solvers, so that the modeled problems can also be solved. We analyze the complexity theoretic properties of our modeling language, provide a library for common modeling patterns, and showcase its use on several examples.

OptiLog v2: Model, Solve, Tune And Run

ABSTRACT. We present an extension of the OptiLog Python framework. We fully redesign the solvers module to support dynamic loading of incremental SAT solvers with support for external libraries. We introduce new modules for modelling problems in Non-CNF format with support for Pseudo-Boolean constraints, and for evaluating and parsing the results of applications, and we add support for constrained execution of blackbox programs and SAT-heritage integration. All these enhancements allow OptiLog to become a Swiss Army knife for SAT-based applications in academic and industrial environments.

12:00-12:30 Session 77: ASP Optimization (ICLP)
Location: Taub 9
Rushing and Strolling among Answer Sets – Navigation Made Easy
PRESENTER: Dominik Rusovac

ABSTRACT. Answer set programming (ASP) is a popular declarative programming paradigm with a wide range of applications in artificial intelligence. Oftentimes, when modeling an AI problem with ASP, we are interested in more than a simple search for optimal solutions: an actual solution, the differences between solutions, or the number of solutions of the ASP program may matter, for example when a user aims to identify a specific answer set according to her needs, or requires the total number of diverging solutions to comprehend probabilistic applications such as reasoning in medical domains. Currently, only problem-specific and handcrafted encoding techniques are available for navigating the solution space of ASP programs, which is oftentimes not enough. In this paper, we propose a formal and general framework for interactive navigation toward desired subsets of answer sets, analogous to faceted browsing. Our approach enables the user to explore the solution space by consciously zooming in or out of sub-spaces of solutions at a configurable pace. We show that weighted faceted navigation is computationally hard. Finally, we provide an implementation of our approach that demonstrates the feasibility of our framework for incomprehensible solution spaces.

Large-Neighbourhood Search for ASP Optimisation
PRESENTER: Tobias Geibinger

ABSTRACT. While answer-set programming (ASP) is a successful approach to declarative problem solving, optimisation can still be a challenge for it. Large-neighbourhood search (LNS) is a metaheuristic technique where parts of a solution are alternately destroyed and reconstructed, which has high but untapped potential for ASP solving. We present a framework for LNS optimisation for ASP, in which neighbourhoods can be specified either declaratively as part of the ASP encoding, or automatically generated by code. In the full paper, we illustrate the framework on different optimisation problems, some of which are notoriously difficult, including shift planning and a parallel machine scheduling problem from semiconductor production, which demonstrates the effectiveness of the LNS approach.
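The destroy-and-reconstruct loop at the heart of LNS can be sketched generically; in the paper's framework the repair step is an ASP solver call, whereas this toy stand-in (all names illustrative) uses a greedy repair on a two-machine makespan problem:

```python
import random

# Generic large-neighbourhood search: repeatedly free ("destroy") part
# of the incumbent solution and rebuild ("repair") it, keeping improvements.

def greedy_repair(assign, jobs, machines):
    load = {m: sum(jobs[j] for j, mm in assign.items() if mm == m) for m in machines}
    # place freed jobs, longest first, on the least-loaded machine
    for j in sorted((j for j in jobs if assign[j] is None), key=lambda j: -jobs[j]):
        m = min(machines, key=lambda m: load[m])
        assign[j] = m
        load[m] += jobs[j]
    return assign

def makespan(assign, jobs, machines):
    return max(sum(jobs[j] for j, m in assign.items() if m == mm) for mm in machines)

def lns(jobs, machines, iters=200, destroy_frac=0.3, seed=0):
    rng = random.Random(seed)
    best = greedy_repair({j: None for j in jobs}, jobs, machines)
    best_cost = makespan(best, jobs, machines)
    for _ in range(iters):
        cand = dict(best)
        for j in rng.sample(list(jobs), max(1, int(destroy_frac * len(jobs)))):
            cand[j] = None                               # destroy
        cand = greedy_repair(cand, jobs, machines)       # repair
        cost = makespan(cand, jobs, machines)
        if cost <= best_cost:
            best, best_cost = cand, cost
    return best_cost

jobs = {f"j{i}": d for i, d in enumerate([7, 5, 4, 4, 3, 3, 2, 2])}
print(lns(jobs, machines=["m1", "m2"]))
```

The initial greedy schedule here has makespan 16; since the job durations sum to 30, no schedule can beat 15, and LNS can only improve on the incumbent.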

12:30-14:00Lunch Break

Lunch will be held in Taub lobby (CP, LICS, ICLP) and in The Grand Water Research Institute (KR, FSCD, SAT).

14:00-15:30 Session 78A: Categorical Semantics (FSCD)
Stateful Structural Operational Semantics
PRESENTER: Henning Urbat

ABSTRACT. Compositionality of denotational semantics is an important concern in programming semantics. Mathematical operational semantics in the sense of Turi and Plotkin guarantees compositionality, but seen from the point of view of stateful computation it applies only to very fine-grained equivalences that essentially assume unrestricted interference by the environment between any two statements. We introduce the more restrictive Stateful SOS rule format for stateful languages. We show that compositionality of two more coarse-grained semantics, respectively given by assuming read-only interference or no interference between steps, remains an undecidable property even for Stateful SOS specifications. However, further restricting the rule format in a manner inspired by the cool GSOS formats of Bloom and van Glabbeek, we obtain the streamlined and cool Stateful SOS formats, which respectively guarantee compositionality of the two more abstract equivalences.

A combinatorial approach to higher-order structure for polynomial functors
PRESENTER: Hugo Paquet

ABSTRACT. Polynomial functors are categorical structures used in a variety of applications across theoretical computer science; for instance, in database theory, denotational semantics, functional programming, and type theory. A well-known problem is that the bicategory of finitary polynomial functors between categories of indexed sets is not cartesian closed, despite its success and influence on denotational models and linear logic.

This paper introduces a formal bridge between the model of finitary polynomial functors and the combinatorial theory of generalised species of structures. Our approach consists in viewing finitary polynomial functors as free analytic functors, corresponding to free generalised species. In order to characterize finitary polynomial functors from this combinatorial perspective, we consider groupoids with additional logical structure which we use to constrain the generalised species between them. The result is a new cartesian closed bicategory that embeds finitary polynomial functors.

Galois connecting call-by-value and call-by-name
PRESENTER: Dylan McDermott

ABSTRACT. We establish a general framework for reasoning about the relationship between call-by-value and call-by-name.

In languages with side-effects, call-by-value and call-by-name executions of programs often have different, but related, observable behaviours. For example, if a program might diverge but otherwise has no side-effects, then whenever it terminates under call-by-value, it terminates with the same result under call-by-name. We propose a technique for stating and proving these properties. The key ingredient is Levy's call-by-push-value calculus, which we use as a framework for reasoning about evaluation orders. We construct maps between the call-by-value and call-by-name interpretations of types. We then identify properties of side-effects that imply these maps form a Galois connection. These properties hold for some side-effects (such as divergence), but not others (such as mutable state). This gives rise to a general reasoning principle that relates call-by-value and call-by-name. We apply the reasoning principle to example side-effects including divergence and nondeterminism.
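The difference the abstract describes can be made concrete by modelling call-by-name in Python with thunks (a small illustration of the phenomenon, not of the paper's call-by-push-value construction): an argument is evaluated once under call-by-value but once per use under call-by-name, so effectful programs can disagree while effect-free terminating ones agree.

```python
# Observing call-by-value vs call-by-name via a side-effect counter.

calls = {"n": 0}

def effectful_arg():
    calls["n"] += 1
    return 21

def double_cbv(x):           # call-by-value: receives an evaluated value
    return x + x

def double_cbn(thunk):       # call-by-name: forces the thunk at each use
    return thunk() + thunk()

calls["n"] = 0
print(double_cbv(effectful_arg()), calls["n"])   # 42 1
calls["n"] = 0
print(double_cbn(effectful_arg), calls["n"])     # 42 2

# A call-by-name function that ignores its argument terminates even if
# forcing the argument would diverge.
def const_zero(thunk):
    return 0

diverge = lambda: diverge()   # never forced below
print(const_zero(diverge))    # 0
```

Calling `double_cbv(diverge())` instead would loop forever, which is exactly the divergence asymmetry between the two evaluation orders mentioned in the abstract.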

14:00-15:30 Session 78B: Logic Programming, Constraints & Machine Learning (ICLP)
Location: Taub 9
A Preliminary Data-driven Analysis of Common Errors Encountered by Novice Answer Set Programmers
PRESENTER: Zach Hansen

ABSTRACT. Answer Set Programming (ASP), a modern development of Logic Programming, is one of the foremost paradigms in the important branch of artificial intelligence (AI) known as Knowledge Representation and Reasoning. ASP enables a natural integration of Computing and AI education with STEM subjects. This integration addresses a widely acknowledged challenge in K-12 education, and early empirical results on ASP-based integration are promising. Although ASP is considered a simple language when compared with imperative programming languages, programming errors can still be a significant barrier for students. This is particularly true for K-12 students who are novice users of ASP. And while categorizing errors and measuring their difficulty has yielded insights into imperative languages like Java, little is known about the types and difficulty of errors encountered by K-12 students using ASP. To address this, we collected high school student programs submitted during a 4-session seminar teaching an ASP language known as SPARC. From error messages in this dataset, we identify a collection of error classes, then measure how frequently each error occurs and how difficult it is to resolve. We find that errors related to the sort system of SPARC are the most worthy of emphasis from educators based on their frequency of occurrence and resolution difficulty.

On Model Reconciliation: How to Reconcile When Robot Does not Know Human’s Model?

ABSTRACT. The Model Reconciliation Problem (MRP) was introduced to address issues in explainable AI planning. A solution to an MRP is an explanation for the differences between the models of the human and the planning agent (robot). Most approaches to solving MRPs assume that the robot, who needs to provide explanations, knows the human model. This assumption is not always realistic (e.g., the human might decide to update her model while the robot is unaware of the updates).

In this paper, we propose a dialog-based approach for computing explanations of MRPs under the assumptions that (i) the robot does not know the human model; (ii) the human and the robot share the set of predicates of the planning domain and their exchanges are about action descriptions and fluents' values; (iii) communication between the parties is perfect; and (iv) the parties are truthful. A solution of an MRP is computed through a dialog, defined as a sequence of rounds of exchanges, between the robot and the human. In each round, the robot sends a potential explanation, called a proposal, to the human, who replies with her evaluation of the proposal, called a response. We develop algorithms for computing proposals by the robot and responses by the human and implement these algorithms in a system that combines imperative means with answer set programming using the multi-shot feature of clingo.

An ASP approach for reasoning on neural networks under a finitely many-valued semantics for weighted conditional knowledge bases
PRESENTER: Laura Giordano

ABSTRACT. Weighted knowledge bases for description logics with typicality have been recently considered under a "concept-wise" multipreference semantics (in both the two-valued and fuzzy case), as the basis of a logical semantics of MultiLayer Perceptrons (MLPs). In this paper we consider weighted conditional ALC knowledge bases with typicality in the finitely many-valued case, through three different semantic constructions. For the boolean fragment LC of ALC we exploit ASP and asprin for reasoning with the concept-wise multipreference entailment under a φ-coherent semantics, suitable to characterize the stationary states of MLPs. As a proof of concept, we experiment with the proposed approach for checking properties of trained MLPs.

Efficient lifting of symmetry breaking constraints for complex combinatorial problems (Best Student Paper Award)
PRESENTER: Alice Tarzariol

ABSTRACT. Many industrial applications require finding solutions to challenging combinatorial problems. Efficient elimination of symmetric solution candidates is one of the key enablers for high-performance solving. However, existing model-based approaches for symmetry breaking are limited to problems for which a set of representative and easily-solvable instances is available, which is often not the case in practical applications. This work extends the learning framework and implementation of a model-based approach for Answer Set Programming to overcome these limitations and address challenging problems, such as the Partner Units Problem. In particular, we incorporate a new conflict analysis algorithm in the Inductive Logic Programming system ILASP, redefine the learning task, and suggest a new example generation method to scale up the approach. The experiments conducted for different kinds of Partner Units Problem instances demonstrate the applicability of our approach and the computational benefits due to the first-order constraints learned.

14:00-15:00 Session 78C: Invited Talk (KR)
Location: Taub 2
From Non-monotonic Reasoning to Argumentation and Commonsense (Great Moments in KR Talk)

ABSTRACT. Interest in non-monotonic reasoning arose from the need to model how people in everyday life use defeasible knowledge (i.e. knowledge that is normally correct but can have exceptions). One path of research that has evolved from this is computational models of argument. This is concerned with modelling the human ability to handle incomplete and conflicting situations through the identification and analysis of arguments and counterarguments. A key problem in computational models of argument is the need to handle enthymemes. These are arguments that have some premises, and possibly the claim, being implicit. To understand enthymemes, for example exchanged during a dialogue, we often require commonsense reasoning. Whilst the need to formalise commonsense reasoning was a driver for research in non-monotonic reasoning, the questions of how to formalise commonsense reasoning, and how to acquire the knowledge necessary for commonsense reasoning, remain largely unanswered. In this talk, we will review the fields of non-monotonic reasoning, computational models of argument, and commonsense reasoning, and focus on the handling of enthymemes.

14:00-15:30 Session 78D: Complexity and Circuits (LICS)

"Complexity and Circuits": 6 papers (12 min presentation + 2-3 min Q&A)

Location: Taub 1
Separating LREC from LFP

ABSTRACT. LREC= is an extension of first-order logic with a logarithmic recursion operator. It was introduced by Grohe et al. and shown to capture the complexity class L (logarithmic space) over trees and interval graphs. It does not capture L in general, as it is contained in FPC (fixed-point logic with counting). We show that this containment is strict. In particular, we show that the path systems problem, a classic P-complete problem which is definable in LFP (least fixed-point logic), is not definable in LREC=. This shows that the logarithmic recursion mechanism is provably weaker than general least fixed points.

Reasonable Space for the Lambda-Calculus, Logarithmically
PRESENTER: Gabriele Vanoni

ABSTRACT. Can the lambda-calculus be considered as a reasonable computational model? Can we use it for measuring the time and space consumption of algorithms? While the literature contains positive answers about time, much less is known about space. This paper presents a new reasonable space cost model for the lambda-calculus, given by the space used by a variant over the Krivine abstract machine, and which is the first one able to account for logarithmic space. Moreover, we study the time behavior of our machine and show how to transport the results to the call-by-value lambda-calculus.
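For orientation, here is a minimal sketch of the ordinary (non-space-instrumented) Krivine machine the paper's variant builds on, for pure lambda-terms in de Bruijn notation; the encoding and names are illustrative, and the paper's machine differs precisely in how environments and closures are represented to achieve logarithmic space accounting.

```python
# Krivine abstract machine (call-by-name, weak head reduction).
# Terms: ("var", n) | ("lam", body) | ("app", fun, arg), de Bruijn indices.
# A state is (term, environment, stack); environments and stack entries
# are closures, i.e. (term, environment) pairs.

def krivine(term, env=None, stack=None):
    env, stack = env or [], stack or []
    while True:
        kind = term[0]
        if kind == "app":                    # push the argument as a closure
            stack.append((term[2], env))
            term = term[1]
        elif kind == "lam":
            if not stack:                    # weak head normal form reached
                return term, env
            term = term[1]
            env = [stack.pop()] + env        # bind the top closure
        else:                                # variable: jump into its closure
            term, env = env[term[1]]

# K = \x. \y. x applied to I = \z. z and to \w. w w reduces to I.
I = ("lam", ("var", 0))
K = ("lam", ("lam", ("var", 1)))
t = ("app", ("app", K, I), ("lam", ("app", ("var", 0), ("var", 0))))
whnf, _ = krivine(t)
print(whnf)  # ('lam', ('var', 0))
```

Note that the second argument is never forced: the machine is call-by-name, pushing closures unevaluated and only entering them when a variable demands it.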

Cyclic Implicit Complexity
PRESENTER: Gianluca Curzi

ABSTRACT. Circular (or cyclic) proofs have received increasing attention in recent years, and have been proposed as an alternative setting for studying (co)inductive reasoning. In particular, now several type systems based on circular reasoning have been proposed. However, little is known about the complexity theoretic aspects of circular proofs, which exhibit sophisticated loop structures atypical of more common 'recursion schemes'.

This paper attempts to bridge the gap between circular proofs and implicit computational complexity (ICC). Namely we introduce a circular proof system based on Bellantoni and Cook's famous safe-normal function algebra, and we identify proof theoretical constraints, inspired by ICC, to characterise the polynomial-time and elementary computable functions. Along the way we introduce new recursion theoretic implicit characterisations of these classes that may be of interest in their own right.

Identity testing for radical expressions
PRESENTER: Klara Nosan

ABSTRACT. We study the Radical Identity Testing problem (RIT): Given an algebraic circuit representing a multivariate polynomial $f(x_1, \dots, x_k)$ and nonnegative integers $a_1, \dots, a_k$ and $d_1, \dots, d_k$, written in binary, test whether the polynomial vanishes at the real radicals $\sqrt[d_1]{a_1}, \dots,\sqrt[d_k]{a_k}$, i.e., test whether $f(\sqrt[d_1]{a_1}, \dots, \sqrt[d_k]{a_k}) = 0$. We place the problem in coNP assuming the Generalised Riemann Hypothesis (GRH), improving on the straightforward PSPACE upper bound obtained by reduction to the existential theory of reals. Next we consider a restricted version, called 2-RIT, where the radicals are square roots of prime numbers, written in binary. It was known since the work of Chen and Kao that 2-RIT is at least as hard as the polynomial identity testing problem, however no better upper bound than PSPACE was known prior to our work. We show that 2-RIT is in coRP assuming GRH and in coNP unconditionally. Our proof relies on theorems from algebraic and analytic number theory, such as the Chebotarev density theorem and quadratic reciprocity.
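A tiny special case of RIT can be decided exactly by computing in the ring generated by the radicals; the sketch below (illustrative only, not the paper's coNP/coRP algorithms) works in Z[sqrt(2), sqrt(3)] with integer coordinates over the basis 1, sqrt(2), sqrt(3), sqrt(6).

```python
# Exact arithmetic in Z[sqrt2, sqrt3]: elements are 4-tuples of integer
# coefficients over the basis (1, sqrt2, sqrt3, sqrt6). This decides
# f(sqrt2, sqrt3) = 0 exactly for polynomials f with integer coefficients.

BASIS = [(0, 0), (1, 0), (0, 1), (1, 1)]   # exponents of sqrt2, sqrt3

def add(u, v):
    return tuple(a + b for a, b in zip(u, v))

def mul(u, v):
    out = [0, 0, 0, 0]
    for i, (a2, a3) in enumerate(BASIS):
        for j, (b2, b3) in enumerate(BASIS):
            e2, e3 = a2 + b2, a3 + b3
            # sqrt2^2 = 2 and sqrt3^2 = 3 fold back into the coefficient
            c = u[i] * v[j] * (2 ** (e2 // 2)) * (3 ** (e3 // 2))
            out[BASIS.index((e2 % 2, e3 % 2))] += c
    return tuple(out)

SQRT2, SQRT3 = (0, 1, 0, 0), (0, 0, 1, 0)

# f(x, y) = x^2 * y^2 - 6 vanishes at (sqrt2, sqrt3) ...
val = add(mul(mul(SQRT2, SQRT2), mul(SQRT3, SQRT3)), (-6, 0, 0, 0))
print(val == (0, 0, 0, 0))  # True

# ... while g(x, y) = x*y - 2 does not, since sqrt6 != 2.
val2 = add(mul(SQRT2, SQRT3), (-2, 0, 0, 0))
print(val2 == (0, 0, 0, 0))  # False
```

The hardness in RIT comes from the radicals being given in binary: the analogous basis has exponentially many elements, which is why the paper's number-theoretic techniques are needed.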

Complexity of Modular Circuits
PRESENTER: Piotr Kawałek

ABSTRACT. We study how the complexity of modular circuits computing AND depends on the depth of the circuits and the prime factorization of the modulus they use. In particular, our construction of subexponential circuits of depth 2 for AND helps us to classify (modulo the Exponential Time Hypothesis) modular circuits with respect to the complexity of their satisfiability. We also study a precise correlation between this complexity and the sizes of modular circuits realizing AND. On the other hand, we show that AND can be computed by a polynomial-size probabilistic modular circuit of depth 2 (with O(log n) random bits), providing a probabilistic computational model that cannot be derandomized.

We apply our methods to determine (modulo ETH) the complexity of solving equations over groups of symmetries of regular polygons with an odd number of sides. These groups form a paradigm for some of the remaining cases in characterizing finite groups with respect to the complexity of their equation solving.

Choiceless Polynomial Time with Witnessed Symmetric Choice

ABSTRACT. We extend Choiceless Polynomial Time (CPT), currently the only remaining promising candidate in the quest for a logic capturing PTime, so that this extended logic has the following property: for every class of structures for which isomorphism is definable, the logic automatically captures PTime.

For the construction of this logic we extend CPT by a witnessed symmetric choice operator. This operator allows for choices from definable orbits. But, to ensure polynomial time evaluation, automorphisms have to be provided to certify that the choice set is indeed an orbit.

We argue that, in this logic, definable isomorphism implies definable canonization. Thereby, our construction removes the non-trivial step of extending isomorphism definability results to canonization. This step was a part of proofs that show that CPT or other logics capture PTime on a particular class of structures. The step typically required substantial extra effort.

14:00-15:30 Session 78E (Mentoring Workshop)
Location: Taub 6
On Time and Space (of Humans)

ABSTRACT. Work-life balance and time management are inextricably intertwined. Especially in the demanding careers (and lives) of humans specializing in formal methods, having a clear strategy for time and space is vital. How do we manage our time effectively and find the space, both in our schedules and in the places we live and work, to fit in what's necessary and what's important? While this has long been a challenge, the COVID-19 pandemic further complicated the solution space. In this talk, I will discuss my personal lessons learned from an unconventional path and offer strategies for others to find theirs. Sometimes, earning a PhD in formal logic necessitates winning the California Triple Crown (a cycling event), and increasing office productivity by 20% involves re-arranging office furniture. This talk will explain how to make such discoveries.

From PhD to industry: A recent graduate’s perspective

ABSTRACT. To a PhD student who deeply focuses on academic research in their day-to-day work, moving to industry may seem unattainable. As a recent graduate, I will share my personal experience of applying for internships, doing coding interviews, working in a startup, as well as working in a big company. I will also give an overview of the project I am currently working on as an applied scientist in the Prime Video Automated Reasoning team.

14:00-15:30 Session 78F: QBF + Awards / competitions (SAT)
QBF Merge Resolution is powerful but unnatural
PRESENTER: Gaurav Sood

ABSTRACT. The Merge Resolution proof system (MRes) for QBFs, proposed by Beyersdorff et al. in 2019, explicitly builds partial strategies inside refutations. The original motivation for this approach was to overcome the limitations encountered in the long-distance Q-Resolution proof system (LDQRes), where the syntactic side-conditions, while prohibiting all unsound resolutions, also end up prohibiting some sound resolutions. However, while the advantage of MRes over many other resolution-based QBF proof systems was already demonstrated, a comparison with LDQRes itself had remained open. In this paper, we settle this question. We show that MRes has an exponential advantage over not only LDQRes, but even over LQUplusRes and IRM, the most powerful among currently known resolution-based QBF proof systems. Combining this with results from Beyersdorff et al. 2020, we conclude that MRes is incomparable with LQURes and LQUplusRes.

Our proof method reveals two additional and curious features about MRes: (i) MRes is not closed under restrictions, and is hence not a natural proof system, and (ii) weakening axiom clauses with existential variables provably yields an exponential advantage over MRes without weakening. We further show that in the context of regular derivations, weakening axiom clauses with universal variables provably yields an exponential advantage over MRes without weakening. These results suggest that MRes is better used with weakening, though whether MRes with weakening is closed under restrictions remains open. We note that even with weakening, MRes continues to be simulated by QBFeFrege (the simulation of ordinary MRes was shown recently by Chew and Slivovsky).

Awards / Competition Results - I (with Live Broadcast)

ABSTRACT. The following awards will be presented:

SAT Best Paper Awards: 10 minutes

Test of Time Award: 35 minutes

MaxSAT Competition:  20 minutes


The ceremony will be broadcast live at: https://tinyurl.com/sat22awards

15:00-15:30 Session 79A: Planning (KR)
Location: Taub 2
Discovering User-Interpretable Capabilities of Black-Box Planning Agents
PRESENTER: Pulkit Verma

ABSTRACT. Several approaches have been developed for answering users' specific questions about AI behavior and for assessing their core functionality in terms of primitive executable actions. However, the problem of summarizing an AI agent's broad capabilities for a user is comparatively new. This paper presents an algorithm for discovering from scratch the suite of high-level "capabilities" that an AI system with arbitrary internal planning algorithms/policies can perform. It computes conditions describing the applicability and effects of these capabilities in user-interpretable terms. Starting from a set of user-interpretable state properties, an AI agent, and a simulator that the agent can interact with, our algorithm returns a set of high-level capabilities with their parameterized descriptions. Empirical evaluation on several game-based scenarios shows that this approach efficiently learns descriptions of various types of AI agents in deterministic, fully observable settings. User studies show that such descriptions are easier to understand and reason with than the agent's primitive actions.

15:00-15:30 Session 79B: Deontic Logic (KR)
Location: Taub 3
Dynamic Deontic Logic for Permitted Announcements

ABSTRACT. In this paper, we introduce and study a dynamic deontic logic for permitted announcements. In our logic framework, it is permitted to announce $\phi$ if announcing $\phi$ would not lead to forbidden knowledge. It is shown that the logic is not compact, and we propose a sound and weakly complete Hilbert-style axiomatisation. We also study the computational complexity of the model checking problem and the decidability of the satisfiability problem. Finally, we introduce a neighbourhood semantics with a strongly complete axiomatisation.

15:30-16:00Coffee Break
15:30-17:00 Session 80A: KR & Machine Learning (KR)
Location: Taub 2
Learning Typed Rules over Knowledge Graphs

ABSTRACT. Rule learning from large datasets has regained extensive interest as rules are useful for developing explainable approaches to many applications in knowledge graphs. However, existing methods for rule learning are still limited in terms of scalability and rule quality. This paper presents a new method for learning typed rules by employing entity class information. Our experimental evaluation shows the superiority of our system compared to state-of-the-art rule learners. In particular, we demonstrate the usefulness of typed rules in reasoning for link prediction.

A Graph Neural Network Reasoner for Game Description Language
PRESENTER: Alvaro Gunawan

ABSTRACT. General Game Playing (GGP) aims to develop agents that are able to play any game with only the rules given. The game rules are encoded in the Game Description Language (GDL). A GGP player processes the game rules to obtain game states and expand the game tree search for an optimal move. The recent accomplishments of AlphaGo and AlphaZero have triggered new work on extending neural network approaches to GGP. In these works, the neural networks are used only for optimal move selection, while the components dealing with GDL still use logic-based methods. This motivates us to explore whether a neural network based method would be able to approximate the logical inference in GDL with high accuracy. The structured nature of logic tends to be a difficulty for neural networks, which rely heavily on statistical features. Inspired by recent work on neural network learning for logical entailments, we propose a neural network based reasoner that is able to learn logical inferences for GDL. We present three key contributions: (i) a general, game-agnostic graph-based representation for game states described in GDL, (ii) methods for generating samples and datasets to frame the GDL inference task as a neural network based machine learning problem and (iii) a GNN based neural reasoner that is able to learn and infer various game states with high accuracy and has some capability of transfer learning across games.

Explaining Causal Models with Argumentation: the Case of Bi-variate Reinforcement
PRESENTER: Francesca Toni

ABSTRACT. Causal models are playing an increasingly important role in machine learning, particularly in the realm of explainable AI. We introduce a conceptualisation for generating argumentation frameworks (AFs) from causal models for the purpose of forging explanations for the models’ outputs. The conceptualisation is based on reinterpreting desirable properties of semantics of AFs as explanation moulds, which are means for characterising argumentatively the relations in the causal model. We demonstrate our methodology by reinterpreting the property of Bi-variate Reinforcement as an explanation mould to forge bipolar AFs as explanations for the outputs of causal models. We perform a theoretical evaluation of these argumentative explanations, examining whether they satisfy a range of desirable explanatory and argumentative properties.

15:30-16:30 Session 80B: Argumentation (KR)
Location: Taub 3
On Dynamics in Structured Argumentation Formalisms
PRESENTER: Anna Rapberger

ABSTRACT. In this paper we contribute to the investigation of dynamics in assumption-based argumentation (ABA) and investigate situations where a given knowledge base undergoes certain changes. We show that two frequently investigated problems, namely enforcement of a given target atom and deciding strong equivalence of two given ABA frameworks, are NP-complete in general. Interestingly, these problems are both tractable for abstract argumentation frameworks (AFs) which admit a close correspondence to ABA by constructing semantics-preserving instances. Inspired by this observation, we search for tractable fragments for ABA frameworks by means of the instantiated AFs. We argue that the usual instantiation procedure is not suitable for the investigation of dynamic scenarios since too much information is lost when constructing the AF. We thus consider an extension of AFs, called cvAFs, equipping arguments with conclusions and vulnerabilities in order to better anticipate their role after the underlying knowledge base is extended. We investigate enforcement and strong equivalence for cvAFs and present syntactic conditions to decide them. We show that the correspondence between cvAFs and ABA frameworks is close enough to capture ABA also in dynamic scenarios. This yields the desired tractable ABA fragment. We furthermore discuss consequences for the corresponding problems for logic programs.

Defining Defense and Defeat in Abstract Argumentation From Scratch -- A Generalizing Approach
PRESENTER: Lydia Blümel

ABSTRACT. We propose a general framework to investigate semantics of Dung-style argumentation frameworks (AFs) by means of generic defeat operators. After establishing the technical foundations, we propose natural generic versions of Dung's classical semantics. We demonstrate how classical as well as recent proposals can be captured by our approach when utilizing suitable notions of defeat. We perform an investigation of basic properties which semantics inherit from the underlying defeat operator. In particular, we show under which conditions a counterpart to Dung's fundamental lemma can be inferred and how it ensures the existence of the generalized version of complete extensions. We contribute to a principle-based study of AF semantics by discussing properties tailored to compare different defeat operators. Finally, we report computational complexity results which hold in our general framework.

16:00-16:30 Session 81B: Scheduling & Planning (ICLP)
Location: Taub 9
Determining Action Reversibility in STRIPS Using Answer Set Programming with Quantifiers
PRESENTER: Wolfgang Faber

ABSTRACT. In the field of automated planning, an action is called reversible when other actions can be applied in order to revert the effects of this action and return to the original state. In recent years, there has been renewed interest in this topic, which led to novel results in the widely known STRIPS formalism and the PDDL planning language.

In this paper, we aim to solve the computational problem of deciding action reversibility in a practical setting, applying recent advances in the field of logic programming. In particular, a quantified extension of Answer Set Programming (ASP) named ASP with Quantifiers (ASP(Q)) has been proposed by Amendola, Ricca, and Truszczynski, which allows for stacking logic programs by quantifying over answer sets of the previous layer. This language is well-suited to express encodings for the action reversibility problem, since this problem naturally contains a quantifier alternation. In addition, a prototype solver for ASP(Q) is currently under development. We make use of the ASP(Q) language to offer an encoding for action reversibility, and then report on preliminary benchmark results on how well this encoding performs compared to classical ASP.
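The quantifier alternation mentioned in the abstract can be illustrated with a brute-force sketch: an action is reversible iff, for every state in which it applies, there exists an action sequence restoring that state. The following is a minimal Python illustration over a hypothetical toy STRIPS domain (the domain, names, and brute-force search are ours, not the paper's ASP(Q) encoding):

```python
from itertools import product

ATOMS = ("door_open", "light_on")

# Toy STRIPS actions: name -> (preconditions, add effects, delete effects).
ACTIONS = {
    "open_door":  (set(), {"door_open"}, set()),
    "close_door": ({"door_open"}, set(), {"door_open"}),
    "burn_light": (set(), {"light_on"}, set()),  # no action deletes light_on
}

def applicable(state, act):
    pre, _, _ = ACTIONS[act]
    return pre <= state

def apply_action(state, act):
    _, add, dele = ACTIONS[act]
    return (state - dele) | add

def reachable(start):
    """All states reachable from `start` via any action sequence."""
    seen, frontier = {start}, [start]
    while frontier:
        s = frontier.pop()
        for a in ACTIONS:
            if applicable(s, a):
                t = apply_action(s, a)
                if t not in seen:
                    seen.add(t)
                    frontier.append(t)
    return seen

def reversible(act):
    """Forall states s where act applies: exists a plan back to s."""
    all_states = [frozenset(a for a, b in zip(ATOMS, bits) if b)
                  for bits in product([False, True], repeat=len(ATOMS))]
    return all(s in reachable(apply_action(s, act))
               for s in all_states if applicable(s, act))

print(reversible("open_door"))   # True: close_door undoes it
print(reversible("burn_light"))  # False: light_on can never be deleted
```

The explicit enumeration of all states is exactly what an ASP(Q) encoding avoids: the forall/exists structure is pushed into the quantifier layers and delegated to the solver.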

Solving a Multi-resource Partial-ordering Flexible Variant of the Job-shop Scheduling Problem with Hybrid ASP

ABSTRACT. In many industries, scheduling is a key component to efficient management of resources and, thus, ensuring competitiveness and success of companies by minimizing the waste of time and money. In this work, we propose a Multi-resource Partial-ordering Flexible Job-shop Scheduling (MPFJSS) formulation to capture scheduling scenarios where partially-ordered sequences of operations must be scheduled on multiple required resources, such as tools and specialists. The resources are flexible and can process one or more operations based on their characteristics. We build a model using Answer Set Programming (ASP) in which the execution time of operations is determined using Difference Logic. Furthermore, we introduce two multi-shot strategies that identify the time bounds and find a schedule while minimizing the total tardiness. We conducted experiments on a set of instances provided by a semiconductor manufacturer, and the results show that the proposed model can find schedules for 87 out of 91 real-world instances.

16:00-16:45 Session 81C: CSP, SAT and Boolean Algebra (LICS)

"CSP, SAT, Boolean algebra": 3 papers (12 min presentation + 2-3 min Q&A)

Location: Taub 1
On Almost-Uniform Generation of SAT Solutions: The power of 3-wise independent hashing

ABSTRACT. Given epsilon and a Boolean formula F, the problem of almost-uniform generation seeks to generate solutions such that every solution is generated with a probability that is within a (1+epsilon)-multiplicative factor of 1/#F, where #F is the number of solutions of F. The problem of almost-uniform generation was shown to be inter-reducible to that of randomized approximate counting in the seminal work of Jerrum, Valiant, and Vazirani (TCS, 1986). The proposed reduction, however, requires a linear number of calls to an approximate counter, and therefore provides an O(n log (n) log (n/epsilon)) algorithm that employs pairwise independent hash functions.

In this work, we propose a new algorithm that makes only one call to the approximate counter, and in turn provides an O(log n * log (1/epsilon) + 1/epsilon) algorithm for almost-uniform generation. The key ingredient of our approach is a beautiful combination of the usage of approximate counting and 3-wise independent hash functions. Since the standard tabulation-based hash family proposed by Carter and Wegman (STOC 1977) is known to be 3-wise independent, our scheme can be highly efficient in practical applications where a SAT solver is typically used in lieu of an NP oracle. We demonstrate that theoretical improvements translate to practice; in particular, we conduct a comprehensive study over 562 benchmarks and demonstrate that while JVV would time out for 544 out of 562 instances, our proposed scheme can handle all 562 instances. To the best of our knowledge, this is the first almost-uniform generation scheme that can handle practical instances from real-world applications. We also present a nuanced analysis focusing on both the size of SAT queries and the number of queries.
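The tabulation-based hash family mentioned in the abstract can be sketched in a few lines. Below is a minimal Python illustration of Carter-Wegman-style simple tabulation hashing, which is known to be 3-wise independent; the function and parameter names are ours, and this is only a sketch of the ingredient, not the authors' implementation:

```python
import random

def make_tabulation_hash(key_bits, out_bits, seed=0):
    """Simple tabulation hashing: split a key_bits-bit integer key into
    bytes and XOR together one random table entry per byte. The resulting
    hash family is 3-wise independent."""
    rng = random.Random(seed)
    n_chunks = (key_bits + 7) // 8
    tables = [[rng.getrandbits(out_bits) for _ in range(256)]
              for _ in range(n_chunks)]

    def h(key):
        out = 0
        for i, table in enumerate(tables):
            out ^= table[(key >> (8 * i)) & 0xFF]
        return out

    return h

# Hash 16-bit candidate solutions into 2^8 cells.
h = make_tabulation_hash(16, 8, seed=42)
assert 0 <= h(0xBEEF) < 256
```

In hashing-based generation schemes of this kind, an approximate count of #F fixes the number of cells, a random cell value c is drawn, and the SAT solver enumerates the solutions x with h(x) = c, one of which is returned.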

Smooth approximations and CSPs over finitely bounded homogeneous structures
PRESENTER: Michael Pinsker

ABSTRACT. We introduce the novel machinery of smooth approximations, and apply it to confirm the CSP dichotomy conjecture for first-order reducts of the random tournament, and to give new short proofs of the conjecture for various homogeneous graphs including the random graph (STOC'11, ICALP'16), and for expansions of the order of the rationals (STOC'08). Apart from obtaining these dichotomy results, we show how our new proof technique allows us to unify and significantly simplify the previous results from the literature. For all but the last structure, we moreover characterise for the first time those CSPs which are solvable by local consistency methods, again using the same machinery.

A first-order completeness result about characteristic Boolean algebras in classical realizability

ABSTRACT. We prove the following completeness result about classical realizability: given any Boolean algebra with at least two elements, there exists a Krivine-style classical realizability model whose characteristic Boolean algebra is elementarily equivalent to it. This is done by controlling precisely which combinations of so-called "angelic" (or "may") and "demonic" (or "must") nondeterminism exist in the underlying model of computation.

16:00-17:00 Session 81E: Awards / Competitions (SAT)


Awards and Competition - II (with Live Broadcast)

ABSTRACT. The results of the following competitions will be announced:


QBF Competition: 20 minutes

MC Competition: 20 minutes

SAT Competition: 30 minutes

The ceremony will be broadcast live at: https://tinyurl.com/sat22awards


16:50-17:30 Session 83: Award Session: Kleene award, LICS Test-of-Time award, Church award (LICS)


Location: Taub 1
17:00-17:30 Session 84B: Closing (KR)
Location: Taub 2
KR 2022 Closing

ABSTRACT. KR 2022 Closing slides, announcing KR 2023