
08:30-09:00 Coffee & Refreshments
09:00-10:30 Session 26A (FoMLAS)

FoMLAS Session 5

Location: Ullmann 201
Scalable Verification of GNN-based Job Schedulers

ABSTRACT. Recently, Graph Neural Networks (GNNs) have been applied for scheduling jobs over clusters, achieving better performance than hand-crafted heuristics. Despite their impressive performance, concerns remain over whether these GNN-based job schedulers meet users' expectations about other important properties, such as strategy-proofness, sharing incentive, and stability. In this work, we consider formal verification of GNN-based job schedulers. We address several domain-specific challenges such as networks that are deeper and specifications that are richer than those encountered when verifying image and NLP classifiers. We develop vegas, the first general framework for verifying both single-step and multi-step properties of these schedulers based on carefully designed algorithms that combine abstractions, refinements, solvers, and proof transfer. Our experimental results show that vegas achieves significant speed-up when verifying important properties of a state-of-the-art GNN-based scheduler compared to previous methods.

VPN: Verification of Poisoning in Neural Networks
PRESENTER: Youcheng Sun

ABSTRACT. Neural networks are successfully used in many domains, including safety- and security-critical applications. As a result, researchers have proposed formal verification techniques for verifying neural network properties. A large majority of previous efforts have focused on checking local robustness in neural networks. We instead focus on another neural network security issue, namely data poisoning, whereby an attacker inserts a trigger into a subset of the training data in such a way that, at test time, this trigger causes the classifier to predict some target class. In this paper, we show how to formulate the absence of data poisoning as a property that can be checked with off-the-shelf verification tools, such as Marabou and nnenum. Counterexamples of failed checks constitute potential triggers that we validate through testing. We further show that the discovered triggers are ‘transferable’ from a small model to a larger, better-trained model, allowing us to analyze state-of-the-art performant models trained for image classification tasks.

Verification-Aided Deep Ensemble Selection

ABSTRACT. Deep neural networks (DNNs) have become the technology of choice for realizing a variety of complex tasks. However, as highlighted by many recent studies, even an imperceptible perturbation to a correctly classified input can lead to misclassification by a DNN. This renders DNNs vulnerable to strategic input manipulations by attackers, and also prone to oversensitivity to environmental noise.

To mitigate this phenomenon, practitioners apply joint classification by an ensemble of DNNs. By aggregating the classification outputs of different individual DNNs for the same input, ensemble-based classification reduces the risk of misclassifications due to the specific realization of the stochastic training process of any single DNN. However, the effectiveness of a DNN ensemble is highly dependent on its members not simultaneously erring on many different inputs.

In this case study, we harness recent advances in DNN verification to devise a methodology for identifying ensemble compositions that are less prone to simultaneous errors, even when the input is adversarially perturbed --- resulting in more robustly-accurate ensemble-based classification.

Our proposed framework uses a DNN verifier as a backend, and includes heuristics that help reduce the high complexity of directly verifying ensembles. More broadly, our work puts forth a novel universal objective for formal verification that can potentially improve the robustness of real-world, deep-learning-based systems across a variety of application domains.

09:00-10:30 Session 26B: Foundations (GDE)

Welcome ceremony for the 2nd Workshop on Goal-directed Execution of Answer Set Programs (GDE 2022), followed by a tutorial (40 minutes) and two talks, each roughly 20 minutes plus 5 minutes for discussion and questions.

Location: Ullmann 310
Tutorial: Automating Commonsense Reasoning

ABSTRACT. Automating commonsense reasoning, i.e., automating the human thought process, has been considered fiendishly difficult. It is widely believed that automation of commonsense reasoning is needed to build intelligent systems that can rival humans. We argue that answer set programming (ASP) along with its goal-directed implementation allows us to reach this automation goal. We discuss essential elements needed for automating the human thought process, and show how they are realized in ASP and the s(CASP) goal-directed ASP engine.

A Query Evaluation Method for ASP with Abduction

ABSTRACT. In this paper, we present a goal-directed proof procedure for ASP with abduction. The proposed procedure is correct for any consistent abductive framework in the sense of Kakas et al. In other words, if the procedure succeeds, there is a set of hypotheses which satisfies the query, and if the procedure finitely fails, there is no such set. If we do not consider abducibles, this procedure is a goal-directed proof procedure for ASP as well. NOTE: This paper is an extended abstract of a paper with the title "A Query Evaluation Method for Abductive Logic Programming" that appeared in the Proceedings of the Joint International Conference and Symposium on Logic Programming (JICSLP'92), pp. 671-685.

First order logic and commonsense reasoning: a path less travelled
PRESENTER: Tanel Tammet

ABSTRACT. The context of the paper is developing logic-based components for hybrid -- machine learning plus logic -- commonsense question answering systems. The paper presents the main principles and several lessons learned from implementing an automated reasoner able to handle both undecidable exceptions and numerical confidences for full first order logic. Although the described reasoner is based on the resolution method, some of these lessons may be useful for the further development of ASP systems as well.

09:00-10:30 Session 26C: Confluence criteria and completeness (IWC)
Location: Ullmann 306
Development Closed Critical Pairs: Towards a Formalized Proof
PRESENTER: Aart Middeldorp

ABSTRACT. Having development closed critical pairs is a well-known sufficient condition for confluence of left-linear term rewrite systems. We present formalized results involving proof terms and unification that play an important role in the proof.

On Confluence of Parallel-Innermost Term Rewriting
PRESENTER: Carsten Fuhs

ABSTRACT. We revisit parallel-innermost term rewriting as a model of parallel computation on inductive data structures. We propose a simple sufficient criterion for confluence of parallel-innermost rewriting based on non-overlappingness. Our experiments on a large benchmark set indicate the practical usefulness of our criterion. We close with a challenge to the community to develop more powerful dedicated techniques for this problem.

Uniform Completeness

ABSTRACT. We introduce uniform completeness and give a local characterisation of it. We show it yields a complete method for showing completeness of rewrite systems.

09:00-10:30 Session 26D (LogTeach)
Location: Ullmann 301
Teaching Logic

ABSTRACT. We follow Ludwig Wittgenstein: "How can the all-embracing logic which mirrors the world use such special catches and manipulations? Only because all these are connected into an infinitely fine network, the great mirror." (Tractatus Logico-Philosophicus 5.511) Teaching logic means therefore to teach how surprisingly straightforward insights generate the most abstract structures known. In this lecture we intend to develop tools to motivate this generation for beginners.

Logic for computer science: starting earlier, at school

ABSTRACT. While it is true that the place of logic in undergraduate computer science curricula is either in danger or has already been lost, we suggest that solutions should look not only at the immediate term, by way of redesigning logic teaching at the undergraduate level, but also consider the long term. We propose that this requires bringing in some logic at the high school level, and re-examining logic in undergraduate mathematics curricula.

09:00-10:30 Session 26E: Joint QBF Session (PC and QBF)
Location: Ullmann 309
QBF Solvers and their Proof Systems

ABSTRACT. We give an overview of the main paradigms in QBF solving, as well as techniques for preprocessing. In each case, we present corresponding proof systems and discuss lower bound results. We point to open problems, and consider current trends in solver development.

QCDCL with Cube Learning or Pure Literal Elimination – What is best?
PRESENTER: Benjamin Böhm

ABSTRACT. Quantified conflict-driven clause learning (QCDCL) is one of the main approaches for solving quantified Boolean formulas (QBF). We formalise and investigate several versions of QCDCL that include cube learning and/or pure-literal elimination, and formally compare the resulting solving models via proof complexity techniques. Our results show that almost all of the QCDCL models are exponentially incomparable with respect to proof size (and hence solver running time), pointing towards different orthogonal ways how to practically implement QCDCL.

09:00-10:30 Session 26F (PCCR)
Location: Ullmann 203
Parameterized algorithmics in access control: linking theory and practice

ABSTRACT. Access control is an important area of information security. The first paper on the application of parameterized algorithmics in access control was published in 2010 by Qihua Wang and Ninghui Li. In that paper, the authors suggested a parameterization of the Workflow Satisfiability Problem (WSP). In its basic form, WSP is simply the Constraint Satisfaction Problem, which Wang and Li parameterized by the number of steps (= variables, in CSP terms). Wang and Li proved that WSP is W[1]-hard and identified a practically important class of constraints for which WSP is fixed-parameter tractable (FPT). They also conducted computational experiments using an off-the-shelf solver. Unfortunately, in their paper, the theoretical results and practical computation were not linked.

In the last decade, Jason Crampton (Information Security Group, RHUL), the speaker, and their co-authors have studied theoretical and practical computing aspects, initially for WSP with user-independent (UI) non-unary constraints and its variations, and then for other access control problems. Several FPT algorithms and lower bounds were obtained; FPT algorithms were implemented and off-the-shelf solvers were used in computational experiments. Among the practical computing highlights are FPT algorithms for WSP with UI non-unary constraints that perform much better in practice than off-the-shelf solvers, and the possibility for off-the-shelf solvers to emulate FPT algorithms for WSP and other access control problems (approximately, the exponential growth of the runtime is with respect to the parameter only). This theoretical and practical computing research has been widely recognized by the access control community: most of the papers were published in top information security journals, and some received best paper awards at the annual symposium on access control (SACMAT).

MaxSAT: Parameterized, Parallel, Absolute

ABSTRACT. In this talk I will discuss two promising applications of parameterized complexity for the natural generalization of Boolean satisfiability to optimization problems: MaxSAT, where the task is to find an assignment that maximizes the total weight of all satisfied clauses. As for most intractable optimization problems, it is rather hard to obtain parallel algorithms for MaxSAT with meaningful theoretical guarantees in the classical complexity-theoretic sense. The situation is fundamentally different, however, if we study the parallel parameterized complexity of the problem. In the first part of the talk I will give a gentle introduction to the underlying framework of parallel parameterized algorithms and parameterized circuit complexity. Along the way we will motivate the definitions by pointing out, at a high level, how to achieve various levels of parallelization for MaxSAT for different parameters within this model -- in contrast, in a sequential world all these results would simply collapse into the observation that MaxSAT is fixed-parameter tractable parameterized by any of these parameters.

The second application is an observation about the target function that we wish to maximize. The literature almost exclusively considers the case of positive weights. While the general case of the problem is only slightly restricted by this constraint, many special cases become trivial in the absence of negative weights. I present various versions of the problem that are parameterized intractable if negative weights are present. We will observe that negative weights lead to an interesting new version: instead of maximizing the sum of the weights of satisfied clauses, we can maximize the absolute value of that sum. This turns out to be surprisingly expressive, even when restricted to monotone formulas in disjunctive normal form with at most two literals per clause. In contrast to the versions without the absolute value, however, we will discuss that these variants are parallel fixed-parameter tractable by providing an easy-to-compute kernel.
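To make the absolute-value variant concrete, here is a small brute-force sketch (the clauses and weights below are invented for illustration; real instances are solved with dedicated MaxSAT machinery, not enumeration):

```python
from itertools import product

def max_abs_weight(clauses, weights, n_vars):
    """Maximize |sum of weights of satisfied clauses| by brute force.

    Clauses use DIMACS-style literals: positive k means variable k,
    negative k means its negation (variables are numbered from 1).
    """
    best = 0
    for bits in product([False, True], repeat=n_vars):
        total = sum(w for clause, w in zip(clauses, weights)
                    # a clause is satisfied if any of its literals holds
                    if any(bits[abs(lit) - 1] == (lit > 0) for lit in clause))
        best = max(best, abs(total))
    return best

# With a negative weight, the optimum of |sum| can come from making the
# sum as negative as possible: here x1=False, x2=True gives -5 + 1 = -4.
print(max_abs_weight([[1], [2], [1, 2]], [1, -5, 1], 2))  # -> 4
```

Note that with the same clauses, maximizing the plain sum would only reach 2 (by setting x1=True, x2=False), which is the distinction between the two objectives that the talk exploits.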

09:00-10:30 Session 26G: Applications (POS)
Location: Ullmann 311
Towards an Efficient CNF Encoding of Block Ciphers

ABSTRACT. SAT solvers are one of the primary tools for automatically assessing the security of block ciphers. Common CNF encodings of s-boxes are based on algebraic representations (finding low-degree polynomials) or on symbolic execution of an implementation of the considered function. However, those methods are not closely connected to the algorithms used in efficient SAT solvers. Therefore, we propose an application of minimal propagation-complete encodings, which by definition are tuned for modern satisfiability-checking algorithms.

During the construction of the Boolean formula, one often faces the problem of encoding linear XOR equations. The standard procedure includes a greedy shortening algorithm to decrease the size of the resulting encoding. Recently, the straight-line program problem has been successfully applied to obtaining efficient implementations of MDS matrices. In this paper, we propose using an algorithm for finding a short straight-line program as a shortening procedure for a system of linear XOR equations.

As a result, by combining the two approaches, we achieved a 20x speed-up in the algebraic cryptanalysis of the Small Scale AES block cipher compared to widely used algebraic representations.
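The need for shortening stems from the fact that the direct CNF encoding of a XOR equation is exponential in its length. A minimal sketch of that direct encoding (variables are DIMACS-style positive integers; this illustrates the blow-up, not the specific encoding used in the paper):

```python
from itertools import product

def xor_to_cnf(variables, rhs):
    """Encode x1 XOR ... XOR xn = rhs directly as CNF.

    Every assignment that falsifies the equation is forbidden by one
    clause, giving 2^(n-1) clauses in total -- hence the benefit of
    first shortening long XOR equations before encoding them.
    """
    clauses = []
    for assignment in product([0, 1], repeat=len(variables)):
        if sum(assignment) % 2 != rhs:
            # this clause is falsified exactly by this assignment
            clauses.append([v if a == 0 else -v
                            for v, a in zip(variables, assignment)])
    return clauses

print(xor_to_cnf([1, 2], 1))  # -> [[1, 2], [-1, -2]]
```

A XOR over 20 variables already needs 2^19 clauses under this direct scheme, which is why splitting long XOR chains into short ones (via fresh variables or a short straight-line program) matters for encoding size.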

Calculating Sufficient Reasons for Random Forest Classifiers

ABSTRACT. In this paper, we formalize the implementations of the decision tree and random forest classifiers of the Python Scikit-learn package, and we present a CNF encoding which can be used to calculate sufficient reasons, a.k.a. prime implicants, for them. Our decision tree encoding resembles a monotonic combinational circuit with pure input variables, which we exploit in our method for incremental enumeration of its prime implicants. Encoding the combination of several decision trees in a random forest would add a non-monotonic evaluation function to the root of the encoded circuit. In our approach, we instead solve an auxiliary SAT problem for enumerating valid leaf-node combinations of the random forest. Each valid leaf-node combination is used to incrementally update the original monotonic circuit encoding by extending the DNF at its root with a new term. Preliminary results show that enumerating prime implicants by incrementally updating the encoding is orders of magnitude faster than one-shot solving of the monolithic formula. We present preliminary runtime data, as well as some initial data about the size and number of samples found when translating the prime implicants back into database queries.

Adding Dual Variables to Algebraic Reasoning for Gate-Level Multiplier Verification
PRESENTER: Jakob Nordstrom

ABSTRACT. Algebraic reasoning has proven to be one of the most effective approaches for verifying gate-level integer multipliers, but it struggles with certain components, necessitating the complementary use of SAT solvers. For this reason, validation certificates require proofs in two different formats. Approaches to unifying the certificates are not scalable, meaning that the validation results can only be trusted up to the correctness of the compositional reasoning. We show in this work that using dual variables in the algebraic encoding, together with a novel tail-substitution and carry-rewriting method, removes the need for SAT solvers in the verification flow and yields a single, uniform proof certificate.

This is a presentation-only submission for a paper previously published at DATE '22.

09:00-10:30 Session 26H: Joint PC/QBF Session (QBF)

This session will be held jointly with the Proof Complexity (PC) Workshop.

Location: Ullmann 309
QBF Solvers and their Proof Systems

ABSTRACT. We give an overview of the main paradigms in QBF solving, as well as techniques for preprocessing. In each case, we present corresponding proof systems and discuss lower bound results. We point to open problems, and consider current trends in solver development.

09:00-10:30 Session 26I: Referring Expressions in Artificial Intelligence and Knowledge Representation Systems (Tutorial at KR-22, Part I) (REAI)

Location: Ullmann 102
Referring Expressions in Artificial Intelligence and Knowledge Representation Systems (Tutorial at KR-22 https://cs.uwaterloo.ca/~david/kr22/)

ABSTRACT. The tutorial introduces the audience to the concept of referring expressions: formulae that can be used to communicate the identities of otherwise abstract objects. The formalism provides foundations for a successful and unambiguous exchange of information about individuals between agents sharing common knowledge about those individuals, a task that is indispensable in most modern applications of knowledge representation and semantic technologies.

09:00-10:30 Session 26J (TERMGRAPH)

Invited Talk by Delia Kesner

Location: Ullmann 101
A Computational Interpretation of Girard’s Intuitionistic Proof-Nets

ABSTRACT. In this talk we will present a computational interpretation of Girard's proof nets for intuitionistic linear logic. More precisely, proof nets are a graphical formalism representing bureaucracy-free proofs, i.e., the order in which independent logical rules are applied in a derivation is abstracted. Despite the obvious advantage of the graphical notation, the essence of their corresponding operational semantics is not always clear from a programming point of view, and a term syntax often provides a complementary understanding of a (bureaucracy-free) graph framework.

Our goal is to define a new term language that establishes a faithful and fine-grained Curry-Howard correspondence with Girard's intuitionistic proof-nets, both from a static (objects) and a dynamic (reductions) point of view. On the static side, we identify an equivalence relation on terms which is sound and complete with respect to the classical notion of structural equivalence for proof-nets. On the dynamic side, we show that every single (exponential) step in the term calculus translates to a different single (exponential) step in the graphical formalism, thus capturing Girard's original proof-net granularity at the level of terms.

09:00-09:50 Session 26K (VardiFest)
Location: Taub 1
On the Effectiveness of Logic in Robotics

ABSTRACT. The research of Moshe Vardi has influenced many researchers in the field of robotics. During this talk I will give a bird's-eye view of how logic and synthesis are used in robotics, where a central question is how to produce robot motion from high-level specifications. The specifications declare what the robot must do, rather than how the task is to be done. I will focus on describing how Moshe Vardi's work has led to efficient task and motion planning solutions and synthesis methodologies for human-robot collaboration.

09:00-10:30 Session 26L: Doctoral Program (CP)
Location: Taub 4
Industrial Research Career Path as a Sequence of Constraint Satisfaction Problems
Solving the Non-Crossing MAPF for non-point-sized robots

ABSTRACT. This paper deals with the multi-agent path finding (MAPF) problem for a team of tethered robots. When considering point-sized robots, paths may share the same subpath provided that they do not cross, and we have shown in previous work that this case essentially amounts to solving an assignment problem: the objective is to find a set of non-crossing paths, and the makespan is equal to the length of the longest path. In this work, we extend this to the case of non-point-sized robots, where robot paths must be synchronized when they share the same subpath and waiting times are taken into account when computing the makespan. We prove that an upper bound can be computed by solving the linear sum assignment problem. We introduce a new variable neighborhood search method to improve the upper bound and show that it is robust across different instances. We also introduce a Constraint Programming model for solving the problem to optimality.
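As a rough illustration of the assignment step underlying the upper bound, here is a brute-force linear sum assignment over a made-up matrix of path lengths (the paper's actual bound relies on the linear sum assignment problem, which has polynomial-time algorithms; the enumeration below is only for tiny examples):

```python
from itertools import permutations

def linear_sum_assignment_bruteforce(cost):
    """Assign each robot (row) to a distinct target (column), minimizing
    total cost. Exponential brute force, fine only for tiny instances."""
    n = len(cost)
    best_perm, best_total = None, float("inf")
    for perm in permutations(range(n)):
        total = sum(cost[i][perm[i]] for i in range(n))
        if total < best_total:
            best_perm, best_total = perm, total
    return best_perm, best_total

# Hypothetical 3x3 matrix: lengths[i][j] = path length of robot i to target j.
lengths = [[4, 1, 3],
           [2, 0, 5],
           [3, 2, 2]]
perm, total = linear_sum_assignment_bruteforce(lengths)
print(perm, total)  # -> (1, 0, 2) 5
```

In the point-sized setting described above, the makespan of the returned assignment is then the longest of the selected paths (here max(1, 2, 2) = 2).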

Extended Abstract for: Scheduling the Equipment Maintenance of an Electric Power Transmission Network using Constraint Programming
PRESENTER: Louis Popovic

ABSTRACT. Modern electrical power utilities must maintain their electrical equipment and replace it as it reaches the end of its useful life. The Transmission Maintenance Scheduling (TMS) problem consists of generating an annual maintenance plan for electric power transmission equipment while maintaining the stability of the network and ensuring a continuous power flow for customers. Each year, a list of equipment that needs to be maintained or replaced is available, and the goal is to generate an optimal maintenance plan. This paper proposes a constraint-based scheduling approach to solve the TMS problem. The model considers two types of constraints: (1) constraints that can be naturally formalized inside a constraint programming model, and (2) complex constraints for which the field specialists have no proper formalization. The latter cannot be integrated into the model due to their complexity; their satisfaction is instead verified by a black-box tool, a simulator mimicking the impact of a maintenance schedule on the real power network. The simulator is based on complex differential power-flow equations. Experiments are carried out at five strategic points of the Hydro-Québec power grid infrastructure, involving more than 200 pieces of electrical equipment and 300 withdrawal requests. Results show that our model is able to comply with most of the formalized and unformalized constraints. It also generates maintenance schedules within an execution time of a few minutes. The generated schedules are similar to the ones proposed by a field specialist and can be used to simulate maintenance programs for the next 10 years.

Extended Abstract: Sequence Variables for Routing Problems

ABSTRACT. Constraint Programming (CP) is one of the most flexible approaches for modeling and solving vehicle routing problems (VRPs). This paper proposes the sequence variable domain, which is inspired by the insertion graph and by the subset-bound domain for set variables. This domain representation, which targets VRP applications, allows for an efficient insertion-based search on a partial tour and for the implementation of simple yet efficient filtering algorithms for constraints that enforce time windows on the visits and capacities on the vehicles. Experimental results demonstrate the efficiency and flexibility of this CP domain for solving several hard VRP variants, including the Dial-A-Ride problem, the Patient Transportation problem, and the asymmetric TSP with time windows.

Combining Reinforcement Learning and Constraint Programming for Sequence-Generation Tasks with Hard Constraints - Extended Abstract
PRESENTER: Daphné Lafleur

ABSTRACT. In this work, we use Reinforcement Learning to combine Machine Learning (ML) and Constraint Programming (CP). We show that combining ML and CP allows the agent to reflect a pretrained network while taking into account constraints, leading to melodic lines that respect both the corpus' style and the music theory constraints.

09:15-10:30 Session 27 (Mentoring Workshop)
Location: Taub 7
Advancing Science with Platforms and Driving Scenarios: a perspective from a researcher at Microsoft Research

ABSTRACT. What does a research career in industry look like? While there is no single answer to this question, I will approximate one by describing a perspective based on personal experience from a career spanning work in a research institute, a startup, the core file systems group at Microsoft, and Microsoft Research. In each of these experiences, a background with a graduate degree, specifically with an emphasis on logic and program logics, proved useful, but in very different ways. The perspective of Microsoft Research is distinctive in that it is based on the pursuit of advancing science: it aims much wider than business objectives while at the same time benefiting from, and being of benefit to, business and product groups. I will describe an example of a platform as a research tool (in this case the SMT solver Z3) for driving foundational research and user scenarios. Platforms represent a researcher’s aim for long-term, general results that establish and test methodologies. I will describe experiences with combining platforms with driving scenarios based on business needs.

09:30-10:30 Session 28A (LFMTP)

Invited talk

Location: Taub 3
Metatheory of Proto-Quipper in Hybrid: Context Relations Revisited

ABSTRACT. We revisit our formalization of the metatheory of Proto-Quipper in the Hybrid logical framework. Proto-Quipper contains the core of Quipper, a programming language designed to express quantum circuits and allow them to be treated as data. Hybrid is a system designed to support the use of higher-order abstract syntax (also called lambda-tree syntax) for representing and reasoning about formal systems, implemented in the Coq Proof Assistant. Hybrid follows a two-level approach, where a specification logic (SL) is defined as an inductive type in Coq, and reasoning about an object logic (OL) such as Proto-Quipper is carried out using the SL. In this work, we adopt a linear SL, which provides infrastructure for reasoning directly about the linear type system of Proto-Quipper. In two-level systems, statements of theorems often relate two or more OL judgments, and a "context relation" is often needed to specify the constraints between them. In this work, we carry out a more careful study of context relations in a linear setting. Other goals of revisiting our formalization include extending the use of higher-order syntax to encode the notion of bound variables in circuits, and improving the representation of subtyping so that it is closer to the original formulation of Proto-Quipper. The latter allowed us to find an error in one of the on-paper proofs.

09:30-10:30 Session 28B (PLP)

Invited talk: Alexander Artikis & Periklis Mantenoglou (University of Piraeus, Greece) - Online Reasoning under Uncertainty with the Event Calculus

Location: Ullmann 305
Online Reasoning under Uncertainty with the Event Calculus

ABSTRACT. Activity recognition systems detect temporal combinations of 'low-level' or 'short-term' activities on sensor data streams. Such streams exhibit various types of uncertainty, often leading to erroneous recognition. We will present an extension of an interval-based activity recognition system which operates on top of a probabilistic Event Calculus implementation. Our proposed system performs online recognition, as opposed to batch processing, thus supporting streaming applications. Our empirical analysis demonstrates the efficacy of our system, comparing it to interval-based batch recognition, point-based recognition, as well as structure and weight learning models.

09:30-10:30 Session 28C (TLLA-LINEARITY)
Location: Ullmann 302
A coherent differential PCF

ABSTRACT. The categorical models of the differential lambda-calculus are additive categories because of the Leibniz rule which requires the summation of two expressions. This means that, as far as the differential lambda-calculus and differential linear logic are concerned, these models feature finite non-determinism and indeed these languages are essentially non-deterministic. Based on a recently introduced categorical framework for differentiation which does not require additivity and is compatible with deterministic models such as coherence spaces and probabilistic models such as probabilistic coherence spaces, this talk will present a deterministic version of the differential lambda-calculus. One nice feature of this new approach to differentiation is that it is compatible with general fixpoints of terms, so our language is actually a differential extension of PCF.

09:50-10:30 Session 29 (VardiFest)
Location: Taub 1
The Safety Fragment of LTL
PRESENTER: Nicola Gigante

ABSTRACT. The Safety Fragment of LTL

An Automata-Theoretic Approach to Model-Free Reinforcement Learning
PRESENTER: Mateo Perez

ABSTRACT. A significant challenge to widespread adoption of reinforcement learning (RL) is the faithful translation of designer’s intent to the scalar reward signal required by RL algorithms. Logic-based specifications help in two ways: by precisely capturing the intended objective, and by allowing its automatic translation to a reward function. Omega-regular objectives, such as those expressed in LTL and by omega-automata, have recently been proposed to specify learning objectives in RL. In this talk, we will discuss the impact of Vardi's contributions to automata-theoretic reinforcement learning.

Strategy Logic: Origin, Results, and Open Questions

ABSTRACT. Fifteen years have passed since the introduction of the original turn-based variant of Strategy Logic, and twelve since its full-fledged concurrent extension. Several interesting results have been obtained and, for sure, many more are still to come. The ones that I consider most meaningful concern, in particular, the enlightening connections with other subfields of theoretical computer science, most notably algorithmic game theory and finite model theory, which have enriched and broadened the scientific literature. In this talk, I will overview some of these results, starting with anecdotes about the original work done in 2008, while I was a visiting Ph.D. student at Rice University under Moshe's supervision, and concluding with a few open questions.

Rewriting of Regular Path Queries: The first paper of the four Italians

ABSTRACT. The paper discusses a collaboration with Moshe Vardi that started with a work presenting an algorithm that takes one regular expression E and n regular expressions S as input, and returns an automaton which is shown to be the maximal rewriting of E with respect to S.

10:30-11:00 Coffee Break
11:00-12:30 Session 31A (FoMLAS)

FoMLAS Session 6

Location: Ullmann 201
Formal Specification for Learning-Enabled Autonomous Systems (Extended Abstract)
PRESENTER: Doron Peled

ABSTRACT. Formal specification provides a uniquely readable description of various aspects of a system, including its temporal behavior. This facilitates testing and sometimes also automatic verification of the system against the given specification. We present a logic-based formalism for specifying learning-enabled autonomous systems, which involve components based on neural networks. The formalism applies temporal logic with predicates for characterizing events and uses universal quantification to allow enumeration of objects. While we have applied the formalism successfully to two complex use cases, several limitations, such as monitorability and the lack of quantitative satisfaction, reveal potential for further improvement.

Vehicle: A High-Level Language for Embedding Logical Specifications in Neural Networks
PRESENTER: Luca Arnaboldi

ABSTRACT. Verification of neural network specifications is currently an active field of research in automated theorem proving. However, the actual act of verification is merely one step in the process of constructing a verified network. Prior to verification the specification should influence the training of the network, and afterwards users may want to export the verified specification to other verification environments in order to prove a specification about a larger system that uses the network. Currently there is little consensus on how best to connect these different stages.

In this talk we will describe our proposed solution to this problem: the Vehicle specification system. Vehicle allows the user to write a single specification in a high-level human readable form, and the Vehicle compiler then compiles it down to different targets, including training frameworks, verifiers and interactive theorem provers. In this talk we will discuss the various design decisions involved in the specification language itself and hope to solicit feedback from the verification community.

(Submitted as Extended Abstract)

Differentiable Logics for Neural Network Verification
PRESENTER: Natalia Ślusarz

ABSTRACT. The rising popularity of neural networks (NNs) in recent years and their increasing prevalence in real-world applications has drawn attention to the importance of their verification. While verification is known to be computationally difficult theoretically, many techniques have been proposed for solving it in practice.

It has been observed in the literature that, by default, neural networks rarely satisfy the logical constraints that we want to verify. A good course of action is to train the given NN to satisfy said constraint prior to verification. This idea is sometimes referred to as continuous verification, referring to the loop between training and verification.

Usually training with constraints is implemented by specifying a translation for a given formal logic language into loss functions. These loss functions are then used to train neural networks. Because for training purposes these functions need to be differentiable, these translations are called differentiable logics (DL).

This raises several research questions on the technical side of "training with constraints". What kind of differentiable logics are possible? What difference does a specific choice of DL make in the context of continuous verification? What are the desirable criteria for a DL viewed from the point of view of the resulting loss function? In this extended abstract we will discuss and answer these questions.

(Submitted as Extended Abstract)

11:00-12:30 Session 31B: Modelling with s(CASP) (GDE)

Session focused on the use of s(CASP) for modeling: three regular talks (20-minute presentations) and one short talk (10-minute presentation), each followed by 5 minutes of Q&A.

Location: Ullmann 310
Integration of Logical English and s(CASP)
PRESENTER: Galileo Sartor

ABSTRACT. This paper showcases the use of Logical English, a logic programming language that allows for expressing rules and explanations in a controlled form of natural language, which can be interpreted by the s(CASP) reasoner. It demonstrates the possibility of representing and reasoning with legal values, unknown facts and time with carefully selected expressions of English that can be easily understood without technical training. This research has been developed in the context of the CrossJustice Project.

Embedding s(CASP) in Prolog
PRESENTER: Jan Wielemaker

ABSTRACT. The original s(CASP) implementation is a stand-alone program implemented in Ciao Prolog. It reads the s(CASP) source using a dedicated parser, resolves the query embedded in the source and emits the results in a format dictated by command-line options. Typical applications require composing an s(CASP) program, solving a query and reasoning about the bindings, model and/or justification. This is often done in some external language, e.g., Python. In this paper we propose a closer integration with Prolog. The s(CASP) program is simply a Prolog program that has to respect certain constraints. The scasp library can be used to solve a query using s(CASP) semantics, making the bindings available in the normal Prolog way and providing access to the model and justification as Prolog terms. This way, we can exploit Prolog's power for manipulating (Prolog) terms to construct the s(CASP) program and to interpret the results.

Modeling Administrative Discretion Using Goal-Directed Answer Set Programming
PRESENTER: Joaquin Arias

ABSTRACT. The formal representation of legal texts to automate reasoning about them is well known in the literature and has recently been gaining much attention thanks to the interest in so-called smart contracts and in autonomous decisions by public administrations [8,4,11]. For deterministic rules, there are several proposals, often based on logic-based programming languages [9,10]. However, none of the existing proposals is able to represent the ambiguity and/or administrative discretion present in contracts and/or applicable legislation, e.g., force majeure. This paper is an extended abstract of [3], where we present a framework, called s(LAW), that allows for modeling legal rules involving ambiguity, and supports reasoning and inferring conclusions based on them.

An s(CASP) In-Browser Playground based on Ciao Prolog
PRESENTER: Jose F. Morales

ABSTRACT. In recent years, Web browsers have been getting closer and closer to full-fledged computing platforms. Ciao Prolog currently includes a browser-based playground which allows editing and running programs locally in the browser with no need for server-side interaction. The playground is built from reusable components, and allows easily embedding runnable Prolog programs in web pages and documents. These components can also be easily used for the development of specific, fully browser-based applications. The purpose of this paper is to present a browser-based environment for developing s(CASP) programs, based on the Ciao Prolog playground and its components. This s(CASP) playground thus runs locally on the browser with no need for interaction with a server beyond code download. After briefly introducing s(CASP) and Ciao Prolog, we provide an overview of the architecture of the Ciao playground, based on a compilation of the Ciao engine to the WebAssembly platform, describe the steps involved in its adaptation to create the s(CASP) playground, and present some of its capabilities. These include editing and running s(CASP) programs, sharing them, obtaining (sets of) answers, and visualizing and exploring explanations.

11:00-12:30 Session 31C (HoTT/UF)
Location: Ullmann 303
Towards Normalization of Simplicial Type Theory via Synthetic Tait Computability
Towards Directed Higher Observational Type Theory
The Compatibility of MF with HoTT
PRESENTER: Michele Contente
11:00-12:30 Session 31D: Invited talk and equivalence (IWC)
Location: Ullmann 306
Seven Confluence Criteria for Solving COPS #20

ABSTRACT. COPS #20 is a thought-provoking confluence problem for term rewriting, posed by Gramlich and Lucas (2006). Although the term rewrite system of the problem is confluent, it is beyond the realm of classical confluence criteria such as Knuth and Bendix' criterion (1970) and Huet's parallel closedness (1980). In this talk we will discuss various solutions to the problem, recalling powerful confluence methods developed in the last decade and a half.

Formalized Signature Extension Results for Equivalence

ABSTRACT. Conversion equivalence and normalization equivalence are important properties of two rewrite systems. We investigate how many constants are needed to reduce these properties to their ground versions for linear variable-separated rewrite systems. Our results are implemented in the decision tool FORT-h and formalized in Isabelle/HOL. The latter enables the validation of the proofs produced by the former in the certifier FORTify.

11:00-12:30 Session 31E (LFMTP)

Workshop papers

Location: Taub 3
A positive perspective on term representation: work in progress

ABSTRACT. We use the focused proof system LJF as a framework for describing term structures and substitution. Since the proof theory of LJF does not pick a canonical polarization for primitive types, two different approaches to term representation arise. When primitive types are given the negative bias, LJF proofs encode term structures as tree-like structures in a familiar fashion. In this situation, cut elimination also yields the familiar notion of substitution. On the other hand, when primitive types are given the positive bias, LJF proofs yield a structure in which explicit sharing of term structures is possible. In this situation, cut elimination yields a different notion of substitution. We illustrate these two approaches to term representation by applying them to the encoding of untyped λ-terms.

More details can be found at http://www.lix.polytechnique.fr/Labo/Dale.Miller/papers/lfmtp22-positive-perspective.pdf.

An Implementation of Set Theory with Pointed Graphs in Dedukti

ABSTRACT. Dedukti is a type-checker for the lambda-pi calculus modulo theory, a logical framework that allows the extension of conversion with user-defined rewrite rules. In this paper, we present the implementation of a version of Dowek-Miquel’s intuitionistic set theory in Dedukti. To do so, we adapt this theory - based on the concept of pointed graphs - from Deduction modulo theory to lambda-pi calculus modulo theory, and we formally write the proofs in Dedukti. In particular, this implementation requires the definition of a deep embedding of a certain class of formulas, as well as its interpretation in the theory.

More details can be found at https://hal.archives-ouvertes.fr/hal-03740004.

11:00-12:30 Session 31F (LogTeach)
Location: Ullmann 301
What Should Students of a Course in Logic Know at Its End?

ABSTRACT. See enclosed file

My Experience Teaching Logic in Undergraduate AI at NYU

ABSTRACT. I have taught a section on logic in my introductory undergraduate and graduate courses in AI for many years. Here, I discuss what I teach; how students react; where else mathematical logic is taught at NYU; and what topics I would teach in a one- or two-semester advanced undergraduate course.

11:00-12:30 Session 31G (Mentoring Workshop)
Location: Taub 7
Managing a life that also includes research

ABSTRACT. The life of a young researcher in the current academic environment can be highly demanding and stressful. The competition for job opportunities often causes early-stage researchers to prioritize productivity over well-being, work long hours, on weekends, on and off campus, and usually alone. In this very personal talk, I will offer advice on how to (try to) avoid the all-consuming nature of research, drawing from my personal experience and lessons learned from colleagues.

How logic-based approaches can be used in data management
11:00-12:30 Session 31H: Joint QBF Session (PC)
Location: Ullmann 309
Strategy Extraction and Proof
QCDCL with Cube Learning or Pure Literal Elimination – What is best? (II)
11:00-12:30 Session 31I (PCCR)
Location: Ullmann 203
The Parameterized Complexity of SAT

ABSTRACT. This talk will survey parameterized complexity results of the propositional satisfiability problem (SAT). We will cover parameters based on backdoors, decompositions, and hybrid parameters that combine both. After a general overview of results obtained over the last 20 years, we will discuss some recent results based on backdoor depth (ESA 2022) and twin-width (SAT 2022).

Parameterized Algorithmics and Counting: Treewidth in Practice

ABSTRACT. We discuss parameterized algorithms and an application to exact propositional model counting (MC). (W)MC asks for counting the (weighted) number of satisfying assignments of a propositional formula. The problem is known to be beyond NP and a prototypical representative of the class #P.

Various implementations that solve MC employ a SAT-based solving engine. However, we tackle the problem from a theoretical perspective, exploiting bounded treewidth in input instances, for which an algorithm that runs in time single-exponential in the treewidth is known. The treewidth of an instance is at most the number of its variables, but often much smaller. While our approach suffers from the theoretical worst-case bounds, we illustrate how it can still be fruitfully used in combination with modern hardware that takes advantage of parallelism. Our results are encouraging in the sense that complex reasoning problems can be tackled by parameterized algorithms executed on the GPU or within database systems if instances have small treewidth (<40).
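As a toy illustration of the treewidth-based dynamic programming the abstract refers to (our own example, not the paper's algorithm), consider the chain formula (x1 → x2) ∧ … ∧ (x_{n-1} → x_n). It has pathwidth 1, so a DP whose state is the value of the current variable counts models in O(n) time, versus 2^n brute-force enumeration:

```python
from itertools import product

def count_chain_models_dp(n):
    # dp[v] = number of models of the prefix x1..xi in which xi = v.
    dp = {False: 1, True: 1}
    for _ in range(n - 1):
        dp = {
            False: dp[False],            # the clause xi -> x_{i+1} forces xi = 0
            True: dp[False] + dp[True],  # x_{i+1} = 1 allows either value of xi
        }
    return dp[False] + dp[True]

def count_chain_models_bruteforce(n):
    # Enumerate all 2^n assignments and check every implication clause.
    return sum(
        all(not a[i] or a[i + 1] for i in range(n - 1))
        for a in product([False, True], repeat=n)
    )
```

Both counters agree (the satisfying assignments are exactly the n+1 nondecreasing 0/1 sequences); only the DP scales when n grows.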

11:00-12:30 Session 31J (PLP)
Location: Ullmann 305
On Projectivity in Markov Logic Networks
PRESENTER: Sagar Malhotra

ABSTRACT. Markov Logic Networks (MLNs) define a probability distribution on relational structures over varying domain sizes. Like most relational models, MLNs do not admit consistent marginal inference over varying domain sizes i.e. marginal probabilities depend on the domain size. Furthermore, MLNs learned on a fixed domain do not generalize to domains of different sizes. In recent works, connections have emerged between domain size dependence, lifted inference, and learning from a sub-sampled domain. The central idea of these works is the notion of projectivity. The probability distributions ascribed by projective models render the marginal probabilities of sub-structures independent of the domain cardinality. Hence, projective models admit efficient marginal inference. Furthermore, projective models potentially allow efficient and consistent parameter learning from sub-sampled domains. In this paper, we characterize the necessary and sufficient conditions for a two-variable MLN to be projective. We then isolate a special class of models, namely Relational Block Models (RBMs). In terms of data likelihood, RBMs allow us to learn the best possible projective MLN in the two-variable fragment. Furthermore, RBMs also admit consistent parameter learning over sub-sampled domains.

Exploiting the Full Power of Pearl's Causality in Probabilistic Logic Programming

ABSTRACT. We introduce new semantics for acyclic probabilistic logic programs in terms of Pearl's functional causal models. Further, we show that our semantics generalize the classical distribution semantics and CP-logic. This enables us to establish all query types of functional causal models, namely probability calculus, predicting the effect of external interventions and counterfactual reasoning, within probabilistic logic programming. Finally, we briefly discuss the problems regarding knowledge representation and the structure learning task which result from the different semantics and query types.

11:00-12:30 Session 31K: SAT and Parallel Solving (POS)
Location: Ullmann 311
Dinosat: A SAT Solver with Native DNF Support
PRESENTER: Markus Iser

ABSTRACT. In this paper we report our preliminary results with a new kind of SAT solver called Dinosat. Dinosat's input is a conjunction of clauses, at-most-one constraints and disjunctive normal form (DNF) formulas. The native support for DNF formulas is motivated by the application domain of SAT-based product configuration. A DNF formula can also be viewed as a generalization of a clause, i.e., a clause (disjunction of literals) is a special case of a DNF formula where each term (conjunction of literals) has exactly one literal. Similarly, we can generalize the classical resolution rule and use it to resolve two DNF formulas. Based on that, the CDCL algorithm can be modified to work with DNF formulas instead of just clauses. Using randomly generated formulas (with DNFs) we experimentally show that, in certain relevant scenarios, it is more efficient to solve these formulas with Dinosat than to translate them to CNF and use a state-of-the-art SAT solver. Another contribution of this paper is identifying the phase transition points for such formulas.
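The clause-as-DNF observation can be made concrete in a few lines (an illustrative sketch, not Dinosat's actual implementation): represent a DNF as a list of terms, each term a list of signed integers, and note that a clause is the DNF whose terms are all singletons.

```python
# A DNF is a disjunction of terms; each term is a conjunction of literals,
# encoded as signed ints (x3 is 3, not-x3 is -3). An assignment maps
# variable index -> bool.

def dnf_satisfied(dnf, assignment):
    """A DNF is satisfied iff some term has all of its literals true."""
    def lit_true(lit):
        return assignment[abs(lit)] == (lit > 0)
    return any(all(lit_true(l) for l in term) for term in dnf)

def clause_as_dnf(clause):
    """View a clause (disjunction of literals) as a DNF with singleton terms."""
    return [[lit] for lit in clause]

# The clause (x1 or not-x2) and its DNF view agree on every assignment:
clause = [1, -2]
for a1 in (True, False):
    for a2 in (True, False):
        assignment = {1: a1, 2: a2}
        assert dnf_satisfied(clause_as_dnf(clause), assignment) == (a1 or not a2)
```

Under this encoding, clause propagation is just the one-literal-per-term special case of DNF handling, which is what makes the generalized solver input uniform.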

DPS: A Framework for Deterministic Parallel SAT Solvers

ABSTRACT. In this study, we propose a new framework for easily constructing efficient deterministic parallel SAT solvers, providing the delayed clause exchange technique introduced in ManyGlucose. This framework allows existing sequential SAT solvers to be parallelized with as little effort as in non-deterministic parallel solver frameworks such as PaInleSS. We show experimentally that parallel solvers built using this framework have performance comparable to state-of-the-art non-deterministic parallel SAT solvers while ensuring reproducible behavior.

Scalable Proof Producing Multi-Threaded SAT Solving with Gimsatul through Sharing instead of Copying Clauses
PRESENTER: Armin Biere

ABSTRACT. We give a first account of our new parallel SAT solver Gimsatul. Its key feature is to share clauses physically in memory instead of copying them, which is the method other state-of-the-art multi-threaded SAT solvers use to share clauses logically. Our approach keeps information about which literals are watched in a clause local to a solving thread but shares the actual immutable literals of a clause globally among all solving threads. This design gives quite remarkable parallel scalability, allows aggressive clause sharing while keeping memory usage low, and produces more compact proofs.

11:00-12:30 Session 31L: Joint PC/QBF Session (QBF)

This session will be held jointly with the Proof Complexity (PC) Workshop.

Location: Ullmann 309
Strategy Extraction and Proof

ABSTRACT. Invited talk.

QCDCL with Cube Learning or Pure Literal Elimination - Part 2

ABSTRACT. Quantified conflict-driven clause learning (QCDCL) is one of the main approaches for solving quantified Boolean formulas (QBF). We formalise and investigate several versions of QCDCL that include cube learning and/or pure-literal elimination, and formally compare the resulting solving models via proof complexity techniques. Our results show that almost all of the QCDCL models are exponentially incomparable with respect to proof size (and hence solver running time), pointing towards different orthogonal ways how to practically implement QCDCL. This is a continuation of a previous talk with the same title. While in the previous talk we focused on the general techniques of showing QCDCL lower bounds, here we focus on the specifics pertaining to pure literal elimination and cube learning.

11:00-12:30 Session 31M: Referring Expressions in Artificial Intelligence and Knowledge Representation Systems (Tutorial at KR-22, Part II) (REAI)

Referring Expressions in Artificial Intelligence and Knowledge Representation Systems (Tutorial at KR-22, Part II)

Location: Ullmann 102
Referring Expressions in Artificial Intelligence and Knowledge Representation Systems (Tutorial at KR-22)

ABSTRACT. The tutorial introduces the audience to the concept of referring expressions, formulae that can be used to communicate identities of otherwise abstract objects. The formalism provides foundations for a successful and unambiguous exchange of information about individuals between agents sharing common knowledge about such individuals, a task that is indispensable in most modern applications of knowledge representation and semantic technologies (tutorial web page).

11:00-12:30 Session 31N (TERMGRAPH)

Regular submissions, morning session

Location: Ullmann 101
Transformation of DPO Grammars into Hypergraph Lambek Grammars With The Conjunctive Kleene Star

ABSTRACT. We study how to embed well-known hypergraph grammars based on the double pushout (DPO) approach in the hypergraph Lambek calculus HL. It turns out that DPO rules can be naturally encoded by types of HL. However, this encoding is not enough to convert a DPO grammar into an equivalent grammar based on HL: we additionally need a logical operation that would allow making arbitrarily many copies of types. We develop such an operation called the conjunctive Kleene star and show that any DPO grammar can be converted into an equivalent HL-grammar enriched with this operation.

Greedily Decomposing Proof Terms for String Rewriting into Multistep Derivations by Topological Multisorting

ABSTRACT. We show that a proof term in a string rewrite system can be mapped to its causal graph and that the latter is a unique representative of the permutation equivalence class of the former. We then map the causal graph back to a proof term of a special shape, a so-called greedy multistep reduction. Composing both transformations yields a simple and effective way of constructing the greedy multistep reduction of a proof term, and thereby of deciding permutation equivalence of proof terms in general, and of (multistep) reductions in particular.
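The "greedy" idea, reading causal dependencies off a graph and firing all currently-enabled steps at once, can be illustrated by layering a dependency DAG (this is our simplified reading of greedy multistep decomposition, not the paper's exact construction on causal graphs):

```python
def greedy_layers(deps):
    """Group the steps of a dependency DAG into greedy multisteps.

    deps maps each step to the set of steps it causally depends on.
    Each layer collects every step whose dependencies are already done.
    """
    remaining = {s: set(d) for s, d in deps.items()}
    layers = []
    while remaining:
        ready = {s for s, d in remaining.items() if not d}
        if not ready:
            raise ValueError("cyclic dependencies")
        layers.append(sorted(ready))
        for s in ready:
            del remaining[s]
        for d in remaining.values():
            d -= ready  # these steps are now done
    return layers

# a and b are causally independent, c needs both, d needs c:
layers = greedy_layers({"a": set(), "b": set(), "c": {"a", "b"}, "d": {"c"}})
# layers == [["a", "b"], ["c"], ["d"]]
```

The layering is canonical for a given DAG, which mirrors how the greedy multistep reduction serves as a normal form for permutation equivalence.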

A PBPO+ Graph Rewriting Tutorial
PRESENTER: Roy Overbeek

ABSTRACT. We provide a tutorial introduction to the algebraic graph rewriting formalism PBPO+. We show how PBPO+ can be obtained by composing a few simple building blocks. Along the way, we comment on how alternative design decisions lead to related formalisms in the literature, such as DPO.

11:00-12:30 Session 31P (TLLA-LINEARITY)
Location: Ullmann 302
Dissymetrical Linear Logic

ABSTRACT. This paper is devoted to designing computational systems of linear logic (i.e., systems in which, notably, the non-linear and structural phenomena that arise during the cut-elimination process are handled by specific modalities, the exponentials ! and ?). The systems designed are “intermediate” between Intuitionistic LL and Classical LL. Methodologically, the focus is put on how to break the symmetrical interdependency between ! and ? which prevails in Classical LL, without losing the computational properties (closure under cut-elimination, atomizability of axioms). Three main systems are designed (Dissymetrical LL, semi-functorial Dissymetrical LL, semi-specialized Dissymetrical LL), where, in each of them, ! and ? play well-differentiated roles.

Parametric Chu Translation

ABSTRACT. Bellin translates multiplicative-additive linear logic to its intuitionistic fragment via the Chu construction, seemingly posing an alternative to negative translation. However, his translation is not sound in the sense that not all valid intuitionistic sequents in its image correspond to valid classical ones. By directly analyzing two-sided classical sequents, we develop a sound generalization thereof inspired by parametric negative translation that also handles the exponentials.

Peano Arithmetic and muMALL: An extended abstract
PRESENTER: Dale Miller

ABSTRACT. We propose to examine some of the proof theory of arithmetic by using two proof systems. A linearized version of arithmetic is available using muMALL, which is MALL plus the following logical connectives to treat first-order term structures: equality and inequality, first-order universal and existential quantification, and the least and greatest fixed point operators. The proof system muLK is an extension of muMALL in which contraction and weakening are permitted. It is known that muMALL has a cut-elimination result and is therefore consistent. We will show that muLK is consistent by embedding it into second-order linear logic. We also show that muLK contains Peano arithmetic and that in a couple of different situations, muLK is conservative over muMALL. Finally, we show that a proof that a relation represents a total function can be turned into a proof-search-based algorithm to compute that function.

11:00-11:30 Session 31Q (VardiFest)
Location: Taub 1
Rewriting, Answering, and Losslessness: A Clarification by the “Four Italians”

ABSTRACT. The paper itself is an extended abstract.

Bisimulation Games Played in Fibered Categories

ABSTRACT. I propose to present our recent results [LICS'19, LICS'21] on capturing various bisimilarity notions, covering not only the conventional relational notion but also its quantitative extensions such as probabilistic bisimulation and bisimulation metric, in the language of category theory. The theory combines abstract category theory with concrete games, so-called codensity games played in categories, which I believe will be of interest to Moshe and the audience.

Capturing abscondings

ABSTRACT. In order to improve the effectiveness of law enforcement agencies, we address the problem of capturing absconders using graph theory, formal methods and game theory. We define a set of problems and outline an approach to solving them.

11:00-12:30 Session 31R: Doctoral Program (CP)
Location: Taub 4
Interdisciplinary Research -- Cost Function Network for Life Sciences
Automated SAT Problem Feature Extraction using Convolutional Autoencoders
PRESENTER: Marco Dalla

ABSTRACT. The Boolean Satisfiability Problem (SAT) was the first known NP-complete problem and has a very broad literature focusing on it. It has been applied successfully to various real-world problems, such as scheduling, planning and cryptography. SAT problem feature extraction plays an essential role in this field. SAT solvers are complex, fine-tuned systems that exploit problem structure. The ability to represent/encode a large SAT problem using a compact set of features has broad practical use in instance classification, algorithm portfolios, and solver configuration. The performance of these techniques relies on the ability of feature extraction to convey helpful information. Researchers often craft these features ``by hand'' to capture particular structures of the problem. Instead, in this paper, we extract features using semi-supervised deep learning. We train a convolutional autoencoder (AE) to compress the SAT problem into a limited latent space and reconstruct it minimizing the reconstruction error. The latent space projection should preserve much of the structural features of the problem. We compare our approach to a set of features commonly used for algorithm selection. Firstly, we train classifiers on the projection to predict if the problems are satisfiable or not. If the compression conveys valuable information, a classifier should be able to make correct decisions. In the second experiment, we check if the classifiers can identify the original problem that was encoded as SAT. The empirical analysis shows that the autoencoder is able to represent problem features in a limited latent space efficiently, and to convey more information than current feature extraction methods.
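For contrast with the learned latent features, here is a sketch of the kind of hand-crafted features the abstract alludes to (the feature names are our own illustration, not the paper's feature set):

```python
# Hand-crafted structural features of a CNF instance, of the kind
# traditionally used for algorithm selection. Clauses are lists of
# signed ints (x2 is 2, not-x2 is -2).

def sat_features(clauses, num_vars):
    num_clauses = len(clauses)
    lens = [len(c) for c in clauses]
    return {
        "num_vars": num_vars,
        "num_clauses": num_clauses,
        "clause_var_ratio": num_clauses / num_vars,
        "mean_clause_len": sum(lens) / num_clauses,
        "frac_binary": sum(1 for n in lens if n == 2) / num_clauses,
        # Horn clause: at most one positive literal.
        "frac_horn": sum(1 for c in clauses if sum(l > 0 for l in c) <= 1)
                     / num_clauses,
    }

f = sat_features([[1, -2], [-1, 2, 3], [-3]], num_vars=3)
```

An autoencoder instead learns such summaries implicitly by compressing a matrix encoding of the instance, which is what the paper evaluates against feature vectors like this one.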

Selecting SAT Encodings for Pseudo-Boolean and Linear Integer Constraints

ABSTRACT. In this extended abstract, we summarise the work from our main conference paper. We explore the problem of selecting encodings for pseudo-Boolean and linear constraints using a supervised machine learning approach. We show that it is possible to select encodings effectively using a standard set of features for constraint problems; however we obtain better performance with a new set of features specifically designed for the pseudo-Boolean and linear constraints. In fact, we achieve good results when selecting encodings for unseen problem classes.

Peel-and-Bound: Generating Stronger Relaxed Bounds with Multivalued Decision Diagrams
PRESENTER: Isaac Rudich

ABSTRACT. Decision diagrams are an increasingly important tool in cutting-edge solvers for discrete optimization. However, the field of decision diagrams is relatively new, and is still incorporating the library of techniques that conventional solvers have had decades to build. We drew inspiration from the warm-start technique used in conventional solvers to address one of the major challenges faced by decision diagram based methods. Decision diagrams become more useful the wider they are allowed to be, but also become more costly to generate, especially with large numbers of variables. We present a method of peeling off a sub-graph of previously constructed diagrams and using it as the initial diagram for subsequent iterations that we call peel-and-bound. We test the method on the sequence ordering problem, and our results indicate that our peel-and-bound scheme generates stronger bounds than a branch-and-bound scheme using the same propagators, and at significantly less computational cost.

Solving the Constrained Single-Row Facility Layout Problem with Decision Diagrams (Extended Abstract)
PRESENTER: Vianney Coppé

ABSTRACT. This paper presents two exact optimization models for the Constrained Single-Row Facility Layout Problem. It is a linear arrangement problem considering departments in a facility with given lengths and traffic intensities. The first approach is an extension of the state-of-the-art mixed-integer programming model for the unconstrained problem with the additional constraints. The second one is a decision diagram-based branch-and-bound that takes advantage of the recursive nature of the problem through a dynamic programming model. The computational experiments show that both approaches significantly outperform the only mixed-integer programming model in the literature.

CNF Encodings of Binary Constraint Trees (Extended Abstract)
PRESENTER: Ruiwei Wang

ABSTRACT. Ordered Multi-valued Decision Diagrams (MDDs) have been shown to be useful to represent finite domain functions/relations. For example, various constraints can be modelled with MDD constraints. Recently, a new representation called Binary Constraint Tree (BCT), which is a (special) tree-structured binary Constraint Satisfaction Problem, has been proposed to encode MDDs and shown to outperform existing MDD constraint propagators in Constraint Programming solvers. BCT is a compact representation, and it can be exponentially smaller than MDD for representing some constraints. Here, we also show that BCT is compact for representing non-deterministic finite state automaton (NFA) constraints. In this paper, we investigate how to encode BCT into CNF form, making it suitable for SAT solvers. We present and investigate five BCT CNF encodings. We compare the propagation strength of the BCT CNF encodings and experimentally evaluate the encodings on a range of existing benchmarks. We also compare with seven existing CNF encodings of MDD constraints. Experimental results show that the CNF encodings of BCT constraints can outperform those of MDD constraints on various benchmarks.

11:35-12:05 Session 32 (VardiFest)
Location: Taub 1
From Kochen-Specker to Feder-Vardi

ABSTRACT. We point out the surprising connections which have recently been observed between the famous Kochen-Specker theorem in the foundations of quantum mechanics, and algorithmic questions relating to constraint satisfaction and the celebrated Feder-Vardi Dichotomy Conjecture (recently proved by Bulatov and Zhuk), and to the Weisfeiler-Leman approximations to structure isomorphism.
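The Feder-Vardi setting the abstract refers to can be phrased as a homomorphism problem: fix a template structure H; given an input structure G, ask whether G maps homomorphically to H. The dichotomy (proved by Bulatov and Zhuk) says every such template is in P or NP-complete. A brute-force sketch for graphs makes the framing concrete (template K2 gives 2-colorability, in P; template K3 gives 3-colorability, NP-complete):

```python
from itertools import product

def has_homomorphism(g_nodes, g_edges, h_nodes, h_edges):
    """Brute-force test: does some map send every edge of G to an edge of H?"""
    h_edge_set = {frozenset(e) for e in h_edges}
    for assignment in product(h_nodes, repeat=len(g_nodes)):
        h = dict(zip(g_nodes, assignment))
        # frozenset collapses a loop (h[u] == h[v]) to a singleton,
        # which is never an edge of a simple template graph.
        if all(frozenset((h[u], h[v])) in h_edge_set for (u, v) in g_edges):
            return True
    return False

triangle = (["a", "b", "c"], [("a", "b"), ("b", "c"), ("a", "c")])
k2 = ([0, 1], [(0, 1)])
k3 = ([0, 1, 2], [(0, 1), (1, 2), (0, 2)])
# A triangle is 3-colorable but not 2-colorable:
assert has_homomorphism(*triangle, *k3)
assert not has_homomorphism(*triangle, *k2)
```

Of course the dichotomy is about the complexity of each fixed-template problem, not about this exponential checker; the sketch only fixes the objects the theorem talks about.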

Data Complexity and Expressive Power of Ontological Reasoning Formalisms

ABSTRACT. In his 1982 landmark paper “The Complexity of Relational Query Languages” [7], Moshe Vardi defined and studied the concept of data complexity of relational query languages, which is the complexity of evaluating a fixed query in the language as a function of the size of the database. Vardi also defined the notion of expression complexity (fixed database, query as input), now often called “program complexity”, and the combined complexity (where both the database and the query constitute the input). Data complexity has become the standard method for assessing the complexity of query languages. Vardi [7] analysed various query languages and showed that fixed-point queries are complete for PTIME in data complexity, by a proof from which the same result for Datalog follows. He noticed that “the expression complexity of the investigated languages is usually one exponential higher than the data complexity” [7]. Vardi also noted that over ordered structures fixed-point queries (and implicitly Datalog queries) capture PTIME, which was shown independently by Immerman [6], and more explicitly for the second-order Horn fragment corresponding to Datalog by Grädel [5]. The first part of the talk will give a short overview of these results.

In the second part, I briefly address the complexity and expressive power of ontological reasoning formalisms such as ontology-based data access via description logics [3] or via variants of Datalog [4,1,2]. I will essentially illustrate two points. First, rather than a typical single-exponential jump from data to expression (or combined) complexity for classical logical query languages addressed by Vardi, we now typically have a double-exponential jump. Second, to better understand the expressive power of ontological reasoning formalisms where, in addition to a database (or ABox) and a query, there is an ontology (or TBox or Datalog± program), it is convenient to consider a refined concept of expressive power studied in [1,2], which is defined by the set of Boolean queries that can be expressed when keeping ontologies fixed.


[1] Marcelo Arenas, Georg Gottlob, and Andreas Pieris. Expressive languages for querying the semantic web. ACM Trans. Database Syst., 43(3):13:1–13:45, 2018.
[2] Gerald Berger, Georg Gottlob, Andreas Pieris, and Emanuel Sallinger. The space-efficient core of Vadalog. ACM Trans. Database Syst., 47(1), April 2022.
[3] Diego Calvanese, Giuseppe De Giacomo, Domenico Lembo, Maurizio Lenzerini, Antonella Poggi, Mariano Rodriguez-Muro, Riccardo Rosati, Marco Ruzzi, and Domenico Fabio Savo. The Mastro system for ontology-based data access. Semantic Web, 2(1):43–53, 2011.
[4] Georg Gottlob, Thomas Lukasiewicz, and Andreas Pieris. Datalog+/-: Questions and answers. In 14th Intl. Conf. on the Principles of Knowledge Representation and Reasoning (KR'14), 2014.
[5] Erich Grädel. Capturing complexity classes by fragments of second-order logic. Theoretical Computer Science, 101(1):35–57, 1992.
[6] Neil Immerman. Relational queries computable in polynomial time (extended abstract). In Proceedings of STOC'82, May 5–7, 1982, San Francisco, California, USA.
[7] Moshe Y. Vardi. The complexity of relational query languages. In Proc. of STOC'82, 1982.

Logic-driven approaches for smart, safe and energy-efficient aviation

ABSTRACT. In this talk we present some of the many examples where Prof. Moshe Vardi’s research has enabled progress in solving some of the most challenging technical problems present in aviation. We touch upon a subset of use-cases from knowledge acquisition, advanced reasoning, planning, environment health and safety, and energy efficiency.

12:10-12:40 Session 33 (VardiFest)
Location: Taub 1
Divide-and-Conquer Determinization for Büchi Automata

ABSTRACT. The determinization of a nondeterministic Büchi automaton (NBA) is a fundamental construction of automata theory, with applications to probabilistic verification and reactive synthesis. The standard determinization constructions, such as those based on the Safra-Piterman approach, work on the whole NBA. In this work we propose a divide-and-conquer determinization approach. To this end, we first classify the strongly connected components (SCCs) of the given NBA as inherently weak, deterministic accepting, and nondeterministic accepting. We then show how to determinize each type of SCC independently from the others; this results in an easier handling of the determinization algorithm that takes advantage of the structure of that SCC. Once all SCCs have been determinized, we show how to compose them so as to obtain the final equivalent deterministic Emerson-Lei automaton, which can be converted into a deterministic Rabin automaton without blow-up of states and transitions. We implement our algorithm in a prototype tool named ourDC and empirically evaluate ourDC against state-of-the-art tools on a large set of benchmarks from the literature. The experimental results show that our prototype ourDC outperforms Spot and Owl regarding the number of states and transitions.

Little Tricky Logic: Misconceptions in the Understanding of LTL
PRESENTER: Ben Greenman

ABSTRACT. We have been studying LTL misconceptions with multiple populations to determine *in what ways* LTL is tricky and to decide *what we can do* to address the issues. We propose an interactive remote talk that aims to demonstrate the LTL misconceptions and expert blind spots that we have found.

Comments from Giuseppe, Kuldeep, and Kristin
12:30-14:00 Lunch Break

Lunches will be held in Taub hall and in The Grand Water Research Institute.

14:00-15:30 Session 34A (ABR)

Tutorial: Assumption-Based Nonmonotonic Reasoning - Alexander Bochman

Location: Ullmann 104
Tutorial: Assumption-Based Nonmonotonic Reasoning (Part 1)

ABSTRACT. The tutorial provides an introduction to Assumption-Based Reasoning (ABR) which is argued to constitute the core of nonmonotonic reasoning in AI. Webpage: https://sites.google.com/view/kr2022-abr/home

14:00-15:30 Session 34B (FoMLAS)

FoMLAS Session 7

Location: Ullmann 201
Neural Network Verification with Proof Production

ABSTRACT. Deep neural networks (DNNs) are increasingly being employed in safety-critical systems, and there is an urgent need to guarantee their correctness. Consequently, the verification community has devised multiple techniques and tools for verifying DNNs. When DNN verifiers discover an input that triggers an error, that is easy to confirm; but when they report that no error exists, there is no way to ensure that the verification tool itself is not flawed. As multiple errors have already been observed in DNN verification tools, this calls the applicability of DNN verification into question. In this work, we present a novel mechanism for enhancing Simplex-based DNN verifiers with proof production capabilities: the generation of an easy-to-check witness of unsatisfiability, which attests to the absence of errors. Our proof production is based on an efficient adaptation of the well-known Farkas’ lemma, combined with mechanisms for handling piecewise-linear functions and numerical precision errors. As a proof of concept, we implemented our technique on top of the Marabou DNN verifier. Our evaluation on a safety-critical system for airborne collision avoidance shows that proof production succeeds in almost all cases, and entails only a small overhead.
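As background on the Farkas' lemma ingredient (the concrete data here is illustrative, not from the paper): for an infeasible system Ax <= b, the lemma guarantees a nonnegative vector y with yᵀA = 0 and yᵀb < 0, and checking such a certificate needs only a matrix-vector product.

```python
import numpy as np

# Farkas' lemma: the system Ax <= b is infeasible iff there exists y >= 0
# with y^T A = 0 and y^T b < 0.  Such a y is an easy-to-check witness of
# unsatisfiability.  Toy example: x <= 1 together with -x <= -2 (i.e. x >= 2).
A = np.array([[1.0], [-1.0]])
b = np.array([1.0, -2.0])

y = np.array([1.0, 1.0])          # the certificate
assert np.all(y >= 0)             # nonnegativity
assert np.allclose(y @ A, 0)      # combination cancels all variables
assert y @ b < 0                  # ...but yields a negative right-hand side: 0 <= -1
```

The checker never re-runs the solver; it only verifies three cheap conditions, which is what makes proof checking independent of the (possibly buggy) verifier.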

Efficient Neural Network Verification using Branch and Bound

ABSTRACT. In this talk, I will describe two recent Branch and Bound (BaB) verifiers developed by our group to ensure different safety properties of neural networks. The BaB verifiers involve two main steps: (1) recursively splitting the original verification problem into easier independent subproblems by splitting input or hidden neurons; and (2) for each split subproblem, using fast but incomplete bound propagation techniques to compute sound estimated bounds for the outputs of the target neural network. One of the key limitations of existing BaB verifiers is computing tight relaxations of nonlinear activation functions (e.g., ReLU). Our recent works (α-CROWN and β-CROWN) introduce a primal-dual approach and jointly optimize the corresponding Lagrangian multipliers for each ReLU with gradient ascent. Such an approach is highly parallelizable and avoids calls to expensive LP solvers. Our verifiers not only provide tighter output estimations than existing bound propagation methods but also can fully leverage GPUs with massive parallelization. Our verifier, α,β-CROWN (alpha-beta-CROWN), won the second International Verification of Neural Networks Competition (VNN-COMP 2021) with the highest total score.
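To illustrate what "fast but incomplete bound propagation" means in step (2), here is a sketch of plain interval bound propagation, a much weaker technique than α/β-CROWN but built on the same idea of pushing sound bounds through affine and ReLU layers (the example network is invented for illustration):

```python
import numpy as np

def ibp(layers, lo, hi):
    """Interval bound propagation: push an input box [lo, hi] through a
    feed-forward network given as a list of (weight, bias) pairs, with ReLU
    on every hidden layer.  Returns sound (but possibly loose) output bounds."""
    for i, (W, b) in enumerate(layers):
        Wp, Wn = np.maximum(W, 0), np.minimum(W, 0)   # split by weight sign
        lo, hi = Wp @ lo + Wn @ hi + b, Wp @ hi + Wn @ lo + b
        if i < len(layers) - 1:                       # ReLU on hidden layers only
            lo, hi = np.maximum(lo, 0), np.maximum(hi, 0)
    return lo, hi

layers = [(np.array([[1., -1.], [-1., 1.]]), np.zeros(2)),   # hidden layer
          (np.array([[1., 1.]]), np.zeros(1))]               # output layer
lo, hi = ibp(layers, np.zeros(2), np.ones(2))
# The network computes relu(x-y) + relu(y-x) = |x-y|, which lies in [0, 1]
# on the box [0,1]^2; IBP soundly returns the looser interval [0, 2].
```

The looseness of the final interval is exactly the slack that tighter relaxations (and the branch-and-bound splitting in step (1)) work to eliminate.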

Bio: Suman Jana is an associate professor in the department of computer science and the data science institute at Columbia University.  His primary research interest is at the intersections of computer security and machine learning. His research has received six best paper awards, a CACM research highlight, a Google faculty fellowship, a JPMorgan Chase Faculty Research Award, an NSF CAREER award, and an ARO young investigator award.

14:00-15:30 Session 34C: s(CASP) extensions and applications I (GDE)

Session focused on the most recent applications of s(CASP) and the description of the functionalities incorporated in s(CASP) that have made them possible: three regular talks (20-minute presentations) and one short talk (10-minute presentation), each followed by 5 minutes of Q&A.

Location: Ullmann 310
Automating Defeasible Reasoning in Law with Answer Set Programming
PRESENTER: Avishkar Mahajan

ABSTRACT. The paper studies defeasible reasoning in rule-based systems, in particular about legal norms and contracts. We identify rule modifiers that specify how rules interact and how they can be overridden. We then define rule transformations that eliminate these modifiers, leading in the end to a translation of rules to formulas. For reasoning with and about rules, we contrast two approaches, one in a classical logic with SMT solvers, which is only briefly sketched, and one using non-monotonic logic with Answer Set Programming solvers, described in more detail.

Unmanned Aerial Vehicle compliance checking using Goal-Directed Answer Set Programming

ABSTRACT. We present a novel application of Goal-Directed Answer Set Programming that digitizes the model aircraft operator’s compliance verification against the Academy of Model Aeronautics (AMA) safety code. The AMA safety code regulates how AMA flyers operate Unmanned Aerial Vehicles (UAVs) for limited recreational purposes. Flying drones and their operators are subject to various rules before and after the operation of the aircraft to ensure safe flights. In this paper, we leverage Goal-Directed Answer Set Programming to encode the AMA safety code and automate compliance checks. To check compliance, we use s(CASP), a goal-directed ASP engine. With s(CASP), operators can easily check for violations and obtain a justification tree explaining the cause of each violation in human-readable natural language. We develop a front-end questionnaire interface that accepts various conditions and use the backend s(CASP) engine to evaluate whether the conditions adhere to the regulations. We also leverage the s(CASP) implementation in SWI-Prolog, which exposes the reasoning capabilities of s(CASP) as a REST service. To the best of our knowledge, this is the first application of ASP in the AMA and Avionics Compliance and Certification space.

Symbolic Reinforcement Learning Framework with Incremental Learning of Rule-based Policy
PRESENTER: Elmer Salazar

ABSTRACT. In AI research, Relational Reinforcement Learning (RRL) is a widely discussed domain that combines reinforcement learning with relational or inductive learning. One of the key challenges of inductive learning through rewards and actions is to learn relations incrementally; in other words, to have an agent closely mimic the human learning process, in which we start with a very naive belief about a concept and gradually update it over time to a more concrete hypothesis. In this paper, we address this challenge and show that an automatic theory revision component can be developed efficiently to update the existing hypothesis based on the rewards the agent collects by applying it. We present a symbolic reinforcement learning framework with this automatic theory revision component for incremental learning. The theory revision component would not have been possible to build without the goal-directed execution engine of answer set programming (ASP), s(CASP). The current work demonstrates a proof of concept of the RL framework, which remains under active development.

LTL Model Checking using Coinductive Answer Set programming

ABSTRACT. We present a model checker for Linear Temporal Logic using Goal-Directed Answer Set Programming under the Costable model semantics (CoASP). The Costable model semantics allows positive loops to succeed, unlike the Stable model semantics, where positive loops fail. Therefore, by using the Costable model semantics, LTL formulas involving the G and R operators can be proved coinductively.

14:00-15:30 Session 34D (HoTT/UF)
Location: Ullmann 303
Semantics for two-dimensional type theory
PRESENTER: Benedikt Ahrens
14:00-15:30 Session 34E: Conditional rewriting (IWC)
Location: Ullmann 306
On local confluence of conditional rewrite systems

ABSTRACT. We characterize local confluence of *conditional rewrite systems* à la Huet, i.e., as the joinability of a set of conditional pairs including the usual conditional critical pairs and a new kind of pairs we call *conditional variable pairs*.

A Critical Pair Criterion for Level-Commutation of Conditional Term Rewriting Systems
PRESENTER: Takahito Aoto

ABSTRACT. We introduce level-commutation of conditional term rewriting systems (CTRSs) that extends the notion of level-confluence, in a way similar to extending confluence to commutation. We show a criterion for level-commutation of oriented CTRSs, which generalizes the one for commutation of term rewriting systems in (Toyama, 1987). As a corollary, we obtain a criterion of level-confluence of oriented CTRSs which extends the one in (Suzuki et al., 1995).

Proving Confluence with CONFident
PRESENTER: Raúl Gutiérrez

ABSTRACT. This paper describes the proof framework used in CONFident, a framework able to deal with different types of systems (term rewriting systems, context-sensitive term rewriting systems, and conditional term rewriting systems), different types of tasks (checking joinability of critical pairs, termination of systems, feasibility of reachability problems, or deducibility), and different techniques for proving confluence (including modular decompositions, transformations, etc.).

14:00-15:00 Session 34F (LFMTP)

Frank Pfenning special session: invited talk

Location: Taub 3
A modal analysis of dependently typed metaprogramming

ABSTRACT. Metaprogramming is the art of writing programs that produce or manipulate other programs. This opens the possibility to eliminate boilerplate code and exploit domain-specific knowledge to build high-performance programs. While this widely used concept is almost as old as programming itself, it has been surprisingly challenging to extend to logical frameworks and type-theoretic proof assistants.

In this talk, we present MINTS, a modal intuitionistic type theory which supports dependently typed multi-staged programming in the spirit of Scheme or Racket's quote-unquote style. As MINTS is dependently typed, we can not only specify, generate and share code across multiple stages, but also reason about multi-staged programs and prove them correct in MINTS. Theoretically, MINTS extends the Kripke-style modal lambda-calculus by Pfenning and Davies, which serves as a logical foundation for simply-typed multi-staged programming, to a full Martin-Löf type theory with a cumulative hierarchy of universes. This will allow us to exploit the full potential of metaprogramming without sacrificing reliability of and trust in the code we are producing and running. In addition, it provides a fresh perspective towards internalizing macros or tactics in proof assistants based on type theories.

This is joint work with Jason Z. Hu and Junyoung Jang.

14:00-15:30 Session 34G (LogTeach)
Location: Ullmann 301
Possible Desiderata for Logic in the CS Curriculum

ABSTRACT. Mathematical (or formal) logic is extremely important in computer science (yet another example, along with number theory, of an already extant mathematical field’s having extreme applicability to the "newer" field of Computer Science).

As the Workshop participants are well aware, to incorporate logic into the CS curriculum has been very challenging; there have been sporadic efforts for decades. But, as noted in the Call for Participation, if anything, the situation has worsened over the years. Some of the following will be suggested attempts to lessen this logjam; others will be oriented toward logic content.

Caveat: virtually everything following is based on (likely incomplete) knowledge of academia in the United States. Hence, specific items may or may not have relevance in other countries.

Teaching Logic for Computer Science Students: Proof Assistants and Related Tools

ABSTRACT. In the last decade we have focused our main logic courses on proof assistants and related tools. We find that the modern computer science curriculum requires a focus on applications instead of just pen-and-paper proofs. Notably, we teach the metatheory of logic using tools formalized in proof assistants like Isabelle, so that we have both implementations and theorems about them.

14:00-15:30 Session 34H (Mentoring Workshop)
Location: Taub 7
Navigating through the academic jungle

ABSTRACT. When pursuing an academic career, one is often confronted with contradictory expectations and a lot of uncertainty. Understanding some of the processes in academia and different views different people may take can help a lot with finding one's own way. This talk is targeted at PhD students and early-career post-docs. It will focus on some key stages in an academic career, and what the important decisions are that one should make or at least consider for oneself at a certain stage.

From PhD to industry: A recent graduate’s perspective

ABSTRACT. To a PhD student who deeply focuses on academic research in their day-to-day work, moving to industry may seem unattainable. As a recent graduate, I will share my personal experience of applying for internships, doing coding interviews, working in a startup, as well as working in a big company. I will also give an overview of the project I am currently working on as an applied scientist in the Prime Video Automated Reasoning team.

14:00-15:30 Session 34I (PC)
Location: Ullmann 309
Hard Inputs from Kolmogorov Randomness---Can Merlin Tame Open Problems?

ABSTRACT. We consolidate two widely believed conjectures about tautologies---no optimal proof system exists, and most require superpolynomial size proofs in any system---into a $p$-isomorphism-invariant condition satisfied by all paddable $\textbf{coNP}$-complete languages or none. The condition is: for any Turing machine (TM) $M$ accepting the language, $\textbf{P}$-uniform input families requiring superpolynomial time by $M$ exist (equivalent to the first conjecture) and appear with positive upper density in an enumeration of input families (implies the second). In that case, no such language is easy on average (in $\textbf{AvgP}$) for a distribution applying non-negligible weight to the hard families.

The hardness of proving tautologies and theorems is likely related. Motivated by the fact that arithmetic sentences encoding "string $x$ is Kolmogorov random" are true but unprovable with positive density in a finitely axiomatized theory $\mathcal{T}$ (Calude and Jürgensen), we conjecture that any propositional proof system requires superpolynomial size proofs for a dense set of $\textbf{P}$-uniform families of tautologies encoding "there is no $\mathcal{T}$ proof of size $\leq t$ showing that string $x$ is Kolmogorov random". This implies the above condition.

The conjecture suggests that there is no optimal proof system because undecidable theories help prove tautologies and do so more efficiently as axioms are added, and that constructing hard tautologies seems difficult because it is impossible to construct Kolmogorov random strings. Similar conjectures that computational blind spots are manifestations of noncomputability would resolve other open problems.

Simulations between proof systems

ABSTRACT. The problem of the existence of a p-optimal propositional proof system is one of the central open problems in proof complexity.

The goal of this work is to study the restricted case of this problem, namely, the case of strong reductions. We also introduce the notion of bounded proof system and study the connection between optimality and automatizability for bounded proof systems.

Are Hitting Formulas Hard for Resolution?
PRESENTER: Tomáš Peitl

ABSTRACT. Hitting formulas are a peculiar class of propositional CNF formulas. Not only is their satisfiability decidable in polynomial time, even their models can be counted in closed form. In contrast, other tractable classes, like 2-SAT and Horn-SAT, usually have algorithms based on resolution; and model counting remains hard. On the flip side, those resolution-based algorithms usually easily imply an upper bound on resolution complexity, which is missing for hitting formulas. Are hitting formulas hard for resolution? In this paper we take the first steps towards answering this question. We show that the resolution complexity of hitting formulas is dominated by so-called irreducible hitting formulas. However, constructing large irreducible formulas is non-trivial and it is not even known whether infinitely many exist. Building upon our theoretical results, we implement an efficient algorithm on top of the Nauty software package to enumerate all irreducible unsatisfiable hitting formulas with up to 14 clauses, and we determine exact resolution complexity of those with up to 13 clauses, by extending an existing SAT encoding.
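The closed-form model count mentioned in the abstract follows because the falsifying assignments of any two clashing clauses are disjoint; the small Python check below (with an example formula of our choosing, not one from the paper) verifies both the hitting property and the count by brute force.

```python
from itertools import product, combinations

# Clauses as sets of integer literals (v positive, -v negative).  A formula is
# *hitting* if every pair of clauses clashes on some variable; then the
# assignments falsifying distinct clauses are disjoint, giving the closed form
#   #models = 2^n - sum over clauses C of 2^(n - |C|).
F = [{1, 2}, {-1, 2}, {-2}]   # a hitting formula over n = 2 variables
n = 2

# Every pair of clauses contains a complementary pair of literals.
is_hitting = all(any(-l in d for l in c) for c, d in combinations(F, 2))

closed_form = 2**n - sum(2**(n - len(c)) for c in F)

# Brute-force model count for comparison.
brute = sum(
    all(any(a[abs(l) - 1] == (l > 0) for l in c) for c in F)
    for a in product([False, True], repeat=n)
)
```

For this example both counts are zero, i.e. the formula is an unsatisfiable hitting formula, exactly the kind of object whose resolution complexity the paper studies.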

14:00-15:30 Session 34J (PCCR)
Location: Ullmann 203
Decompositions and algorithms for interpretations of sparse graphs

ABSTRACT. The first-order model checking problem for finite graphs asks, given a graph G and a first-order sentence phi as input, to decide whether phi holds on G. While we do not expect that there is an efficient (fpt with respect to the formula size) algorithm which works on the class of all finite graphs, such algorithms have been proven to exist for various structurally well-behaved classes of graphs (graphs of bounded degree, planar graphs, unit interval graphs etc). Identifying graph classes for which an fpt model checking algorithm exists has been an active area of research for the past 25 years.

After the existence of an efficient model checking algorithm was shown for classes of sparse graphs in 2014 by Grohe, Kreutzer and Siebertz, attention gradually turned to the more general setting of graph classes which can be obtained from sparse graphs via interpretations/transductions. This program was initiated in 2016, when the existence of an fpt algorithm for the first-order model checking problem was shown for graph classes interpretable in graphs of bounded degree. This was followed by several results about the structure of graphs interpretable in sparse graphs, but despite the efforts of several groups of researchers, no positive algorithmic result was achieved until very recently. In the talk we will review the current status and recent developments regarding this problem; in particular, we will present a fixed-parameter tractable algorithm for first-order model checking on interpretations of graph classes with bounded local treewidth (notably, this includes interpretations of planar graphs).

Tractable Abstract Argumentation via Backdoor-Treewidth
PRESENTER: Matthias König

ABSTRACT. Argumentation frameworks (AFs) are a core formalism in the field of formal argumentation. As most standard computational tasks regarding AFs are hard for the first or second level of the Polynomial Hierarchy, a variety of algorithmic approaches to achieve manageable runtimes have been considered in the past. Among them, the backdoor-approach and the treewidth-approach turned out to yield fixed-parameter tractable fragments. However, many applications yield high parameter values for these methods, often rendering them infeasible in practice. We introduce the backdoor-treewidth approach for abstract argumentation, combining the best of both worlds with a guaranteed parameter value that does not exceed the minimum of the backdoor- and treewidth-parameter. In particular, we formally define backdoor-treewidth and establish fixed-parameter tractability for standard reasoning tasks of abstract argumentation. Moreover, we provide systems to find and exploit backdoors of small width, and conduct systematic experiments evaluating the new parameter.
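For readers outside the field, the kind of reasoning task the paper parameterizes can be illustrated by the simplest AF semantics, the grounded extension (standard background, not the paper's backdoor-treewidth algorithm): it is the least fixpoint of the characteristic function F(S) = {a : every attacker of a is attacked by some member of S}.

```python
def grounded(args, attacks):
    """Grounded extension of the argumentation framework (args, attacks),
    computed as the least fixpoint of the characteristic function, starting
    from the empty set (the first iterate is the set of unattacked arguments)."""
    attackers = {a: {b for (b, c) in attacks if c == a} for a in args}
    S = set()
    while True:
        nxt = {a for a in args
               if all(any((d, b) in attacks for d in S) for b in attackers[a])}
        if nxt == S:
            return S
        S = nxt
```

For example, with arguments {a, b, c} and attacks a→b, b→c, the unattacked argument a defends c against b, so the grounded extension is {a, c}. Harder semantics (preferred, stable) push the decision problems up the Polynomial Hierarchy, which is where parameters like backdoor-treewidth come in.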

14:00-15:30 Session 34K (PLP)
Location: Ullmann 305
Semantics for Hybrid Probabilistic Logic Programs with Function Symbols: Technical Summary
PRESENTER: Fabrizio Riguzzi

ABSTRACT. Hybrid probabilistic logic programs extend probabilistic logic programs by adding the possibility to manage continuous random variables. Despite the maturity of the field, a semantics that unifies discrete and continuous random variables and function symbols was still missing. In this paper, we summarize the main concepts behind a newly proposed semantics for hybrid probabilistic logic programs with function symbols.

Statistical Statements in Probabilistic Logic Programming
PRESENTER: Fabrizio Riguzzi

ABSTRACT. Probabilistic Logic Programs under the distribution semantics (PLPDS) do not allow statistical probabilistic statements of the form “90% of birds fly”, which Halpern defined as Type 1 statements. In this paper, we add this kind of statement to PLPDS and introduce the PASTA (Probabilistic Answer set programming for STAtistical probabilities) language. We translate programs in our new formalism into probabilistic answer set programs under the credal semantics. This approach differs from previous proposals, such as the one based on probabilistic conditionals: instead of choosing a single model by making the maximum entropy assumption, we take into consideration all models and assign probability intervals to queries. In this way we refrain from making assumptions and obtain a more neutral framework. We also propose an inference algorithm and compare it with an existing solver for probabilistic answer set programs on a number of programs of increasing size, showing that our solution is faster and can deal with larger instances.

14:00-15:30 Session 34L: Proofs I (POS)
Location: Ullmann 311
TBUDDY: A Proof-Generating BDD Package

ABSTRACT. The TBUDDY library enables the construction and manipulation of reduced, ordered binary decision diagrams (BDDs). It extends the capabilities of the BUDDY BDD package to support trusted BDDs, where the generated BDDs are accompanied by proofs of their logical properties. These proofs are expressed in a standard clausal framework, for which a variety of proof checkers are available.

Building on TBUDDY via its application-program interface (API) enables developers to implement automated reasoning tools that generate correctness proofs for their outcomes. In some cases, BDDs serve as the core reasoning mechanism for the tool, while in other cases they provide a bridge from the core reasoner to proof generation. A Boolean satisfiability (SAT) solver based on TBUDDY can achieve polynomial scaling when generating unsatisfiability proofs for a number of problems that yield exponentially-sized proofs with standard solvers. It performs particularly well for formulas containing parity constraints, where it can employ Gaussian elimination to systematically simplify the constraints.

Combining CDCL, Gauss-Jordan Elimination, and Proof Generation

ABSTRACT. Traditional Boolean satisfiability (SAT) solvers based on the conflict-driven clause-learning (CDCL) framework fare poorly on formulas involving large numbers of parity constraints. The CryptoMiniSat solver augments CDCL with Gauss-Jordan elimination to greatly improve performance on these formulas. Integrating the TBUDDY proof-generating BDD library into CryptoMiniSat enables it to generate unsatisfiability proofs when using Gauss-Jordan elimination. These proofs are compatible with standard, clausal proof frameworks.
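The Gauss-Jordan component can be pictured with a minimal sketch (this is an illustration of elimination over GF(2), not CryptoMiniSat's implementation): each parity constraint is a bit mask plus a right-hand side, and deriving the empty constraint 0 = 1 proves unsatisfiability.

```python
def gauss_gf2(rows):
    """Gaussian elimination over GF(2) for XOR constraints.  Each row is
    (mask, rhs): bit i of mask means variable i occurs, rhs is the parity.
    Returns None if 0 = 1 is derived (inconsistent), else the reduced basis."""
    basis = {}                          # pivot bit -> reduced row
    for mask, rhs in rows:
        # Reduce against existing pivots, highest first (each basis row's
        # highest set bit is its pivot, so this order is complete).
        for p in sorted(basis, reverse=True):
            if mask >> p & 1:
                bm, br = basis[p]
                mask ^= bm
                rhs ^= br
        if mask == 0:
            if rhs:
                return None             # derived 0 = 1: inconsistent system
            continue                    # redundant row
        basis[mask.bit_length() - 1] = (mask, rhs)
    return basis

# x1^x2 = 1, x2^x3 = 1, x1^x3 = 1 sums to 0 = 1, so the system is inconsistent.
inconsistent = gauss_gf2([(0b011, 1), (0b110, 1), (0b101, 1)])
```

In a proof-generating solver, each XOR of two rows performed here would additionally be justified by clausal reasoning steps (via BDDs in the TBUDDY integration) so that the final 0 = 1 derivation is checkable.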

Towards the shortest DRAT proof of the Pigeonhole Principle
PRESENTER: Marijn Heule

ABSTRACT. The Pigeonhole Principle (PHP) has been heavily studied in automated reasoning, both theoretically and in practice. Most solvers have exponential runtime and proof length, while some specialized techniques achieve polynomial runtime and proof length. Several decades ago, Cook manually constructed O(n^4) extended resolution proofs, where n denotes the number of pigeons. Existing automated techniques only surpass Cook's proofs in similar proof systems for large n. We construct the shortest known proofs of PHP in the standard proof format of modern SAT solving, DRAT. Using auxiliary variables and by recursively decomposing the original problem into smaller instances, we manually obtain proofs having length O(n^3) and leading coefficient 5/2.
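For reference, the standard clausal form of PHP that such proofs refute (this generator is background, not the paper's DRAT construction) has one clause per pigeon plus one clause per hole and pigeon pair:

```python
from itertools import combinations, product

def php_clauses(n):
    """CNF of the pigeonhole principle with n + 1 pigeons and n holes;
    variable v(p, h) means 'pigeon p sits in hole h'."""
    v = lambda p, h: p * n + h + 1
    cls = [[v(p, h) for h in range(n)] for p in range(n + 1)]     # each pigeon somewhere
    cls += [[-v(p, h), -v(q, h)]                                  # no hole holds two pigeons
            for h in range(n) for p, q in combinations(range(n + 1), 2)]
    return cls

# Sanity check for n = 2 (6 variables): every assignment falsifies some clause.
F = php_clauses(2)
unsat = not any(
    all(any(a[abs(l) - 1] == (l > 0) for l in c) for c in F)
    for a in product([False, True], repeat=6)
)
```

The formula has (n+1) + n·(n+1)n/2 clauses, so even writing it down is cubic in n; the paper's contribution is keeping the DRAT refutation cubic as well.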

14:00-15:30 Session 34M (QBF)
Location: Ullmann 205
Moving Definition Variables in Quantified Boolean Formulas (Extended Abstract)
PRESENTER: Joseph Reeves

ABSTRACT. Augmenting problem variables in a quantified Boolean formula with definition variables enables a compact representation in clausal form. Generally these definition variables are placed in the innermost quantifier level. To restore some structural information, we introduced a preprocessing technique that moves definition variables to the quantifier level closest to the variables that define them. We express the movement in the QRAT proof system to allow verification by independent proof checkers. We evaluated definition variable movement on the QBFEVAL'20 competition benchmarks. Movement significantly improved performance for the competition's top solvers. Combining variable movement with the preprocessor bloqqer improves solver performance compared to using bloqqer alone.

This work appeared in a TACAS'22 paper under the same title.

Advances in Quantified Integer Programming

ABSTRACT. Quantified Integer Programs (QIP) are integer linear programs where variables are either existentially or universally quantified. Similar to the link from SAT to Integer Programming, a connection between QBF and QIP can be drawn: QIP extends QBF by allowing general integer variables and linear constraints in addition to a linear objective function. We review solution methods, recent developments, and extensions in order to display future research directions and opportunities.

A Data-Driven Approach for Boolean Functional Synthesis

ABSTRACT. Given a relational specification between Boolean inputs and outputs, the problem of Boolean functional synthesis is to construct each output as a function of the inputs such that the specification is met. Synthesizing Boolean functions is one of the challenging problems in Computer Science. Over the decades, the problem has found applications in a wide variety of domains such as certified QBF solving, automated program repair, program synthesis, and cryptography.

In this talk, we will discuss Manthan, a novel data-driven approach for Boolean functional synthesis. Manthan views the problem of functional synthesis as a classification problem, relying on advances in constrained sampling for data generation, and advances in automated reasoning for a novel proof-guided refinement and provable verification.

14:00-15:30 Session 34N (TERMGRAPH)

Invited talk by Jörg Endrullis

Location: Ullmann 101
PBPO+ Graph Rewriting in Context

ABSTRACT. We will give an overview of various results on graph rewriting with PBPO+. PBPO+ is a versatile graph rewriting formalism based on category theory that subsumes many well-known approaches such as DPO, SqPO and AGREE (at least in the setting of quasitopoi).

14:00-15:30 Session 34P (TLLA-LINEARITY)
Location: Ullmann 302
The Call-by-Value Lambda-Calculus from a Linear Logic Perspective

ABSTRACT. Plotkin's call-by-value (CbV, for short) lambda-calculus is a variant of the lambda-calculus that models the evaluation mechanism in most functional programming languages. The theory of the CbV lambda-calculus is not as well studied as the call-by-name one, because of some technical issues due to the "weakness" of Plotkin's CbV beta-rule, which leads to a mismatch between syntax and semantics. By adopting a Curry-Howard perspective, we show how linear logic inspires several ways to solve the mismatch between syntax and semantics in CbV, paving the way for restoring a theory of the CbV lambda-calculus as elegant as the one for call-by-name. We test our approach on a type-theoretic/semantic characterization of normalizability and on the notion of solvability.
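The "weakness" referred to is that Plotkin's beta_v rule fires only when the argument is already a value (a variable or an abstraction). A minimal interpreter sketch makes this restriction visible; the encoding below is our own illustration, and substitution is naive, so it assumes terms without variable capture.

```python
# Terms: ('var', x) | ('lam', x, body) | ('app', f, a).
# Plotkin's beta_v: (lam x. M) V -> M[V/x] only when V is a *value*.
def is_value(t):
    return t[0] in ('var', 'lam')

def subst(t, x, v):
    """Naive substitution t[v/x]; assumes no variable capture can occur."""
    if t[0] == 'var':
        return v if t[1] == x else t
    if t[0] == 'lam':
        return t if t[1] == x else ('lam', t[1], subst(t[2], x, v))
    return ('app', subst(t[1], x, v), subst(t[2], x, v))

def step(t):
    """One leftmost CbV reduction step, or None if t is normal."""
    if t[0] != 'app':
        return None
    f, a = t[1], t[2]
    s = step(f)
    if s is not None:
        return ('app', s, a)
    s = step(a)
    if s is not None:
        return ('app', f, s)
    if f[0] == 'lam' and is_value(a):   # beta_v: the argument must be a value
        return subst(f[2], f[1], a)
    return None

def eval_cbv(t):
    while (s := step(t)) is not None:
        t = s
    return t
```

Terms like (lam x. M) applied to a stuck non-value simply do not reduce under beta_v, which is precisely the source of the syntax/semantics mismatch the talk addresses.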

Cloning and Deleting Quantum Information from a Linear Logical Point of View

ABSTRACT. This paper presents a linear sequent calculus in accordance with the no-cloning and no-deleting theorems of quantum computing. The calculus represents operations on matrices in terms of linear sequent rules, satisfying admissibility of cut. It is possible to define a non-trivial categorical semantics for it using categories intrinsic to vector spaces extended with a Kronecker product, which can be viewed as a partial answer to a problem posed by Abramsky.

14:00-14:55 Session 34Q (VardiFest)
Location: Taub 1
Descriptive complexity and inexpressibility proofs

ABSTRACT. I will discuss ways to simplify inexpressibility proofs. In particular, I will discuss an approach by Fagin, Stockmeyer and Vardi that greatly simplifies my earlier proof (from my Ph.D. thesis) that monadic NP is not closed under complement; monadic NP consists of properties defined by existential second-order sentences in which the existential second-order quantifiers range only over subsets of the domain.

A Brief Glimpse of the Skolem Landscape

ABSTRACT. The Skolem Problem asks how to determine algorithmically whether a given linear recurrence sequence (such as the Fibonacci numbers) has a zero. It is a central question in dynamical systems and number theory, and has many connections to other branches of mathematics and computer science, such as program analysis and automated verification. Unfortunately, its decidability has been open for nearly a century! In this talk, I will present a brief survey of what is known on the Skolem Problem and related questions, including recent and ongoing developments.
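To make the problem statement concrete, here is a small illustrative Python sketch (the function and its name are our own, not from the talk): it searches a finite prefix of a linear recurrence sequence for a zero. Note that this is only a bounded search; finding no zero within the horizon proves nothing, which is precisely why the decidability of the Skolem Problem remains open.

```python
def first_zero(coeffs, initial, horizon):
    """Search the first `horizon` terms of the linear recurrence
    u_n = coeffs[0]*u_{n-1} + ... + coeffs[k-1]*u_{n-k}
    for a zero; return its index, or None if none is found.

    Only a bounded search: a None result proves nothing about the
    rest of the sequence.
    """
    window = list(initial)  # the k most recent terms, oldest first
    for n, term in enumerate(window):
        if term == 0:
            return n
    for n in range(len(window), horizon):
        # next term from the recurrence: newest terms get coeffs[0], ...
        nxt = sum(c * u for c, u in zip(coeffs, reversed(window)))
        if nxt == 0:
            return n
        window = window[1:] + [nxt]
    return None
```

For example, u_n = u_{n-1} - u_{n-2} with u_0 = u_1 = 1 gives 1, 1, 0, -1, -1, 0, ..., whose first zero is at index 2, while the Fibonacci sequence starting 1, 1 has no zero at all.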

Approximations of Certain Answers in First-Order Logic

ABSTRACT. I will explain how old papers by Moshe Vardi and Ray Reiter on query answering over incomplete databases led to new recently discovered schemes of efficient approximation of certain answers, and present a previously unknown and exceptionally simple formulation of such schemes for first-order queries over relational databases.

Towards Algebraic Techniques for Descriptive Complexity

ABSTRACT. I describe my work in connection with several papers by Moshe Vardi.

14:00-15:30 Session 34R: Doctoral Program (CP)
Location: Taub 4
Aggressive Bound Descent for Constraint Optimization
PRESENTER: Thibault Falque

ABSTRACT. Backtrack search is a classical complete approach for exploring the search space of a constraint optimization problem. Each time a new solution is found during search, its associated bound is used to further constrain the problem, and thus the remaining search. An extreme (bad) scenario is when solutions are found in sequence with very small differences between successive bounds. In this paper, we propose an aggressive bound descent (ABD) approach to remedy this problem: new bounds are modified exponentially as long as the search algorithm is successful. We show that this approach can render the solver more robust, especially at the beginning of search. Our experiments confirm this behavior for constraint optimization.
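The exponential descent idea can be phrased as follows; this is a hypothetical minimization sketch of the general scheme, not the authors' implementation, and `solve_with_bound` is an assumed solver interface.

```python
def aggressive_bound_descent(solve_with_bound, initial_step=1):
    """Minimization sketch. `solve_with_bound(b)` is an assumed API that
    returns an objective value <= b if a solution exists, else None.
    After each success the requested improvement doubles; after each
    failure it halves, falling back toward classic bound descent.
    """
    best = solve_with_bound(float("inf"))  # first incumbent
    if best is None:
        return None  # infeasible problem
    step = initial_step
    while step >= 1:
        candidate = solve_with_bound(best - step)  # aggressive request
        if candidate is not None:
            best = candidate
            step *= 2   # success: be more aggressive
        else:
            step //= 2  # failure: retreat to smaller improvements
    return best  # best - 1 was proved infeasible, so this is optimal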

Exploiting Model Entropy to Make Branching Decisions in Constraint Programming
PRESENTER: Auguste Burlats

ABSTRACT. Branching decisions have a strong impact on performance in Constraint Programming (CP). Therefore, robust generic variable-ordering heuristics are an important area of research in the CP community. By allowing us to compute marginal probability distributions over the domain of each variable in the model, CP-based Belief Propagation also allows us to compute the entropy of each variable. We present two new branching heuristics exploiting entropy: one chooses the variable with the smallest entropy; the other is inspired by impact-based search and chooses the variable whose fixing is believed to have the strongest impact on the entropy of the model. We also propose a dynamic stopping criterion for iterated belief propagation based on variations in model entropy. We test our heuristics on various constraint satisfaction problems from the XCSP and MiniZinc benchmarks.
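The first heuristic is simple to state given the marginals; the following Python sketch (data layout and names are our own illustration, not the paper's code) picks the unfixed variable whose marginal distribution has the smallest Shannon entropy.

```python
import math

def entropy(marginal):
    """Shannon entropy (in bits) of a marginal distribution over a
    variable's domain, given as {value: probability}."""
    return -sum(p * math.log2(p) for p in marginal.values() if p > 0)

def min_entropy_variable(marginals):
    """Pick the unfixed variable with the smallest marginal entropy.

    `marginals` maps variable name -> {value: probability}, e.g. as
    estimated by belief propagation; variables whose domain has a
    single remaining value are treated as already fixed.
    """
    unfixed = {v: m for v, m in marginals.items() if len(m) > 1}
    return min(unfixed, key=lambda v: entropy(unfixed[v]))
```

For instance, a near-decided variable with marginal {0: 0.9, 1: 0.1} (entropy about 0.47 bits) is branched on before a maximally uncertain one with {0: 0.5, 1: 0.5} (entropy 1 bit).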

Finding Counterfactual Explanations through Constraint Relaxations
PRESENTER: Sharmi Dev Gupta

ABSTRACT. Interactive constraint systems often suffer from infeasibility (no solution) due to conflicting user constraints. A common approach to recovering from infeasibility is to eliminate the constraints that cause the conflicts in the system. This allows the system to provide an explanation of the form: “if the user is willing to drop some of their constraints, there exists a solution”. However, one can criticise this form of explanation as not being very informative. A counterfactual explanation is a type of explanation that can provide a basis for the user to recover feasibility by helping them understand which changes can be applied to their existing constraints, rather than removing them. This approach has been extensively studied in the machine learning field, but requires a more thorough investigation in the context of constraint satisfaction. We propose an iterative method based on conflict detection and maximal relaxations in over-constrained constraint satisfaction problems to help compute a counterfactual explanation.

Symmetry breaking and Knowledge Compilation
PRESENTER: Andrea Balogh

ABSTRACT. Constraint Programming is a powerful method to solve combinatorial problems, but due to the large search space, solving can be very time consuming. Diagnosis, planning, and product configuration are example use cases. These systems are often used online to answer queries. Compilation methods were developed to deal with the complexity of solving the problems offline and to create a representation that can answer queries in polynomial time. Symmetry breaking is the addition of constraints to eliminate symmetries, in general speeding up search and reducing the number of solutions. Knowledge compilation looks for succinct representations that are also tractable, that is, they support queries and transformations in polytime. Finding the smallest representation is often the bottleneck of compilation methods. Compiled representations are directed acyclic graphs representing the set of all solutions. In this paper we investigate whether breaking symmetries, that is, representing fewer solutions, always leads to a smaller compiled representation. We consider four compilers and three highly symmetrical problems. When we break symmetries, a reduction in compilation size is observed on all the problems, with top-down compilers obtaining the greater reduction.

Improved Sample Complexity Bounds for Branch-and-Cut
PRESENTER: Siddharth Prasad

ABSTRACT. The branch-and-cut algorithm for integer programming has a wide variety of tunable parameters that have a huge impact on its performance, but which are challenging to tune by hand. An increasingly popular approach is to use machine learning to configure these parameters based on a training set of integer programs from the application domain. We bound how large the training set should be to ensure that for any configuration, its average performance over the training set is close to its expected future performance. Our guarantees apply to parameters that control the most important aspects of branch-and-cut: node selection, branching constraint selection, and cut selection, and are sharper and more general than those from prior research.

Exploiting Functional Constraints in Automatic Dominance Breaking for Constraint Optimization (Extended Abstract)
PRESENTER: Allen Z. Zhong

ABSTRACT. Dominance breaking is an effective technique to reduce the time for solving constraint optimization problems. Lee and Zhong propose an automatic dominance breaking framework for a class of constraint optimization problems based on specific forms of objectives and constraints. In this paper, we propose to enhance the framework for problems with nested function calls which can be flattened to functional constraints. In particular, we focus on aggregation functions and exploit such properties as monotonicity, commutativity and associativity to give an efficient procedure for generating effective dominance breaking nogoods. Experimentation also shows orders-of-magnitude runtime speedup using the generated dominance breaking nogoods and demonstrates the ability of our proposal to reveal dominance relations in the literature and discover new dominance relations on problems with ineffective or no known dominance breaking constraints.

Large Neighborhood Search for Robust Solutions for Constraint Satisfaction Problems with Ordered Domains
PRESENTER: Jheisson López

ABSTRACT. Real-world Constraint Satisfaction Problems (CSPs) are subject to uncertainty/dynamism that cannot be known in advance. Some techniques in the literature offer robust solutions for CSPs, but they have low performance in large-scale CSPs. We propose a Large Neighbourhood Search (LNS) algorithm and a value selection heuristic that searches for robust solutions in large-scale CSPs.

15:00-15:30 Session 35A (LFMTP)

Frank Pfenning special session: contributed talks

Location: Taub 3
Associativity or Non-Associativity, Local or Global!
PRESENTER: Eben Blaisdell

ABSTRACT. Lambek's two calculi, the associative one and the non-associative one, have their advantages and disadvantages for the analysis of natural language syntax by means of categorial grammar. In some cases, associativity leads to over-generation, i.e., validation of grammatically incorrect sentences. In other situations, associativity is useful. We will discuss two approaches. One approach, developed by Morrill and Moortgat, begins with the associative calculus and reconstructs local non-associativity by means of the so-called bracket modalities, ultimately leading to Morrill's CatLog parser. Bracket modalities interact in a subtle way with the subexponential modalities originating in linear logic. Another approach, developed by Moot and Retoré, begins with the non-associative calculus and utilizes multi-modalities, ultimately leading to the Grail parser. We enhance the latter approach in our IJCAR 2022 paper, showing that local associativity may be expressed by means of subexponentials. Some aspects of the two approaches touch on Frank Pfenning's work on ordered logic and on adjoint logic. We discuss decidability and undecidability results in both approaches.

15:00-15:40 Session 35B (VardiFest)
Location: Taub 1
Fixpoint Logics, Relational Machines, and Computational Complexity

ABSTRACT. In this talk I will recall a fruitful collaboration that Serge Abiteboul and I had with Moshe, which resulted in the 1997 JACM article with the above title. Under Moshe's impetus, the article completed in a very elegant way previous results, providing a comprehensive and compelling picture, and remains one of my favorite papers.

The results establish a general connection between fixpoint logic and complexity. On one side, we have fixpoint logic, parameterized by the choices of 1st-order operators (inflationary or noninflationary) and iteration constructs (deterministic, nondeterministic, or alternating). On the other side, we have the complexity classes between P and EXPTIME. The parameterized fixpoint logics express the complexity classes P, NP, PSPACE, and EXPTIME, but only over ordered structures.

The order requirement highlights an inherent mismatch between complexity and logic -- while computational devices work on encodings of problems, logic is applied directly to the underlying mathematical structures. To overcome this mismatch, we used a theory of relational complexity based on the relational machine, a computational device that operates directly on structures. Relational complexity bridges the gap between standard complexity and fixpoint logic. On one hand, questions about containments among standard complexity classes can be translated to questions about containments among relational complexity classes. On the other hand, the expressive power of fixpoint logic can be precisely characterized in terms of relational complexity classes. This tight, three-way relationship among fixpoint logics, relational complexity and standard complexity yields in a uniform way logical analogs to all containments among the complexity classes P, NP, PSPACE, and EXPTIME. The logical formulation shows that some of the most tantalizing questions in complexity theory boil down to a single question: the relative power of inflationary vs. noninflationary 1st-order operators.

To Count or Not to Count: A Personal Perspective

ABSTRACT. Although I have worked closely with Moshe on counting problems, it will be a tall order to count all the things I've learnt from him -- through his articles, lectures, one-on-one conversations and gems of advice over the years. So I'd rather not try to count but briefly talk of a few (among many) occasions at different stages of my career, when I kept running into Moshe's profound contributions and insights. Starting from my grad school days, Moshe's result with Pierre Wolper on automata theoretic LTL model checking was among my first few introductions to formal verification. Much later, while working with my Ph.D. student on logic and some aspects of finite model theory, we kept running into beautiful results due to Moshe and his collaborators. More recently, while working with Moshe on PAC counting, I recall some very interesting discussions on how 2-, 3- or even 2.5-universal hashing might just be the sweet spot to help achieve a balance between scalability and strong formal guarantees. Almost a decade later, we know how spot-on his prediction was.

Moshe Vardi and Intel Corporation: Long and Fruitful Collaboration
PRESENTER: Eli Singerman

ABSTRACT. In this short talk, we will give a retrospective of the collaboration Moshe had with Intel over the past 25 years! This long-lasting collaboration was key in bringing formal verification to industry. It is an excellent example of Vardi’s unique contribution and ability to bridge theory and practice.

A Toast for Moshe at the FLoC VardiFest

ABSTRACT. This summer, we gather to celebrate Moshe Vardi's many pioneering contributions to the theory and practice of computer science, and his leadership qualities and activities. My toast will focus on some less-known ingenious traits he exhibited starting very early in his career. In particular, I will discuss how Moshe acted in the role of a grand translator who applied advanced archery strategies in database research, and will comment on his role as a member of the Gang-of-Four.

15:30-16:00Coffee Break
16:00-17:30 Session 37A (ABR)

Tutorial: Assumption-Based Nonmonotonic Reasoning

Location: Ullmann 104
Tutorial: Assumption-Based Nonmonotonic Reasoning (Part 2)

ABSTRACT. The tutorial is intended to provide a gentle introduction to Assumption-Based Reasoning (ABR) which will be argued to constitute the core of nonmonotonic reasoning in AI. Webpage https://sites.google.com/view/kr2022-abr/home

16:00-17:00 Session 37B (FoMLAS)

FoMLAS Session 8

Location: Ullmann 201
Minimal Multi-Layer Modifications of Deep Neural Networks
PRESENTER: Idan Refaeli

ABSTRACT. Deep neural networks (DNNs) have become increasingly popular in recent years. However, despite their many successes, DNNs may also err and produce incorrect and potentially fatal outputs in safety-critical settings, such as autonomous driving, medical diagnosis, and airborne collision avoidance systems. Much work has been put into detecting such erroneous behavior in DNNs, e.g., via testing or verification, but removing these errors after their detection has received less attention. We present here a new framework, called 3M-DNN, for repairing a given DNN, which is known to err on some set of inputs. The novel repair procedure employed by 3M-DNN computes a modification to the network’s weights that corrects its behavior, and attempts to minimize this change via a sequence of calls to a backend, black-box DNN verification engine. To the best of our knowledge, our method is the first one that allows repairing the network by simultaneously modifying the weights of multiple layers. This is achieved by splitting the network into sub-networks, and applying a single-layer repair technique to each component. We evaluated 3M-DNN on an extensive set of benchmarks, obtaining promising results.

Self-Correcting Neural Networks For Safe Classification
PRESENTER: Ravi Mangal

ABSTRACT. Classifiers learnt from data are increasingly being used as components in systems where safety is a critical concern. In this work, we present a formal notion of safety for classifiers via constraints called safe-ordering constraints. These constraints relate requirements on the order of the classes output by a classifier to conditions on its input, and are expressive enough to encode various interesting examples of classifier safety specifications from the literature. For classifiers implemented using neural networks, we also present a run-time mechanism for the enforcement of safe-ordering constraints. Our approach is based on a self-correcting layer, which provably yields safe outputs regardless of the characteristics of the classifier input. We compose this layer with an existing neural network classifier to construct a self-correcting network (SC-Net), and show that in addition to providing safe outputs, the SC-Net is guaranteed to preserve the classification accuracy of the original network whenever possible. Our approach is independent of the size and architecture of the neural network used for classification, depending only on the specified property and the dimension of the network’s output; thus it is scalable to large state-of-the-art networks. We show that our approach can be optimized for a GPU, introducing run-time overhead of less than 1 ms on current hardware, even on large, widely used networks containing hundreds of thousands of neurons and millions of parameters.

16:00-17:30 Session 37C: s(CASP) extensions and applications II (GDE)

Session focused on the latest applications of s(CASP): 2 regular talks (20 minutes plus 5 minutes of Q&A), followed by a panel to discuss the present and future of goal-directed execution of answer set programs (40 minutes).

Location: Ullmann 310
Summary on "Hybrid Neuro-Symbolic Approach for Text-Based Games using Inductive Logic Programming"

ABSTRACT. In this paper, I briefly summarize my work titled "Hybrid Neuro-Symbolic Approach for Text-Based Games using Inductive Logic Programming". Text-based games (TBGs) have emerged as an important test-bed, requiring reinforcement learning (RL) agents to combine natural language understanding with reasoning. A key challenge for agents solving this task is to generalize across multiple games and show good results on both seen and unseen objects. To tackle these issues, we have designed a hybrid neuro-symbolic framework for TBGs that uses symbolic reasoning along with the neural RL model. We also use WordNet as an external commonsense knowledge source to bring in information that generalizes the hypothesis. We have tested our work in different settings on TWC games and shown that agents that incorporate the neuro-symbolic hybrid approach with the generalized rules outperform the baseline agents.

Blawx: Web-based user-friendly Rules as Code

ABSTRACT. This paper describes Blawx, a prototype web-based user-friendly Rules as Code tool powered by goal-directed answer set programming. The paper briefly describes Rules as Code and introduces desirable qualities for Rules as Code tools. It provides justifications for Blawx’s use of the Google Blockly library and the s(CASP) reasoning system. It then provides a step-by-step tour of how Blawx allows a user to generate an answer set program representing their understanding of a statute, and use that encoding to power an application. The paper concludes with a brief discussion of the current short-term and anticipated long-term development objectives for Blawx.

GDE of ASP: applications, potential and future directions

ABSTRACT. Panel discussion with TBA

16:00-17:30 Session 37D (HoTT/UF)
Location: Ullmann 303
Models of homotopy type theory from the Yoneda embedding
A type-theoretic model structure over cubes with one connection presenting spaces
PRESENTER: Evan Cavallo
Groupoidal Realizability: Formalizing the Topological BHK Interpretation
16:00-18:00 Session 37E: Higher-order rewriting and CoCo (IWC)
Location: Ullmann 306
Confluence by Higher-Order Multi-One Critical Pairs with an Application to the Functional Machine Calculus

ABSTRACT. We show that all multi-one critical peaks being many-multi joinable entails confluence of positional pattern rewrite systems (PRSs). To apply this result to the functional machine calculus (FMC), we embed the FMC in a third-order PRS and show its multi-one critical peaks to be one-multi joinable, regaining its (known) confluence via higher-order rewriting.

Checking Confluence of Rewrite Rules in Haskell

ABSTRACT. We present GSOL, a confluence checker for GHC. It checks the confluence property of rewrite rules in a Haskell program by using the confluence checker SOL (Second-Order Laboratory). The Glasgow Haskell Compiler (GHC) allows the user to define rewrite rules that optimize Haskell programs in the compilation pipeline. Currently, GHC does not check the confluence of user-defined rewrite rules; if the rewrite rules are not confluent, then optimization using these rules may produce unexpected results. Therefore, checking the confluence of rewrite rules is important. We implement GSOL using the plugin mechanism of GHC.

Confluence Competition
16:00-17:00 Session 37F (LFMTP)

Frank Pfenning special session: invited talk

Location: Taub 3
Reasoning about Specifications in LF

ABSTRACT. The dependently typed lambda calculus LF provides a convenient means for encoding specifications of object systems; dependent types in this calculus constitute a natural representation of rule-based relational specifications and lambda terms provide for a higher-order abstract syntax based treatment of binding structure. An important consideration from the perspective of formalizing object systems in this context is how reasoning about such specifications can be supported. One approach towards addressing this issue, pioneered by Frank Pfenning and Carsten Schuermann, is to use LF types with specified modes to represent properties of LF specifications; the validity of these properties is then demonstrated by describing inhabitants for such types and showing their totality relative to the chosen modes. We will describe a different approach in this talk. This approach is based on describing a logic that uses LF typing judgements as atomic formulas and that allows more complex formulas to be constructed using logical connectives and quantification over contexts and term variables. The validity of quantifier-free formulas in the logic is determined by LF derivability and a classical interpretation of the connectives, and quantifiers are interpreted by a substitution semantics. Mechanization of reasoning is realized by describing proof rules that are sound with respect to the semantics. We will discuss a collection of proof rules that encode LF meta-theorems and provide for case analysis and induction based reasoning over LF derivations, in addition to reasoning based on the usual interpretation of logical symbols. This approach shares some aspects with the one explored by Mary Southern and Kaustuv Chaudhuri using the Abella system but differs from it in that LF derivability is treated explicitly in the logic rather than indirectly via a translation to a predicate logic form. The talk will also touch upon the Adelfa proof assistant that implements the logic.

[The talk will be based on collaborative work with Mary Southern and Chase Johnson.]

16:00-17:30 Session 37I (PC)
Location: Ullmann 309
On vanishing sums of roots of unity in polynomial calculus and sum-of-squares
PRESENTER: Ilario Bonacina

ABSTRACT. Vanishing sums of roots of unity can be seen as a natural generalization of knapsack from Boolean variables to variables taking values over the roots of unity. We show that these sums are hard to prove for polynomial calculus and for sum-of-squares, both in terms of degree and size.

Exponential separations using guarded extension variables

ABSTRACT. We study the complexity of proof systems augmenting resolution with inference rules that allow, given a formula F in conjunctive normal form, deriving clauses that are not necessarily logically implied by F but whose addition to F preserves satisfiability. When the derived clauses are allowed to introduce variables not occurring in F, the systems we consider become equivalent to extended resolution. We are concerned with the versions of these systems "without new variables". They are called BC-, RAT-, SBC- and GER-, denoting respectively blocked clauses, resolution asymmetric tautologies, set-blocked clauses, and generalized extended resolution. Except for SBC-, these systems are known to be exponentially weaker than extended resolution. They are, however, all equivalent to it under a relaxed notion of simulation that allows the translation of the formula along with the proof when moving between proof systems. By taking advantage of this fact, we construct formulas that exponentially separate RAT- from GER- and vice versa. With the same strategy, we also separate SBC- from RAT-. Additionally, we give polynomial-size SBC- proofs of the pigeonhole principle, which separates SBC- from GER- by a previously known lower bound. These results also separate the three systems from BC- since they all simulate it. We thus give an almost complete picture of their relative strengths. Along the way, we prove a partial simulation of RAT- by BC- that is to our knowledge the only example of a nontrivial simulation in proof complexity that cannot necessarily be carried out in time polynomial in the size of the produced proof, which highlights the semantic nature of these systems.
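As a reminder of the simplest of these notions, a clause C is blocked on one of its literals l with respect to a formula F if every resolvent of C with a clause of F containing the negation of l is a tautology. The following Python sketch (DIMACS-style integer literals; our own illustration, not from the paper) checks this condition directly.

```python
def is_blocked(clause, lit, formula):
    """Check whether `clause` is blocked on `lit` w.r.t. `formula`:
    every resolvent on `lit` with a clause containing -lit must be a
    tautology. Clauses are lists of nonzero integer literals, where
    -x denotes the negation of x (DIMACS convention)."""
    assert lit in clause
    for other in formula:
        if -lit in other:
            # resolvent of `clause` and `other` on `lit`
            resolvent = (set(clause) - {lit}) | (set(other) - {-lit})
            if not any(-x in resolvent for x in resolvent):
                return False  # a non-tautological resolvent exists
    return True  # all resolvents are tautologies (or none exist)
```

For example, [1, 2] is blocked on literal 1 with respect to [[-1, -2]], since the only resolvent {2, -2} is a tautology; adding such a clause preserves satisfiability even though it need not be logically implied.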

16:00-17:00 Session 37J (PLP)
Location: Ullmann 305
Explainability, causality and computational and-or graphs

ABSTRACT. In recent years, there has been increasing interest in studying causality-related properties in machine learning models generally, and in generative models in particular. While this is well-motivated, it inherits the fundamental computational hardness of probabilistic inference, making exact reasoning intractable. Tractable probabilistic models have also recently emerged, which guarantee that conditional marginals can be computed in time linear in the size of the model, where the model is usually learned from data. In the talk, we will discuss a number of new results in this area: what kinds of causal queries can be answered on trained tractable models, what kinds of domain constraints can be posed, and what methods are available to extract (counterfactual) explanations from them.

16:00-17:00 Session 37K: Proofs II (POS)
Location: Ullmann 311
SATViz: Real-Time Visualization of Clausal Proofs
PRESENTER: Tim Holzenkamp

ABSTRACT. Visual layouts of graphs representing SAT instances can highlight the community structure of SAT instances. The community structure of SAT instances has been associated with both instance hardness and known clause quality heuristics. Our tool SATViz visualizes CNF formulas using the variable interaction graph and a force-directed layout algorithm. With SATViz, clause proofs can be animated to continuously highlight variables that occur in a moving window of recently learned clauses. If needed, SATViz can also create new layouts of the variable interaction graph with the adjusted edge weights. In this paper, we describe the structure and feature set of SATViz. We also present some interesting visualizations created with SATViz.
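The variable interaction graph underlying such visualizations is straightforward to construct; here is a small Python sketch (the weighting scheme shown, counting shared clauses, is one common convention and may differ from SATViz's).

```python
from itertools import combinations

def variable_interaction_graph(cnf):
    """Build a weighted variable interaction graph from a CNF formula.

    `cnf` is a list of clauses, each a list of nonzero integer literals
    (DIMACS style). Nodes are variables; an edge joins two variables
    that co-occur in some clause, weighted here by the number of
    clauses they share. Returns {(u, v): weight} with u < v.
    """
    weights = {}
    for clause in cnf:
        variables = sorted({abs(lit) for lit in clause})  # drop polarity
        for u, v in combinations(variables, 2):
            weights[(u, v)] = weights.get((u, v), 0) + 1
    return weights
```

A force-directed layout of this graph then tends to pull densely connected variable communities into visually distinct clusters.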

Certified Symmetry and Dominance Breaking for Combinatorial Optimisation
PRESENTER: Jakob Nordstrom

ABSTRACT. Symmetry and dominance breaking can be crucial for solving hard combinatorial search and optimisation problems, but the correctness of these techniques sometimes relies on subtle arguments. For this reason, it is desirable to produce efficient, machine-verifiable certificates that solutions have been computed correctly. Building on the cutting planes proof system, we develop a certification method for optimisation problems in which symmetry and dominance breaking are easily expressible. Our experimental evaluation demonstrates that we can efficiently verify fully general symmetry breaking in Boolean satisfiability (SAT) solving, thus providing, for the first time, a unified method to certify a range of advanced SAT techniques that also includes XOR and cardinality reasoning. In addition, we apply our method to maximum clique solving and constraint programming as a proof of concept that the approach applies to a wider range of combinatorial problems.

This is a presentation-only submission of a paper that appeared at the 36th AAAI Conference on Artificial Intelligence (AAAI '22).

16:00-17:30 Session 37L (QBF)
Location: Ullmann 205
From QBF to the Dynamic Logic of Propositional Assignments and back: a computational perspective

ABSTRACT. The Dynamic Logic of Propositional Assignments (DL-PA) is a formalism that combines logic, programming, and non-determinism constructs. Starting from DL-PA, we build reductions to and from Quantified Boolean Formulas. This prompts us to unexpectedly revisit prenexing and clausification procedures for quantified propositional formulas. To the best of our knowledge, this latter task was not completely settled when the equivalence operator is involved. (Joint work with Andreas Herzig.)

Lower Bounds for QBFs of Bounded Treewidth

ABSTRACT. The problem of deciding the validity (QSat) of quantified Boolean formulas (QBF) is a vivid research area in both theory and practice. In the field of parameterized algorithmics, the well-studied graph measure treewidth turned out to be a successful parameter. A well-known result by Chen [9] is that QSat, when parameterized by the treewidth of the primal graph and the quantifier rank of the input formula, is fixed-parameter tractable. More precisely, the runtime of such an algorithm is polynomial in the formula size and exponential in the treewidth, where the exponential function in the treewidth is a tower whose height is the quantifier rank. A natural question is whether one can significantly improve these results and decrease the tower while assuming the Exponential Time Hypothesis (ETH). In recent years, there has been growing interest in establishing lower bounds under the ETH, showing mostly problem-specific lower bounds up to the third level of the polynomial hierarchy. Still, an important question is to settle this as generally as possible and to cover the whole polynomial hierarchy. In this work, we show lower bounds based on the ETH for arbitrary QBFs parameterized by treewidth and quantifier rank. More formally, we establish lower bounds for QSat and treewidth, namely, that under the ETH there cannot be an algorithm that solves QSat of quantifier rank i in runtime significantly better than i-fold exponential in the treewidth and polynomial in the input size. In doing so, we provide a reduction technique to compress treewidth that encodes dynamic programming on arbitrary tree decompositions. Further, we describe a general methodology for a more fine-grained analysis of problems parameterized by treewidth that are at higher levels of the polynomial hierarchy.
Finally, we illustrate the usefulness of our results by discussing various applications to problems located higher in the polynomial hierarchy, in particular various problems from the literature such as projected model counting.

16:00-17:30 Session 37M (TERMGRAPH)

Regular papers, afternoon session, including discussion

Location: Ullmann 101
Formalization and analysis of BPMN using graph transformation systems
PRESENTER: Tim Kräuter

ABSTRACT. BPMN is a widely used standard notation for defining intra- and inter-organizational workflows. However, the informal description of the BPMN execution semantics leads to different interpretations of BPMN constructs and to difficulties in checking behavioral properties. Existing approaches to formalizing BPMN's execution semantics cover it only partially. To address this, we propose a formalization that, compared to other approaches, covers most of the BPMN constructs. Our approach is based on a model transformation from BPMN models to graph grammars. As a proof of concept, we have implemented our approach in an open-source web-based tool.

Ideograph: A Language for Expressing and Manipulating Structured Data
PRESENTER: Stephen Mell

ABSTRACT. We introduce Ideograph, a language for expressing and manipulating structured data. Its types describe kinds of structures, such as natural numbers, lists, multisets, binary trees, syntax trees with variable binding, directed acyclic graphs, and relational databases. Fully normalized terms of a type correspond exactly to members of the corresponding structure, analogous to a Church encoding. Non-normal terms encode alternate representations of their fully normalized forms. In this paper, we first illustrate the correspondence between terms in our language and standard Church encodings, and then we exhibit the type of closed terms in untyped lambda calculus.
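As a quick refresher on the Church encodings the abstract alludes to (an illustrative sketch, not code from the paper), a natural number n can be represented as the higher-order function that applies a function f to an argument x exactly n times:

```python
# Church numerals: n is encoded as n-fold function application.
zero = lambda f: lambda x: x
succ = lambda n: lambda f: lambda x: f(n(f)(x))

def to_int(church):
    """Decode a Church numeral by counting applications of (+1) to 0."""
    return church(lambda k: k + 1)(0)

three = succ(succ(succ(zero)))
print(to_int(three))  # 3
```

Fully normalized terms of the corresponding Ideograph type would play the role that these fully reduced lambda terms play here: one term per natural number.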

16:00-16:40 Session 37Q (VardiFest)
Location: Taub 1
Understandable Proofs of Unsatisfiability

ABSTRACT. Proofs of unsatisfiability facilitate the validation of SAT solver results. Practically all top-tier solvers support proof logging and these proofs can efficiently be checked using formally-verified tools. However, the size of these proofs is typically large and sometimes gigantic, thereby making them impossible to understand. On the other hand, one can extract useful information out of proofs, such as unsatisfiable cores or interpolants.

We present some results on extracting some understanding from proofs of unsatisfiability. This work started after a question by Moshe Vardi about the effect of using a large interval for the Pythagorean Triples problem on the size of the proof. Increasing the size of the interval turned out to reduce the size of the proof substantially. It might even be possible to produce a humanly-understandable proof for this problem if the interval is large enough.

We also show some other results in this direction. For example, short proofs of unsatisfiability have been crucial to constructing small unit-distance graphs with chromatic number 5. These graphs are important building blocks to solving the Hadwiger-Nelson problem. Also, compact proofs of mutilated chessboard problems provided an alternative short, humanly-understandable argument of unsatisfiability.

Moshe Y. Vardi's First Love

ABSTRACT. In 2008, Moshe Y. Vardi received the ACM SIGMOD Edgar F. Codd Innovations Award for "fundamental contributions to the foundations of relational databases". In his acceptance speech, Moshe referred to database theory as his "first love". The purpose of this talk is to give a bird's eye view of Moshe's contributions to database theory and of his enduring legacy by highlighting some of Moshe's most influential papers in this area.

A Comment to Moshe by Ron Fagin
16:00-17:30 Session 37R: Doctoral Program (CP)
Location: Taub 4
Optimized Code Generation against Power Side Channels

ABSTRACT. Software masking, a software mitigation against power side-channel attacks, aims at removing secret dependencies from the power traces that may leak cryptographic keys. However, high-level software mitigations often depend on general-purpose compilers, which do not preserve non-functional properties. What is more, microarchitectural features, such as the memory bus and register reuse, may also reveal secret information. These abstractions are not visible in the high-level implementation of the program; instead, they are decided at compile time. To remedy these problems, security engineers often turn off compiler optimizations and/or perform local, post-compilation transformations. However, these solutions lead to inefficient code. To deal with these issues, we propose Secure by Construction Code Generation (SecConCG), a constraint-based compiler backend that generates secure code. SecConCG can control the quality of the mitigated program by efficiently searching for the best possible low-level implementation according to a processor cost model. In our experiments with ten masked implementations on MIPS and ARM Cortex M0, SecConCG improves the generated code by 10% to 9x compared to non-optimized secure code, at a small overhead of up to 8% compared to non-secure optimized code.
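To make the notion of software masking concrete (a minimal first-order Boolean masking sketch, not SecConCG's actual transformation):

```python
import random

def mask(secret, bits=8):
    """Split a secret into two shares whose XOR recovers it.

    Each share in isolation is uniformly random, so a power trace that
    leaks only one share reveals nothing about the secret."""
    r = random.randrange(1 << bits)
    return r, secret ^ r

share1, share2 = mask(0xA7)
print(hex(share1 ^ share2))  # 0xa7
```

The compile-time hazards the abstract mentions arise when, for example, the compiler schedules both shares through the same register or bus transaction, transiently recombining the secret; this is exactly what a cost-model-aware backend can forbid by construction.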

Boolean Functional Synthesis and its Applications
PRESENTER: Priyanka Golia

ABSTRACT. Given a Boolean specification between a set of inputs and outputs, the problem of Boolean functional synthesis is to synthesize each output as a function of the inputs such that the specification is met. In this report, we first discuss a state-of-the-art data-driven approach for Boolean functional synthesis, called Manthan [1]. Motivated by progress in machine learning, Manthan views functional synthesis as a classification problem and relies on advances in constrained sampling for data generation, and on advances in automated reasoning for a novel proof-guided repair and provable verification. We then discuss the challenges faced by Manthan during data-driven synthesis and the remedies to overcome those challenges [2]. Finally, to discuss applications, we show a reduction of program synthesis to a special variant of Boolean functional synthesis in which we have explicit dependencies on universally quantified variables [3]. We hope that program synthesis will be the killer application that motivates further research into functional synthesis.
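A toy formulation of the synthesis problem (an illustrative brute-force check over a tiny specification, not Manthan's algorithm): a candidate Skolem function F realizes a specification phi(x, y) if phi(x, F(x)) holds whenever some output value satisfies phi(x, y).

```python
from itertools import product

def realises(phi, F, num_inputs):
    """Check, by exhaustive enumeration, that F is a valid Skolem
    function for the single-output specification phi."""
    for x in product([False, True], repeat=num_inputs):
        # If phi is satisfiable at x, F(x) must satisfy it too.
        if any(phi(x, y) for y in (False, True)) and not phi(x, F(x)):
            return False
    return True

phi = lambda x, y: y == (x[0] ^ x[1])   # spec: output must equal x1 XOR x2
print(realises(phi, lambda x: x[0] ^ x[1], 2))  # True
```

Real instances replace this enumeration with constrained sampling, learning, and proof-guided repair, since the input space is exponentially large.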

[1] Priyanka Golia, Subhajit Roy, and Kuldeep S. Meel. Manthan: A Data-Driven Approach for Boolean Function Synthesis. In Proceedings of the International Conference on Computer-Aided Verification (CAV), 2020.
[2] Priyanka Golia, Friedrich Slivovsky, Subhajit Roy, and Kuldeep S. Meel. Engineering an Efficient Boolean Functional Synthesis Engine. In Proceedings of the International Conference on Computer-Aided Design (ICCAD), 2021.
[3] Priyanka Golia, Subhajit Roy, and Kuldeep S. Meel. Program Synthesis as Dependency Quantified Formula Modulo Theory. In Proceedings of the International Joint Conference on Artificial Intelligence (IJCAI), 2021.

On PB Encodings for Constraint Problems
PRESENTER: Thibault Falque

ABSTRACT. One of the possible approaches for solving a CSP is to encode the input problem into a CNF formula, and then use a SAT solver to solve it. The main advantage of this technique is that it benefits from the practical efficiency of modern SAT solvers, based on the CDCL architecture. However, the reasoning power of SAT solvers is somewhat "weak", as it is limited by that of the resolution proof system they use internally. This observation led to the development of so-called pseudo-Boolean (PB) solvers, which implement the stronger cutting planes proof system along with many of the solving techniques inherited from SAT solvers. Additionally, PB solvers can natively reason on PB constraints, i.e., linear equalities or inequalities over Boolean variables. These constraints are more succinct than clauses, so that a single PB constraint can represent exponentially many clauses. In this paper, we leverage both this succinctness and the reasoning power of PB solvers to solve CSPs by designing PB encodings for different common constraints, and feeding them into PB solvers to compare their performance with that of existing CP solvers.
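The succinctness gap is easy to see on a cardinality constraint (a standard textbook expansion, shown here for illustration): the single PB constraint x1 + ... + xn >= k is equivalent to one clause per subset of n-k+1 variables, i.e. C(n, n-k+1) clauses.

```python
from itertools import combinations

def at_least_k_clauses(variables, k):
    """Expand the PB constraint sum(variables) >= k into CNF clauses.

    Every subset of n-k+1 variables must contain at least one true
    literal, giving C(n, n-k+1) clauses in total."""
    n = len(variables)
    return [list(subset) for subset in combinations(variables, n - k + 1)]

# The single PB constraint x1+x2+x3+x4 >= 2 becomes 4 clauses:
print(at_least_k_clauses([1, 2, 3, 4], 2))
# [[1, 2, 3], [1, 2, 4], [1, 3, 4], [2, 3, 4]]
```

For k near n/2 the clause count grows exponentially in n, while the PB constraint itself stays linear in size.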

A Boolean Formula Seeker in the Context of Acquiring Maps of Interrelated Conjectures on Sharp Bounds
PRESENTER: Ramiz Gindullin

ABSTRACT. A component of the Bound Seeker [1] is the Boolean formula seeker, the part devoted to searching for Boolean formulae. Here, a Boolean formula involves n arithmetic conditions linked by a single commutative logical operator or by a sum. Part of the originality of the Boolean formula seeker is that it was synthesised by a constraint program. This extended abstract covers (i) the type of Boolean formulae we consider, (ii) the importance of allowing Boolean formulae in the context of maps of conjectures, (iii) the components of the Boolean formula seeker, and (iv) a short description of the different steps of the acquisition process of Boolean formulae.

Explaining Propagation for Gini and Spread with Variable Mean (Extended Abstract)
PRESENTER: Alexander Ek

ABSTRACT. We introduce two log-linear-time dispersion propagators, for (a) spread (variance, and indirectly standard deviation) and (b) the Gini coefficient, capable of explaining their propagations, thus allowing clause-learning solvers to use them. Propagators for (a) exist in the literature but do not explain themselves, while propagators for (b) have not been previously studied.
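For readers unfamiliar with the two dispersion measures being propagated, here are their standard definitions on a fixed assignment (an illustration of the measures only, not of the propagators themselves):

```python
def spread(values):
    """Population variance: the dispersion measure behind the spread constraint."""
    n = len(values)
    mean = sum(values) / n
    return sum((v - mean) ** 2 for v in values) / n

def gini(values):
    """Gini coefficient: mean absolute difference over twice the mean.

    Assumes a positive mean; 0 means perfect equality."""
    n = len(values)
    mean = sum(values) / n
    mad = sum(abs(a - b) for a in values for b in values) / (n * n)
    return mad / (2 * mean)
```

A propagator for these constraints narrows the domains of the variables (and of the mean, when it is itself a variable) so that every remaining assignment keeps the measure within its bounds.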

Extended Abstract: Constraint Acquisition Based on Solution Counting

ABSTRACT. We propose CABSC, a system that performs Constraint Acquisition Based on Solution Counting. In order to learn a Constraint Satisfaction Problem (CSP), the user provides positive examples and a Meta-CSP, i.e., a model of a combinatorial problem whose solution is a CSP. The Meta-CSP allows listing the potential constraints that can be part of the CSP the user wants to learn, stating the parameters of those constraints, and imposing constraints over these parameters. CABSC reads the Meta-CSP, written in an augmented version of the MiniZinc language, and returns the CSP that accepts the fewest solutions among the CSPs accepting all positive examples. This is done using a branch-and-bound algorithm whose bounding mechanism makes use of a model counter. Experiments show that CABSC is successful at learning constraints and their parameters from positive examples.

Extended Abstract: Acquiring Maps of Interrelated Conjectures on Sharp Bounds

ABSTRACT. To automate the discovery of conjectures on combinatorial objects, we introduce the concept of a map of sharp bounds on characteristics of combinatorial objects, which provides a set of interrelated sharp bounds for these objects. We then describe the Bound Seeker, a CP-based system that gradually acquires such maps of conjectures. The system was tested on searching for conjectures on bounds on characteristics of digraphs: it constructed sixteen maps involving 431 conjectures on sharp lower and upper bounds on eight digraph characteristics.

16:45-17:05 Session 38 (VardiFest)
Location: Taub 1
Verifying Accuracy Claims of Differential Privacy Algorithms

ABSTRACT. Differential privacy is a mathematical framework for developing statistical computations with provable guarantees of privacy and accuracy. In contrast to the privacy component of differential privacy, which has a clear mathematical and intuitive meaning, the accuracy component does not have a generally accepted definition; accuracy claims of differential privacy algorithms vary from algorithm to algorithm and are not instantiations of a general definition. In a recent paper [BCKSV21], we identify program discontinuity as a common cause for ad hoc definitions and introduce an alternative notion of accuracy parametrized by what we call the distance to disagreement: the distance to disagreement of an input x with respect to a deterministic computation f and a distance d is the minimal distance d(x,y) over all y such that f(y) ≠ f(x). The talk will discuss what this definition entails and identify circumstances under which verifying claims of accuracy is decidable.
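Written out, the definition given in the abstract is:

```latex
\[
  \mathrm{dd}_{f,d}(x) \;=\; \min \{\, d(x,y) \;\mid\; f(y) \neq f(x) \,\},
\]
```

so an input far from disagreement is one where the deterministic computation f is locally constant over a large d-ball, which is what makes a continuity-style accuracy guarantee meaningful there.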

Bridging Practice and Theory in SAT: Moshe Vardi the Catalyst

ABSTRACT. In this talk I will share the technical stepping stones that were the path from my research world to Moshe's. My background is in hardware verification, and this led to my interest in developing practical SAT solvers that could handle hardware verification problems at scale. This led to the two key contributions of the Chaff SAT solver from my group: the two-literal watching algorithm for unit propagation and the VSIDS (Variable State Independent Decaying Sum) decision heuristic. These techniques built on the earlier success of what is now known as CDCL (Conflict Driven Clause Learning). Collectively these techniques dramatically improved the capabilities of SAT solvers, enabling them to tackle problems at scale not just in hardware verification, but in system verification and even beyond instances from verification. The practical success of these and subsequent solvers seemed to fly in the face of the theoretical complexity of SAT. This piqued Moshe's interest and led to his taking a major leadership role in trying to develop the theoretical foundations for what makes these solvers effective on the practical instances of interest: the relationship between the search algorithm and the search-space characteristics of the practical instances. He was the driver of a series of workshops titled "Theory and Practice of Satisfiability Solving" held at BIRS Banff (2014), Dagstuhl (2015), and BIRS Oaxaca (2018). These workshops were remarkable in bringing together theoreticians and practitioners interested in SAT in an immersive setting to learn from each other and build bridges between theory and practice for this simultaneously simple and complex problem. Moshe was also instrumental in shepherding articles on the practical successes of SAT and SMT solvers in CACM, making sure these reached the broad CS community. My chance to collaborate directly with Moshe came when Kuldeep Meel visited me at Princeton while he was Moshe's student.
We started working on the problem of finding the minimum/minimal set of independent variables for a given CNF SAT formula. This could significantly reduce the cost of the uniform sampling and model counting algorithms that Kuldeep and Moshe were working on. The collaboration expanded to include Alex Ivrii and led to a nice algorithm for this problem, making my Moshe number 1!

17:00-18:00 Session 39A (LFMTP)

Frank Pfenning special session: contributed talks

Location: Taub 3
A (Logical) Framework for Collaboration

ABSTRACT. A common interest in logical frameworks formed the basis for many years of fruitful and enjoyable collaboration with Frank and his students, both directly and indirectly.  Frank’s development of the Twelf system was instrumental in enabling the use of LF for specifying a wide range of formalisms, ranging from “pure logics” to “practical programming languages.”  Most importantly, the Twelf totality checker for proving All-Exists statements allowed for a human-readable and easily developed proof of type safety for the Standard ML language that not only verified what we already knew, but also fostered the development of a new language definition methodology well-suited to the demands of working at scale.

Type refinement as a unifying principle

ABSTRACT. I will discuss some personal perspectives on the "Type Refinement" project initiated by Frank several decades ago, and how it has influenced my research over the years since my PhD.


Language Minimalism and Logical Frameworks
PRESENTER: Chris Martens

ABSTRACT. In the design of tools, languages, and frameworks, Frank has been a consistent advocate for a particular flavor of aesthetic minimalism. A valuable trait of minimalist frameworks and tiny programming languages is that they facilitate implementation and reimplementation in multiple contexts with various interfaces. This principle has guided our (Chris and Rob’s) work at and beyond Carnegie Mellon, including for some recent projects we've pursued together. We'll talk about some of our Frank-adjacent projects we've reimplemented on the web, how language minimalism contributed to their success, lessons learned, and a path forward for extending the reach and application of logical frameworks.


Logics for Robotics
17:00-18:00 Session 39B: Joint Session with Vardi-Fest (LogTeach)
Location: Taub 1
How to be an ethical computer scientist (Joint with Vardi Fest)

ABSTRACT. Many of us got involved in computing because programming was fun. The advantages of computing seemed intuitive to us. We truly believed that computing yields tremendous societal benefits; for example, the life-saving potential of driverless cars is enormous! Recently, however, computer scientists realized that computing is not a game, it is real, and it brings with it not only societal benefits, but also significant societal costs, such as labor polarization, disinformation, and smart-phone addiction.

A common reaction to this crisis is to label it as an "ethics crisis". But corporations are driven by profits, not ethics, and machines are just machines. Only people can be expected to act ethically. In this talk, the speaker will discuss how computer scientists should behave ethically.

17:10-18:00 Session 40 (VardiFest)
Location: Taub 1
How to be an ethical computer scientist

ABSTRACT. Many of us got involved in computing because programming was fun. The advantages of computing seemed intuitive to us. We truly believed that computing yields tremendous societal benefits; for example, the life-saving potential of driverless cars is enormous! Recently, however, computer scientists realized that computing is not a game, it is real, and it brings with it not only societal benefits, but also significant societal costs, such as labor polarization, disinformation, and smart-phone addiction.

A common reaction to this crisis is to label it as an "ethics crisis". But corporations are driven by profits, not ethics, and machines are just machines. Only people can be expected to act ethically. In this talk, the speaker will discuss how computer scientists should behave ethically.

(The talk will be also broadcast live. For a webinar link to this talk, please register here: https://bit.ly/VardiFest )

Short Bio: Moshe Y. Vardi, the founding chair of FLoC, is a University Professor and the George Distinguished Service Professor in Computational Engineering at Rice University. He is the recipient of three IBM Outstanding Innovation Awards, the ACM SIGACT Gödel Prize, the ACM Kanellakis Award, the ACM SIGMOD Codd Award, the Blaise Pascal Medal, the IEEE Computer Society Goode Award, the EATCS Distinguished Achievements Award, the Southeastern Universities Research Association's Distinguished Scientist Award, the ACM SIGLOG Church Award, the Knuth Prize, the ACM Allen Newell Award, and the IEEE Norbert Wiener Award for Social and Professional Responsibility. He holds seven honorary doctorates.


18:30-20:00Workshop Dinner (at the Technion, Taub Terrace Floor 2) - Paid event