FoMLAS Session 5
09:00 | PRESENTER: Haoze Wu ABSTRACT. Recently, Graph Neural Networks (GNNs) have been applied for scheduling jobs over clusters, achieving better performance than hand-crafted heuristics. Despite their impressive performance, concerns remain over whether these GNN-based job schedulers meet users' expectations about other important properties, such as strategy-proofness, sharing incentive, and stability. In this work, we consider formal verification of GNN-based job schedulers. We address several domain-specific challenges such as networks that are deeper and specifications that are richer than those encountered when verifying image and NLP classifiers. We develop vegas, the first general framework for verifying both single-step and multi-step properties of these schedulers based on carefully designed algorithms that combine abstractions, refinements, solvers, and proof transfer. Our experimental results show that vegas achieves significant speed-up when verifying important properties of a state-of-the-art GNN-based scheduler compared to previous methods. |
09:30 | PRESENTER: Youcheng Sun ABSTRACT. Neural networks are successfully used in many domains including safety and security critical applications. As a result, researchers have proposed formal verification techniques for verifying neural network properties. A large majority of previous efforts have focused on checking local robustness in neural networks. We instead focus on another neural network security issue, namely data poisoning, whereby an attacker inserts a trigger into a subset of the training data, in such a way that at test time, this trigger causes the classifier to predict some target class. In this paper, we show how to formulate absence of data poisoning as a property that can be checked with off-the-shelf verification tools, such as Marabou and nnenum. Counterexamples of failed checks constitute potential triggers that we validate through testing. We further show that the discovered triggers are ‘transferable’ from a small model to a larger, better-trained model, allowing us to analyze state-of-the-art performant models trained for image classification tasks. |
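For readers unfamiliar with this style of check, the sketch below (ours, not the authors' Marabou/nnenum setup) phrases the poisoning question as a satisfiability query over a toy ReLU network using the z3-solver Python package; all weights and the trigger position are invented. A model returned by the solver plays the role of a candidate trigger to be validated by testing.

```python
# Illustrative sketch, not the paper's tooling: encode a toy ReLU network
# and ask whether a trigger value can force the target class.
# Assumes the z3-solver package; all weights below are invented.
from z3 import Real, RealVal, If, Solver, sat

W1 = [[0.5, -0.2], [0.3, 0.8]]    # hypothetical hidden-layer weights
b1 = [0.1, -0.1]
W2 = [[1.0, -1.0], [-1.0, 1.0]]   # hypothetical output-layer weights
b2 = [0.0, 0.0]

x = [Real(f"x{i}") for i in range(2)]   # clean input pixels
t = Real("t")                           # trigger value overwriting pixel 0

relu = lambda e: If(e > 0, e, RealVal(0))
def forward(inp):
    h = [relu(sum(W1[j][i] * inp[i] for i in range(2)) + b1[j]) for j in range(2)]
    return [sum(W2[k][j] * h[j] for j in range(2)) + b2[k] for k in range(2)]

s = Solver()
for v in x + [t]:
    s.add(v >= 0, v <= 1)               # normalized pixel range
y = forward([t, x[1]])                  # poisoned input: trigger on pixel 0
s.add(y[1] > y[0])                      # trigger flips prediction to class 1

# A real check would quantify over all inputs; one witness suffices here
# to illustrate how a failed check yields a concrete candidate trigger.
if s.check() == sat:
    print("candidate trigger:", s.model()[t])
```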
10:00 | PRESENTER: Guy Amir ABSTRACT. Deep neural networks (DNNs) have become the technology of choice for realizing a variety of complex tasks. However, as highlighted by many recent studies, even an imperceptible perturbation to a correctly classified input can lead to misclassification by a DNN. This renders DNNs vulnerable to strategic input manipulations by attackers, and also prone to oversensitivity to environmental noise. To mitigate this phenomenon, practitioners apply joint classification by an ensemble of DNNs. By aggregating the classification outputs of different individual DNNs for the same input, ensemble-based classification reduces the risk of misclassifications due to the specific realization of the stochastic training process of any single DNN. However, the effectiveness of a DNN ensemble is highly dependent on its members not simultaneously erring on many different inputs. In this case study, we harness recent advances in DNN verification to devise a methodology for identifying ensemble compositions that are less prone to simultaneous errors, even when the input is adversarially perturbed --- resulting in more robustly-accurate ensemble-based classification. Our proposed framework uses a DNN verifier as a backend, and includes heuristics that help reduce the high complexity of directly verifying ensembles. More broadly, our work puts forth a novel universal objective for formal verification that can potentially improve the robustness of real-world, deep-learning-based systems across a variety of application domains. |
Welcome ceremony of the 2nd workshop on Goal-directed Execution of Answer Set Programs (GDE 2022), followed by a tutorial (40 minutes) and two talks, each roughly 20 minutes plus 5 minutes for discussion and questions.
09:00 | Tutorial: Automating Commonsense Reasoning ABSTRACT. Automating commonsense reasoning, i.e., automating the human thought process, has been considered fiendishly difficult. It is widely believed that automation of commonsense reasoning is needed to build intelligent systems that can rival humans. We argue that answer set programming (ASP) along with its goal-directed implementation allows us to reach this automation goal. We discuss essential elements needed for automating the human thought process, and show how they are realized in ASP and the s(CASP) goal-directed ASP engine. |
09:40 | A Query Evaluation Method for ASP with Abduction ABSTRACT. In this paper, we present a goal-directed proof procedure for ASP with abduction. Our proposed procedure is correct for any consistent abductive framework as proposed by Kakas et al. In other words, if the procedure succeeds, there is a set of hypotheses which satisfies a query, and if the procedure finitely fails, there is no such set. If we do not consider abducibles, this procedure is a goal-directed proof procedure for ASP as well. NOTE: This paper is an extended abstract of a paper with the title “A Query Evaluation Method for Abductive Logic Programming” that appeared in the Proceedings of the Joint International Conference and Symposium on Logic Programming (JICSLP'92), pp. 671-685. |
10:05 | First order logic and commonsense reasoning: a path less travelled PRESENTER: Tanel Tammet ABSTRACT. The context of the paper is developing logic-based components for hybrid -- machine learning plus logic -- commonsense question answering systems. The paper presents the main principles and several lessons learned from implementing an automated reasoner able to handle both undecidable exceptions and numerical confidences for full first order logic. Although the described reasoner is based on the resolution method, some of these lessons may be useful for the further development of ASP systems as well. |
09:00 | PRESENTER: Aart Middeldorp ABSTRACT. Having development closed critical pairs is a well-known sufficient condition for confluence of left-linear term rewrite systems. We present formalized results involving proof terms and unification that play an important role in the proof. |
09:30 | On Confluence of Parallel-Innermost Term Rewriting PRESENTER: Carsten Fuhs ABSTRACT. We revisit parallel-innermost term rewriting as a model of parallel computation on inductive data structures. We propose a simple sufficient criterion for confluence of parallel-innermost rewriting based on non-overlappingness. Our experiments on a large benchmark set indicate the practical usefulness of our criterion. We close with a challenge to the community to develop more powerful dedicated techniques for this problem. |
10:00 | Uniform Completeness ABSTRACT. We introduce uniform completeness and give a local characterisation of it. We show it yields a complete method for showing completeness of rewrite systems. |
09:00 | QBF Solvers and their Proof Systems ABSTRACT. We give an overview of the main paradigms in QBF solving, as well as techniques for preprocessing. In each case, we present corresponding proof systems and discuss lower bound results. We point to open problems, and consider current trends in solver development. |
10:00 | PRESENTER: Benjamin Böhm ABSTRACT. Quantified conflict-driven clause learning (QCDCL) is one of the main approaches for solving quantified Boolean formulas (QBF). We formalise and investigate several versions of QCDCL that include cube learning and/or pure-literal elimination, and formally compare the resulting solving models via proof complexity techniques. Our results show that almost all of the QCDCL models are exponentially incomparable with respect to proof size (and hence solver running time), pointing towards different orthogonal ways how to practically implement QCDCL. |
09:00 | PRESENTER: Daniel Waszkiewicz ABSTRACT. SAT-solvers are one of the primary tools to assess the security of block ciphers automatically. Common CNF encodings of s-boxes are based on algebraic representations (finding low-degree polynomials) or on symbolic execution of an implementation of the considered function. However, those methods are not strictly connected with the algorithms used in efficient SAT-solvers. Therefore, we propose an application of minimal propagation-complete encodings, which by definition are tuned for modern satisfiability-checking algorithms. During the construction of the Boolean formula, there is often the problem of encoding linear XOR equations. The standard procedure includes a greedy shortening algorithm to decrease the size of the resulting encoding. Recently, the problem of finding a short straight-line program has been successfully applied in obtaining efficient implementations of MDS matrices. In this paper, we propose to use the algorithm for finding a short straight-line program as a shortening procedure for a system of linear XOR equations. By combining the two approaches, we achieved a 20x speed-up of algebraic cryptanalysis of the Small Scale AES block cipher compared to widely used algebraic representations. |
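To illustrate only the shortening idea (a naive greedy sketch on invented data, not the paper's straight-line-program algorithm): repeatedly extract the XOR pair shared by the most equations as a fresh variable, so that common subexpressions are computed once.

```python
# Greedy common-subexpression extraction for a system of XOR equations:
# repeatedly introduce fresh = a ^ b for the most frequently shared pair,
# so the resulting straight-line program reuses shared subexpressions.
from collections import Counter
from itertools import combinations

def shorten(equations, n_vars):
    """equations: list of sets of variable indices (each set is one XOR)."""
    program = []                                   # list of (new_var, a, b)
    eqs = [set(e) for e in equations]
    while True:
        pairs = Counter()
        for e in eqs:
            pairs.update(combinations(sorted(e), 2))
        (a, b), count = pairs.most_common(1)[0] if pairs else ((None, None), 0)
        if count < 2:                              # no pair worth sharing
            break
        fresh = n_vars
        n_vars += 1
        program.append((fresh, a, b))              # fresh = a ^ b
        for e in eqs:
            if a in e and b in e:
                e -= {a, b}
                e.add(fresh)
    return program, eqs

prog, rest = shorten([{0, 1, 2}, {0, 1, 3}, {1, 2, 3}], 4)
print(prog, rest)   # shared gate(s) first, then the residual equations
```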
09:30 | ABSTRACT. In this paper, we formalize the implementations of the decision tree and random forest classifiers of the Python Scikit-learn package, and we present a CNF encoding which can be used to calculate sufficient reasons, a.k.a. prime implicants, for them. Our decision tree encoding resembles a monotonic combinational circuit with pure input variables, of which we take advantage in our method for incremental enumeration of its prime implicants. Encoding the combination of several decision trees in a random forest would add a non-monotonic evaluation function to the root of the encoded circuit. In our approach, we instead solve an auxiliary SAT problem for enumerating valid leaf-node combinations of the random forest. Each valid leaf-node combination is used to incrementally update the original monotonic circuit encoding by extending the DNF at its root with a new term. Preliminary results show that enumerating prime implicants by incrementally updating the encoding is orders of magnitude faster than one-shot solving of the monolithic formula. We present preliminary runtime data, and some initial data about the size and number of samples found when translating the prime implicants back into database queries. |
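As a point of reference for what a sufficient reason is, here is a minimal sketch (ours, not the paper's CNF-based enumeration): it computes one prime implicant of a toy Boolean decision tree by greedily dropping path literals on which the prediction does not depend.

```python
# Toy sketch: one sufficient reason (prime implicant) of a Boolean decision
# tree for a given instance, via greedy deletion of redundant literals.
def predicts(tree, fixed, target):
    """True iff every completion of the partial assignment reaches target."""
    if not isinstance(tree, dict):                 # leaf: class label
        return tree == target
    f = tree["feature"]
    if f in fixed:
        return predicts(tree[fixed[f]], fixed, target)
    return all(predicts(tree[v], fixed, target) for v in (0, 1))

def classify(tree, instance):
    while isinstance(tree, dict):
        tree = tree[instance[tree["feature"]]]
    return tree

def sufficient_reason(tree, instance):
    target = classify(tree, instance)
    reason = dict(instance)
    for f in list(reason):
        trial = {g: v for g, v in reason.items() if g != f}
        if predicts(tree, trial, target):          # literal f was redundant
            reason = trial
    return reason                                  # minimal => prime implicant

t = {"feature": "a", 0: {"feature": "b", 0: 0, 1: 1}, 1: 1}
print(sufficient_reason(t, {"a": 1, "b": 0}))      # {'a': 1} suffices
```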
10:00 | PRESENTER: Jakob Nordstrom ABSTRACT. Algebraic reasoning has proven to be one of the most effective approaches for verifying gate-level integer multipliers, but it struggles with certain components, necessitating the complementary use of SAT solvers. For this reason, validation certificates require proofs in two different formats. Approaches to unify the certificates are not scalable, meaning that the validation results can only be trusted up to the correctness of compositional reasoning. We show in this work that using dual variables in the algebraic encoding, together with a novel tail substitution and carry rewriting method, removes the need for SAT solvers in the verification flow and yields a single, uniform proof certificate. This is a presentation-only submission for a paper previously published at DATE '22. |
This session will be held jointly with the Proof Complexity (PC) Workshop.
Referring Expressions in Artificial Intelligence and Knowledge Representation Systems (Tutorial at KR-22, Part I)
09:00 | Referring Expressions in Artificial Intelligence and Knowledge Representation Systems (Tutorial at KR-22 https://cs.uwaterloo.ca/~david/kr22/) ABSTRACT. The tutorial introduces the audience to the concept of referring expressions, formulae that can be used to communicate identities of otherwise abstract objects. The formalism provides foundations for a successful and unambiguous exchange of information about individuals between agents sharing common knowledge about such individuals, a task that is indispensable in most modern applications of knowledge representation and semantic technologies. (tutorial web page) |
Invited Talk by Delia Kesner
09:00 | Opening |
09:05 | Industrial Research Career Path as a Sequence of Constraint Satisfaction Problems |
10:00 | PRESENTER: Xiao Peng ABSTRACT. This paper deals with the multi-agent path finding (MAPF) problem for a team of tethered robots. When considering point-sized robots, paths may share the same subpath provided that they do not cross, and we have shown in previous work that this case basically reduces to an assignment problem: the objective is to find a set of non-crossing paths, and the makespan is equal to the length of the longest path. In this work, we extend it to the case of non-point-sized robots, where robot paths must be synchronized when they share the same subpath and waiting times are considered when computing the makespan. We prove that the upper bound can be computed by solving the linear sum assignment problem. We introduce a new variable neighborhood search method to improve the upper bound and show that it is robust across different instances. We also introduce a Constraint Programming model for solving the problem to optimality. |
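The assignment step can be reproduced with off-the-shelf tooling; below is a hedged sketch using SciPy's linear_sum_assignment on an invented cost matrix (in the paper, costs come from path lengths of non-crossing paths).

```python
# Sketch of the assignment-based bound: solve the linear sum assignment
# problem, then read off the makespan as the longest assigned path.
# The cost matrix here is made up; only the call pattern is the point.
import numpy as np
from scipy.optimize import linear_sum_assignment

cost = np.array([[4.0, 9.0, 3.0],      # cost[i][j] = length of robot i's
                 [7.0, 5.0, 8.0],      # path to target j (hypothetical)
                 [6.0, 2.0, 7.0]])
rows, cols = linear_sum_assignment(cost)          # minimizes the total sum
print("assignment:", list(zip(rows, cols)))
print("makespan upper bound:", cost[rows, cols].max())
```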
10:20 | Extended Abstract: Scheduling the Equipment Maintenance of an Electric Power Transmission Network using Constraint Programming PRESENTER: Louis Popovic ABSTRACT. Modern electrical power utilities must maintain their electrical equipment and replace it as it reaches the end of its useful life. The Transmission Maintenance Scheduling (TMS) problem consists in generating an annual maintenance plan for electric power transportation equipment while maintaining the stability of the network and ensuring a continuous power flow for customers. Each year, a list of equipment that needs to be maintained or replaced is available, and the goal is to generate an optimal maintenance plan. This paper proposes a constraint-based scheduling approach to solve the TMS problem. The model considers two types of constraints: (1) constraints that can be naturally formalized inside a constraint programming model, and (2) complex constraints for which the field specialists have no proper formalization. The latter cannot be integrated inside the model due to their complexity. Their satisfaction is thus verified by a black-box tool, a simulator mimicking the impact of a maintenance schedule on the real power network. The simulator is based on complex differential power-flow equations. Experiments are carried out at five strategic points of the Hydro-Québec power grid infrastructure, involving more than 200 pieces of electrical equipment and 300 withdrawal requests. Results show that our model is able to comply with most of the formalized and unformalized constraints. It also generates maintenance schedules within an execution time of a few minutes. The generated schedules are similar to the ones proposed by a field specialist and can be used to simulate maintenance programs for the next 10 years. |
10:23 | PRESENTER: Augustin Delecluse ABSTRACT. Constraint Programming (CP) is one of the most flexible approaches for modeling and solving vehicle routing problems (VRP). This paper proposes the sequence variable domain, which is inspired by the insertion graph and the subset bound domain for set variables. This domain representation, which targets VRP applications, allows for an efficient insertion-based search on a partial tour and for the implementation of simple, yet efficient filtering algorithms for constraints that enforce time windows on the visits and capacities on the vehicles. Experimental results demonstrate the efficiency and flexibility of this CP domain for solving some hard VRP problems, including the Dial-A-Ride, the Patient Transportation, and the asymmetric TSP with time windows. |
10:26 | Combining Reinforcement Learning and Constraint Programming for Sequence-Generation Tasks with Hard Constraints - Extended Abstract PRESENTER: Daphné Lafleur ABSTRACT. In this work, we use Reinforcement Learning to combine Machine Learning (ML) and Constraint Programming (CP). We show that combining ML and CP allows the agent to reflect a pretrained network while taking into account constraints, leading to melodic lines that respect both the corpus' style and the music theory constraints. |
Invited talk
Invited talk: Alexander Artikis & Periklis Mantenoglou (University of Piraeus, Greece) - Online Reasoning under Uncertainty with the Event Calculus
09:30 | Online Reasoning under Uncertainty with the Event Calculus PRESENTER: Alexander Artikis ABSTRACT. Activity recognition systems detect temporal combinations of 'low-level' or 'short-term' activities on sensor data streams. Such streams exhibit various types of uncertainty, often leading to erroneous recognition. We will present an extension of an interval-based activity recognition system which operates on top of a probabilistic Event Calculus implementation. Our proposed system performs online recognition, as opposed to batch processing, thus supporting streaming applications. Our empirical analysis demonstrates the efficacy of our system, comparing it to interval-based batch recognition, point-based recognition, as well as structure and weight learning models. |
09:50 | The Safety Fragment of LTL PRESENTER: Nicola Gigante |
10:00 | PRESENTER: Mateo Perez ABSTRACT. A significant challenge to widespread adoption of reinforcement learning (RL) is the faithful translation of designer’s intent to the scalar reward signal required by RL algorithms. Logic-based specifications help in two ways: by precisely capturing the intended objective, and by allowing its automatic translation to a reward function. Omega-regular objectives, such as those expressed in LTL and by omega-automata, have recently been proposed to specify learning objectives in RL. In this talk, we will discuss the impact of Vardi's contributions to automata-theoretic reinforcement learning. |
10:10 | ABSTRACT. Fifteen years have passed since the introduction of the original turn-based variant of Strategy Logic, and twelve since its full-fledged concurrent extension. Several interesting results have been obtained and, for sure, many more are still to come. The ones that I consider more meaningful concern, in particular, the enlightening connections with other subfields of theoretical computer science, most notably algorithmic game theory and finite model theory, which have enriched and broadened the scientific literature. In this talk, I will overview some of these results, starting with anecdotes on the original work done in 2008, while I was a visiting Ph.D. student at Rice University under Moshe’s supervision, and terminating with a few open questions. |
10:20 | ABSTRACT. The paper discusses a collaboration with Moshe Vardi that started with a work presenting an algorithm that takes a regular expression E and a collection S of n regular expressions as input, and returns an automaton which is shown to be the maximal rewriting of E with respect to S. |
FoMLAS Session 6
11:00 | PRESENTER: Doron Peled ABSTRACT. Formal specification provides a uniquely readable description of various aspects of a system, including its temporal behavior. This facilitates testing and sometimes also automatic verification of the system against the given specification. We present a logic-based formalism for specifying learning-enabled autonomous systems, which involve components based on neural networks. The formalism applies temporal logic with predicates for characterizing events and uses universal quantification to allow enumeration of objects. While we have applied the formalism successfully to two complex use cases, several limitations, such as limited monitorability and the lack of a quantitative notion of satisfaction, reveal potential for further improvement. |
11:30 | PRESENTER: Luca Arnaboldi ABSTRACT. Verification of neural network specifications is currently an active field of research in automated theorem proving. However, the actual act of verification is merely one step in the process of constructing a verified network. Prior to verification, the specification should influence the training of the network, and afterwards users may want to export the verified specification to other verification environments in order to prove a specification about a larger system that uses the network. Currently there is little consensus on how best to connect these different stages. In this talk we will describe our proposed solution to this problem: the Vehicle specification system. Vehicle allows the user to write a single specification in a high-level, human-readable form, and the Vehicle compiler then compiles it down to different targets, including training frameworks, verifiers and interactive theorem provers. In this talk we will discuss the various design decisions involved in the specification language itself and hope to solicit feedback from the verification community. (Submitted as Extended Abstract) |
12:00 | PRESENTER: Natalia Ślusarz ABSTRACT. The rising popularity of neural networks (NNs) in recent years and their increasing prevalence in real-world applications have drawn attention to the importance of their verification. While verification is known to be computationally difficult in theory, many techniques have been proposed for solving it in practice. It has been observed in the literature that by default neural networks rarely satisfy the logical constraints that we want to verify. A good course of action is therefore to train the given NN to satisfy said constraint prior to verifying it. This idea is sometimes referred to as continuous verification, referring to the loop between training and verification. Usually training with constraints is implemented by specifying a translation of a given formal logic language into loss functions. These loss functions are then used to train neural networks. Because for training purposes these functions need to be differentiable, these translations are called differentiable logics (DL). This raises several research questions on the technical side of "training with constraints". What kinds of differentiable logics are possible? What difference does a specific choice of DL make in the context of continuous verification? What are the desirable criteria for a DL viewed from the point of view of the resulting loss function? In this extended abstract we discuss and answer these questions. (Submitted as Extended Abstract) |
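A minimal sketch of the translation idea, assuming PyTorch and a DL2-style choice of differentiable logic (one of several possible DLs); the constraint and the margin value are invented for illustration:

```python
# Map a logical constraint into a loss whose zero set is exactly the set
# of satisfying outputs: inequalities become ReLU penalties, conjunction
# becomes a sum. (DL2-style sketch; other DLs make different choices.)
import torch

def leq(a, b):             # translate  a <= b  into a penalty
    return torch.relu(a - b)

def conj(*losses):         # translate conjunction as a sum of penalties
    return sum(losses)

# Illustrative constraint on logits: y0 >= 0, y0 <= 1, and y0 >= y1 + margin.
def constraint_loss(logits):
    y0, y1 = logits[..., 0], logits[..., 1]
    return conj(leq(0.0 - y0, 0.0),          # y0 >= 0
                leq(y0, 1.0),                # y0 <= 1
                leq(y1 + 0.1, y0)).mean()    # y0 >= y1 + 0.1

logits = torch.randn(8, 2, requires_grad=True)
loss = constraint_loss(logits)
loss.backward()                              # differentiable: usable in training
print(float(loss))
```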
Session focused on the use of s(CASP) for modeling: 3 regular talks (20 minutes presentation) and a short talk (10 minutes presentation) plus 5 minutes of Q&A each one.
11:00 | Integration of Logical English and s(CASP) PRESENTER: Galileo Sartor ABSTRACT. This paper showcases the use of Logical English, a logic programming language that allows for expressing rules and explanations in a controlled form of natural language, which can be interpreted by the s(CASP) reasoner. It demonstrates the possibility of representing and reasoning with legal values, unknown facts and time with carefully selected expressions of English that can be easily understood without technical training. This research has been developed in the context of the CrossJustice Project. |
11:25 | Embedding s(CASP) in Prolog PRESENTER: Jan Wielemaker ABSTRACT. The original s(CASP) implementation is a stand-alone program implemented in Ciao Prolog. It reads the s(CASP) source using a dedicated parser, resolves the query embedded in the source and emits the results in a format dictated by command-line options. Typical applications require composing an s(CASP) program, solving a query and reasoning about the bindings, model and/or justification. This is often done in some external language, e.g., Python. In this paper we propose a closer integration with Prolog. The s(CASP) program is simply a Prolog program that has to respect certain constraints. The scasp library can be used to solve a query using s(CASP) semantics, making the bindings available in the normal Prolog way and providing access to the model and justification as Prolog terms. This way, we can exploit Prolog's power for manipulating (Prolog) terms when constructing the s(CASP) program and interpreting the results. |
11:50 | Modeling Administrative Discretion Using Goal-Directed Answer Set Programming PRESENTER: Joaquin Arias ABSTRACT. The formal representation of legal texts to automate reasoning about them is well known in the literature, and has recently gained much attention thanks to the interest in so-called smart contracts and in autonomous decisions by public administrations [8,4,11]. For deterministic rules, there are several proposals, often based on logic-based programming languages [9,10]. However, none of the existing proposals are able to represent the ambiguity and/or administrative discretion present in contracts and/or applicable legislation, e.g., force majeure. This paper is an extended abstract of [3], where we present a framework, called s(LAW), that allows for modeling legal rules involving ambiguity, and supports reasoning and inferring conclusions based on them. |
12:15 | PRESENTER: Jose F. Morales ABSTRACT. In recent years, Web browsers have come closer and closer to being full-fledged computing platforms. Ciao Prolog currently includes a browser-based playground which allows editing and running programs locally in the browser with no need for server-side interaction. The playground is built from reusable components, and allows easily embedding runnable Prolog programs in web pages and documents. These components can also be easily used for the development of specific, fully browser-based applications. The purpose of this paper is to present a browser-based environment for developing s(CASP) programs, based on the Ciao Prolog playground and its components. This s(CASP) playground thus runs locally on the browser with no need for interaction with a server beyond code download. After briefly introducing s(CASP) and Ciao Prolog, we provide an overview of the architecture of the Ciao playground, based on a compilation of the Ciao engine to the WebAssembly platform, describe the steps involved in its adaptation to create the s(CASP) playground, and present some of its capabilities. These include editing and running s(CASP) programs, sharing them, obtaining (sets of) answers, and visualizing and exploring explanations. |
11:00 | Towards Normalization of Simplicial Type Theory via Synthetic Tait Computability PRESENTER: Jonathan Weinberger |
11:30 | Towards Directed Higher Observational Type Theory |
12:00 | The Compatibility of MF with HoTT PRESENTER: Michele Contente |
11:00 | Seven Confluence Criteria for Solving COPS #20 ABSTRACT. COPS #20 is a thought-provoking confluence problem for term rewriting, posed by Gramlich and Lucas (2006). Although the term rewrite system of the problem is confluent, it is beyond the realm of classical confluence criteria such as Knuth and Bendix' criterion (1970) and Huet's parallel closedness (1980). In this talk we will discuss various solutions to the problem, recalling powerful confluence methods developed in the last decade and a half. |
12:00 | PRESENTER: Fabian Mitterwallner ABSTRACT. Conversion equivalence and normalization equivalence are important properties of two rewrite systems. We investigate how many constants are needed to reduce these properties to their ground versions for linear variable-separated rewrite systems. Our results are implemented in the decision tool FORT-h and formalized in Isabelle/HOL. The latter enables the validation of the proofs produced by the former in the certifier FORTify. |
Workshop papers
11:00 | A positive perspective on term representation: work in progress PRESENTER: Jui-Hsuan Wu ABSTRACT. We use the focused proof system LJF as a framework for describing term structures and substitution. Since the proof theory of LJF does not pick a canonical polarization for primitive types, two different approaches to term representation arise. When primitive types are given the negative bias, LJF proofs encode term structures as tree-like structures in a familiar fashion. In this situation, cut elimination also yields the familiar notion of substitution. On the other hand, when primitive types are given the positive bias, LJF proofs yield a structure in which explicit sharing of term structures is possible. In this situation, cut elimination yields a different notion of substitution. We illustrate these two approaches to term representation by applying them to the encoding of untyped λ-terms. More details can be found at http://www.lix.polytechnique.fr/Labo/Dale.Miller/papers/lfmtp22-positive-perspective.pdf. |
11:45 | An Implementation of Set Theory with Pointed Graphs in Dedukti PRESENTER: Thomas Traversié ABSTRACT. Dedukti is a type-checker for the lambda-pi calculus modulo theory, a logical framework that allows the extension of conversion with user-defined rewrite rules. In this paper, we present the implementation of a version of Dowek-Miquel’s intuitionistic set theory in Dedukti. To do so, we adapt this theory - based on the concept of pointed graphs - from Deduction modulo theory to lambda-pi calculus modulo theory, and we formally write the proofs in Dedukti. In particular, this implementation requires the definition of a deep embedding of a certain class of formulas, as well as its interpretation in the theory. More details can be found at https://hal.archives-ouvertes.fr/hal-03740004. |
11:00 | Strategy Extraction and Proof |
12:00 | QCDCL with Cube Learning or Pure Literal Elimination – What is best? (II) |
11:00 | PRESENTER: Sagar Malhotra ABSTRACT. Markov Logic Networks (MLNs) define a probability distribution on relational structures over varying domain sizes. Like most relational models, MLNs do not admit consistent marginal inference over varying domain sizes, i.e., marginal probabilities depend on the domain size. Furthermore, MLNs learned on a fixed domain do not generalize to domains of different sizes. In recent works, connections have emerged between domain size dependence, lifted inference, and learning from a sub-sampled domain. The central idea of these works is the notion of projectivity. The probability distributions ascribed by projective models render the marginal probabilities of sub-structures independent of the domain cardinality. Hence, projective models admit efficient marginal inference. Furthermore, projective models potentially allow efficient and consistent parameter learning from sub-sampled domains. In this paper, we characterize the necessary and sufficient conditions for a two-variable MLN to be projective. We then isolate a special class of models, namely Relational Block Models (RBMs). In terms of data likelihood, RBMs allow us to learn the best possible projective MLN in the two-variable fragment. Furthermore, RBMs also admit consistent parameter learning over sub-sampled domains. |
11:30 | Exploiting the Full Power of Pearl's Causality in Probabilistic Logic Programming PRESENTER: Kilian Rueckschloss ABSTRACT. We introduce new semantics for acyclic probabilistic logic programs in terms of Pearl's functional causal models. Further, we show that our semantics generalize the classical distribution semantics and CP-logic. This enables us to establish all query types of functional causal models, namely probability calculus, predicting the effect of external interventions and counterfactual reasoning, within probabilistic logic programming. Finally, we briefly discuss the problems regarding knowledge representation and the structure learning task which result from the different semantics and query types. |
11:00 | PRESENTER: Markus Iser ABSTRACT. In this paper we report our preliminary results with a new kind of SAT solver called Dinosat. Dinosat's input is a conjunction of clauses, at-most-one constraints and disjunctive normal form (DNF) formulas. The native support for DNF formulas is motivated by the application domain of SAT-based product configuration. A DNF formula can also be viewed as a generalization of a clause: a clause (disjunction of literals) is the special case of a DNF formula where each term (conjunction of literals) has exactly one literal. Similarly, we can generalize the classical resolution rule and use it to resolve two DNF formulas. Based on that, the CDCL algorithm can be modified to work with DNF formulas instead of just clauses. Using randomly generated formulas (with DNFs), we experimentally show that in certain relevant scenarios it is more efficient to solve these formulas with Dinosat than to translate them to CNF and use a state-of-the-art SAT solver. Another contribution of this paper is identifying the phase transition points for such formulas. |
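For context, the sketch below shows the standard CNF translations that native at-most-one and DNF support lets a solver avoid: the pairwise at-most-one encoding, and a Tseitin-style translation of a root-level DNF with one selector variable per term (our illustration, not Dinosat's internals):

```python
# Standard CNF translations of the two constraint types Dinosat supports
# natively. Literals are signed integers in the usual DIMACS convention.
from itertools import combinations

def amo_pairwise(lits):
    """At-most-one via pairwise mutual exclusion clauses."""
    return [[-a, -b] for a, b in combinations(lits, 2)]

def dnf_to_cnf(terms, next_var):
    """terms: list of lists of literals; returns (clauses, next_var)."""
    clauses, selectors = [], []
    for term in terms:
        s = next_var               # fresh selector variable for this term
        next_var += 1
        selectors.append(s)
        for lit in term:
            clauses.append([-s, lit])   # s -> every literal of the term
    clauses.append(selectors)           # at least one term is selected
    return clauses, next_var

print(amo_pairwise([1, 2, 3]))          # [[-1,-2], [-1,-3], [-2,-3]]
print(dnf_to_cnf([[1, 2], [-1, 3]], next_var=4))
```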
11:30 | PRESENTER: Hidetomo Nabeshima ABSTRACT. In this study, we propose a new framework for easily constructing efficient deterministic parallel SAT solvers, providing the delayed clause exchange technique introduced in ManyGlucose. This framework allows existing sequential SAT solvers to be parallelized with just as little effort as in the non-deterministic parallel solver framework such as PaInleSS. We show experimentally that parallel solvers built using this framework have performance comparable to state-of-the-art non-deterministic parallel SAT solvers while ensuring reproducible behavior. |
12:00 | Scalable Proof Producing Multi-Threaded SAT Solving with Gimsatul through Sharing instead of Copying Clauses PRESENTER: Armin Biere ABSTRACT. We give a first account of our new parallel SAT solver Gimsatul. Its key feature is to share clauses physically in memory instead of copying them, which is the method other state-of-the-art multi-threaded SAT solvers use to share clauses logically. Our approach keeps information about which literals are watched in a clause local to a solving thread but shares the actual immutable literals of a clause globally among all solving threads. This design gives quite remarkable parallel scalability, allows aggressive clause sharing while keeping memory usage low, and produces more compact proofs. |
This session will be held jointly with the Proof Complexity (PC) Workshop.
Referring Expressions in Artificial Intelligence and Knowledge Representation Systems (Tutorial at KR-22, Part II)
11:00 | Referring Expressions in Artificial Intelligence and Knowledge Representation Systems (Tutorial at KR-22) ABSTRACT. The tutorial introduces the audience to the concept of referring expressions, formulae that can be used to communicate identities of otherwise abstract objects. The formalism provides foundations for a successful and unambiguous exchange of information about individuals between agents sharing common knowledge about such individuals, a task that is indispensable in most modern applications of knowledge representation and semantic technologies (tutorial web page). |
Regular submissions, morning session
11:00 | ABSTRACT. We study how to embed well-known hypergraph grammars based on the double pushout (DPO) approach in the hypergraph Lambek calculus HL. It turns out that DPO rules can be naturally encoded by types of HL. However, this encoding is not enough to convert a DPO grammar into an equivalent grammar based on HL: we additionally need a logical operation that would allow making arbitrarily many copies of types. We develop such an operation called the conjunctive Kleene star and show that any DPO grammar can be converted into an equivalent HL-grammar enriched with this operation. |
11:30 | Greedily Decomposing Proof Terms for String Rewriting into Multistep Derivations by Topological Multisorting ABSTRACT. We show a proof term in a string rewrite system can be mapped to its causal graph and that the latter is a unique representative of the permutation equivalence class of the former. We then map the causal graph back to a proof term of a special shape, a so-called greedy multistep reduction. Composing both transformations yields a simple and effective way of constructing the greedy multistep reduction of a proof term, and thereby of deciding permutation equivalence of proof terms in general, and of (multistep) reductions in particular. |
12:00 | A PBPO+ Graph Rewriting Tutorial PRESENTER: Roy Overbeek ABSTRACT. We provide a tutorial introduction to the algebraic graph rewriting formalism PBPO+. We show how PBPO+ can be obtained by composing a few simple building blocks. Along the way, we comment on how alternative design decisions lead to related formalisms in the literature, such as DPO. |
11:00 | Dissymetrical Linear Logic ABSTRACT. This paper is devoted to designing computational systems of linear logic (i.e. systems in which, notably, the non-linear and structural phenomena which arise during the cut-elimination process are taken in charge by specific modalities, the exponentials ! and ?). The systems designed are “intermediate” between Intuitionistic LL and Classical LL. Methodologically, the focus is put on how to break the symmetrical interdependency between ! and ? which prevails in Classical LL – and this without losing the computational properties (closure under cut-elimination, atomizability of axioms). Three main systems are designed (Dissymetrical LL, semi-functorial Dissymetrical LL, semi-specialized Dissymetrical LL), where, in each of them, ! and ? play well-differentiated roles. |
11:30 | ABSTRACT. Bellin translates multiplicative-additive linear logic to its intuitionistic fragment via the Chu construction, seemingly posing an alternative to negative translation. However, his translation is not sound in the sense that not all valid intuitionistic sequents in its image correspond to valid classical ones. By directly analyzing two-sided classical sequents, we develop a sound generalization thereof inspired by parametric negative translation that also handles the exponentials. |
12:00 | PRESENTER: Dale Miller ABSTRACT. We propose to examine some of the proof theory of arithmetic by using two proof systems. A linearized version of arithmetic is available using muMALL, which is MALL plus the following logical connectives to treat first-order term structures: equality and inequality, first-order universal and existential quantification, and the least and greatest fixed point operators. The proof system muLK is an extension of muMALL in which contraction and weakening are permitted. It is known that muMALL has a cut-elimination result and is therefore consistent. We show that muLK is consistent by embedding it into second-order linear logic. We also show that muLK contains Peano arithmetic and that, in a couple of different situations, muLK is conservative over muMALL. Finally, we show that a proof that a relation represents a total function can be turned into a proof-search-based algorithm to compute that function. |
11:00 | Interdisciplinary Research -- Cost Function Network for Life Sciences |
11:55 | PRESENTER: Marco Dalla ABSTRACT. The Boolean Satisfiability Problem (SAT) was the first known NP-complete problem and has a very broad literature focusing on it. It has been applied successfully to various real-world problems, such as scheduling, planning and cryptography. SAT problem feature extraction plays an essential role in this field. SAT solvers are complex, fine-tuned systems that exploit problem structure. The ability to represent/encode a large SAT problem using a compact set of features has broad practical use in instance classification, algorithm portfolios, and solver configuration. The performance of these techniques relies on the ability of feature extraction to convey helpful information. Researchers often craft these features "by hand" to capture particular structures of the problem. Instead, in this paper, we extract features using semi-supervised deep learning. We train a convolutional autoencoder (AE) to compress the SAT problem into a limited latent space and reconstruct it, minimizing the reconstruction error. The latent space projection should preserve much of the structural features of the problem. We compare our approach to a set of features commonly used for algorithm selection. First, we train classifiers on the projection to predict whether the problems are satisfiable or not. If the compression conveys valuable information, a classifier should be able to make correct decisions. In the second experiment, we check whether the classifiers can identify the original problem that was encoded as SAT. The empirical analysis shows that the autoencoder is able to represent problem features efficiently in a limited latent space, and to convey more information than current feature extraction methods. |
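A hedged sketch of the kind of preprocessing such a pipeline needs (our illustration; the paper's exact representation and architecture may differ): rendering a DIMACS-style CNF as a fixed-size clause-variable incidence image that a convolutional autoencoder could compress.

```python
# Render a CNF as a fixed-size clause-variable incidence image; the sign
# of each cell records literal polarity. A conv. autoencoder would then
# compress such images into latent feature vectors.
import numpy as np

def cnf_to_image(clauses, n_vars, size=64):
    img = np.zeros((size, size), dtype=np.float32)
    for ci, clause in enumerate(clauses):
        for lit in clause:
            r = min(int(ci * size / max(len(clauses), 1)), size - 1)
            c = min(int((abs(lit) - 1) * size / n_vars), size - 1)
            img[r, c] = 1.0 if lit > 0 else -1.0   # sign encodes polarity
    return img

clauses = [[1, -2, 3], [-1, 2], [2, -3]]
print(cnf_to_image(clauses, n_vars=3, size=8))
```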
12:15 | PRESENTER: Felix Ulrich-Oltean ABSTRACT. In this extended abstract, we summarise the work from our main conference paper. We explore the problem of selecting encodings for pseudo-Boolean and linear constraints using a supervised machine learning approach. We show that it is possible to select encodings effectively using a standard set of features for constraint problems; however we obtain better performance with a new set of features specifically designed for the pseudo-Boolean and linear constraints. In fact, we achieve good results when selecting encodings for unseen problem classes. |
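A minimal sketch of the selection setup described above, with invented feature names and encoding labels (the paper's feature sets are richer):

```python
# Per-instance encoding selection as supervised learning: constraint
# features in, best-known encoding label out. All data here is made up.
from sklearn.ensemble import RandomForestClassifier

# hypothetical features: [n_terms, max_coeff, rhs, coeff_spread]
X = [[5, 3, 7, 0.4], [40, 1, 20, 0.0], [12, 50, 90, 0.9]]
y = ["GPW", "sequential", "BDD"]   # best encoding observed in training

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print(clf.predict([[30, 2, 25, 0.1]]))   # pick an encoding for a new instance
```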
12:18 | PRESENTER: Isaac Rudich ABSTRACT. Decision diagrams are an increasingly important tool in cutting-edge solvers for discrete optimization. However, the field of decision diagrams is relatively new, and is still incorporating the library of techniques that conventional solvers have had decades to build. We drew inspiration from the warm-start technique used in conventional solvers to address one of the major challenges faced by decision diagram based methods. Decision diagrams become more useful the wider they are allowed to be, but also become more costly to generate, especially with large numbers of variables. We present a method of peeling off a sub-graph of previously constructed diagrams and using it as the initial diagram for subsequent iterations that we call peel-and-bound. We test the method on the sequence ordering problem, and our results indicate that our peel-and-bound scheme generates stronger bounds than a branch-and-bound scheme using the same propagators, and at significantly less computational cost. |
12:21 | Solving the Constrained Single-Row Facility Layout Problem with Decision Diagrams (Extended Abstract) PRESENTER: Vianney Coppé ABSTRACT. This paper presents two exact optimization models for the Constrained Single-Row Facility Layout Problem. It is a linear arrangement problem considering departments in a facility with given lengths and traffic intensities. The first approach is an extension of the state-of-the-art mixed-integer programming model for the unconstrained problem with the additional constraints. The second one is a decision diagram-based branch-and-bound that takes advantage of the recursive nature of the problem through a dynamic programming model. The computational experiments show that both approaches significantly outperform the only mixed-integer programming model in the literature. |
12:24 | PRESENTER: Ruiwei Wang ABSTRACT. Ordered Multi-valued Decision Diagrams (MDDs) have been shown to be useful for representing finite-domain functions/relations. For example, various constraints can be modelled with MDD constraints. Recently, a new representation called Binary Constraint Tree (BCT), which is a (special) tree-structured binary Constraint Satisfaction Problem, has been proposed to encode MDDs and shown to outperform existing MDD constraint propagators in Constraint Programming solvers. BCT is a compact representation, and it can be exponentially smaller than an MDD for representing some constraints. Here, we also show that BCT is compact for representing non-deterministic finite state automaton (NFA) constraints. In this paper, we investigate how to encode BCT into CNF form, making it suitable for SAT solvers. We present and investigate five BCT CNF encodings. We compare the propagation strength of the BCT CNF encodings and experimentally evaluate the encodings on a range of existing benchmarks. We also compare with seven existing CNF encodings of MDD constraints. Experimental results show that the CNF encodings of BCT constraints can outperform those of MDD constraints on various benchmarks. |
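As a baseline illustration of turning a binary CSP (the shape of a BCT) into CNF, here is the classical direct encoding (our sketch; the paper's five BCT encodings are more refined):

```python
# Direct encoding of a binary CSP: one Boolean per (variable, value),
# at-least-one / at-most-one clauses per CSP variable, and one conflict
# clause per forbidden value pair.
from itertools import combinations

def direct_encode(domains, conflicts):
    """domains: {var: [values]}; conflicts: set of ((x,a),(y,b)) pairs."""
    var_id, clauses = {}, []
    nid = 1
    for x, dom in domains.items():
        for a in dom:
            var_id[(x, a)] = nid
            nid += 1
        lits = [var_id[(x, a)] for a in dom]
        clauses.append(lits)                                      # at least one
        clauses += [[-p, -q] for p, q in combinations(lits, 2)]   # at most one
    for (xa, yb) in conflicts:
        clauses.append([-var_id[xa], -var_id[yb]])                # forbidden pair
    return clauses, var_id

cls, ids = direct_encode({"x": [0, 1], "y": [0, 1]},
                         {(("x", 0), ("y", 0))})
print(len(cls), "clauses")
```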
12:10 | ABSTRACT. The determinization of a nondeterministic Büchi automaton (NBA) is a fundamental construction of automata theory, with applications to probabilistic verification and reactive synthesis. The standard determinization constructions, such as the ones based on the Safra-Piterman approach, work on the whole NBA. In this work we propose a divide-and-conquer determinization approach. To this end, we first classify the strongly connected components (SCCs) of the given NBA as inherently weak, deterministic accepting, and nondeterministic accepting. We then present how to determinize each type of SCC independently from the others; this results in an easier handling of the determinization algorithm that takes advantage of the structure of that SCC. Once all SCCs have been determinized, we show how to compose them so as to obtain the final equivalent deterministic Emerson-Lei automaton, which can be converted into a deterministic Rabin automaton without blow-up of states and transitions. We implement our algorithm in a prototype tool named ourDC and empirically evaluate ourDC against the state-of-the-art tools on a large set of benchmarks from the literature. The experimental results show that our prototype ourDC outperforms Spot and Owl regarding the number of states and transitions. |
12:20 | PRESENTER: Ben Greenman ABSTRACT. We have been studying LTL misconceptions with multiple populations to determine *in what ways* LTL is tricky and to decide *what we can do* to address the issues. We propose an interactive remote talk that aims to demonstrate the LTL misconceptions and expert blind spots that we have found. |
12:30 | Comments from Giuseppe, Kuldeep, and Kristin |
Lunches will be held in Taub hall and in The Grand Water Research Institute.
Tutorial: Assumption-Based Nonmonotonic Reasoning - Alexander Bochman
FoMLAS Session 7
14:00 | PRESENTER: Omri Isac ABSTRACT. Deep neural networks (DNNs) are increasingly being employed in safety-critical systems, and there is an urgent need to guarantee their correctness. Consequently, the verification community has devised multiple techniques and tools for verifying DNNs. When DNN verifiers discover an input that triggers an error, that is easy to confirm; but when they report that no error exists, there is no way to ensure that the verification tool itself is not flawed. As multiple errors have already been observed in DNN verification tools, this calls the applicability of DNN verification into question. In this work, we present a novel mechanism for enhancing Simplex-based DNN verifiers with proof production capabilities: the generation of an easy-to-check witness of unsatisfiability, which attests to the absence of errors. Our proof production is based on an efficient adaptation of the well-known Farkas’ lemma, combined with mechanisms for handling piecewise-linear functions and numerical precision errors. As a proof of concept, we implemented our technique on top of the Marabou DNN verifier. Our evaluation on a safety-critical system for airborne collision avoidance shows that proof production succeeds in almost all cases, and entails only a small overhead. |
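The certificate idea is easy to appreciate on plain linear arithmetic: by Farkas' lemma, a nonnegative vector y with yᵀA = 0 and yᵀb < 0 witnesses that Ax ≤ b is infeasible, and checking the witness is trivial. A minimal sketch (ours, ignoring the paper's piecewise-linear and numerical-precision machinery):

```python
# Check a Farkas certificate of infeasibility for the system A x <= b:
# y >= 0, y^T A = 0 and y^T b < 0 together prove no solution x exists.
import numpy as np

def check_farkas(A, b, y, tol=1e-9):
    """True iff y certifies that {x : A x <= b} is empty."""
    y = np.asarray(y, dtype=float)
    return bool((y >= -tol).all()
                and np.allclose(y @ A, 0.0, atol=tol)
                and (y @ b < -tol))

A = np.array([[1.0], [-1.0]])    # encodes x <= 0 and -x <= -1 (i.e. x >= 1)
b = np.array([0.0, -1.0])
print(check_farkas(A, b, y=[1.0, 1.0]))   # True: the system is infeasible
```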
14:30 | Efficient Neural Network Verification using Branch and Bound ABSTRACT. In this talk, I will describe two recent Branch and Bound (BaB) verifiers developed by our group to ensure different safety properties of neural networks. The BaB verifiers involve two main steps: (1) recursively splitting the original verification problem into easier independent subproblems by splitting input or hidden neurons; and (2) for each split subproblem, using fast but incomplete bound propagation techniques to compute sound estimated bounds for the outputs of the target neural network. One of the key limitations of existing BaB verifiers is computing tight relaxations of activation functions' (i.e., ReLU) nonlinearities. Our recent works (α-CROWN and β-CROWN) introduce a primal-dual approach and jointly optimize the corresponding Lagrangian multipliers for each ReLU with gradient ascent. Such an approach is highly parallelizable and avoids calls to expensive LP solvers. Our verifiers not only provide tighter output estimations than existing bound propagation methods but also can fully leverage GPUs with massive parallelization. Our verifier, α, β-CROWN (alpha-beta-CROWN), won the second International Verification of Neural Networks Competition (VNN-COMP 2021) with the highest total score. Bio: Suman Jana is an associate professor in the department of computer science and the data science institute at Columbia University. His primary research interest is at the intersections of computer security and machine learning. His research has received six best paper awards, a CACM research highlight, a Google faculty fellowship, a JPMorgan Chase Faculty Research Award, an NSF CAREER award, and an ARO young investigator award. |
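For readers new to bound propagation, the simplest sound variant is plain interval arithmetic through affine and ReLU layers; the sketch below (toy weights; not α-CROWN/β-CROWN, which compute much tighter bounds) shows the building block these methods improve upon:

```python
# Interval bound propagation (IBP): propagate an input box through affine
# and ReLU layers to get sound, if loose, bounds on the network outputs.
import numpy as np

def ibp_affine(lo, hi, W, b):
    Wp, Wn = np.maximum(W, 0), np.minimum(W, 0)
    return Wp @ lo + Wn @ hi + b, Wp @ hi + Wn @ lo + b   # sound output box

def ibp_relu(lo, hi):
    return np.maximum(lo, 0), np.maximum(hi, 0)

W1, b1 = np.array([[1.0, -1.0], [0.5, 0.5]]), np.zeros(2)   # toy weights
W2, b2 = np.array([[1.0, 1.0]]), np.zeros(1)

lo, hi = np.array([0.0, 0.0]), np.array([1.0, 1.0])          # input box
lo, hi = ibp_relu(*ibp_affine(lo, hi, W1, b1))
lo, hi = ibp_affine(lo, hi, W2, b2)
print(lo, hi)   # sound bounds on the output over the whole input box
```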
Session focused on the most recent applications of s(CASP) and the description of the functionalities incorporated in s(CASP) that have made them possible: 3 regular talks (20 minutes presentation) and a short talk (10 minutes presentation) plus 5 minutes of Q&A each one.
14:00 | Automating Defeasible Reasoning in Law with Answer Set Programming PRESENTER: Avishkar Mahajan ABSTRACT. The paper studies defeasible reasoning in rule-based systems, in particular about legal norms and contracts. We identify rule modifiers that specify how rules interact and how they can be overridden. We then define rule transformations that eliminate these modifiers, leading in the end to a translation of rules to formulas. For reasoning with and about rules, we contrast two approaches, one in a classical logic with SMT solvers, which is only briefly sketched, and one using non-monotonic logic with Answer Set Programming solvers, described in more detail. |
14:25 | Unmanned Aerial Vehicle compliance checking using Goal-Directed Answer Set Programming PRESENTER: Sarat Chandra Varanasi ABSTRACT. We present a novel application of Goal-Directed Answer Set Programming that digitizes the model aircraft operator’s compliance verification against the Academy of Model Aeronautics (AMA) safety code. The AMA safety code regulates how AMA flyers operate Unmanned Aerial Vehicles (UAVs) for limited recreational purposes. Flying drones and their operators are subject to various rules before and after the operation of the aircraft to ensure safe flights. In this paper, we leverage Goal-Directed Answer Set Programming to encode the AMA safety code and automate compliance checks. To check compliance, we use s(CASP), a goal-directed ASP engine. By using s(CASP), operators can easily check for violations and obtain a justification tree explaining the cause of the violations in human-readable natural language. We develop a front-end questionnaire interface that accepts various conditions and use the backend s(CASP) engine to evaluate whether the conditions adhere to the regulations. We also leverage the s(CASP) implementation in SWI-Prolog, where SWI-Prolog exposes the reasoning capabilities of s(CASP) as a REST service. To the best of our knowledge, this is the first application of ASP in the AMA and Avionics Compliance and Certification space. |
14:50 | Symbolic Reinforcement Learning Framework with Incremental Learning of Rule-based Policy PRESENTER: Elmer Salazar ABSTRACT. In AI research, Relational Reinforcement Learning (RRL) is a widely discussed domain that combines reinforcement learning with relational or inductive learning. One of the key challenges of inductive learning through rewards and actions is to learn the relations incrementally; in other words, to closely mimic the human learning process, in which we start with a very naive belief about a concept and gradually update it over time to a more concrete hypothesis. In this paper, we address this challenge and show that an automatic theory revision component can be developed efficiently that updates the existing hypothesis based on the rewards the agent collects by applying it. We present a symbolic reinforcement learning framework with an automatic theory revision component for incremental learning. This theory revision component would not be possible to build without the help of a goal-directed execution engine of answer set programming (ASP): s(CASP). The current work demonstrates a proof of concept of the RL framework, and work on it is ongoing. |
15:15 | LTL Model Checking using Coinductive Answer Set Programming PRESENTER: Sarat Chandra Varanasi ABSTRACT. We present a model checker for Linear Temporal Logic using Goal-Directed Answer Set Programming under Costable model semantics (CoASP). Costable model semantics allows positive loops to succeed, unlike Stable model semantics, where positive loops fail. Therefore, by using the Costable model semantics, LTL formulas involving the G and R operators can be proved coinductively. |
14:00 | TBA |
15:00 | Semantics for two-dimensional type theory PRESENTER: Benedikt Ahrens |
14:00 | ABSTRACT. We characterize local confluence of *conditional rewrite systems* à la Huet, i.e., as the joinability of a set of conditional pairs including the usual conditional critical pairs and a new kind of pairs we call *conditional variable pairs*. |
14:30 | A Critical Pair Criterion for Level-Commutation of Conditional Term Rewriting Systems PRESENTER: Takahito Aoto ABSTRACT. We introduce level-commutation of conditional term rewriting systems (CTRSs) that extends the notion of level-confluence, in a way similar to extending confluence to commutation. We show a criterion for level-commutation of oriented CTRSs, which generalizes the one for commutation of term rewriting systems in (Toyama, 1987). As a corollary, we obtain a criterion of level-confluence of oriented CTRSs which extends the one in (Suzuki et al., 1995). |
15:00 | Proving Confluence with CONFident PRESENTER: Raúl Gutiérrez ABSTRACT. This paper describes the proof framework used in CONFident, a framework able to deal with different types of systems (term rewriting systems, context-sensitive term rewriting systems and conditional term rewriting systems), different types of tasks (checking joinability of critical pairs, termination of systems, feasibility of reachability problems, or deducibility) and different techniques for proving confluence (including modular decompositions, transformations, etc.). |
Frank Pfenning special session: invited talk
14:00 | Possible Desiderata for Logic in the CS Curriculum ABSTRACT. Mathematical (or formal) logic is extremely important in computer science (yet another example, along with number theory, of an already extant mathematical field’s having extreme applicability to the "newer" field of Computer Science). As the Workshop participants are well aware, to incorporate logic into the CS curriculum has been very challenging; there have been sporadic efforts for decades. But, as noted in the Call for Participation, if anything, the situation has worsened over the years. Some of the following will be suggested attempts to lessen this logjam; others will be oriented toward logic content. Caveat: virtually everything following is based on (likely incomplete) knowledge of academia in the United States. Hence, specific items may or may not have relevance in other countries. |
15:00 | Teaching Logic for Computer Science Students: Proof Assistants and Related Tools PRESENTER: Frederik Krogsdal Jacobsen ABSTRACT. In the last decade we have focused our main logic courses on proof assistants and related tools. We find that the modern computer science curriculum requires a focus on applications instead of just pen-and-paper proofs. Notably, we teach the metatheory of logic using tools with formalizations in proof assistants like Isabelle such that we have both implementations and theorems about them. |
14:00 | ABSTRACT. We consolidate two widely believed conjectures about tautologies---no optimal proof system exists, and most require superpolynomial size proofs in any system---into a $p$-isomorphism-invariant condition satisfied by all paddable $\textbf{coNP}$-complete languages or none. The condition is: for any Turing machine (TM) $M$ accepting the language, $\textbf{P}$-uniform input families requiring superpolynomial time by $M$ exist (equivalent to the first conjecture) and appear with positive upper density in an enumeration of input families (implies the second). In that case, no such language is easy on average (in $\textbf{AvgP}$) for a distribution applying non-negligible weight to the hard families. The hardness of proving tautologies and theorems is likely related. Motivated by the fact that arithmetic sentences encoding “string $x$ is Kolmogorov random” are true but unprovable with positive density in a finitely axiomatized theory $\mathcal{T}$ (Calude and Jürgensen), we conjecture that any propositional proof system requires superpolynomial size proofs for a dense set of $\textbf{P}$-uniform families of tautologies encoding “there is no $\mathcal{T}$ proof of size $\leq t$ showing that string $x$ is Kolmogorov random”. This implies the above condition. The conjecture suggests that there is no optimal proof system because undecidable theories help prove tautologies and do so more efficiently as axioms are added, and that constructing hard tautologies seems difficult because it is impossible to construct Kolmogorov random strings. Similar conjectures that computational blind spots are manifestations of noncomputability would resolve other open problems. |
14:30 | ABSTRACT. The problem of the existence of a p-optimal propositional proof system is one of the central open problems in proof complexity. The goal of this work is to study the restricted case of this problem, namely, the case of strong reductions. We also introduce the notion of bounded proof system and study the connection between optimality and automatizability for bounded proof systems. |
15:00 | PRESENTER: Tomáš Peitl ABSTRACT. Hitting formulas are a peculiar class of propositional CNF formulas. Not only is their satisfiability decidable in polynomial time, even their models can be counted in closed form. In contrast, other tractable classes, like 2-SAT and Horn-SAT, usually have algorithms based on resolution, and model counting for them remains hard. On the flip side, those resolution-based algorithms usually easily imply an upper bound on resolution complexity, which is missing for hitting formulas. Are hitting formulas hard for resolution? In this paper we take the first steps towards answering this question. We show that the resolution complexity of hitting formulas is dominated by so-called irreducible hitting formulas. However, constructing large irreducible formulas is non-trivial and it is not even known whether infinitely many exist. Building upon our theoretical results, we implement an efficient algorithm on top of the Nauty software package to enumerate all irreducible unsatisfiable hitting formulas with up to 14 clauses, and we determine the exact resolution complexity of those with up to 13 clauses, by extending an existing SAT encoding. |
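To make the closed-form counting claim concrete, here is a small Python sketch (our illustration, not code from the paper). A CNF is a hitting formula when every pair of clauses clashes on some variable; the assignments falsifying distinct clauses are then pairwise disjoint, so the model count follows directly:

```python
from itertools import combinations

def is_hitting(clauses):
    # Every pair of clauses must clash: some variable occurs
    # positively in one clause and negatively in the other.
    return all(any(-lit in d for lit in c) for c, d in combinations(clauses, 2))

def count_models(clauses, n):
    # Closed-form count for a hitting formula over n variables: each
    # clause C falsifies exactly 2^(n-|C|) assignments, and for hitting
    # formulas these falsified sets are pairwise disjoint.
    assert is_hitting(clauses)
    return 2**n - sum(2**(n - len(c)) for c in clauses)

# A small unsatisfiable hitting formula over 2 variables
# (clauses as sets of signed integers, DIMACS-style).
f = [{1, 2}, {1, -2}, {-1, 2}, {-1, -2}]
print(is_hitting(f), count_models(f, 2))  # True 0
```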
14:00 | Decompositions and algorithms for interpretations of sparse graphs ABSTRACT. The first-order model checking problem for finite graphs asks, given a graph G and a first-order sentence phi as input, to decide whether phi holds on G. While we do not expect that there is an efficient (fpt with respect to the formula size) algorithm which works on the class of all finite graphs, such algorithms have been proven to exist for various structurally well-behaved classes of graphs (graphs of bounded degree, planar graphs, unit interval graphs, etc.). Identifying graph classes for which an fpt model checking algorithm exists has been an active area of research for the past 25 years. After the existence of an efficient model checking algorithm was shown for classes of sparse graphs in 2014 by Grohe, Kreutzer and Siebertz, the attention gradually turned to the more general setting of graph classes which can be obtained from sparse graphs via interpretations/transductions. This program was initiated in 2016, when the existence of an fpt algorithm for the first-order model checking problem was shown for graph classes interpretable in graphs of bounded degree. After this, there followed several results about the structure of graphs interpretable in sparse graphs, but despite the efforts of several groups of researchers, no positive algorithmic result had been achieved until very recently. In the talk we will review the current status and recent developments regarding this problem, and in particular we will present a fixed-parameter tractable algorithm for the first-order model checking on interpretations of graph classes with bounded local treewidth (notably, this includes interpretations of planar graphs). |
15:00 | Tractable Abstract Argumentation via Backdoor-Treewidth PRESENTER: Matthias König ABSTRACT. Argumentation frameworks (AFs) are a core formalism in the field of formal argumentation. As most standard computational tasks regarding AFs are hard for the first or second level of the Polynomial Hierarchy, a variety of algorithmic approaches to achieve manageable runtimes have been considered in the past. Among them, the backdoor-approach and the treewidth-approach turned out to yield fixed-parameter tractable fragments. However, many applications yield high parameter values for these methods, often rendering them infeasible in practice. We introduce the backdoor-treewidth approach for abstract argumentation, combining the best of both worlds with a guaranteed parameter value that does not exceed the minimum of the backdoor- and treewidth-parameter. In particular, we formally define backdoor-treewidth and establish fixed-parameter tractability for standard reasoning tasks of abstract argumentation. Moreover, we provide systems to find and exploit backdoors of small width, and conduct systematic experiments evaluating the new parameter. |
14:00 | PRESENTER: Fabrizio Riguzzi ABSTRACT. Hybrid probabilistic logic programs extend probabilistic logic programs by adding the possibility to manage continuous random variables. Despite the maturity of the field, a semantics that unifies discrete and continuous random variables and function symbols was still missing. In this paper, we summarize the main concepts behind a new proposed semantics for hybrid probabilistic logic programs with function symbols. |
14:30 | PRESENTER: Fabrizio Riguzzi ABSTRACT. Probabilistic Logic Programs under the distribution semantics (PLPDS) do not allow statistical probabilistic statements of the form "90% of birds fly", which Halpern defined as Type 1 statements. In this paper, we add this kind of statement to PLPDS and introduce the PASTA (Probabilistic Answer set programming for STAtistical probabilities) language. We translate programs in our new formalism into probabilistic answer set programs under the credal semantics. This approach differs from previous proposals, such as the one based on probabilistic conditionals: instead of choosing a single model by making the maximum entropy assumption, we take into consideration all models and assign probability intervals to queries. In this way we refrain from making assumptions and obtain a more neutral framework. We also propose an inference algorithm and compare it with an existing solver for probabilistic answer set programs on a number of programs of increasing size, showing that our solution is faster and can deal with larger instances. --- NOTE: this is the original paper submitted and accepted at the 16th International Conference on Logic Programming and Non-monotonic Reasoning (LPNMR). It will be published in the proceedings of LPNMR so it must not be published on CEUR. --- |
14:00 | ABSTRACT. The TBUDDY library enables the construction and manipulation of reduced, ordered binary decision diagrams (BDDs). It extends the capabilities of the BUDDY BDD package to support trusted BDDs, where the generated BDDs are accompanied by proofs of their logical properties. These proofs are expressed in a standard clausal framework, for which a variety of proof checkers are available. Building on TBUDDY via its application-program interface (API) enables developers to implement automated reasoning tools that generate correctness proofs for their outcomes. In some cases, BDDs serve as the core reasoning mechanism for the tool, while in other cases they provide a bridge from the core reasoner to proof generation. A Boolean satisfiability (SAT) solver based on TBUDDY can achieve polynomial scaling when generating unsatisfiability proofs for a number of problems that yield exponentially-sized proofs with standard solvers. It performs particularly well for formulas containing parity constraints, where it can employ Gaussian elimination to systematically simplify the constraints. |
14:30 | PRESENTER: Mate Soos ABSTRACT. Traditional Boolean satisfiability (SAT) solvers based on the conflict-driven clause-learning (CDCL) framework fare poorly on formulas involving large numbers of parity constraints. The CryptoMiniSat solver augments CDCL with Gauss-Jordan elimination to greatly improve performance on these formulas. Integrating the TBUDDY proof-generating BDD library into CryptoMiniSat enables it to generate unsatisfiability proofs when using Gauss-Jordan elimination. These proofs are compatible with standard, clausal proof frameworks. |
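As a rough illustration of why parity constraints suit this treatment (our sketch, unrelated to the actual CryptoMiniSat or TBUDDY code), each XOR constraint is a linear equation over GF(2), and Gauss-Jordan elimination detects parity conflicts in polynomial time:

```python
def gauss_jordan_gf2(eqs):
    # Gauss-Jordan elimination for XOR (parity) constraints over GF(2).
    # Each equation is (mask, rhs): mask is a bitmask of the variables
    # XOR-ed together, rhs their required parity. Returns the pivot rows,
    # or None once the inconsistent equation 0 = 1 is derived (UNSAT).
    pivots = {}  # pivot variable -> (mask, rhs)
    for mask, rhs in eqs:
        for v, (pm, pr) in pivots.items():        # reduce by existing pivots
            if mask >> v & 1:
                mask, rhs = mask ^ pm, rhs ^ pr
        if mask == 0:
            if rhs == 1:
                return None                       # parity conflict: 0 = 1
            continue                              # redundant equation
        v = mask.bit_length() - 1                 # new pivot variable
        for u, (pm, pr) in list(pivots.items()):  # back-substitute
            if pm >> v & 1:
                pivots[u] = (pm ^ mask, pr ^ rhs)
        pivots[v] = (mask, rhs)
    return pivots

# x1^x2 = 1, x2^x3 = 1, x1^x3 = 1 is inconsistent: the parities sum to 1.
print(gauss_jordan_gf2([(0b011, 1), (0b110, 1), (0b101, 1)]))  # None
```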
15:00 | PRESENTER: Marijn Heule ABSTRACT. The Pigeonhole Principle (PHP) has been heavily studied in automated reasoning, both theoretically and in practice. Most solvers have exponential runtime and proof length, while some specialized techniques achieve polynomial runtime and proof length. Several decades ago, Cook manually constructed O(n^4) extended resolution proofs, where n denotes the number of pigeons. Existing automated techniques only surpass Cook's proofs in similar proof systems for large n. We construct the shortest known proofs of PHP in the standard proof format of modern SAT solving, DRAT. Using auxiliary variables and by recursively decomposing the original problem into smaller instances, we manually obtain proofs having length O(n^3) and leading coefficient 5/2. |
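For reference, the formula family itself is easy to generate; the standard clausal encoding of PHP_n uses one variable per pigeon-hole pair (a routine sketch of ours, independent of the DRAT construction in the paper):

```python
def php_cnf(n):
    # Variable var(p, h) is true iff pigeon p sits in hole h (1-based,
    # DIMACS-style). n+1 pigeons, n holes: unsatisfiable for every n >= 1.
    var = lambda p, h: p * n + h + 1
    clauses = [[var(p, h) for h in range(n)]   # every pigeon gets a hole
               for p in range(n + 1)]
    for h in range(n):                         # no hole hosts two pigeons
        for p in range(n + 1):
            for q in range(p + 1, n + 1):
                clauses.append([-var(p, h), -var(q, h)])
    return clauses

# PHP_3: 4 pigeons, 3 holes -> 4 + 3*C(4,2) = 22 clauses over 12 variables.
print(len(php_cnf(3)))  # 22
```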
14:00 | Moving Definition Variables in Quantified Boolean Formulas (Extended Abstract) PRESENTER: Joseph Reeves ABSTRACT. Augmenting problem variables in a quantified Boolean formula with definition variables enables a compact representation in clausal form. Generally these definition variables are placed in the innermost quantifier level. To restore some structural information, we introduced a preprocessing technique that moves definition variables to the quantifier level closest to the variables that define them. We express the movement in the QRAT proof system to allow verification by independent proof checkers. We evaluated definition variable movement on the QBFEVAL'20 competition benchmarks. Movement significantly improved performance for the competition's top solvers. Combining variable movement with the preprocessor bloqqer improves solver performance compared to using bloqqer alone. This work appeared in a TACAS'22 paper under the same title. |
14:30 | Advances in Quantified Integer Programming ABSTRACT. Quantified Integer Programs (QIP) are integer linear programs where variables are either existentially or universally quantified. Similar to the link from SAT to Integer Programming, a connection between QBF and QIP can be drawn: QIP extends QBF by allowing general integer variables and linear constraints in addition to a linear objective function. We review solution methods, recent developments, and extensions in order to display future research directions and opportunities. |
15:00 | A Data-Driven Approach for Boolean Functional Synthesis ABSTRACT. Given a relational specification between Boolean inputs and outputs, the problem of Boolean functional synthesis is to construct each output as a function of the inputs such that the specification is met. Synthesizing Boolean functions is one of the challenging problems in Computer Science. Over the decades, the problem has found applications in a wide variety of domains such as certified QBF solving, automated program repair, program synthesis, and cryptography. In this talk, we will discuss Manthan, a novel data-driven approach for Boolean functional synthesis. Manthan views the problem of functional synthesis as a classification problem, relying on advances in constrained sampling for data generation, and advances in automated reasoning for a novel proof-guided refinement and provable verification. |
Invited talk by Jörg Endrullis
14:00 | PRESENTER: Thibault Falque ABSTRACT. Backtrack search is a classical complete approach for exploring the search space of a constraint optimization problem. Each time a new solution is found during search, its associated bound is used to further constrain the problem, and thus the remaining search. An extreme (bad) scenario is when solutions are found in sequence with very small differences between successive bounds. In this paper, we propose an aggressive bound descent (ABD) approach to remedy this problem: new bounds are modified exponentially as long as the searching algorithm is successful. We show that this approach can render the solver more robust, especially at the beginning of search. Our experiments confirm this behavior for constraint optimization. |
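A schematic reading of the descent (ours; the callback name and the integer minimization setting are assumptions, not the authors' code): rather than tightening the bound by one unit after each solution, the step grows exponentially while the solver keeps succeeding and falls back when it fails.

```python
def aggressive_bound_descent(solve_with_bound, first_cost):
    # `solve_with_bound(b)` is an assumed solver call returning the cost
    # of a solution with cost <= b, or None when none exists.
    best, step = first_cost, 1
    while True:
        cost = solve_with_bound(best - step)
        if cost is not None:
            best, step = cost, step * 2  # success: descend more aggressively
        elif step > 1:
            step = 1                     # failure: retreat to classical descent
        else:
            return best                  # no solution below best: optimal
```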
14:20 | PRESENTER: Auguste Burlats ABSTRACT. Branching decisions have a strong impact on performance in Constraint Programming (CP). Therefore robust generic variable ordering heuristics are an important area of research in the CP community. By allowing us to compute marginal probability distributions over the domain of each variable in the model, CP-based Belief Propagation also allows us to compute the entropy of each variable. We present two new branching heuristics exploiting entropy: one chooses the variable with the smallest entropy; the other is inspired by impact-based search and chooses the variable whose fixing is believed to have the strongest impact on the entropy of the model. We also propose a dynamic stopping criterion for iterated belief propagation based on variations in model entropy. We test our heuristics on various constraint satisfaction problems from the XCSP and MiniZinc benchmarks. |
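The first heuristic is easy to state precisely (a sketch of our reading; the marginals are assumed to come from the CP-based belief propagation): compute each unfixed variable's entropy and branch on the smallest.

```python
import math

def entropy(marginal):
    # Shannon entropy of a marginal distribution over a variable's domain.
    return -sum(p * math.log2(p) for p in marginal.values() if p > 0)

def min_entropy_variable(marginals, unfixed):
    # Pick the unfixed variable whose BP marginal has the smallest entropy,
    # i.e. the variable the propagation is most certain about.
    # `marginals` maps each variable to {domain value: probability}.
    return min(unfixed, key=lambda v: entropy(marginals[v]))

# Toy example: x is nearly decided, y is uniform -> branch on x first.
marginals = {"x": {0: 0.9, 1: 0.1}, "y": {0: 0.5, 1: 0.5}}
print(min_entropy_variable(marginals, ["x", "y"]))  # x
```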
14:40 | PRESENTER: Sharmi Dev Gupta ABSTRACT. Interactive constraint systems often suffer from infeasibility (no solution) due to conflicting user constraints. A common approach to recovering from infeasibility is to eliminate the constraints that cause the conflicts in the system. This approach allows the system to provide an explanation of the form: "if the user is willing to drop some of their constraints, there exists a solution". However, one can criticise this form of explanation as not being very informative. A counterfactual explanation is a type of explanation that can provide a basis for the user to recover feasibility by helping them understand which changes can be applied to their existing constraints rather than removing them. This approach has been extensively studied in the machine learning field, but requires a more thorough investigation in the context of constraint satisfaction. We propose an iterative method based on conflict detection and maximal relaxations in over-constrained constraint satisfaction problems to help compute a counterfactual explanation. |
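As background for the conflict-detection step, the classical deletion-based routine extracts one subset-minimal conflict; `is_satisfiable` below is an assumed solver oracle (this is standard machinery, not the paper's full counterfactual method):

```python
def minimal_conflict(constraints, is_satisfiable):
    # Deletion-based extraction of one minimal conflict (MUS-style):
    # drop each constraint in turn and keep it out whenever the rest is
    # still unsatisfiable. The result is a subset-minimal set of mutually
    # conflicting user constraints, a natural starting point for proposing
    # changes to constraints rather than their removal.
    assert not is_satisfiable(constraints)
    core = list(constraints)
    for c in list(core):
        rest = [d for d in core if d is not c]
        if not is_satisfiable(rest):
            core = rest  # c is not needed to explain the conflict
    return core
```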
15:00 | PRESENTER: Andrea Balogh ABSTRACT. Constraint Programming is a powerful method to solve combinatorial problems, but due to a large search space, solving can be very time consuming. Diagnosis, planning, and product configuration are example use cases. These systems are often used in an online setting, answering queries. Compilation methods were developed to deal with the complexity of solving the problems offline and to create a representation that is able to answer queries in polynomial time. Symmetry breaking is the addition of constraints to eliminate symmetries, thus in general speeding up search and reducing the number of solutions. Knowledge compilation looks at finding succinct representations that are also tractable, that is, they support queries and transformations in polytime. Finding the smallest representation is often the bottleneck of compilation methods. Compiled representations are Directed Acyclic Graphs representing the set of all solutions. In this paper we investigate whether breaking symmetries, that is, representing fewer solutions, always leads to a smaller compiled representation. We considered four compilers and three highly symmetrical problems. A reduction in compilation size is observed for all the problems when we break symmetries, with top-down compilers obtaining a larger reduction. |
15:20 | PRESENTER: Siddharth Prasad ABSTRACT. The branch-and-cut algorithm for integer programming has a wide variety of tunable parameters that have a huge impact on its performance, but which are challenging to tune by hand. An increasingly popular approach is to use machine learning to configure these parameters based on a training set of integer programs from the application domain. We bound how large the training set should be to ensure that for any configuration, its average performance over the training set is close to its expected future performance. Our guarantees apply to parameters that control the most important aspects of branch-and-cut: node selection, branching constraint selection, and cut selection, and are sharper and more general than those from prior research. |
15:23 | Exploiting Functional Constraints in Automatic Dominance Breaking for Constraint Optimization (Extended Abstract) PRESENTER: Allen Z. Zhong ABSTRACT. Dominance breaking is an effective technique to reduce the time for solving constraint optimization problems. Lee and Zhong propose an automatic dominance breaking framework for a class of constraint optimization problems based on specific forms of objectives and constraints. In this paper, we propose to enhance the framework for problems with nested function calls which can be flattened to functional constraints. In particular, we focus on aggregation functions and exploit such properties as monotonicity, commutativity and associativity to give an efficient procedure for generating effective dominance breaking nogoods. Experimentation also shows orders-of-magnitude runtime speedup using the generated dominance breaking nogoods and demonstrates the ability of our proposal to reveal dominance relations in the literature and discover new dominance relations on problems with ineffective or no known dominance breaking constraints. |
15:26 | Large Neighborhood Search for Robust Solutions for Constraint Satisfaction Problems with Ordered Domains PRESENTER: Jheisson López ABSTRACT. Real-world Constraint Satisfaction Problems (CSPs) are subject to uncertainty/dynamism that cannot be known in advance. Some techniques in the literature offer robust solutions for CSPs, but they perform poorly on large-scale CSPs. We propose a Large Neighbourhood Search (LNS) algorithm and a value selection heuristic that search for robust solutions in large-scale CSPs. |
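For readers unfamiliar with LNS, the generic skeleton looks as follows (our sketch with assumed callback names, not the authors' algorithm): repeatedly free a subset of variables, re-solve with the rest fixed, and keep improvements.

```python
import random

def lns(initial, relax_size, reoptimize, score, iterations=1000):
    # `initial` maps variables to values; `reoptimize(frozen)` is an
    # assumed solver call that re-solves the problem with the variables
    # in `frozen` fixed to the given values, returning a full solution
    # or None; `score` rates a solution's (robustness) quality.
    best = initial
    for _ in range(iterations):
        variables = list(best)
        relaxed = set(random.sample(variables, relax_size))
        frozen = {v: best[v] for v in variables if v not in relaxed}
        candidate = reoptimize(frozen)
        if candidate is not None and score(candidate) > score(best):
            best = candidate
    return best
```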
Frank Pfenning special session: contributed talks
15:00 | Associativity or Non-Associativity, Local or Global! PRESENTER: Eben Blaisdell ABSTRACT. Lambek's two calculi, the associative one and the non-associative one, have their advantages and disadvantages for the analysis of natural language syntax by means of categorial grammar. In some cases, associativity leads to over-generation, i.e., validation of grammatically incorrect sentences. In other situations, associativity is useful. We will discuss two approaches. One approach, developed by Morrill and Moortgat, begins with the associative calculus and reconstructs local non-associativity by means of the so-called bracket modalities, ultimately leading to Morrill's CatLog parser. Bracket modalities interact in a subtle way with the subexponential modalities originating in linear logic. Another approach, developed by Moot and Retoré, begins with the non-associative calculus and utilizes multi-modalities, ultimately leading to the Grail parser. We enhance the latter approach in our IJCAR 2022 paper, showing that local associativity may be expressed by means of subexponentials. Some aspects of the two approaches touch on Frank Pfenning's work on ordered logic and on adjoint logic. We discuss decidability and undecidability results in both approaches. |
15:00 | ABSTRACT. In this talk I will recall a fruitful collaboration between Serge Abiteboul and myself with Moshe, that resulted in the 1997 JACM article with the above title. Under Moshe's impetus, the article completed in a very elegant way previous results, providing a comprehensive and compelling picture, and remains one of my favorite papers. The results establish a general connection between fixpoint logic and complexity. On one side, we have fixpoint logic, parameterized by the choices of 1st-order operators (inflationary or noninflationary) and iteration constructs (deterministic, nondeterministic, or alternating). On the other side, we have the complexity classes between P and EXPTIME. The parameterized fixpoint logics express the complexity classes P, NP, PSPACE, and EXPTIME, but only over ordered structures. The order requirement highlights an inherent mismatch between complexity and logic -- while computational devices work on encodings of problems, logic is applied directly to the underlying mathematical structures. To overcome this mismatch, we used a theory of relational complexity based on the relational machine, a computational device that operates directly on structures. Relational complexity bridges the gap between standard complexity and fixpoint logic. On one hand, questions about containments among standard complexity classes can be translated to questions about containments among relational complexity classes. On the other hand, the expressive power of fixpoint logic can be precisely characterized in terms of relational complexity classes. This tight, three-way relationship among fixpoint logics, relational complexity and standard complexity yields in a uniform way logical analogs to all containments among the complexity classes P, NP, PSPACE, and EXPTIME. The logical formulation shows that some of the most tantalizing questions in complexity theory boil down to a single question: the relative power of inflationary vs. noninflationary 1st-order operators. |
15:10 | ABSTRACT. Although I have worked closely with Moshe on counting problems, it will be a tall order to count all the things I've learnt from him -- through his articles, lectures, one-on-one conversations and gems of advice over the years. So I'd rather not try to count but briefly talk of a few (among many) occasions at different stages of my career, when I kept running into Moshe's profound contributions and insights. Starting from my grad school days, Moshe's result with Pierre Wolper on automata theoretic LTL model checking was among my first few introductions to formal verification. Much later, while working with my Ph.D. student on logic and some aspects of finite model theory, we kept running into beautiful results due to Moshe and his collaborators. More recently, while working with Moshe on PAC counting, I recall some very interesting discussions on how 2-, 3- or even 2.5-universal hashing might just be the sweet spot to help achieve a balance between scalability and strong formal guarantees. Almost a decade later, we know how spot-on his prediction was. |
15:20 | PRESENTER: Eli Singerman ABSTRACT. In this short talk, we will give a retrospective of the collaboration Moshe had with Intel over the past 25 years! This long-lasting collaboration was key in bringing formal verification to industry. It is an excellent example of Vardi’s unique contribution and ability to bridge theory and practice. |
15:30 | ABSTRACT. This summer, we gather to celebrate Moshe Vardi's many pioneering contributions to the theory and practice of computer science, and his leadership qualities and activities. My toast will focus on some less-known ingenious traits he exhibited starting very early in his career. In particular, I will discuss how Moshe acted in the role of a grand translator who applied advanced archery strategies in database research, and will comment on his role as a member of the Gang-of-Four. |
Tutorial: Assumption-Based Nonmonotonic Reasoning
FoMLAS Session 8
16:00 | PRESENTER: Idan Refaeli ABSTRACT. Deep neural networks (DNNs) have become increasingly popular in recent years. However, despite their many successes, DNNs may also err and produce incorrect and potentially fatal outputs in safety-critical settings, such as autonomous driving, medical diagnosis, and airborne collision avoidance systems. Much work has been put into detecting such erroneous behavior in DNNs, e.g., via testing or verification, but removing these errors after their detection has received less attention. We present here a new framework, called 3M-DNN, for repairing a given DNN, which is known to err on some set of inputs. The novel repair procedure employed by 3M-DNN computes a modification to the network’s weights that corrects its behavior, and attempts to minimize this change via a sequence of calls to a backend, black-box DNN verification engine. To the best of our knowledge, our method is the first one that allows repairing the network by simultaneously modifying the weights of multiple layers. This is achieved by splitting the network into sub-networks, and applying a single-layer repairing technique to each component. We evaluated 3M-DNN on an extensive set of benchmarks, obtaining promising results. |
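To give a feel for the single-layer step, here is a simplified least-squares analogue (ours, not the 3M-DNN procedure, which works through a black-box verification backend): if H stacks the hidden activations of the faulty inputs column-wise and T holds the desired pre-activations, the smallest-Frobenius-norm weight change satisfying (W + dW) @ H = T is dW = (T - W @ H) @ pinv(H), exact when H has full column rank.

```python
import numpy as np

def minimal_single_layer_repair(W, H, T):
    # Minimal-Frobenius-norm dW with (W + dW) @ H == T on the repair set:
    # solve each row independently; the minimum-norm solution is
    # dW = (T - W @ H) @ pinv(H).
    residual = T - W @ H
    return residual @ np.linalg.pinv(H)

W = np.array([[1.0, 0.0], [0.0, 1.0]])
H = np.array([[1.0], [2.0]])   # one repair input's hidden activations
T = np.array([[2.0], [1.0]])   # desired pre-activations for that input
dW = minimal_single_layer_repair(W, H, T)
print(np.allclose((W + dW) @ H, T))  # True
```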
16:30 | PRESENTER: Ravi Mangal ABSTRACT. Classifiers learnt from data are increasingly being used as components in systems where safety is a critical concern. In this work, we present a formal notion of safety for classifiers via constraints called safe-ordering constraints. These constraints relate requirements on the order of the classes output by a classifier to conditions on its input, and are expressive enough to encode various interesting examples of classifier safety specifications from the literature. For classifiers implemented using neural networks, we also present a run-time mechanism for the enforcement of safe-ordering constraints. Our approach is based on a self-correcting layer, which provably yields safe outputs regardless of the characteristics of the classifier input. We compose this layer with an existing neural network classifier to construct a self-correcting network (SC-Net), and show that in addition to providing safe outputs, the SC-Net is guaranteed to preserve the classification accuracy of the original network whenever possible. Our approach is independent of the size and architecture of the neural network used for classification, depending only on the specified property and the dimension of the network’s output; thus it is scalable to large state-of-the-art networks. We show that our approach can be optimized for a GPU, introducing run-time overhead of less than 1 ms on current hardware, even on large, widely-used networks containing hundreds of thousands of neurons and millions of parameters. |
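An intentionally tiny stand-in for the idea of a self-correcting output layer (our toy, not the paper's construction): enforce a required ranking among a set of classes by redistributing those classes' own logit values, leaving already-safe outputs untouched.

```python
import numpy as np

def enforce_ordering(logits, ordered_classes):
    # Force logits[c1] >= logits[c2] >= ... for the classes in
    # `ordered_classes` by reassigning those classes' own logit values
    # in descending order. If the input already satisfies the ordering,
    # the assignment is the identity, so safe outputs are unchanged.
    out = logits.copy()
    out[ordered_classes] = np.sort(logits[ordered_classes])[::-1]
    return out

logits = np.array([0.1, 2.0, 0.5, 1.0])
# Suppose an (assumed) input condition requires class 0 to outrank
# class 1; here the ordering is violated, so the two values are swapped.
print(enforce_ordering(logits, [0, 1]))  # [2.0, 0.1, 0.5, 1.0]
```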
Session focused on the latest applications of s(CASP): 2 regular talks (20 minutes plus 5 minutes of Q&A), followed by a panel to discuss the present and future of goal-directed execution of answer set programs (40 minutes).
16:00 | Models of homotopy type theory from the Yoneda embedding |
16:30 | A type-theoretic model structure over cubes with one connection presenting spaces PRESENTER: Evan Cavallo |
17:00 |
Frank Pfenning special session: invited talk
16:00 | PRESENTER: Ilario Bonacina ABSTRACT. Vanishing sums of roots of unity can be seen as a natural generalization of knapsack from Boolean variables to variables taking values over the roots of unity. We show that these sums are hard to prove for polynomial calculus and for sum-of-squares, both in terms of degree and size. |
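For readers meeting these sums for the first time, a minimal instance (standard mathematics, our illustration) of a vanishing sum of roots of unity, and the sense in which it generalizes knapsack:

```latex
% For a primitive n-th root of unity \omega = e^{2\pi i/n} with n > 1:
\[
  \sum_{k=0}^{n-1} \omega^{k} \;=\; \frac{\omega^{n} - 1}{\omega - 1} \;=\; 0 .
\]
% Boolean knapsack asks whether \sum_i a_i x_i = b with x_i \in \{0,1\};
% here the variables instead range over roots of unity, and the statement
% to be proved or refuted is that a given sum of them vanishes.
```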
17:00 | PRESENTER: Emre Yolcu ABSTRACT. We study the complexity of proof systems augmenting resolution with inference rules that allow, given a formula F in conjunctive normal form, deriving clauses that are not necessarily logically implied by F but whose addition to F preserves satisfiability. When the derived clauses are allowed to introduce variables not occurring in F, the systems we consider become equivalent to extended resolution. We are concerned with the versions of these systems "without new variables". They are called BC-, RAT-, SBC- and GER-, denoting respectively blocked clauses, resolution asymmetric tautologies, set-blocked clauses, and generalized extended resolution. Except for SBC-, these systems are known to be exponentially weaker than extended resolution. They are, however, all equivalent to it under a relaxed notion of simulation that allows the translation of the formula along with the proof when moving between proof systems. By taking advantage of this fact, we construct formulas that exponentially separate RAT- from GER- and vice versa. With the same strategy, we also separate SBC- from RAT-. Additionally, we give polynomial-size SBC- proofs of the pigeonhole principle, which separates SBC- from GER- by a previously known lower bound. These results also separate the three systems from BC- since they all simulate it. We thus give an almost complete picture of their relative strengths. Along the way, we prove a partial simulation of RAT- by BC- that is to our knowledge the only example of a nontrivial simulation in proof complexity that cannot necessarily be carried out in time polynomial in the size of the produced proof, which highlights the semantic nature of these systems. |
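For orientation, here is the standard syntactic check behind the weakest of these systems, BC- (our illustration of the textbook definition, not code from the paper): a clause is blocked when some literal in it resolves only tautologically against the formula, and adding such a clause preserves satisfiability even though it need not be implied.

```python
def is_tautology(clause):
    # A clause is a tautology if it contains a literal and its negation.
    return any(-lit in clause for lit in clause)

def is_blocked(clause, formula):
    # `clause` is blocked w.r.t. `formula` if some literal l in it has
    # only tautological resolvents: for every clause D containing -l,
    # the resolvent of the two clauses on l is a tautology.
    for lit in clause:
        if all(is_tautology((clause - {lit}) | (d - {-lit}))
               for d in formula if -lit in d):
            return True
    return False

# {x, y} is blocked on x w.r.t. {{-x, -y}}: the resolvent {y, -y} is a tautology.
print(is_blocked(frozenset({1, 2}), [frozenset({-1, -2})]))  # True
```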
16:00 | PRESENTER: Tim Holzenkamp ABSTRACT. Visual layouts of graphs representing SAT instances can highlight their community structure. The community structure of SAT instances has been associated with both instance hardness and known clause quality heuristics. Our tool SATViz visualizes CNF formulas using the variable interaction graph and a force-directed layout algorithm. With SATViz, clause proofs can be animated to continuously highlight variables that occur in a moving window of recently learned clauses. If needed, SATViz can also create new layouts of the variable interaction graph with the adjusted edge weights. In this paper, we describe the structure and feature set of SATViz. We also present some interesting visualizations created with SATViz. |
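A minimal version of the underlying pipeline (our sketch with networkx, not the SATViz implementation): build the variable interaction graph of the CNF and lay it out with a force-directed algorithm.

```python
import networkx as nx
import matplotlib.pyplot as plt

def variable_interaction_graph(clauses):
    # One node per variable, one edge per pair of variables occurring
    # together in a clause (unweighted here; SATViz also weights edges).
    g = nx.Graph()
    for clause in clauses:
        variables = [abs(lit) for lit in clause]
        g.add_nodes_from(variables)
        g.add_edges_from(
            (u, v) for i, u in enumerate(variables) for v in variables[i + 1:])
    return g

g = variable_interaction_graph([[1, -2, 3], [2, 4], [-3, -4]])
pos = nx.spring_layout(g, seed=0)  # force-directed layout
nx.draw(g, pos, with_labels=True)
plt.savefig("vig.png")
```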
16:30 | PRESENTER: Jakob Nordstrom ABSTRACT. Symmetry and dominance breaking can be crucial for solving hard combinatorial search and optimisation problems, but the correctness of these techniques sometimes relies on subtle arguments. For this reason, it is desirable to produce efficient, machine-verifiable certificates that solutions have been computed correctly. Building on the cutting planes proof system, we develop a certification method for optimisation problems in which symmetry and dominance breaking are easily expressible. Our experimental evaluation demonstrates that we can efficiently verify fully general symmetry breaking in Boolean satisfiability (SAT) solving, thus providing, for the first time, a unified method to certify a range of advanced SAT techniques that also includes XOR and cardinality reasoning. In addition, we apply our method to maximum clique solving and constraint programming as a proof of concept that the approach applies to a wider range of combinatorial problems. This is a presentation-only submission of a paper that appeared at the 36th AAAI Conference on Artificial Intelligence (AAAI '22). |
Regular papers, afternoon session, including discussion
16:00 | PRESENTER: Tim Kräuter ABSTRACT. BPMN is a widely used standard notation for defining intra- and inter-organizational workflows. However, the informal description of the BPMN execution semantics leads to different interpretations of BPMN constructs and difficulties in checking behavioral properties. Other approaches to formalizing BPMN’s execution semantics only partially cover BPMN. To this end, we propose a formalization that, compared to other approaches, covers most of the BPMN constructs. Our approach is based on a model transformation from BPMN models to graph grammars. As a proof of concept, we have implemented our approach in an open-source web-based tool. |
16:30 | Ideograph: A Language for Expressing and Manipulating Structured Data PRESENTER: Stephen Mell ABSTRACT. We introduce Ideograph, a language for expressing and manipulating structured data. Its types describe kinds of structures, such as natural numbers, lists, multisets, binary trees, syntax trees with variable binding, directed acyclic graphs, and relational databases. Fully normalized terms of a type correspond exactly to members of the corresponding structure, analogous to a Church encoding. Non-normal terms encode alternate representations of their fully normalized forms. In this paper, we first illustrate the correspondence between terms in our language and standard Church encodings, and then we exhibit the type of closed terms in untyped lambda calculus. |
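As a reminder of the correspondence the abstract alludes to (standard lambda calculus, not Ideograph syntax), Church numerals encode naturals as iterated function application; written with Python lambdas:

```python
# A Church numeral n is the function f -> f composed n times.
zero = lambda f: lambda x: x
succ = lambda n: lambda f: lambda x: f(n(f)(x))
add = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))

def to_int(church):
    # Decode a Church numeral by applying it to the successor on ints.
    return church(lambda k: k + 1)(0)

two = succ(succ(zero))
three = succ(two)
print(to_int(add(two)(three)))  # 5
```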
16:00 | PRESENTER: Rodothea Myrsini Tsoupidi ABSTRACT. Software masking, a software mitigation against power side-channel attacks, aims at removing the secret dependencies from the power traces that may leak cryptographic keys. However, high-level software mitigations often depend on general-purpose compilers, which do not preserve non-functional properties. What is more, microarchitectural features, such as the memory bus and register reuse, may also reveal secret information. These abstractions are not visible in the high-level implementation of the program. Instead, they are decided at compile time. To remedy these problems, security engineers often turn off compiler optimization and/or perform local, post-compilation transformations. However, these solutions lead to inefficient code. To deal with these issues, we propose Secure by Construction Code Generation (SecConCG), a constraint-based compiler backend that generates secure code. SecConCG can control the quality of the mitigated program by efficiently searching for the best possible low-level implementation according to a processor cost model. In our experiments with ten masked implementations on MIPS and ARM Cortex M0, SecConCG improves the generated code from 10% to 9x compared to non-optimized secure code at a small overhead of up to 8% compared to non-secure optimized code. |
16:20 | PRESENTER: Priyanka Golia ABSTRACT. Given a Boolean specification between a set of inputs and outputs, the problem of Boolean functional synthesis is to synthesize each output as a function of inputs such that the specification is met. In the report, we first discussed a state-of-the-art data-driven approach for Boolean functional synthesis, called Manthan [1]. Motivated by the progress in machine learning, Manthan views functional synthesis as a classification problem and relies on advances in constrained sampling for data generation, and advances in automated reasoning for a novel proof-guided repair and provable verification. We then discussed challenges faced by Manthan during data-driven synthesis and the remedies to overcome those challenges [2]. Finally, to discuss the applications, we showed the reduction of program synthesis to a special variant of Boolean functional synthesis in which we have explicit dependencies on universally quantified variables [3]. We hope that program synthesis will be that killer application that will motivate further research into functional synthesis. [1] Priyanka Golia, Subhajit Roy, and Kuldeep S. Meel. 2020. Manthan: A Data-Driven Approach for Boolean Function Synthesis. In Proceedings of International Conference on Computer-Aided Verification (CAV), 2020. [2] Priyanka Golia, Friedrich Slivovsky, Subhajit Roy, and Kuldeep S. Meel. 2021. Engineering an Efficient Boolean Functional Synthesis Engine. In Proceedings of International Conference On Computer Aided Design (ICCAD), 2021. [3] Priyanka Golia, Subhajit Roy, and Kuldeep S. Meel. Program synthesis as dependency quantified formula modulo theory. In Proceedings of International Joint Conference on Artificial Intelligence (IJCAI), 2021. |
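A toy rendering of the classification view (our sketch; the real pipeline uses constrained sampling and a proof-guided repair loop, neither shown here): sample assignments consistent with the specification, then fit an off-the-shelf classifier per output.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# Specification (our toy example): output y must equal x1 OR x2.
# Sample assignments consistent with the specification and fit a
# decision tree mapping inputs to the output.
rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(200, 2))   # sampled input assignments
y = X[:, 0] | X[:, 1]                   # outputs consistent with the spec
candidate = DecisionTreeClassifier().fit(X, y)

# The learned candidate is then checked against the specification, with
# counterexamples driving the repair loop (omitted in this sketch).
truth_table = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
print(candidate.predict(truth_table))   # [0 1 1 1]
```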
16:40 | PRESENTER: Thibault Falque ABSTRACT. One of the possible approaches for solving a CSP is to encode the input problem into a CNF formula, and then use a SAT solver to solve it. The main advantage of this technique is that it allows one to benefit from the practical efficiency of modern SAT solvers, based on the CDCL architecture. However, the reasoning power of SAT solvers is somewhat "weak", as it is limited by that of the resolution proof system they use internally. This observation led to the development of so-called pseudo-Boolean (PB) solvers, which implement the stronger cutting planes proof system, along with many of the solving techniques inherited from SAT solvers. Additionally, PB solvers can natively reason on PB constraints, i.e., linear equalities or inequalities over Boolean variables. These constraints are more succinct than clauses, so that a single PB constraint can represent exponentially many clauses. In this paper, we leverage both this succinctness and the reasoning power of PB solvers to solve CSPs by designing PB encodings for different common constraints, and feeding them into PB solvers to compare their performance with that of existing CP solvers. |
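The succinctness gap is easy to quantify (our illustration): an at-most-k constraint over n Boolean variables is a single PB inequality x1 + ... + xn <= k, whereas its naive clausal form needs one clause per (k+1)-subset.

```python
from itertools import combinations
from math import comb

def at_most_k_clauses(variables, k):
    # Naive clausal encoding of sum(variables) <= k: forbid every
    # (k+1)-subset from being simultaneously true.
    return [[-v for v in subset] for subset in combinations(variables, k + 1)]

n, k = 20, 10
print(comb(n, k + 1))                               # 167960 clauses ...
print(len(at_most_k_clauses(range(1, n + 1), k)))   # ... versus one PB constraint
```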
17:00 | A Boolean Formula Seeker in the Context of Acquiring Maps of Interrelated Conjectures on Sharp Bounds PRESENTER: Ramiz Gindullin ABSTRACT. A component of the Bound Seeker [1] is the Boolean formula seeker, a part devoted to searching for Boolean formulae. Here, a Boolean formula involves n arithmetic conditions linked by a single commutative logical operator or by a sum. Part of the originality of the Boolean formula seeker is that it was synthesised by a constraint program. This extended abstract includes (i) the type of Boolean formulae we consider, (ii) the importance of allowing Boolean formulae in the context of maps of conjectures, (iii) the components of the Boolean formula seeker, and (iv) a short description of the different steps of the acquisition process of Boolean formulae. |
17:20 | PRESENTER: Alexander Ek ABSTRACT. We introduce two log-linear-time dispersion propagators---(a) spread (variance, and indirectly standard deviation) and (b) the Gini coefficient---capable of explaining their propagations, thus allowing clause learning solvers to use the propagators. Propagators for (a) exist in the literature but do not explain themselves, while propagators for (b) have not been previously studied. |
17:23 | PRESENTER: Christopher Coulombe ABSTRACT. We propose CABSC, a system that performs Constraint Acquisition Based on Solution Counting. In order to learn a Constraint Satisfaction Problem (CSP), the user provides positive examples and a Meta-CSP, i.e., a model of a combinatorial problem whose solution is a CSP. It allows listing the potential constraints that can be part of the CSP the user wants to learn. It also allows stating the parameters of the constraints and imposing constraints over these parameters. CABSC reads the Meta-CSP using an augmented version of the MiniZinc language and returns the CSP that accepts the fewest solutions among the CSPs accepting all positive examples. This is done using a branch-and-bound algorithm where the bounding mechanism makes use of a model counter. Experiments show that CABSC is successful at learning constraints and their parameters from positive examples. |
17:26 | PRESENTER: Jovial Cheukam Ngouonou ABSTRACT. To automate the discovery of conjectures on combinatorial objects, we introduce the concept of a map of sharp bounds on characteristics of combinatorial objects, which provides a set of interrelated sharp bounds for these combinatorial objects. We then describe a Bound Seeker, a CP-based system that gradually acquires maps of conjectures. The system was tested by searching for conjectures on bounds on characteristics of digraphs: it constructs sixteen maps involving 431 conjectures on sharp lower and upper bounds on eight digraph characteristics. |
Frank Pfenning special session: contributed talks
17:00 | A (Logical) Framework for Collaboration ABSTRACT. A common interest in logical frameworks formed the basis for many years of fruitful and enjoyable collaboration with Frank and his students, both directly and indirectly. Frank’s development of the Twelf system was instrumental in enabling the use of LF for specifying a wide range of formalisms, ranging from “pure logics” to “practical programming languages.” Most importantly, the Twelf totality checker for proving All-Exists statements allowed for a human-readable and easily developed proof of type safety for the Standard ML language that not only verified what we already knew, but also fostered the development of a new language definition methodology well-suited to the demands of working at scale. |
17:15 | Type refinement as a unifying principle ABSTRACT. I will discuss some personal perspectives on the "Type Refinement" project initiated by Frank several decades ago, and how it has influenced my research over the years since my PhD. |
17:30 | Language Minimalism and Logical Frameworks PRESENTER: Chris Martens ABSTRACT. In the design of tools, languages, and frameworks, Frank has been a consistent advocate for a particular flavor of aesthetic minimalism. A valuable trait of minimalist frameworks and tiny programming languages is that they facilitate implementation and reimplementation in multiple contexts with various interfaces. This principle has guided our (Chris and Rob’s) work at and beyond Carnegie Mellon, including for some recent projects we've pursued together. We'll talk about some of our Frank-adjacent projects we've reimplemented on the web, how language minimalism contributed to their success, lessons learned, and a path forward for extending the reach and application of logical frameworks. |
17:45 | Logics for Robotics |
17:10 | How to be an ethical computer scientist ABSTRACT. Many of us got involved in computing because programming was fun. The advantages of computing seemed intuitive to us. We truly believed that computing yields tremendous societal benefits; for example, the life-saving potential of driverless cars is enormous! Recently, however, computer scientists realized that computing is not a game -- it is real -- and it brings with it not only societal benefits, but also significant societal costs, such as labor polarization, disinformation, and smart-phone addiction. A common reaction to this crisis is to label it as an "ethics crisis". But corporations are driven by profits, not ethics, and machines are just machines. Only people can be expected to act ethically. In this talk, the speaker will discuss how computer scientists should behave ethically. (The talk will also be broadcast live. For a webinar link to this talk, please register here: https://bit.ly/VardiFest ) Short Bio: Moshe Y. Vardi, the founding chair of FLoC, is a University Professor and the George Distinguished Service Professor in Computational Engineering at Rice University. He is the recipient of three IBM Outstanding Innovation Awards, the ACM SIGACT Goedel Prize, the ACM Kanellakis Award, the ACM SIGMOD Codd Award, the Blaise Pascal Medal, the IEEE Computer Society Goode Award, the EATCS Distinguished Achievements Award, the Southeastern Universities Research Association's Distinguished Scientist Award, the ACM SIGLOG Church Award, the Knuth Prize, the ACM Allen Newell Award, and the IEEE Norbert Wiener Award for Social and Professional Responsibility. He holds seven honorary doctorates. |