Tutorial on DatalogMTL (https://datalogmtl.github.io/), part 1/2: motivations, introduction to DatalogMTL, and explanation of reasoning techniques.
09:00 | DatalogMTL ABSTRACT. The first part of the DatalogMTL tutorial. |
FoMLAS Session 1
09:00 | PRESENTER: Marco Casadio ABSTRACT. With the proliferation of Deep Machine Learning into real-life applications, a particular property of this technology has been brought to attention: robustness. Neural networks notoriously present low robustness and can be highly sensitive to small input perturbations. Recently, many methods for verifying networks’ general properties of robustness have been proposed, but they are mostly applied in Computer Vision. In this paper we propose a verification method for Natural Language Understanding classification based on larger regions of interest, and we discuss the challenges of such a task. We observe that, although the data is almost linearly separable, the verifier does not output positive results, and we explain the problems and implications. (Submitted as Extended Abstract) |
09:30 | PRESENTER: Matan Ostrovsky ABSTRACT. Convolutional neural networks have gained vast popularity due to their excellent performance in the fields of computer vision, image processing, and others. Unfortunately, it is now well known that convolutional networks often produce erroneous results --- for example, minor perturbations of the inputs of these networks can result in severe classification errors. Numerous verification approaches have been proposed in recent years to prove the absence of such errors, but these are typically geared for fully connected networks and suffer from exacerbated scalability issues when applied to convolutional networks. To address this gap, we present here the CNN-Abs framework, which is particularly aimed at the verification of convolutional networks. The core of CNN-Abs is an abstraction-refinement technique, which simplifies the verification problem through the removal of convolutional connections in a way that soundly creates an over-approximation of the original problem; and which restores these connections if the resulting problem becomes too abstract. CNN-Abs is designed to use existing verification engines as a backend, and our evaluation demonstrates that it can significantly boost the performance of a state-of-the-art DNN verification engine, reducing runtime by 15.7% on average. |
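The abstraction-refinement loop at the core of CNN-Abs follows the familiar CEGAR pattern, sketched below in Python; `abstract`, `refine`, and the backend `verify`, along with the result object's fields, are hypothetical stand-ins, not the actual CNN-Abs or backend-verifier API.

```python
# Generic CEGAR-style loop in the spirit of CNN-Abs. All names here are
# illustrative assumptions, not the real tool's interface.

def verify_with_abstraction(network, prop, abstract, refine, verify):
    """Decide whether `network` satisfies `prop` via refined abstractions."""
    abs_net = abstract(network)            # over-approximate: drop connections
    while True:
        result = verify(abs_net, prop)     # call the backend verifier
        if result.holds:
            return True                    # sound: proof transfers to original
        if result.counterexample_holds_on(network):
            return False                   # genuine violation of `prop`
        # Spurious counterexample: restore some removed connections.
        abs_net = refine(abs_net, result.counterexample)
```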
10:00 | PRESENTER: Remi Desmartin ABSTRACT. The demand for formal verification tools for neural networks has increased as neural networks have been deployed in a growing number of safety-critical applications. Matrices are a data structure essential to formalising neural networks. Functional programming languages encourage diverse approaches to matrix definitions. This feature has already been successfully exploited in different applications. The question we ask is whether, and how, these ideas can be applied in neural network verification. The functional programming language Imandra combines the syntax of a functional programming language with the power of an automated theorem prover. Using these two key features of Imandra, we explore how different implementations of matrices can influence the automation of neural network verification. |
XLoKR Session 1
09:00 | Invited Talk: Explaining and Arguing with Facts and Rules ABSTRACT. Many existing techniques in Explainable AI, perhaps most of them, are based on relatively simple statistical summaries that capture the behavior of large classifiers. However, explanations often require more refined reasoning, sometimes in contexts that demand quick responses. To illustrate such scenarios, we start this talk by looking at explanations within recommendation systems, in particular examining the value of knowledge graphs in producing arguments for and against recommended items. We then move to techniques that explain the links stored in a knowledge graph themselves, discussing ways to combine data-driven embeddings with rule-based reasoning. And from there we move to explanations that rely on arguments extracted from facts and rules, sometimes obtained through text processing. We examine formalisms that allow for probabilistic argumentation in the presence of negation and cyclic defeating paths, as those seem to offer the necessary foundation for such explanation settings. The connection with various logical formalisms can then be explored to produce results on complexity and expressivity. |
10:00 | PRESENTER: Till Hofmann ABSTRACT. A robot's actions are inherently stochastic, as its sensors are noisy and its actions do not always have the intended effects. For this reason, the agent language Golog has been extended to models with degrees of belief and stochastic actions. While this allows more precise robot models, the resulting programs are much harder to comprehend, because they need to deal with the noise, e.g., by looping until some desired state has been reached with certainty, and because the resulting action traces consist of a large number of actions cluttered with sensor noise. To alleviate these issues, we propose to use abstraction. We define a high-level and nonstochastic model of the robot and then map the high-level model into the lower-level stochastic model. The resulting programs are much easier to understand, often do not require belief operators or loops, and produce much shorter action traces. |
5 talks, each roughly 10 minutes plus 5 minutes for discussion and questions.
09:15 | First-Order Logic Workbook PRESENTER: Ján Kľuka ABSTRACT. We briefly describe the Logic Workbook project, the culmination of our five-year effort to develop a suite of tools aimed at fostering independent active learning in our undergraduate course on first-order logic. It supports students in solving sets of problems of multiple kinds (modelling, proving, model building) by integrating our previously stand-alone client-side web apps. |
09:30 | PRESENTER: Maurice Herwig ABSTRACT. In this talk we will present recent efforts on extending the DiMo specification language and the tool's capabilities for providing problem-specific (graphical) output. The tool is already present in FOMEO's list of presented tools. |
09:45 | Iltis: Teaching Logic in the Web ABSTRACT. The Iltis project provides an interactive, web-based system for teaching the foundations of formal methods. It is designed to allow modular addition of educational tasks as well as to provide immediate and comprehensive feedback. Currently, exercises for various aspects of typical automated reasoning workflows for propositional logic, modal logic, and first-order logic are covered. Recently, Iltis has reached a level of maturity where large parts of introductory logic courses can be supplemented with interactive exercises. We are now working on integrating Iltis with course management platforms to enable teachers as well as students to monitor their progress. Sample interactive course material has been designed and used in such logic courses with more than 200 students. We invite all readers to give it a try! This is joint work with Gaetano Geck, Marko Schmellenkamp, Jonas Schmidt and Thomas Zeume. |
10:00 | Experiences on Online Teaching of Formal Methods to Systems Engineering Students ABSTRACT. Clear, precise and focused communication is one of the key components of education. Formal methods provide a way to achieve all of these goals. Hence, we introduced formal methods in the teaching of the course "Systems Engineering 1" as part of the International Master's program in Distributed Systems Engineering (DSE) at TU Dresden in Winter Semester 2019/20. In the same semester, we initiated the concept of "research-based teaching" to integrate our research into the teaching by adding exercises on the attestation process in the emerging field of confidential computing. Also in that semester, teaching shifted to an online format due to COVID-19. Altogether, across the various courses of DSE, there are currently 12 complete exercises (each of 90 minutes) on remote attestation and ProVerif based on the research carried out at the chair. Considerable effort was required to design these exercises, as DSE students are typically not familiar with formal methods. To the best of our knowledge (and confirmed by the developers of ProVerif), this is the first initiative to introduce ProVerif to such students. |
10:15 | DOMtutor -- Competitive Programming as Teaching Tool ABSTRACT. DOMtutor is a set of scripts and problems built for DOMjudge, an evaluation tool used for competitive programming. |
Keynote Talk 1
Theories of Separation
10:30 | Two Results on Separation Logic With Theory Reasoning PRESENTER: Nicolas Peltier ABSTRACT. Two results are presented concerning the entailment problem in Separation Logic with inductively defined predicate symbols and theory reasoning. First, we show that the entailment problem is undecidable for rules satisfying the progress, connectivity and establishment conditions, if theory reasoning is considered. The result holds for a wide class of theories, even with a very low expressive power. For instance it applies to the natural numbers with the successor function, or with the usual order. Second, we show that every entailment problem can be reduced to an entailment problem containing no equality (neither in the formulas nor in the recursive rules defining the semantics of the predicate symbols). |
11:00 | Labelled Tableaux for Linear Time Bunched Implication Logic PRESENTER: Daniel Mery ABSTRACT. In this paper, we define a temporal extension of Bunched Implication Logic (BI), called LTBI, that combines the LTL temporal connectives with BI connectives. After studying its semantics and illustrating its expressiveness, we design a labelled tableaux calculus for this new logic and then prove its soundness w.r.t. the Kripke-style semantics of LTBI. We finally discuss completeness issues of the calculus. |
11:30 | PRESENTER: Radu Iosif ABSTRACT. We compare the model-theoretic expressiveness of the symbolic heap fragment of Separation Logic (SL) with inductive definitions with (Monadic) Second Order Logic ((M)SO). While SL and MSO are incomparable on structures of unbounded treewidth, it turns out that SL can be embedded in SO and that MSO becomes a strict subset of SL, when the treewidth of the models is bounded by a constant and the relation symbols have arity one or two. We leave open the problems of equivalence between (M)SO and SL on bounded treewidth models over signatures containing relation symbols of arity three or more. |
Extensions of ASP
10:45 | Constrained Default Logic Programming PRESENTER: Shutao Zhang ABSTRACT. This paper develops a new formalism CDLP by combining ASP and constrained default logic to facilitate modeling questions with incomplete information, such that both Reiter's defaults and constraint defaults can be represented directly. After that, a method to split the CDLP programs is proposed by extending the notion of splitting sets to accommodate its property of constraint nonmonotonicity. Finally, we design a primary algorithm for solving the CDLP programs. |
11:05 | On the generalization of learnt constraints in ASP solving for temporal domains PRESENTER: Klaus Strauch ABSTRACT. The representation of a dynamic problem in ASP usually boils down to using copies of variables and constraints, one for each time stamp, no matter whether it is directly encoded or via an action or temporal language. The multiplication of variables and constraints is commonly done during grounding and the solver is completely ignorant about the temporal relationship among the different instances. On the other hand, a key factor in the performance of today's ASP solvers is conflict-driven constraint learning. Our question is now whether a constraint learned for particular time steps can be generalized and reused at other time stamps, and ultimately whether this enhances the overall solver performance on dynamic problems. Knowing full well the domain of time, we study conditions under which learned dynamic constraints can be generalized and propose a simple translation of the original logic program such that, for the translated programs, all learned constraints can be generalized to other time points. Last but not least, we empirically evaluate the impact of adding the generalized constraints to the ASP solver. |
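The kind of generalization studied above can be illustrated on a toy representation of a learned constraint as a set of time-stamped atoms that is shifted to other time points; the representation and function below are purely illustrative assumptions, not clingo's interface or the paper's translation.

```python
# Toy illustration: a learned constraint (nogood) over time-stamped atoms,
# represented as a set of (atom, time) pairs, generalized by shifting its
# time stamps. Invented representation, not clingo's API.

def shift_nogood(nogood, offset, horizon):
    """Shift every time stamp by `offset`; None if it leaves the horizon."""
    shifted = {(atom, t + offset) for atom, t in nogood}
    if all(0 <= t <= horizon for _, t in shifted):
        return shifted
    return None

learned = {("occurs_move", 3), ("holds_loaded", 4)}
print(shift_nogood(learned, 2, horizon=10))
# e.g. {('occurs_move', 5), ('holds_loaded', 6)}
```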
11:25 | PRESENTER: Konstantin Herud ABSTRACT. Product configuration is one of the most important industrial applications of artificial intelligence. To enable customers to individualize complex products, logical configuration models are usually necessary. One challenge that exists here is the communication of product knowledge to the customer. This work deals with the situation of invalid customer requirements under given logical constraints. The goal is to explain why configurations are invalid and how they can be minimally changed to become valid again. This involves computing all minimal subsets of both constraints and requirements that make the configuration unsatisfiable. For this, we present a novel approach based on the declarative paradigm of Answer Set Programming. We empirically demonstrate its suitability for real-time configurators with thousands of features and constraints. The approach thus fills the gap for a conflict resolution component that is both easy to implement and high-performing. |
11:45 | Hard Variants of Stable Marriage Problems: An Empirical Comparison of ASP with Other Computing Paradigms |
Tutorial on DatalogMTL (https://datalogmtl.github.io/), part 2/2: reasoning systems (MeTeoR and Ontop), demos, and conclusions.
11:00 | DatalogMTL ABSTRACT. The second part of the DatalogMTL tutorial. |
5 talks, each roughly 10 minutes plus 5 minutes for discussion and questions.
At the end, we have an additional 15 minutes for discussion, for talks running a bit long, or for getting to the buffet early.
We kindly ask presenters from the first two sessions to be in the Gather space during the final 15 minutes so that remote participants can ask questions and discuss with them.
12:00 | Tableaunoir: an online blackboard for teaching formal methods and making slides ABSTRACT. Tableaunoir was already presented at FOMEO 21. Tableaunoir is an online collaborative blackboard tool with fridge magnets, available in many languages. The main raison d'être of Tableaunoir is the use of fridge magnets to make interactive animations. The tool provides magnets for teaching algorithms, automata theory and games. The new feature that will be presented at FOMEO 22 is making slides (presentations). |
11:15 | Teaching Concurrency with pseuCo Book ABSTRACT. This presentation showcases pseuCo Book, an interactive textbook for concurrent programming, and its use in the "Concurrent Programming" lecture at Saarland University. In contrast to many other tools that have a narrow scope and lack context, or only work as add-ons to textbooks and other course materials, pseuCo Book provides a truly interactive textbook experience where interactive demonstrations and exercises are interwoven with more classical textual elements. PseuCo Book is a work-in-progress and currently has two chapters, one on notions of equality like trace equivalence, bisimilarity, and observation congruence, and one focussing on the fundamentals of practical concurrent programming with message-passing or shared-memory concurrency. |
11:30 | Teaching Formal Languages with Iltis - A Preview ABSTRACT. The Iltis project provides an interactive, web-based system for teaching the foundations of formal methods. This abstract gives a preview on Iltis' support of exercises on formal languages. |
11:45 | Automata Tutor ABSTRACT. Name: Automata Tutor. Link: automatatutor.com. Area: formal languages. Content: exercises on DFA, NFA, RE, PDA and CFG (understanding, construction, conversions and further algorithms). Usage: register online, become an instructor, and create your own exercises for your course. |
12:00 | Visualization of structural operational semantics over programs of a simple imperative language PRESENTER: Ján Perháč ABSTRACT. Our research team “Theory of programming” at the Faculty of Electrical Engineering and Informatics, Technical University of Košice, Slovakia, is interested in theoretical informatics. Our research and teaching are dedicated to formal methods and systems. Currently, we teach the following subjects: Formal languages, Logic for informaticians, Type theory, and Semantics of programming languages. We traditionally teach with mathematical methods on blackboards in class. The COVID-19 pandemic complicated our teaching, and we had to adapt and improvise: we replaced the classroom blackboard with a graphics tablet. But student-teacher interaction was still missing, for example when a student came after class for a consultation to check whether they had understood a topic correctly, whether their calculations were correct, and so on. Therefore, we have decided to develop web tools that simulate calculations as they would be done on a blackboard. The first implemented method is structural operational semantics (SOS) over a simple imperative language with the following syntax:
• e ::= n | x | e + e | e - e | e * e
• b ::= true | false | e = e | e ≤ e | ¬ b | b ∧ b
• S ::= x := e | skip | S; S | if b then S else S | while b do S
Our tool is developed as a web application and is therefore quickly accessible, with a simple UI. A user types in a program in our language and the tool provides a calculation with the following options:
• The whole sequence: displays the whole solution in one step.
• Step by step: displays the calculation configuration by configuration along the program’s SOS; here it is also possible to interactively guess the next configuration.
• Single instruction: provides a simple interface for sliding between configurations.
• There are also options for uploading the program from a file, exporting results as a PNG file, showing “help” with the user manual, setting other parameters of the UI and input editor (such as colour, text size, etc.), and many others. |
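To make the SOS rules above concrete, here is a minimal small-step interpreter for exactly this syntax. This is an illustrative Python sketch only; the abstract does not describe the web tool's implementation, so the term representation and all names below are assumptions.

```python
# Small-step (SOS) interpreter for the grammar above. Statements are tuples:
# ("assign", x, e), ("skip",), ("seq", s1, s2), ("if", b, s1, s2),
# ("while", b, s). Expressions are nested tuples as well.

def aval(e, s):                       # arithmetic expressions
    op = e[0]
    if op == "num": return e[1]
    if op == "var": return s[e[1]]
    l, r = aval(e[1], s), aval(e[2], s)
    return {"+": l + r, "-": l - r, "*": l * r}[op]

def bval(b, s):                       # boolean expressions
    op = b[0]
    if op == "true":  return True
    if op == "false": return False
    if op == "not":   return not bval(b[1], s)
    if op == "and":   return bval(b[1], s) and bval(b[2], s)
    l, r = aval(b[1], s), aval(b[2], s)
    return l == r if op == "=" else l <= r

def step(stm, s):                     # one SOS transition <S, s> -> <S', s'>
    kind = stm[0]
    if kind == "assign":
        s2 = dict(s); s2[stm[1]] = aval(stm[2], s); return ("skip",), s2
    if kind == "seq":
        if stm[1] == ("skip",): return stm[2], s
        s1p, s2 = step(stm[1], s); return ("seq", s1p, stm[2]), s2
    if kind == "if":
        return (stm[2] if bval(stm[1], s) else stm[3]), s
    if kind == "while":
        return ("if", stm[1], ("seq", stm[2], stm), ("skip",)), s
    raise ValueError("skip has no transition")

# Run <y := y * x; x := x - 1, {x: 3, y: 1}> configuration by configuration:
prog = ("seq", ("assign", "y", ("*", ("var", "y"), ("var", "x"))),
               ("assign", "x", ("-", ("var", "x"), ("num", 1))))
state = {"x": 3, "y": 1}
while prog != ("skip",):
    prog, state = step(prog, state)
print(state)  # {'x': 2, 'y': 3}
```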
FoMLAS Session 2
11:00 | PRESENTER: Jianglin Lan ABSTRACT. We present a novel semidefinite programming (SDP) relaxation that enables tight and efficient verification of neural networks. The tightness is achieved by combining SDP relaxations with valid linear cuts, constructed by using the reformulation-linearisation technique (RLT). The computational efficiency results from a layerwise SDP formulation and an iterative algorithm for incrementally adding RLT-generated linear cuts to the verification formulation. The layerwise RLT-SDP relaxation presented here is shown to produce the tightest SDP relaxation for ReLU neural networks available in the literature. We report experimental results based on MNIST neural networks showing that the method outperforms the state-of-the-art methods while maintaining acceptable computational overheads. For networks of approximately 10k nodes (1k, respectively), the proposed method achieved an improvement in the ratio of certified robustness cases from 0% to 82% (from 35% to 70%, respectively). |
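For background, the standard linear "triangle" relaxation of a single ReLU activation is the textbook bound that tighter relaxations such as the layerwise RLT-SDP above improve upon (assuming pre-activation bounds l < 0 < u); this is general context, not the paper's formulation.

```latex
% Triangle (LP) relaxation of y = ReLU(x) = max(0, x),
% valid for pre-activation bounds l < 0 < u:
\begin{aligned}
  y \ge 0, \qquad y \ge x, \qquad
  y \le \frac{u\,(x - l)}{u - l}, \qquad l \le x \le u.
\end{aligned}
```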
11:30 | PRESENTER: Ravi Mangal ABSTRACT. Neural networks are known to be susceptible to adversarial examples. Different techniques have been proposed in the literature to address the problem, ranging from adversarial training with robustness guarantees to post-training and run-time certification of local robustness using either inexpensive but incomplete verification or sound, complete, but expensive constraint solving. We advocate for the use of a run-time cascade of over-approximate, under-approximate, and exact local robustness checkers. The exact check in the cascade ensures that no unnecessary alarms are raised, an important requirement for autonomous systems where resorting to fail-safe mechanisms is highly undesirable. Though exact checks are expensive, via two case studies, we demonstrate that the exact check in a cascade is rarely invoked in practice. |
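The advocated cascade can be sketched generically as follows; the three checker functions are hypothetical parameters, not an API from the paper.

```python
# Generic sketch of the run-time cascade. `prove_robust` is sound but
# incomplete (e.g. over-approximate bound propagation), `find_attack` only
# succeeds with a real counterexample (an under-approximate check), and
# `exact_check` is complete but expensive.

def robustness_cascade(x, prove_robust, find_attack, exact_check):
    """Return True iff the network is locally robust around input x."""
    if prove_robust(x):        # cheap proof attempt; no false positives
        return True
    if find_attack(x):         # cheap falsification; no false alarms either
        return False
    return exact_check(x)      # rarely reached, per the paper's case studies
```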
12:00 | PRESENTER: Natan Levy ABSTRACT. Neural network models have become the leading solution for a large variety of tasks, such as classification, language processing, and others. However, their reliability is heavily plagued by adversarial inputs: inputs generated by adding tiny perturbations to correctly-classified inputs, and for which the neural network produces erroneous results. In this paper, we present a new method called Robustness Measurement and Assessment (RoMA), which measures the robustness of a neural network model against such adversarial inputs. Specifically, RoMA determines the probability that a random input perturbation might cause misclassification. The method allows us to provide formal guarantees regarding the expected frequency of errors that a trained model will encounter after deployment. The type of robustness assessment afforded by RoMA is inspired by state-of-the-art certification practices, and could constitute an important step towards integrating neural networks in safety-critical systems. |
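The probability RoMA assesses can be approximated naively by Monte Carlo sampling, as in this sketch; `model` (a callable returning a label) and `epsilon` are assumptions, and RoMA's statistical machinery beyond plain sampling is not reproduced here.

```python
# Monte Carlo sketch of the quantity RoMA measures: the probability that a
# random perturbation of a correctly-classified input flips the predicted
# label. Illustrative only; not the RoMA tool itself.
import random

def misclassification_probability(model, x, epsilon, n_samples=10_000):
    base_label = model(x)
    flips = 0
    for _ in range(n_samples):
        perturbed = [xi + random.uniform(-epsilon, epsilon) for xi in x]
        if model(perturbed) != base_label:
            flips += 1
    return flips / n_samples   # estimate of the misclassification probability
```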
11:00 | Quantitative Weak Linearisation ABSTRACT. Weak linearisation was defined years ago through a static characterisation of the intuitive notion of virtual redex, based on (legal) paths computed from the (syntactical) term tree. Weak-linear terms impose a linearity condition only on functions that are applied (consumed by reduction), while functions that are not applied (and therefore persist in the term along any reduction) can be non-linear. This class of terms was shown to be strongly normalising, with typability decidable in polynomial time. We revisit this notion through non-idempotent intersection types (also called quantitative types). By using an effective characterisation of minimal typings, based on the notion of tightness, we are able to distinguish between “consumed” and “persistent” term constructors, which allows us to define an expansion relation between general lambda-terms and weak-linear lambda-terms, whilst preserving normal forms by reduction. |
12:00 | The International School on Rewriting PRESENTER: Johannes Waldmann ABSTRACT. This slot contains an advertisement for ISR in Tbilisi, a proposal and vote for ISR 2024, and perhaps a general discussion on the future of ISR. |
11:00 | Position Paper: Mathematical Logic through Python PRESENTER: Noam Nisan ABSTRACT. We briefly motivate and describe our new book, "Mathematical Logic through Python", which is forthcoming (2022) from Cambridge University Press. |
12:00 | Whom to teach logic? ABSTRACT. The purpose of the workshop is to discuss whether and how we should teach logic to computer science students. However, the prospects of teaching logic to "pure math" students are also not brilliant. In both cases mandatory courses vanish from the curriculum, and non-mandatory courses attract not so many students. So I shift the focus a little bit: to what audience should we teach logic? My answer is that we should mix interested math and CS students, and teach those who need logic together. Another question is when to teach logic. I will argue that different parts of logic must be taught in different parts of the curriculum. |
11:00 | ABSTRACT. Competitions such as the MiniZinc Challenges or the SAT competitions have been very useful sources for comparing the performance of different solving approaches and for advancing the state of the art in these fields. The traditional competition setting often focuses on producing a ranking between solvers based on their average performance across a wide range of benchmark problems and instances. While this is a sensible way to assess the relative performance of solvers, such a ranking does not necessarily reflect the full potential of a solver, especially when we want to utilise a portfolio of solvers instead of a single one for solving a new problem. In this paper, I will describe a portfolio-based analysis method which can give complementary insights into the performance of participating solvers in a competition. The method is demonstrated on the results of the MiniZinc Challenges, and new insights gained from the portfolio viewpoint are presented. |
11:25 | Efficiently Explaining CSPs with Unsatisfiable Subset Optimization (extended abstract) PRESENTER: Emilio Gamba ABSTRACT. We build on a recently proposed method for step-wise explanation of solutions of constraint satisfaction problems. An explanation here is a sequence of simple inference steps, where the simplicity of an inference step is measured by the number and types of constraints and facts used, and where the sequence explains all logical consequences of the problem. The explanation-generation algorithms presented in that work rely heavily on calls for Minimal Unsatisfiable Subsets (MUSs) of a derived unsatisfiable formula, exploiting a one-to-one correspondence between so-called non-redundant explanations and MUSs. The generation of an explanation sequence for a logic grid puzzle ranges from a few minutes to a few hours, which is more time than a human would take. Therefore, we build on these formal foundations and tackle the main points of improvement, namely how to efficiently generate explanations that are provably optimal (with respect to the given cost metric). To address this issue, we develop (1) an algorithm based on the hitting set duality for finding optimal constrained unsatisfiable subsets, where 'constrainedness' reduces the multiple calls for (optimal) unsatisfiable subsets to a single call; (2) a method for re-using relevant information over multiple calls to these algorithms; and (3) methods exploiting domain-specific information to speed up explanation sequence generation. We experimentally validate our algorithms on a large number of CSP instances, and these experiments show that our algorithms outperform the MUS approach both in terms of quality of the explanation and in time to generate it. We show that this approach can be used to effectively find sequences of optimal explanation steps for a large set of constraint satisfaction problems. |
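As background for the MUS-based foundations mentioned above, here is the classic deletion-based MUS extraction routine that the paper's optimal-subset algorithms improve upon; `is_sat` is a hypothetical solver oracle.

```python
# Classic deletion-based MUS extraction. `is_sat` is an assumed oracle that
# decides satisfiability of a list of constraints.

def shrink_to_mus(constraints, is_sat):
    """Shrink an unsatisfiable constraint list to a minimal unsat subset."""
    assert not is_sat(constraints), "input must be unsatisfiable"
    mus = list(constraints)
    i = 0
    while i < len(mus):
        candidate = mus[:i] + mus[i + 1:]
        if is_sat(candidate):
            i += 1            # dropping constraint i restores satisfiability,
                              # so it belongs to the MUS: keep it
        else:
            mus = candidate   # still unsatisfiable without it: drop it
    return mus
```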
11:50 | Automatic Generation of Dominance Breaking Nogoods for Constraint Optimization PRESENTER: Allen Z. Zhong ABSTRACT. Dominance relations describe relations among solutions of Constraint Optimization Problems (COPs), where some solutions are known to be subordinate to others with respect to satisfiability of the constraints and/or optimality of the objective. Dominance breaking is an effective technique to speed up Branch-and-Bound search for optimal solutions by exploiting dominance relations to prune away search space containing only dominated solutions. In this paper, we review recent progress on dominance breaking and introduce a new line of work on automatic dominance breaking, a framework to automate the identification and exploitation of dominance relations in COPs, thereby generating effective dominance breaking nogoods. The major idea is to generate (a) only nogoods, as the building blocks to model arbitrary dominance breaking constraints, and (b) only the nogoods that are efficiently and conveniently identifiable by constraint satisfaction. We also discuss possible directions of future work. |
11:00 | ABSTRACT. In formal system design, model checking is a well-established method for automatically checking the global correctness of systems. In such a framework, in order to verify whether a system is correct with respect to a desired property, we describe its structure with a mathematical model, specify the property with a temporal logic formula, and check formally that the model satisfies the specification. This method was first conceived for closed systems, whose behavior is completely determined by their internal states and transitions. In the late 90s, interest arose in analyzing the behavior of individual components (or sets of components) in systems with multiple entities. The interest began in the field of reactive systems, which are systems that interact continually with their environments and whose behaviour depends on this interaction. Subsequently, researchers have looked for logics to reason about, and verify, the strategic behavior of agents in multi-agent systems. The aim of this contribution is to report on the main developments made in the last 20 years in the formal verification of reactive and multi-agent systems. In particular, we will report on CTL* module checking, ATL* model checking, and Strategy Logic. |
11:10 | PRESENTER: Stefan Leue ABSTRACT. The codification of legal artefacts in formal logic has a long tradition, dating back, among others, to seminal works by G. W. Leibniz. More recent works include the use of epistemic and deontic logics in the formalization of laws and contracts. While many of the existing approaches focus on issues of expressiveness, adequateness and non-automated reasoning with respect to the considered logics, our interest lies in a Computer Science and System Analysis perspective. In particular, we aim at developing tools that build on mechanized logics and perform automated consistency analysis for legal contract documents. Inconsistencies in legal contracts can be static, which means that they include contradictory textual information, or they can be dynamic, leading to (potential) non-executability of the contract during its dynamic execution. We will present a method and tool called ContractCheck [1], which performs static and dynamic logical consistency analyses of Share Purchase Agreements (SPAs) based on the SMT solver Z3. We focus on its application to SPAs in the acquisition of corporations, since these are typically shielded from many of the implicit legal background facts implied by legal dogmatics. Starting from the definition of an ontology for SPAs, we provide structured text blocks in ContractCheck from which the contract can be composed. ContractCheck encodes the relevant legal facts from each of the blocks of an SPA in a logic encompassing decidable fragments of first-order logic (linear real arithmetic, uninterpreted functions). The logical model, together with the applicable logical consistency constraints, is handed over to the SMT solver, and the analysis results are then depicted by ContractCheck. Using an illustrative example, we will exemplify the capabilities of ContractCheck. In addition to the computation of inconsistencies, the model construction capabilities of the SMT solver synthesize dynamic contract executions consistent with the contractual constraints. The MaxSMT capabilities even permit computing preferable orders of claim executions. We view our work as an illustration of how far mechanized logic can take us in understanding and improving even the non-technical facets of the world that we live in. [1] Alan Khoja, Martin Kölbl, Stefan Leue, and Rüdiger Wilhelmi. Automated consistency analysis for legal contracts. In Proc. of the SPIN Symposium on Model Checking Software, SPIN 2022, LNCS. Springer Verlag, 2022. To appear. |
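A toy illustration of the kind of static consistency check described above, using the Z3 SMT solver's Python API; the clauses are invented and this is not ContractCheck's encoding or ontology.

```python
# Toy consistency check of invented contract clauses with Z3's Python API
# (pip install z3-solver). Not ContractCheck's actual encoding.
from z3 import Solver, Real, unsat

price, inst1, inst2 = Real("price"), Real("inst1"), Real("inst2")
s = Solver()
s.add(price == 100)               # clause: the purchase price is 100
s.add(inst1 + inst2 == price)     # clause: price paid in two installments
s.add(inst1 <= 40, inst2 <= 40)   # clause: each installment capped at 40
# The three clauses cannot hold together: 40 + 40 < 100.
print("inconsistent" if s.check() == unsat else "consistent")  # inconsistent
```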
11:20 | ABSTRACT. We survey our results on a formalism to model and reason about systems with reconfigurable communication interfaces. Processes in our formalism are independent state machines communicating by exchanging messages on broadcast and multicast channels. Messages can include additional data values, allowing for further design flexibility. Broadcasts can be customized to target only recipients satisfying certain conditions on their states. Processes can connect to and disconnect from multicast channels, and messages can go through only once a condition on the state of all recipients (those listening to the channel) is satisfied. Dependence on the local state of senders and receivers implies that interaction interfaces are reconfigured through execution. Reconfiguration occurs both at message level and channel level. We show that such systems can be represented and reasoned about symbolically by including existential quantification in the definition of the transition relation. The quantification allows applying conditions imposed by senders in a way that depends on the local state of recipients. While this symbolic representation supports a global semantics and efficient analysis, it is not compositional. We match the symbolic representation with a compositional semantics through Channeled Transition Systems. When considering the partial order semantics of such reconfigurable systems, we identify that additional information about interleaving needs to be included in the partial order. On the reasoning side, we consider an extension of LTL that allows reasoning about the contents of messages and the intentions of senders when setting conditions on receivers. We support modelling and analysis through model checking of a simple programming language tailored for the production of such processes. |
11:00 | PRESENTER: David Sabel ABSTRACT. We introduce a call-by-need variant of PCF with a binary probabilistic fair choice operator. We define its operational semantics and contextual equivalence as program equivalence, where the expected convergence is only observed on numbers. We investigate another program equivalence that views two closed expressions as distribution-equivalent if evaluation generates the same distribution on the natural numbers. We show that contextual equivalence implies distribution-equivalence. Future work is to provide a full proof of the opposite direction. |
11:30 | A Machine-Checked Formalisation of Erlang Modules PRESENTER: Péter Bereczky ABSTRACT. This paper contributes to our long-term goal of defining the Erlang programming language in the Coq proof assistant, ultimately allowing us to formally reason about Erlang programs and their transformations. In previous work, we have described a sequential subset of the core language of Erlang in different semantics definition styles, which is extended with the module system in the present formalization. In particular, we define modules and inter-module calls in natural and functional big-step semantics, and we prove their equivalence. Last but not least, we validate the formal definition with respect to the reference implementation. The formal theories are machine-checked in Coq. |
12:00 | Extending a Lemma Generation Approach for Rewriting Induction on Logically Constrained Term Rewriting Systems PRESENTER: Kasper Hagens ABSTRACT. When transforming a program into a more efficient version (for example, translating a recursive program into an imperative program), it is important to guarantee that the input-output behavior remains the same. One approach to ensuring this uses so-called Logically Constrained Term Rewriting Systems (LCTRSs). Two versions of a program are translated into LCTRSs and compared in a proof system (rewriting induction). Proving their equivalence in this system often requires the introduction of a lemma. In this paper we review a lemma generation approach by Fuhs, Kop and Nishida and propose two possible extensions. |
11:00 | PRESENTER: Elena Di Lavore ABSTRACT. We introduce monoidal width as a measure of the difficulty of decomposing morphisms in monoidal categories. By instantiating monoidal width and two variations in a suitable category of cospans of graphs, we capture existing notions, namely branch width, tree width and path width. By changing the category of graphs, we are also able to capture rank width. Through these examples, we propose that monoidal width: (i) is a promising concept that, while capturing known measures, can similarly be instantiated in other settings, avoiding the need for ad-hoc domain-specific definitions and (ii) comes with a general, formal algebraic notion of decomposition using the language of monoidal categories. |
11:20 | Hypertrace Logic ABSTRACT. Security policies are often specified as hyperproperties. The logical specification of hyperproperties requires quantification over both time and traces. The interaction between trace and time quantifiers makes it challenging to understand what is being specified by a hyperlogic formula. For example, starting from the high-level two-state requirement that information from x should not flow to z until information in y does not flow to z, eight different specification variants were identified in a recent work. A two-sorted first-order logic, called Hypertrace logic, was introduced to provide mathematical definitions for the variants identified. In this talk, I introduce Hypertrace logic with some of its fragments. My goal is to present hyperproperties through the lenses of a first-order formalism and discuss possible new research directions. For example, how to use techniques from modal logic to identify decidable fragments in Hypertrace logic. |
11:40 | A goal-oriented proof system for epistemic modal logic ABSTRACT. Epistemic logic enables us to take a logical approach to information, particularly the type of information that is meaningful for an agent as a whole: their knowledge. It encompasses a family of well-developed and well-understood systems. However, its proof-theoretic presentation is usually axiomatic. This approach creates obstacles when we try to integrate it into rich theories of human reasoning and automated reasoning systems. This talk develops new proof systems for Epistemic Logic based on the goal-oriented systems of Gabbay & Olivetti (2000). In a goal-oriented proof system with an epistemic modality, a deductive process begins by asking whether an agent knows a particular objective in a particular state, depending on a particular database and the states accessible to that agent. The general approach of this system is based on databases and objectives. By integrating Epistemic Logic with goal-oriented proofs, we can allow for new human and automated reasoning applications. |
12:00 | A Mixed Linear and Graded Logic (Extended Abstract) PRESENTER: Victoria Vollmer ABSTRACT. Benton showed in his seminal paper on Linear/Non-Linear logic and models that the of-course modality of linear logic can be split into two modalities connecting intuitionistic logic with linear logic, forming a symmetric monoidal adjunction. In this paper, we give a similar result showing that graded modalities can be split into two modalities connecting graded logic with linear logic. We propose a sequent calculus, its proof theory and categorical model, and a natural deduction system in the form of a term assignment system which is shown to be isomorphic to the sequent calculus. |
XLoKR Session 2
11:00 | PRESENTER: Stefan Borgwardt ABSTRACT. When working with description logic ontologies, understanding entailments derived by a description logic reasoner is not always straightforward. So far, the standard ontology editor Protégé offers two services to help: (black-box) justifications for OWL 2 DL ontologies, and (glass-box) proofs for lightweight OWL EL ontologies, where the latter exploits the proof facilities of the reasoner Elk. Since justifications are often insufficient for explaining inferences, there is only little tool support for explaining inferences in more expressive DLs. To solve these issues, we present the two sisters Evee and Evonne, which provide proof-based explanations for more expressive DLs. Evee consists of a Java library for generating proofs (Evee-libs) and a plugin for Protégé to show these proofs (Evee-protege). Evonne is a more advanced proof visualisation tool that uses Evee-libs. |
11:20 | Finding Good Proofs for Answers to Conjunctive Queries Mediated by Lightweight Ontologies (Extended Abstract) PRESENTER: Stefan Borgwardt ABSTRACT. In ontology-mediated query answering, access to incomplete data sources is mediated by a conceptual layer constituted by an ontology. To correctly compute answers to queries, it is necessary to perform complex reasoning over the constraints expressed by the ontology. In the literature, there exists a multitude of techniques incorporating the ontological knowledge into queries. However, few of these approaches were designed for comprehensibility of the query answers. In this article, we try to bridge these two qualities by adapting a proof framework originally applied to axiom entailment for conjunctive query answering. We investigate the data and combined complexity of determining the existence of a proof below a given quality threshold, which can be measured in different ways. By distinguishing various parameters such as the shape of a query, we obtain an overview of the complexity of this problem for the lightweight ontology languages DL-Lite_R and EL, and also have a brief look at temporal query answering. This is an extended abstract of a paper accepted at the 35th International Workshop on Description Logics. |
11:40 | PRESENTER: Júlia Pukancová ABSTRACT. We propose a unified API for the integration of different DL abduction solvers into applications, similarly to what OWL API does for deductive OWL/DL reasoners. |
12:00 | Modular Provenance in Multi-Context Systems PRESENTER: Matthias Knorr ABSTRACT. A rapidly increasing amount of data, information and knowledge is becoming available on the Web, often written in different formats and languages, adhering to standardizations driven by the World Wide Web Consortium initiative. Taking advantage of all this heterogeneous knowledge requires its integration for more sophisticated reasoning services and applications. To fully leverage the potential of such systems, their inferences should be accompanied with justifications which allow a user to understand a proposed decision/recommendation, in particular for critical systems (healthcare, law, finances, etc.). However, determining such justifications has commonly only been considered for a single formalism, such as relational databases, description logic ontologies, or declarative rule languages. In this paper, we give an overview on the first approach for providing provenance for heterogeneous knowledge bases building on the general framework of multi-context systems, as an abstract, but very expressive formalism to represent knowledge bases written in different formalisms and the flow of information between them. |
11:35 | ABSTRACT. I will talk about Strategy Logic: a powerful language for strategic reasoning that Moshe Vardi introduced a decade ago and that quickly became widely applied in Formal Methods and AI. I will do so with a special focus on its expressiveness in Multi-Agent Systems and Game Theory, as well as for solving Rational Synthesis problems. |
11:45 | PRESENTER: Munyque Mittelmann ABSTRACT. Mechanism Design aims to design a game so that a desirable outcome is reached regardless of agents’ self-interests. In this paper, we show how this problem can be rephrased as a synthesis problem, where mechanisms are automatically synthesized from a partial or complete specification in a high-level logical language. We show that Quantitative Strategy Logic is a perfect candidate for specifying mechanisms, as it can express complex strategic and quantitative properties. We solve automated mechanism design in two cases: when the number of actions is bounded, and when agents play in turn. |
11:55 | ABSTRACT. Logical, or formal, specifications have proved extremely useful for automated synthesis of plans, strategies, and other objects such as norms of behaviour in software systems. In addition to purely software systems, synthesis of plans conforming to a logical specification has been shown to work for cyber-physical systems, in particular, for manufacturing systems. Among the many synthesis problems Moshe Vardi worked on is synthesis for manufacturing. The problem in manufacturing synthesis is as follows: given a production recipe, that is, a description of which operations need to be performed in order to manufacture different versions of a product, and a description of a manufacturing facility, synthesise a controller specific to this manufacturing facility, where each operation is assigned to a specific set of manufacturing robots [3]. In [2, 1], synthesis for manufacturing has been investigated where both the production recipes and manufacturing facilities are modelled as transducers (Mealy machines). It was shown that this version of the synthesis for manufacturing problem is decidable in 2EXPTime when the set of manufacturing resources is fixed, and is undecidable if the problem requires finding how many copies of each manufacturing resource are needed to manufacture the product. In this talk, I would like to outline possible extensions of the problem while keeping it decidable, for example using first-order specifications of the production recipes as in [4] and specifying additional constraints on how the production facilities should be used. |
12:10 | PRESENTER: Ana de Almeida Borges ABSTRACT. The Quantified Reflection Calculus with N modalities, or QRC_N, is a quantified and strictly positive modal logic inspired by the provability reading of the modal symbols. The non-modal language consists of a verum constant and relational symbols as atomic formulas, conjunctions, and universal quantifiers. The available modal connectives are labeled diamonds, one for each natural number n < N. QRC_N statements are assertions of the form A \vdash B, with A and B in this strictly positive language. We describe the axiomatic system for QRC_N and see that it is sound with respect to constant domain Kripke models. Then we focus on the QRC_1 fragment, which is complete with respect to finite and constant domain Kripke models, and thus decidable. The completeness and decidability of the general case with N modalities is still an open problem. (Joint work with Joost J. Joosten) |
Lunches will be held in Taub hall and in The Grand Water Research Institute.
12:30 | MCP: Capturing Big Data by Satisfiability PRESENTER: Miki Hermann ABSTRACT. Experimental data is often given as bit vectors, with vectors corresponding to observations and coordinates to attributes, with a bit being true if the corresponding attribute was observed. Observations are usually grouped, e.g. into positive and negative samples. Among the essential tasks on such data, we have compression, the construction of classifiers for assigning new data to groups, and information extraction. Our system, MCP, approaches these tasks by propositional logic. For each group of observations, MCP constructs a (usually small) conjunctive formula that is true for the observations of the group, and false for the others. Depending on the settings, the formula consists of Horn, dual-Horn, bijunctive or general clauses. To reduce its size, only relevant subsets of the attributes are considered. The formula is a (lossy) representation of the original data and generalizes the observations, as it is usually satisfied by more bit vectors than just the observations. It thus may serve as a classifier for new data. Moreover, (dual-)Horn clauses, when read as if-then rules, make dependencies between attributes explicit. They can be regarded as an explanation for classification decisions. |
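A much-simplified version of MCP's idea, restricted to a single conjunction of literals rather than MCP's Horn, dual-Horn or bijunctive formulas; the representation and function below are illustrative assumptions, not the MCP system.

```python
# Weakened version of MCP's task: from bit-vector observations, learn a
# conjunction of literals that accepts one group and rejects the other.
# (MCP itself builds Horn, dual-Horn, bijunctive or general CNF.)

def learn_term(positives, negatives):
    lits = []
    for i in range(len(positives[0])):
        values = {p[i] for p in positives}
        if len(values) == 1:            # attribute i is constant on the group
            lits.append((i, values.pop()))
    accepts = lambda x: all(x[i] == v for i, v in lits)
    assert not any(accepts(n) for n in negatives), "groups not separable"
    return lits                          # [(attribute index, required value)]

pos = [[1, 0, 1, 1], [1, 0, 0, 1]]
neg = [[0, 0, 1, 1], [1, 1, 0, 0]]
print(learn_term(pos, neg))              # [(0, 1), (1, 0), (3, 1)]
```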
Keynote Talk 2
Counting/Probabilistic ASP & Synthesis
14:00 | A Semantics For Probabilistic Answer Set Programs With Incomplete Stochastic Knowledge PRESENTER: David Tuckey ABSTRACT. Some probabilistic answer set program (PASP) semantics assign probabilities to sets of answer sets and implicitly assume these answer sets to be equiprobable. While this is a common choice in probability theory, it leads to unnatural behaviours with PASPs. We argue that the user should have a level of control over what assumption is used to obtain a probability distribution when the stochastic knowledge is incomplete. To this end, we introduce the Incomplete Knowledge Semantics (IKS) for probabilistic answer set programming. We take inspiration from the field of decision making under ignorance. Given a cost function, expressed as a user-defined ordering over answer sets through weak constraints, we use the notion of an Ordered Weighted Averaging (OWA) operator to distribute the probability over a set of answer sets according to the user’s level of optimism. The more optimistic (or pessimistic) a user is, the more (or less) probability is assigned to the more (or less) optimal answer sets. We present an implementation and showcase the behaviour of this semantics on simple examples. We also highlight the impact that different OWA operators have on weight learning, showing that the equiprobability assumption isn't always the best option. |
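A sketch of distributing probability over a user-ordered list of answer sets; the geometric weight family and the `optimism` parameter are illustrative assumptions, not the IKS definition of OWA weights.

```python
# Sketch of spreading probability mass over answer sets ordered from most to
# least preferred. Invented weight family; not the paper's OWA operators.

def owa_distribution(num_answer_sets, optimism):
    """optimism in (0, 1]: 1.0 gives equiprobability, smaller values
    concentrate probability on the best-ranked answer sets."""
    raw = [optimism ** rank for rank in range(num_answer_sets)]
    total = sum(raw)
    return [w / total for w in raw]

print(owa_distribution(3, 1.0))   # [0.333..., 0.333..., 0.333...]
print(owa_distribution(3, 0.5))   # [0.571..., 0.285..., 0.142...]
```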
14:20 | PRESENTER: Nicolas Ruehling ABSTRACT. We present plingo, an extension of the ASP system clingo with various probabilistic reasoning modes. Plingo is centered upon LPMLN, a probabilistic extension of ASP based upon a weight scheme from Markov Logic. This choice is motivated by the fact that the core probabilistic reasoning modes can be mapped onto optimization problems and that LPMLN may serve as a middle-ground formalism connecting to other probabilistic approaches. As a result, plingo offers three alternative front-ends, one each for LPMLN, PLOG, and PROBLOG. The corresponding input languages and reasoning modes are implemented by means of clingo's multi-shot and theory solving capabilities. Although plingo's core amounts to a re-implementation of LPMLN in terms of modern ASP technology, it integrates various probabilistic reasoning modes with the full modeling language and reasoning spectrum of clingo. |
14:40 | Automatic Synthesis of Boolean Networks from Biological Knowledge and Data PRESENTER: Athénaïs Vaginay ABSTRACT. Boolean Networks (BNs) are a simple formalism used to study complex biological systems when the prediction of exact reaction times is not of interest. They play a key role in understanding the dynamics of the studied systems and in predicting their disruption in the case of complex human diseases. BNs are generally built from experimental data and knowledge from the literature, either manually or with the aid of programs. The automatic synthesis of BNs is still a challenge for which several approaches have been proposed. In this paper, we propose ASKeD-BN, a new approach based on Answer Set Programming to synthesise BNs constrained in their structure and dynamics. By applying our method to several well-known biological systems, we provide empirical evidence that our approach can construct BNs in line with the provided constraints. We compare our approach with three existing methods (REVEAL, Best-Fit and caspo-TS) and show that our approach synthesises a small number of BNs which cover a good proportion of the dynamical constraints, and that the variance of this coverage is low. ------- This work was published at the conference OLA2021. https://www.springerprofessional.de/en/automatic-synthesis-of-boolean-networks-from-biological-knowledg/19574476 |
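To fix notation, here is a minimal synchronous Boolean Network in Python: one Boolean update function per variable, iterated on the whole state at once. ASKeD-BN synthesises such functions with ASP; this sketch, with an invented network, only shows the kind of object being synthesised, not the synthesis itself.

```python
# A minimal synchronous Boolean Network. The network itself is invented.

bn = {
    "a": lambda s: not s["b"],           # a is inhibited by b
    "b": lambda s: s["a"] and s["c"],    # b is activated by a together with c
    "c": lambda s: s["c"],               # c keeps its value
}

def sync_step(state):
    return {var: f(state) for var, f in bn.items()}

state = {"a": True, "b": False, "c": True}
for _ in range(3):
    state = sync_step(state)
    print(state)   # prints the next three states of the trajectory
```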
15:00 | ASP in Industry, here and there ABSTRACT. Answer Set Programming (ASP) has become a popular paradigm for declarative problem solving and is about to find its way into industry. This is due to its expressive yet easy knowledge representation language powered by highly performant (Boolean) solving technology. As with many other such paradigms before, the transition from academia to industry calls for more versatility. Hence, many real-world applications are not tackled by pure ASP but rather hybrid ASP. The corresponding ASP systems are usually augmented by foreign language constructs from which additional inferences can be drawn. Examples include linear equations or temporal formulas. For the design of "sound" systems, however, it is indispensable to provide semantic underpinnings right from the start. To this end, we will discuss the vital role of ASP's logical foundations, the logic of Here-and-There and its non-monotonic extension, Equilibrium Logic, in designing hybrid ASP systems and highlight some of the resulting industrial applications. |
This is a KR-2022 tutorial on Sunday, July 31st: two 1.5-hour sessions in the afternoon with a break, 14:00-15:30 and 16:00-17:30. See details at https://www.cs.ryerson.ca/~mes/HTSC-Tutorial-KR2022/
4 talks, each roughly 10 minutes plus 5 minutes for discussion and questions.
We kindly ask presenters of this session to be in the Gather space after 15:00 so that remote participants can ask questions and discuss with them.
14:00 | LogInd, a tool for supporting students in mastering structural induction PRESENTER: Josje Lodder ABSTRACT. We present LogInd, a tool for teaching structural induction. LogInd supports students in providing inductive proofs of properties of propositional formulae. Students can enter their proofs stepwise and receive feedback on each step. At any moment they can ask for a hint or next step. |
14:15 | Prooffold: a prototype for displaying structured proofs ABSTRACT. Prooffold is a tool for displaying structured proofs, in the same spirit as Leslie Lamport's proposal. Prooffold displays a proof in a top-down manner: from the general idea of the proof to the technical details. The graphical user interface offers panels that the user can open/close to see/hide more details. Proofs in Prooffold are written in a dialect of Markdown that offers structural constructions. |
14:30 | Formal Methods Online: Sequent Calculus Verifier (SeCaV) PRESENTER: Jørgen Villadsen ABSTRACT. We present the Sequent Calculus Verifier (SeCaV) and discuss its advantages and disadvantages as a tool for teaching logic, automated reasoning and formal methods. SeCaV is a formalization in the proof assistant Isabelle/HOL of classical first-order logic with constants and functions. The syntax, the semantics and the inductive definition of the sequent calculus proof system along with the soundness and completeness proofs are verified in Isabelle/HOL. SeCaV is accompanied by the SeCaV Unshortener, which is a browser application for developing sequent calculus proofs that are automatically translated into the corresponding Isabelle-embedded proofs. A compact format for the one-sided sequent calculus is used. The system provides feedback on proof rule applications. Online help and examples are available. Hundreds of computer science students have used SeCaV for course exercises and exams. Reference: https://secav.compute.dtu.dk/ |
14:45 | PRESENTER: Júlia Pukancová ABSTRACT. SIVA is a web application for simulating the tableau algorithm (TA) for the DL ALC. A user builds a completion tree (to prove consistency of the input) step by step, supervised by SIVA. The main goal of SIVA is to teach the DL tableau expansion rules and to visualize the completion tree. The visualization with the tool is simpler and more readable than on a blackboard. One of the main advantages is that, besides the completion tree, the tableau itself is also visualized. |
FoMLAS Session 3
14:00 | VNN-COMP 2022 PRESENTER: Mark Müller ABSTRACT. Over the last years, the formal verification of safety properties of neural networks has received ever-increasing attention from the research community leading to a vast range of applicable tools. To help practitioners decide which verifier to apply, we hosted the ‘Verification of Neural Networks Competition’ (VNN-COMP’22), facilitating the comparison of current state-of-the-art systems on a range of diverse benchmarks using (cost) equivalent hardware. In this talk, we will present an analysis of the results and have successful participants present the core ideas underlying their methods. Furthermore, we will gather feedback on this year's processes and discuss ideas for future iterations of the competition. This talk will be delivered jointly by Christopher Brix and Mark Müller. |
14:00 | Towards computing cohomology of dependent abelian groups |
14:30 | A Library of Monoidal Categories for Display and Univalence PRESENTER: Kobe Wullaert |
15:00 | The internal AB axioms |
14:00 | ABSTRACT. Logic lies at the heart of computing, but its presence is somewhat dimming. This is particularly striking at the undergraduate level, where far too little logic training is done. This paper examines this regrettable state of affairs, considering what is actually taught, what should be taught, and how logic should be taught to computing undergraduates. |
15:00 | Introducing Logic by Stealth to Computer Science Students PRESENTER: Liam O'Reilly ABSTRACT. Logic is an inevitable part of any undergraduate computer science degree programme. However, today's computer science student typically finds this to be at best a necessary evil with which they struggle to engage. Twenty years ago, we started to address this issue seriously in our university, and we have instituted a number of innovations throughout the years which have had a positive effect on engagement and, thus, attainment. |
14:00 | Towards higher order proof complexity PRESENTER: Anupam Das ABSTRACT. A nonuniform version of elementary computation is naturally given by 'higher-order' Boolean logic, an extension of propositional logic by abstraction and application operations at all finite types. The corresponding Frege-style proof system is nothing more than Church's simple type theory (STT), restricted to Boolean ground type. Our starting point is the observation that this system in fact bears correspondence, in the usual sense of proof complexity, with the arithmetic theory IDelta0 + exp, a well-known theory of elementary computation. In this work-in-progress, we explore the gap between traditional bounded arithmetic theories in proof complexity (typically below exponential-time) and weak theories of arithmetic in proof theory (typically above elementary computation). The main idea is to formulate higher-order versions of Buss' classical theories that climb up the elementary hierarchy at the same time as analogous restrictions on cuts in STT. The goal is to establish modular characterisations for stages of the elementary hierarchy, and also intermediate alternating-time hierarchies by way of (higher-type) quantifier complexity. |
14:30 | ABSTRACT. A CSP (constraint satisfaction problem) is the problem of deciding whether there exists a homomorphism from an instance relational structure to a target one. The CSP dichotomy is a profound result recently proved by Zhuk and Bulatov. It says that, for each fixed target structure, the CSP is either NP-complete or solvable in polynomial time. Zhuk's algorithm solves the CSP in polynomial time for constraint languages having a weak near-unanimity polymorphism. For negative instances of p-time CSPs it is reasonable to explore their proof complexity. We show that the soundness of Zhuk's algorithm can be proved in bounded arithmetic, namely in the theory V^1 augmented by three special universal-algebra axioms. |
15:00 | PRESENTER: Dmitry Itsykson ABSTRACT. A canonical communication problem $\mathrm{Search}_{\phi}$ is defined for every unsatisfiable CNF $\phi$: an assignment to the variables of $\phi$ is distributed among the communicating parties, who are to find a clause of $\phi$ falsified by this assignment. Lower bounds on the randomized $k$-party communication complexity of $\mathrm{Search}_{\phi}$ in the number-on-forehead (NOF) model imply tree-size lower bounds, rank lower bounds, and size-space tradeoffs for the formula $\phi$ in the semantic proof system $\mathrm{T}^{cc}(k,c)$ that operates with proof lines computable by a $k$-party randomized communication protocol using at most $c$ bits of communication [Göös, Pitassi, 2014]. All known lower bounds on $\mathrm{Search}_{\phi}$ (e.g. [Beame, Pitassi, Segerlind, 2007], [Göös, Pitassi, 2014], [Impagliazzo, Pitassi, Urquhart, 1994]) are realized on ad-hoc formulas $\phi$ (i.e. formulas introduced specifically for these lower bounds). We introduce a new communication complexity approach that allows establishing proof complexity lower bounds for natural formulas. First, we demonstrate our approach for two-party communication and apply it to the proof system $\mathrm{Res}(\oplus)$ that operates with disjunctions of linear equalities over $\mathbb{F}_2$ [Itsykson, Sokolov, 2014]. Let a formula $\mathrm{PM}_G$ encode that a graph $G$ has a perfect matching. If $G$ has an odd number of vertices, then $\mathrm{PM}_{G}$ has a tree-like $\mathrm{Res}(\oplus)$-refutation of polynomial size [Itsykson, Sokolov, 2014]. It was unknown whether this is the case for graphs with an even number of vertices. Using our approach we resolve this question and show a lower bound $2^{\Omega(n)}$ on the size of tree-like $\mathrm{Res}(\oplus)$-refutations of $\mathrm{PM}_{K_{n+2,n}}$. Then we apply our approach to $k$-party communication complexity in the NOF model and obtain an $\Omega\left(\frac{1}{k} 2^{n/2k - 3k/2}\right)$ lower bound on the randomized $k$-party communication complexity of $\mathrm{Search}_{\mathrm{BPHP}^{M}_{2^n}}$ w.r.t. some natural partition of the variables, where $\mathrm{BPHP}^{M}_{2^n}$ is the bit pigeonhole principle and $M=2^n+2^{n(1-1/k)}$. In particular, our result implies that the bit pigeonhole principle requires exponential-size tree-like $\mathrm{Th}(k)$ proofs, where $\mathrm{Th}(k)$ is the semantic proof system operating with polynomial inequalities of degree at most $k$ and $k = O(\log^{1-\epsilon} n)$ for some $\epsilon > 0$. We also show that $\mathrm{BPHP}^{2^n+1}_{2^n}$ superpolynomially separates tree-like $\mathrm{Th}(\log^{1-\epsilon} m)$ from tree-like $\mathrm{Th}(\log m)$, where $m$ is the number of variables in the refuted formula. The talk is based on the paper published in the Proceedings of CCC 2021. |
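For readers unfamiliar with the bit pigeonhole principle, one standard formulation (stated here from general background, not quoted from the paper) is the following unsatisfiable CNF family:

```latex
% Bit pigeonhole principle BPHP^M_{2^n}, one standard formulation.
% Variables: bits h(p)_1 ... h(p)_n, encoding an n-bit "hole" for each pigeon p in [M].
% The formula asserts that all M holes are pairwise distinct:
\[
  \mathrm{BPHP}^{M}_{2^n} \;=\; \bigwedge_{1 \le p < q \le M} \; \bigvee_{i=1}^{n} \big( h(p)_i \ne h(q)_i \big),
\]
% each disequality being expressed by clauses over the bits of h(p) and h(q).
```

Since only $2^n$ distinct $n$-bit values exist, the formula is unsatisfiable whenever $M > 2^n$; in the abstract above $M = 2^n + 2^{n(1-1/k)} > 2^n$, so $\mathrm{Search}_{\mathrm{BPHP}^{M}_{2^n}}$ is well defined.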
14:00 | On Reducing Non-Occurrence of Specified Runtime Errors to All-Path Reachability Problems of Constrained Rewriting PRESENTER: Naoki Nishida ABSTRACT. A concurrent program with semaphore-based exclusive control can be transformed into a logically constrained term rewrite system that is computationally equivalent to the program. In this paper, we first propose a framework to reduce the non-occurrence of a specified runtime error in the program to an all-path reachability problem of the transformed logically constrained term rewrite system. Here, an all-path reachability problem is a pair of state sets and is demonically valid if every finite execution path starting with a state in the first set and ending with a terminating state includes a state in the second set. Then, as a case study, we show how to apply the framework to the race-freeness of semaphore-based exclusive control in the program. Finally, we show a simplified variant of a proof system for all-path reachability problems. |
14:30 | PRESENTER: Shujun Zhang ABSTRACT. A GSC-terminating and orthogonal inductive definition set (IDS, for short) of first-order predicate logic can be transformed into a many-sorted term rewrite system (TRS, for short) such that a quantifier-free sequent is valid w.r.t. the IDS if and only if a term equation representing the sequent is an inductive theorem of the TRS. Under certain assumptions, it has been shown that a quantifier- and cut-free cyclic proof of a sequent can be transformed into a rewriting-induction (RI, for short) proof of the corresponding term equation. The RI proof can be transformed back into the cyclic proof, but it is not known how to transform arbitrary RI proofs into cyclic proofs. In this paper, we show that for a quantifier- and logical-connective-free sequent, if there exists an RI proof of the corresponding term equation, then there exists a cyclic proof of some sequent w.r.t. some IDS such that the cyclic proof ensures the validity of the initial sequent. To this end, we show a transformation of the RI proof into such a cyclic proof, a sequent, and an IDS. |
15:00 | On Transforming Imperative Programs into Logically Constrained Term Rewrite Systems via Injective Functions from Configurations to Terms PRESENTER: Naoki Nishida ABSTRACT. To transform an imperative program into a logically constrained term rewrite system (LCTRS, for short), previous work converts a statement list to rewrite rules in a stepwise manner, and proves correctness along such a conversion with respect to the big-step semantics of the program. On the other hand, the small-step semantics of a programming language comprises inference rules that define transition steps of configurations. Partial instances of such inference rules are almost the same as rewrite rules in the transformed LCTRS. In this paper, we aim at establishing a framework for plain definitions and correctness proofs of transformations from programs into LCTRSs. To this end, for the transformation in previous work, we show an injective function from configurations to terms, and reformulate the transformation by means of the injective function. The injective function maps a transition step to a reduction step, and results in a plain correctness proof. |
14:00 | A normalized edit distance on finite and infinite words ABSTRACT. Robustness in formal verification refers to the question: given a system S satisfying a specification Phi, how likely is it that S will still satisfy Phi, even in the presence of errors disrupting its normal behavior? The formal definition typically assumes some error model and some distance function measuring the distance between words (describing computations) and languages. To define robustness for reactive systems, which are modeled by infinite words, a notion of distance between infinite words is required. A common distance between finite words w1 and w2 is the edit (Levenshtein) distance: the minimal number of edit operations (substitute, delete or insert) required to transform w1 into w2. But how can it be extended to account for infinite words? Can we find a definition that is both intuitive and satisfies the requirements of a metric? Can it be computed, at least for ultimately periodic words? Efficiently? And what about computing the distance between languages of infinite words (say, represented by Büchi automata)? In this talk we will see that all the questions above can be answered positively. The distance function enabling this is termed ω-NED, a natural extension of NED, the normalized edit distance between finite words. In the quest to answer these questions we stumbled upon the fact that, while it was known that NED does not satisfy the triangle inequality for arbitrary weights, whether it satisfies the triangle inequality for uniform weights was open. We closed this gap by proving that it does, and we build on this to prove the above results. The talk is based on joint works with Joshua Grogin, Oded Margalit and Gera Weiss. |
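As a concrete companion to the abstract, here is a small Python sketch of the normalized edit distance on finite words with uniform weights: the minimum, over all complete edit paths, of the path weight divided by the path length. This is my own illustrative reconstruction of the finite-word notion; it does not implement the talk's ω-NED on infinite words.

```python
def ned(w1: str, w2: str) -> float:
    """Normalized edit distance with uniform weights: min over all complete
    edit paths of (total weight) / (path length). Matching steps count as
    zero-weight steps, so they lower the normalized value."""
    n, m = len(w1), len(w2)
    if n == 0 and m == 0:
        return 0.0
    INF = float("inf")
    K = n + m  # a complete edit path has between max(n, m) and n + m steps
    # dp[k][i][j]: min weight of a length-k edit path turning w1[:i] into w2[:j]
    dp = [[[INF] * (m + 1) for _ in range(n + 1)] for _ in range(K + 1)]
    dp[0][0][0] = 0.0
    for k in range(1, K + 1):
        for i in range(n + 1):
            for j in range(m + 1):
                best = INF
                if i > 0:                      # delete w1[i-1]
                    best = min(best, dp[k - 1][i - 1][j] + 1)
                if j > 0:                      # insert w2[j-1]
                    best = min(best, dp[k - 1][i][j - 1] + 1)
                if i > 0 and j > 0:            # substitute (free if symbols match)
                    best = min(best, dp[k - 1][i - 1][j - 1] + (w1[i - 1] != w2[j - 1]))
                dp[k][i][j] = best
    return min(dp[k][n][m] / k for k in range(1, K + 1) if dp[k][n][m] < INF)

print(ned("abc", "abc"))  # 0.0
print(ned("abc", "axc"))  # 0.333...: cheapest complete path has 3 steps, 1 substitution
```

Note that normalizing over whole paths is what makes the uniform-weight triangle inequality question mentioned in the abstract nontrivial; dividing the plain edit distance by the word length would be a different (and coarser) notion.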
15:00 | Automated Logic-Based Reasoning for Analyzing Prime Video Code PRESENTER: Ilina Stoilkovska ABSTRACT. Developers working in Prime Video are constantly innovating, introducing new features, and improving the performance of the Prime Video application. Each modification of the Prime Video application code is required to pass a testing and code review process before it is delivered to customer devices. As the application evolves rapidly, it cannot be expected that manual reasoning about the code keeps up with the software complexity curve. This setting presents an opportunity for the application of automated logic-based approaches. In this abstract, we present techniques and tools we have used and developed in order to automatically reason about the Prime Video application code. We discuss a code review bot, which runs multiple static code analysis tools, and introduce a novel static analyzer for TypeScript, which enables reasoning about deep properties in a trustworthy manner, while providing transparent actionable feedback to the developers. |
XLoKR Session 3
14:00 | PRESENTER: Guilherme Paulino-Passos ABSTRACT. Recent work shows issues of consistency with explanations: methods generate local explanations that seem reasonable instance-wise, but that are inconsistent across instances. This suggests not only that instance-wise explanations can be unreliable, but mainly that, when interacting with a system via multiple inputs, a user may actually lose confidence in the system. To better analyse this issue, in this work we treat explanations as objects that can be subject to reasoning and present a formal model of the interactive scenario between user and system, via sequences of inputs and outputs. We argue that explanations can be thought of in terms of entailment, which, we argue, should be thought of as non-monotonic. This allows us: 1) to resolve some perceived inconsistencies in explanations; and 2) to consider properties from the non-monotonic reasoning literature and discuss their desirability, gaining more insight into the interactive explanation scenario. |
14:20 | Should counterfactual explanations always be data instances? PRESENTER: Francesca Toni ABSTRACT. Counterfactual explanations (CEs) are an increasingly popular way of explaining machine learning classifiers. Predominantly, they amount to data instances pointing to potential changes to the inputs that would lead to alternative outputs. In this position paper we question the widespread assumption that CEs should always be data instances, and argue instead that in some cases they may be better understood in terms of special types of relations between input features and classification variables. We illustrate how a special type of these relations, amounting to critical influences, can characterise and guide the search for data instances deemed suitable as CEs. These relations also provide compact indications of which input features - rather than their specific values in data instances - have counterfactual value. |
14:40 | Dialogue-Based Explanations of Reasoning in Rule-based Systems PRESENTER: Joe Collenette ABSTRACT. The recent focus on explainable artificial intelligence has been driven by a perception that complex statistical models are opaque to users. Rule-based systems, in contrast, have often been presented as self-explanatory. All the system needs to do is provide a log of its reasoning process and its operations are clear. We believe that such logs are often difficult for users to understand in part because of their size and complexity. We propose dialogue as an explanatory mechanism for rule-based AI systems to allow users and systems to co-create an explanation that focuses on the user’s particular interests or concerns. Our hypothesis is that when a system makes a deduction that was, in some way, unexpected by the user then locating the source of the disagreement or misunderstanding is best achieved through a collaborative dialogue process that allows the participants to gradually isolate the cause. We have implemented a system with this mechanism and performed a user evaluation that shows that in many cases a dialogue is preferred to a reasoning log presented as a tree. These results provide further support for the hypothesis that dialogue explanation could provide a good explanation for a rule-based AI system. |
15:00 | Clustering-Based Approaches for Symbolic Knowledge Extraction PRESENTER: Roberta Calegari ABSTRACT. Opaque models belonging to the machine learning world are ever more exploited in the most different application areas. These models, acting as black boxes (BB) from the human perspective, cannot be entirely trusted if the application is critical, unless there exists a method to extract symbolic and human-readable knowledge out of them. In this paper we analyse a recurrent design adopted by symbolic knowledge extractors for BB regressors---that is, the creation of rules associated with hypercubic input space regions. We argue that this kind of partitioning may lead to suboptimal solutions when the data set at hand is high-dimensional or does not satisfy symmetric constraints. We then propose a (deep) clustering-based approach to be performed before symbolic knowledge extraction to achieve better performance with data sets of any kind. |
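The recurring design the abstract analyses, hypercubic rules extracted from a black-box regressor, can be illustrated with a short Python sketch: cluster the inputs first, then describe each cluster by an axis-aligned bounding box. This is only a toy rendering of that design pattern, not the authors' algorithm, and the function names are mine.

```python
import numpy as np
from sklearn.cluster import KMeans

def hypercubic_rules(X: np.ndarray, y: np.ndarray, k: int = 3):
    """Cluster the input space, then describe each cluster by an axis-aligned
    bounding box (a 'hypercubic' region), with the mean predicted output as
    the rule consequent. Illustrative only."""
    labels = KMeans(n_clusters=k, n_init=10).fit_predict(X)
    rules = []
    for c in range(k):
        pts, outs = X[labels == c], y[labels == c]
        rules.append({
            "lower": pts.min(axis=0),   # per-feature lower bounds of the region
            "upper": pts.max(axis=0),   # per-feature upper bounds of the region
            "output": outs.mean(),      # consequent: mean black-box output
        })
    return rules

# Example: rules describing a toy black-box regressor f(x) = x0 + 2 * x1.
rng = np.random.default_rng(0)
X = rng.uniform(size=(200, 2))
rules = hypercubic_rules(X, X[:, 0] + 2 * X[:, 1])
```

The paper's critique is visible even here: axis-aligned boxes fit poorly when clusters are skewed or the data is high-dimensional, which motivates the proposed clustering-before-extraction step.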
14:50 | ABSTRACT. Quantitative reasoning is often treated as synonymous with numerical methods: the techniques that come to mind for problems involving quantitative properties are linear programming, optimization, and similar methods. In this talk, I will introduce a novel framework, called comparators, for quantitative reasoning that is based purely on automata methods rather than numerical ones, and discuss its pros and cons. |
15:00 | ABSTRACT. In synthesis, environment specifications are constraints on the environments that rule out certain environment behaviors. To solve synthesis of LTLf goals under environment assumptions, we could reduce the problem to LTL synthesis. Unfortunately, while synthesis in LTLf and in LTL have the same worst-case complexity (both are 2EXPTIME-complete), the algorithms available for LTL synthesis are much harder in practice than those for LTLf synthesis. Recently, it has been shown that when the environment specifications are in the form of fairness or stability, or GR(1) formulas, we can avoid such a detour to LTL and keep the simplicity of LTLf synthesis. Furthermore, even when the environment specifications are specified in full LTL we can partially avoid this detour. In this talk, I review how to effectively tackle the synthesis problem for reachability and safety tasks under reachability and safety environment specifications by using synthesis techniques based on finite-state automata, thus exploiting the intrinsic simplicity of LTLf. |
15:10 | ABSTRACT. Boolean synthesis is the process of synthesizing a multiple-output Boolean function from a declarative specification describing a relation between its inputs and outputs. Motivated by applications in diverse areas such as digital-circuit design, QBF certification and reactive synthesis, a number of different approaches have been developed to tackle this problem. This talk intends to give a brief overview of the ongoing development of Boolean-synthesis algorithms based on decision diagrams. This approach to Boolean synthesis is intrinsically connected to its application as a component of reactive synthesis, since decision diagrams are common in a number of successful reactive-synthesis algorithms. We will contrast the original algorithm based on Binary Decision Diagrams (BDDs) with a newly-introduced version making use of a symbolic CNF representation using Zero-Suppressed Decision Diagrams (ZDDs), and show how this alternative representation can mitigate some of the scalability limitations of BDDs. In the course of the talk we will touch on some of the algorithmic insights that went into the design of such techniques. |
15:20 | PRESENTER: Shibashis Guha ABSTRACT. Given a Markov decision process (MDP) $M$ and a formula $\Phi$, the strategy synthesis problem asks if there exists a strategy $\sigma$ s.t. the resulting Markov chain $M[\sigma]$ satisfies $\Phi$. This problem is known to be undecidable for the probabilistic temporal logic PCTL. We study a class of formulae that can be seen as a fragment of PCTL where a local, bounded horizon property is enforced all along an execution. Moreover, we allow for linear expressions in the probabilistic inequalities. This logic is at the frontier of decidability, depending on the type of strategies considered. In particular, strategy synthesis is decidable when strategies are deterministic while the general problem is undecidable. |
Heaps and Concurrency
15:00 | Proving Logical Atomicity using Lock Invariants PRESENTER: Shengyi Wang ABSTRACT. Logical atomicity has been widely accepted as a specification format for data structures in concurrent separation logic. While both lock-free and lock-based data structures have been verified against logically atomic specifications, most of the latter start with atomic specifications for the locks as well. In this paper, we compare this approach with one based on older lock-invariant-based specifications for locks. We show that we can still prove logically atomic specifications for data structures with fine-grained locking using these older specs, but the proofs are significantly more complicated than those that use atomic lock specifications. Our proof technique is implemented in the Verified Software Toolchain, which relies on older lock specifications for its soundness proof, and applied to C implementations of lock-based concurrent data structures. |
15:30 | Deciding Separation Logic with Heap Lists PRESENTER: Mihaela Sighireanu ABSTRACT. Heap list is a data structure used by many implementations of memory allocators to organize the heap part of the program’s memory. Operations on heap lists employ pointer arithmetic; therefore, reasoning about the correctness of heap list implementations requires dealing simultaneously with pointer arithmetic inside the inductive definition specifying the heap list. In this work, we investigate decision problems for SLAH, a separation logic fragment that allows pointer arithmetic and an inductive definition for heap lists, thus enabling specification of properties for programs in memory allocators. Pointer arithmetic inside the inductive definition for heap lists is challenging for automated reasoning. We tackle this challenge and achieve decision procedures for both satisfiability and entailment of SLAH formulas. The crux of our decision procedure for satisfiability is to compute summaries of the inductive definition for heap lists. We show that although the summary is naturally expressed as an existentially quantified non-linear arithmetic formula, it can actually be transformed into an equivalent linear arithmetic formula. The decision procedure for entailment, on the other hand, has to match and split the spatial atoms according to the arithmetic relation between address variables. We report on the implementation of these decision procedures and their good performance in solving problems arising from the verification of building-block programs that manipulate heap lists in memory allocators. |
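To make the shape of the challenge concrete, an inductive heap-list definition mixing separation logic with pointer arithmetic might look as follows. This is a schematic reconstruction of my own for illustration, where blk is a hypothetical predicate for an untyped memory block; consult the paper for SLAH's actual definition.

```latex
% A heap list from x to y is either empty (x = y), or starts with a block
% whose first cell stores the block size s; the recursion then advances by
% pointer arithmetic to the next block, at address x + s.
\[
  \mathit{hls}(x, y) \;::=\;
    (x = y \wedge \mathrm{emp})
    \;\vee\;
    \exists s.\; \big( s \ge 2 \,\wedge\, x \mapsto s \,\ast\, \mathrm{blk}(x{+}1,\, x{+}s) \,\ast\, \mathit{hls}(x{+}s,\, y) \big)
\]
```

The interaction between the spatial connective $\ast$ and the arithmetic term $x + s$ in the recursive call is precisely what makes summarising such definitions nontrivial.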
16:00 | PRESENTER: Alexandre Moine ABSTRACT. We present a program logic for reasoning about heap space in a λ-calculus equipped with garbage collection and mutable state. Compared with prior work by Madiot and Pottier, we target a higher-level language. This requires us to introduce an assertion to reason about the roots held by the evaluation context. |
Modelling and Applications
16:00 | PRESENTER: Nicolas Lecomte ABSTRACT. An H-partition of a finite undirected simple graph G is a labeling of G's vertices such that the constraints expressed by the model graph H are satisfied. For every model graph H, it can be decided in non-deterministic polynomial time whether a given input graph G admits an H-partition. Moreover, it has been shown by Dantas et al. that for most model graphs, this decision problem is in deterministic polynomial time. In this paper, we show that these polynomial-time algorithms for finding H-partitions can be expressed in Datalog with stratified negation. Moreover, using the answer set solver Clingo, we have conducted experiments to compare straightforward guess-and-check programs with Datalog programs. Our experiments indicate that in Clingo, guess-and-check programs run faster than their equivalent Datalog programs. |
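The guess-and-check idea benchmarked in the paper can be sketched in plain Python (the experiments themselves use Clingo, not Python). The convention assumed below is one common reading of H-partitions and may differ in details from Dantas et al.: all parts must be nonempty, an edge of H forces all edges between the two corresponding parts of G, a non-edge of H forbids them, and adjacency inside a part is unconstrained.

```python
from itertools import combinations, product

def h_partitions(g_vertices, g_edges, h_vertices, h_edges):
    """Naive exponential guess-and-check for H-partitions (illustration only)."""
    GE = {frozenset(e) for e in g_edges}
    HE = {frozenset(e) for e in h_edges}
    for labels in product(h_vertices, repeat=len(g_vertices)):   # guess a labeling
        part = dict(zip(g_vertices, labels))
        if set(labels) != set(h_vertices):                        # all parts nonempty
            continue
        ok = True                                                 # check constraints
        for u, v in combinations(g_vertices, 2):
            if part[u] == part[v]:
                continue  # same part: no constraint under this convention
            # edge between u, v must exist iff H has an edge between their parts
            if (frozenset((u, v)) in GE) != (frozenset((part[u], part[v])) in HE):
                ok = False
                break
        if ok:
            yield part

# Example: H = an edge {A, B} asks for a complete join between the two parts;
# the 4-cycle splits into {1, 3} and {2, 4} (in both label orders).
cycle = [(1, 2), (2, 3), (3, 4), (4, 1)]
print(list(h_partitions([1, 2, 3, 4], cycle, ["A", "B"], [("A", "B")])))
```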
16:20 | Translating Definitions into the Language of Logic Programming: A Case Study ABSTRACT. In the process of creating a declarative program, the programmer transforms a problem specification expressed in a natural language into an executable specification. We study the case when the given specification is expressed by a mathematically precise definition, and the goal is to write a program for an answer set solver. |
16:40 | A normative model of explanation for binary classification legal AI and its implementation on causal explanations of Answer Set Programming ABSTRACT. In this paper, I provide a normative model of explanation for the output of AI algorithms used in legal practice. I focus on binary classification algorithms due to their extensive use in the field. In the last part of the paper, I examine the model’s compatibility with causal explanations provided by Answer Set Programming (ASP) causal models. The motivation for proposing this model is the necessity of providing explanations for the output of legal AI. Among the many arguments supporting that necessity, the proposed model addresses the argument that legal AI’s output should be objectionable. That can be achieved only if the explanation of the output has a form that makes it amenable to evaluation by legal practitioners. Hence, I first provide a normative model for the explanations used by legal practitioners in their practice (CLM_LP), and then a normative model for the explanations of legal AI’s outputs (EXP_BC), which I base on CLM_LP. CLM_LP is in turn based on the Classical Model of Science (CMS), the normative model of explanation that every “proper” science should follow according to philosophers throughout history. Following the introduction of EXP_BC, I propose three degrees of explainability for binary classification explanations, according to their fidelity to EXP_BC. I further argue that machine learning cannot satisfy even the lowest degree of explainability, while rule-based AI - like ASP-based AI - can satisfy the highest degree. Concluding, I propose an in-progress ASP methodology that can use EXP_BC to provide causal explanations. In the proposed ASP methodology, I use causal graphs as models of causal inference, as well as a metaphysically neutral interventionist account of causation. CLM_LP and EXP_BC are normative models based on the derivation relation of subsumptive-deductive inference among norms and propositions. On the other hand, the proposed ASP methodology is based on causal - and hence non-subsumptive-deductive - relations among norms and propositions that supervene on the subsumptive-deductive ones. Consequently, the motivation behind proposing this specific ASP methodology is to establish a precedent for unifying different types of explanations (e.g., deductive and causal explanations), as well as to bridge the gaps among computational modelling, actual practice, and the philosophical underpinnings of a domain of expertise (law in this case). |
17:00 | Assumable Answer Set Programming ABSTRACT. To model assumption-based intelligent agents, which make assumptions and use them to construct their belief sets, this paper proposes a logic programming language, AASP (Assumable Answer Set Programming), extending ASP (Answer Set Programming). Rational principles of assumption-based reasoning are given to define the semantics of AASP programs. The relation of AASP to some existing default formalisms implies that AASP can be used to model and solve some interesting problems with incomplete information in the framework of assumption-based reasoning. |
This is a KR-2022 tutorial on Sunday, July 31st: two 1.5-hour sessions in the afternoon, with a break: 14:00-15:30 and 16:00-17:30.
FoMLAS Session 4
16:00 | PRESENTER: João Batista Pereira Matos Júnior ABSTRACT. Neural networks are essential components of learning-based software systems. However, their high compute, memory, and power requirements make using them in low resources domains challenging. For this reason, neural networks are often compressed before deployment. Existing compression techniques tend to degrade the network accuracy. We propose Counter-Example Guided Neural Network Compression Refinement (CEG4N). This technique combines search-based quantization and equivalence verification: the former minimizes the computational requirements, while the latter guarantees that the network’s output does not change after compression. We evaluate CEG4N on a diverse set of benchmarks that include large and small networks. Our technique successfully compressed 80% of the networks in our evaluation while producing models with up to 72% better accuracy than state-of-the-art techniques. |
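The combination described in the abstract, search-based quantization guided by an equivalence verifier, follows the general counterexample-guided pattern. The sketch below is reconstructed from the abstract alone: net, quantize, and verify_equiv are hypothetical placeholders, not CEG4N's actual API.

```python
def compress_with_cegar(net, quantize, verify_equiv, bits=8, max_rounds=20):
    """Counterexample-guided compression loop (illustrative sketch).
    quantize(net, bits, cexs): search for a bits-wide quantization that agrees
        with `net` on all collected counterexamples, or None if none exists.
    verify_equiv(net, candidate): None if the two networks are proved
        equivalent, otherwise a counterexample input."""
    counterexamples = []
    for _ in range(max_rounds):
        candidate = quantize(net, bits, counterexamples)
        if candidate is None:
            bits += 1                  # search exhausted: relax the compression
            continue
        cex = verify_equiv(net, candidate)
        if cex is None:
            return candidate           # verified compressed network
        counterexamples.append(cex)    # exclude this behaviour and retry
    return net                         # fall back to the uncompressed network
```

The design point is that the verifier supplies exactly the inputs on which the compressed network misbehaves, so the quantization search is steered by failures rather than by random sampling.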
16:30 | ABSTRACT. I'd like to present the content of the attached manuscript, currently in press for IEEE Transactions on Intelligent Vehicles. The work establishes a formal logic foundation for RSS, a recent methodology for "theorem proving" the safety of automated driving systems. RSS was proposed by researchers at Intel/Mobileye and is attracting a lot of interest from industry and academia alike, but its formal aspects have been underdeveloped (although RSS is really about theorem proving). In the paper, we introduce a logic dFHL---it can be seen as an adaptation of Platzer's differential dynamic logic---and a workflow for deriving formally verified safety rules. This extension of RSS allows one to prove the safety of much more complicated scenarios than RSS could previously handle. The workflow is experimentally evaluated, too. The work treats automated vehicles as black boxes, so it accommodates ML components that cannot be logically modeled. I believe the work will be of interest to the audience 1) because of the methodology whereby black-box models are "verified" by imposing formally guaranteed rules/contracts, and 2) because automated driving is a hot application topic. |
16:00 | Copromotion and Taylor Approximation (Work in Progress) PRESENTER: Jean-Simon Lemay ABSTRACT. Quantitative models of Linear Logic, where non-linear proofs are interpreted by power series, hold a special place in the semantics of linear logic. They are at the heart of Differential Linear Logic and its applications to probabilistic programming and to new proof methods in the lambda-calculus. In this abstract, we explore a refinement of DiLL with a copromotion rule, modelled by categories where all maps are quantitative. |
16:30 | ABSTRACT. Linear logic’s coherent spaces are a simple kind of abstract simplicial complex (asc, for short). These combinatorially encode a topological space that is usually studied via (simplicial) homology. We can easily associate an asc with any formula of LL under some semantics, in such a way that proofs become the faces of the asc and the asc satisfactorily represents the “space of the (interpretations of the) proofs” of the formula. Starting from this remark, we discuss the natural question of defining a homology for those asc’s that is invariant under type isomorphism. |
17:00 | PRESENTER: Rémi Di Guardia ABSTRACT. We propose a new proof of sequentialization for the proof nets of unit-free multiplicative-additive linear logic of Hughes & Van Glabbeek. This is done by adapting a method from unit-free multiplicative linear logic, showing the robustness of this approach. |
16:00 | ABSTRACT. Moshe has made key contributions in the areas of logic and formal methods, and more recently he has been one of the main voices publicly discussing the social impact of modern computer technology and AI. I propose to talk about two themes related to these. The first is recent work that highlights the role of formal languages and logic in representation learning. The second is my experience teaching an optional, upper-division course that I developed, called ``Social and technological change'', which relates to Moshe's concerns. |
16:10 | PRESENTER: Assaf Marron ABSTRACT. The proposed theory of survival of the fitted [1] explains the evolution of species in the biosphere based on the sustainability of multi-scale interaction networks: interaction patterns are what sustains the network and its constituents. Winning in struggles and having reproductive advantages, as associated with Darwinian and Neo-Darwinian survival of the fittest and Natural Selection, are secondary. We define a species as the collective of organisms that share a set of fundamental, core, sustaining interactions, which we term the species interaction code. Here (see also [2]), we show that interaction codes are formed, and subsequently evolve, in response to genetic, environmental and other changes, in a process that in some ways relates to the well-studied concepts of autoencoding and dimensionality reduction in machine learning. In a unique autoencoding process, which we term natural autoencoding, every biological process or interaction in the entire biosphere takes part in multiple functions: encoding existing organisms and interactions; decoding such codes into networks of sustained ecosystems; and reacting to innovation and change by adjusting species interaction codes. DNA is only one element in this mechanism. One result of this process is that species interaction codes both map and construct the species environment. Natural autoencoding differs from typical artificial autoencoding in three ways. 1. It is sustainment-guided: typical artificial autoencoders iteratively optimize a loss function (each input is encoded, its representation is then decoded, and the differences between inputs and the corresponding reconstructed outputs are computed), whereas in natural autoencoding, evolving species interaction codes stabilize around de-facto sustained interaction patterns. This is retrospective; there is no sustainability assessment of a present state. What works works. 2. It is multi-species and multi-scale: natural autoencoding concurrently generates many species interaction codes, at multiple scales/levels, including biochemical, cellular, and organism. 3. It is recursive and reflective: the biosphere drives its own evolution; the input to the "natural autoencoder" is the entire biosphere, incorporating all available outputs of previous steps, and the autoencoder itself. Our team is now designing computer-based autoencoders to model natural autoencoding. Principles associated with Natural Selection and survival of the fittest have significantly affected human political, social, economic, and educational norms. Survival-of-the-fitted by natural autoencoding explains why a habitable environment requires diverse networked interactions. This perspective can help drive much-needed changes in human behavior, both environmental and social, including the kind of social responsibility advocated by Moshe Vardi [3]. Acknowledgement: We thank D. Harel, S. Szekely and G. Frankel for valuable contributions. |
16:20 | ABSTRACT. My acquaintance with Moshe's work dates back to the 1990s, during my PhD studies at Imperial College, London. One of his papers in particular, on why modal logic is so robustly decidable, clearly caught my attention: he neatly explained not only the paper's main topic but also handed the reader an understanding of model checking and validity \cite{Vardi96}. This paper inspired our work on Connectionist Modal Logics years later \cite{DAVILAGARCEZ200734}, and our approach to Neurosymbolic AI in general \cite{3rdwave}. Several years later, we invited him to be a keynote speaker at the Brazilian National Conference on Computer Science. Since then, we have become friends and met several times. Our recent collaboration aims at better understanding and explaining machine learning \cite{Tavares2020}, showing that neural learning and logic can lead to richer neurosymbolic AI models, and at integrating machine learning and reasoning toward more robust AI \cite{Lamb2020,PratesALLV19}. In this talk, I will highlight Vardi's contribution to this endeavour and the inspiration that I have drawn from his exquisite yet humongous contribution to computer science, logic, and artificial intelligence. |
16:30 | PRESENTER: Stefania Costantini ABSTRACT. We intend to present our line of work based on epistemic logic, ongoing for some years now, in which we cope with group dynamics in multi-agent settings. We have proposed, and extended over time, the epistemic logic \textit{L-DINF}, which allows one to formalize such dynamics, with special attention to explainability, modularity, and the ability to model cognitive aspects of agents' operation within the application at hand. |
16:00 | PRESENTER: Vlad Rusu ABSTRACT. Coinduction is an important concept in functional programming. To formally prove properties of corecursive functions one can try to define them in a proof assistant such as Coq. But there are limitations on the functions that can be defined. In particular, corecursive calls must occur directly under a call to a constructor, without any calls to other recursive functions in between. In this paper we show how a partially ordered set endowed with a notion of approximation can be organized as a Complete Partial Order. This makes it possible to define corecursive functions without using Coq's corecursion, as the unique solution of a fixpoint equation, thereby escaping Coq's built-in limitations. |
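The classical background result behind defining a function as the solution of a fixpoint equation on a CPO is Kleene's fixpoint theorem; whether the paper uses exactly this formulation is not stated in the abstract, but it conveys the idea of building the function as a limit of approximations.

```latex
% Kleene fixpoint theorem: on a pointed omega-CPO (D, \sqsubseteq, \bot),
% every Scott-continuous f : D -> D has a least fixpoint, obtained as the
% supremum of the chain of finite approximations
%   \bot \sqsubseteq f(\bot) \sqsubseteq f(f(\bot)) \sqsubseteq ...
\[
  \mathrm{lfp}(f) \;=\; \bigsqcup_{n \in \mathbb{N}} f^{n}(\bot)
\]
```

Uniqueness of the solution, as claimed in the abstract, requires additional structure on the approximation order beyond what this theorem alone provides.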
16:30 | Towards a Refactoring Tool for Dependently-Typed Programs (work in progress) PRESENTER: Christopher Brown ABSTRACT. While there is considerable work on refactoring functional programs, so far this has not extended to dependently-typed programs. In this paper, we begin to explore this space by looking at a range of transformations related to indexed data and functions. |
16:45 | Refactoring Steps for P4 (work in progress) PRESENTER: Máté Tejfel ABSTRACT. P4 is a domain-specific programming language developed mainly to describe the data plane layer of packet-processing algorithms of different network devices (e.g. switches, routers). The language has special domain-specific constructs (e.g. match/action tables) that cannot be found in other languages, and as such there is no existing methodology yet for refactoring these constructs. The paper introduces the definition of some P4-specific refactoring steps. The proposed steps are implemented using P4Query, an analyzer framework dedicated to P4. |
16:00 | Fixed-Template Promise Model Checking Problems PRESENTER: Silvia Butti ABSTRACT. The fixed-template constraint satisfaction problem (CSP) can be seen as the problem of deciding whether a given primitive positive first-order sentence is true in a fixed structure (also called model). We study a class of problems that generalizes the CSP simultaneously in two directions: we fix a set L of quantifiers and Boolean connectives, and we specify two versions of each constraint, one strong and one weak. Given a sentence which only uses symbols from L, the task is to distinguish whether the sentence is true in the strong sense, or it is false even in the weak sense. We classify the computational complexity of these problems for the existential positive equality-free fragment of first-order logic, and we prove some upper and lower bounds for the positive equality-free fragment. The partial results are sufficient, e.g., for all extensions of the latter fragment. |
16:20 | PRESENTER: Tephilla Prince ABSTRACT. Bounded model checking (BMC) is an efficient formal verification technique which allows for desired properties of a software system to be checked on bounded runs of an abstract model of the system. The properties are frequently described in some temporal logic and the system is modeled as a state transition system. In this paper we propose a novel counting logic, L_C, to describe the temporal properties of client-server systems with an unbounded number of clients. We also propose a two-dimensional bounded model checking (2D-BMC) strategy that uses two distinguishable parameters, one for execution steps and another for the number of tokens in the net representing a client-server system, and these two evolve separately, which is different from the standard BMC techniques in the Petri Nets formalism. This 2D-BMC strategy is implemented in a tool called DCModelChecker which leverages the 2D-BMC technique with a state-of-the-art satisfiability modulo theories (SMT) solver Z3. The system is given as a Petri Net and properties specified using L_C are encoded into formulas that are checked by the solver. Our tool can also work on industrial benchmarks from the Model Checking Contest (MCC). We report on these experiments to illustrate the applicability of the 2D-BMC strategy. |
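For readers unfamiliar with SMT-based BMC, the sketch below shows a plain one-parameter BMC loop in z3py: unroll the transition relation step by step and ask the solver whether a target state is reachable at each depth. It is background illustration only; the paper's 2D-BMC additionally bounds the token count of the Petri net, which this sketch does not model, and all names here are mine.

```python
from z3 import Int, Solver, sat

def bmc(init, trans, target, max_depth):
    """One-dimensional BMC: init, trans, target are predicates over Z3 terms.
    Returns (depth, model) for the shallowest violation, or None."""
    xs = [Int(f"x_{k}") for k in range(max_depth + 1)]  # one copy of the state per step
    s = Solver()
    s.add(init(xs[0]))
    for k in range(max_depth + 1):
        s.push()
        s.add(target(xs[k]))          # is the target reachable in exactly k steps?
        if s.check() == sat:
            return k, s.model()
        s.pop()
        if k < max_depth:
            s.add(trans(xs[k], xs[k + 1]))  # unroll one more transition
    return None

# Example: a counter starting at 0 and incrementing by 1; is x == 3 reachable?
print(bmc(lambda x: x == 0,
          lambda x, y: y == x + 1,
          lambda x: x == 3,
          max_depth=5))  # found at depth 3
```

The 2D refinement in the paper can be thought of as running such a loop over a grid of bounds, increasing the step bound and the token bound independently.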
16:40 | A note on Turing 1936 ABSTRACT. Turing's historical article of 1936 is the result of a special endeavor focused on the factuality of a general process for algorithmic computation. The Turing machine, as a universal feasibility test for computing procedures, is applied up to a close examination of what are considered to be the limits of computation itself. In this regard, Turing claims to have defined a number which is not computable, arguing that there can be no machine computing the diagonal on the enumeration of the computable sequences. This article closely examines the original 1936 argument, displaying how it cannot be considered a demonstration, and that there is indeed no evidence of such a defined number that is not computable. |
XLoKR Session 4
16:00 | Invited Talk: Explanation Generation in Applications of Answer Set Programming ABSTRACT. TBA |
17:00 | Explaining Soft-Goal Conflicts through Constraint Relaxations PRESENTER: Rebecca Eifler ABSTRACT. Recent work suggests explaining trade-offs between soft goals in terms of their conflicts, i.e., minimal unsolvable soft-goal subsets. But this does not explain the conflicts themselves: why can a given set of soft goals not be jointly achieved? Here we approach that question in terms of the underlying constraints on plans in the task at hand, namely resource availability and time windows. In this context, a natural form of explanation for a soft-goal conflict is a minimal constraint relaxation under which the conflict disappears (“if the deadline was 1 hour later, it would work”). We explore algorithms for computing such explanations. A baseline is to simply loop over all relaxed tasks and compute the conflicts for each separately. We improve over this with two algorithms that leverage information -- conflicts, reachable states -- across relaxed tasks. We show that these algorithms can exponentially outperform the baseline in theory, and we run experiments confirming that advantage in practice. |
17:20 | Stepwise Explanations of Unsatisfiable Constraint Programs (Extended abstract) PRESENTER: Ignace Bleukx ABSTRACT. Unsatisfiable constraint satisfaction problems (CSPs) are a common sight when formulating a constraint model. For the modeller, only a handful of tools are available to explain why a model is UNSAT, making it hard to understand the cause of the unsatisfiability and to debug the model. Existing tools rely on extracting a subset of constraints which is itself unsatisfiable (a MUS). Although some obvious bugs in the model can be detected easily using such techniques, in other cases the extracted MUS involves too many constraints to understand their dynamics, and the cause of unsatisfiability may not be made clear. For explaining solutions of Boolean satisfaction problems, recent years have seen the emergence of stepwise explanations. Such explanation sequences provide justifications for Boolean facts in small steps a user can understand. In this work, we aim to generate a sequence which derives the unsatisfiability of a constraint program. This requires several orthogonal extensions of previous work. In this paper, we extend the notion of explainable facts to integer domains; introduce the concept of simple and non-simple constraints; and lastly, we propose an algorithm which can be used to derive the unsatisfiability of a constraint program. |
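The MUS extraction that existing tools rely on can be sketched generically with the classic deletion-based algorithm: try to drop each constraint in turn, keeping only those whose removal restores satisfiability. The is_sat oracle below is a placeholder for any solver call; this is the baseline the abstract contrasts with stepwise explanations, not the authors' algorithm.

```python
def deletion_mus(constraints, is_sat):
    """Deletion-based MUS extraction: returns a minimal unsatisfiable subset
    of `constraints`, assuming is_sat(constraints) is False to begin with."""
    core = list(constraints)
    i = 0
    while i < len(core):
        candidate = core[:i] + core[i + 1:]
        if not is_sat(candidate):
            core = candidate   # constraint i is not needed for unsatisfiability
        else:
            i += 1             # constraint i is necessary: keep it and move on
    return core

# Toy example: integer constraints as (name, predicate) pairs over x in 0..99.
cons = [("x > 5", lambda x: x > 5),
        ("x < 3", lambda x: x < 3),
        ("x even", lambda x: x % 2 == 0)]
is_sat = lambda cs: any(all(p(x) for _, p in cs) for x in range(100))
print([name for name, _ in deletion_mus(cons, is_sat)])  # ['x > 5', 'x < 3']
```

The output illustrates the paper's complaint: the MUS names the conflicting constraints but says nothing about how they interact, which is what a stepwise sequence of small derivations is meant to supply.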
Report on SL-COMP
17:00 | Report on SL-COMP |
Invited Talk by Orna Kupferman
Open problem session and discussions.
CAUSAL and EELP
17:30 | Correct Causal Inference in Probabilistic Logic Programming |
17:50 | A Causal Perspective on AI Deception |
18:10 | Epistemic Logic Programs: a Novel Perspective and some Extensions PRESENTER: Stefania Costantini |
17:30 | The Herbrand Manifesto: Thinking Inside the Box PRESENTER: Vinay Chaudhri ABSTRACT. The traditional semantics for relational logic (sometimes called Tarskian semantics) is based on the notion of interpretations of constants in terms of objects external to the logic. Herbrand semantics is an alternative that is based on truth assignments for ground sentences without reference to external objects. Herbrand semantics is simpler and more intuitive than Tarskian semantics; and, consequently, it is easier to teach and learn. Moreover, it is stronger than Tarskian semantics. For example, while it is not possible to finitely axiomatize integer arithmetic with Tarskian semantics, this can be done easily with Herbrand semantics. The downside is a loss of some common logical properties, such as compactness and inferential completeness. However, there is no loss of inferential power - anything that can be deduced according to Tarskian semantics can also be deduced according to Herbrand semantics. Based on these results, we argue that there is value in using Herbrand semantics for relational logic in place of Tarskian semantics. It alleviates many of the current problems with relational logic and ultimately may foster a wider use of relational logic in human reasoning and computer applications. |
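The central notion, interpretations as truth assignments to ground sentences rather than mappings into external domains, can be made concrete with a tiny Python toy (my own example, not from the talk): over a finite set of constants, a Herbrand interpretation is just a subset of the Herbrand base, and quantifiers range over ground terms.

```python
from itertools import product

constants = ["a", "b"]

# The Herbrand base: all ground atoms over a single binary predicate p.
base = {("p", x, y) for x, y in product(constants, repeat=2)}

# A Herbrand interpretation is a subset of the base: a truth assignment
# to ground atoms, with no external domain of objects involved.
I = {("p", "a", "a"), ("p", "b", "b")}

# Under Herbrand semantics, a universally quantified sentence is true iff
# every ground instance is true, and an existential iff some instance is.
forall_x_pxx = all(("p", c, c) in I for c in constants)          # "p(x, x) for all x"
exists_xy_pxy = any(atom in I for atom in base)                  # "p(x, y) for some x, y"

print(forall_x_pxx, exists_xy_pxy)  # True True
```

The contrast with Tarskian semantics is that no choice of domain or denotation for the constants appears anywhere above, which is also why properties tied to arbitrary domains, such as compactness, can fail.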
18:30 | Teaching logic to CS undergraduates PRESENTER: Thomas Zeume ABSTRACT. In this position paper we (1) provide context on how logic is (typically) anchored in CS curricula of German universities, (2) describe an introductory level logic course for CS students taught at TU Dortmund and Ruhr University Bochum designed to appeal to a broad spectrum of CS students, (3) report on experiences in establishing this course as a compulsory module in the CS Bachelor curriculum at Ruhr University Bochum. Additionally we (4) sketch a vision for leveraging modern technology in logic instruction. |