FLOC 2022: FEDERATED LOGIC CONFERENCE 2022
PROGRAM FOR MONDAY, AUGUST 8TH

08:30-09:00 Coffee & Refreshments
09:00-10:30 Session 94A: Hyperproperties and Security (CAV)
Location: Taub 1
09:00
Software Verification of Hyperproperties Beyond k-Safety
PRESENTER: Raven Beutner

ABSTRACT. Temporal hyperproperties are system properties that relate multiple execution traces. For (finite-state) hardware, temporal hyperproperties are supported by model checking algorithms, and tools for general temporal logics like HyperLTL exist. For (infinite-state) software, the analysis of temporal hyperproperties has, so far, been limited to $k$-safety properties, i.e., properties that stipulate the absence of a bad interaction between any $k$ traces. In this paper, we present an automated method for the verification of $\forall^k\exists^l$-safety properties in infinite-state systems. A $\forall^k\exists^l$-safety property stipulates that for any $k$ traces, there \emph{exist} $l$ traces such that the resulting $k+l$ traces do not interact badly. This combination of universal and existential quantification enables us to express many properties beyond $k$-safety, including, for example, generalized non-interference or program refinement. Our method is based on a strategy-based instantiation of existential trace quantification combined with a program reduction, both in the context of a fixed predicate abstraction. Notably, our framework allows for mutual dependence of strategy and reduction.

09:20
A Scalable Shannon Entropy Estimator
PRESENTER: Kuldeep S. Meel

ABSTRACT. Quantified information flow (QIF) has emerged as a rigorous approach to quantitatively measure confidentiality; the information-theoretic underpinning of QIF allows the end-users to link the computed quantities with the computational effort required on the part of the adversary to gain access to desired confidential information. In this work, we focus on the estimation of Shannon entropy for a given program P. As a first step, we focus on the case wherein a Boolean formula F(X,Y) captures the relationship between inputs X and output Y of P. Such formulas F(X,Y) have the property that there exists exactly one valuation to Y such that F is satisfied. The existing techniques require O(2^m) model counting queries, where m =|Y|. We propose the first efficient algorithmic technique, called EntropyEstimation, to estimate the Shannon entropy of F(X,Y) with PAC-style guarantees, i.e., the computed estimate is guaranteed to lie within a (1 ± epsilon)-factor of the ground truth with confidence at least 1-delta. Furthermore, EntropyEstimation makes only O(min(m,n)/epsilon^2) counting and sampling queries, where m =|Y|, and n =|X|, thereby achieving a significant reduction in the number of model counting queries. We demonstrate the practical efficiency of our algorithmic framework via a detailed experimental evaluation. Our evaluation demonstrates that the proposed framework scales to the formulas beyond the reach of the previously known approaches.
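
For intuition only (not part of the paper), the quantity being estimated can be computed exactly by brute force when the input space is tiny; the point of EntropyEstimation is precisely to avoid this exponential enumeration. A minimal Python sketch with a hypothetical toy circuit:

```python
from itertools import product
from math import log2

def shannon_entropy(circuit, n_inputs):
    """Exact Shannon entropy of the output distribution of `circuit` under
    uniformly random inputs. Enumerates all 2^n inputs, so it is only
    feasible for tiny n."""
    counts = {}
    for x in product([0, 1], repeat=n_inputs):
        y = circuit(x)                      # each input has exactly one output
        counts[y] = counts.get(y, 0) + 1
    total = 2 ** n_inputs
    return -sum(c / total * log2(c / total) for c in counts.values())

# Toy program with inputs (x0, x1, x2) and outputs (x0 AND x1, x0 XOR x2).
print(shannon_entropy(lambda x: (x[0] & x[1], x[0] ^ x[2]), 3))
```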

09:40
PoS4MPC: Automated Security Policy Synthesis for Secure Multi-Party Computation
PRESENTER: Yuxin Fan

ABSTRACT. Secure multi-party computation (MPC) is a promising technique for privacy-preserving applications. A number of MPC frameworks have been proposed to reduce the burden of designing customized protocols, allowing non-experts to quickly develop and deploy MPC applications. To improve performance, recent MPC frameworks allow users to declare secret variables so that only these variables are to be protected. However, in practice, it can be highly non-trivial for non-experts to specify secret variables: declaring too many degrades performance, while declaring too few compromises privacy. To address this problem, we propose an automated security policy synthesis approach that declares as few secret variables as possible without compromising security. Our approach is a synergistic integration of type inference and symbolic reasoning. The former quickly infers a sound, but sometimes conservative, security policy, whereas the latter precisely identifies secret variables in a security policy that can be declassified. Moreover, the results from the symbolic reasoning are fed back to type inference to refine the security types even further. We implement our approach in a tool, named PoS4MPC. Experimental results on five typical MPC applications confirm the efficacy of our approach.

10:00
Explaining Hyperproperty Violations
PRESENTER: Julian Siber

ABSTRACT. Hyperproperties relate multiple computation traces to each other. Model checkers for hyperproperties thus return, in case a system model violates the specification, a set of traces as a counterexample. Fixing the erroneous relations between traces in the system that led to the counterexample is a difficult manual effort that highly benefits from additional explanations. In this paper, we present an explanation method for counterexamples to hyperproperties described in the specification logic HyperLTL. We extend Halpern and Pearl's definition of actual causality to sets of traces witnessing the violation of a HyperLTL formula, which allows us to identify the events that caused the violation. We report on the implementation of our method and show that it significantly improves on previous approaches for analyzing counterexamples returned by HyperLTL model checkers.

10:20
Distilling Constraints in Zero-Knowledge Protocols
PRESENTER: Miguel Isabel

ABSTRACT. The most widely used Zero-Knowledge (ZK) protocols require provers to prove they know a solution to a computational problem expressed as a Rank-1 Constraint System (R1CS). An R1CS is essentially a system of non-linear arithmetic constraints over a set of signals, whose security level depends on its non-linear part only, as the linear (additive) constraints can be easily solved by an attacker. Distilling the essential constraints from an R1CS by removing the part that does not contribute to its security is important, not only to reduce costs (time and space) of producing the ZK proofs, but also to reveal to cryptographic programmers the real hardness of their proofs. In this paper, we formulate the problem of distilling constraints from an R1CS as the (hard) problem of simplifying constraints in the realm of non-linearity. To the best of our knowledge, it is the first time that constraint-based techniques developed in the context of formal methods are applied to the challenging problem of analysing and optimizing ZK protocols.
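
As a rough illustration of the objects involved (a toy sketch, not the paper's tool): an R1CS constraint over a signal vector z requires ⟨a,z⟩·⟨b,z⟩ = ⟨c,z⟩ in a prime field, and constraints where one factor is constant form the linear (additive) part that an attacker can solve away.

```python
# Toy R1CS check; a tiny prime is used for illustration only
# (real systems use a large pairing-friendly prime).
P = 97

def dot(v, z):
    return sum(vi * zi for vi, zi in zip(v, z)) % P

def holds(constraint, z):
    a, b, c = constraint
    return (dot(a, z) * dot(b, z)) % P == dot(c, z)

# Signals z = (1, x, y, out) with x = 3, y = x + 1, out = x * y.
z = (1, 3, 4, 12)
mul = ((0, 1, 0, 0), (0, 0, 1, 0), (0, 0, 0, 1))  # non-linear: x * y = out
lin = ((1, 1, 0, 0), (1, 0, 0, 0), (0, 0, 1, 0))  # linear:     (x + 1) * 1 = y
print(holds(mul, z), holds(lin, z))               # True True
```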

09:00-10:30 Session 94B: Voting and Distributed Systems (CSF)
Location: Taub 2
09:00
N-Tube: Formally Verified Secure Bandwidth Reservation in Path-Aware Internet Architectures
PRESENTER: Thilo Weghorn

ABSTRACT. We present N-Tube, a novel, provably secure, inter-domain bandwidth reservation algorithm that runs on a network architecture supporting path-based forwarding. N-Tube reserves global end-to-end bandwidth along network paths in a distributed, neighbor-based, and tube-fair way. It guarantees that benign bandwidth demands are granted available allocations that are immutable, stable, lower-bounded, and fair, even during adversarial demand bursts.

We formalize N-Tube and powerful adversaries as a labeled transition system, and inductively prove its safety and security properties. We also apply statistical model checking to validate our proofs and perform an additional quantitative assessment of N-Tube, providing strong guarantees for protection against DDoS attacks. We are not aware of any other complex networked system designs that have been subjected to a comparable analysis of both their qualitative properties (such as correctness and security) and their quantitative properties (such as performance).

09:30
Applying consensus and replication securely with FLAQR
PRESENTER: Priyanka Mondal

ABSTRACT. Availability is crucial to the security of distributed systems, but guaranteeing availability is hard, especially when participants in the system may act maliciously. Quorum replication protocols provide both integrity and availability: data and computation are replicated at multiple independent hosts, and a quorum of these hosts must agree on the output of all operations applied to the data. Unfortunately, these protocols have high overhead and can be difficult to calibrate for a specific application’s needs. Ideally, developers could use high-level abstractions for consensus and replication to write fault-tolerant code that is secure by construction.

This paper presents Flow-Limited Authorization for Quorum Replication (FLAQR), a core calculus for building distributed applications with heterogeneous quorum replication protocols while enforcing end-to-end information security. Our type system ensures that well-typed FLAQR programs cannot fail (experience an unrecoverable error) in ways that violate their type-level specifications. We present noninterference theorems that characterize FLAQR’s confidentiality, integrity, and availability in the presence of consensus, replication, and failures, as well as a liveness theorem for the class of majority quorum protocols under a bounded number of faults.

10:00
How Efficient are Replay Attacks against Vote Privacy? A Formal Quantitative Analysis
PRESENTER: Johannes Müller

ABSTRACT. Replay attacks are among the most well-known attacks against vote privacy. Many e-voting systems have been proven vulnerable to replay attacks, including systems like Helios that are used in real elections. Despite their popularity, it is commonly believed that replay attacks are inefficient, but the actual threat that they pose to vote privacy has never been studied formally. Therefore, in this paper, we precisely analyze for the first time how efficient replay attacks really are. We study this question from commonly used and complementary perspectives on vote privacy, showing as an independent contribution that a simple extension of a popular game-based privacy definition corresponds to a strong entropy-based notion. Our results demonstrate that replay attacks can be devastating for a voter’s privacy even when an adversary’s resources are very limited. We illustrate our formal findings by applying them to a number of real-world elections, showing that a modest number of replays can result in significant privacy loss. Overall, our work reveals that, contrary to a common belief, replay attacks can be very efficient and must therefore be considered a serious threat.

09:00-10:30 Session 94C: Joint NMR/DL Session (3) (DL)
Location: Taub 9
09:00
Rectifying Classifiers

ABSTRACT. Dealing with high-risk or safety-critical applications calls for the development of trustworthy AI systems. Beyond prediction, such systems must offer a number of additional facilities, including explanation and verification. The case when the prediction made is deemed wrong by an expert calls for still another operation, called rectification. Rectifying a classifier aims to guarantee that the predictions made by the classifier (once rectified) comply with the expert knowledge. Here, the expert is supposed more reliable than the predictor, but their knowledge is typically incomplete.

Focusing on Boolean classifiers, I will present rectification as a change operation. Following an axiomatic approach, I will give some postulates that must be satisfied by rectification operators. I will show that the family of rectification operators is disjoint from the family of revision operators and from the family of update operators. I will also present a few results about the computation of a rectification operation.

10:00
Connection-minimal Abduction in EL via translation to FOL (Extended Abstract)
PRESENTER: Sophie Tourret

ABSTRACT. Abduction is the task of finding possible extensions of a given knowledge base that would make a given sentence logically entailed. As such, it can be used to explain why the sentence does not follow, to repair incomplete knowledge bases, and to provide possible explanations for unexpected observations. We consider TBox abduction in the lightweight description logic EL, where the observation is a concept inclusion and the background knowledge is a description logic TBox. To avoid useless answers, such problems usually come with further restrictions on the solution space and/or minimality criteria that help sort the chaff from the grain. We argue that existing minimality notions are insufficient, and introduce connection minimality. This criterion rejects hypotheses that use concept inclusions “disconnected” from the problem at hand. We show how to compute a special class of connection-minimal hypotheses in a sound and complete way. Our technique is based on a translation to first-order logic, and constructs hypotheses based on prime implicates. We evaluate a prototype implementation of our approach on ontologies from the medical domain.
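
A toy instance of the kind of problem addressed (illustrative only, not taken from the paper; the characterization of connection minimality below is deliberately rough):

```latex
\mathcal{T} = \{\, \mathsf{Penguin} \sqsubseteq \mathsf{Bird} \,\},
\qquad \text{observation: } \mathsf{Penguin} \sqsubseteq \mathsf{CanFly}.
% Both H1 = { Bird ⊑ CanFly } and H2 = { Penguin ⊑ Fish, Fish ⊑ CanFly } make the
% observation entailed by T ∪ H. Roughly, connection minimality is meant to prefer
% hypotheses like H1, which connect to the observation through the background TBox,
% over ones that detour through concepts "disconnected" from the problem, like Fish.
```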

09:00-10:30 Session 94D: Invited Talk and Cooperating Reasoners (IJCAR)
09:00
From the Universality of Mathematical Truth to the Interoperability of Proof Systems (Invited Talk)
10:00
Cooperating Techniques for Solving Nonlinear Arithmetic in the cvc5 SMT Solver (System Description)
PRESENTER: Gereon Kremer

ABSTRACT. The cvc5 SMT solver solves quantifier-free nonlinear real arithmetic problems by combining the cylindrical algebraic coverings method with incremental linearization in an abstraction-refinement loop. The result is a complete algebraic decision procedure that leverages efficient heuristics for refining candidate models. Furthermore, it can be used with quantifiers, integer variables, and in combination with other theories. We describe the overall framework, individual solving techniques, and a number of implementation details. We demonstrate its effectiveness with an evaluation on the SMT-LIB benchmarks.

09:00-10:30 Session 94E (ITP)
09:00
Refinement of Parallel Algorithms down to LLVM

ABSTRACT. We present a stepwise refinement approach to develop verified parallel algorithms, down to efficient LLVM code. The resulting algorithms' performance is competitive with their counterparts implemented in C/C++.

Our approach is backwards compatible with the Isabelle Refinement Framework, such that existing sequential formalizations can easily be adapted or re-used.

As a case study, we verify a simple parallel sorting algorithm and show that it performs on par with its C++ implementation and is competitive with state-of-the-art parallel sorting algorithms.

09:30
Taming an Authoritative Armv8 ISA Specification: L3 Validation and CakeML Compiler Verification
PRESENTER: Hrutvik Kanabar

ABSTRACT. Machine-readable specifications for the Armv8 instruction set architecture have become publicly available as part of Arm's release processes, providing an official and unambiguous source of truth for the semantics of Arm instructions. To date, compiler and machine code verification efforts have made use of unofficial, theorem-proving-friendly specifications of Armv8; e.g., CakeML uses an L3-based specification. The validity of these verification efforts hinges upon their unofficial ISA specifications being valid with respect to the official Arm specification.

Leveraging the Sail language ecosystem, we bridge this validation gap by formally verifying that an L3-based specification simulates the official Arm specification using the HOL4 interactive theorem prover. We exercise this simulation by proving a novel compiler correctness result for CakeML with respect to Arm's official specification of the Armv8.6 A-class instruction set.

10:00
Dandelion: Certified Approximations of Elementary Functions
PRESENTER: Heiko Becker

ABSTRACT. Elementary function operations such as sin and exp cannot in general be computed exactly on today's digital computers, and thus have to be approximated. The standard approximations in library functions typically provide only a limited set of precisions, and are too inefficient for many applications. Polynomial approximations that are customized to a limited input domain and output accuracy can provide superior performance. In fact, the Remez algorithm computes the best possible approximation for a given polynomial degree, but has so far not been formally verified.

This paper presents Dandelion, an automated certificate checker for polynomial approximations of elementary functions computed with Remez-like algorithms that is fully verified in the HOL4 theorem prover. Dandelion checks whether the difference between a polynomial approximation and its target reference elementary function remains below a given error bound for all inputs in a given constraint. By extracting a verified binary with the CakeML compiler, Dandelion can validate certificates within a reasonable time, fully automating previous manually verified approximations.

10:30-11:00 Coffee Break
11:00-12:30 Session 96B: Cryptography 1 (CSF)
Location: Taub 2
11:00
Contingent payments from two-party signing and verification for abelian groups
PRESENTER: Sergiu Bursuc

ABSTRACT. The fair exchange problem has faced for a long time the bottleneck of a required trusted third party. The recent development of blockchains introduces a new type of party to this problem, whose trustworthiness relies on a public ledger and distributed computation. The challenge in this setting is to reconcile the minimalistic and public nature of blockchains with elaborate fair exchange requirements, from functionality to privacy. Zero-knowledge contingent payments (ZKCP) are a class of protocols that are promising in this direction, allowing the fair exchange of data for payment. We propose a new ZKCP protocol that, when compared to others, requires less computation from the blockchain and less interaction between parties. The protocol is based on two-party (weak) adaptor signatures, which we show how to instantiate from state of the art multiparty signing protocols. We improve the symbolic definition of ZKCP security and, for automated verification with Tamarin, we propose a general security reduction from the theory of abelian groups to the theory of exclusive or.

11:30
Collusion-Preserving Computation without a Mediator
PRESENTER: Yun Lu

ABSTRACT. Collusion-free (CF) and collusion-preserving (CP) protocols enrich the standard security offered by multi-party computation (MPC), to tackle settings where subliminal communication is undesirable. However, all existing solutions make arguably unrealistic assumptions on setups, such as physical presence of the parties, access to physical envelopes, or extreme isolation, where the only means of communication is a star-topology network. The above state of affairs remained a limitation of such protocols, which was even reinforced by impossibility results. Thus, for years, it has been unclear if and how the above setup assumptions could be relaxed towards more realistic scenarios. Motivated also by the increasing interest in using hardware tokens for cryptographic applications, in this work we provide the first solution to collusion preserving computation which uses weaker and more common assumptions than the state of the art, i.e., an authenticated broadcast functionality and access to honestly generated trusted hardware tokens. We prove that our protocol is collusion-preserving (in short, CP) secure as long as no parties abort. In the case of an aborting adversary, our protocol still achieves standard (G)UC security with identifiable (and unanimous) abort.

Leveraging the above identifiability property, we augment our protocol with a penalization scheme which ensures that it is not profitable to abort, thereby obtaining CP security against incentive-driven attackers. To define (and prove) this latter result, we combine the Rational Protocol Design (RPD) methodology by Garay et al. [FOCS 2013] with the CP framework of Alwen et al. [CRYPTO 2012] to derive a definition of security in the presence of incentive-driven local adversaries which can be of independent interest. Similar to existing CP/CF solutions, our protocol preserves, as a fallback, security against monolithic adversaries, even when the setup (i.e., the hardware tokens) is compromised or corrupted. In addition, our fallback solution achieves identifiable and unanimous abort, which we prove are impossible in previous CP solutions.

12:00
Bringing State-Separating Proofs to EasyCrypt - A Security Proof for Cryptobox
PRESENTER: Sabine Oechsner

ABSTRACT. Machine-checked cryptography aims to reinforce confidence in the primitives and protocols that underpin all digital security. However, machine-checked proof techniques remain in practice difficult to apply to real-world constructions. A particular challenge is structured reasoning about complex constructions at different levels of abstraction. The State-Separating Proofs (SSP) methodology for guiding cryptographic proofs by Brzuska, Delignat-Lavaud, Fournet, Kohbrok and Kohlweiss (ASIACRYPT'18) is a promising contestant to support such reasoning. In this work, we explore how SSPs can guide EasyCrypt formalisations of proofs for modular constructions. Concretely, we propose a mapping from SSP to EasyCrypt concepts which enables us to enhance cryptographic proofs with SSP insights while maintaining compatibility with existing EasyCrypt proof support. To showcase our insights, we develop a formal security proof for the cryptobox family of public-key authenticated encryption schemes based on non-interactive key exchange and symmetric authenticated encryption. As a side effect, we obtain the first formal security proof for NaCl's instantiation of cryptobox. Finally we discuss changes to the practice of SSP on paper and potential implications for future tool designers.

11:00-12:30 Session 96C: Joint NMR/DL Session (4) (DL)
Location: Taub 9
11:00
AGM Revision in Description Logics Under Fixed-Domain Semantics

ABSTRACT. While semantic approaches for revising knowledge bases are fine-grained and independent of syntactic form, they cannot be straightforwardly applied under the standard DL semantics. In this paper, we present a characterization of KB revision in DL under the fixed-domain semantics, where the domain is fixed and finite to accommodate knowledge bases adopting a closed-world assumption. We also introduce an instantiation of a model-based revision operator which satisfies all standard postulates, using the notion of distance between interpretations. The model set of the revision result is shown to be expressible as a KB in our setting. In addition, by weakening the KB based on certain domain elements, an individual-based revision operator is provided as an alternative approach.
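
A rough Python sketch of the general idea of distance-based, model-based revision over a fixed finite domain (illustrative only; the encoding and the choice of symmetric-difference distance below are assumptions, not the paper's operator):

```python
from itertools import combinations

def powerset(atoms):
    return [frozenset(s) for r in range(len(atoms) + 1)
            for s in combinations(atoms, r)]

def models(kb, atoms):
    """All interpretations (subsets of ground atoms over the fixed domain)
    satisfying `kb`, where `kb` is a predicate on interpretations."""
    return [i for i in powerset(atoms) if kb(i)]

def revise(old_kb, new_kb, atoms):
    old_m, new_m = models(old_kb, atoms), models(new_kb, atoms)
    dist = lambda i: min(len(i ^ j) for j in old_m)   # symmetric difference
    d = min(dist(i) for i in new_m)
    return [i for i in new_m if dist(i) == d]         # closest new-KB models

atoms = ["A(a)", "B(a)"]
old = lambda i: "A(a)" in i and "B(a)" in i           # old KB: A(a) and B(a)
new = lambda i: "A(a)" not in i                       # revise by not A(a)
print(revise(old, new, atoms))                        # keeps B(a): [frozenset({'B(a)'})]
```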

11:25
Repairing Ontologies via Kernel Pseudo-Contraction

ABSTRACT. Rational agents must have some internal representation of their knowledge or belief system. Belief Revision is a research area that aims at understanding how they should change their representations when they are faced with new information. In a contraction operation, a sentence is removed from a knowledge base and must not be logically entailed by the resulting set. Pseudo-contraction was proposed by Hansson as an alternative to base contraction where some degree of syntax independence is allowed. In this work, we analyse kernel constructions for pseudo-contraction operations and their formal properties. Also, we show the close relationship between concepts and definitions of Belief Revision and Ontology Repair (such as pseudo-contractions and gentle repairs, respectively).

11:50
Pointwise Circumscription in Description Logics

ABSTRACT. Circumscription is one of the major approaches to bringing non-monotonic (common-sense) reasoning features to first-order logic and related formalisms. In a nutshell, when computing logical entailments from circumscribed first-order theories, we focus on a restricted subset of the theory’s classical models, so that the less plausible models (from the common-sense perspective) are eliminated. Circumscription has been studied for knowledge bases expressed in various Description Logics (DLs), with a focus on understanding the computational complexity of reasoning. Those studies revealed that circumscription causes a dramatic increase in computational complexity in a broad range of DLs. Roughly speaking, this is due to the second-order quantification step that checks the non-existence of a “better” model, e.g., one where some specific predicate that we want to minimize has a smaller extension. In this paper, we consider a new notion of circumscription in DLs, aiming to preserve the key ideas and advantages of classical circumscription while mitigating its impact on the computational complexity of reasoning. Our main idea is to replace the second-order quantification step with a series of (pointwise) local checks on all domain elements and their immediate neighborhood. This approach provides a sound approximation of classical circumscription and is closely related to the notion of pointwise circumscription proposed by Lifschitz for first-order logic. We formalize several variants of pointwise circumscription in DLs and perform a preliminary study of computational complexity in this setting. Our main achievement is to show that, under certain syntactic restrictions, standard reasoning problems like subsumption testing or concept satisfiability for ALCIO knowledge bases with pointwise circumscription are (co)NEXPTIME-complete.

12:15
Hybrid MHS-MXP ABox Abduction Solver: First Empirical Results
PRESENTER: Martin Homola

ABSTRACT. Minimal Hitting Set (MHS) is a well-known and complete method to compute all explanations of an ABox abduction problem. MHS is NP-complete and widely recognized as inefficient. MergeXplain (MXP), on the other hand, is fast, but does not guarantee to find all explanations. MHS-MXP is a hybrid algorithm which adopts the divide-and-conquer heuristic of MXP and combines it with MHS. MHS-MXP is complete and – at least on a part of the inputs – more efficient than MHS. We describe a favourable class of inputs for the hybrid algorithm. We describe an experimental implementation which enables us to perform a first preliminary empirical evaluation on this class of inputs.
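
For orientation, the MHS side of the algorithm amounts to computing minimal hitting sets of conflict sets; a naive exponential sketch in Python (illustrative only; the actual solver interleaves this with MXP-style divide and conquer and with DL reasoner calls):

```python
from itertools import combinations

def minimal_hitting_sets(conflicts):
    """All subset-minimal sets that intersect every conflict set.
    Brute force over candidate sizes, smallest first."""
    universe = sorted(set().union(*conflicts))
    found = []
    for size in range(len(universe) + 1):
        for cand in combinations(universe, size):
            s = set(cand)
            if all(s & c for c in conflicts) and \
               not any(set(h) <= s for h in found):   # keep only minimal ones
                found.append(cand)
    return found

# Each conflict set must be "hit" by at least one element of a hitting set.
conflicts = [{"a1", "a2"}, {"a2", "a3"}, {"a1", "a3"}]
print(minimal_hitting_sets(conflicts))
# -> [('a1', 'a2'), ('a1', 'a3'), ('a2', 'a3')]
```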

11:00-12:30 Session 96D: Effective Superposition and Orderings (IJCAR)
11:00
SCL(EQ): SCL for First-Order Logic with Equality

ABSTRACT. We propose a new calculus SCL(EQ) for first-order logic with equality that only learns non-redundant clauses. Following the idea of CDCL (Conflict Driven Clause Learning) and SCL (Clause Learning from Simple Models) a ground literal model assumption is used to guide inferences that are then guaranteed to be non-redundant. Redundancy is defined with respect to a dynamically changing ordering derived from the ground literal model assumption. We prove SCL(EQ) sound and complete and provide examples where our calculus improves on superposition.

11:20
Ground joinability and connectedness in the superposition calculus
PRESENTER: André Duarte

ABSTRACT. Theories axiomatised by unit equalities, which include problems in group theory, loops, lattices, and other algebraic structures, are notoriously difficult for general-purpose theorem provers. Provers specialised for unit equational theories (UEQ) are based on Knuth-Bendix (KB) completion, in contrast to provers for general equational problems, which are based on the superposition calculus. Although Knuth-Bendix completion and superposition are closely related, a number of simplifications which make Knuth-Bendix completion efficient are not available in superposition.

In particular, key simplifications such as ground joinability, although known for more than 30 years, have not been extended from KB completion to superposition. In fact, all previous completeness proofs for ground joinability rely on proof orderings and proof reductions, which are not easily extensible to general clauses together with redundancy elimination. In this paper we address this limitation and extend superposition with ground joinability, and show that under an adapted notion of redundancy, simplifications based on ground joinability preserve completeness.

Another recently introduced simplification for UEQ is connectedness. We extend this notion to “ground connectedness” and show superposition is complete with both connectedness and ground connectedness.

We implemented ground joinability in a theorem prover, iProver, using a novel algorithm which we also present in this paper, and evaluated it over the TPTP library with encouraging results.

11:40
Term Ordering for Non-Reachability of (Conditional) Rewriting

ABSTRACT. We propose generalizations of reduction pairs, well-established techniques for proving termination of term rewriting, in order to prove unsatisfiability of reachability (infeasibility) in plain and conditional term rewriting. We adapt the weighted path order, a merger of the Knuth-Bendix order and the lexicographic path order, into the proposed framework. The proposed approach is implemented in the termination prover NaTT, and the strength of our approach is demonstrated through examples and experiments.

12:00
An Efficient Subsumption Test Pipeline for BS(LRA) Clauses
PRESENTER: Lorenz Leutgeb

ABSTRACT. The importance of subsumption testing for redundancy elimination in first-order automated reasoning is well known. Although the problem is already NP-complete for first-order clauses, the test pipelines developed in the meantime decide subsumption efficiently in almost all practical cases. We consider subsumption between first-order clauses of the Bernays-Schönfinkel fragment over linear real arithmetic constraints: BS(LRA). The bottleneck in this setup is deciding implication between the LRA constraints of two clauses. Our new sample point heuristic preempts expensive implication decisions in about 94% of all cases in benchmarks. Combined with filtering techniques for the first-order BS part of clauses, it results again in an efficient subsumption test pipeline for BS(LRA) clauses.
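
The sample point idea can be pictured as follows (a simplified Python sketch under assumed data structures, not the actual pipeline): cache a solution of the subsumee's LRA constraint and evaluate the candidate subsumer's constraint at that point; if it is violated there, the implication cannot hold and the expensive decision procedure is skipped.

```python
def satisfies(constraints, point):
    # Each constraint is (coeffs, bound), meaning sum_i coeffs[i]*x[i] <= bound.
    return all(sum(c * x for c, x in zip(coeffs, point)) <= bound
               for coeffs, bound in constraints)

def may_subsume(subsumer_lra, subsumee_sample):
    """Cheap necessary condition for 'subsumee constraint implies subsumer
    constraint': the subsumer must hold at a known model of the subsumee."""
    return satisfies(subsumer_lra, subsumee_sample)

# Subsumee constraint x <= 1 has cached sample point x = 0; the candidate
# subsumer's constraint x <= -1 fails there, so the costly check is preempted.
print(may_subsume([((1.0,), -1.0)], (0.0,)))   # False -> prune
```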

11:00-12:30 Session 96E (ITP)
11:00
Synthetic Kolmogorov Complexity in Coq
PRESENTER: Yannick Forster

ABSTRACT. We present a generalised, constructive, and machine-checked approach to Kolmogorov complexity in the constructive type theory underlying the Coq proof assistant. By proving that nonrandom numbers form a simple predicate, we obtain elegant proofs of undecidability for random and nonrandom numbers and a proof of uncomputability of Kolmogorov complexity.

We use a general and abstract definition of Kolmogorov complexity and subsequently instantiate it to several definitions frequently found in the literature.

Whereas textbook treatments of Kolmogorov complexity usually rely heavily on classical logic and the axiom of choice, we put emphasis on the constructiveness of all our arguments, without blurring their essence. We first give a high-level proof idea using classical logic, which can be formalised with Markov's principle via folklore techniques that we subsequently explain. Lastly, we show a strategy for eliminating Markov's principle from a certain class of computability proofs, rendering all our results fully constructive.

All our results are machine-checked by the Coq proof assistant, which is enabled by using a synthetic approach to computability: Rather than formalising a model of computation, which is well-known to introduce a considerable overhead, we abstractly assume a universal function, allowing the proofs to focus on the mathematical essence.
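
For reference, one of the concrete instances commonly found in the literature is plain Kolmogorov complexity with respect to a universal machine U:

```latex
C_U(x) \;=\; \min \{\, |p| \;:\; U(p) = x \,\}
```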

11:30
Formalizing Szemerédi's Regularity Lemma in Lean
PRESENTER: Yaël Dillies

ABSTRACT. Szemerédi's Regularity Lemma is a fundamental result in graph theory with extensive applications to combinatorics and number theory. In essence, it says that all graphs can be approximated by well-behaved unions of random bipartite graphs. We present a formalisation in the Lean theorem prover of a strong version of this lemma in which each part of the union must be approximately the same size. This stronger version has not been formalised previously in any theorem prover. Our proof closely follows the pen-and-paper method, allowing our formalisation to provide an explicit upper bound on the number of parts. An application of this lemma is also formalised, namely Roth's theorem on arithmetic progressions in qualitative form via the triangle removal lemma.
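
For reference, the notion at the heart of the lemma: writing d(X,Y) for the edge density between vertex sets X and Y, a pair (A,B) is ε-regular when

```latex
\forall A' \subseteq A,\ B' \subseteq B:\quad
|A'| \ge \varepsilon |A| \,\wedge\, |B'| \ge \varepsilon |B|
\;\Longrightarrow\; |d(A',B') - d(A,B)| \le \varepsilon .
```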

12:00
Formalizing the divergence theorem and the Cauchy integral formula in Lean

ABSTRACT. I formalize a version of the divergence theorem for a function on a rectangular box that does not assume regularity of individual partial derivatives, only Fréchet differentiability of the vector field and integrability of its divergence. Then I use this theorem to prove Cauchy-Goursat theorem (for some simple domains) and bootstrap complex analysis in the Lean mathematical library. The main tool is the GP-integral, a version of the Henstock-Kurzweil integral suggested by J. Mawhin in 1981. The divergence theorem for this integral does not require any regularity assumptions on the derivative of a vector field.
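
The classical identity being formalised, for a rectangular box [a,b], a Fréchet-differentiable vector field F, and integrable divergence, is

```latex
\int_{[a,b]} \operatorname{div} F \, dV \;=\; \int_{\partial [a,b]} F \cdot n \, dA ,
```

with the GP-integral removing the usual regularity assumptions on the partial derivatives.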

11:00-12:30 Session 96F: Joint DL/NMR Session D (NMR)
Location: Taub 9
11:00
AGM Revision in Description Logics Under Fixed-Domain Semantics
11:25
Repairing Ontologies via Kernel Pseudo-Contraction

11:50
Pointwise Circumscription in Description Logics
12:15
Hybrid MHS-MXP ABox Abduction Solver: First Empirical Results
PRESENTER: Martin Homola
12:30-14:00 Lunch Break

Lunches will be held in Taub lobby (CAV, CSF) and in The Grand Water Research Institute (DL, NMR, IJCAR, ITP).

14:00-15:30 Session 97A: Formal Methods for Hardware, Cyber-Physical and Hybrid Systems (CAV)
Location: Taub 1
14:00
Oblivious Online Monitoring for Safety LTL Specification via Fully Homomorphic Encryption
PRESENTER: Ryotaro Banno

ABSTRACT. In many Internet of Things (IoT) applications, data sensed by an IoT device are continuously sent to the server and monitored against a specification. Since the data often contain sensitive information, and the monitored specification is usually proprietary, both must be kept private from the other end. We propose a protocol to conduct oblivious online monitoring---online monitoring conducted without revealing the private information of each party to the other---against a safety LTL specification. In our protocol, we first convert a safety LTL formula into a DFA and conduct online monitoring with the DFA. Based on fully homomorphic encryption (FHE), we propose two online algorithms (ReverseStream and BlockStream) to run a DFA obliviously. We prove the correctness and security of our entire protocol. We also show the scalability of our algorithms theoretically and empirically. Our case study shows that our algorithms are fast enough to monitor blood glucose levels online, demonstrating our protocol's practical relevance.
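
The monitoring core, stripped of the FHE layer, is online execution of a DFA over the data stream; a plaintext Python sketch (illustrative only, the transition table below is a hypothetical toy monitor):

```python
def run_dfa_online(delta, accepting, state, stream):
    """delta: dict (state, symbol) -> state; yields one verdict per input."""
    for symbol in stream:
        state = delta[(state, symbol)]
        yield state in accepting          # "still safe so far?"

# Toy monitor for "never two consecutive 'high' readings".
delta = {(0, "ok"): 0, (0, "high"): 1,
         (1, "ok"): 0, (1, "high"): 2,
         (2, "ok"): 2, (2, "high"): 2}    # state 2 = violation (sink)
verdicts = run_dfa_online(delta, accepting={0, 1}, state=0,
                          stream=["ok", "high", "ok", "high", "high"])
print(list(verdicts))                      # [True, True, True, True, False]
```

In the actual protocol both the DFA (derived from the proprietary specification) and the sensed inputs remain encrypted under FHE.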

14:20
Abstraction Modulo Stability for Reverse Engineering
PRESENTER: Anna Becchi

ABSTRACT. The analysis of legacy systems requires the automated extraction of high-level specifications. We propose a framework, called Abstraction Modulo Stability, for the analysis of transition systems operating in stable states, and responding with run-to-completion transactions to external stimuli. The abstraction captures the effects of external stimuli on the system state, and describes them in the form of a finite state machine. This approach is parametric in a set of predicates of interest and in the definition of stability. We consider some possible stability definitions which yield different practically relevant abstractions, and propose a parametric algorithm for abstraction computation. The obtained FSM is extended with guards and effects on a given set of variables of interest. The framework is evaluated in terms of expressivity and adequacy within an industrial project with the Italian Railway Network, on reverse engineering tasks for relay-based interlocking circuits, to extract specifications for a computer-based reimplementation.

14:40
Reachability of Koopman Linearized Systems Using Random Fourier Feature Observables and Polynomial Zonotope Refinement
PRESENTER: Ethan Lew

ABSTRACT. Koopman operator linearization approximates nonlinear systems of differential equations with higher-dimensional linear systems. For formal verification using reachability analysis, this is an attractive conversion, as highly scalable methods exist to compute reachable sets for linear systems. However, two main challenges are present with this approach, both of which are addressed in this work. First, the approximation must be sufficiently accurate for the result to be meaningful, which is controlled by the choice of observable functions during Koopman operator linearization. By using random Fourier features as observable functions, the process becomes more systematic than earlier work, while providing a higher-accuracy approximation. Second, although the higher-dimensional system is linear, simple convex initial sets in the original space can become complex non-convex initial sets in the linear system. We overcome this using a combination of Taylor model arithmetic and polynomial zonotope refinement. Compared with prior work, the result is more efficient, more systematic and more accurate.
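
A minimal numpy sketch of the lifting step (illustrative, not the paper's implementation): random Fourier features serve as observables, and a linear operator on the lifted data is fit by least squares, EDMD-style.

```python
import numpy as np

rng = np.random.default_rng(0)

def rff(x, W, b):
    return np.cos(x @ W + b)              # random Fourier feature observables

# One-step data pairs (x, x') of a toy nonlinear map x' = 0.9*x + 0.1*x**2.
X = rng.uniform(-1, 1, size=(200, 1))
Y = 0.9 * X + 0.1 * X**2

W = rng.normal(scale=2.0, size=(1, 50))   # random frequencies
b = rng.uniform(0, 2 * np.pi, size=50)    # random phases
PsiX, PsiY = rff(X, W, b), rff(Y, W, b)

K, *_ = np.linalg.lstsq(PsiX, PsiY, rcond=None)   # lifted linear dynamics
x0 = np.array([[0.5]])
pred = rff(x0, W, b) @ K                  # one-step prediction in feature space
print(np.linalg.norm(pred - rff(0.9 * x0 + 0.1 * x0**2, W, b)))  # small residual
```

Reachability analysis then operates on the linear operator K, with the set-representation machinery (polynomial zonotopes, Taylor models) handling the non-convex lifted initial sets.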

15:00
RINO: Robust INner and Outer Approximated Reachability of Neural Networks Controlled Systems
PRESENTER: Sylvie Putot

ABSTRACT. We present a unified approach, implemented in the RINO tool, for the computation of inner and outer approximations of reachable sets of discrete-time and continuous-time dynamical systems, possibly controlled by neural networks with differentiable activation functions. RINO combines a zonotopic set representation with generalized mean-value AE extensions to compute under- and over-approximations of the robust range of differentiable functions, and applies these techniques to the particular case of learning-enabled dynamical systems. The AE extensions require an efficient and accurate evaluation of the function and its Jacobian with respect to the inputs and initial conditions. For continuous-time dynamical systems, possibly controlled by neural networks, the function to evaluate is the solution of the dynamical system. It is over-approximated in RINO using Taylor methods in time coupled with a set-based evaluation with zonotopes. We demonstrate the good performance of RINO compared to the state-of-the-art tools Verisig 2.0 and ReachNN* on a set of classical benchmark examples of neural network controlled closed-loop systems. For generally comparable precision to Verisig 2.0 and higher precision than ReachNN*, RINO is always at least one order of magnitude faster, while also computing the more involved inner-approximations that the other tools do not compute.

15:10
STLmc: Robust STL Model Checking of Hybrid Systems using SMT
PRESENTER: Kyungmin Bae

ABSTRACT. We present the STLmc model checker for signal temporal logic (STL) properties of hybrid systems. STLmc can perform STL model checking up to a robustness threshold for a wide range of nonlinear hybrid systems with ordinary differential equations. Our tool utilizes the refutation-complete SMT-based model checking algorithm with various SMT solvers by reducing the robust STL model checking problem into the Boolean STL model checking problem. If STLmc does not find a counterexample, the system is guaranteed to be correct up to the given bounds and robustness threshold. We demonstrate the effectiveness of STLmc on a number of hybrid system benchmarks.

15:20
UCLID5: Multi-Modal Formal Modeling, Verification, and Synthesis

ABSTRACT. UCLID5 is a tool for the multi-modal formal modeling, verification, and synthesis of systems. It enables one to tackle verification problems for heterogeneous systems such as combinations of hardware and software, or those that have multiple, varied specifications, or systems that require hybrid modes of modeling. A novel aspect of UCLID5 is an emphasis on the use of syntax-guided and inductive synthesis to automate steps in modeling and verification. This tool paper presents new developments in the UCLID5 tool including new language features, integration with new techniques for syntax-guided synthesis and satisfiability solving, support for hyperproperties and combinations of axiomatic and operational modeling, demonstrations on new problem classes, and a robust implementation.

14:00-15:30 Session 97B: Invited Talk 1 (CSF)
Location: Taub 2
14:00
Membership Inference Attacks against Classifiers

ABSTRACT. In membership inference (MI) attacks, the adversary is given access to a target classifier and a number of data instances, and aims to determine whether these data instances have been used when training the classifier. Given the close relationship between membership inference and differential privacy, the degree of vulnerability to MI attacks is an excellent empirical measure of a classifier's level of privacy.

MI attacks can be studied in different settings depending on how the adversary is allowed to access a model, e.g., black-box access to the target classifier, access to model parameters of the classifier, access to gradients during training of the classifier such as in a federated learning setting, or the ability to actively manipulate the training process to conduct membership inference.

In this talk, we will discuss existing results on MI attacks and defense mechanisms in these settings, and discuss open problems.
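
As a concrete baseline (a standard confidence-thresholding attack, not necessarily one discussed in the talk): in the black-box setting, the adversary can simply threshold the model's confidence on the queried instance, since overfitted models tend to be more confident on their training data. A minimal sketch with synthetic confidence scores:

```python
import numpy as np

def mi_attack(confidences, threshold=0.9):
    """Guess 'member' whenever the model's confidence exceeds the threshold."""
    return confidences >= threshold

rng = np.random.default_rng(1)
member_conf = rng.beta(8, 1, size=1000)     # synthetic: higher confidence on members
nonmember_conf = rng.beta(4, 2, size=1000)  # synthetic: lower confidence on non-members
guesses = np.concatenate([mi_attack(member_conf), mi_attack(nonmember_conf)])
truth = np.concatenate([np.ones(1000, bool), np.zeros(1000, bool)])
print("attack accuracy:", (guesses == truth).mean())   # noticeably above 0.5 => leakage
```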

14:00-15:30 Session 97C: Query Answering & Extensions (DL)
Location: Taub 9
14:00
Querying Inconsistent Prioritized Data with ORBITS: Algorithms, Implementation, and Experiments (Extended Abstract)
PRESENTER: Meghyn Bienvenu

ABSTRACT. We investigate practical algorithms for inconsistency-tolerant query answering over prioritized knowledge bases, which consist of a logical theory, a set of facts, and a priority relation between conflicting facts. We consider three well-known semantics (AR, IAR and brave) based upon two notions of optimal repairs (Pareto and completion). Deciding whether a query answer holds under these semantics is (co)NP-complete in data complexity for a large class of logical theories, and SAT-based procedures have been devised for repair-based semantics when there is no priority relation, or the relation has a special structure. The present paper introduces the first SAT encodings for Pareto- and completion-optimal repairs w.r.t. general priority relations and proposes several ways of employing existing and new encodings to compute answers under (optimal) repair-based semantics, by exploiting different reasoning modes of SAT solvers. The comprehensive experimental evaluation of our implementation compares both (i) the impact of adopting semantics based on different kinds of repairs, and (ii) the relative performances of alternative procedures for the same semantics.

14:25
Ontology-based Data Federation (Extended Abstract)
PRESENTER: Diego Calvanese

ABSTRACT. We formally introduce ontology-based data federation (OBDF), to denote a framework combining ontology-based data access (OBDA) with a data federation layer, which virtually exposes multiple heterogeneous sources as a single relational database. In this setting, the SQL queries generated by the OBDA component by translating user SPARQL queries are further transformed by the data federation layer so as to be efficiently executed over the data sources. The structure of these SQL queries directly affects their execution time in the data federation layer and their optimization is crucial for performance. We propose here novel optimizations specific for OBDF, which are based on "hints" about existing data redundancies in the sources, empty join operations, and the need for materialized views. Such hints can be systematically inferred by analyzing the OBDA mappings and ontology and exploited to simplify the query structure. We also carry out an experimental evaluation in which we show the effectiveness of our optimizations.

14:50
Comonadic Semantics for Description Logics Games

ABSTRACT. A categorical approach to study model comparison games in terms of comonads was recently initiated by Abramsky et al. In this work, we analyse games that appear naturally in the context of description logics and knowledge representation. We consider expressive sublogics of ALCIOQbSelf, namely, the logics that extend ALC with any combination of inverses, nominals, number restrictions, safe boolean role combinations and the Self operator. Our construction augments and modifies the so-called modal comonad by Abramsky and Shah. The approach that we took heavily relies on the use of relative monads, which we leverage to encapsulate additional capabilities within the bisimulation games in a compositional manner.

15:05
Advanced languages of terms for ontologies

ABSTRACT. This paper is about integrating, in a single formalism, knowledge representation languages such as those provided by description logics and rule-based reasoning paradigms such as those provided by logic programming languages. We aim at creating a hybrid formalism where description logic constructs are used for defining concepts that are given as arguments to the predicates of the logic programs.

14:00-15:30 Session 97D: Knowledge Representation and Justification (IJCAR)
14:00
Actions over Core-closed Knowledge Bases
PRESENTER: Claudia Cauli

ABSTRACT. We present new results on the application of semantic- and knowledge-based reasoning techniques to the analysis of cloud deployments; in particular, to the security of Infrastructure as Code configuration files, encoded as description logic knowledge bases. We introduce an action language to model mutating actions; that is, actions that change the structural configuration of a given deployment by adding, modifying, or deleting resources. We mainly focus on two problems: the problem of determining whether the execution of an action, no matter the parameters passed to it, will not cause the violation of some security requirement (static verification), and the problem of finding sequences of actions that would lead the deployment to a state where (un)desirable properties are (not) satisfied (plan existence and plan synthesis). For all these problems, we provide definitions, complexity results, and decision procedures.

14:20
GK: Implementing Full First Order Default Logic for Commonsense Reasoning (System Description)
PRESENTER: Tanel Tammet

ABSTRACT. Our goal is to develop a logic-based component for hybrid – machine learning plus logic – commonsense question answering systems. The paper presents an implementation GK of default logic for handling rules with exceptions in unrestricted first order knowledge bases. GK is built on top of our existing automated reasoning system with confidence calculation capabilities. To overcome the problem of undecidability of checking potential exceptions, GK performs delayed recursive checks with diminishing time limits. These are combined with the taxonomy-based priorities for defaults and numerical confidences.
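
The classic example of a default rule with an exception, which such a system must handle over unrestricted first-order knowledge bases:

```latex
\frac{\mathsf{Bird}(x) \;:\; \mathsf{Flies}(x)}{\mathsf{Flies}(x)}
% "Birds normally fly": conclude Flies(x) from Bird(x) unless an exception,
% e.g. Penguin(x) together with forall x. Penguin(x) -> not Flies(x),
% makes the justification Flies(x) inconsistent.
```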

14:35
Local Reductions for the Modal Cube
PRESENTER: Cláudia Nalon

ABSTRACT. The modal logic K is commonly used to represent and reason about necessity and possibility, and its extensions with combinations of additional axioms are used to represent knowledge, belief, desires and intentions. Here we present local reductions of all propositional modal logics in the so-called modal cube, that is, extensions of K with arbitrary combinations of the axioms B, D, T, 4 and 5, to a normal form comprising a formula and the set of modal levels it occurs at. Using these reductions we can carry out reasoning for all these logics with the theorem prover KSP. We define benchmarks for these logics and experiment with the reduction approach as compared to an existing resolution calculus with specialised inference rules for the various logics.
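
For reference, the standard formulations of the axioms whose combinations span the modal cube:

```latex
\mathbf{T}:\ \Box p \rightarrow p \qquad
\mathbf{D}:\ \Box p \rightarrow \Diamond p \qquad
\mathbf{4}:\ \Box p \rightarrow \Box\Box p \qquad
\mathbf{B}:\ p \rightarrow \Box\Diamond p \qquad
\mathbf{5}:\ \Diamond p \rightarrow \Box\Diamond p
```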

14:55
Evonne: Interactive Proof Visualization for Description Logics (System Description)
PRESENTER: Patrick Koopmann

ABSTRACT. Explanations for description logic (DL) entailments provide important support for the design and maintenance of large ontologies. The "justifications" usually employed for this purpose in ontology editors pinpoint the parts of the ontology responsible for a given entailment. Proofs for entailments make the intermediate reasoning steps explicit, and thus explain how a consequence can actually be derived. We present an interactive system for exploring description logic proofs, called Evonne, which not only visualizes proofs of consequences for ontologies written in expressive DLs, but also offers several proof generation procedures and various functions for condensing large proofs.

15:10
Hyper-graph based Inference Rules for Computing EL+-Ontology Justifications
PRESENTER: Hui Yang

ABSTRACT. To give concise explanations for a conclusion obtained by reasoning over ontologies, justifications have been proposed as minimal subsets of an ontology that entail the given conclusion. Even though computing one justification can be done in polynomial time for tractable Description Logics such as EL+, computing all justifications is complicated and often challenging for real-world ontologies. In this paper, based on a graph representation of EL+-ontologies, we propose a new set of inference rules (called H-rules) and take advantage of them to provide a new method for computing all justifications for a given conclusion. The advantage of our setting is that, most of the time, it reduces the number of inferences (generated by H-rules) required to derive a given conclusion. This accelerates the enumeration of justifications relying on these inferences. We validate our approach by running real-world ontology experiments. Our graph-based approach outperforms PULi, the state-of-the-art algorithm, in most cases.

14:00-15:30 Session 97E (ITP)
14:00
Deeper Shallow Embeddings
PRESENTER: Jacob Prinz

ABSTRACT. Deep and shallow embeddings are two popular techniques for embedding a language in a host language with complementary strengths and weaknesses. In a deep embedding, embedded constructs are defined as data in the host: this allows for syntax manipulation and facilitates metatheoretic reasoning, but is challenging to implement---especially in the case of dependently typed embedded languages. In a shallow embedding, by contrast, constructs are encoded using features of the host: this makes them quite straightforward to implement, but limits their use in practice.

In this paper, we attempt to bridge the gap between the two, by presenting a general technique for extending a shallow embedding of a type theory with a deep embedding of its typing derivations. Such embeddings are almost as straightforward to implement as shallow ones, but come with capabilities traditionally associated with deep ones. We demonstrate these increased capabilities in a number of case studies; including a DSL that only holds affine terms, and a dependently typed core language with computational beta reduction that leverages function extensionality.

14:30
Compositional Verification of Interacting Systems using Event Monads
PRESENTER: Bohua Zhan

ABSTRACT. Large software systems are usually divided into multiple components that interact with each other. How to verify interacting components in a modular way is one of the major problems in formal verification. In many cases, interaction between components can be modeled asynchronously, where events are sent without requiring a response in order to continue with execution of the component. In this paper, we propose a lightweight, event-based framework for verification of components with asynchronous interaction. We define event monads and event systems, and Hoare-logic-style calculi for reasoning about them. The framework is implemented in Isabelle and applied to several case studies, including models for distributed computing, cache-coherence protocols, and verification of partition scheduling in a real-time operating system.

15:00
Reflexive tactics for algebra, revisited

ABSTRACT. Computational reflection allows us to turn verified decision procedures into efficient automated reasoning tools in proof assistants. The typical applications of such methodology include mathematical structures that have decidable theory fragments, e.g., equational theories of commutative rings and lattices. However, such existing tools are known not to cooperate with packed classes, a methodology to define mathematical structures in dependent type theory, that allows for the sharing of vocabulary across the inheritance hierarchy. Additionally, such tools do not support homomorphisms whose domain and codomain types may differ. This paper demonstrates how to implement reflexive tactics that support packed classes and homomorphisms. As applications of our methodology, we adapt the ring and field tactics of Coq to the commutative ring and field structures of the Mathematical Components library, and apply the resulting tactics to the formal proof of the irrationality of ζ(3) by Chyzak, Mahboubi, and Sibut-Pinote, to bring more proof automation.

14:00-15:30 Session 97F: Belief Revision (NMR)
Location: Taub 4
14:00
Trust Graphs for Belief Revision: Framework and Implementation
PRESENTER: Aaron Hunter

ABSTRACT. Trust plays a role in the process of belief revision. When information is reported by another agent, it should only be believed if the reporting agent is trusted as an authority over some relevant domain. In practice, an agent will be trusted on a particular topic if they have provided accurate information on that topic in the past. In this paper, we demonstrate how an agent can construct a model of knowledge-based trust based on the accuracy of past reports. We then show how this model of trust can be used in conjunction with Ordinal Conditional Functions to define two approaches to trust-influenced belief revision. In the first approach, strength of trust and strength of belief are assumed to be incomparable as they are on different scales. In the second approach, they are aggregated in a natural manner. We then describe a software tool for modelling and reasoning about trust and belief change. Our software allows a trust graph to be updated incrementally by looking at the accuracy of past reports. After constructing a trust graph, the software can then compute the result of AGM-style belief revision using two different approaches to incorporating trust.

14:30
Truth-Tracking with Non-Expert Information Sources
PRESENTER: Joseph Singleton

ABSTRACT. We study what can be learned when receiving reports from multiple non-expert information sources. We suppose that sources report all that they consider possible, given their expertise. This may result in false and inconsistent reports when sources lack expertise on a topic. A learning method is truth-tracking, roughly speaking, if it eventually converges to correct beliefs about the ``actual" world. This involves finding both the actual state of affairs in the domain described by the sources, and finding the extent of the expertise of the sources themselves. We investigate the extent to which truth-tracking is possible, and describe what information can be learned even if the actual world cannot be pinned down uniquely. We find that a broad spread of expertise among the sources allows the actual state of affairs to be found, even if no individual source is an expert on all topics. On the other hand, narrower expertise at the individual level allows the actual expertise to be found more easily. Finally, we turn to concrete families of learning methods adapted from the belief change literature, and characterise the conditions under which they are truth-tracking.

15:00
Asking human reasoners to judge postulates of belief change for plausibility
PRESENTER: Clayton Baker

ABSTRACT. Empirical methods have been used to test whether human reasoning conforms to models of reasoning in logic-based artificial intelligence. This work uses surveys to investigate whether postulates of belief revision and update are plausible to human reasoners. The results show that participants' reasoning tends to be consistent with the postulates of belief revision and belief update when the premises and conclusion of a postulate are judged separately.

15:30-16:00Coffee Break
16:00-17:30 Session 98A: Probabilistic Techniques (CAV)
Location: Taub 1
16:00
PAC Statistical Model Checking of Mean Payoff in Discrete- and Continuous-Time MDP

ABSTRACT. Markov decision processes (MDP) and continuous-time MDP (CTMDP) are the fundamental models for non-deterministic systems with probabilistic uncertainty. Mean payoff (a.k.a. long-run average reward) is one of the most classic objectives considered in their context. We provide the first algorithm to compute mean payoff probably approximately correctly in unknown MDP; further, we extend it to unknown CTMDP. We do not require any knowledge of the state space, only a lower bound on the minimum transition probability, which has been advocated in the literature. In addition to providing PAC bounds for our algorithm, we also demonstrate its practical nature by running experiments on standard benchmarks.
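The following Python fragment is only a naive illustration of the PAC flavour of the problem, not the paper's algorithm (which handles unknown MDPs): it estimates the mean payoff of a known toy MDP by averaging long simulated runs, and uses a Hoeffding-style bound to choose the number of runs. The MDP, horizon, and tolerance values are assumptions for the example, and finite-run averages only approximate the true long-run average.

```python
import math
import random

# Toy MDP (single relevant action per state): states 0 and 1, rewards in [0, 1].
TRANS = {0: [(0.9, 0), (0.1, 1)], 1: [(0.5, 0), (0.5, 1)]}
REWARD = {0: 0.0, 1: 1.0}

def run_average(horizon, rng):
    """Average reward along one simulated run; for large horizons this
    approximates the mean payoff (long-run average reward)."""
    state, total = 0, 0.0
    for _ in range(horizon):
        total += REWARD[state]
        r, acc = rng.random(), 0.0
        for p, succ in TRANS[state]:
            acc += p
            if r <= acc:
                state = succ
                break
    return total / horizon

def hoeffding_samples(epsilon, delta):
    """Number of i.i.d. samples in [0, 1] needed so that the empirical mean is
    within epsilon of the true mean with probability at least 1 - delta."""
    return math.ceil(math.log(2.0 / delta) / (2.0 * epsilon ** 2))

rng = random.Random(0)
n = hoeffding_samples(epsilon=0.05, delta=0.05)     # 738 runs for this toy setting
estimate = sum(run_average(2000, rng) for _ in range(n)) / n
print(f"{n} runs, estimated mean payoff ~ {estimate:.3f} (true value is 1/6)")
```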

16:20
Sampling-Based Verification of CTMCs with Uncertain Rates
PRESENTER: Thom Badings

ABSTRACT. We employ uncertain parametric CTMCs with parametric transition rates and a prior on the parameter values. The prior encodes uncertainty about the actual transition rates, while the parameters allow dependencies between transition rates. Sampling the parameter values from the prior distribution then yields a standard CTMC, for which we may compute relevant reachability probabilities. We provide a principled solution, based on a technique called scenario optimization, to the following problem: From a finite set of parameter samples and a user-specified confidence level, compute confidence regions on the reachability probabilities. The confidence regions should (with high probability) contain the reachability probabilities of a CTMC induced by any additional sample. To boost the scalability of the approach, we employ standard abstraction techniques and adapt our methodology to support approximate reachability probabilities. Experiments with various well-known benchmarks show the applicability of the approach.
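As a rough intuition for the sampling step, the sketch below draws transition rates from a prior for a one-transition toy CTMC, computes the time-bounded reachability probability in closed form for each sample, and reports the empirical range of the results; scenario optimization is what would turn the sample count into a formal confidence statement about unseen samples. The prior, time bound, and model are assumptions, and the sketch omits the abstraction and approximation machinery described in the abstract.

```python
import math
import random

def reach_prob(rate, horizon):
    """Closed-form time-bounded reachability for a single exponential transition
    s0 --rate--> goal: P(reach goal within `horizon`) = 1 - exp(-rate * horizon)."""
    return 1.0 - math.exp(-rate * horizon)

rng = random.Random(42)
N, T = 1000, 2.0
# Sample the uncertain rate from a (made-up) uniform prior and solve each sampled CTMC.
samples = [reach_prob(rng.uniform(0.5, 1.5), T) for _ in range(N)]
lo, hi = min(samples), max(samples)
print(f"empirical region from {N} samples: [{lo:.3f}, {hi:.3f}]")
```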

16:40
Playing Against Fair Adversaries in Stochastic Games with Total Rewards
PRESENTER: Pablo Castro

ABSTRACT. We investigate zero-sum turn-based two-player stochastic games in which the objective of one player is to maximize the amount of rewards obtained during a play, while the other aims at minimizing it. We focus on games in which the minimizer plays in a fair way. We believe that these kinds of games enjoy interesting applications in software verification, where the maximizer plays the role of a system intending to maximize the number of “milestones” achieved, and the minimizer represents the behavior of an uncooperative yet fair environment. Normally, to study total reward properties, games are required to be stopping (i.e., they reach a terminal state with probability 1). We relax this requirement and only ask that the game is stopping under a fair minimizing player. We prove that these games are determined, i.e., each state of the game has a well-defined value. Furthermore, we show that both players have memoryless and deterministic optimal strategies, and the game value can be computed by approximating the greatest fixed point of a set of functional equations. We implemented our approach in a prototype tool and evaluated it on an illustrative example and an Unmanned Aerial Vehicle case study.
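To make the functional equations concrete, here is an illustrative value-iteration sketch on a made-up toy game that stops under every strategy pair (so the fixed point is unique). It deliberately ignores the fairness restriction and the greatest-fixed-point subtleties the paper addresses; the game, rewards, and iteration count are assumptions for the example.

```python
# Toy turn-based stochastic game. Each state belongs to a player ("max", "min") or is
# terminal; actions carry an immediate reward and a distribution over successors.
GAME = {
    "s_max": ("max", [("a", 1.0, [(0.5, "s_min"), (0.5, "stop")])]),
    "s_min": ("min", [("b", 0.0, [(0.9, "stop"), (0.1, "s_max")]),
                      ("c", 0.0, [(0.8, "stop"), (0.2, "s_max")])]),
    "stop":  ("terminal", []),
}

def value_iteration(game, iterations=200):
    """Iterate the total-reward functional equations
    V(s) = opt_a [ reward(s, a) + sum_t P(t | s, a) * V(t) ],
    with opt = max on the maximizer's states and min on the minimizer's."""
    values = {s: 0.0 for s in game}
    for _ in range(iterations):
        new = {}
        for s, (owner, actions) in game.items():
            if owner == "terminal":
                new[s] = 0.0
                continue
            outcomes = [r + sum(p * values[t] for p, t in succ) for _, r, succ in actions]
            new[s] = max(outcomes) if owner == "max" else min(outcomes)
        values = new
    return values

print(value_iteration(GAME))   # ~{'s_max': 1.053, 's_min': 0.105, 'stop': 0.0}
```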

17:00
Automated Expected Amortised Cost Analysis of Probabilistic Data Structures
PRESENTER: Lorenz Leutgeb

ABSTRACT. In this paper, we present the first fully automated expected amortised cost analysis of self-adjusting data structures, that is, of randomised splay trees, randomised splay heaps and randomised meldable heaps, which so far have only been analysed (semi-)manually in the literature. We generalise a type-and-effect system for logarithmic amortised resource analysis, recently proposed by Hofmann et al., to probabilistic programs. Further, we provide a prototype implementation able to fully automatically analyse the aforementioned case studies.

17:20
Murxla: A Modular and Highly Extensible API Fuzzer for SMT Solvers
PRESENTER: Aina Niemetz

ABSTRACT. State-of-the-art SMT solvers are highly complex pieces of software with performance, robustness, and correctness as key requirements. Complementing traditional testing techniques for these solvers with randomized stress testing has been shown to be quite effective. Recent work has showcased the value of input fuzzing for finding issues, but typically does not comprehensively test a solver’s API. Previous work on model-based API fuzzing was tailored to a single solver and a small subset of SMT-LIB. We present Murxla, a comprehensive, modular, and highly extensible model-based API fuzzer for SMT solvers. Murxla randomly generates valid sequences of solver API calls based on a customizable API model, with full support for the semantics and features of SMT-LIB. It is solver-agnostic but extensible to allow for solver-specific testing and supports option fuzzing, cross-checking with other solvers, translation to SMT-LIB v2, and SMT-LIB v2 input fuzzing. Our evaluation confirms its efficacy in finding issues in multiple state-of-the-art SMT solvers.
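The sketch below shows the general shape of model-based API fuzzing with cross-checking, in the spirit described above; it is not Murxla's architecture or API. The Solver interface, the deliberately buggy second solver, and the tiny clause-level "model" of the API are all made-up stand-ins so the example is self-contained and runnable.

```python
import random

class Solver:
    """Hypothetical solver API: declare Boolean variables, assert clauses, check satisfiability.
    check_sat uses brute-force enumeration, which is fine for the tiny instances we fuzz."""
    def __init__(self):
        self.clauses, self.n_vars = [], 0
    def declare_var(self):
        self.n_vars += 1
        return self.n_vars
    def assert_clause(self, clause):          # clause = list of signed variable indices
        self.clauses.append(clause)
    def check_sat(self):
        for bits in range(2 ** self.n_vars):
            model = {v: bool((bits >> (v - 1)) & 1) for v in range(1, self.n_vars + 1)}
            if all(any(model[abs(l)] == (l > 0) for l in cl) for cl in self.clauses):
                return "sat"
        return "unsat"

class BuggySolver(Solver):
    """Second 'solver' with an injected bug, so cross-checking has something to find."""
    def check_sat(self):
        if any(len(cl) >= 3 for cl in self.clauses):
            return "sat"                       # wrong whenever the instance is actually unsat
        return super().check_sat()

def fuzz_once(rng, max_calls=20):
    """Generate one random, valid API-call sequence, replay it on both solvers, cross-check."""
    a, b, trace = Solver(), BuggySolver(), []
    for _ in range(max_calls):
        if a.n_vars == 0 or rng.random() < 0.3:
            a.declare_var(); b.declare_var(); trace.append("declare_var")
        else:
            cl = [rng.choice([1, -1]) * rng.randint(1, a.n_vars)
                  for _ in range(rng.randint(1, 3))]
            a.assert_clause(list(cl)); b.assert_clause(list(cl)); trace.append(f"assert {cl}")
    ra, rb = a.check_sat(), b.check_sat()
    if ra != rb:
        print("discrepancy:", ra, "vs", rb, "on trace", trace)
        return True
    return False

rng = random.Random(7)
for _ in range(1000):                          # bounded fuzzing loop
    if fuzz_once(rng):
        break
```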

16:00-17:30 Session 98C: Tractable DL (DL)
Location: Taub 9
16:00
Efficient TBox Reasoning with Value Restrictions using the FL0wer Reasoner (Extended Abstract)
PRESENTER: Patrick Koopmann

ABSTRACT. The inexpressive Description Logic (DL) FL0, which has conjunction and value restriction as its only concept constructors, had fallen into disrepute when it turned out that reasoning in FL0 w.r.t. general TBoxes is ExpTime-complete, that is, as hard as in the considerably more expressive logic ALC. In this paper, published in the journal Theory and Practice of Logic Programming, we rehabilitate FL0 by presenting a dedicated subsumption algorithm for FL0, which is much simpler than the tableau-based algorithms employed by highly optimized DL reasoners. Our experiments show that the performance of our novel algorithm, as prototypically implemented in our FL0wer reasoner, compares very well with that of the highly optimized reasoners. FL0wer can also deal with ontologies written in the extension FL⊥ of FL0 with the top and the bottom concept by employing a polynomial-time reduction, shown in this paper, which eliminates top and bottom. We also investigate the complexity of reasoning in DLs related to the Horn fragments of FL0 and FL⊥.

16:25
Actively Learning ELIQs in the Presence of DL-Lite-Horn Ontologies

ABSTRACT. Learning, in Angluin’s framework of exact learning, a query in the presence of a description logic ontology often involves as a crucial (iterated) step the generalization of a hypothesis query. This has been done, for example, by constructing a least general common generalization of the hypothesis and a counterexample that was provided by the oracle. In this research note, we observe that it is useful to instead use a more liberal construction that uses the counterexample as a guide to produce a generalization of the hypothesis, but does not necessarily generalize the counterexample. This approach allows us to learn ELI concept queries in polynomial time in the presence of DL-LiteF-Horn ontologies.

16:50
A new dimension to generalization: computing temporal EL concepts from positive examples (Extended Abstract)

ABSTRACT. The authoring of complex concepts or (instance) queries written in a description logic (DL) can be a difficult task. An established approach to generate such concepts from positive examples is to employ the most specific concept (msc), which generalizes an ABox individual into a concept, and the least common subsumer (lcs), which generalizes a collection of concepts into a single concept. These inferences have been investigated for EL, but so far there are no methods for two-dimensional cases such as temporalized DLs. We report in this abstract on our computation algorithms for the lcs and the msc in a temporalized DL: EL extended by the LTL operators next and global. We sketch the computation algorithms for both inferences---with and without the use of rigid symbols.

17:15
Reasoning about actions with EL ontologies in a temporal action theory (Extended Abstract)
PRESENTER: Laura Giordano

ABSTRACT. In this extended abstract we report about an approach for reasoning about actions with domain descriptions including an EL⊥ ontology in a temporal action theory. The action theory is based on a Dynamic Linear Time Temporal Logic, and extensions are defined through temporal answer sets. The work provides conditions under which action consistency can be guaranteed with respect to an EL⊥ ontology, by polynomially encoding an EL⊥ knowledge base into the domain description of the temporal action theory.

16:00-17:30 Session 98D: Preprocessing and Simplification (IJCAR)
16:00
SAT-based Proof Search in Intermediate Propositional Logics

ABSTRACT. We present a decision procedure for intermediate logics relying on a modular extension of the SAT-based proof-search engine intuitR for intuitionistic propositional logic. Given the module for the intermediate logic L, our procedure iteratively searches for a Kripke countermodel for a formula A, exploiting the search engine of intuitR: whenever a countermodel for A is built, the module checks whether it is a model of the logic L. If this is the case, A is not valid in L. Otherwise, the module provides an instance of an axiom of L falsified in the countermodel, which is fed to the SAT solver for a new try. We prove the partial correctness of our procedure and its termination for some well-known intermediate logics.

16:20
Preprocessing of Propagation Redundant Clauses
PRESENTER: Joseph Reeves

ABSTRACT. The propagation redundant (PR) proof system generalizes the resolution and resolution asymmetric tautology proof systems used by conflict-driven clause learning (CDCL) solvers. PR allows short proofs of unsatisfiability for some problems that are difficult for CDCL solvers. Previous attempts to automate PR clause learning used hand-crafted heuristics that work well on some highly-structured problems. For example, the solver SaDiCaL incorporates PR clause learning into the CDCL loop, but it cannot compete with modern CDCL solvers due to its fragile heuristics. We present PReLearn, a preprocessing technique that learns short PR clauses. Adding these clauses to a formula reduces the search space that the solver must explore. By performing PR clause learning as a preprocessing stage, PR clauses can be found efficiently without sacrificing the robustness of modern CDCL solvers. On a large number of SAT competition benchmarks we found that preprocessing with PReLearn significantly improves solver performance on both satisfiable and unsatisfiable formulas. PReLearn supports proof logging, giving a high level of confidence in the results.

16:40
Clause Redundancy and Preprocessing in Maximum Satisfiability
PRESENTER: Jeremias Berg

ABSTRACT. The study of clause redundancy in Boolean satisfiability (SAT) has proven significant in several respects, from fundamental insights into preprocessing and inprocessing to the development of practical proof checkers and new types of strong proof systems. We study liftings of the recently-proposed notion of propagation redundancy---based on a semantic implication relationship between formulas---in the context of maximum satisfiability (MaxSAT), where of interest are reasoning techniques that preserve optimal cost (in contrast to preserving satisfiability in the realm of SAT). We establish that the strongest MaxSAT-lifting of propagation redundancy allows for changing in a controlled way the set of minimal correction sets in MaxSAT. This ability is key in succinctly expressing MaxSAT reasoning techniques, and allows for obtaining, in a uniform way, correctness proofs for a broad range of MaxSAT reasoning techniques. Bridging theory to practice, we also provide a new MaxSAT preprocessor incorporating such extended techniques, and show through experiments its wide applicability in improving the performance of modern MaxSAT solvers.

17:00
Formula Simplification via Invariance Detection by Algebraically Indexed Types (ONLINE)
PRESENTER: Tomohiro Fujita

ABSTRACT. We describe a system that detects an invariance in a problem expressed as a logical formula and simplifies the problem by using the invariance to eliminate variables. Pre-defined function and predicate symbols in the problem representation language are associated with algebraically indexed types, which signify their invariance properties. A Hindley-Milner style type reconstruction algorithm is derived for detecting the invariance of a problem. In our experiments, the invariance-based formula simplification significantly enhanced the performance of a problem solver based on quantifier elimination for real-closed fields, especially on problems taken from the International Mathematical Olympiad.

16:00-17:30 Session 98F: Preferences and Conditionals (NMR)
Location: Taub 4
16:00
Conditional Syntax Splitting, Lexicographic Entailment and the Drowning Effect
PRESENTER: Jesse Heyninck

ABSTRACT. Lexicographic inference is a well-behaved and popular approach to reasoning with non-monotonic conditionals. In recent work we have shown that lexicographic inference satisfies syntax splitting, which means we can restrict our attention to parts of the belief base that share atoms with a given query. In this paper, we introduce the concept of conditional syntax splitting, inspired by the notion of conditional independence as known from probability theory. We show that lexicographic inference satisfies conditional syntax splitting, and connect conditional independence to several known properties from the literature on non-monotonic reasoning, including the drowning effect.

16:30
Situated Conditionals - A Brief Introduction
PRESENTER: Giovanni Casini

ABSTRACT. Conditionals are useful for modelling many forms of everyday human reasoning, but are not always sufficiently expressive for representing the information we want to reason about. Here we make the case for a form of situated conditional. By ‘situated’ we mean that there is a context, based on an agent’s beliefs and expectations, that works as background information in evaluating a conditional. These conditionals are able to distinguish, for example, between expectations and counterfactuals. We show that situated conditionals can be described in terms of a set of rationality postulates. We then propose an intuitive semantics for these conditionals, and present a representation result which shows that our semantic construction corresponds exactly to the description in terms of postulates. With the semantics in place, we proceed to define a form of entailment for situated conditional knowledge bases, which we refer to as minimal closure and is inspired by the well-known notion of rational closure. Finally, we proceed to show that it is possible to reduce the computation of minimal closure to a series of propositional entailment and satisfiability checks.

17:00
On some weakenings of transitivity in the logic of norms (extended abstract)

ABSTRACT. The paper (under review) investigates the impact of weakened forms of transitivity of the betterness relation on the logic of conditional obligation, originating from the work of Hansson, Lewis, and others. These weakened forms of transitivity are well known from the rational choice theory literature and include quasi-transitivity, Suzumura consistency, acyclicity, and the interval order condition.

17:30-18:30 Session 99: Herbrand Award Ceremony (IJCAR)
17:30
The Pleasures of Proof

ABSTRACT. Philosophers since Thales have demanded valid proofs as the litmus test for the acceptance of a mathematical conjecture as a demonstrable fact. Dually, counterexamples have served as evidence for the demonstrable falsity of a claim. Euclid's codification of the rules of geometry proofs became the paradigm for a foundation for reliable knowledge. With Ramon Llull and Gottfried Leibniz came the idea of mechanizing reason, with the unreasonable expectation that this would spark religious and political harmony. The quest for the automation of mathematical reasoning came into sharp focus through Hilbert's second and tenth problems. The Entscheidungsproblem posed by Hilbert and Ackermann was answered negatively by Church and Turing. The work of logicians in the first half of the 20th century laid the foundation for the development of early theorem-proving programs in the fifties. Skipping ahead, by the dawn of the eighties, automated reasoning had a niche position within the field of artificial intelligence but was still regarded with amusement by outsiders as a somewhat quixotic quest to discover the holy grail of problem solving. We describe the experience of sharing a ringside seat to the growth and maturation of the field of automated reasoning during the last four decades into an integral sub-discipline of computing and a vehicle for the large-scale formalization of mathematical and computational knowledge.

18:30-20:30 Walking tour (at Haifa) (FLoC)

Pickup at 18:00 from the Technion (tour at Haifa; no food will be provided)