ABSTRACT. This paper studies the strength of embedding Call-by-Name (dCBN) and Call-by-Value (dCBV) into a unifying framework called the Bang Calculus (dBANG). These embeddings allow us to establish (static and dynamic) properties of dCBN and dCBV through their respective counterparts in dBANG. While some specific static properties have already been successfully studied in the literature, the dynamic ones are more challenging and have been left unexplored. We fill this gap by using a standard embedding for the (easy) dCBN case, while a novel one must be introduced for the (difficult) dCBV case. Moreover, a key point of our approach is the identification of dBANG diligent reduction sequences, which eases the preservation of dynamic properties from dBANG to dCBN/dCBV. We illustrate our methodology through a concrete application: factorizations for both dCBN and dCBV are derived from factorization in dBANG.
Mechanised uniform interpolation for modal logics K, GL and iSL
ABSTRACT. The uniform interpolation property in a given logic can be understood as the
definability of propositional quantifiers. We mechanize the computation of these
quantifiers and prove correctness in the Coq proof assistant for three modal
logics, namely: (1) the modal logic K, for which a pen-and-paper proof exists;
(2) Gödel-Löb logic GL, for which our formalization clarifies an important point
in an existing, but incomplete, sequent-style proof; and (3) intuitionistic
strong Löb logic iSL, for which this is the first proof-theoretic construction
of uniform interpolants. Our work also yields verified programs that allow one
to compute the propositional quantifiers on any formula in these logics.
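For orientation, uniform interpolation is standardly formulated as follows (a generic textbook formulation, not quoted from the paper): for every formula $\varphi$ and propositional variable $p$ there are formulas $\forall p.\varphi$ and $\exists p.\varphi$, built only from variables of $\varphi$ other than $p$, such that for every formula $\psi$ not containing $p$,
\[
\psi \vdash \forall p.\varphi \iff \psi \vdash \varphi
\qquad\text{and}\qquad
\exists p.\varphi \vdash \psi \iff \varphi \vdash \psi .
\]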
ABSTRACT. Non-monotonic modal logics are typically interpreted over
neighbourhood frames. For unary operators, such a frame is just a set of
worlds, together with an endofunction on predicates (subsets of
worlds). It is known that all systems of not necessarily monotonic
modal logics that are axiomatised by formulae of modal rank at most
one (non-iterative modal logics) are Kripke-complete over neighbourhood
semantics. In this paper, we give a uniform construction to obtain
complete resolution calculi for all non-iterative logics. We show
completeness for generative calculi (where new clauses with new
literals are added to the clause set) by means of a canonical model
construction. We then define absorptive calculi (where new clauses
are generated by generalised resolution rules) and establish
completeness by translating between generative and absorptive
calculi. Instances of our construction re-prove completeness for
already known calculi, but also give rise to a number of previously
unknown complete calculi.
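For orientation, the neighbourhood semantics mentioned above can be sketched as follows (a standard presentation matching the description in the abstract, not quoted from the paper): a frame is a pair $(W, N)$ where $W$ is a set of worlds and $N : \mathcal{P}(W) \to \mathcal{P}(W)$ is an endofunction on predicates, and the modal operator is interpreted by
\[
w \models \Box\varphi \quad\text{iff}\quad w \in N(\{\, v \in W \mid v \models \varphi \,\}).
\]
Monotone modal logics correspond to requiring $N$ to preserve set inclusion; the non-monotonic case considered above drops this requirement.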
ABSTRACT. We present deterministic model construction algorithms for sets of modal clauses saturated with respect to one of three refinements of the modal-layered resolution calculus implemented in the prover KSP. The model construction algorithms are inspired by the Bachmair-Ganzinger method for constructing a model for a set of ground first-order clauses saturated with respect to ordered resolution with selection. The challenge is that the inference rules of the modal-layered resolution calculus
for modal operators are more restrictive than a direct adaptation of ordered resolution with selection to these operators would be. While these model construction algorithms provide an alternative means of proving completeness of the calculus, our main interest is the provision of a ‘certificate’ for satisfiable modal formulae that can be independently checked to assure a user that the result of KSP is correct. This complements the existing provision of proofs for unsatisfiable modal formulae.
A Logic for Repair and State Recovery in Byzantine Fault-tolerant Multi-agent Systems
ABSTRACT. We provide an epistemic logical language and semantics for the modeling and analysis of byzantine fault-tolerant multi-agent systems. This not only facilitates reasoning about the agents' fault status but also supports model updates for implementing repair and state recovery. For each agent, besides the standard knowledge modality, our logic provides an additional modality called hope, which is capable of expressing that the agent is correct (not faulty), as well as dynamic modalities enabling change of the agents' correctness status. These dynamic modalities are interpreted as model updates that come in three flavours: fully public, more private, or involving factual change. We provide complete axiomatizations for all these variants in the form of reduction systems: formulas with dynamic modalities are equivalent to formulas without. Therefore, they have the same expressivity as the logic of knowledge and hope. Multiple examples are provided to demonstrate the utility and flexibility of our logic for modeling a wide range of repair and state recovery techniques that have been implemented in the context of fault detection, isolation, and recovery (FDIR) approaches in fault-tolerant distributed computing with byzantine agents.
ABSTRACT. Building on the progress in Boolean satisfiability (SAT) solving over the last
decades, maximum satisfiability (MaxSAT) has become a viable approach for
solving NP-hard optimization problems, but ensuring correctness of MaxSAT
solvers has remained an important concern. For SAT, this is largely a solved
problem thanks to the use of proof logging, meaning that solvers emit
machine-verifiable proofs of (un)satisfiability to certify correctness.
However, for MaxSAT, proof logging solvers have started being developed only
very recently. Moreover, these nascent efforts have only targeted the core
solving process, ignoring the preprocessing phase where input problem instances
can be substantially reformulated before being passed on to the solver proper.
In this work, we demonstrate how pseudo-Boolean proof logging can be used to
certify the correctness of a wide range of modern MaxSAT preprocessing
techniques. By combining and extending the VeriPB and CakePB tools, we provide
formally verified, end-to-end proof checking that the input and preprocessed
output MaxSAT problem instances have the same optimal value. An extensive
evaluation on applied MaxSAT benchmarks shows that our approach is feasible in
practice.
Quantifier Shifting for Quantified Boolean Formulas Revisited
ABSTRACT. Modern solvers for quantified Boolean formulas (QBFs) process formulas
in prenex form, which divides each QBF into two parts: the quantifier
prefix and the propositional matrix. While this representation does
not cover the full language of QBF, every non-prenex formula can be
transformed into an equivalent formula in prenex form. This
transformation offers several degrees of freedom and blurs structural
information that might be useful for the solvers. In a case study
conducted 20 years ago, it was shown that the applied
transformation strategy heavily impacts solving time. We revisit this
work and investigate how sensitive recent solving technology is
to various prenexing strategies.
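As a point of reference, the quantifier shifting underlying prenexing can be illustrated by standard equivalences such as (generic QBF identities, not specific to the paper)
\[
(\forall x\,\varphi) \wedge \psi \;\equiv\; \forall x\,(\varphi \wedge \psi)
\qquad\text{and}\qquad
(\exists x\,\varphi) \wedge \psi \;\equiv\; \exists x\,(\varphi \wedge \psi),
\]
provided $x$ does not occur in $\psi$; prenexing strategies differ in the order in which such shifts are applied and in how quantifier blocks from different subformulas are merged.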
ABSTRACT. The CTL learning problem consists in finding, for a given sample of positive and negative Kripke structures, a distinguishing CTL formula that is satisfied by the former but not by the latter. Further constraints may bound the size and shape of the desired formula or even ask for its minimality in terms of syntactic size. This synthesis problem is motivated by explanation generation for dissimilar models, e.g. comparing a faulty implementation with the original protocol. We devise a SAT-based encoding for a fixed-size CTL formula, then provide an incremental approach that guarantees minimality. We further report on a prototype implementation whose contribution is twofold: first, it allows us to assess the efficiency of various output fragments and optimizations; second, we can experimentally evaluate this tool by randomly mutating Kripke structures or syntactically introducing errors in higher-level models, then learning distinguishing CTL formulas.
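Stated abstractly, the learning problem above can be phrased as follows (a generic formulation, not quoted from the paper): given finite sets $P$ and $N$ of Kripke structures, find a CTL formula $\varphi$, possibly of minimal syntactic size, such that
\[
K \models \varphi \ \text{for all } K \in P
\qquad\text{and}\qquad
K \not\models \varphi \ \text{for all } K \in N .
\]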
ABSTRACT. We describe a formally verified checker for unsatisfiability certificates in the LRAT format, which can be run in parallel with the SAT solver, processing the certificate while it is being produced.
It is implemented time and memory efficiently, thus increasing trust in the SAT solver at low additional cost.
The verification is done w.r.t. a grammar of the DIMACS format and a semantics of CNF formulas, down to the LLVM code of the checker.
In this paper, we report on the checker and its design process using the Isabelle-LLVM stepwise refinement approach.
ABSTRACT. Dependent type theory provides an expressive type system which facilitates succinct formalizations of mathematical concepts. In practice, it is mainly used for interactive theorem proving in intensional type theories, with PVS being a notable exception. In this paper, we present native rules for automated reasoning in a dependently-typed version (DHOL) of classical higher-order logic (HOL). DHOL is an extensional type theory whose type checking problem is undecidable, as it subsumes theorem proving. We implemented the inference rules as well as an automatic type checking mode in Lash, a fork of Satallax, the leading tableaux-based prover for HOL. Our method is sound and complete with respect to provability in DHOL. Completeness is guaranteed by the incorporation of a sound and complete translation from DHOL to HOL recently proposed by Rothgang et al. While this translation can already be used as a preprocessing step for any HOL prover, our system works directly in DHOL to achieve better performance. Moreover, experimental results show that the DHOL version of Lash can outperform all major HOL provers executed on the translation.