Break
Non-traditional Logics
10:30 | ABSTRACT. Conditioning plays an important role in revising uncertain information in light of new evidence. The standard forms of conditioning for uncertainty distributions (probabilistic Bayesian inference, Dempster's rule for belief functions, qualitative and quantitative conditioning in possibility theory) have been widely studied both from an axiomatic point of view and from a syntactic and computational point of view. This work focuses on Fagin and Halpern (FH-)conditioning in the setting where uncertain information is represented by weighted, or possibilistic, belief bases. Weighted belief bases extend classical logic by associating a weight or degree of belief with each propositional formula. This paper proposes a characterization of a syntactic computation of the revision of weighted belief bases (in light of new information) that is in full agreement with the semantics of the FH-conditioning of possibility distributions. We show that the size of the revised belief base is linear in the size of the initial base and that the computational complexity amounts to O(log_2(n)) calls to a propositional satisfiability test, where n is the number of distinct degrees of certainty used in the initial belief base. |
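To make the complexity claim concrete, here is a minimal Python sketch, entirely hypothetical and not the authors' procedure: it computes the inconsistency degree of a weighted belief base by dichotomic search over its n distinct certainty levels, a standard building block of syntactic possibilistic reasoning, using one satisfiability test per step (O(log_2(n)) tests in total). The clause encoding and the brute-force `satisfiable` helper are illustrative assumptions.

```python
from itertools import product

def satisfiable(clauses):
    """Brute-force satisfiability test for a small CNF, given as a list of
    frozensets of signed integers (e.g. {1, -2} encodes x1 or not x2)."""
    vars_ = sorted({abs(lit) for clause in clauses for lit in clause})
    for bits in product([False, True], repeat=len(vars_)):
        assign = dict(zip(vars_, bits))
        if all(any(assign[abs(lit)] == (lit > 0) for lit in clause)
               for clause in clauses):
            return True
    return False

def inconsistency_degree(base):
    """base: list of (clause, degree) pairs with degree in (0, 1].
    Returns the largest degree a such that the formulas with degree >= a
    are jointly inconsistent, or 0.0 if the whole base is consistent."""
    degrees = sorted({d for _, d in base})          # n distinct levels
    if satisfiable([c for c, _ in base]):
        return 0.0
    lo, hi = 0, len(degrees) - 1
    while lo < hi:                                  # O(log2(n)) SAT calls
        mid = (lo + hi + 1) // 2
        cut = [c for c, d in base if d >= degrees[mid]]
        if satisfiable(cut):
            hi = mid - 1                            # cut is consistent: look lower
        else:
            lo = mid                                # still inconsistent: look higher
    return degrees[lo]

# Example base {(p, 0.8), (~p \/ q, 0.6), (~q, 0.4)}: inconsistent as a whole,
# but consistent once the formula weighted below 0.6 is dropped.
base = [(frozenset({1}), 0.8), (frozenset({-1, 2}), 0.6), (frozenset({-2}), 0.4)]
print(inconsistency_degree(base))   # -> 0.4
```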
11:00 | ABSTRACT. The famous double negation translation establishes an embedding of classical logic into intuitionistic logic. Curiously, the reverse direction has not been covered in the literature. Utilizing a normal form for intuitionistic logic, we establish a small model property for intuitionistic propositional logic. We use this property for a direct encoding of the Kripke semantics into classical propositional logic and quantified Boolean formulas. Next, we transfer the developed techniques to the first-order case and provide an embedding of intuitionistic first-order logic into classical first-order logic. Our goal here is an encoding that facilitates the use of state-of-the-art provers for classical first-order logic for determining intuitionistic validity. In an experimental evaluation, we show that our approach can compete with state-of-the-art provers for certain classes of benchmarks, in particular when the intuitionistic content is low. We further note that our constructions support the transfer of counter-models to validity, which is a desired feature in model checking applications. |
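As a purely illustrative aside (not the paper's encoding), the following Python sketch spells out the Kripke forcing relation for intuitionistic propositional logic over a finite model, which is the semantics the abstract proposes to compile into classical propositional logic and QBF. The two-world model and the tuple representation of formulas are assumptions made for the example.

```python
# Worlds, a reflexive-transitive accessibility relation, and a monotone
# valuation: if an atom holds at w and w <= v, it also holds at v.
WORLDS = {"w0", "w1"}
LEQ = {("w0", "w0"), ("w1", "w1"), ("w0", "w1")}
VAL = {"w0": set(), "w1": {"p"}}          # p becomes true only at w1

def forces(w, phi):
    """phi is a nested tuple: ('atom', 'p'), ('and', a, b), ('or', a, b),
    ('imp', a, b) or ('bot',)."""
    kind = phi[0]
    if kind == "atom":
        return phi[1] in VAL[w]
    if kind == "bot":
        return False
    if kind == "and":
        return forces(w, phi[1]) and forces(w, phi[2])
    if kind == "or":
        return forces(w, phi[1]) or forces(w, phi[2])
    if kind == "imp":                      # must hold in all future worlds
        return all(not forces(v, phi[1]) or forces(v, phi[2])
                   for v in WORLDS if (w, v) in LEQ)
    raise ValueError(kind)

# Excluded middle p \/ ~p fails at w0: p is not yet true there, while ~p
# would require p to stay false in every future world, and it holds at w1.
neg_p = ("imp", ("atom", "p"), ("bot",))
print(forces("w0", ("or", ("atom", "p"), neg_p)))   # -> False
```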
11:30 | PRESENTER: Julian Siber ABSTRACT. Lewis' theory of counterfactuals is the foundation of many contemporary notions of causality. In this paper, we extend this theory in the temporal direction to enable symbolic counterfactual reasoning on infinite sequences, such as counterexamples found by a model checker and trajectories produced by a reinforcement learning agent. In particular, our extension considers a more relaxed notion of similarity between worlds and proposes two additional counterfactual operators that close a semantic gap between the previous two in this more general setting. Further, we consider versions of counterfactuals that minimize the distance to the witnessing counterfactual worlds, a common requirement in causal analysis. To automate counterfactual reasoning in the temporal domain, we introduce a logic that combines temporal and counterfactual operators, and outline decision procedures for the satisfiability and trace-checking problems of this logic. |
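For readers unfamiliar with Lewis-style counterfactuals, the following toy Python sketch (hypothetical, and not the logic introduced in the paper) evaluates a "would" counterfactual A []-> B over a finite set of worlds equipped with an explicit distance to the actual world; the paper's contribution lies in lifting this idea to infinite traces and a more relaxed notion of similarity.

```python
def counterfactual(worlds, distance, a, b):
    """Lewis-style 'would' counterfactual A []-> B over finitely many
    candidate worlds: B must hold in every A-world at minimal distance."""
    a_worlds = [w for w in worlds if a(w)]
    if not a_worlds:
        return True                      # vacuously true: no A-world exists
    d_min = min(distance(w) for w in a_worlds)
    return all(b(w) for w in a_worlds if distance(w) == d_min)

# The actual world: no rain, street not wet.  Candidate counterfactual worlds:
actual = {"rain": False, "wet": False}
worlds = [{"rain": True, "wet": True}, {"rain": True, "wet": False}]
# Naive similarity: count how many facts differ from the actual world.
distance = lambda w: sum(1 for k in w if w[k] != actual[k])

# "If it had rained, the street would be wet": False under this naive
# ordering, because the closest rain-world keeps the street dry -- a small
# illustration of why the choice of similarity between worlds matters.
print(counterfactual(worlds, distance, lambda w: w["rain"], lambda w: w["wet"]))
```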
12:00 | ABSTRACT. Several differentiable logics (DLs) have recently been suggested in the context of machine learning verification. A differentiable logic consists of a syntax in which verification properties of learning systems are stated, and an interpretation function that translates any given logical formula into a loss function. As the name suggests, this loss function is then used for learning the property via gradient descent. Although in many ways DLs resemble probabilistic or fuzzy logics, they also raise problems that are unique to this domain: e.g. the representation of vectors and learners within the language, as well as obtaining certain geometric properties of the resulting loss function (such as shadow lifting or (weak) smoothness). The large discrepancy in the languages and semantics of existing DLs has prevented a systematic comparative study of their properties or formal approaches to their implementations. This paper remedies this problem by suggesting a meta-language for defining DLs. Syntactically, it generalises the syntax of existing DLs and, for the first time, introduces a formalism for reasoning about vectors and learners. Semantically, it introduces an abstract interpretation that can be instantiated to define the loss functions arising from different existing DLs. We call the resulting language and semantics the Logic of Differentiable Logics, or LDL. We use LDL to establish several theoretical properties of existing DLs, and to conduct an empirical study of them in neural network verification. |
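To give a flavour of what an interpretation function in a differentiable logic does, here is a minimal Python sketch in the spirit of a DL2-style translation (an illustrative assumption, not the LDL semantics defined in the paper): atomic constraints become non-negative penalties that are zero exactly when satisfied, and connectives are mapped to arithmetic operators, so the whole formula becomes a loss amenable to gradient descent. The toy "network" f is likewise assumed for the example.

```python
def leq(x, y):           # atomic constraint "x <= y": 0 when satisfied
    return max(0.0, x - y)

def land(a, b):          # conjunction: sum of penalties
    return a + b

def lor(a, b):           # disjunction: product of penalties
    return a * b

# Verification property for a one-dimensional "network" f:
#     f(x) <= 1.0   and   ( 0.0 <= f(x)   or   f(x) <= 0.5 )
def property_loss(f, x):
    y = f(x)
    return land(leq(y, 1.0), lor(leq(0.0, y), leq(y, 0.5)))

f = lambda x: 2.0 * x - 0.3
print(property_loss(f, 0.9))   # 0.5 > 0: the property is violated at x = 0.9
```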
Lunch (provided at a restaurant)
Farewell Banquet - Hotel Resort del café by Varuna, Chinchiná-La Manuela (passwd "resort")