FLOC 2022: FEDERATED LOGIC CONFERENCE 2022
PLP ON MONDAY, AUGUST 1ST

08:30-09:00 Coffee & Refreshments
09:30-10:30 Session 28B

Invited talk: Alexander Artikis & Periklis Mantenoglou (University of Piraeus, Greece)

Online Reasoning under Uncertainty with the Event Calculus

Location: Ullmann 305
09:30
Online Reasoning under Uncertainty with the Event Calculus

ABSTRACT. Activity recognition systems detect temporal combinations of 'low-level' or 'short-term' activities on sensor data streams. Such streams exhibit various types of uncertainty, often leading to erroneous recognition. We will present an extension of an interval-based activity recognition system which operates on top of a probabilistic Event Calculus implementation. Our proposed system performs online recognition, as opposed to batch processing, thus supporting streaming applications. Our empirical analysis demonstrates the efficacy of our system, comparing it to interval-based batch recognition, point-based recognition, as well as structure and weight learning models.

10:30-11:00 Coffee Break
11:00-12:30 Session 31J
Location: Ullmann 305
11:00
On Projectivity in Markov Logic Networks
PRESENTER: Sagar Malhotra

ABSTRACT. Markov Logic Networks (MLNs) define a probability distribution on relational structures over varying domain sizes. Like most relational models, MLNs do not admit consistent marginal inference over varying domain sizes, i.e., marginal probabilities depend on the domain size. Furthermore, MLNs learned on a fixed domain do not generalize to domains of different sizes. In recent works, connections have emerged between domain size dependence, lifted inference, and learning from a sub-sampled domain. The central idea of these works is the notion of projectivity. The probability distributions ascribed by projective models render the marginal probabilities of sub-structures independent of the domain cardinality. Hence, projective models admit efficient marginal inference. Furthermore, projective models potentially allow efficient and consistent parameter learning from sub-sampled domains. In this paper, we characterize the necessary and sufficient conditions for a two-variable MLN to be projective. We then isolate a special class of models, namely Relational Block Models (RBMs). In terms of data likelihood, RBMs allow us to learn the best possible projective MLN in the two-variable fragment. Furthermore, RBMs also admit consistent parameter learning over sub-sampled domains.
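The domain-size dependence of MLN marginals that motivates the paper can be seen by brute force on a toy model. The sketch below (an illustration only; the predicates, formulas, and weights are invented for the example and are not the paper's RBM construction) enumerates all worlds of a small MLN and shows that the marginal of a single atom changes with the domain size:

```python
import itertools
import math

def marginal_smokes0(n, w_imp=1.0, w_unit=0.5):
    """P(Smokes(0)) in a toy MLN with two weighted formulas:
         w_unit : Smokes(x)
         w_imp  : Smokes(x) & Friends(x,y) -> Smokes(y)
    computed by brute-force enumeration over all worlds of domain size n."""
    people = range(n)
    pairs = list(itertools.product(people, people))
    Z = num = 0.0
    for smokes in itertools.product([0, 1], repeat=n):
        for friends in itertools.product([0, 1], repeat=len(pairs)):
            fr = dict(zip(pairs, friends))
            # number of satisfied groundings of the implication
            sat_imp = sum(1 for (x, y) in pairs
                          if (not (smokes[x] and fr[(x, y)])) or smokes[y])
            sat_unit = sum(smokes)
            weight = math.exp(w_imp * sat_imp + w_unit * sat_unit)
            Z += weight
            if smokes[0]:
                num += weight
    return num / Z

print(marginal_smokes0(2))   # ≈ 0.64
print(marginal_smokes0(3))   # ≈ 0.67: the marginal shifts with domain size
```

The same atom gets a different marginal at n = 2 and n = 3, i.e. this MLN is not projective; a projective model would return identical values for every n.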

11:30
Exploiting the Full Power of Pearl's Causality in Probabilistic Logic Programming

ABSTRACT. We introduce new semantics for acyclic probabilistic logic programs in terms of Pearl's functional causal models. Further, we show that our semantics generalize the classical distribution semantics and CP-logic. This enables us to establish all query types of functional causal models, namely probability calculus, predicting the effect of external interventions and counterfactual reasoning, within probabilistic logic programming. Finally, we briefly discuss the problems regarding knowledge representation and the structure learning task which result from the different semantics and query types.
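The three query types mentioned in the abstract can be illustrated on a tiny functional causal model. The sketch below (a hedged illustration, not the paper's semantics; the rain/sprinkler model and all variable names are invented for the example) answers an observational, an interventional, and a counterfactual query by enumerating the exogenous worlds:

```python
from itertools import product

# Exogenous Bernoulli variables: u_r drives Rain, u_s drives the Sprinkler.
P_U = {"u_r": 0.3, "u_s": 0.5}

def model(u, do=None):
    """Structural equations; `do` overrides endogenous variables."""
    do = do or {}
    v = {}
    v["rain"] = do.get("rain", u["u_r"])
    v["sprinkler"] = do.get("sprinkler", u["u_s"] and not v["rain"])
    v["wet"] = do.get("wet", v["rain"] or v["sprinkler"])
    return v

def worlds():
    """All assignments to the exogenous variables with their probabilities."""
    for bits in product([False, True], repeat=len(P_U)):
        u = dict(zip(P_U, bits))
        p = 1.0
        for name, b in u.items():
            p *= P_U[name] if b else 1 - P_U[name]
        yield u, p

def prob(event, do=None, given=None):
    num = den = 0.0
    for u, p in worlds():
        if given and not given(model(u)):   # abduction: keep worlds fitting the evidence
            continue
        den += p
        if event(model(u, do)):             # action + prediction in the same world
            num += p
    return num / den

# 1) observational: P(wet)
print(prob(lambda v: v["wet"]))                           # ≈ 0.65
# 2) interventional: P(wet | do(sprinkler := off))
print(prob(lambda v: v["wet"], do={"sprinkler": False}))  # ≈ 0.30
# 3) counterfactual: we saw wet grass and no rain; would the grass
#    still be wet had the sprinkler been off?
print(prob(lambda v: v["wet"], do={"sprinkler": False},
           given=lambda v: v["wet"] and not v["rain"]))   # → 0.0
```

The counterfactual follows Pearl's abduction-action-prediction recipe: the evidence filters the exogenous worlds, and the intervention is applied to those same worlds before the query is evaluated.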

12:30-14:00 Lunch Break

Lunches will be held in Taub hall and in The Grand Water Research Institute.

14:00-15:30 Session 34K
Location: Ullmann 305
14:00
Semantics for Hybrid Probabilistic Logic Programs with Function Symbols: Technical Summary
PRESENTER: Fabrizio Riguzzi

ABSTRACT. Hybrid probabilistic logic programs extend probabilistic logic programs with the ability to manage continuous random variables. Despite the maturity of the field, a semantics that unifies discrete and continuous random variables and function symbols has been missing. In this paper, we summarize the main concepts behind a newly proposed semantics for hybrid probabilistic logic programs with function symbols.
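A hybrid program mixes discrete probabilistic facts with continuous random variables. The Monte Carlo sketch below (an invented illustration of the general idea, not the paper's semantics or any concrete hybrid-PLP syntax) estimates a query probability for a toy program combining a Gaussian variable with a discrete fact:

```python
import random

random.seed(0)

def sample_world():
    """One sample from a toy hybrid program (illustrative only):
         temperature ~ gaussian(20, 5).
         0.8 :: sensor_ok.
         alarm :- sensor_ok, temperature > 30.
    Returns whether `alarm` holds in the sampled world."""
    temperature = random.gauss(20, 5)    # continuous random variable
    sensor_ok = random.random() < 0.8    # discrete probabilistic fact
    return sensor_ok and temperature > 30

N = 100_000
estimate = sum(sample_world() for _ in range(N)) / N
print(round(estimate, 3))   # ≈ 0.8 * P(T > 30) = 0.8 * (1 - Φ(2)) ≈ 0.018
```

Sampling sidesteps the semantic subtleties the paper addresses (mixed discrete/continuous distributions and function symbols), but it shows why a unified semantics is needed: the query's probability combines a measure over an uncountable space with discrete choices.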

14:30
Statistical Statements in Probabilistic Logic Programming
PRESENTER: Fabrizio Riguzzi

ABSTRACT. Probabilistic Logic Programs under the distribution semantics (PLPDS) do not allow statistical probabilistic statements of the form "90% of birds fly", which Halpern defined as Type 1 statements. In this paper, we add this kind of statement to PLPDS and introduce the PASTA (Probabilistic Answer set programming for STAtistical probabilities) language. We translate programs in our new formalism into probabilistic answer set programs under the credal semantics. This approach differs from previous proposals, such as the one based on probabilistic conditionals: instead of choosing a single model by making the maximum entropy assumption, we take all models into consideration and assign probability intervals to queries. In this way we refrain from making assumptions and obtain a more neutral framework. We also propose an inference algorithm and compare it with an existing solver for probabilistic answer set programs on a number of programs of increasing size, showing that our solution is faster and can deal with larger instances.
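The probability intervals produced by the credal semantics can be computed by enumeration on a toy example. The sketch below (an invented illustration, not PASTA syntax or its solver) uses a probabilistic fact plus a disjunctive rule, so that a total choice can have several answer sets; the lower bound counts choices where the query holds in every answer set, the upper bound those where it holds in at least one:

```python
from itertools import product

# Toy probabilistic answer set program (illustrative only):
#   0.6 :: a.
#   b ; c :- a.     -- disjunctive rule: each answer set picks b or c
PROB_FACTS = {"a": 0.6}

def answer_sets(choice):
    """Answer sets of the toy program for one total choice of the facts."""
    if choice["a"]:
        return [{"a", "b"}, {"a", "c"}]   # minimal models of  b ; c :- a
    return [set()]

def credal_interval(query):
    lower = upper = 0.0
    for bits in product([False, True], repeat=len(PROB_FACTS)):
        choice = dict(zip(PROB_FACTS, bits))
        p = 1.0
        for f, b in choice.items():
            p *= PROB_FACTS[f] if b else 1 - PROB_FACTS[f]
        models = answer_sets(choice)
        if all(query in m for m in models):   # cautious: true in every model
            lower += p
        if any(query in m for m in models):   # brave: true in some model
            upper += p
    return lower, upper

print(credal_interval("b"))   # → (0.0, 0.6)
```

The query `b` gets the interval [0, 0.6] rather than a point probability: no single model is singled out (e.g. by maximum entropy), matching the neutral stance described in the abstract.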

15:30-16:00 Coffee Break
16:00-17:00 Session 37J
Location: Ullmann 305
16:00
Explainability, causality and computational and-or graphs

ABSTRACT. In recent years, there has been increasing interest in studying causality-related properties in machine learning models in general, and in generative models in particular. While this line of work is well-motivated, it inherits the fundamental computational hardness of probabilistic inference, making exact reasoning intractable. Tractable probabilistic models have also recently emerged, which guarantee that conditional marginals can be computed in time linear in the size of the model, where the model is usually learned from data. In the talk, we will discuss a number of new results in this area: what kinds of causal queries can be answered on trained tractable models, what kinds of domain constraints can be posed, and what methods are available to extract (counterfactual) explanations from them.

18:30-20:00 Workshop Dinner (at the Technion, Taub Terrace Floor 2) - Paid event