Invited talk: Alexander Artikis & Periklis Mantenoglou (University of Piraeus, Greece) -
Online Reasoning under Uncertainty with the Event Calculus
09:30 | Online Reasoning under Uncertainty with the Event Calculus
PRESENTER: Alexander Artikis
ABSTRACT. Activity recognition systems detect temporal combinations of 'low-level' or 'short-term' activities on sensor data streams. Such streams exhibit various types of uncertainty, often leading to erroneous recognition. We will present an extension of an interval-based activity recognition system which operates on top of a probabilistic Event Calculus implementation. Our proposed system performs online recognition, as opposed to batch processing, thus supporting streaming applications. Our empirical analysis demonstrates the efficacy of our system, comparing it to interval-based batch recognition, point-based recognition, as well as structure and weight learning models.
11:00 | PRESENTER: Sagar Malhotra
ABSTRACT. Markov Logic Networks (MLNs) define a probability distribution on relational structures over varying domain sizes. Like most relational models, MLNs do not admit consistent marginal inference over varying domain sizes, i.e., marginal probabilities depend on the domain size. Furthermore, MLNs learned on a fixed domain do not generalize to domains of different sizes. In recent works, connections have emerged between domain size dependence, lifted inference, and learning from a sub-sampled domain. The central idea of these works is the notion of projectivity. The probability distributions ascribed by projective models render the marginal probabilities of sub-structures independent of the domain cardinality. Hence, projective models admit efficient marginal inference. Furthermore, projective models potentially allow efficient and consistent parameter learning from sub-sampled domains. In this paper, we characterize the necessary and sufficient conditions for a two-variable MLN to be projective. We then isolate a special class of models, namely Relational Block Models (RBMs). In terms of data likelihood, RBMs allow us to learn the best possible projective MLN in the two-variable fragment. Furthermore, RBMs also admit consistent parameter learning over sub-sampled domains.
11:30 | Exploiting the Full Power of Pearl's Causality in Probabilistic Logic Programming
PRESENTER: Kilian Rueckschloss
ABSTRACT. We introduce new semantics for acyclic probabilistic logic programs in terms of Pearl's functional causal models. Further, we show that our semantics generalizes the classical distribution semantics and CP-logic. This enables us to establish all query types of functional causal models, namely probability calculus, predicting the effect of external interventions, and counterfactual reasoning, within probabilistic logic programming. Finally, we briefly discuss the problems regarding knowledge representation and the structure learning task which result from the different semantics and query types.
Lunches will be held in Taub Hall and in The Grand Water Research Institute.
14:00 | PRESENTER: Fabrizio Riguzzi
ABSTRACT. Hybrid probabilistic logic programs extend probabilistic logic programs by adding the possibility to manage continuous random variables. Despite the maturity of the field, a semantics that unifies discrete and continuous random variables and function symbols was still missing. In this paper, we summarize the main concepts behind a newly proposed semantics for hybrid probabilistic logic programs with function symbols.
14:30 | PRESENTER: Fabrizio Riguzzi
ABSTRACT. Probabilistic Logic Programs under the distribution semantics (PLPDS) do not allow statistical probabilistic statements of the form "90% of birds fly", which Halpern defined as Type 1 statements. In this paper, we add this kind of statement to PLPDS and introduce the PASTA (Probabilistic Answer set programming for STAtistical probabilities) language. We translate programs in our new formalism into probabilistic answer set programs under the credal semantics. This approach differs from previous proposals, such as the one based on probabilistic conditionals, since, instead of choosing a single model by making the maximum entropy assumption, we take all models into consideration and assign probability intervals to queries. In this way we refrain from making assumptions and obtain a more neutral framework. We also propose an inference algorithm and compare it with an existing solver for probabilistic answer set programs on a number of programs of increasing size, showing that our solution is faster and can deal with larger instances.
NOTE: this is the original paper submitted to and accepted at the 16th International Conference on Logic Programming and Non-monotonic Reasoning (LPNMR). It will be published in the proceedings of LPNMR, so it must not be published on CEUR.