LUXLOGAI 2018: LUXEMBOURG LOGIC FOR AI SUMMIT
RULEML+RR ON TUESDAY, SEPTEMBER 18TH, 2018

08:15-09:30 Opening of Registration

The LuxLogAI registration desk will open at 8:15 am every day from Monday, Sep 17, to Friday, Sep 21. Please pick up your conference badge here. The registration desk will also help you with any issues or problems throughout the day.

See also the LuxLogAI conference booklet for further information.

09:00-09:30 Session 9B: Applications of Rules and Neural Networks
Location: MSA 3.520
09:00
Mixing Logic Programming and Neural Networks to Support Neurological Disorders Analysis

ABSTRACT. The incidence of neurological disorders is constantly growing, and the use of Artificial Intelligence techniques to support neurologists is steadily increasing. Deductive reasoning and neural networks are two prominent areas in AI that can support discovery processes; unfortunately, they have been considered separate research areas for a long time. In this paper we start from a specific neurological disorder, namely Multiple Sclerosis, to define a generic framework showing the potentially disruptive impact of mixing rule-based systems and neural networks. The ambitious goal is to boost the interest of the research community in developing a tighter integration of these two approaches.

09:35-10:30 Session 12: Tuesday morning invited talk (joint with GCAI)
Location: MSA 3.520
09:35
Bridging Trouble

ABSTRACT. Some ten years ago, when I left Xerox PARC to work for a search startup, I hadn't realized how much the work I had done till then was not mine and could not be continued, for licensing reasons. For almost nine years at PARC I worked on a project to create logic from language, the Bridge project, using a collection of technologies developed by a strong group of researchers, over at least two decades, under the leadership of Bobrow and Kaplan. I decided that I needed to redo my part of this work using only open source tools, as I was not ready to give up on the idea of logic from language. I gave a talk at SRI explaining my reasons and plans, published in ENTCS as "Bridges from Language to Logic: Concepts, Contexts and Ontologies", LSFA 2010. This talk recalls and unifies some of the research that came out of this project and that is scattered across applications. We focus on a methodology for producing specific domain knowledge from text, based on Universal Dependencies, which we hope to improve but which is already producing promising initial results.

10:30-11:00 Coffee Break
11:00-12:30 Session 13C: Description Logics
Location: MSA 3.520
11:00
Justifications under the Fixed-Domain Semantics

ABSTRACT. The fixed-domain semantics for OWL and description logics has been introduced to open up the OWL modeling and reasoning tool landscape for use cases resembling constraint satisfaction problems. While standard reasoning under this new semantics is by now rather well understood theoretically and supported practically, more elaborate tasks like the computation of justifications have not been considered so far, even though they are highly important in the modeling phase.

In this paper, we compare three different approaches to this problem: one using standard OWL technology employing an axiomatization of the fixed-domain semantics, one using our dedicated fixed-domain reasoner "Wolpertinger" in combination with standard justification computation technology, and one where the problem is encoded entirely into answer-set programming.
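As a rough illustration of the third approach only, the sketch below encodes a toy fixed-domain reasoning task directly in ASP via the clingo Python API. The ontology, predicate names, and encoding style are invented for this example and are not the authors' encoding, and justification computation itself is not shown.

```python
import clingo

# Toy fixed-domain encoding (illustrative only): individuals range over an
# explicitly fixed, finite domain, and concept/role membership is guessed.
PROGRAM = """
dom(a;b;c).                                   % the fixed, finite domain

{ person(X) }  :- dom(X).                     % guess concept memberships
{ robot(X) }   :- dom(X).
{ knows(X,Y) } :- dom(X), dom(Y).             % guess role edges

person(a).                                    % ABox assertion
:- person(X), robot(X).                       % TBox: Person and Robot are disjoint
hasRobotFriend(X) :- knows(X,Y), robot(Y).
:- person(X), not hasRobotFriend(X).          % TBox: every person knows some robot
"""

ctl = clingo.Control(["0"])                   # "0" = enumerate all fixed-domain models
ctl.add("base", [], PROGRAM)
ctl.ground([("base", [])])
ctl.solve(on_model=lambda m: print(sorted(str(s) for s in m.symbols(shown=True))))
```

Each answer set of this program corresponds to one model over the fixed domain {a, b, c}.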

11:30
Cardinality Restrictions within Description Logic Connection Calculi

ABSTRACT. Recently, we have proposed the θ-connection method for the description logic (DL) ALC, the ALC θ-CM. It replaces the usage of Skolem terms and unification by additional annotation, and introduces blocking, a typical feature of DL provers, by a new rule, to ensure termination in the case of cyclic ontologies. In this work, we enhance this calculus and its representation to take on ALCHQ++, the extended fragment that includes role Hierarchies, Qualified number restrictions and (in)equalities. The calculus' main enhancement lies in the introduction of (in)equalities, as well as the redefinition of connection so as to accommodate number restrictions, either explicitly or expressed through equalities. The application of Bibel's eq-connections (equality connections) constitutes a first solution for dealing with (in)equalities. Termination, soundness and completeness of the calculus are proven, complementing the proofs presented for the ALC θ-CM.

12:00
On the Impact and Proper Use of Heuristics in Test-Driven Ontology Debugging

ABSTRACT. Given an ontology that does not meet required properties such as consistency or the (non-)entailment of certain axioms, Ontology Debugging aims at identifying a set of axioms, called diagnosis, that must be properly modified or deleted in order to resolve the ontology’s faults. As there are, in general, large numbers of competing diagnoses and the choice of each diagnosis leads to a repaired ontology with different semantics, Test-Driven Ontology Debugging (TOD) aims at narrowing the space of diagnoses until a single (highly probable) one is left. To this end, TOD techniques automatically generate a sequence of queries to an interacting oracle (domain expert) about (non-)entailments of the correct ontology. Diagnoses not consistent with the answers are discarded. To minimize debugging cost (oracle effort), various heuristics for selecting the best next query have been proposed. We report preliminary results of extensive ongoing experiments with a set of such heuristics on real-world debugging cases. In particular, we try to answer questions such as "Is some heuristic always superior to all others?", "On which factors does the (relative) performance of the particular heuristics depend?" or "Under which circumstances should I use which heuristic?"

12:30-14:00 Lunch Break
14:00-15:30 Session 14C: KR Systems and Applications
Location: MSA 3.520
14:00
Integrating Rule-Based AI Tools into Mainstream Game Development

ABSTRACT. Rule-based declarative formalisms enjoy several advantages when compared with imperative solutions, especially when dealing with AI-based application development: solid theoretical bases, no need for algorithm design or coding, explicit and easily modifiable knowledge bases, executable declarative specifications, fast prototyping, quick error detection, and modularity. For these reasons, ways of combining declarative paradigms, such as Answer Set Programming (ASP), with traditional ones have been studied significantly in recent years; there are, however, relevant contexts in which this road is unexplored, such as the development of real-time games. In such a setting, the strict requirements on reaction times, the presence of computer-human interactivity and a generally increased impedance between the two development paradigms make the task nontrivial. In this work we illustrate how to embed rule-based reasoning modules into the well-known Unity game development engine. To this end, we present an extension of EmbASP, a framework that eases the integration of declarative formalisms with generic applications. We prove the viability of our approach by developing a proof-of-concept Unity game that makes use of ASP-based AI modules.
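Neither Unity nor the EmbASP API is reproduced here; purely as a language-neutral sketch of the integration pattern the abstract describes (keeping a real-time game loop responsive while a declarative module reasons), the following Python snippet runs a small, invented ASP "AI module" on a worker thread via clingo and has the main loop poll for its decision.

```python
import threading
import queue
import time
import clingo

ANSWER = queue.Queue()

# Toy "AI module": choose a move for a game agent with ASP (invented program).
PROGRAM = """
move(up;down;left;right).
1 { choose(M) : move(M) } 1.
blocked(up).
:- choose(M), blocked(M).
#show choose/1.
"""

def reason():
    ctl = clingo.Control()
    ctl.add("base", [], PROGRAM)
    ctl.ground([("base", [])])
    ctl.solve(on_model=lambda m: ANSWER.put([str(s) for s in m.symbols(shown=True)]))

threading.Thread(target=reason, daemon=True).start()

# "Game loop": keep rendering frames while waiting for the AI decision.
for frame in range(100):
    try:
        decision = ANSWER.get_nowait()
        print(f"frame {frame}: AI decided {decision}")
        break
    except queue.Empty:
        time.sleep(0.016)   # roughly a 60 FPS frame budget
```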

14:18
Faceted Answer-Set Navigation

ABSTRACT. Even for small logic programs, the number of resulting answer-sets can be tremendous. In such cases, users might be incapable of comprehending the space of answer-sets as a whole, or of identifying a specific answer-set that fits their needs.

To overcome this difficulty, we propose a general formal framework that takes an arbitrary logic program as input and allows for navigating the space of answer-sets in a systematic, interactive way, akin to faceted browsing. The navigation is carried out stepwise, where each step narrows down the remaining solutions, eventually arriving at a single one. We formulate two navigation modes: a stringent, conflict-avoiding one, and a "free" mode in which conflicting selections of facets might occur. For the latter mode, we provide efficient algorithms for resolving the conflicts. We provide an implementation of our approach and demonstrate that our framework is able to handle logic programs for which it is currently infeasible to retrieve all answer sets.
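The following is a minimal Python sketch of the facet idea, assuming the clingo API: facets are taken to be atoms that are true in some but not all answer sets, and one navigation step keeps only the answer sets containing a selected facet. Unlike the framework described above, the sketch simply enumerates all answer sets up front, which is precisely what the paper's approach avoids for large programs.

```python
import clingo

PROGRAM = """
{ a; b; c }.      % toy program with several answer sets
:- a, b.          % a and b are in conflict
"""

def answer_sets(program):
    ctl = clingo.Control(["0"])               # enumerate all answer sets
    ctl.add("base", [], program)
    ctl.ground([("base", [])])
    models = []
    ctl.solve(on_model=lambda m: models.append(
        frozenset(str(s) for s in m.symbols(shown=True))))
    return models

models = answer_sets(PROGRAM)
union = set().union(*models)
intersection = set(models[0]).intersection(*models)
facets = union - intersection                 # atoms true in some but not all answer sets
print("facets:", sorted(facets))

# One navigation step: select facet "c" positively, keep only matching answer sets.
narrowed = [m for m in models if "c" in m]
print("remaining answer sets:", [sorted(m) for m in narrowed])
```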

14:36
An optimized KE-tableau-based system for reasoning in the description logic $\shdlssx$

ABSTRACT. We present a \ke-based system for the principal TBox and ABox reasoning problems of the description logic called $\dlssx$, in short $\shdlssx$. The logic $\shdlssx$, representable in the decidable multi-sorted quantified set-theoretic fragment $\flqsr$, combines the high scalability and efficiency of rule languages such as the Semantic Web Rule Language (SWRL) with the expressivity of description logics. In fact it supports, among other features, Boolean operations on concepts and roles, role constructs such as the product of concepts and role chains on the left hand side of inclusion axioms, and role properties such as transitivity, symmetry, reflexivity, and irreflexivity.

Our algorithm is based on a variant of the \ke\space system for sets of universally quantified clauses, where the KE-elimination rule is generalized in such a way as to incorporate the $\gamma$-rule. The novel system, called \keg, turns out to be an improvement of the system introduced in \cite{RR2017}, which includes a preliminary phase for the elimination of universal quantifiers. Suitable benchmark test sets executed on C++ implementations of the two systems show that in several cases the performance of the \keg-based reasoner is up to about 400\% better than that of the other system.

14:54
Clinical Decision Support based on OWL Queries in a Knowledge-as-a-Service Architecture

ABSTRACT. Due to the need to improve access to knowledge and to establish means for sharing and organizing data in the health domain, this research proposes an architecture based on the paradigm of Knowledge-as-a-Service (KaaS). It can be used in the medical field to offer centralized access to ontologies and other means of knowledge representation. In this paper, a detailed description of each part of the architecture and its implementation is given, highlighting its main features and interfaces. In addition, a communication protocol between the knowledge consumer and the knowledge service provider is specified and used. One possible use of the proposed architecture is to provide clinical decision support, which is demonstrated via OWL queries that help decision making. Thus, the development of this research contributed to the creation of a new architecture, called H-KaaS, which established itself as a platform capable of managing multiple data sources and knowledge models, centralizing access through an API that can be instantiated for different purposes, such as clinical decision support, education, etc.
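The concrete H-KaaS protocol is not reproduced here; purely as an illustration of the consumer side of such a knowledge-as-a-service architecture, the sketch below posts a class-expression query to a hypothetical endpoint. The URL, payload fields and query syntax are all invented for this example.

```python
import requests

# Hypothetical endpoint and payload shape; the paper specifies its own protocol,
# which is not reproduced here.
KAAS_URL = "https://example.org/h-kaas/api/query"

payload = {
    "knowledgeModel": "clinical-ontology",     # invented identifier
    "queryLanguage": "OWL-DL",
    "query": "Patient and (hasSymptom some Fever) and (hasSymptom some Cough)",
}

response = requests.post(KAAS_URL, json=payload, timeout=10)
response.raise_for_status()
for individual in response.json().get("results", []):
    print("matching patient:", individual)
```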

15:12
Answer Set Programming Modulo `Space-Time'

ABSTRACT. We present ASP Modulo `Space-Time', a novel declarative representational and computational framework to perform commonsense reasoning about regions with both spatial and temporal components. Supported are capabilities for mixed qualitative-quantitative reasoning, consistency checking, and inferring compositions of space-time relations; these capabilities combine and synergise for applications in a range of AI application areas. The resulting system for ASP Modulo Space-Time is the only general KR-based method for declaratively reasoning about the dynamics of `space-time' regions as first-class objects. We present an empirical evaluation (with scalability and robustness results), as well as an application in the robotics domain.

15:30-16:00 Coffee Break
16:00-16:18 Session 16C: Reasoning with Modalities
Location: MSA 3.520
16:00
The MET: The Art of Flexible Reasoning with Modalities

ABSTRACT. Modal logics have numerous applications in computational linguistics, artificial intelligence, rule-based reasoning, and, in general, alethic, deontic and epistemic contexts. Higher-order quantified modal logics additionally incorporate the expressiveness of higher-order formalisms and thereby provide a quite general reasoning framework. By exploiting this expressiveness, the Modal Embedding Tool (MET) makes it possible to automatically encode higher-order modal logic problems into equivalent problems of classical logic, enabling the use of a broad variety of established reasoning tools. In this system description, the functionality and usage of MET, as well as a suitable input syntax for flexible reasoning with modalities, are presented.
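MET's actual higher-order encoding is not shown here; the following Python sketch only illustrates the underlying Kripke-style idea of such embeddings on a small finite frame, with invented worlds and accessibility relation: modal formulas become world-indexed predicates, and the box and diamond operators quantify over accessible worlds.

```python
# A tiny finite Kripke frame: worlds and an accessibility relation R.
WORLDS = {"w1", "w2", "w3"}
R = {("w1", "w2"), ("w1", "w3"), ("w2", "w2")}

def box(phi):
    """Necessity: phi holds in every world accessible from w."""
    return lambda w: all(phi(v) for v in WORLDS if (w, v) in R)

def dia(phi):
    """Possibility: phi holds in some world accessible from w."""
    return lambda w: any(phi(v) for v in WORLDS if (w, v) in R)

def implies(phi, psi):
    return lambda w: (not phi(w)) or psi(w)

# A propositional letter is just the set of worlds where it is true.
p = lambda w: w in {"w2", "w3"}

# Evaluate the formula box(p) -> dia(p) at world w1.
formula = implies(box(p), dia(p))
print(formula("w1"))   # True on this frame
```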

16:18-17:30 Session 17: Doctoral Consortium
Location: MSA 3.520
16:18
Improving Probabilistic Rules Compilation using PRM

ABSTRACT. Widely adopted for more than 20 years in industrial settings, business rules offer non-IT users the opportunity to define decision-making policies in a simple and intuitive way. To facilitate their use, systems known as Business Rule Management Systems (BRMS) have been developed, separating the business logic from the application logic. While suitable for processing structured and complete data, BRMS face difficulties when data are incomplete or uncertain. This study proposes a new approach for the integration of probabilistic reasoning into IBM Operational Decision Manager (ODM), IBM's BRMS, in particular through the introduction of a notion of risk, making the compilation phase more complex but increasing the expressiveness of business rules.
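The PRM-based compilation itself is not sketched here; the snippet below only illustrates, with invented names and numbers, the general notion of attaching a risk threshold to the evaluation of a business rule over uncertain data.

```python
from dataclasses import dataclass

@dataclass
class Applicant:
    income: float
    income_confidence: float   # invented: P(the recorded income value is correct)

RISK_THRESHOLD = 0.1           # accepted risk of firing the rule on wrong data

def approve(applicant: Applicant) -> bool:
    # Deterministic rule: approve if income > 50000.
    condition_holds = applicant.income > 50_000
    # Probabilistic twist: only fire if the risk of acting on wrong data is low.
    risk_of_error = 1.0 - applicant.income_confidence
    return condition_holds and risk_of_error <= RISK_THRESHOLD

print(approve(Applicant(income=62_000, income_confidence=0.95)))   # True
print(approve(Applicant(income=62_000, income_confidence=0.80)))   # False: too risky
```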

16:36
Computational Hermeneutics: Using Automated Theorem Proving for the Logical Analysis of Natural-Language Arguments

ABSTRACT. While there have been major advances in automated theorem proving (ATP) in recent years, its main field of application has mostly remained confined to mathematics and hardware/software verification. I argue that the use of ATP in philosophy can also be very fruitful, not only because of the obvious quantitative advantages of automated reasoning tools (e.g. reducing by several orders of magnitude the time needed to test an argument's validity), but also because it enables a novel approach to the logical analysis of arguments. This approach, which I have called computational hermeneutics, draws its inspiration from work in the philosophy of language such as Donald Davidson's theory of radical interpretation and contemporary so-called inferentialist theories of meaning, which do justice to the inherent circularity of linguistic understanding: the whole is understood (compositionally) on the basis of its parts, while each part is understood only in the (inferential) context of the whole. Computational hermeneutics is thus a holistic, iterative, trial-and-error enterprise, where we evaluate the adequacy of some candidate formalization of a sentence by computing the logical validity of the whole argument. We start with formalizations of some simple statements (taking them as tentative) and use them as stepping stones on the way to formalizing the argument's other sentences, repeating the procedure until arriving at a state of reflective equilibrium: a state where our beliefs have the highest degree of coherence and acceptability.

16:54
Towards knowledge-based integration and visualization of geospatial data using Semantic Web technologies

ABSTRACT. Geospatial data have become pervasive and indispensable for various real-world applications such as urban planning, traffic analysis and emergency response. In this respect, data integration and knowledge transfer are two prominent issues for augmenting the use of geospatial data and knowledge. In order to address these issues, Semantic Web technologies have been considerably adopted in the geospatial domain, and there are currently still several initiatives investigating the benefits brought by the adoption of Semantic Web technologies. In this context, this paper showcases and discusses knowledge-based geospatial data integration and visualization leveraging ontologies and rules. Specifically, we use the Linked Data paradigm for modelling geospatial data, and then create a knowledge base for the visualization of such data in terms of scaling, data portrayal and geometry source. This approach facilitates the transfer, interpretation and reuse of visualization knowledge for geospatial data. In the meantime, we also identify some challenges in modelling geospatial knowledge and transferring such knowledge to other domains as future work.
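As a small, self-contained illustration of the modelling idea (not the authors' ontology or rules), the following Python sketch uses rdflib to attach portrayal and scale information to a feature class as Linked Data and retrieves it with SPARQL; the ex: vocabulary and property names are invented.

```python
from rdflib import Graph, Literal, Namespace, RDF

EX = Namespace("http://example.org/geo#")     # invented vocabulary for this sketch
g = Graph()
g.bind("ex", EX)

# A geospatial feature modelled as Linked Data, with a simplified WKT geometry
# and visualization knowledge (portrayal, display scale) attached to its class.
g.add((EX.cityHall, RDF.type, EX.Building))
g.add((EX.cityHall, EX.hasGeometry, Literal("POINT(6.13 49.61)")))
g.add((EX.Building, EX.minDisplayScale, Literal(1000)))
g.add((EX.Building, EX.portrayal, Literal("icon:building")))

# Retrieve, for each feature, the portrayal and scale prescribed by its class.
query = """
SELECT ?feature ?geom ?portrayal ?scale WHERE {
    ?feature a ?cls ;
             ex:hasGeometry ?geom .
    ?cls ex:portrayal ?portrayal ;
         ex:minDisplayScale ?scale .
}
"""
for row in g.query(query, initNs={"ex": EX}):
    print(row.feature, row.geom, row.portrayal, row.scale)
```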

17:12
A new approach to conceive ASP solvers

ABSTRACT. Answer set programming (ASP) is a non-monotonic declarative programming paradigm that is widely used for the formulation of problems in artificial intelligence. The ASP paradigm also provides a general framework for the resolution of decision and optimization problems. The idea behind ASP is to represent a problem as a logic program and solve that problem by computing stable models. In our work, we propose a new method for searching for stable models of logic programs. This method is based on a relatively new semantics that has not been exploited yet; it captures and extends the stable model semantics. The method performs a DPLL enumerative process only on a restricted set of literals called the strong back-door (STB). This method has the advantage of using a Horn clause representation of the same size as the input logic program, and it has constant space complexity. It avoids the overhead of loop management from which most ASP solvers based on the Clark completion suffer.
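The method itself is only summarized in the abstract; the sketch below illustrates, on an invented toy program, just the general pattern of enumerating assignments over a small "back-door" set of atoms and closing the remaining Horn rules by polynomial forward chaining. It does not implement the paper's semantics or any stability check.

```python
from itertools import product

# Toy Horn rules as (head, body); head None stands for a constraint ":- body".
RULES = [
    ("p", ["a"]),          # p :- a.
    ("q", ["p", "b"]),     # q :- p, b.
    (None, ["q", "c"]),    # :- q, c.
]
BACKDOOR = ["a", "b", "c"]   # enumerate truth values only for these atoms

def forward_chain(facts, rules):
    """Least model of the Horn rules given the chosen back-door facts."""
    model = set(facts)
    changed = True
    while changed:
        changed = False
        for head, body in rules:
            if head is not None and head not in model and all(b in model for b in body):
                model.add(head)
                changed = True
    return model

def violates_constraint(model, rules):
    return any(head is None and all(b in model for b in body) for head, body in rules)

for assignment in product([False, True], repeat=len(BACKDOOR)):
    facts = {atom for atom, value in zip(BACKDOOR, assignment) if value}
    model = forward_chain(facts, RULES)
    if not violates_constraint(model, RULES):
        print("candidate model:", sorted(model))
```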

19:00-22:30 River cruise conference banquet

The conference banquet of LuxLogAI will take place on 18 Sep on a boat on the Moselle river during a cruise in the evening.

The boat will leave from Remich, the pearl of the Moselle, and take us in the direction of Schengen in the tri-border area of France – Germany – Luxembourg, where the so-called Schengen Agreement was signed on a passenger vessel on 14th June 1985.

See the LuxLogAI web pages for details.