LUXLOGAI 2018: LUXEMBOURG LOGIC FOR AI SUMMIT
PROGRAM FOR TUESDAY, SEPTEMBER 18TH, 2018


08:15-09:30 Opening of Registration (LuxLogAI)

The LuxLogAI registration desk will open at 8:15 am every day from Monday, Sep 17, to Friday, Sep 21. Please pick up your conference badges here. The registration desk will also help you with any issues or problems throughout the day.

See also the LuxLogAI conference booklet for further information.

09:00-09:10 Session 9A: Lifetime Achievement Award and Most Influential DM People (DecisionCAMP)

This year, for the first time, one of the major contributors to the Business Rules and Decision Management movement will be presented with the Lifetime Achievement Award during DecisionCAMP-2018. The attendees will also vote for the “Most Influential DM People”.

See also: https://dmcommunity.org/top-ten/

Location: MSA 3.110
09:00-09:30 Session 9B: Applications of Rules and Neural Networks (RuleML+RR)
Location: MSA 3.520
09:00
Mixing Logic Programming and Neural Networks to Support Neurological Disorders Analysis

ABSTRACT. The incidence of neurological disorders is constantly growing, and the use of Artificial Intelligence techniques to support neurologists is steadily increasing. Deductive reasoning and neural networks are two prominent areas in AI that can support discovery processes; unfortunately, they have been considered separate research areas for a long time. In this paper we start from a specific neurological disorder, namely Multiple Sclerosis, to define a generic framework showing the potentially disruptive impact of mixing rule-based systems and neural networks. The ambitious goal is to boost the interest of the research community in developing a tighter integration of these two approaches.

09:10-10:30 Session 10 (DecisionCAMP)
Location: MSA 3.110
09:10
Spreadsheets in Decision Management

ABSTRACT. Spreadsheets have been used since the beginning of time. Many businesses continue to maintain their decision logic in this beloved format. From that point on, humans may have to refer to them by hand, or they might be consumed by automated systems. In this presentation, we will focus on the integration of spreadsheets with Decision Management Systems. In some cases, they translate to business rules. In others, spreadsheets remain the format of choice for maintenance, while systems must learn how to interpret them at runtime. We will demonstrate both scenarios through live examples.
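
To make the runtime-interpretation scenario more concrete, here is a minimal Python sketch that evaluates a spreadsheet-style decision table with a first-hit policy. The table, column names and discount values are invented for illustration and are not taken from the talk.

```python
# Minimal sketch (not the presenters' implementation): interpreting a
# spreadsheet-style decision table at runtime. All rules below are hypothetical.

decision_table = [
    # conditions: customer_type, minimum order total  ->  outcome: discount
    {"customer_type": "gold",   "min_order_total": 100, "discount": 0.10},
    {"customer_type": "gold",   "min_order_total": 0,   "discount": 0.05},
    {"customer_type": "silver", "min_order_total": 200, "discount": 0.05},
    {"customer_type": "silver", "min_order_total": 0,   "discount": 0.00},
]

def decide_discount(customer_type: str, order_total: float) -> float:
    """Return the discount of the first matching row (first-hit policy)."""
    for row in decision_table:
        if row["customer_type"] == customer_type and order_total >= row["min_order_total"]:
            return row["discount"]
    return 0.0  # default when no rule matches

print(decide_discount("gold", 150))   # 0.1
print(decide_discount("silver", 50))  # 0.0
```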

09:50
Managing a Decision Zoo (Requirements Engineering for Complex Decisions)

ABSTRACT. In recent years, the use of DMN in decision management has increased, but its scope still tends to be limited to simple requirements and decision logic. Our talk aims to demonstrate how to handle complex decisions and how to model them in a structured way.

Based on our own experiences, we aim to provide you with an overview of how to model large bodies of human knowledge as consistent, structured decision models. We will present how to gather business rule activities from analyzed business processes formalized in BPMN. Further, we would like to explain our approach to building DRDs that arise from these activities. Although this sounds quite uncomplicated, we identified that the real challenge in building DRDs is to consider the whole cardinality of business requirements.

We will demonstrate our approach to building complex decisions partitioned into multiple DRDs. To manage them successfully, we suggest that particular tools and structured procedures are necessary. Complex decisions especially require comprehensive tools and sophisticated processes.

Instead of describing the decision logic level with decision tables, we will rather use boxed expressions, contexts, complex data structures and FEEL. In particular, we will demonstrate the logic structure of our decisions and BKMs as well as some of our best practices. Finally, we explain how business users can test their models in a timely, tool-supported manner: we show how a web application, based on RedHat Drools, allows users to define and execute test cases.

09:35-10:30 Session 12: Tuesday morning invited talk (GCAI and RuleML+RR)
Location: MSA 3.520
09:35
Bridging Trouble

ABSTRACT. Some ten years ago, when I left Xerox PARC to work for a search startup, I had not realized how much of the work I had done until then was not mine and could not be continued, for licensing reasons. For almost nine years at PARC I worked on a project to create logic from language, the Bridge project, using a collection of technologies developed by a strong group of researchers over at least two decades, under the leadership of Bobrow and Kaplan. I decided that I needed to redo my part of this work using only open source tools, as I was not ready to give up on the idea of logic from language. I gave a talk at SRI explaining my reasons and plans, published in ENTCS as "Bridges from Language to Logic: Concepts, Contexts and Ontologies", LSFA2010. This talk recalls and unifies some of the research that came out of this project and that is scattered across applications. We focus on a methodology, based on Universal Dependencies, for producing specific domain knowledge from text; we hope to improve it, but it is already producing promising initial results.

10:30-11:00 Coffee Break
11:00-12:20 Session 13A (DecisionCAMP)
Location: MSA 3.110
11:00
Automatic judgment of decision authority using OpenRules

ABSTRACT. In China and Southeast Asia, as well as Japan, there are many companies with complex approval authorization, so complicated controls such as complex conditional branches, consultation, delegation of approval authority and concurrent administration are required when introducing a workflow system.

However, since the configuration patterns differ for each company, it is difficult to provide all of them as standard functions of the workflow; so far, they have often been customized individually.

Therefore, in order to solve these problems, automatic judgment of decision authority was realized by combining a workflow system with a rule engine. This makes it possible to control the conditional branching without coding and to assign decision authority dynamically. Since automatic processing of the workflow can be realized without customization, the cost of introducing the workflow can be drastically reduced.

In this paper, we explain a concrete method of cooperation between workflow and rule engine, taking "intra-mart" and "OpenRules" as an example.

11:40
Explaining the Unexplainable: Using DMN to Justify the Outcomes of Black-box Analytics

ABSTRACT. Ethics, regulation and software safety are just a few of the pressures pushing organizations to make automated decisions more transparent, especially when people’s lives are directly and significantly impacted by the outcome. This transparency is essential to gain some insight into, and comfort from, why a particular decision was made. In some cases it may soon be a legal requirement. At the same time, automated decisions are increasingly relying on sophisticated analytics and machine learning models to increase their predictive power. Although a few machine learning models are noted for their transparency, most of the more powerful ones are, by their very nature, inscrutable: their internal state gives no human-readable clue regarding the reason for their outcome. This opacity may limit their application. Can we bring the noted transparency of DMN to bear on this problem?

Academia and industry have proposed fledgling ideas to make black-box machine learning models explainable, and in this presentation we explore how decision modelling in DMN can be used both to improve the transparency of an analytic model and to explain outcomes. We will focus, using practical examples, on techniques that attempt to provide a post-hoc explanation of an analytic, independently of the details of that analytic.

At last year’s DecisionCAMP we saw how DMN can be used to contextualize analytics, being explicit about its inputs and how it is used. Whilst this is very useful, it doesn’t address how the analytical decision can justify its outcome in a specific case. This presentation will explore this and look at how DMN can be used to explain outcomes using the latest ideas currently available for analytic explanation.

Attendees of this session will see the importance of analytic transparency and gain an overview of some of the explanation techniques. The use of DMN as a vehicle for representing the explanation will be demonstrated using a real, publicly available dataset.
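
As a concrete illustration of one family of post-hoc, model-agnostic explanation techniques, the sketch below computes permutation importance for an arbitrary black-box predictor. This is a generic example, not the specific method presented in the talk; the metric and parameter names are assumptions.

```python
# Hedged sketch: permutation importance for any black-box predictor.
import numpy as np

def permutation_importance(model_predict, X, y, metric, n_repeats=5, seed=0):
    """Score drop when each feature column is shuffled: larger drop = more important."""
    rng = np.random.default_rng(seed)
    baseline = metric(y, model_predict(X))
    importances = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        drops = []
        for _ in range(n_repeats):
            Xp = X.copy()
            Xp[:, j] = rng.permutation(Xp[:, j])   # destroy the feature's information
            drops.append(baseline - metric(y, model_predict(Xp)))
        importances[j] = np.mean(drops)
    return importances

# usage with any black-box `model.predict` and an accuracy-like metric:
# importances = permutation_importance(model.predict, X_test, y_test,
#                                      lambda y, p: np.mean(y == p))
```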

11:00-12:30 Session 13B: Tuesday morning second session (GCAI)
Location: MSA 4.530
11:00
Towards a Closer Integration of Dynamic Programming and Constraint Programming

ABSTRACT. Three connections between Dynamic Programming (DP) and Constraint Programming (CP) have previously been explored in the literature: DP-based global constraints, DP-like memoisation during tree search to avoid recomputing results, and subsumption of both by bucket elimination. In this paper we propose a new connection: many discrete DP algorithms can be directly modelled and solved as a constraint satisfaction problem (CSP) without backtracking. This has applications including the design of monolithic CP models for bilevel optimisation. We show that constraint filtering can occur between leader and follower variables in such models, and demonstrate the method on network interdiction.
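
For readers unfamiliar with the kind of discrete DP the paper has in mind, here is a plain-Python 0/1 knapsack recurrence. In the proposed CP encoding, each table entry would become a variable fixed by an equality constraint, so propagation alone (without backtracking) determines the whole table. The instance is invented, and this is only an illustrative sketch, not the paper's model.

```python
# Plain-Python illustration of a discrete DP (0/1 knapsack); each table entry
# corresponds to one CSP variable fixed by the Bellman recurrence.
values  = [6, 10, 12]
weights = [1, 2, 3]
capacity = 5

# best[i][c] = best value using the first i items with remaining capacity c
best = [[0] * (capacity + 1) for _ in range(len(values) + 1)]
for i in range(1, len(values) + 1):
    for c in range(capacity + 1):
        skip = best[i - 1][c]
        take = best[i - 1][c - weights[i - 1]] + values[i - 1] if c >= weights[i - 1] else 0
        best[i][c] = max(skip, take)   # in a CP model: one constraint per (i, c)

print(best[len(values)][capacity])     # 22
```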

11:30
Standard and Non-Standard Inferences in the Description Logic FL0 Using Tree Automata

ABSTRACT. Although quite inexpressive, the description logic (DL) FL0, which provides only conjunction, value restriction and the top concept as concept constructors, has an intractable subsumption problem in the presence of terminologies (TBoxes): subsumption reasoning w.r.t. acyclic FL0 TBoxes is coNP-complete, and becomes even ExpTime-complete in case general TBoxes are used. In the present paper, we use automata working on infinite trees to solve both standard and non-standard inferences in FL0 w.r.t. general TBoxes. First, we give an alternative proof of the ExpTime upper bound for subsumption in FL0 w.r.t. general TBoxes based on the use of looping tree automata. Second, we employ parity tree automata to tackle non-standard inference problems such as computing the least common subsumer and the difference of FL0 concepts w.r.t. general TBoxes.

12:00
Historical Gradient Boosting Machine
SPEAKER: Zeyu Feng

ABSTRACT. We introduce the Historical Gradient Boosting Machine with the objective of improving the convergence speed of gradient boosting. Our approach is analyzed from the perspective of numerical optimization in function space and considers gradients in previous steps, which have rarely been appreciated by traditional methods. To better exploit the guiding effect of historical gradient information, we incorporate both the accumulated previous gradients and the current gradient into the computation of descent direction in the function space. By fitting to the descent direction given by our algorithm, the weak learner could enjoy the advantages of historical gradients that mitigate the greediness of the steepest descent direction. Experimental results show that our approach improves the convergence speed of gradient boosting without significant decrease in accuracy.
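
A rough, illustrative sketch of the idea of blending the current negative gradient with accumulated past gradients in a boosting loop is shown below. The mixing weight `beta`, the use of squared loss and all other details are assumptions made for illustration, not the authors' exact formulation.

```python
# Minimal sketch of gradient boosting where the fitted target mixes the
# current negative gradient with an accumulated history of past gradients.
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def historical_gradient_boost(X, y, n_rounds=50, lr=0.1, beta=0.5, max_depth=3):
    pred = np.zeros_like(y, dtype=float)       # F_0(x) = 0
    history = np.zeros_like(y, dtype=float)    # accumulated past gradients
    trees = []
    for _ in range(n_rounds):
        grad = y - pred                        # negative gradient of squared loss
        target = grad + beta * history         # blend current and historical gradients
        tree = DecisionTreeRegressor(max_depth=max_depth).fit(X, target)
        pred += lr * tree.predict(X)
        history += grad                        # accumulate gradients seen so far
        trees.append(tree)
    return trees, pred

# toy usage on synthetic data
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = X[:, 0] * 2.0 + np.sin(X[:, 1]) + rng.normal(scale=0.1, size=200)
_, fitted = historical_gradient_boost(X, y)
print(float(np.mean((y - fitted) ** 2)))       # training MSE
```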

11:00-12:30 Session 13C: Description Logics (RuleML+RR)
Location: MSA 3.520
11:00
Justifications under the Fixed-Domain Semantics

ABSTRACT. The fixed-domain semantics for OWL and description logic has been introduced to open up the OWL modeling and reasoning tool landscape for use cases resembling constraint satisfaction problems. While standard reasoning under this new semantics is by now rather well-understood theoretically and supported practically, more elaborate tasks like the computation of justifications have not been considered so far, although they are highly important in the modeling phase.

In this paper, we compare three different approaches to this problem: one using standard OWL technology employing an axiomatization of the fixed-domain semantics, one using our dedicated fixed-domain reasoner "Wolpertinger" in combination with standard justification computation technology, and one where the problem is encoded entirely into answer-set programming.

11:30
Cardinality Restrictions within Description Logic Connection Calculi

ABSTRACT. Recently, we have proposed the θ-connection method for the description logic (DL) ALC, the ALC θ-CM. It replaces the usage of Skolem terms and unification by additional annotation and introduces blocking, a typical feature of DL provers, by a new rule, to ensure termination in the case of cyclic ontologies. In this work, we enhance this calculus and its representation to take on ALCHQ++, the extended fragment that includes role Hierarchies, Qualified number restrictions and (in)equalities. The calculus' main enhancement lies in the introduction of (in)equalities, as well as the redefinition of connection so as to accommodate number restrictions, either explicitly or expressed through equalities. The application of Bibel’s eq-connections (equality connections) constitutes a first solution to deal with (in)equalities. Termination, soundness and completeness of the calculus are proven, complementing the proofs presented for the ALC θ-CM.

12:00
On the Impact and Proper Use of Heuristics in Test-Driven Ontology Debugging

ABSTRACT. Given an ontology that does not meet required properties such as consistency or the (non-)entailment of certain axioms, Ontology Debugging aims at identifying a set of axioms, called diagnosis, that must be properly modified or deleted in order to resolve the ontology’s faults. As there are, in general, large numbers of competing diagnoses and the choice of each diagnosis leads to a repaired ontology with different semantics, Test-Driven Ontology Debugging (TOD) aims at narrowing the space of diagnoses until a single (highly probable) one is left. To this end, TOD techniques automatically generate a sequence of queries to an interacting oracle (domain expert) about (non-)entailments of the correct ontology. Diagnoses not consistent with the answers are discarded. To minimize debugging cost (oracle effort), various heuristics for selecting the best next query have been proposed. We report preliminary results of extensive ongoing experiments with a set of such heuristics on real-world debugging cases. In particular, we try to answer questions such as "Is some heuristic always superior to all others?", "On which factors does the (relative) performance of the particular heuristics depend?" or "Under which circumstances should I use which heuristic?"

12:30-14:00 Lunch Break
14:00-15:20 Session 14A (DecisionCAMP)
Location: MSA 3.110
14:00
Fusion Project – replacing a legacy system elephant, one bite at a time

ABSTRACT. Medscheme is a Medical Aid Administrator in South Africa, administering several Medical Schemes. At the heart of the business is the processing of medical claims from members and healthcare providers. Nexus is the bespoke system, over 20 years old, on which the Medical Aid Administration business is conducted. Documentation of Nexus, while it was changing and growing fast, was historically lacking. Maintaining and changing it has become difficult, as many of the rules are hidden in data and procedural code. This affects business agility and is time and labour intensive: a team of 30 people maintains the claims module alone.

This presentation will elaborate on how the Fusion Project set out to solve the problem using Decision Analysis and DMN: how we adopted the methodology, made it part of our lives, and implemented it to replace parts of the claims process. We will showcase the technical implementation, using FICO’s Blaze Advisor, and the modern architecture used to host and integrate it, and elaborate on how we dealt with performance, accuracy testing, and coexistence with the legacy system. One phase of this is live, and we share the challenges, successes and progress of this phase as well as of the ongoing phases.

14:40
Implementing Decision Modeling in a Large Organization

ABSTRACT. Starting over five years ago, a small group of early adopters working largely autonomously within a large organization began implementing Decision Modeling. Over that time, and with little tool support, the group has obtained many successes by leveraging the principles of Decision Modeling. Even so, there have been, and still remain, significant doubts and a great deal of skepticism regarding the Decision Modeling methodology’s capabilities at larger scale, from both business and technology. This presentation will describe the journey that Decision Modeling has taken in a large organization: from the first set of rule families to the current state. The presentation will be a case study that covers lessons learned in overcoming organizational hesitation and resistance to adopting Decision Modeling in a large company.

14:00-15:30 Session 14B: Tuesday afternoon first session (GCAI)
Location: MSA 4.530
14:00
Interpretability of a Service Robot: Enabling User Questions and Checkable Answers

ABSTRACT. Service robots are becoming more and more capable but at the same time they are opaque to their users. Once a robot starts executing a task it is hard to tell what it is doing or why. To make robots more transparent to their users we propose to expand the capabilities of robots to not only execute tasks but also answer questions about their experience.

During execution, our CoBot robots record log files. We propose to use these files as a recording of the robot's experience. Log files record the experience of the robot in terms of its internals. To process information from the logs, we define Log Primitive Operations (LPOs) that the robot can autonomously perform. Each LPO is defined in terms of an operation and a set of filters. We frame the problem of understanding questions about the robot's past experiences as grounding input sentences to LPOs. To do so, we introduce a probabilistic model to ground sentences to these primitives. We evaluate our approach on a corpus of 133 sentences, showing that our method is able to learn the meaning of users' questions.

Finally, we introduce the concept of checkable answers to have the robot provide answers that better explain the computation performed to achieve the reported result.

14:30
A Data-Driven Metric of Hardness for WSC Sentences

ABSTRACT. The Winograd Schema Challenge (WSC) --- the task of resolving pronouns in certain forms of sentences where shallow parsing techniques seem not to be directly applicable --- has been proposed as an alternative to the Turing Test. According to Levesque, having access to a large corpus of text would likely not help much in the WSC. Among a number of attempts to tackle this challenge, one particular approach has demonstrated the plausibility of using commonsense knowledge automatically acquired from raw text in English Wikipedia. Here, we present the results of a large-scale experiment that shows how the performance of that particular automated approach varies with the availability of training material. We compare the results of this experiment with two studies: one from the literature that investigates how adult fluent speakers tackle the WSC, and one that we design and undertake to investigate how teenage non-fluent speakers tackle the WSC. We find that the performance of the automated approach correlates positively with the performance of humans, suggesting that the performance of the particular automated approach could be used as a metric of hardness for WSC instances.

15:00
Classifier Labels as Language Grounding for Explanations

ABSTRACT. Advances in state-of-the-art techniques, including convolutional neural networks (CNNs), have led to improved perception in autonomous robots. However, these new techniques make a robot's decision-making process obscure even for experts. Our goal is to automatically generate natural language explanations of a robot's state and decision-making algorithms in order to help people understand how it made its decisions. Generating natural language explanations is particularly challenging for perception and other high-dimensional classification tasks because 1) we lack a mapping from features to language and 2) there is a large number of features which could be explained. We present a novel approach to generating explanations that first finds the important features that most affect the classification output and then utilizes a secondary detector to label (i.e., generate natural language groundings for) only those features. We demonstrate our explanation algorithm's ability to explain our service robot's building-floor identification classifier.

14:00-15:30 Session 14C: KR Systems and Applications (RuleML+RR)
Location: MSA 3.520
14:00
Integrating Rule-Based AI Tools into Mainstream Game Development

ABSTRACT. Rule-based declarative formalisms enjoy several advantages when compared with imperative solutions, especially when dealing with AI-based application development: solid theoretical bases, no need for algorithm design or coding, explicit and easily modifiable knowledge bases, executable declarative specifications, fast prototyping, quick error detection, modularity. For these reasons, ways of combining declarative paradigms, such as Answer Set Programming (ASP), with traditional ones have been studied significantly in recent years; there are however relevant contexts in which this road is unexplored, such as the development of real-time games. In such a setting, the strict requirements on reaction times, the presence of computer-human interactivity and a generally increased impedance between the two development paradigms make the task nontrivial. In this work we illustrate how to embed rule-based reasoning modules into the well-known Unity game development engine. To this end, we present an extension of EmbASP, a framework to ease the integration of declarative formalisms with generic applications. We prove the viability of our approach by developing a proof-of-concept Unity game that makes use of ASP-based AI modules.

14:18
Faceted Answer-Set Navigation

ABSTRACT. Even for small logic programs, the number of resulting answer sets can be tremendous. In such cases, users might be incapable of comprehending the space of answer sets as a whole or of identifying a specific answer set according to their needs.

To overcome this difficulty, we propose a general formal framework that takes an arbitrary logic program as input and allows for navigating the space of answer sets in a systematic, interactive way akin to faceted browsing. The navigation is carried out stepwise, where each step narrows down the remaining solutions, eventually arriving at a single one. We formulate two navigation modes, one strictly conflict-avoiding, and a "free" mode, where conflicting selections of facets might occur. For the latter mode, we provide efficient algorithms for resolving the conflicts. We provide an implementation of our approach and demonstrate that our framework is able to handle logic programs for which it is currently infeasible to retrieve all answer sets.
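
The stepwise narrowing described above can be illustrated with a toy Python sketch over a hand-written collection of answer sets (represented simply as sets of atoms). The program and atoms are invented; the real framework of course operates via an ASP solver rather than an explicit list.

```python
# Toy sketch of faceted navigation over a fixed collection of answer sets.
answer_sets = [
    frozenset({"a", "b"}),
    frozenset({"a", "c"}),
    frozenset({"b", "c"}),
]

def facets(remaining):
    """Atoms that appear in some but not all remaining answer sets."""
    union = set().union(*remaining)
    inter = set.intersection(*map(set, remaining))
    return union - inter

def narrow(remaining, atom, include=True):
    """Keep only answer sets that (do not) contain the chosen facet."""
    return [s for s in remaining if (atom in s) == include]

current = answer_sets
print(facets(current))            # {'a', 'b', 'c'}
current = narrow(current, "a")    # choose facet a
print(current)                    # answer sets containing a
print(facets(current))            # remaining facets after the step
```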

14:36
An optimized KE-tableau-based system for reasoning in the description logic $\shdlssx$

ABSTRACT. We present a \ke-based system for the principal TBox and ABox reasoning problems of the description logic called $\dlssx$, in short $\shdlssx$. The logic $\shdlssx$, representable in the decidable multi-sorted quantified set-theoretic fragment $\flqsr$, combines the high scalability and efficiency of rule languages such as the Semantic Web Rule Language (SWRL) with the expressivity of description logics. In fact it supports, among other features, Boolean operations on concepts and roles, role constructs such as the product of concepts and role chains on the left hand side of inclusion axioms, and role properties such as transitivity, symmetry, reflexivity, and irreflexivity.

Our algorithm is based on a variant of the \ke system for sets of universally quantified clauses, where the KE-elimination rule is generalized in such a way as to incorporate the $\gamma$-rule. The novel system, called \keg, turns out to be an improvement of the system introduced in \cite{RR2017}, which includes a preliminary phase for the elimination of universal quantifiers. Suitable benchmark test sets executed on C++ implementations of the two systems show that in several cases the performance of the \keg-based reasoner is up to about 400% better than that of the other system.

14:54
Clinical Decision Support based on OWL Queries in a Knowledge-as-a-Service Architecture

ABSTRACT. Due to the need to improve access to knowledge and to establish means for sharing and organizing data in the health domain, this research proposes an architecture based on the Knowledge-as-a-Service (KaaS) paradigm. It can be used in the medical field to offer centralized access to ontologies and other means of knowledge representation. In this paper, a detailed description of each part of the architecture and its implementation is given, highlighting its main features and interfaces. In addition, a communication protocol between the knowledge consumer and the knowledge service provider is specified and used. One possible use of the proposed architecture is to provide clinical decision support, and this is demonstrated via OWL queries that help decision making. Thus, this research contributed to the creation of a new architecture, called H-Kaas, which established itself as a platform capable of managing multiple data sources and knowledge models, centralizing access through an API that can be instantiated for different purposes, such as clinical decision support, education, etc.

15:12
Answer Set Programming Modulo `Space-Time'

ABSTRACT. We present ASP Modulo `Space-Time', a novel declarative representational and computational framework to perform commonsense reasoning about regions with both spatial and temporal components. Supported are capabilities for mixed qualitative-quantitative reasoning, consistency checking, and inferring compositions of space-time relations; these capabilities combine and synergise for applications in a range of AI application areas. The resulting system for ASP Modulo Space-Time is the only general KR-based method for declaratively reasoning about the dynamics of `space-time' regions as first-class objects. We present an empirical evaluation (with scalability and robustness results), as well as an application in the robotics domain.

15:30-16:00 Coffee Break
15:45-16:25 Session 15A (DecisionCAMP)
Location: MSA 3.110
15:45
Introduction and Updates on DMN TCK

ABSTRACT. The Decision Model and Notation (DMN) standard allows organizations to describe, model and execute business decisions; DMN also enables the interchange of defined models across organizations via the standardized XML interchange format. The DMN standard provides three levels of Conformance, which aim to define helpful scopes of support for vendors providing implementations of the standard; this in turn benefits organizations adopting the standard, giving them a responsible awareness of the level of support provided by the chosen tools and implementations. As the DMN standard receives more traction from Decision Management practitioners and increased adoption from organizations, a verifiable method for testing the level of Conformance claimed by an implementation would generally benefit all actors involved.

The DMN Technology Compatibility Kit (DMN TCK) is a community-led proposal for a verifiable and executable method to demonstrate the Conformance level of support provided by a vendor-supplied DMN implementation. In addition, this method provides finer-grained details on the actual support for specific DMN constructs in each implementation.

The DMN TCK working group is composed of vendors and practitioners of DMN, with the goal of assisting and ensuring Conformance to the specification by defining test cases and expected results and by providing tools to run these tests and validate results; the outcome also represents an additional and pragmatic way to recognize and publicize vendor success. Joining the TCK is free; the group holds weekly conference calls, and new members are always welcome.

15:45-16:00 Session 15B: AI and Art: AI+A serendipity? Exhibition Opening (LuxLogAI)

Presented by curator Yolanda Spinola-Elias. Featured artists are Sergio Albiac, José Manuel Berenguer and Roc Parés, together with a selection of video artworks from the Loop Festival of Videoart, some of them related to AI/new technologies and their impact on society. The exhibition includes an AI+A timeline art installation.

Location: MSA 3.180
16:00-17:30 Session 16A: Tuesday afternoon second session (GCAI)
Location: MSA 4.530
16:00
Analysis of Attack Graph Representations for Ranking Vulnerability Fixes

ABSTRACT. Software vulnerabilities in organizational computer networks can be leveraged by an attacker to gain access to sensitive information. As fixing all vulnerabilities requires much effort, it is critical to rank the possible fixes by their importance. Centrality measures over logical attack graphs, or over the network connectivity graph, often provide a scalable method for finding the most critical vulnerabilities.

In this paper we suggest an analysis of the planning graph, originating in classical planning, as an alternative to the logical attack graph, to improve the ranking produced by centrality measures. The planning graph also allows us to enumerate the set of possible attack plans and, hence, directly count the number of attacks that use a given vulnerability. We evaluate a set of centrality-based ranking measures over the logical attack graph and the planning graph, showing that metrics computed over the planning graph reduce the set of shortest attack plans more rapidly.
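
As a small illustration of centrality-based ranking over an attack graph, the snippet below scores the nodes of a tiny invented graph with betweenness centrality using networkx; the graph and node names are hypothetical and unrelated to the paper's benchmarks.

```python
# Illustrative sketch: ranking attack-graph nodes by betweenness centrality.
import networkx as nx

attack_graph = nx.DiGraph()
attack_graph.add_edges_from([
    ("internet", "web_server_vuln"),
    ("web_server_vuln", "app_server_vuln"),
    ("app_server_vuln", "db_credentials"),
    ("internet", "vpn_vuln"),
    ("vpn_vuln", "db_credentials"),
    ("db_credentials", "sensitive_data"),
])

centrality = nx.betweenness_centrality(attack_graph)
for node, score in sorted(centrality.items(), key=lambda kv: -kv[1]):
    print(f"{node:18s} {score:.3f}")   # higher score = more critical to fix
```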

16:30
Multi-Armed Bandit Algorithms for a Mobile Service Robot's Spare Time in a Structured Environment

ABSTRACT. We assume that service robots will have spare time in between scheduled user requests, which they could use to perform additional unrequested services in order to learn a model of users' preferences and receive rewards. However, a mobile service robot is constrained by the need to travel through the environment to reach users in order to perform services for them, as well as the need to carry out scheduled user requests. We assume service robots operate in structured environments comprised of hallways and floors, resulting in scenarios where an office can be conveniently added to the robot's plan at a low cost, which affects the robot's ability to plan and learn.

We present two algorithms, Planning Thompson Sampling and Planning UCB1, which are based on existing algorithms used in multi-armed bandit problems, but are modified to plan ahead considering the time and location constraints of the problem. We compare them to existing versions of Thompson Sampling and UCB1 in two environments representative of the types of structures a robot will encounter in an office building. We find that our planning algorithms outperform the original naive versions in terms of both reward received and the effectiveness of the model learned in a simulation. The difference in performance is partially due to the fact that the original algorithms frequently miss opportunities to perform services at a low cost for convenient offices along their path, while our planning algorithms do not.
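
For readers unfamiliar with the baseline, here is a minimal UCB1 loop with a crude travel-cost penalty added to the arm index to convey the flavour of weighing convenience. The penalty term, `cost_weight` and the toy offices are simplifications for illustration, not the authors' Planning UCB1 or Planning Thompson Sampling algorithms.

```python
# Minimal UCB1 sketch with an added travel-cost penalty (illustration only).
import math, random

def ucb1_with_cost(rewards_fn, travel_cost, n_arms, horizon, cost_weight=0.1):
    counts = [0] * n_arms
    means = [0.0] * n_arms
    total_reward = 0.0
    for t in range(1, horizon + 1):
        if t <= n_arms:                      # play each arm once first
            arm = t - 1
        else:
            def index(a):
                bonus = math.sqrt(2 * math.log(t) / counts[a])
                return means[a] + bonus - cost_weight * travel_cost[a]
            arm = max(range(n_arms), key=index)
        r = rewards_fn(arm)
        counts[arm] += 1
        means[arm] += (r - means[arm]) / counts[arm]
        total_reward += r
    return total_reward, counts

# toy usage: 3 offices with different acceptance probabilities and distances
probs = [0.7, 0.4, 0.2]
dist = [2.0, 0.5, 1.0]
reward, visits = ucb1_with_cost(lambda a: 1.0 if random.random() < probs[a] else 0.0,
                                dist, n_arms=3, horizon=500)
print(reward, visits)
```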

17:00
Genetic Algorithms for Scheduling and Optimization of Ore Train Networks

ABSTRACT. Search and optimization problems are a major arena for the practical application of Artificial Intelligence. However, when supply chain optimization and scheduling is tackled, techniques based on linear or non-linear programming are often used in preference to Evolutionary Computation such as Genetic Algorithms (GAs). It is important to analyse whether GAs are suitable for continuous real-world supply chain scheduling tasks that need regular adaptation. We analysed a practical case of iron ore train networks, which is of significant economic importance. In addition, iron ore train networks have some interesting and distinctive characteristics, so analysing this situation is an important step toward understanding the performance of GAs in real-world supply chain scheduling. We compared the performance of GAs with nonlinear programming heuristics and existing industry scheduling approaches. The main result is that our comparison of techniques produces an example in which GAs perform well and are a cost-effective approach.
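
A generic GA skeleton (selection, crossover and mutation over a permutation encoding) is sketched below to make the approach concrete. The toy fitness function and parameters are invented and stand in for the real rail-network simulation, so this is not the authors' model.

```python
# Generic GA skeleton for a toy permutation-scheduling objective (illustration only).
import random

def fitness(schedule):
    # toy objective: prefer schedules close to sorted order (stand-in for
    # minimising total delay); a real fitness would simulate the rail network
    return -sum(abs(pos - job) for pos, job in enumerate(schedule))

def crossover(p1, p2):
    cut = random.randrange(1, len(p1))
    head = p1[:cut]
    return head + [job for job in p2 if job not in head]   # order crossover

def mutate(schedule, rate=0.1):
    s = schedule[:]
    if random.random() < rate:
        i, j = random.sample(range(len(s)), 2)
        s[i], s[j] = s[j], s[i]                             # swap two positions
    return s

def genetic_algorithm(n_jobs=10, pop_size=40, generations=100):
    population = [random.sample(range(n_jobs), n_jobs) for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        parents = population[: pop_size // 2]               # truncation selection
        children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                    for _ in range(pop_size - len(parents))]
        population = parents + children
    return max(population, key=fitness)

print(genetic_algorithm())
```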

16:00-16:18 Session 16C: Reasoning with Modalities (RuleML+RR)
Location: MSA 3.520
16:00
The MET: The Art of Flexible Reasoning with Modalities

ABSTRACT. Modal logics have numerous applications in computational linguistics, artificial intelligence, rule-based reasoning, and, in general, alethic, deontic and epistemic contexts. Higher-order quantified modal logics additionally incorporate the expressiveness of higher-order formalisms and thereby provide a quite general reasoning framework. By exploiting this expressiveness, the Modal Embedding Tool (MET) makes it possible to automatically encode higher-order modal logic problems into equivalent problems of classical logic, enabling the use of a broad variety of established reasoning tools. In this system description, the functionality and usage of MET, as well as a suitable input syntax for flexible reasoning with modalities, are presented.

16:18-17:30 Session 17: Doctoral Consortium (RuleML+RR)
Location: MSA 3.520
16:18
Improving Probabilistic Rules Compilation using PRM

ABSTRACT. Widely adopted for more than 20 years in industrial fields, business rules offer non-IT users the opportunity to define decision-making policies in a simple and intuitive way. To facilitate their use, systems known as Business Rule Management Systems (BRMS) have been developed, separating the business logic from the application logic. While suitable for processing structured and complete data, BRMS face difficulties when the data are incomplete or uncertain. This study proposes a new approach for the integration of probabilistic reasoning in IBM Operational Decision Manager (ODM), IBM’s BRMS, especially through the introduction of a notion of risk, making the compilation phase more complex but increasing the expressiveness of business rules.

16:36
Computational Hermeneutics: Using Automated Theorem Proving for the Logical Analysis of Natural-Language Arguments

ABSTRACT. While there have been major advances in automated theorem proving (ATP) in recent years, its main field of application has mostly remained confined to mathematics and hardware/software verification. I argue that the use of ATP in philosophy can also be very fruitful, not only because of the obvious quantitative advantages of automated reasoning tools (e.g. reducing by several orders of magnitude the time needed to test an argument's validity), but also because it enables a novel approach to the logical analysis of arguments. This approach, which I have called computational hermeneutics, draws its inspiration from work in the philosophy of language such as Donald Davidson's theory of radical interpretation and contemporary so-called inferentialist theories of meaning, which do justice to the inherent circularity of linguistic understanding: the whole is understood (compositionally) on the basis of its parts, while each part is understood only in the (inferential) context of the whole. Computational hermeneutics is thus a holistic, iterative, trial-and-error enterprise, in which we evaluate the adequacy of some candidate formalization of a sentence by computing the logical validity of the whole argument. We start with formalizations of some simple statements (taking them as tentative) and use them as stepping stones on the way to the formalization of the argument's other sentences, repeating the procedure until we arrive at a state of reflective equilibrium: a state where our beliefs have the highest degree of coherence and acceptability.

16:54
Towards knowledge-based integration and visualization of geospatial data using Semantic Web technologies

ABSTRACT. Geospatial data have become pervasive and indispensable for various real-world applications such as urban planning, traffic analysis and emergency response. To this end, data integration and knowledge transfer are two prominent issues for augmenting the use of geospatial data and knowledge. In order to address these issues, Semantic Web technologies have been considerably adopted in the geospatial domain, and there are currently still some initiatives investigating the benefits brought by the adoption of Semantic Web technologies. In this context, this paper showcases and discusses knowledge-based geospatial data integration and visualization leveraging ontologies and rules. Specifically, we use the Linked Data paradigm for modelling geospatial data, and then create a knowledge base for the visualization of such data in terms of scaling, data portrayal and geometry source. This approach would benefit the transfer, interpretation and reuse of visualization knowledge for geospatial data. At the same time, we also identified some challenges of modelling geospatial knowledge and of extending such knowledge to other domains as future work.
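
To show what "using the Linked Data paradigm for modelling geospatial data" can look like in practice, here is a small rdflib sketch that attaches a GeoSPARQL-style geometry and a visualization hint to a feature. The example namespace, the feature and the preferredScale predicate are invented for illustration and are not part of the paper.

```python
# Sketch: a geospatial feature as Linked Data with rdflib and GeoSPARQL terms.
from rdflib import Graph, Literal, Namespace, RDF, URIRef

GEO = Namespace("http://www.opengis.net/ont/geosparql#")
EX = Namespace("http://example.org/")          # hypothetical namespace

g = Graph()
park = URIRef(EX["park42"])
geom = URIRef(EX["park42_geom"])

g.add((park, RDF.type, GEO.Feature))
g.add((park, GEO.hasGeometry, geom))
g.add((geom, GEO.asWKT, Literal(
    "POLYGON((6.13 49.61, 6.14 49.61, 6.14 49.62, 6.13 49.61))",
    datatype=GEO.wktLiteral)))
g.add((park, EX.preferredScale, Literal(10000)))   # hypothetical visualization hint

print(g.serialize(format="turtle"))
```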

17:12
A new approach to conceive ASP solvers

ABSTRACT. Answer set programming (ASP) is a non-monotonic declarative programming paradigm that is widely used for the formulation of problems in artificial intelligence. The ASP paradigm also provides a general framework for the resolution of decision and optimization problems. The idea behind ASP is to represent a problem as a logic program and solve that problem by computing stable models. In our work, we propose a new method for searching for stable models of logic programs. This method is based on a relatively new semantics that has not been exploited yet. This semantics captures and extends that of stable models. The method performs a DPLL enumerative process only on a restricted set of literals called the strong back-door (STB). This method has the advantage of using a Horn clause representation of the same size as the input logic program and has constant spatial complexity. It avoids the overhead induced by loop management, from which most ASP solvers based on the Clark completion suffer.

17:00-17:30 Session 19: AI and Art: Cutting-edge, Innovative Art and audience experience (LuxLogAI)

Round table discussion coordinated by Yolanda Spinola-Elias with artists Roc Parés, Sergio Albiac and Egberdien van der Torre.

Location: MSA 3.170
19:00-22:30 River cruise conference banquet (LuxLogAI)

The conference banquet of LuxLogAI will take place on 18 Sep on a boat on the Moselle river during an evening cruise.

The boat will leave from Remich, the pearl of the Moselle, and take us in the direction of Schengen in the tri-border area of France – Germany – Luxembourg, where the so-called Schengen Agreement was signed on a passenger vessel on 14th June 1985.

See the LuxLogAI web pages for details.