
08:15-09:30 Opening of Registration (LuxLogAI)

The LuxLogAI registration desk will open at 8:15am every day from Monday, Sep 17, to Friday, Sep 21. Please pick up your conference badge here. The registration desk can also help you with any issues or problems throughout the day.

See also the LuxLogAI conference booklet for further information.

09:00-10:30 Session 1A (DecisionCAMP)
Location: MSA 3.110
Welcome address
Specifying collaborative decision-making systems using BPMN, CMMN & DMN

ABSTRACT. In Knowledge Automation (2012) I proposed a decision-modelling methodology (DRAW) for defining automated decision making. This approach, although successful, focuses on the functionality of decision services. It models the surrounding business processes in only enough detail to provide context for the required decision making, and models user interactions quite crudely. This predisposes analysts to specify systems with relatively simplistic human interactions, neglecting the rich possibilities of collaboration between people and computers in decision making.

Using the OMG “Triple Crown” (with other standards) it is possible to model the totality of functional requirements for a decision-making system, including complex interactions between automated business processes and human case workers. BPMN models the processes, CMMN models the user participation, and DMN models the automated decision making. However, it is not always clear how these standards should be applied together.

I suggest a set of principles for partitioning functional requirements between these three modelling domains so that there is no omission or duplication of functionality between them, and so that all interactions – between the domains and with external components – are explicitly modelled. These principles use only existing features in the “Triple Crown” standards, supported by other UML-based models, particularly use case models, object models and state models, and are therefore already supported by existing toolsets.

While modelling can be approached from many directions, an “Outside-In” approach would adopt the sequence: case modelling, business process modelling, decision modelling. This may be most appropriate when specifying systems with complex human interactions. The presentation uses examples from real-world modelling problems.

Key take-away: A clear, simple method of specifying the complete functionality of organisational decision management systems.

Audience: Process stakeholders, business analysts, system architects.

Key technologies: Business Process Management Systems, Case Management Systems and Decision Management Systems.

Decision Automation using Models, Services and Dashboards

ABSTRACT. The Decision Model and Notation (DMN) offers the perfect solution to specifying business decisions. Symbolic and sub-symbolic Artificial Intelligence (AI) approaches can effectively be used jointly within DMN, delivering the explainable decision automation that is desired, if not mandatory, in any business context. The resulting DMN decision models are also the perfect architecture for the creation of decision support dashboards.

In this session we will demonstrate how line-of-business people can define business decisions that are explainable, auditable, and traceable. These business decisions can be assembled and consumed as services via a modern platform API architecture and visualized via graphical dashboards. The resulting dashboards help visualize information and knowledge that are critical for the operations of any type of business.

09:00-10:30 Session 1B (MIREL)
Location: MSA 3.120
Artificial Intelligence for Consumer Law
Legal Reasoning and Big Data: Opportunities and Challenges

ABSTRACT. The main underlying assumption of traditional legal knowledge representation and reasoning is that knowledge and data are both available in main memory. However, in the era of big data, where large amounts of data are generated daily, an increasing range of scientific disciplines, as well as business and human activities, are becoming data-driven. This paper discusses new opportunities and potential applications of legal reasoning involving big data as well as the technical challenges associated with the main concepts of the big data landscape, namely volume, velocity, variety and veracity. Future research directions based on the identified challenges are also proposed.

09:30-10:30 Session 2: Monday morning invited talk (GCAI)
Location: MSA 4.530
Knowledgeable Robots

ABSTRACT. In the talk we will first introduce the approach followed by our recent research in Artificial Intelligence and Robotics, which we regard as an attempt towards general Artificial Intelligence. Our aim is to build systems that achieve high levels of competence in specialized domains by learning incrementally, as opposed to the main trend of creating systems that can work from scratch in any domain. Our long-term plan is to address three types of knowledge: about the environment, about tasks, and about the user.

As of today, we can report results in the first two realms, in particular, we shall present our recent work on semantic mapping and task learning. In addition, we shall focus on the domain of service robotics and address performance evaluation of the systems we are developing, specifically focusing on robot competitions in this domain.

10:30-11:00 Coffee Break
11:00-12:20 Session 3A (DecisionCAMP)
Location: MSA 3.110
The support of Decision Modeling features and concepts in tooling

ABSTRACT. This presentation examines to what extent some of the important Decision Model and Notation (DMN) features and concepts are supported by tooling. It is not a tool comparison, and no product names are revealed, but the analysis tries to give an indication of which elements of decision requirements diagrams, decision logic specifications and the (S)FEEL expression language are commonly present in current decision modeling (and execution) tools.

This analysis complements the DMN Technology Compatibility Kit (TCK), as attention is also paid to tools which can only be examined manually or which do not obey the exact specification of the standard. The approach does, however, give an indication of which modeling features and concepts are considered important by tool providers.

Goal-Oriented Business Decision Modeling

ABSTRACT. Goal-oriented business decision modeling is driven by the need to simplify communication between business analysts and operational business decision models while extending the capabilities of traditional business rules and decision management systems. Decision models created in accordance with the current DMN standard usually address only one question and are expected to determine a single answer given different input data. The proposed goal-oriented approach aims at the creation of business decision models that cover certain business domains and are capable of reaching not one but multiple business goals by providing answers to various questions in terms of automatically calculated decision variables. Such decision models can be designed by defining a hierarchy of business goals and sub-goals, with the relationships between them described in business-friendly decision tables. Without asking a human decision modeler to specify knowledge and information requirements, these models should be able to automatically calculate an execution path within the decision model that leads to a selected goal. In this presentation we will demonstrate the goal-oriented approach using well-known published decision models. We will show a new interactive web interface that allows a business analyst to execute and analyze goal-oriented decision models.
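The core idea of automatically deriving an execution path from a goal hierarchy can be sketched in a few lines. This is a hypothetical illustration, not the presenter's system: the goal names and the `SUBGOALS` structure are invented, and a real engine would also evaluate decision tables at each step.

```python
# Hypothetical sketch: given a hierarchy mapping each goal to the sub-goals
# (decision variables) it depends on, derive the execution path to a selected
# goal automatically, instead of having the modeler wire requirements by hand.

SUBGOALS = {                      # toy decision model (names are invented)
    "eligibility":  ["age_check", "income_check"],
    "income_check": ["income_level"],
    "age_check":    [],
    "income_level": [],
}

def execution_path(goal, hierarchy):
    """Post-order walk: evaluate each sub-goal before the goal that needs it."""
    path, seen = [], set()
    def visit(g):
        if g in seen:             # evaluate each decision variable only once
            return
        seen.add(g)
        for sub in hierarchy.get(g, []):
            visit(sub)
        path.append(g)
    visit(goal)
    return path
```

Asking for the goal `"eligibility"` yields the order `["age_check", "income_level", "income_check", "eligibility"]`: every decision variable is computed before any goal that consumes it.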

11:00-12:30 Session 3B: Monday morning second session (GCAI)
Location: MSA 4.530
Learning to Plan from Raw Data in Grid-based Games

ABSTRACT. An agent that autonomously learns to act in its environment must acquire a model of the domain dynamics. This can be a challenging task, especially in real-world domains, where observations are high-dimensional and noisy. Although in automated planning the dynamics are typically given, there are action schema learning approaches that learn symbolic rules (e.g. STRIPS or PDDL) to be used by traditional planners. However, these algorithms rely on logical descriptions of environment observations. In contrast, recent methods in deep reinforcement learning for games learn from pixel observations. However, they typically do not acquire an environment model, but a policy for one-step action selection. Even when a model is learned, it cannot generalize to unseen instances of the training domain. Here we propose a neural network-based method that learns from visual observations an approximate, compact, implicit representation of the domain dynamics, which can be used for planning with standard search algorithms, and generalizes to novel domain instances. The learned model is composed of submodules, each implicitly representing an action schema in the traditional sense. We evaluate our approach on visual versions of the standard domain Sokoban, and show that, by training on one single instance, it learns a transition model that can be successfully used to solve new levels of the game.

What if the world were different? Gradient-based exploration for new optimal policies

ABSTRACT. Planning under uncertainty assumes a model of the world that specifies the probabilistic effects of the actions of an agent in terms of changes of the state. Given such model, planning proceeds to determine a policy that defines for each state the choice of action that the agent should follow in order to maximize a reward function. In this work, we realize that the world can be changed in more ways than those possible by the execution of the agent's repertoire of actions. These additional configurations of the world may allow new policies that let the agent accumulate even more reward than that possible by following the optimal policy of the original world. We introduce and formalize the problem of planning while considering these additional possible worlds. We then present an approach that models feasible changes to the world as modifications to the probability transition function, and show that the problem of computing the configuration of the world that allows the most rewarding optimal policy can be formulated as a constrained optimization problem. Finally, we contribute a gradient-based algorithm for solving this optimization problem. Experimental evaluation shows the effectiveness of our approach in multiple problems of practical interest.

Using the Winograd Schema Challenge as a CAPTCHA

ABSTRACT. CAPTCHAs have established themselves as a standard technology to confidently distinguish humans from bots. Beyond the typical use for security reasons, CAPTCHAs have helped promote AI research in challenge tasks such as image classification and optical character recognition. It is, therefore, natural to consider what other challenge tasks for AI could serve a role in CAPTCHAs. The Winograd Schema Challenge (WSC), a certain form of hard pronoun resolution tasks, was proposed by Levesque as such a challenge task to promote research in AI. Based on current reports in the literature, the WSC remains a challenging task for bots, and is, therefore, a candidate to serve as a form of CAPTCHA. In this work we investigate whether this a priori appropriateness of the WSC as a form of CAPTCHA can be justified in terms of its acceptability by human users in relation to existing CAPTCHA tasks. Our empirical study involved a total of 329 students, aged between 11 and 15, and showed that the WSC is generally faster and easier to solve than, and as entertaining as, the most typical existing CAPTCHA tasks.

11:00-12:30 Session 3C (MIREL)
Location: MSA 3.120
Automatic Catchphrase Extraction from Legal Case Documents via Scoring using Deep Neural Networks

ABSTRACT. In this paper, we present a method for automatic catchphrase extraction from legal case documents. We use deep neural networks to construct the scoring model of our extraction system. We achieve performance comparable to systems using corpus-wide and citation information, which our system does not use.

A Prototype for Dealing with Exceptions in Lawsuit Simulation and for Legible Inference Proofs

ABSTRACT. Although the representation of normative texts and the simulation of legal acts are common interdisciplinary themes in the field of Artificial Intelligence and Law (AI & Law), some questions remain open or are scarcely explored. Among these incipient topics, we can mention the formalization of the legal body in the face of explicit or implicit exceptions in everyday juridical reasoning, and the treatment of readability issues in exposing or justifying decision-making. In this paper, we present the prototype LEGIS and discuss a proposal to simulate legal action on two fronts. We adopt a non-monotonic semantics for knowledge representation that is appropriate to the singularities of the legal realm, the Preferential Semantics, and propose a transformation to a formal logic argumentation style, the Sequent Calculus, in order to raise the inference proofs to a level of legibility not yet conveniently attained by conventional reasoners.

Nomothesia: A Linked Data Platform for Greek Legislation

ABSTRACT. We present Nomothesia, a linked data platform that makes Greek legislation easily accessible to the public, law professionals and application developers. Nomothesia offers advanced services for retrieving and querying Greek legislation and is intended for citizens and law professionals through intuitive presentational views and search interfaces, but also for application developers that would like to consume content through two web services: a SPARQL endpoint and a RESTful API. Opening up legislation in this way is a great leap towards making governments accountable to citizens and increasing transparency.

12:30-14:00 Lunch Break
14:00-15:20 Session 4A (DecisionCAMP)
Location: MSA 3.110
High-Performance Decision Model Execution by Compilation of DMN into Machine Code

ABSTRACT. The burgeoning scale of automated decision-making in developing economies, such as that required by financial fraud and customer personalization in China, will create a demand for high performance decision execution several orders of magnitude higher than today’s workhorses. Multiplying this by the new scale of data imposed by the Internet of Things and the accountability required by increasingly rigorous compliance regulations will demand unprecedented volumes of complex decision-making; volumes requiring not just scalable hardware, but software purpose-built for the execution of compiled decisions.

This presentation looks at implementation strategies for supporting very-high-performance, DMN-based decision-making using modest hardware, and outlines the results. It examines the use of a single-pass compiler for decision tables based on a bitwise maintenance of rule masks to minimize execution time and enable real-world decisions to be made in microseconds. It also articulates the challenges of creating a DMN XML parser in XSLT.

Attendees of this session will discover which of DMN’s features posed the biggest challenges, both in terms of satisfying the TCK tests and during performance optimization. Further, we discuss some proposed revisions to DMN’s type system to improve performance with no practical impact to its flexibility. During a demonstration, several demanding decision models will be compiled, benchmarked, verified and executed. In addition, the presentation will highlight some of the technical challenges imposed by practical application of the DMN standard such as null propagation and hit policy enforcement.
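The bitwise rule-mask technique the abstract mentions can be illustrated in a few lines. This is a minimal sketch of the general idea, not the presenter's compiler: the toy table, condition names and outputs are invented, and a real compiled engine would emit machine code rather than interpret predicates.

```python
# Hypothetical sketch of bitmask-based decision-table matching: each input
# column yields a bitmask of the rules it is compatible with; ANDing the
# masks leaves exactly the rules whose every condition holds, using a
# handful of machine-word operations per decision.

RULES = [  # rule i fires when all of its (non-None) condition predicates hold
    {"age": lambda v: v < 18,  "risk": None},                   # rule 0
    {"age": lambda v: v >= 18, "risk": lambda v: v == "low"},   # rule 1
    {"age": lambda v: v >= 18, "risk": lambda v: v == "high"},  # rule 2
]
OUTPUTS = ["reject", "accept", "review"]

def match(inputs):
    """Return the outputs of all rules matched by `inputs`, via rule masks."""
    mask = (1 << len(RULES)) - 1          # start with every rule live
    for name, value in inputs.items():
        col = 0                           # rules compatible with this input
        for i, rule in enumerate(RULES):
            pred = rule.get(name)
            if pred is None or pred(value):
                col |= 1 << i
        mask &= col                       # drop rules whose test failed
    return [OUTPUTS[i] for i in range(len(RULES)) if mask >> i & 1]
```

In a compiled setting the per-column masks can be precomputed from the cell conditions, so matching reduces to a few AND instructions, which is what makes microsecond-scale decisions plausible.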

Accord Project for Smart Legal Contracts

ABSTRACT. Accord Project is the leading community of legal and technical professionals, creating the standards and software for the formation and execution of blockchain-agnostic smart legal contracts, including a domain specific language, execution engine, and templating system.

In this presentation, Dan outlines the goals of the Accord Project and its similarities to and differences from previous efforts in expert systems, business rules, and decision management.

14:00-15:30 Session 4B: Monday afternoon invited and contributed session (GCAI)
Location: MSA 4.530
New old frontiers in deep learning: curriculum learning, generative models

ABSTRACT. In the first part of the lecture I will talk about curriculum learning, where a learner is exposed to examples whose difficulty level is gradually increased. This heuristic has been empirically shown to improve the outcome of learning in various models. Our main contribution is a theoretical result, showing that learning with a curriculum speeds up the rate of learning in the context of the regression loss. Interestingly, we also show how curriculum learning and hard-sample mining, although conflicting at first sight, can coexist harmoniously within the same theoretical model. Specifically, we show that it is beneficial to start training with examples that are easier with respect to the global optimum of the model, while at the same time preferring the more difficult examples with respect to the current estimate of the model's parameters. Finally, we show an empirical study using deep CNN models for image classification, where curriculum learning is shown to speed up the rate of learning and improve the final generalization performance.

In the second part of the lecture I will talk about a new GAN variant, which we call Multi-Modal-GAN. I will show how this model can be used for novelty detection, and also augment data in a semi-supervised setting when the labeled sample is small. Finally, I will show interesting unsupervised clustering results, with comparable results to state-of-the-art supervised classification using the MNIST dataset.


Classifier-Based Evaluation of Image Feature Importance

ABSTRACT. Significant advances in the performance of deep neural networks have created a drive for understanding how they work. Different techniques have been proposed to determine which features (e.g., CNN pixels) are most important for the classification. However, these techniques have only been judged subjectively by humans. We address the need for an objective measure to assess the quality of different feature importance measures. In particular, we propose measuring the ratio of the CNN's accuracy on an image containing only the important features to its accuracy on the whole image. We also consider scaling this ratio by the relative size of the important region in order to measure conciseness. We demonstrate that our measures correlate well with prior subjective comparisons of important features, but importantly do not require user studies. We also demonstrate that the features that multiple techniques agree are important have a higher impact on accuracy than those features that only one technique finds.
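The proposed measures can be written down directly. This is one plausible reading of the abstract, not the authors' exact formulas: in particular, dividing by the region fraction is an assumption about how "scaling by the relative size" is done.

```python
def importance_ratio(acc_masked, acc_full):
    """Accuracy on images reduced to the 'important' region, relative to
    accuracy on the full images (the ratio described in the abstract)."""
    return acc_masked / acc_full

def conciseness_scaled(acc_masked, acc_full, region_frac):
    """Hypothetical conciseness measure: the same ratio scaled by the
    fraction of the image the important region covers, so that importance
    maps that keep accuracy high with a small region score higher."""
    return importance_ratio(acc_masked, acc_full) / region_frac
```

For example, a masked accuracy of 0.81 against a full-image accuracy of 0.9 gives a ratio of 0.9; if the important region covers only 10% of the image, the conciseness-scaled score is 9.0.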

14:00-14:30 Session 4C: AI and Art: Invited Talk (LuxLogAI)
Location: MSA 3.170
The True Value of Art

ABSTRACT. Since the turn of the millennium, investing in fine art has garnered increasing interest among alternative asset investors. Fuelled by a strong rise in global wealth and the search for yield in an environment of low interest rates and stock returns, the global art market has expanded exponentially in recent decades — auction sales have grown from $43 million in 1970 to $9.3 billion in 2015. The role of art in investments is still a hotly debated topic among practitioners and academics. However, the most recent research in the field of art-finance indicates that returns have been overstated. The general finding is that investors should buy paintings if they like looking at them, but not to make money. They can hope that their children will sell one or more of them later for a gain — but paintings are primarily aesthetic investments, not financial ones.

14:00-15:30 Session 4D (MIREL)
Location: MSA 3.120
Principles for a judgement editor based on BDD

ABSTRACT. We describe the theoretical principles that underlie the design of a software tool which could be used by judges for writing judgements and for making decisions about litigations. The tool is based on Binary Decision Diagrams (BDDs), which are graphical representations of truth-valued functions associated with propositional formulas. Given a specific litigation, the tool asks the judge questions; each question is represented by a propositional atom. The answers, true or false, make it possible to evaluate the truth value of the formula which encodes the software's overall recommendation about the litigation. Our approach combines a sort of 'theoretical' or 'legal' reasoning dealing with the core of the litigation itself with a sort of 'procedural' reasoning dealing with the protocol that has to be followed by the judge during the trial: some questions or groups of questions must necessarily be examined, sometimes in a specific order. That is why we consider extensions of BDDs called Multi-BDDs. They are BDDs with multiple entries corresponding to the different specific issues that must necessarily be addressed by the judge during the trial. We illustrate our ideas on a case study dealing with French trade union elections, an example that has been used throughout a project with the French Cour de cassation.
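The question-and-answer protocol a BDD induces can be illustrated with a toy diagram. This is a minimal sketch of the general mechanism, not the LEGIS tool: the litigation, the atoms `filed_on_time` and `has_standing`, and the diagram shape are invented.

```python
# Hypothetical sketch: each internal node of the diagram is a question (a
# propositional atom) put to the judge; the yes/no answer selects a branch
# until a terminal verdict (True/False) is reached. Unreachable questions
# are never asked, which is how a BDD keeps the dialogue short.

from dataclasses import dataclass
from typing import Union

@dataclass
class Node:
    question: str                    # propositional atom asked to the judge
    if_true: Union["Node", bool]     # branch taken on answer "yes"
    if_false: Union["Node", bool]    # branch taken on answer "no"

# Toy litigation: the recommendation holds iff filed_on_time AND has_standing.
DIAGRAM = Node("filed_on_time",
               Node("has_standing", True, False),
               False)

def evaluate(node, answers):
    """Walk the diagram using `answers` (atom -> bool); return the verdict."""
    while isinstance(node, Node):
        node = node.if_true if answers[node.question] else node.if_false
    return node
```

Note that if `filed_on_time` is answered negatively, the verdict is reached without ever asking about `has_standing`; a Multi-BDD as described above would instead force designated groups of questions to be examined regardless.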

The Logic of Silence in Testimonies

ABSTRACT. The right to silence is considered in various legal systems. However, this phenomenon has not been studied enough from a logical perspective. After reviewing some previous studies of silence and conversational implicature of Grice, we formulate two different interpretations of silence (Defensive and Acquiescent Silence), in terms of the Says() predicate. Then, we explore the consequences of such interpretations in a case study involving testimonies, by expressing them in logic programming. Several conclusions are derived from the different possibilities that opened for analysis.

Åqvist's Dyadic Deontic Logic E in HOL

ABSTRACT. We devise a shallow semantical embedding of Åqvist's dyadic deontic logic E in classical higher-order logic. This embedding is encoded in Isabelle/HOL, which turns this system into a proof assistant for deontic logic reasoning. The experiments with this environment provide evidence that this logic implementation fruitfully enables interactive and automated reasoning at the meta-level and the object-level.

14:30-15:30 Session 5: AI and Art: The future of the art market (LuxLogAI)
Location: MSA 3.170

Round table discussion with Prof. Roman Kräussl and artists Alexandre Gurita and Sergio Albiac. See LuxLogAI web pages for details.
15:30-16:00 Coffee Break
15:45-17:45 Session 6 (DecisionCAMP)
Location: MSA 3.110
Smart contracts from legal text: interpretation, analysis and modelling

ABSTRACT. Many believe blockchain will be a disruptive technology that will facilitate the digital economy in a trusted way. We hope that promise will be kept, because it would be a major boost to the business rules / decision management community. For such an economy to actually work, many of the legal interactions between parties have to be digitised. This requires interpretation, analysis and modelling of legal texts, and translating those models into smart contracts or other implementations that can be accessed via blockchain solutions. The business rules community has the tools in place to make this work. Our goal is not only to demonstrate how this process works, but also what is needed to make the resulting smart contracts trustworthy for the community that uses them. In our view this requires transparency, through documentation of the interpretation, analysis and modelling process and through repeatable code generation. We propose to publish all the information and capabilities on the blockchain, so the blockchain community can actually check whether the smart contracts they use are in line with that documentation.

P.S. We uploaded a blank paper for now. If the abstract is interesting enough, we will draw up a final paper.

jDMN: An execution engine for DMN in Java

ABSTRACT. The Decision Model and Notation (DMN) is a modeling language for decisions. DMN is an industry standard maintained by the OMG. Successful Domain Specific Languages (DSLs) must be simple, easy to understand and use, offer a higher level of abstraction, and be supported by a mature language workbench (e.g. editors and execution engines).

This paper presents jDMN, an open-source execution engine for DMN implemented in Java. Attendees will have the opportunity to understand the internals of jDMN and the benefits of adopting DMN-based solutions and executing them in jDMN.

jDMN provides support for DMN model validation, transformation and evaluation, as well as translation to Java followed by execution on the JVM. The provided framework is flexible and configurable; for example, users can define their own DMN transformers, validators and translators.

A jDMN dialect is a collection of certain DMN features, for example the built-in library and the mapping of FEEL types to native types. The main purpose of a jDMN dialect is to support variation in DMN features. The jDMN dialects are organized as a taxonomy. jDMN supports several dialects, for example the Signavio and Java 8 dialects. The framework is extensible; for example, users can define their own dialect and execute DMN models accordingly.

Several code generation optimizations supported by jDMN are also presented, such as execution of tree and DAG models, linked decisions, and lazy evaluation for sparse decision tables.

Process discovery technique of decision-making in sales activity: Process mining based approach

ABSTRACT. In decision making on sales activities, the dependence on individual skills becomes a problem, as human judgment based on past experience plays the primary role. It is important to visualize the regularity between decision-making processes and their outcomes in order to support sales staff and enhance sales activities. To solve this problem, we are developing a business decision support system using a machine learning model. For that, it is necessary to learn the processes of sales activities with a high probability of obtaining orders; therefore, process discovery technology that extracts regularity from the decision-making process is essential. However, it is difficult to apply the process discovery methods of conventional process mining to the decision-making process of sales activities, because the rules are not known in advance and the input information is unstructured data, such as business diaries. In this study, we provide an activity estimation system based on unstructured data, and a process discovery method for stochastic expression of regularity in an atypical process.

16:00-17:30 Session 7A: Monday afternoon second session (GCAI)
Location: MSA 4.530
Iterative Planning for Deterministic QDec-POMDPs

ABSTRACT. QDec-POMDPs are a qualitative alternative to stochastic Dec-POMDPs for goal-oriented planning in cooperative partially observable multi-agent environments. Although QDec-POMDPs share the same worst-case complexity as Dec-POMDPs, previous research has shown an ability to scale up to larger domains while producing high-quality plan trees. A key difficulty in distributed execution is the need to construct a joint plan tree branching on the combinations of observations of all agents. In this work, we suggest an iterative algorithm, IMAP, that plans for one agent at a time, while taking into consideration collaboration constraints about the action execution of previous agents, and generating new constraints for the next agents. We explain how these constraints are generated and handled, and describe a backtracking mechanism for changing constraints that cannot be met. We provide experimental results on multi-agent planning domains, showing our methods scale to much larger problems with several collaborating agents and huge state spaces.

Computing minimal subsumption modules of ELHr terminologies

ABSTRACT. In the paper we study algorithms for computing modules that are minimal w.r.t. set inclusion and that preserve the entailment of all ELHr-subsumptions over a signature of interest. We follow the black-box approach for finding one or all justifications by replacing the entailment tests with logical difference checks, obtaining modules that preserve not only a given consequence but all entailments over a signature. Such minimal modules can serve to improve our understanding of the internal structure of large and complex ontologies. Additionally, several optimisations for speeding up the computation of minimal modules are investigated. We present an experimental evaluation of an implementation of our algorithms by applying them on the medical ontologies Snomed CT and NCIt.

Discovering Causal Relations in Semantically-Annotated Probabilistic Business Process Diagrams

ABSTRACT. Business Process Diagrams (BPDs) have been used for documenting, analyzing and optimizing business processes. Business Process Model and Notation (BPMN) provides a rich graphical notation, and it is supported by a formalization that permits automating such tasks. Stochastic versions of BPMN allow representing the probability of every possible way a process can develop. Nevertheless, this support is not enough for representing conditional dependencies between events occurring during process development. We show how structural learning on a Bayesian network obtained from a BPD can be used to discover causal relations. We illustrate our approach by detecting dishonest bidders in an online auction scenario. Temporal precedence between events, captured in the BPD, is used for pruning and correcting the model discovered by an Inferred Causation algorithm.

16:00-16:50 Session 7B: AI and Art: Presentation (LuxLogAI)
Location: MSA 3.170
Spect-actor: A manifesto for an un-virtual immersive experience

ABSTRACT. How does one create an interactive artistic experience? Three directions will be explored: creating a personal and unique artistic experience; interactivity; and "the more I give, the more I get". A report on experience by Lou Salomé, a visual artist in Luxembourg, illustrated by her artworks.

What does the object want from me?

ABSTRACT. A presentation by Egerdien van der Torre about the new interaction between the artist and (smart) objects and systems, with the title: What does the object want from me? This presentation is based on the publication The Standard Book of Noun-Verb Exhibition Grammar, written by Niekolaas Johannes Lekkerkerk. This Dutch curator emphasizes the need for a new way of exhibiting.

16:00-18:00 Session 7C (MIREL)
Location: MSA 3.120
I/O Logic in HOL

ABSTRACT. A shallow semantical embedding of Input/Output logic in classical higher-order logic is presented, and shown to be faithful (sound and complete). This embedding has been implemented in the higher-order proof assistant Isabelle/HOL. We provide an empirical regulative framework for assessing the General Data Protection Regulation.

Deontic Description Logic
Visualizing Legal Information: an Ontology-Based Data Protection Icon Set

ABSTRACT. Privacy policies are known to be impenetrable and lengthy texts that are hardly read and poorly understood. Research streams towards machine-interpretability of privacy terms are emerging, while the human-readable implementation is often overlooked. DaPIS is a machine-readable Data Protection Icon Set that was designed following human-centred methods drawn from the emerging discipline of legal design. Icons can serve as information markers and support the navigation of privacy policies. DaPIS is modelled on PrOnto, an ontology of the GDPR, and is machine-readable and automatically retrievable, thus offering a comprehensive solution for the Semantic Web. In this way, the lawyer-readable, the machine-readable, and the human-readable representations of legal information can be interlinked to enhance its comprehensibility.

Logic and Graphs of Legal Relations: Why Hohfeld Was Right about Rights

ABSTRACT. The century-old theory of legal rights presented by Wesley N. Hohfeld is highly influential not just in analytic legal theory but also as the basis of normative multi-agent systems, specifications of aspects of computer systems, and the formal theory of organisation. Still, it is often criticized as being limited in which legal situations it can model: while it seems an adequate picture of how a contract works, it is often thought to be unable to grasp rights considered in criminal law, such as the right to life or physical integrity. In this talk I show how the language of the classical formalizations and the definitions of the classical approaches need to be modified in order to obtain a formal setup in which these limitations can be refuted.