LUXLOGAI 2018: LUXEMBOURG LOGIC FOR AI SUMMIT
GCAI ON MONDAY, SEPTEMBER 17TH, 2018


08:15-09:30 Opening of Registration

The LuxLogAI registration desk will open at 8:15am every day from Monday, Sep 17, to Friday, Sep 21. Please pick up your conference badge there. The registration desk will also help you with any issues or problems throughout the day.

See also the LuxLogAI conference booklet for further information.

09:30-10:30 Session 2: Monday morning invited talk
Chair:
Location: MSA 4.530
09:30
Knowledgeable Robots

ABSTRACT. In this talk we will first introduce the approach followed by our recent research in Artificial Intelligence and Robotics, which we regard as an attempt towards general Artificial Intelligence. Our aim is to build systems that achieve high levels of competence in specialized domains by learning incrementally, as opposed to the main trend of creating systems that can work from scratch in any domain. Our long-term plan is to address three types of knowledge: about the environment, the tasks, and the user.

As of today, we can report results in the first two realms; in particular, we shall present our recent work on semantic mapping and task learning. In addition, we shall focus on the domain of service robotics and address the performance evaluation of the systems we are developing, with a specific focus on robot competitions in this domain.

10:30-11:00 Coffee Break
11:00-12:30 Session 3B: Monday morning second session
Location: MSA 4.530
11:00
Learning to Plan from Raw Data in Grid-based Games

ABSTRACT. An agent that autonomously learns to act in its environment must acquire a model of the domain dynamics. This can be a challenging task, especially in real-world domains, where observations are high-dimensional and noisy. Although in automated planning the dynamics are typically given, there are action schema learning approaches that learn symbolic rules (e.g. STRIPS or PDDL) to be used by traditional planners. However, these algorithms rely on logical descriptions of environment observations. In contrast, recent methods in deep reinforcement learning for games learn from pixel observations. However, they typically do not acquire an environment model, but a policy for one-step action selection. Even when a model is learned, it cannot generalize to unseen instances of the training domain. Here we propose a neural network-based method that learns from visual observations an approximate, compact, implicit representation of the domain dynamics, which can be used for planning with standard search algorithms and generalizes to novel domain instances. The learned model is composed of submodules, each implicitly representing an action schema in the traditional sense. We evaluate our approach on visual versions of the standard planning domain Sokoban and show that, by training on a single instance, it learns a transition model that can be successfully used to solve new levels of the game.
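As a minimal sketch of the idea (our illustration, not the paper's implementation), a learned transition model can drive an ordinary search algorithm; here `model.predict`, the action set, and the assumption that predicted observations are hashable are hypothetical stand-ins for the paper's neural submodules:

```python
# Illustrative sketch: planning with a learned, implicit transition model
# using standard breadth-first search. `model.predict(state, action)` stands
# in for a trained neural network that maps an observation and an action to
# the predicted next observation (one submodule per action schema).

from collections import deque

ACTIONS = ["up", "down", "left", "right"]

def plan(model, start, is_goal, max_depth=50):
    """Breadth-first search over states generated by the learned model."""
    frontier = deque([(start, [])])
    visited = {start}
    while frontier:
        state, path = frontier.popleft()
        if is_goal(state):
            return path                      # sequence of actions to the goal
        if len(path) >= max_depth:
            continue
        for action in ACTIONS:
            nxt = model.predict(state, action)
            if nxt is not None and nxt not in visited:
                visited.add(nxt)
                frontier.append((nxt, path + [action]))
    return None  # no plan found within the depth bound
```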

11:30
What if the world were different? Gradient-based exploration for new optimal policies

ABSTRACT. Planning under uncertainty assumes a model of the world that specifies the probabilistic effects of the actions of an agent in terms of changes of the state. Given such a model, planning proceeds to determine a policy that defines for each state the choice of action that the agent should follow in order to maximize a reward function. In this work, we observe that the world can be changed in more ways than those possible through the execution of the agent's repertoire of actions. These additional configurations of the world may allow new policies that let the agent accumulate even more reward than is possible by following the optimal policy of the original world. We introduce and formalize the problem of planning while considering these additional possible worlds. We then present an approach that models feasible changes to the world as modifications to the probability transition function, and show that the problem of computing the configuration of the world that allows the most rewarding optimal policy can be formulated as a constrained optimization problem. Finally, we contribute a gradient-based algorithm for solving this optimization problem. Experimental evaluation shows the effectiveness of our approach in multiple problems of practical interest.
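To make the constrained-optimization view concrete, here is a small self-contained sketch (ours, not the paper's algorithm, and using a crude finite-difference gradient rather than an analytic one) that ascends the optimal value of a toy MDP with respect to its transition probabilities, renormalizing each row back to a distribution after every step:

```python
# Toy sketch: gradient-based search over transition functions of a small MDP.
# P has shape (S, A, S): P[s, a, s'] = probability of reaching s' from s via a.
# R has shape (S, A): immediate rewards. All names here are illustrative.

import numpy as np

def optimal_value(P, R, gamma=0.9, iters=200):
    """Value iteration: returns the optimal value of every state."""
    V = np.zeros(P.shape[0])
    for _ in range(iters):
        V = (R + gamma * (P @ V)).max(axis=1)   # Bellman optimality backup
    return V

def renormalize_rows(P):
    """Force each P[s, a, :] back to a valid probability distribution."""
    P = np.clip(P, 1e-8, None)
    return P / P.sum(axis=2, keepdims=True)

def ascend(P, R, s0, lr=0.05, steps=50, eps=1e-4):
    """Hill-climb the optimal value at state s0 over the transition function."""
    for _ in range(steps):
        base = optimal_value(P, R)[s0]
        grad = np.zeros_like(P)
        for idx in np.ndindex(P.shape):          # finite-difference gradient
            Pp = P.copy()
            Pp[idx] += eps
            grad[idx] = (optimal_value(renormalize_rows(Pp), R)[s0] - base) / eps
        P = renormalize_rows(P + lr * grad)
    return P
```

The feasible set in this sketch is just the probability simplex of each row; the paper's constrained formulation is what restricts which changes to the world are actually realizable.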

12:00
Using the Winograd Schema Challenge as a CAPTCHA

ABSTRACT. CAPTCHAs have established themselves as a standard technology to confidently distinguish humans from bots. Beyond the typical use for security reasons, CAPTCHAs have helped promote AI research in challenge tasks such as image classification and optical character recognition. It is, therefore, natural to consider what other challenge tasks for AI could serve a role in CAPTCHAs. The Winograd Schema Challenge (WSC), a certain form of hard pronoun resolution task, was proposed by Levesque as such a challenge task to promote research in AI. Based on current reports in the literature, the WSC remains a challenging task for bots and is, therefore, a candidate to serve as a form of CAPTCHA. In this work we investigate whether this a priori appropriateness of the WSC as a form of CAPTCHA can be justified in terms of its acceptability by human users in relation to existing CAPTCHA tasks. Our empirical study involved a total of 329 students, aged between 11 and 15, and showed that the WSC is generally faster and easier to solve than, and as entertaining as, the most typical existing CAPTCHA tasks.
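As a concrete illustration of the mechanics (hypothetical code, built around the canonical "trophy/suitcase" schema), a WSC-based CAPTCHA only needs to serve a schema and check the selected candidate:

```python
# Hypothetical sketch of serving a Winograd schema as a CAPTCHA challenge.

import random

SCHEMAS = [
    {
        "question": ("The trophy doesn't fit into the brown suitcase "
                     "because it is too large. What is too large?"),
        "candidates": ["the trophy", "the suitcase"],
        "answer": "the trophy",
    },
    # ... a real deployment would draw from a large, refreshed pool
]

def serve_challenge():
    """Pick a schema and shuffle its answer candidates."""
    schema = random.choice(SCHEMAS)
    options = schema["candidates"][:]
    random.shuffle(options)
    return schema["question"], options, schema["answer"]

def check_response(response, answer):
    """A user who resolves the pronoun correctly passes the check."""
    return response.strip().lower() == answer
```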

12:30-14:00 Lunch Break
14:00-15:30 Session 4B: Monday afternoon invited and contributed session
Chair:
Location: MSA 4.530
14:00
New old frontiers in deep learning: curriculum learning, generative models

ABSTRACT. In the first part of the lecture I will talk about curriculum learning, where a learner is exposed to examples whose difficulty level is gradually increased. This heuristic has been empirically shown to improve the outcome of learning in various models. Our main contribution is a theoretical result showing that learning with a curriculum speeds up the rate of learning in the context of the regression loss. Interestingly, we also show how curriculum learning and hard-sample mining, although conflicting at first sight, can coexist harmoniously within the same theoretical model. Specifically, we show that it is beneficial to start training with examples that are easier with respect to the global optimum of the model, while at the same time preferring examples that are more difficult with respect to the current estimate of the model's parameters. Finally, we show an empirical study using deep CNN models for image classification, where curriculum learning is shown to speed up the rate of learning and improve the final generalization performance.
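A compact sketch of how the two criteria can be combined in a single sampling rule (our reading, under assumptions: `teacher_loss` scores difficulty against a fixed reference model standing in for the global optimum, `model_loss` against the current parameter estimate):

```python
# Illustrative batch selection: curriculum pacing by teacher difficulty,
# hard-sample preference by current-model difficulty within the allowed pool.

import numpy as np

def curriculum_batch(teacher_loss, model_loss, batch_size, pace):
    """pace in (0, 1]: fraction of the data exposed at this training stage."""
    order = np.argsort(teacher_loss)                   # easy -> hard (fixed)
    pool = order[: max(batch_size, int(pace * len(order)))]
    hardest_first = pool[np.argsort(-model_loss[pool])]
    return hardest_first[:batch_size]                  # indices to train on
```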

In the second part of the lecture I will talk about a new GAN variant, which we call Multi-Modal-GAN. I will show how this model can be used for novelty detection, and also to augment data in a semi-supervised setting when the labeled sample is small. Finally, I will show interesting unsupervised clustering results, comparable to state-of-the-art supervised classification on the MNIST dataset.

15:00
Classifier-Based Evaluation of Image Feature Importance

ABSTRACT. Significant advances in the performance of deep neural networks have created a drive for understanding how they work. Different techniques have been proposed to determine which input features (e.g., the pixels seen by a CNN) are most important for the classification. However, these techniques have so far only been judged subjectively by humans. We address the need for an objective measure to assess the quality of different feature importance measures. In particular, we propose measuring the ratio of the CNN's accuracy on the whole image compared to an image containing only the important features. We also consider scaling this ratio by the relative size of the important region in order to measure conciseness. We demonstrate that our measures correlate well with prior subjective comparisons of important features but, importantly, do not require such human studies. We also demonstrate that the features that multiple techniques agree are important have a higher impact on accuracy than the features that only one technique finds.
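A minimal sketch of the proposed measures (ours, not the paper's code; the `classifier` callable returning predicted labels, the boolean pixel `masks`, and the direction of the ratio are assumptions):

```python
# Illustrative computation of the accuracy-ratio measure and its
# size-scaled ("conciseness") variant for a batch of images.

import numpy as np

def importance_quality(classifier, images, labels, masks):
    """images: (N, H, W, C); masks: (N, H, W) booleans marking the pixels
    that a feature-importance technique deems important."""
    full_acc = np.mean(classifier(images) == labels)
    masked = images * masks[..., None]           # keep only important pixels
    masked_acc = np.mean(classifier(masked) == labels)
    ratio = masked_acc / full_acc                # near 1 = faithful features
    region = np.mean([m.mean() for m in masks])  # fraction of pixels kept
    return ratio, ratio / region                 # plain and conciseness-scaled
```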

15:30-16:00 Coffee Break
16:00-17:30 Session 7A: Monday afternoon second session
Location: MSA 4.530
16:00
Iterative Planning for Deterministic QDec-POMDPs

ABSTRACT. QDec-POMDPs are a qualitative alternative to stochastic Dec-POMDPs for goal-oriented planning in cooperative, partially observable multi-agent environments. Although QDec-POMDPs share the same worst-case complexity as Dec-POMDPs, previous research has shown an ability to scale up to larger domains while producing high-quality plan trees. A key difficulty in distributed execution is the need to construct a joint plan tree branching on the combinations of observations of all agents. In this work, we suggest an iterative algorithm, IMAP, that plans for one agent at a time, taking into consideration collaboration constraints on the action execution of previous agents and generating new constraints for the next agents. We explain how these constraints are generated and handled, and describe a backtracking mechanism for changing constraints that cannot be met. We provide experimental results on multi-agent planning domains, showing that our method scales to much larger problems with several collaborating agents and huge state spaces.
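A schematic rendering of the iterative idea (hypothetical interfaces; the actual IMAP works on plan trees and QDec-POMDP collaboration constraints):

```python
# Sketch: plan for one agent at a time under accumulated constraints,
# backtracking over earlier agents' alternatives when planning fails.

def imap(agents, candidate_plans, derive_constraints):
    """candidate_plans(agent, constraints) yields plans consistent with the
    constraints; derive_constraints(plan) are imposed on later agents."""
    def solve(i, constraints):
        if i == len(agents):
            return []                          # all agents have plans
        for plan in candidate_plans(agents[i], constraints):
            rest = solve(i + 1, constraints | derive_constraints(plan))
            if rest is not None:
                return [plan] + rest
        return None                            # backtrack: revise agent i-1
    return solve(0, frozenset())
```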

16:30
Computing minimal subsumption modules of ELHr terminologies

ABSTRACT. In this paper we study algorithms for computing modules that are minimal w.r.t. set inclusion and that preserve the entailment of all ELHr subsumptions over a signature of interest. We follow the black-box approach for finding one or all justifications, replacing the entailment tests with logical difference checks, thereby obtaining modules that preserve not only a given consequence but all entailments over a signature. Such minimal modules can serve to improve our understanding of the internal structure of large and complex ontologies. Additionally, several optimisations for speeding up the computation of minimal modules are investigated. We present an experimental evaluation of an implementation of our algorithms by applying them to the medical ontologies Snomed CT and NCIt.
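The black-box loop the abstract alludes to can be sketched in a few lines (illustrative only; `logical_difference_empty` is a hypothetical oracle wrapping the logical difference check):

```python
# Sketch of black-box minimisation: try to drop each axiom, keeping the
# drop only if no entailment over the signature is lost.

def minimise_module(module, signature, logical_difference_empty):
    """module: a set of axioms already known to preserve the entailments."""
    reference = frozenset(module)
    for axiom in list(module):
        candidate = module - {axiom}
        if logical_difference_empty(reference, candidate, signature):
            module = candidate          # axiom is redundant for the signature
    return module                       # minimal w.r.t. set inclusion
```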

17:00
Discovering Causal Relations in Semantically-Annotated Probabilistic Business Process Diagrams

ABSTRACT. Business Process Diagrams (BPDs) have been used for documenting, analyzing and optimizing business processes. Business Process Model and Notation (BPMN) provides a rich graphical notation, and it is supported by a formalization that permits automating such tasks. Stochastic versions of BPMN allow representing the probability of every possible way a process can develop. Nevertheless, this support is not enough for representing conditional dependencies between events occurring during process development. We show how structural learning on a Bayesian Network obtained from a BPD can be used to discover causal relations. We illustrate our approach by detecting dishonest bidders in an on-line auction scenario. Temporal precedence between events, captured in the BPD, is used for pruning and correcting the model discovered by an Inferred Causation algorithm.
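As a small illustration of that correction step (hypothetical names; `precedes` encodes the temporal order read off the BPD):

```python
# Sketch: use temporal precedence to prune and orient edges proposed by a
# structure-learning (Inferred Causation) step -- a cause cannot occur
# after its effect.

def correct_edges(edges, precedes):
    """edges: iterable of (cause, effect) pairs; precedes(a, b) is True
    when event a occurs before event b in the business process diagram."""
    corrected = []
    for a, b in edges:
        if precedes(a, b):
            corrected.append((a, b))    # orientation agrees with time
        elif precedes(b, a):
            corrected.append((b, a))    # flip the edge
        # otherwise: no temporal evidence either way, drop the edge
    return corrected
```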