GCAI 2016 / 2nd Global Conference on Artificial Intelligence
Accepted Papers with Abstracts
Daan Apeldoorn (Technische Universität Dortmund)
Gabriele Kern-Isberner (Technische Universität Dortmund)
When Should Learning Agents Switch to Explicit Knowledge?

ABSTRACT. According to psychological models, learned knowledge can be divided into implicit and explicit knowledge. The former can be exploited but cannot easily be verbalized (e.g., to explain it to another person). The latter is available in an explicit form; it often comprises generalized, rule-based knowledge that can be verbalized and explained to others. During a learning process, the learned knowledge starts in an implicit form and becomes explicit as the learning process progresses, and humans profit from exploiting such generalized, rule-based knowledge when learning. This paper investigates how learning agents can profit from explicit knowledge that is extracted during a learning process from a learned implicit representation. It is clearly shown that an agent can already profit from explicit knowledge in early phases of a learning process.
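
As an illustrative sketch only (not the paper's agent or extraction method), the snippet below learns implicit knowledge as a tabular Q-function in a toy corridor world and then compresses it into a single verbalizable rule whenever one action is greedy in every state; the environment, parameters and rule format are assumptions made for the example.

```python
import random
from collections import defaultdict

# Toy corridor world: states 0..9, goal at state 9; actions move left or right.
ACTIONS = ["left", "right"]

def step(state, action):
    nxt = min(9, state + 1) if action == "right" else max(0, state - 1)
    return nxt, (1.0 if nxt == 9 else 0.0), nxt == 9

def greedy(q, s):
    best = max(q[(s, a)] for a in ACTIONS)
    return random.choice([a for a in ACTIONS if q[(s, a)] == best])

def q_learning(episodes=300, alpha=0.3, gamma=0.9, eps=0.2):
    """Implicit knowledge: a table of state-action values."""
    q = defaultdict(float)
    for _ in range(episodes):
        s = random.randrange(9)
        for _ in range(200):                      # cap episode length
            a = random.choice(ACTIONS) if random.random() < eps else greedy(q, s)
            s2, r, done = step(s, a)
            q[(s, a)] += alpha * (r + gamma * max(q[(s2, b)] for b in ACTIONS)
                                  - q[(s, a)])
            s = s2
            if done:
                break
    return q

def extract_rule(q):
    """Explicit knowledge: if one action is greedy in every state, the whole
    table collapses into a single verbalizable default rule."""
    best = {s: greedy(q, s) for s in range(9)}
    if len(set(best.values())) == 1:
        return [("default", next(iter(best.values())))]
    return sorted(best.items())

print(extract_rule(q_learning()))    # typically [('default', 'right')]
```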

Imen Boudali (University of Tunis ElManar)
Harmony Search Approach for Patient Scheduling in Emergency Laboratories

ABSTRACT. This paper deals with the problem of scheduling prioritized patients in emergency department laboratories according to a triage factor. The problem is considered as a flexible open shop scheduling problem with the objective of minimizing the total completion time of prioritized patients. In this research, patient scheduling is addressed with a harmony search algorithm using a roulette-wheel selection technique. A comparative study is then performed against the results of a genetic algorithm. Simulations on real data from an emergency department show that the proposed approach significantly improves the total completion time of patients, especially those with severe conditions.
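
To illustrate the kind of metaheuristic named in the abstract, the sketch below runs a harmony search with roulette-wheel memory selection on a toy single-stage scheduling instance using a random-key encoding; the instance, the weighted completion-time objective and all parameter values are assumptions, not the paper's flexible open shop model.

```python
import random

# Toy instance: (processing_time, triage_weight) per patient; higher weight = more urgent.
patients = [(4, 3), (2, 5), (6, 1), (3, 4), (5, 2)]

def weighted_completion_time(order):
    """Sum of triage-weighted completion times for a single-stage schedule."""
    t, total = 0, 0
    for i in order:
        p, w = patients[i]
        t += p
        total += w * t
    return total

def keys_to_order(keys):
    """Random-key decoding: schedule patients in increasing order of their key."""
    return sorted(range(len(keys)), key=lambda i: keys[i])

def roulette_pick(memory, costs):
    """Roulette-wheel selection: lower cost -> higher selection probability."""
    fitness = [1.0 / c for c in costs]
    r = random.uniform(0, sum(fitness))
    acc = 0.0
    for harmony, f in zip(memory, fitness):
        acc += f
        if acc >= r:
            return harmony
    return memory[-1]

def harmony_search(n, hms=10, hmcr=0.9, par=0.3, iters=2000):
    memory = [[random.random() for _ in range(n)] for _ in range(hms)]
    costs = [weighted_completion_time(keys_to_order(h)) for h in memory]
    for _ in range(iters):
        donor = roulette_pick(memory, costs)
        new = []
        for j in range(n):
            if random.random() < hmcr:             # memory consideration
                x = donor[j]
                if random.random() < par:          # pitch adjustment
                    x = min(1.0, max(0.0, x + random.uniform(-0.1, 0.1)))
            else:                                  # random improvisation
                x = random.random()
            new.append(x)
        c = weighted_completion_time(keys_to_order(new))
        worst = max(range(hms), key=lambda k: costs[k])
        if c < costs[worst]:                       # replace the worst harmony
            memory[worst], costs[worst] = new, c
    best = min(range(hms), key=lambda k: costs[k])
    return keys_to_order(memory[best]), costs[best]

order, cost = harmony_search(len(patients))
print("schedule:", order, "weighted completion time:", cost)
```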

Teddy Bouziat (IRIT)
Valérie Camps (University of Toulouse - IRIT, France)
Stéphanie Combettes (IRIT- University of Toulouse)
Cooperation in Adaptive Multi-Agent Systems through System of Systems Modeling

ABSTRACT. This paper addresses the modeling and design of Systems of Systems (SoS) as well as cooperation between multi-agent systems. It presents and illustrates a new generic model to formally describe SoS. This model is then used to study cooperation between AMAS (Adaptive Multi-Agent Systems). Each AMAS, reified as a component system of an SoS, uses a cooperative decision process to interact with the other AMAS and collectively give rise to a relevant overall function at the SoS level. The proposed model and the inter-AMAS study are instantiated on a simulated problem of robots carrying colored boxes and compared to another decision heuristic.

Ivo Chichkov (Eberhard Karls Universität Tübingen)
Alexandra Kirsch (Eberhard Karls Universität Tübingen)
A Short-Term Memory for Deliberative Agents in Everyday Environments

ABSTRACT. Humans have the impressive capability to efficiently find near-optimal solutions to complex, multi-step problems. AI planning can model such problems well, but is inefficient for realistic problems. We propose to use AI planning in combination with a short-term memory, inspired by models of human short-term memory, to structure real-world problem domains and make the planning process more efficient, while still producing satisficing solutions. We evaluate the method in the domain of a household robot.

Malumbo Chipofya (Institute for Geoinformatics, University of Muenster)
Matching Qualitative Constraint Networks with Online Reinforcement Learning

ABSTRACT. Local Compatibility Matrices (LCMs) are mechanisms for computing heuristics for graph matching that are particularly suited for matching qualitative constraint networks enabling the transfer of qualitative spatial knowledge between qualitative reasoning systems or agents. A system of LCMs can be used during matching to compute a pre-move evaluation, which acts as a prior optimistic estimate of the value of matching a pair of nodes, and a post-move evaluation which adjusts the prior estimate in the direction of the true value upon completing the move. We present a metaheuristic method that uses reinforcement learning to improve the prior estimates based on the posterior evaluation. The learned values implicitly identify unprofitable regions of the search space. We also present data structures that allow a more compact implementation, limiting the space and time complexity of our algorithm.

Armin Dietz (Institute of Measurement and Automatic Control)
Stephan Schröder (Institute of Measurement and Automatic Control)
Andreas Pösch (Institute of Measurement and Automatic Control)
Klaus Frank (Institute of Measurement and Automatic Control)
Eduard Reithmeier (Institute of Measurement and Automatic Control)
Contactless Surgery Light Control based on 3D Gesture Recognition

ABSTRACT. Surgery lighting systems (SLS) aim to provide optimal lighting conditions for the surgeon in an operating room. To visually differentiate between tissues that look only marginally different, SLS offer settings for the level of illumination, the color temperature and the size of the illuminated field. Today, SLS are typically controlled with control panels situated at the lamp housings. Operating the control panel requires an ergonomically unfavorable position with the hands above the operator's head. To secure asepsis, all equipment that the operator comes in contact with must be extensively sterilized. In this paper, a contactless gesture control for an SLS is developed with the intention of offering the surgeon sterile and ergonomically comfortable operation. A Microsoft Kinect 3D sensor is used. The gesture control covers the level of illumination, the color temperature and functions of a camera for documentation purposes such as zooming, rotating and taking a picture. A dedicated procedure to log on to and log off from the system prevents unintentional operation. The gesture control system and the conventional control panel were tested and compared in a usability study. The participating subjects rated the usability of the gesture control as good to very good according to the System Usability Scale by Brooke. 94% of the subjects perceived the gesture control to be ergonomically more comfortable. After simultaneously operating the surgery light and executing a cognitive task, 82% stated that the gesture control is less likely to distract them from that task. These data indicate that the concept of a gesture control with a 3D sensor for an SLS is feasible and has the potential to gain a high degree of acceptance.

Marcus Edel (Freie Universität Berlin)
Joscha Lausch (Freie Universität Berlin)
Capacity Visual Attention Networks

ABSTRACT. Inspired by recent work in machine translation and object detection, we introduce an attention-based model that automatically learns to extract information from an image by adaptively assigning its capacity across different portions of the input data and processing only the selected regions of different sizes at high resolution. This is achieved by combining two modules: an attention sub-network, which uses a mechanism to model a human-like counting process, and a capacity sub-network. The latter efficiently identifies the input regions to which the attention model's output is most sensitive and to which more capacity should therefore be devoted, dynamically adapting the size of each region. We focus our evaluation on the Cluttered MNIST, SVHN, and Cluttered GTSRB image datasets. Our findings indicate that the proposed model is able to drastically reduce the number of computations compared with traditional convolutional neural networks, while maintaining similar or better performance.

Manfred Eppe (International Computer Science Institute, Berkeley)
Sean Trott (International Computer Science Institute, Berkeley)
Vivek Raghuram (University of California at Berkeley)
Jerome Feldman (International Computer Science Institute, Berkeley)
Adam Janin (International Computer Science Institute, Berkeley)
Application-Independent and Integration-Friendly Natural Language Understanding

ABSTRACT. Natural Language Understanding (NLU) has been a long-standing goal of AI and many related fields, but it is often dismissed as intractable. NLU entails systems that take action without human intervention. This inherently involves strong semantic (meaning) capabilities to parse queries and commands correctly and with high confidence, because an error by a robot or automated vehicle could be disastrous. We describe an implemented general framework, the ECG2 system, that supports the deployment of NLU systems over a wide range of application domains. The framework is based on decades of research on embodied action-oriented semantics and efficient computational realization of a deep semantic analyzer (parser). This makes it linguistically much more flexible, general and reliable than existing shallow approaches that process language without considering its deeper semantics. In this paper we describe our work from a Computer Science perspective of system integration, and show why our particular architecture requires considerably less effort to connect the system to new applications compared to other language processing tools.

Yin-Fu Huang (Department of Computer Science and Information Engineering, National Yunlin University of Science and Technology)
Bo-Rong Chen (Department of Computer Science and Information Engineering, National Yunlin University of Science and Technology)
Content-Based Image Retrieval System for Real Images

ABSTRACT. With the rapid progress of network technologies and multimedia data, information retrieval techniques have gradually become content-based rather than purely text-based. In this paper, we propose a content-based image retrieval system for querying similar images in a real image database. First, we employ segmentation and main-object detection to separate the main object from an image. Then, we extract MPEG-7 features from the object and select relevant features using the SAHS algorithm. Next, two approaches, "one-against-all" and "one-against-one", are proposed to build classifiers based on SVM. To further reduce indexing complexity, K-means clustering is used to generate MPEG-7 signatures. The classes predicted by the classifiers are then combined with the results based on the MPEG-7 signatures to find the images similar to a query image. Finally, the experimental results show that our method is feasible for searching the real image database and more effective than the other methods.
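
The following sketch illustrates the overall pipeline on synthetic data: "one-against-all" and "one-against-one" SVM classifiers plus K-means signatures used to narrow down candidate images before distance ranking. The random vectors stand in for MPEG-7 features and scikit-learn replaces the paper's own components; everything here is an assumption for illustration.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.multiclass import OneVsRestClassifier, OneVsOneClassifier
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 64))           # stand-in for MPEG-7 feature vectors
y = rng.integers(0, 5, size=300)         # five object classes

# "One-against-all" and "one-against-one" multi-class SVM classifiers.
ova = OneVsRestClassifier(SVC(kernel="linear")).fit(X, y)
ovo = OneVsOneClassifier(SVC(kernel="linear")).fit(X, y)   # used analogously to ova

# K-means "signatures": each image is summarised by its nearest cluster id.
kmeans = KMeans(n_clusters=16, n_init=10, random_state=0).fit(X)
signatures = kmeans.predict(X)
db_classes = ova.predict(X)              # here the one-against-all classifier is used

def retrieve(query, k=5):
    """Rank database images sharing the query's predicted class and signature."""
    cls = ova.predict(query[None, :])[0]
    sig = kmeans.predict(query[None, :])[0]
    candidates = np.where((db_classes == cls) & (signatures == sig))[0]
    if len(candidates) == 0:             # fall back to class-only filtering
        candidates = np.where(db_classes == cls)[0]
    dists = np.linalg.norm(X[candidates] - query, axis=1)
    return candidates[np.argsort(dists)[:k]]

print("similar images:", retrieve(X[0]))
```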

Gülgün Kayakutlu (Istanbul Technical University)
Ege Sami Ünver (Istanbul Technical University)
Intelligent Forecasting for Medium Term Electricity Demand: Case Study Turkey

ABSTRACT. This study is concerned with electricity demand forecasting and demonstrates a less commonly exemplified approach for the case of Turkey. A comprehensive literature survey was conducted to review alternative energy forecasting approaches. Based on this review, a forecasting model built on chained Artificial Neural Networks is constructed. The case study addresses electricity demand in Turkey. Historical demand data, along with GDP and population, were chosen as inputs and obtained from the OECD database. Our model consists of two-stage Jordan-type recurrent Artificial Neural Networks, which constitutes the novelty of the study. The first stage of the model forecasts GDP and population, learning from their historical values. The second stage forecasts electricity demand using the outputs of the first stage. The proposed model is intended to serve as a guide for future work in the forecasting field.
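
The two-stage chaining idea can be sketched as follows on synthetic series. Note that the paper uses Jordan-type recurrent networks, whereas this illustration substitutes plain feed-forward networks on lagged inputs (scikit-learn's MLPRegressor); the data, lag length and network sizes are assumptions.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
years = np.arange(1980, 2015)
gdp = 100 * 1.04 ** (years - 1980) + rng.normal(0, 2, len(years))   # synthetic GDP
pop = 45 + 0.9 * (years - 1980) + rng.normal(0, 0.3, len(years))    # synthetic population
demand = 0.5 * gdp + 1.2 * pop + rng.normal(0, 3, len(years))       # synthetic demand

def lag_matrix(series, lags=3):
    """Build (X, y) pairs where X holds the previous `lags` values."""
    X = np.column_stack([series[i:len(series) - lags + i] for i in range(lags)])
    return X, series[lags:]

# Stage 1: forecast the drivers (GDP, population) from their own history.
stage1 = {}
for name, series in [("gdp", gdp), ("pop", pop)]:
    X, y = lag_matrix(series)
    stage1[name] = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000,
                                random_state=0).fit(X, y)

# Stage 2: forecast demand from the drivers.
X2 = np.column_stack([gdp, pop])
stage2 = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000,
                      random_state=0).fit(X2, demand)

# Chain the stages: drivers predicted for the next year feed the demand model.
next_gdp = stage1["gdp"].predict(gdp[-3:][None, :])[0]
next_pop = stage1["pop"].predict(pop[-3:][None, :])[0]
print("forecast demand:", stage2.predict([[next_gdp, next_pop]])[0])
```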

Jochen Kerdels (FernUniversität in Hagen)
Gabriele Peters (FernUniversität in Hagen)
A Sparse Representation of High-Dimensional Input Spaces Based on an Augmented Growing Neural Gas

ABSTRACT. The growing neural gas (GNG) algorithm is an unsupervised learning method that is able to approximate the structure of its input space with a network of prototypes. Each prototype represents a local input space region, and neighboring prototypes in the GNG network correspond to neighboring regions in input space. However, with an increasing dimensionality of input space, the GNG network structure becomes less and less meaningful as typical distance measures like the Euclidean distance lose their expressiveness in higher dimensions. Here we investigate how a GNG augmented with local input space histograms can be used to create a sparse representation of the input space that retains important neighborhood relations discovered by the GNG while pruning erroneous relations that were introduced due to effects of high dimensionality.
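
For reference, a bare-bones growing neural gas in the style of Fritzke's original algorithm is sketched below on 2D toy data; it does not include the local input space histograms that the paper adds, and all parameter values are illustrative assumptions.

```python
import random
import math

def dist2(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def gng(data, max_nodes=30, steps=5000, eps_b=0.05, eps_n=0.006,
        age_max=50, insert_every=100, alpha=0.5, d=0.995):
    """Minimal growing neural gas on a list of input vectors."""
    dim = len(data[0])
    nodes = {0: [random.random() for _ in range(dim)],
             1: [random.random() for _ in range(dim)]}
    error = {0: 0.0, 1: 0.0}
    edges = {}                                   # frozenset({a, b}) -> age
    next_id = 2
    for step in range(1, steps + 1):
        x = random.choice(data)
        s1, s2 = sorted(nodes, key=lambda n: dist2(nodes[n], x))[:2]
        error[s1] += dist2(nodes[s1], x)
        # Move the winner and its topological neighbours towards the input.
        nodes[s1] = [w + eps_b * (xi - w) for w, xi in zip(nodes[s1], x)]
        for e in list(edges):
            if s1 in e:
                edges[e] += 1
                other = next(n for n in e if n != s1)
                nodes[other] = [w + eps_n * (xi - w)
                                for w, xi in zip(nodes[other], x)]
        edges[frozenset((s1, s2))] = 0           # connect / refresh the winner pair
        # Prune stale edges and isolated prototypes.
        for e in [e for e, age in edges.items() if age > age_max]:
            del edges[e]
        for n in [n for n in nodes if not any(n in e for e in edges)]:
            del nodes[n], error[n]
        # Periodically insert a prototype in the highest-error region.
        if step % insert_every == 0 and len(nodes) < max_nodes:
            q = max(error, key=error.get)
            neigh = [next(n for n in e if n != q) for e in edges if q in e]
            if neigh:
                f = max(neigh, key=lambda n: error[n])
                r = next_id; next_id += 1
                nodes[r] = [(a + b) / 2 for a, b in zip(nodes[q], nodes[f])]
                del edges[frozenset((q, f))]
                edges[frozenset((q, r))] = 0
                edges[frozenset((r, f))] = 0
                error[q] *= alpha; error[f] *= alpha
                error[r] = error[q]
        for n in error:
            error[n] *= d
    return nodes, edges

# Ring-shaped toy data in 2D.
data = [(math.cos(t) + random.gauss(0, 0.05), math.sin(t) + random.gauss(0, 0.05))
        for t in [random.uniform(0, 2 * math.pi) for _ in range(2000)]]
nodes, edges = gng(data)
print(len(nodes), "prototypes,", len(edges), "edges")
```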

Alexandra Kirsch (Eberhard Karls Universität Tübingen)
Heuristic Decision-Making for Human-aware Navigation in Domestic Environments

ABSTRACT. Robot navigation in domestic environments is still a challenge. This paper introduces a cognitively inspired decision-making method and an instantiation of it for (local) robot navigation in spatially constrained environments. We compare the method to two existing local planners with respect to efficiency, safety and legibility.

Xudong Liu (Computer Science Department, University of Kentucky)
Mirek Truszczynski (Computer Science Department, University of Kentucky)
Learning Partial Lexicographic Preference Trees and Forests over Multi-Valued Attributes

ABSTRACT. Partial lexicographic preference trees, or PLP-trees, form an intuitive formalism for compact representation of qualitative preferences over combinatorial domains. We show that PLP-trees can be used to accurately model preferences arising in practical situations, and that high-accuracy PLP-trees can be effectively computed. We also propose and study a variant of the model based on the concept of a PLP-forest, a collection of PLP-trees, where the preference order specified by a PLP-forest is obtained by aggregating the orders of its constituent PLP-trees. The motivation is that learning many PLP-trees, each from a small set of examples, is often faster than learning a single tree from a large example set, yet, thanks to aggregation, yields an accurate and robust representation of the preference order being modeled. We propose and implement several algorithms to learn PLP-trees and PLP-forests. To support experimentation, we use datasets that we adapted to the preference learning setting from existing classification datasets. Our results demonstrate the potential of both approaches, with learning PLP-forests showing particularly promising behavior.
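
A minimal sketch of how preferences could be read off such structures: linear (unbranched) PLP-"trees" encoded as ordered attribute/value rankings, with forest aggregation by simple pairwise majority. The encoding and the aggregation rule are simplifying assumptions for illustration, not the paper's full formalism or learning algorithms.

```python
# A linear PLP-"tree" as an ordered list of (attribute, preferred-value ranking).
# Outcomes are dicts mapping attribute names to values.

def plp_prefers(tree, o1, o2):
    """Return 1 if o1 is preferred, -1 if o2 is preferred, 0 if equivalent."""
    for attr, ranking in tree:
        v1, v2 = o1[attr], o2[attr]
        if v1 != v2:                     # first deciding attribute wins
            return 1 if ranking.index(v1) < ranking.index(v2) else -1
    return 0

def forest_prefers(forest, o1, o2):
    """Aggregate the trees' verdicts by a simple majority vote."""
    score = sum(plp_prefers(t, o1, o2) for t in forest)
    return 1 if score > 0 else -1 if score < 0 else 0

# Two toy trees over car attributes (most important attribute first).
tree_a = [("price", ["low", "mid", "high"]), ("color", ["red", "black"])]
tree_b = [("color", ["black", "red"]), ("price", ["low", "mid", "high"])]

car1 = {"price": "low", "color": "black"}
car2 = {"price": "mid", "color": "red"}
print(plp_prefers(tree_a, car1, car2))            # 1: car1 wins on price
print(forest_prefers([tree_a, tree_b], car1, car2))
```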

Michel Ludwig (Institut für Theoretische Informatik, TU Dresden)
Dirk Walther (TU Dresden)
Towards a Decision Procedure for Uniform Interpolants in EL-TBoxes - a Proof-Theoretic Approach

ABSTRACT. We propose a proof-theoretic decision procedure for subsumer interpolants of general TBoxes formulated in the Description Logic EL. A subsumer interpolant of a TBox for a signature and a concept is a TBox that only uses symbols from the signature and that entails exactly the same subsumers of the concept over the signature as the original TBox. We show how the problem of deciding the existence of uniform interpolants of EL-TBoxes can be divided into three subproblems based on a characterisation of the logical difference between EL-TBoxes. The decision procedure for subsumer interpolants that we introduce in this paper solves one of these subproblems. We also evaluate our procedure by applying a prototype implementation to two biomedical ontologies.

Abhijeet Mohapatra (Stanford University)
Bertrand Decoster (Stanford University)
Sudhir Agarwal (Stanford University)
Michael Genesereth (Stanford University)
Invariant Projections in Games

ABSTRACT. Identification of implicit structures in dynamic systems is a fundamental problem in Artificial Intelligence. In this paper, we focus on General Game Playing where games are modeled as finite state machines. We define a new property of game states called invariant projections which strongly corresponds to humans' intuition of game boards and may be applied in General Game Playing to support powerful heuristics, and to automate the design of game visualizations. We prove that the computation of invariant projections is Pi_{2}^{P}-complete in the size of the game description. We also show that invariant projections form a lattice, and the lattice ordering may be used to reduce the time to compute invariant projections potentially by a factor that is exponential in the schema size of game states. To enable competitive general game players to efficiently identify boards, we propose a sound (but incomplete) heuristic for computing invariant projections and evaluate its performance.

Alan Mosca (Birkbeck, University of London)
George Magoulas (Birkbeck College)
Deep Incremental Boosting

ABSTRACT. This paper introduces Deep Incremental Boosting, a new technique derived from AdaBoost and specifically adapted to work with Deep Learning methods, which reduces the required training time and improves generalisation. We draw inspiration from Transfer of Learning approaches to reduce the start-up time required to train each incremental ensemble member. We present a set of experiments that outlines preliminary results on common Deep Learning datasets and discuss the potential improvements Deep Incremental Boosting brings to traditional ensemble methods in Deep Learning.

Tobias Philipp (TU Dresden)
Certificates for Parallel SAT Solver Portfolios with Clause Sharing and Inprocessing

ABSTRACT. Contemporary sequential SAT solvers construct certificates in the DRAT format to increase the confidence of their answers for unsatisfiable instances. However, the DRAT format is inadequate to model parallel SAT solvers, such as plingeling, that are based on the portfolio approach with restricted clause sharing and inprocessing. In this paper, we develop a formal model of parallel portfolio solvers that cooperate by restricted clause sharing and apply all known formula simplification techniques. We show that the formalism is sound, complete, and models the computation performed by the award-winning system plingeling. Moreover, we present a new proof format, PDRAT, that can be used to certify UNSAT answers from such portfolio solvers.

Steve Prestwich (Insight Centre for Data Analytics)
S. Armagan Tarim (Hacettepe Univ)
Roberto Rossi (University of Edinburgh)
Constraint Problem Specification as Compression

ABSTRACT. Constraint Programming is a powerful and expressive framework for modelling and solving combinatorial problems. It is nevertheless not always easy to use, which has led to the development of high-level specification languages. We describe a new modelling approach inspired by an idea from Algorithmic Information Theory: that for any scientific or mathematical theory "understanding is compression". That is, the more compactly we can express a theory, the more we have understood it. We use Constraint Logic Programming as a meta-language to describe itself more compactly via compression techniques. We show that this approach can produce short, clear descriptions of standard constraint problems. In particular, it allows a simple and natural description of compound variables and channeling constraints. Moreover, for a problem whose specification requires the solution of an auxiliary problem, a single specification can unify the two problems. We call our specification language KOLMOGOROV.

Giles Reger (University of Manchester)
Nikolaj Bjorner (Microsoft Research)
Martin Suda (Technische Universität Wien)
Andrei Voronkov (The University of Manchester)
AVATAR Modulo Theories

ABSTRACT. This paper introduces a new technique for reasoning with quantifiers and theories. Traditionally, first-order theorem provers (ATPs) are well suited to reasoning with first-order problems containing many quantifiers, and satisfiability modulo theories (SMT) solvers are well suited to reasoning with first-order problems in ground theories such as arithmetic. A recent development in first-order theorem proving has been the AVATAR architecture, which uses a SAT solver to guide proof search based on a propositional abstraction of the first-order clause space. The approach turns a single proof search into a sequence of proof searches on (much) smaller sub-problems. This work extends the AVATAR approach to use an SMT solver in place of the SAT solver, with the effect that the first-order solver only needs to consider ground-theory-consistent sub-problems. The new architecture has been implemented using the Vampire theorem prover and the Z3 SMT solver. Our experimental results, and the results of recent competitions, show that such a combination can be highly effective.

Giles Reger (University of Manchester)
Martin Suda (Technische Universität Wien)
Andrei Voronkov (The University of Manchester)
New Techniques in Clausal Form Generation

ABSTRACT. In automated reasoning it is common that first-order formulas need to be translated into clausal normal form for proof search. The structure of this normal form can have a large impact on the performance of first-order theorem provers, influencing whether a proof can be found and how quickly. It is common folklore that translations should ideally minimise both the size of the generated clause set and extensions to the signature. This paper introduces a new top-down approach to clausal form generation for first-order formulas that aims to achieve this goal in a new way. The main advantage of this approach over existing bottom-up techniques is that more contextual information is available at points where decisions such as subformula-naming and skolemisation occur. Experimental results show that our implementation of the translation in Vampire can lead to clausal forms which are smaller and better suited to proof search.

Pascal Richter (RWTH Aachen University)
David Laukamp (RWTH Aachen University)
Levin Gerdes (RWTH Aachen University)
Martin Frank (RWTH Aachen University)
Erika Ábrahám (RWTH Aachen University)
Heliostat Field Layout Optimization with Evolutionary Algorithms

ABSTRACT. The exploitation of solar power for energy supply is of increasing importance. While technical development mainly takes place in the engineering disciplines, computer science offers adequate techniques for optimization. This work addresses the problem of finding an optimal heliostat field arrangement for a solar tower power plant. We propose a solution to this global, non-convex optimization problem using an evolutionary algorithm. We show that the convergence rate of a conventional evolutionary algorithm is too slow, so that the recombination and mutation operators need to be tailored to the problem. This is achieved with a new genotype representation of the individuals.
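
As a generic illustration of the optimization setting (not the paper's tailored operators, genotype representation or optical model), the sketch below runs a (mu+lambda) evolutionary algorithm over heliostat coordinates against a stand-in objective that rewards proximity to the tower and penalizes overlapping heliostats; the objective and all constants are assumptions.

```python
import random
import math

TOWER = (0.0, 0.0)
N_HELIOSTATS = 20
FIELD_RADIUS = 100.0
MIN_SPACING = 8.0

def efficiency(layout):
    """Toy objective: prefer heliostats close to the tower, penalise overlaps."""
    score = sum(1.0 / (1.0 + math.hypot(x - TOWER[0], y - TOWER[1]))
                for x, y in layout)
    penalty = 0.0
    for i in range(len(layout)):
        for j in range(i + 1, len(layout)):
            d = math.hypot(layout[i][0] - layout[j][0], layout[i][1] - layout[j][1])
            if d < MIN_SPACING:
                penalty += MIN_SPACING - d
    return score - 0.1 * penalty

def random_layout():
    return [(random.uniform(-FIELD_RADIUS, FIELD_RADIUS),
             random.uniform(-FIELD_RADIUS, FIELD_RADIUS))
            for _ in range(N_HELIOSTATS)]

def mutate(layout, sigma=5.0):
    """Gaussian perturbation of a few heliostat positions."""
    child = list(layout)
    for _ in range(random.randint(1, 3)):
        i = random.randrange(N_HELIOSTATS)
        x, y = child[i]
        child[i] = (x + random.gauss(0, sigma), y + random.gauss(0, sigma))
    return child

def recombine(a, b):
    """Uniform crossover over heliostat positions."""
    return [random.choice(pair) for pair in zip(a, b)]

def evolve(mu=10, lam=40, generations=200):
    population = [random_layout() for _ in range(mu)]
    for _ in range(generations):
        offspring = [mutate(recombine(*random.sample(population, 2)))
                     for _ in range(lam)]
        population = sorted(population + offspring, key=efficiency,
                            reverse=True)[:mu]
    return population[0]

best = evolve()
print("best layout efficiency:", round(efficiency(best), 3))
```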

Behnam Sabeti (Sharif University of Technology)
Pedram Hosseini (University of Guilan)
Gholamreza Ghassem-Sani (Sharif University of Technology)
Seyed Abolghasem Mirroshandel (University of Guilan)
LexiPers: An ontology-based sentiment lexicon for Persian

ABSTRACT. Sentiment analysis refers to the use of natural language processing to identify and extract subjective information from textual resources. One approach to sentiment extraction is to use a sentiment lexicon, a set of words associated with the sentiment orientation they express. In this paper, we describe the process of generating a general-purpose sentiment lexicon for Persian. A new graph-based method is introduced for seed selection and expansion based on an ontology. Sentiment lexicon generation is then mapped to a document classification problem, for which we used the K-nearest neighbors and nearest centroid methods. These classifiers were evaluated on a set of hand-labeled synsets, and the final sentiment lexicon was generated by the best classifier. The results show acceptable performance of the generated lexicon in terms of accuracy and F-measure.
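
A minimal sketch of the classification step, assuming synset feature vectors are already available: K-nearest neighbors and nearest centroid classifiers (scikit-learn) label unlabelled synsets from hand-labeled seeds. The random vectors and labels below are placeholders, not the LexiPers data or features.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier, NearestCentroid

# Stand-in synset embeddings and hand-labelled sentiment orientations.
rng = np.random.default_rng(42)
X_seed = np.vstack([rng.normal(+1, 1, (30, 50)),     # positive seeds
                    rng.normal(-1, 1, (30, 50))])    # negative seeds
y_seed = np.array([1] * 30 + [-1] * 30)
X_unlabelled = rng.normal(0, 1.5, (200, 50))         # synsets to be labelled

knn = KNeighborsClassifier(n_neighbors=5).fit(X_seed, y_seed)
centroid = NearestCentroid().fit(X_seed, y_seed)

# The lexicon assigns each unlabelled synset the orientation predicted by a classifier.
lexicon_knn = knn.predict(X_unlabelled)
lexicon_centroid = centroid.predict(X_unlabelled)
agreement = np.mean(lexicon_knn == lexicon_centroid)
print(f"classifier agreement on unlabelled synsets: {agreement:.2f}")
```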

Matching Jobs and Resumes: a Deep Collaborative Filtering Task

ABSTRACT. This paper tackles the automatic matching of job seekers and recruiters, based on the logs of a recruitment agency (CVs, job announcements and application clicks). Preliminary experiments reveal that good recommendation performance in collaborative filtering mode (emitting recommendations for a known recruiter using the clicks) coexists with poor performance in cold-start mode (emitting recommendations based on the job announcement only). A tentative interpretation of these results is proposed, claiming that job seekers and recruiters, although their mother tongue is French, do not yet speak the same language. As a first contribution, this paper shows that the information inferred from their interactions differs from the information contained in the CVs and job announcements.

The second contribution is the hybrid system Majore, in which a deep neural net is trained to match the properties of the collaborative filtering representation. The experimental validation demonstrates Majore's merits, with good matching performance in cold-start mode.
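
A generic hybrid sketch of this idea, under assumptions: synthetic clicks and job texts, a truncated SVD for the collaborative filtering representation, and an MLP mapping job content to that representation. This is not Majore's actual architecture or data, only an illustration of content-to-latent-space matching for cold start.

```python
import numpy as np
from sklearn.decomposition import TruncatedSVD
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
n_seekers, n_jobs, n_words, k = 200, 80, 300, 16

# Synthetic stand-ins: click matrix (seeker x job) and bag-of-words job descriptions.
clicks = (rng.random((n_seekers, n_jobs)) < 0.05).astype(float)
job_text = rng.random((n_jobs, n_words))

# Collaborative-filtering representation: latent factors from the click matrix.
svd = TruncatedSVD(n_components=k, random_state=0)
seeker_factors = svd.fit_transform(clicks)           # (n_seekers, k)
job_factors = svd.components_.T                      # (n_jobs, k)

# Hybrid step: a network maps job text to the job's collaborative representation.
content_to_cf = MLPRegressor(hidden_layer_sizes=(64,), max_iter=2000,
                             random_state=0).fit(job_text, job_factors)

# Cold start: a brand-new job has text but no clicks; predict its factors
# and rank seekers by affinity in the shared latent space.
new_job_text = rng.random((1, n_words))
predicted_factors = content_to_cf.predict(new_job_text)
affinity = seeker_factors @ predicted_factors.ravel()
print("top-5 recommended seekers:", np.argsort(-affinity)[:5])
```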

Nada Sharaf (The German University in Cairo)
Slim Abdennadher (German University in Cairo)
Thom Fruehwirth (University of Ulm)
Daniel Gall (Ulm University)
Animating Cognitive Models and Architectures: A Rule-based Approach

ABSTRACT. Computational psychology provides computational models exploring different aspects of cognition. A cognitive architecture includes the basic aspects of any cognitive agent and consists of different correlated modules. In general, cognitive architectures provide the layouts needed for building intelligent agents. The paper presents a rule-based approach to visually animate the simulations of models built with cognitive architectures. As a proof of concept, simulations in Adaptive Control of Thought-Rational (ACT-R), a well-known cognitive architecture, were animated. ACT-R has been used to create models in different fields including, among others, learning, problem solving and language.

Hara Skouteli (Open University of Cyprus)
Loizos Michael (Open University of Cyprus)
Empirical Investigation of Learning-Based Imputation Policies

ABSTRACT. Certain approaches for missing-data imputation propose the use of learning techniques to identify regularities and relations between attributes, which are subsequently used to impute some of the missing data. Prior theoretical results suggest that the soundness and completeness of such learning-based techniques can be improved by applying rules anew on the imputed data, as long as one is careful in choosing which rules to apply at each stage. This work presents an empirical investigation of three natural learning-based imputation policies: training rules once and applying them repeatedly; training new rules at each iteration; continuing the training of previous rules at each iteration. We examine how the three policies fare across different settings. In line with the predictions of the theory, we find that an iterative learn-predict approach is preferable.
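
The three policies can be sketched as follows, with a linear classifier standing in for the learned rules and a decision-margin threshold standing in for the confidence criterion; both substitutions, as well as the synthetic data, are assumptions made for illustration. The "continue" policy relies on incremental training via partial_fit.

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(600, 5))                         # fully observed attributes
target = (X[:, :3].sum(axis=1) > 0).astype(int)       # attribute to be imputed
missing = rng.random(600) < 0.5
values = np.where(missing, -1, target)                # -1 marks a missing value

def impute(model, vals, margin=0.5):
    """Fill only the missing entries whose decision margin exceeds a threshold."""
    filled = vals.copy()
    todo = np.where(filled == -1)[0]
    if len(todo):
        scores = model.decision_function(X[todo])
        confident = np.abs(scores) >= margin
        filled[todo[confident]] = (scores[confident] > 0).astype(int)
    return filled

def run(policy, rounds=5):
    vals, model = values.copy(), None
    for _ in range(rounds):
        known = vals != -1
        if model is None or policy == "retrain":      # new rules each iteration
            model = SGDClassifier(loss="hinge", random_state=0).fit(X[known], vals[known])
        elif policy == "continue":                    # keep refining the same rules
            model.partial_fit(X[known], vals[known], classes=np.array([0, 1]))
        # policy == "train-once": keep the rules learned in the first round
        vals = impute(model, vals)
    done = (vals != -1) & missing                     # entries that got imputed
    acc = np.mean(vals[done] == target[done]) if done.any() else float("nan")
    return acc, done.sum() / missing.sum()

for policy in ["train-once", "retrain", "continue"]:
    acc, coverage = run(policy)
    print(f"{policy:10s} accuracy on imputed {acc:.2f}, coverage {coverage:.2f}")
```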

Ying Zhu (University of Kentucky)
Mirek Truszczynski (Computer Science Department, University of Kentucky)
Learning Importance of Preferences

ABSTRACT. We study the problem of learning the importance of preferences in preference profiles in two important cases: when individual preferences are aggregated by the ranked Pareto rule, and when they are aggregated by positional scoring rules. For the ranked Pareto rule, we provide a polynomial-time algorithm that finds a ranking of preferences such that the ranked profile correctly decides all the examples, whenever such a ranking exists. We also show that the problem of learning a ranking that maximizes the number of correctly decided examples (also under the ranked Pareto rule) is NP-hard. We obtain similar results for the case of weighted profiles when positional scoring rules are used for aggregation.