GCAI 2016 / 2nd Global Conference on Artificial Intelligence
PROGRAM FOR SUNDAY, OCTOBER 2ND, 2016

11:00-12:30 Session 12: Cognition and Constraints
11:00
Animating Cognitive Models and Architectures: A Rule-based Approach
SPEAKER: unknown

ABSTRACT. Computational psychology provides computational models exploring different aspects of cognition. A cognitive architecture includes the basic aspects of any cognitive agent. It consists of different correlated modules. In general, cognitive architectures provide the needed layouts for building intelligent agents. The paper presents a rule-based approach to visually animating the simulations of models built with cognitive architectures. As a proof of concept, simulations in Adaptive Control of Thought-Rational (ACT-R), a well-known cognitive architecture, were animated. ACT-R has been deployed to create models in different fields including, among others, learning, problem solving and languages.
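
As a rough sketch of how such a rule-based animation layer might look (the trace format, rule keys, and animation commands below are invented for illustration; the abstract does not specify them), production and module events from a simulation trace can be rewritten into timed animation commands:

    # Minimal sketch (hypothetical, not the authors' implementation): mapping
    # ACT-R-style trace events to animation commands with rewrite rules.

    # Each rule maps a (module, event) trace entry to a visual update.
    ANIMATION_RULES = {
        ("retrieval", "start"):  lambda ev: f"highlight declarative module ({ev['chunk']})",
        ("retrieval", "done"):   lambda ev: f"draw arrow declarative -> buffer ({ev['chunk']})",
        ("production", "fired"): lambda ev: f"flash procedural module ({ev['name']})",
    }

    def animate(trace):
        """Turn a simulation trace into an ordered list of animation commands."""
        frames = []
        for ev in trace:
            rule = ANIMATION_RULES.get((ev["module"], ev["event"]))
            if rule:
                frames.append((ev["time"], rule(ev)))
        return frames

    trace = [
        {"time": 0.05, "module": "production", "event": "fired", "name": "start-count"},
        {"time": 0.10, "module": "retrieval", "event": "start", "chunk": "count-fact-2-3"},
        {"time": 0.15, "module": "retrieval", "event": "done",  "chunk": "count-fact-2-3"},
    ]
    for t, cmd in animate(trace):
        print(f"{t:.2f}s  {cmd}")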

11:30
Matching Qualitative Constraint Networks with Online Reinforcement Learning

ABSTRACT. Local Compatibility Matrices (LCMs) are mechanisms for computing heuristics for graph matching. They are particularly suited to matching qualitative constraint networks, enabling the transfer of qualitative spatial knowledge between qualitative reasoning systems or agents. A system of LCMs can be used during matching to compute a pre-move evaluation, which acts as a prior optimistic estimate of the value of matching a pair of nodes, and a post-move evaluation, which adjusts the prior estimate in the direction of the true value upon completing the move. We present a metaheuristic method that uses reinforcement learning to improve the prior estimates based on the posterior evaluation. The learned values implicitly identify unprofitable regions of the search space. We also present data structures that allow a more compact implementation, limiting the space and time complexity of our algorithm.
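
The learning step can be pictured as a simple value update (a sketch of our reading of the abstract; the learning rate and score scales are assumptions): the learned prior for a node pair is nudged toward the post-move evaluation each time that pair is matched.

    # Sketch of the prior/posterior idea described in the abstract
    # (details are assumptions): the pre-move estimate for matching node
    # pair (i, j) is adjusted toward the post-move evaluation.

    from collections import defaultdict

    ALPHA = 0.1                    # learning rate (assumed value)
    value = defaultdict(float)     # learned correction per node pair

    def pre_move_estimate(pair, lcm_score):
        # optimistic prior: heuristic LCM score plus learned correction
        return lcm_score + value[pair]

    def update(pair, lcm_score, post_move_score):
        # move the prior in the direction of the true (posterior) value
        prior = pre_move_estimate(pair, lcm_score)
        value[pair] += ALPHA * (post_move_score - prior)

    update(("a", "x"), lcm_score=0.8, post_move_score=0.2)  # pair turned out poor
    print(value[("a", "x")])   # negative correction: region marked unprofitable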

12:00
Constraint Problem Specification as Compression
SPEAKER: unknown

ABSTRACT. Constraint Programming is a powerful and expressive framework for modelling and solving combinatorial problems. It is nevertheless not always easy to use, which has led to the development of high-level specification languages. We describe a new modelling approach inspired by an idea from Algorithmic Information Theory: that for any scientific or mathematical theory "understanding is compression". That is, the more compactly we can express a theory, the more we have understood it. We use Constraint Logic Programming as a meta-language to describe itself more compactly via compression techniques. We show that this approach can produce short, clear descriptions of standard constraint problems. In particular, it allows a simple and natural description of compound variables and channeling constraints. Moreover, for a problem whose specification requires the solution of an auxiliary problem, a single specification can unify the two problems. We call our specification language KOLMOGOROV.
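
The paper embeds its language in Constraint Logic Programming; as a loose Python analogue (an illustration of the compression view, not KOLMOGOROV's syntax), the point is that a short generator "decompresses" into the full set of explicit constraints:

    # Loose analogue (assumption, not the paper's language): a compact
    # spec expands into many pairwise constraints, so the shorter the
    # spec, the better the problem is "understood".

    def n_queens_spec(n):
        """Compact spec: one comprehension expands to all pairwise constraints."""
        vars_ = [f"q{i}" for i in range(n)]
        constraints = [
            (vars_[i], vars_[j], kind)
            for i in range(n) for j in range(i + 1, n)
            for kind in ("different-column", "different-diagonal")
        ]
        return vars_, constraints

    vars_, cons = n_queens_spec(4)
    print(len(vars_), "variables,", len(cons), "constraints decompressed")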

14:00-15:30 Session 13: Machine Learning
14:00
Deep Incremental Boosting
SPEAKER: unknown

ABSTRACT. This paper introduces Deep Incremental Boosting, a new technique derived from AdaBoost and specifically adapted to work with Deep Learning methods, which reduces the required training time and improves generalisation. We draw inspiration from Transfer of Learning approaches to reduce the start-up time of training each incremental Ensemble member. We present a set of experiments outlining preliminary results on common Deep Learning datasets and discuss the potential improvements Deep Incremental Boosting brings to traditional Ensemble methods in Deep Learning.
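
A structural sketch of the boosting loop as we read the abstract (the toy network, epoch counts, and stubbed training step are assumptions): each round after the first copies the previous member, grows it by a layer, and fine-tunes briefly instead of training from scratch.

    # Structural sketch only: the copy-and-extend transfer step inside an
    # AdaBoost-style loop. ToyNet and train() are placeholders.

    import copy, math, random

    class ToyNet:
        def __init__(self):
            self.layers = [random.random()]
        def add_layer(self):
            self.layers.append(random.random())   # grow the architecture
        def predict(self, x):                     # toy threshold "network"
            return 1 if x * sum(self.layers) > 0.5 else -1

    def train(net, data, labels, weights, epochs):
        return net    # placeholder: weighted fine-tuning would happen here

    def deep_incremental_boosting(data, labels, rounds=3):
        n = len(data)
        w = [1.0 / n] * n                         # AdaBoost example weights
        ensemble, net = [], ToyNet()
        for t in range(rounds):
            if t > 0:
                net = copy.deepcopy(net)          # transfer from previous member
                net.add_layer()
            net = train(net, data, labels, w, epochs=50 if t == 0 else 10)
            err = sum(wi for wi, x, y in zip(w, data, labels)
                      if net.predict(x) != y)
            err = min(max(err, 1e-9), 1 - 1e-9)   # clamp away from 0 and 1
            alpha = 0.5 * math.log((1 - err) / err)
            w = [wi * math.exp(-alpha * y * net.predict(x))
                 for wi, x, y in zip(w, data, labels)]
            s = sum(w)
            w = [wi / s for wi in w]              # renormalise, boosting hard examples
            ensemble.append((alpha, net))
        return ensemble

    members = deep_incremental_boosting([0.1, 0.4, 0.9, 1.2], [-1, -1, 1, 1])
    print(len(members), "ensemble members")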

14:30
A Sparse Representation of High-Dimensional Input Spaces Based on an Augmented Growing Neural Gas
SPEAKER: unknown

ABSTRACT. The growing neural gas (GNG) algorithm is an unsupervised learning method that is able to approximate the structure of its input space with a network of prototypes. Each prototype represents a local input space region and neighboring prototypes in the GNG network correspond to neighboring regions in input space. However, with increasing dimensionality of the input space the GNG network structure becomes less and less meaningful, as typical distance measures like the Euclidean distance lose their expressiveness in higher dimensions. Here we investigate how a GNG augmented with local input space histograms can be used to create a sparse representation of the input space that retains important neighborhood relations discovered by the GNG while pruning erroneous relations that were introduced due to effects of high dimensionality.
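
The pruning idea might be sketched as follows (histogram form, overlap measure, and threshold are our assumptions): each prototype keeps a normalised histogram over its local input region, and a GNG edge survives only if the neighbouring histograms genuinely overlap.

    # Sketch of histogram-based edge pruning (our reading of the abstract):
    # edges whose endpoint histograms barely overlap are treated as
    # spurious relations introduced by high-dimensional distances.

    import numpy as np

    def histogram_overlap(h1, h2):
        """Histogram intersection of two normalised histograms."""
        return np.minimum(h1, h2).sum()

    def prune_edges(edges, histograms, threshold=0.2):
        return [(a, b) for (a, b) in edges
                if histogram_overlap(histograms[a], histograms[b]) >= threshold]

    hists = {
        0: np.array([0.7, 0.3, 0.0, 0.0]),
        1: np.array([0.4, 0.5, 0.1, 0.0]),
        2: np.array([0.0, 0.0, 0.2, 0.8]),   # lives in a different region
    }
    edges = [(0, 1), (1, 2)]
    print(prune_edges(edges, hists))          # [(0, 1)]: edge (1, 2) pruned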

15:00
Learning Partial Lexicographic Preference Trees and Forests over Multi-Valued Attributes
SPEAKER: unknown

ABSTRACT. Partial lexicographic preference trees, or PLP-trees, form an intuitive formalism for compact representation of qualitative preferences over combinatorial domains. We show that PLP-trees can be used to accurately model preferences arising in practical situations, and that high-accuracy PLP-trees can be effectively computed. We also propose and study a variant of the model based on the concept of a PLP-forest, a collection of PLP-trees, where the preference order specified by a PLP-forest is obtained by aggregating the orders of its constituent PLP-trees. The motivation is that learning many PLP-trees, each from a small set of examples, is often faster than learning a single tree from a large example set, yet, thanks to aggregation, yields an accurate and robust representation of the preference order being modeled. We propose and implement several algorithms to learn PLP-trees and PLP-forests. To support experimentation, we use datasets that we adapted to the preference learning setting from existing classification datasets. Our results demonstrate the potential of both approaches, with learning PLP-forests showing particularly promising behavior.
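
For intuition, a drastically simplified (unconditional) variant of a PLP-tree can be written as an importance-ordered list of attributes with value rankings, and a forest can vote (the attribute names, rankings, and majority-vote aggregator below are invented for illustration; the paper studies richer tree structures and aggregators):

    # Toy sketch: the first attribute on which two alternatives differ
    # decides, lexicographically; a forest sums the trees' verdicts.

    def plp_tree_prefers(tree, a, b):
        """Return +1 if a is preferred to b, -1 if b is preferred, 0 if tied."""
        for attr, ranking in tree:
            if a[attr] != b[attr]:
                return 1 if ranking.index(a[attr]) < ranking.index(b[attr]) else -1
        return 0

    def forest_prefers(forest, a, b):
        return sum(plp_tree_prefers(t, a, b) for t in forest)

    tree1 = [("price", ["low", "mid", "high"]), ("colour", ["red", "blue"])]
    tree2 = [("colour", ["blue", "red"]), ("price", ["low", "mid", "high"])]
    car_a = {"price": "low", "colour": "blue"}
    car_b = {"price": "low", "colour": "red"}
    print(forest_prefers([tree1, tree2], car_a, car_b))  # trees disagree: 0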

16:00-18:00 Session 14: Natural Language Processing, Automated Reasoning
16:00
LexiPers: An ontology based sentiment lexicon for Persian
SPEAKER: unknown

ABSTRACT. Sentiment analysis refers to the use of natural language processing to identify and extract subjective information from textual resources. One approach to sentiment extraction is using a sentiment lexicon: a set of words associated with the sentiment orientation that they express. In this paper, we describe the process of generating a general-purpose sentiment lexicon for Persian. A new graph-based method is introduced for seed selection and expansion based on an ontology. Sentiment lexicon generation is then mapped to a document classification problem. We used the K-nearest neighbors and nearest centroid methods for classification. These classifiers have been evaluated on a set of hand-labeled synsets, and the final sentiment lexicon has been generated by the best classifier. The results show that the generated sentiment lexicon achieves acceptable performance in terms of accuracy and F-measure.
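
A toy of the k-nearest-neighbour classification step (the graph, seed labels, and k are invented; the paper works over a Persian ontology with hand-labelled synsets): unlabeled synsets take the polarity of their nearest seed(s) in the ontology graph.

    # Toy sketch: polarity by nearest labelled seed, with distance
    # measured as shortest-path length in the ontology graph.

    from collections import deque

    graph = {                                   # tiny undirected toy graph
        "good": ["great", "fine"], "great": ["good"],
        "fine": ["good", "bad"],   "bad": ["fine", "awful"], "awful": ["bad"],
    }
    seeds = {"great": +1, "awful": -1}          # hand-labelled seed polarities

    def distances(start):
        dist, q = {start: 0}, deque([start])
        while q:                                # breadth-first search
            u = q.popleft()
            for v in graph[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        return dist

    def knn_polarity(synset, k=1):
        d = distances(synset)
        nearest = sorted((d[s], seeds[s]) for s in seeds if s in d)[:k]
        vote = sum(label for _, label in nearest)
        return +1 if vote > 0 else -1 if vote < 0 else 0

    print(knn_polarity("good"))                 # closest seed is "great": +1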

16:30
Application-Independent and Integration-Friendly Natural Language Understanding
SPEAKER: unknown

ABSTRACT. Natural Language Understanding (NLU) has been a long-standing goal of AI and many related fields, but it is often dismissed as intractable. NLU entails systems that take action without human intervention. This inherently involves strong semantic (meaning) capabilities to parse queries and commands correctly and with high confidence, because an error by a robot or automated vehicle could be disastrous. We describe an implemented general framework, the ECG2 system, that supports the deployment of NLU systems over a wide range of application domains. The framework is based on decades of research on embodied action-oriented semantics and efficient computational realization of a deep semantic analyzer (parser). This makes it linguistically much more flexible, general and reliable than existing shallow approaches that process language without considering its deeper semantics. In this paper we describe our work from a Computer Science perspective of system integration, and show why our particular architecture requires considerably less effort to connect the system to new applications compared to other language processing tools.

17:00
A Clausal Normal Form Translation for FOOL
SPEAKER: unknown

ABSTRACT. Formal verification and analysis of software heavily uses theorem provers for various logics to automatically check properties of programs. Theorem provers first translate software properties expressed as formulas in these logics into a normal form, usually a clausal normal form (CNF), and then search for proofs or models. The translation of arbitrary formulas to a CNF can crucially affect the performance of a theorem prover. In our recent work we introduced a modification of first-order logic extended with a first-class Boolean sort and syntactical constructs that mirror features of programming languages. We called this logic FOOL and argued that one can directly express program properties in FOOL. Formulas in FOOL can be translated to ordinary first-order formulas and checked by first-order theorem provers. While this translation is straightforward, it does not result in a CNF that can be efficiently handled by state-of-the-art theorem provers which use the superposition calculus. In this paper we present a new CNF translation algorithm for FOOL that is friendly and efficient for superposition-based first-order provers. We implemented the algorithm in the Vampire theorem prover and evaluated it on a large number of problems coming from formalisation of mathematics and program analysis. Our experimental results show increased performance of the prover with our CNF translation compared to the naive translation.
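
As a worked toy of the naming idea behind such translations (our own illustration, not the paper's algorithm or Vampire's code): a Boolean if-then-else nested inside a term can be clausified by introducing a fresh symbol and two guarded clauses, so the surrounding context is stated only once instead of being duplicated for each branch.

    # Toy sketch: clausifying p(ite(b, s, t)) by naming the ite-term with a
    # fresh symbol g, instead of expanding p(...) once per branch.

    def clausify_ite(context, cond, then_t, else_t, fresh="g"):
        """cond -> g = then_t;  ~cond -> g = else_t;  context stated over g."""
        return [
            f"~{cond} | {fresh} = {then_t}",
            f"{cond} | {fresh} = {else_t}",
            context.format(fresh),
        ]

    for clause in clausify_ite("p({0})", "b", "s", "t"):
        print(clause)
    # ~b | g = s
    # b | g = t
    # p(g)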