BICA 2015: 2015 ANNUAL INTERNATIONAL CONFERENCE ON BIOLOGICALLY INSPIRED COGNITIVE ARCHITECTURES
PROGRAM FOR SUNDAY, NOVEMBER 8TH

08:30-10:00 Session 10: Plenary 4
08:30
True Machine Understanding: Implementing Cognitive Phenomenology?

ABSTRACT. It still seems correct to say that, despite 60 years of AI and Cognitive Systems design, computing machinery, while performing competent acts of scene or language interpretation for action, still cannot be said to 'understand' perceptual input. This may not be the fault of incompetent computer scientists; it may be that, alongside human concepts of consciousness and awareness, 'understanding' is hard to define. This paper examines a recent emergence in philosophy of a somewhat controversial concept called 'cognitive phenomenology' (CP) (Bayne and Montague, 2011), which is distinguished from classical phenomenology (characterised by 'there being something it is like' to be conscious of something - a rose, a pain or an emotion). Cognitive Phenomenology refers to understanding, thought and meaning experiences. For example, it is argued that 'there is something it is like' to *understand*, which is independent of what it is that is being understood. In this paper it is posited that CP may be related in neural machinery to degrees of integration between perceptual inputs and internal state trajectories that are due to learning. This can be measured, and an example is given in a simulation of visual consciousness. The result is that such measurements can provide a quality measure for 'mental states' in terms of how well they relate to material internalised by learning, that is, how well this is understood. It is suggested here that this may be the road to true understanding in artificial systems and should be studied further. Bayne, T. and Montague, M. (2011) Cognitive Phenomenology, OUP.

09:00
Cognitive robotics towards real world applications

ABSTRACT. This talk will present several cognitive robotic approaches targeting real-world applications in human-populated environments. On the one hand, we present a top-down, model-based approach based on the integration of robot functionalities. This approach has been applied to networked robot systems for edutainment activities in pediatric hospitals and in therapeutic activities with children with autism spectrum disorder. We will also discuss the integration of verbal instructions with sensorimotor functionalities, employing probabilistic planning jointly with affordance models. On the other hand, we present a data-driven approach combining deep learning with reinforcement learning. Here we are researching how to apply reinforcement learning methods to low-dimensional representations of high-dimensional perception spaces. The formation of these representations is driven by deep learning methods.

10:00-12:00 Session 11: Plenary 5
10:00
Cooperative Inference: Features, objects, and collections

ABSTRACT. Cooperation plays a central role in theories of development, learning, cultural evolution, and education. We argue that existing models of learning from cooperative informants have fundamental limitations that prevent them from explaining how cooperation benefits learning. First, existing models are shown to be computationally intractable, suggesting that they cannot apply to realistic learning problems. Second, existing models assume a priori agreement about which concepts are favored in learning, which leads to a conundrum: learning fails without precise agreement on bias yet there is no single rational choice. We introduce Cooperative Inference, a novel framework for cooperation in concept learning, which resolves these limitations. Cooperative Inference generalizes the notion of cooperation used in previous models from the omission of labeled objects to the omission of feature values, labels for objects, and labels for collections of objects. The result is an approach that is computationally tractable, does not require a priori agreement about biases, applies to both Boolean and first-order concepts, and begins to approximate the richness of real-world concept learning problems. We conclude by discussing relations to and implications for existing theories of cognition, cognitive development, and cultural evolution.
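To make the idea concrete, the following is a minimal Python sketch of one common way to formalize cooperative (pedagogical) inference as a fixed point between a teacher and a learner on a toy concept-by-example consistency matrix; the function name, the toy matrix, and the iteration scheme are illustrative assumptions and not necessarily the formulation used in the paper.

```python
import numpy as np

def cooperative_fixed_point(consistency, iters=100):
    """Iterate teacher/learner normalizations to a fixed point on a toy
    concept-by-example consistency matrix: the teacher chooses examples in
    proportion to the learner's posterior, and the learner's posterior is
    proportional to the teacher's choice probabilities."""
    teacher = consistency.astype(float)
    for _ in range(iters):
        learner = teacher / teacher.sum(axis=0, keepdims=True)   # P(concept | example)
        teacher = learner / learner.sum(axis=1, keepdims=True)   # P(example | concept)
    return teacher, learner

# Toy Boolean concepts; rows = concepts, columns = examples consistent with them
consistency = np.array([[1, 1, 0],    # concept A
                        [1, 0, 1],    # concept B
                        [0, 1, 1]])   # concept C
teaching, posterior = cooperative_fixed_point(consistency)
print(np.round(teaching, 2))
```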

10:30
A Comparison among Cognitive Architectures: A Theoretical Analysis

ABSTRACT. In this paper we present a theoretical comparison among three of the most popular cognitive architectures: SOAR, LIDA and CLARION. These architectures are compared based on a set of cognitive functions supposed to exist in the human cognitive cycle, and how each architecture deals with them. The comparison emphasizes similarities and differences among the architectures, with the purpose of advising a potential user on how to identify the best architecture to employ, depending on the situation.

11:00
Cognitive Systems For Cooperative Human-Robot Interaction

ABSTRACT. I will present results on the progressive development of a brain inspired cognitive system for human-robot interaction.  The successive phases we address are language-based interaction, cooperative shared plans, embodied meaning, and the representation of self in different spatial and temporal scales.

11:30
Holons, intentions and system adaptation

ABSTRACT. Holons are the basis for building very scalable yet simple architectures. They spring from Koestler's observation that the concepts of ‘whole’ and ‘part’ have no absolute meaning in reality. A whole or a part can be easily identified in many contexts, but at the same time they can be seen as opposites. This philosophical concept has a perfect correspondence with software architecture. Nowadays it is common to approach complex systems as systems of systems. These can be seen as intrinsically recursive when considering that each of the composing systems may be decomposed into its components, which in turn may be individually addressed or regarded as an assembly of (sub-)systems/components/classes. Each of the parts, at whatever level of abstraction, has the dignity of a complete entity (a whole) but at the same time may be further exploded at a finer level of detail (as parts). Holons offer a great way of representing complex systems and solving several real-world problems, but their recursive, dynamic nature may be a challenge at design time. In this talk, holons will be the common denominator of a path that discusses the design of holonic systems and their contribution to achieving runtime system-level adaptation of cognitive multi-agent systems, for instance during the execution of norm-constrained workflows. The presented contribution of holons towards system adaptation lies in the hierarchical self-similar structure of the holonic architecture. They allow the decomposition and representation of intentional systems that achieve effective goal-oriented solutions, while at the same time providing a structure that can be learned for future reuse.

13:30-15:10 Session 12A: Sensorimotor Learning
13:30
Human robot interaction in the absence of visual and aural feedback: exploring the haptic and tactile senses

ABSTRACT. The potential of robot swarms for Search and Rescue has been shown by the Guardians project (EU, 2006-2010); however, the project also revealed the problem of human robot interaction in smoky (non-visibility) and noisy conditions. The lecture will examine what we have achieved since then in the REINS project (UK, 2011-2015) concerning human robot interaction and indicate how these results may be applicable, not only for search and rescue, but also in areas such as health care.

13:52
Estimating Human Movements Using Memory of Errors
SPEAKER: Daqi Dong

ABSTRACT. Humans estimate their movements based on their knowledge of the dynamics of the environment and on actual sensory data. Wolpert and colleagues (1995) incorporated this understanding into a model that simulates this estimation using a Kalman filter. Inspired by a recent study in neuroscience (Herzfeld, Vaswani et al. 2014), we here introduce a new factor—memory of errors—into this simulation of movement estimation. These historical errors help humans determine the quality of the environment, which may be either steady or rapidly changing. This quality controls the rate at which a given error will be learned, so as to affect the estimates of future movements. We apply our new model, a modified Kalman filter incorporating memory of errors, to the simulation of a hand-lifting movement and compare the simulated estimation process with its human counterpart.
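A minimal sketch of this kind of estimator, assuming a scalar hand-position state and a simple sign-consistency rule for how the memory of past errors modulates the gain; the parameters and the specific update rule are illustrative assumptions, not the authors' exact model.

```python
import numpy as np

def simulate_estimation(true_positions, obs_noise=0.05, process_noise=0.01,
                        eta_up=0.05, eta_down=0.05):
    """Scalar Kalman-style estimator whose gain is scaled by an error-sensitivity
    term driven by a memory of past errors (consistent error signs raise it)."""
    x_hat, P = 0.0, 1.0          # state estimate and its variance
    sensitivity = 1.0            # error sensitivity shaped by error history
    prev_error = 0.0
    estimates = []
    observations = np.asarray(true_positions) + np.random.randn(len(true_positions)) * obs_noise
    for z in observations:
        P += process_noise                      # predict step (identity dynamics)
        K = P / (P + obs_noise ** 2)            # standard Kalman gain
        error = z - x_hat
        x_hat += np.clip(sensitivity, 0.1, 2.0) * K * error   # gain scaled by sensitivity
        P *= (1.0 - K)
        # memory of errors: same-sign consecutive errors raise sensitivity, flips lower it
        sensitivity += eta_up if error * prev_error > 0 else -eta_down
        prev_error = error
        estimates.append(x_hat)
    return estimates

# Example: estimating a slow hand-lifting trajectory
trajectory = np.linspace(0.0, 1.0, 50)
estimates = simulate_estimation(trajectory)
```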

14:14
Model-based Behavioral Causality Analysis of Handball with Delayed Transfer Entropy
SPEAKER: Kota Itoda

ABSTRACT. In goal-type ball games, such as handball, basketball, hockey or soccer, teammates and opponents share the same field. They dynamically switch their behaviors and relationships based on other players' behaviors or intentions. Interactions between players are highly complicated and hard to comprehend, but recent technological developments have enabled us to acquire the positions and velocities of their behaviors. We focus on handball as an example of goal-type ball games and analyze causality between teammates' behaviors from tracking data with a Hidden Semi-Markov Model (HSMM) and delayed Transfer Entropy (dTE). Although 'off-the-ball' behaviors are a crucial component of cooperation, much research tends to focus on 'on-the-ball' behaviors, and the relations between behaviors are known only as tacit knowledge of coaches or players. Our approach, in contrast, quantitatively reveals players' relationships in 'off-the-ball' behaviors. The extracted causal relations are compared to the corresponding video scenes, and it is suggested that our approach extracts causal relationships between teammates' behaviors or intentions and clarifies the roles of the players in both attacking and defending phases.
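For readers unfamiliar with the measure, below is a minimal Python sketch of delayed transfer entropy between two discrete behavior-label sequences (for example, HSMM state labels), assuming first-order histories; the example label sequences are hypothetical.

```python
import numpy as np
from collections import Counter

def delayed_transfer_entropy(x, y, delay=1):
    """Delayed transfer entropy TE_{X->Y} (in bits) between two discrete label
    sequences: how much the source X at time t-delay reduces uncertainty about
    Y at t+1 beyond Y's own immediate past."""
    x, y = list(x), list(y)
    triples, pairs_yy, pairs_yx, singles_y = Counter(), Counter(), Counter(), Counter()
    n = 0
    for t in range(delay, len(y) - 1):
        yp, yn, xs = y[t], y[t + 1], x[t - delay]
        triples[(yn, yp, xs)] += 1
        pairs_yy[(yn, yp)] += 1
        pairs_yx[(yp, xs)] += 1
        singles_y[yp] += 1
        n += 1
    te = 0.0
    for (yn, yp, xs), c in triples.items():
        p_joint = c / n
        p_cond_full = c / pairs_yx[(yp, xs)]                 # p(y_{t+1} | y_t, x_{t-delay})
        p_cond_self = pairs_yy[(yn, yp)] / singles_y[yp]     # p(y_{t+1} | y_t)
        te += p_joint * np.log2(p_cond_full / p_cond_self)
    return te

# Hypothetical behavior labels for two teammates
attacker = [0, 0, 1, 1, 2, 2, 0, 1, 1, 2, 0, 0, 1, 2, 2, 0]
defender = [0, 0, 0, 1, 1, 2, 2, 0, 1, 1, 2, 0, 0, 1, 2, 2]
print(delayed_transfer_entropy(attacker, defender, delay=1))
```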

14:36
Respective advantages and disadvantages of model-based and model-free reinforcement learning in a robotics neuro-inspired cognitive architecture
SPEAKER: Erwan Renaudo

ABSTRACT. Combining model-based and model-free reinforcement learning systems in robotic cognitive architectures appears to be a promising direction for endowing artificial agents with flexibility and decisional autonomy close to that of mammals. In particular, it could enable robots to build an internal model of the environment, plan within it in response to detected environmental changes, and avoid the cost and time of planning when the stability of the environment is recognized as enabling habit learning. However, previously proposed criteria for the coordination of these two learning systems do not scale up to the large, partial and uncertain models autonomously learned by robots. Here we analyze in detail the performance of these two systems in an asynchronous robotic simulation of a cube-pushing task requiring a permanent trade-off between speed and accuracy. We propose solutions to make learning successful in these conditions. We finally discuss possible criteria for their efficient coordination within robotic cognitive architectures.
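As a rough illustration of the coordination problem being discussed, here is a hedged sketch of a dual-system agent in which a model-free Q-learner and a model-based one-step planner are arbitrated by the recent magnitude of reward-prediction errors; this criterion, the class structure, and all parameter values are illustrative assumptions, not the criterion proposed in the paper.

```python
import random
from collections import defaultdict

class DualSystemAgent:
    """Toy dual-system agent: the habitual (model-free) system acts when its
    recent reward-prediction errors are small, otherwise the goal-directed
    (model-based) system does a one-step lookahead on a learned model."""

    def __init__(self, actions, alpha=0.1, gamma=0.95, window=20, threshold=0.2):
        self.actions = list(actions)
        self.alpha, self.gamma = alpha, gamma
        self.window, self.threshold = window, threshold
        self.Q = defaultdict(float)                      # model-free values
        self.T = defaultdict(lambda: defaultdict(int))   # learned transition counts
        self.R = defaultdict(float)                      # learned average rewards
        self.recent_rpe = []

    def act(self, state, epsilon=0.1):
        if random.random() < epsilon:
            return random.choice(self.actions)
        mf_reliable = (len(self.recent_rpe) == self.window and
                       sum(map(abs, self.recent_rpe)) / self.window < self.threshold)
        score = self._mf_value if mf_reliable else self._mb_value
        return max(self.actions, key=lambda a: score(state, a))

    def _mf_value(self, s, a):
        return self.Q[(s, a)]

    def _mb_value(self, s, a):
        # one-step lookahead on the learned model, bootstrapping with cached Q
        counts = self.T[(s, a)]
        total = sum(counts.values())
        if total == 0:
            return self.Q[(s, a)]
        expected_next = sum(c / total * max(self.Q[(s2, a2)] for a2 in self.actions)
                            for s2, c in counts.items())
        return self.R[(s, a)] + self.gamma * expected_next

    def update(self, s, a, r, s2):
        # model-free TD update plus a short memory of reward-prediction errors
        target = r + self.gamma * max(self.Q[(s2, a2)] for a2 in self.actions)
        rpe = target - self.Q[(s, a)]
        self.Q[(s, a)] += self.alpha * rpe
        self.recent_rpe = (self.recent_rpe + [rpe])[-self.window:]
        # incremental world-model update for the model-based system
        self.T[(s, a)][s2] += 1
        n = sum(self.T[(s, a)].values())
        self.R[(s, a)] += (r - self.R[(s, a)]) / n
```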

13:30-15:10 Session 12B: Emotion
13:30
NEUCOGAR: A Neuromodulating Cognitive Architecture towards the implementation of emotions in a computational system
SPEAKER: Max Talanov

ABSTRACT. This paper introduces a new model of artificial cognitive architecture for intelligent systems, the Neuromodulating Cognitive Architecture (NEUCOGAR). The model is biomimetically inspired and adapts the role of neuromodulators in the human brain to computational environments. In this way we aim at achieving more efficient Artificial Intelligence solutions inspired by the deep functioning of the human brain, which is highly emotional. Analysis of new data obtained from neurology allows us to find a mapping of monoamine neuromodulators to emotional states and apply it to the parameters of computational systems. Artificial cognitive systems can then better perform complex tasks (regarding information selection and discrimination, attention, innovation, creativity, …) as well as engage in affordable emotional relationships with human users.
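Purely as an illustration of what mapping neuromodulator levels onto the parameters of a computational system could look like in code, here is a hypothetical sketch; the parameter names and the mapping itself are assumptions and are not taken from NEUCOGAR.

```python
from dataclasses import dataclass

@dataclass
class NeuromodulatorState:
    dopamine: float        # normalized level, 0..1
    serotonin: float       # normalized level, 0..1
    noradrenaline: float   # normalized level, 0..1

def to_computational_params(nm: NeuromodulatorState) -> dict:
    """Hypothetical mapping: dopamine modulates learning rate and reward
    salience, noradrenaline modulates attention gain and urgency, serotonin
    damps exploration/risk-taking. Not the NEUCOGAR mapping itself."""
    return {
        "learning_rate": 0.01 + 0.09 * nm.dopamine,
        "exploration_rate": 0.05 + 0.25 * (1.0 - nm.serotonin),
        "attention_gain": 0.5 + 1.5 * nm.noradrenaline,
        "task_priority_boost": nm.noradrenaline * (1.0 - nm.serotonin),
    }

print(to_computational_params(NeuromodulatorState(0.8, 0.3, 0.6)))
```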

13:52
Integrating Human Emotions with Spatial Speech Using Optimized Selection of Acoustic Phonetic Units

ABSTRACT. Synthesis of natural-sounding speech is the state of the art in the field of speech technology, and generating it requires imitating the dynamics of the human voice. The aim of this work is to develop and deploy a natural speech synthesizer for visually impaired persons. The synthesizer has been developed via an integrated approach that adds localization to expressive speech using a personalized speech corpus. A genetic algorithm has been implemented for the optimal selection of acoustic phonetic units of speech. This concept has many applications, one of which has been deployed for testing in different aspects. The performance is compared across various categories of listeners using a subjective listening test. Encouraging results were received from visually impaired listeners on various parameters.
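A minimal sketch of genetic-algorithm-based unit selection, assuming user-supplied target and concatenation (join) cost functions; the chromosome representation, operators, and parameter values are illustrative assumptions rather than the system described in the abstract.

```python
import random

def ga_unit_selection(candidates, target_cost, join_cost,
                      pop_size=40, generations=200, p_mut=0.1):
    """Select one acoustic unit per phone slot so that the summed target cost
    plus concatenation (join) cost is minimized. `candidates[i]` is the list of
    candidate units for slot i; `target_cost(i, u)` and `join_cost(u, v)` are
    user-supplied cost functions."""
    n = len(candidates)

    def random_individual():
        return [random.choice(candidates[i]) for i in range(n)]

    def fitness(ind):
        cost = sum(target_cost(i, u) for i, u in enumerate(ind))
        cost += sum(join_cost(ind[i], ind[i + 1]) for i in range(n - 1))
        return -cost  # higher fitness = lower total cost

    population = [random_individual() for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        survivors = population[: pop_size // 2]
        children = []
        while len(children) < pop_size - len(survivors):
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, n) if n > 1 else 0
            child = a[:cut] + b[cut:]                  # one-point crossover
            for i in range(n):                         # per-slot mutation
                if random.random() < p_mut:
                    child[i] = random.choice(candidates[i])
            children.append(child)
        population = survivors + children
    return max(population, key=fitness)
```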

14:14
A Computational Cognitive Model Integrating Different Emotion Regulation Strategies
SPEAKER: Adnan Manzoor

ABSTRACT. In this paper a cognitive model is introduced which integrates a model for emotion generation with models for three different emotion regulation strategies. Given a stressful situation, humans often apply multiple emotion regulation strategies. The presented computational model has been designed according to principles from recent neurological theories informed by brain imaging, as well as psychological and emotion regulation theories. More specifically, the model involves emotion generation and integrates models for the emotion regulation strategies reappraisal, expression suppression, and situation modification. The model was designed as a dynamical system. Simulation experiments are reported showing the role of the emotion regulation strategies. The simulation results show how a potentially stressful situation could in principle lead to emotional strain and how this can be avoided by applying emotion regulation strategies that decrease the stressful effects.
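To illustrate how such a dynamical-system model can be simulated, here is a hedged sketch in which a stressor drives an internal emotion level and an expressed emotion, and the three strategies act at different points of the loop; the equations and parameter values are illustrative assumptions, not the authors' model.

```python
import numpy as np

def simulate_emotion(steps=200, dt=0.1, stressor=1.0,
                     reappraisal=0.3, suppression=0.4, situation_mod=0.2):
    """Toy dynamical system: situation modification lowers the stimulus itself,
    reappraisal lowers the appraised stimulus, and expression suppression damps
    the expressed emotion relative to the felt emotion."""
    emotion, expression = 0.0, 0.0
    trace = []
    for _ in range(steps):
        effective_stressor = stressor * (1 - situation_mod)
        appraisal = effective_stressor * (1 - reappraisal)
        # felt emotion relaxes toward the appraised stressor level
        emotion += dt * (appraisal - emotion)
        # expression follows the felt emotion but is damped by suppression
        expression += dt * (emotion * (1 - suppression) - expression)
        trace.append((emotion, expression))
    return np.array(trace)

baseline = simulate_emotion(reappraisal=0.0, suppression=0.0, situation_mod=0.0)
regulated = simulate_emotion()
print("final (emotion, expression) without regulation:", baseline[-1])
print("final (emotion, expression) with regulation:   ", regulated[-1])
```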

14:36
Evolving Synthetic Pain into Adaptive Self-Awareness Framework for Robot
SPEAKER: Muh Anshar

ABSTRACT. In human robot interaction, physical contact is the most common medium of interaction, and the more physical interaction occurs, the higher the possibility of causing humans to experience pain. Humans at times communicate this through social cues, such as verbal and facial expressions, which requires robots to have the skill to capture and translate these cues into useful information. It has been reported that the concept of human pain is strongly related to the concept of self. Hence, evolving appropriate self-awareness and pain concepts for robots plays a dominant role in allowing robots to acquire this social skill. This paper focuses on imitating the concept of pain in a synthetic pain model to justify the integration and implementation of adaptive self-awareness into a real robot design framework, named ASAF. The framework develops an appropriate robot cognitive system ("self-consciousness") which includes two primary levels of self concept, namely subjective and objective. Novel experiments are designed to measure whether a robot is capable of generating appropriate synthetic pain, whether the framework's reasoning skills support an accurate "pain" acknowledgement, and, at the same time, whether it develops appropriate counter responses. We find that the proposed framework enhances the robot's awareness of its own body parts and prevents further catastrophic impact on the robot hardware.

13:30-15:00 Session 12C: Neural Networks
13:30
Columnar Machine: Fast estimation of structured sparse coding

ABSTRACT. Ever since the discovery of columnar structures, their function has remained enigmatic. As a potential explanation of this puzzling function, we introduce the 'Columnar Machine'. We join two neural networks, structured sparse coding (SSC) and multilayer perceptrons (MLPs), into one architecture. Memories supporting recognition can be quickly loaded into SSC via supervision. However, SSC evaluation is slow. An MLP is trained to predict the sparse groups, and the representation is then computed by a pseudoinverse method. This two-step procedure enables fast estimation of the overcomplete representation by exploiting undercomplete techniques. Further improvements of the estimation are possible by associative methods over the representation or by sparse methods in case of error. The suggested architecture works fast and is biologically plausible. Beyond the function of the minicolumnar structure, it may shed light on the role of fast feedforward inhibitory thalamocortical channels and inhibitory cortico-cortical feedback connections. We demonstrate the method in a cognitive task: we explain the meaning of unknown words from their contexts.
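The two-step procedure can be sketched as follows, assuming a known group-structured dictionary and training labels for the active group (in practice these would come from a slow SSC solver); the synthetic data, the scikit-learn MLP, and the single-active-group simplification are illustrative assumptions, not the paper's setup.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
n_features, n_groups, group_size = 20, 8, 5
D = rng.standard_normal((n_features, n_groups * group_size))     # overcomplete dictionary
groups = [list(range(g * group_size, (g + 1) * group_size)) for g in range(n_groups)]

# Synthetic training data: each input activates exactly one dictionary group
labels = rng.integers(0, n_groups, size=500)
codes = np.zeros((500, n_groups * group_size))
for i, g in enumerate(labels):
    codes[i, groups[g]] = rng.standard_normal(group_size)
X = codes @ D.T

# Step 1: an MLP learns to predict the active group (fast feedforward pass)
mlp = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500).fit(X, labels)

def fast_estimate(x):
    """Predict the active group, then solve its coefficients by pseudoinverse
    (an undercomplete least-squares solve restricted to the predicted group)."""
    g = int(mlp.predict(x[None, :])[0])
    D_g = D[:, groups[g]]
    coeffs = np.linalg.pinv(D_g) @ x
    full = np.zeros(D.shape[1])
    full[groups[g]] = coeffs
    return full

x_test = X[0]
print("reconstruction error:", np.linalg.norm(D @ fast_estimate(x_test) - x_test))
```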

13:52
Better Cell Assemblies

ABSTRACT. The paper is an abstract

14:14
An approach for the binding problem based on brain-oriented autonomous adaptive system with object handling functions

ABSTRACT. An approach to the binding problem is proposed, based on an autonomous adaptive system designed using artificial neural networks with object handling functions. Object handling functionality, such as object files, has been reported to have a relationship with perception and working memory. However, in order for a brain-oriented system to decide actions based on object handling, the system must address the “binding problem”, that is, the problem of processing different attributes such as shape, color and location in parallel and then binding these multiple attributes into a single object. The proposed system decides semi-optimal actions by combining nonlinear programming and reinforcement learning. By introducing artificial neural networks based on the dendritic structures of pyramidal neurons in the cerebral cortex, together with a mechanism for dynamically linking nodes to objects, it is shown that deciding actions and learning as a whole system, based on binding object attributes and location, is possible. The proposed features are verified through computer simulation results.

14:36
Biologically Inspired Perception for Robotics in Hostile Environments

ABSTRACT. Navigation and localization in extreme or hostile environments such as the deep ocean, disaster scenes and underground environments, where darkness, pollution, and dust render cameras, laser scanners, and other sensors ineffective, is a challenging task for robotics. Nature, however, has equipped cave-dwelling creatures with echo-acoustic perception that allows them to thrive in such environments. Visually impaired people have been successful in using ultrasonic (echo-acoustic) mobility aids for day-to-day safe navigation. This work offers some insights into ultrasonic sensing and improved techniques for the use of ultrasonic perception for robotics in hostile environments.