08:30 | Welcome SPEAKER: Vladimir V. Uzhva |
08:45 | Opening talk SPEAKER: Alexei V. Samsonovich |
09:15 | Functional systems network for learning in stochastic environment SPEAKER: Mikhail Burtsev ABSTRACT. Goal-directed, context-aware action selection is a staple of animal behavior. Adaptive behavior involves planning, execution, and monitoring of action sequences that allow robust, recurrent acquisition of evolutionarily important outcomes in a changing environment. Moreover, animals solve unexpected problems and reuse the acquired solutions in the future. Today's robots are clearly far behind animals in autonomous intelligence. Current state-of-the-art research on deliberate action in robotics is commonly concerned with high-level planning. To control a robot, these plans must be refined down to the level of elementary commands and executed. The problem is that a real environment is dynamic and demands continuous re-refinement of failed commands and instant re-planning. Effective action therefore requires monitoring and ongoing bottom-up feedback as well as feed-forward prediction at each level. A suitable way to close such a multilevel, goal-modulated perception-action loop remains an open issue in modern robotics. Artificial neural networks (ANNs) are successfully applied to many real-world problems, especially classification, with the advent of deep learning architectures. Today, applications of ANNs in robotics are generally confined to visual processing and locomotion control. This is unsurprising given the substantial progress in unsupervised and supervised learning compared to the modest advances in integrating neural networks with reinforcement learning (RL). Is it possible to use neural networks as a foundation for effective hierarchical architectures that embrace both goal-directed planning and subsequent controlled action execution? The distributed and parallel nature of ANNs, combined with the ease of building modular and hierarchical structures, is a strong argument for a positive answer. The most obvious avenue toward this goal is to extend existing deep and reinforcement learning into deep control algorithms. This lecture presents results of applying a bio-inspired functional systems network (FSN) architecture to the problem of goal-directed behavior in a stochastic environment. |
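A minimal sketch, in Python, of the kind of multilevel, goal-modulated perception-action loop the abstract describes: a high-level planner proposes subgoals, a low-level controller issues elementary commands, and a monitor reports failures upward to trigger re-planning. The toy environment and all names are hypothetical and not taken from the talk.

    import random

    # Toy stochastic environment: the agent moves along a line toward a goal,
    # but each elementary command can silently fail.
    class LineWorld:
        def __init__(self, goal=5, fail_prob=0.3):
            self.pos, self.goal, self.fail_prob = 0, goal, fail_prob

        def step(self, command):
            # command is +1 or -1; with probability fail_prob it has no effect
            if random.random() > self.fail_prob:
                self.pos += command
            return self.pos

    def plan(pos, goal):
        # High-level plan: a sequence of subgoals from the current position to the goal.
        step = 1 if goal > pos else -1
        return list(range(pos + step, goal + step, step))

    def execute(env, subgoal, max_tries=10):
        # Low-level control loop: issue elementary commands and monitor the outcome,
        # retrying until the subgoal is reached or the budget is exhausted.
        for _ in range(max_tries):
            command = 1 if subgoal > env.pos else -1
            if env.step(command) == subgoal:
                return True   # bottom-up feedback: subgoal achieved
        return False          # failure reported upward

    env = LineWorld()
    while env.pos != env.goal:
        for subgoal in plan(env.pos, env.goal):
            if not execute(env, subgoal):
                break  # monitoring detected a failure: re-plan from the current state
    print("Goal reached at position", env.pos)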
10:30 | The Distributed Adaptive Control of Consciousness in Animals and Machines SPEAKER: Paul Verschure ABSTRACT. tbc |
11:20 | An Architecture of Narrative Memory SPEAKER: Carlos León ABSTRACT. Narrative is ubiquitous. According to some models, this is because narrative is not only a successful means of communication but also a specific way of structuring knowledge. While most cognitive architectures acknowledge the importance of narrative, they usually do so from a functional point of view and not as a fundamental way of storing material in memory. The presented approach takes one step further toward the inclusion of narrative-aware structures in general cognitive architectures. In particular, the presented architecture studies how episodic memory and procedures in semantic memory can be redefined in terms of narrative structures. A formal definition of narrative for cognition and of its constituents is presented, and the functions that an implementation of the architecture needs are described. The relative merits and the potential benefits with regard to general cognitive architectures are discussed and exemplified. |
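A hedged illustration, assuming hypothetical class and field names, of what storing episodic memory "in terms of narrative structures" could look like: events are kept as narrative constituents with explicit causal links rather than as a flat log.

    from dataclasses import dataclass, field

    @dataclass
    class Event:
        # One narrative constituent: who did what, and which earlier events caused it.
        agent: str
        action: str
        causes: list = field(default_factory=list)  # indices of causing events

    @dataclass
    class NarrativeEpisode:
        # An episodic-memory entry structured as a small causal narrative.
        events: list = field(default_factory=list)

        def add(self, agent, action, causes=()):
            self.events.append(Event(agent, action, list(causes)))
            return len(self.events) - 1

        def retell(self):
            # Traverse the events in order, making the causal links explicit.
            for i, e in enumerate(self.events):
                because = (" because of event(s) " + ", ".join(map(str, e.causes))) if e.causes else ""
                print(f"{i}: {e.agent} {e.action}{because}")

    # Example episode that procedures in semantic memory could later generalize.
    ep = NarrativeEpisode()
    a = ep.add("robot", "detects an obstacle")
    b = ep.add("robot", "stops", causes=[a])
    ep.add("robot", "plans a detour", causes=[a, b])
    ep.retell()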
13:00 | Deep learning in a Brain SPEAKER: Sergey Shumsky ABSTRACT. What makes our mind deep? Recent advances in “deep learning” shed some light on the cognitive architecture of our brain. Namely, unsupervised learning in our large neocortex resembles learning in Deep Belief Networks, while the Long Short-Term Memory model describes the deep reinforcement learning provided by the basal ganglia and the dopamine system. |
13:45 | Implementing Trace-Based Reasoning in a cognitive architecture with the aim of achieving developmental learning SPEAKER: Olivier Georgeon ABSTRACT. I will present what Trace-Based Reasoning (TBR, e.g., Cordier, Lefevre, Champin, Georgeon, & Mille 2013), a new Knowledge Engineering technique, can bring to research on Biologically Inspired Cognitive Architectures. TBR is a kind of Case-Based Reasoning (e.g., Aamodt & Plaza 1994) applied to learning from initially unsegmented and possibly uninterpreted sequences of interaction events. In particular, TBR techniques have proved well suited to designing a cognitive architecture that avoids common assumptions; namely, that the environment is stationary, deterministic, or discrete, or that input data is Markovian or representative of a predefined model of the environment (Georgeon, Marshall, & Manzotti 2013). A TBR system incrementally discovers, records, hierarchically abstracts, and reuses interesting episodes of interaction at different levels of abstraction. These progressively learned episodes of interaction work as small programs that can subsequently be re-executed. As a result, TBR provides a way to implement agents that can self-program in a bottom-up fashion while being driven by self-motivational principles. Self-programming leads to constitutive autonomy, which theoreticians of cognition consider a crucial feature of cognitive systems (e.g., Froese & Ziemke 2009). With this technique, we seek to implement developmental learning by “sedimentation of habitudes”, as some philosophers of mind have stated and explained since the Enlightenment (e.g., David Hume). |
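A rough sketch, with hypothetical names and a deliberately simplified abstraction rule, of the TBR idea of recording a trace of interactions, hierarchically abstracting recurring pairs into composite episodes, and re-enacting them as small programs.

    from collections import Counter

    # A trace of primitive interactions (experiment.result), e.g. from a sensorimotor loop.
    trace = ["move.ok", "bump.fail", "turn.ok", "move.ok", "bump.fail", "turn.ok", "move.ok"]

    def abstract_episodes(trace, min_count=2):
        # Promote recurring pairs of consecutive interactions to higher-level
        # composite episodes (one level of hierarchical abstraction).
        pairs = Counter(zip(trace, trace[1:]))
        return {pair: n for pair, n in pairs.items() if n >= min_count}

    def reenact(episode):
        # A learned composite episode works as a small program: re-execute its parts in order.
        for step in episode:
            print("  executing", step)

    learned = abstract_episodes(trace)
    for episode, count in learned.items():
        print(f"learned composite episode {episode} (seen {count}x); re-enacting:")
        reenact(episode)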
P. Verschure, A. Chella
15:30 | Cognitome: The Biological Cognitive Architecture SPEAKER: Konstantin Anokhin |
16:30 | Interaction between Sign-based model of the World and AI Methods in behavior modeling SPEAKER: Gennady Osipov |