08:30 | Biological and Brain Foundations of Reservoir Computing SPEAKER: Peter Ford Dominey ABSTRACT. This talk will set out the organization of the workshop. We will first introduce the basic principles of reservoir computing and the underlying neuroscience motivation. We will then introduce the topics of the presenting speakers, and the fundamental questions that we want to answer, related to cortical dynamics and computation in the context of recurrent networks. |
09:00 | A cognitive neural model of executive functions in natural language processing SPEAKER: Bruno Golosio ABSTRACT. Although extensive research has been devoted to cognitive models of human language, the role of executive functions in language processing has been little explored. In this work we present a neural-network-based cognitive architecture which models the development of the procedural knowledge that underpins language processing. The large-scale organization of the architecture is based on a multi-component working memory model, with a central executive that controls the flow of information among the slave systems through neural gating mechanisms. The system was validated, starting from a tabula rasa condition, on a corpus of five datasets, each devoted to a thematic group, based on the literature on early language assessment at the level of a preschool child. The results show that the system is capable of learning different word classes and of using them in expressive language, through an open-ended incremental learning process, expressing a broad range of language processing functionalities. The model described in this work, called ANNABELL (Artificial Neural Network with Adaptive Behavior Exploited for Language Learning), is a cognitive neural architecture that was designed to help understand the cognitive processes involved in early language development. A detailed description of the model and of the database used for its validation is provided at http://arxiv.org/abs/1506.03229 (submitted to PLOS ONE). The source code of the software, the User Guide and the datasets used for its validation are available on the ANNABELL web site at https://github.com/golosio/annabell/wiki . The global organization of the architecture is based on a multi-component working memory model. The model comprises four main components: a central executive (CE), a verbal short-term memory (STM), a verbal long-term memory (LTM) and a reward structure. 
The CE controls all decision-dependent processes. It includes a state-action association system, a set of action neurons and a set of gatekeeper neurons. The state-action association system is a neural network that is trained by a rewarding procedure to associate mental actions to the internal states of the system. The STM includes a phonological store, a focus of attention, a goal stack and a comparison structure. The LTM includes a structure for memorizing the working phrases, and a retrieval structure that uses the focus of attention as a cue for retrieving memorized phrases. The reward structure is a system that memorizes the sequences of the internal states and of the mental actions performed by the system (state-action sequences) during the exploration phases. When the exploration produces a target output, the reward structure retrieves the state-action sequence and rewards the association between each internal state and the corresponding mental action by triggering synaptic changes of the state-action association connections. At the lowest level, the system is composed entirely of interconnected artificial neurons. The learnable connections among neurons are updated by a discrete version of the Hebbian learning rule. The inhibitory competition among groups of neurons is modeled by the k-winner-take-all rule. The validation shows that, compared to previous cognitive neural models of language, the model is able to develop a broad range of functionalities, starting from a tabula rasa condition. These results support the hypothesis that executive functions play a fundamental role in the elaboration of verbal information. Our work emphasizes that the decision processes operated by the central executive are statistical decision processes, which are learned by exploration-reward mechanisms rather than being based on pre-coded rules. |
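The abstract's lowest-level mechanisms, a discrete Hebbian update and k-winner-take-all inhibitory competition, can be sketched in a few lines. This is a minimal illustration under assumed names and parameters, not ANNABELL's actual code:

```python
import numpy as np

def hebbian_update(W, pre, post, lr=0.1):
    """Discrete Hebbian rule: strengthen connections between co-active units."""
    return W + lr * np.outer(post, pre)

def k_winner_take_all(activations, k):
    """Keep the k most active neurons; silence the rest (inhibitory competition)."""
    out = np.zeros_like(activations)
    winners = np.argsort(activations)[-k:]
    out[winners] = activations[winners]
    return out

# toy example: 4 input units, 3 output units
rng = np.random.default_rng(0)
W = np.zeros((3, 4))
pre = np.array([1.0, 0.0, 1.0, 0.0])
post = k_winner_take_all(rng.random(3), k=1)  # only one winner stays active
W = hebbian_update(W, pre, post)
```

Here only the winning output neuron participates in the Hebbian update, so connections are strengthened solely between co-active pre- and post-synaptic units.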
09:30 | Partially embodied motor control: towards a natural collaboration between body and brain. SPEAKER: Joni Dambre ABSTRACT. Motor control systems in the brains of humans and other mammals are hierarchically organised, with each level controlling increasingly complex motor actions. Each level is controlled by the higher levels and also receives sensory and/or proprioceptive feedback. Through learning, this hierarchical structure adapts to its body, its sensors and the way these interact with the environment. An even more integrated view is taken in morphological or embodied computation. On the one hand, there is both biological and mechanical (robotics) evidence that a properly chosen body morphology can drastically facilitate control when the body dynamics naturally generate low-level motion primitives. On the other hand, several papers have used physical bodies as reservoirs in a reservoir computing setup. In some cases, reservoir computing was used as an easy way to obtain robust linear feedback controllers for locomotion. In other cases, the body dynamics of soft robots were shown to perform general computations in response to some input stimulation. In general, very specific, highly compliant bodies were used. We present recent results on two open questions regarding the way morphological computation could be exploited in biological motor control. Generally, when reservoir computing has been used to exploit body dynamics for computation, the desired output signals were known. Clearly, in biological locomotion, learning does not enforce specific muscle actuation signals. Instead, it rewards desirable forms of motion and penalizes undesirable ones. We show how a biologically plausible learning rule, reward-modulated Hebbian learning, can enable the incorporation of compliant body dynamics into the control hierarchy, resulting in robust motor control. 
Despite the many successes with using physical bodies as reservoirs, the relationship between compliance and computational power has hardly been investigated. Although biological bodies are partially compliant, they also have a very specific structure and many rigid parts. It therefore remains unclear to what extent this type of body can help in motor control. In our research, we use compliant four-legged robots to address this issue. We present first results indicating that for such robots, linear feedback of proprioceptive signals alone is often not sufficient for stable gait control. In addition, a first comparison of different levels of compliance indicates that a well-chosen level of compliance can drastically simplify motor control, compared to both too little and too much compliance, and that the body should therefore be considered an integral part of the control. |
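A reward-modulated Hebbian rule of the general kind mentioned above gates the Hebbian correlation term by a scalar reward signal, so no target actuation signal is ever needed. The following sketch shows one common form of such a rule; the names and the exact form are assumptions for illustration, not the authors' implementation:

```python
import numpy as np

def reward_modulated_hebbian(w, pre, post, post_avg, reward, reward_avg, lr=0.01):
    """Hebbian correlation gated by reward: an above-baseline reward
    reinforces the recent deviation of the output from its average
    (the deviation is assumed to come from exploration noise)."""
    modulation = reward - reward_avg      # did we do better than expected?
    exploration = post - post_avg         # output deviation caused by noise
    return w + lr * modulation * np.outer(exploration, pre)

# toy step: a higher-than-average reward reinforces the explored deviation
w = np.zeros((2, 3))
pre = np.array([1.0, 0.5, 0.0])
post, post_avg = np.array([0.8, 0.2]), np.array([0.5, 0.5])
w = reward_modulated_hebbian(w, pre, post, post_avg, reward=1.0, reward_avg=0.2)
```

Because only a scalar success measure modulates the update, the same rule applies whether the "reservoir" providing `pre` is a neural network or the sensed state of a compliant body.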
08:30 | Human Cognition in Preparation for Problem Solving SPEAKER: Alexei V. Samsonovich ABSTRACT. College students were asked to solve problems in mathematics using a software tool that assists their preparation for problem solving at a metacognitive level. Students selected relevant steps, facts and strategies represented on the screen and connected them by arrows, indicating their plan of solution. Only after the diagram was completed were students allowed to solve the problem. The findings are: (i) forward chaining is significantly more predominant, and backward chaining significantly less frequent, than other possibilities of arrow entry. This result is unexpected, because classical planning methods produce backward chaining in this task. (ii) Students scoring in the middle are more likely to enter convergent pairs of arrows than students who scored low or high. This finding makes it possible to diagnose student problem solving. Both findings imply constraints on the selection of cognitive architectures used for modeling student problem solving. |
09:00 | Towards Integrated Neural-Symbolic Systems for Human-Level AI: Two Research Programs Helping to Bridge the Gaps SPEAKER: Tarek Richard Besold ABSTRACT. After a Human-Level AI-oriented overview of the status quo in neural-symbolic integration, two research programs aiming at overcoming long-standing challenges in the field are suggested to the community: The first program aims at a better understanding of foundational differences and relationships on the level of computational complexity between symbolic and subsymbolic computation and representation, potentially providing explanations for the empirical differences between the paradigms in application scenarios and a foothold for subsequent attempts at overcoming these. The second program suggests a new approach and computational architecture for the cognitively-inspired anchoring of an agent's learning, knowledge formation, and higher reasoning abilities in real-world interactions through a closed neural-symbolic acting/sensing--processing--reasoning cycle, potentially providing new foundations for future agent architectures, multi-agent systems, robotics, and cognitive systems and facilitating a deeper understanding of the development and interaction in human-technological settings. |
09:30 | Modeling Sensorimotor Learning in LIDA Using a Dynamic Learning Rate SPEAKER: Stan Franklin ABSTRACT. We present a new model of sensorimotor learning in a systems-level cognitive model, LIDA. Sensorimotor learning helps an agent properly interact with its environment using past experiences. This new model stores and updates the rewards of pairs of data (motor commands and their contexts) using the concept of reinforcement learning; thus the agent is able to generate (output) effective commands in certain contexts based on its reward history. Following Global Workspace Theory, the primary basis of LIDA, the process of updating rewards in sensorimotor learning is cued by the agent’s conscious content, the most salient portion of the agent’s understanding of the current situation. Furthermore, we added a dynamic learning rate to control the extent to which a newly arriving reward may affect the reward update. This learning rate control mechanism is inspired by a hypothesis from neuroscience regarding memory of errors. Our experimental results show that sensorimotor learning using a dynamic learning rate improves performance in a simulated task of pushing a box. |
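The reward update with a dynamic learning rate might look like the following sketch, where the rate grows with the magnitude of recent prediction errors, loosely echoing the memory-of-errors hypothesis. The class name and adaptation scheme are illustrative assumptions, not LIDA's actual mechanism:

```python
from collections import deque

class SensorimotorReward:
    """Running reward estimate for one (context, command) pair, with a
    learning rate that grows when recent prediction errors are large
    and consistent, and relaxes toward a base rate when they are small."""
    def __init__(self, base_rate=0.1, window=5):
        self.value = 0.0
        self.base_rate = base_rate
        self.rate = base_rate
        self.errors = deque(maxlen=window)   # short memory of errors

    def update(self, observed_reward):
        error = observed_reward - self.value
        self.errors.append(error)
        mean_err = sum(self.errors) / len(self.errors)
        # larger consistent errors -> larger learning rate (capped at 1)
        self.rate = min(1.0, self.base_rate * (1.0 + abs(mean_err)))
        self.value += self.rate * error
        return self.value

est = SensorimotorReward()
for _ in range(50):
    est.update(1.0)   # repeated reward of 1.0: estimate converges toward 1.0
```

When the environment is stable the errors shrink and the rate settles back near the base rate, so a single outlier reward no longer overwrites a well-learned estimate.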
08:30 | Motor memory: representation, learning and consolidation SPEAKER: Jure Žabkar ABSTRACT. An efficient representation of the motor system is vital to robot control and a robot's ability to learn new skills. While increasing sensor accuracy and signal-processing speed have failed to bridge the gap between the performance of artificial and human sensorimotor systems, motor memory architecture seems to remain neglected. Despite advances in robot skill learning, the latter remains limited to predefined tasks and pre-specified embodiment. We propose a new motor memory architecture that enables information sharing between different skills, on-line learning and off-line memory consolidation. We develop an algorithm for learning and consolidation of motor memory and study the space complexity of the representation in experiments with the humanoid robot Nao. Finally, we propose the integration of motor memory with sensor data into a common sensorimotor memory. |
08:50 | Designing, Implementing and Enforcing a Coherent System of Laws, Ethics and Morals for Intelligent Machines (including Humans) SPEAKER: Mark Waser ABSTRACT. Recent months have seen dire warnings from Stephen Hawking, Elon Musk and others regarding the dangers that highly intelligent machines could pose to humanity. Fortunately, even the most pessimistic agree that the majority of the danger is likely averted if AI is “provably aligned” with human values. Problematic, however, are proposals for pure research projects that are unlikely to be completed before their own predicted dates for the appearance of super-intelligence [1]. Instead, with knowledge already possessed, we propose engineering a reasonably tractable and enforceable system of ethics compatible with current human ethical sensibilities, without unnecessary intractable claims, requirements and research projects. |
09:10 | A First Look at the Visual Attention Executive for STAR: The Selective Tuning Attentive Reference Model SPEAKER: John Tsotsos ABSTRACT. After many years of development and significant supporting experimental evidence, the Selective Tuning (ST) model of visual attention (Tsotsos 2011) is now in its next phase of development. The goal was always for this model to be embedded into a larger-scale architecture with predictive power for furthering our understanding of human visual processes. For this larger-scale system, it was readily apparent that many of the classical components of cognitive architectures play an important role. However, the level of detail required by ST's breadth and depth of attentional functionality is greater than that usually considered. This presentation will overview ST and its supporting evidence, detail the kinds of control signals, parameter settings and other forms of interaction its operation requires from its embedding architecture, and introduce a design for its executive controller. The STAR architecture that provides the embedding substrate for ST will also be briefly described (Tsotsos & Kruijne 2014). Tsotsos, J.K. (2011). A Computational Perspective on Visual Attention. MIT Press, Cambridge MA. Tsotsos, J.K. and Kruijne, W. (2014). Cognitive programs: software for attention's executive. Frontiers in Psychology: Cognition 5:1260. doi: 10.3389/fpsyg.2014.01260 |
10:30 | Interregional and interlevel connections for active perception SPEAKER: Paul Robertson ABSTRACT. Perception is performed in biological systems in order to support action that takes place in the context of goals. The origins of these contexts and goals are themselves the result of other closed-loop systems running on different timescales, involving different sensing capabilities and using brain structures of differing evolutionary eras, but all integrated to a greater or lesser extent. In this paper we describe an architectural approach, and its motivation, for an artificial system that is inspired by biological counterparts. This paper describes research conducted on closed-loop computer vision. |
11:00 | Rapid path planning in maze–like environments using attractor networks SPEAKER: Dane Corneil ABSTRACT. Animals navigating in a well–known environment can rapidly learn and revisit observed reward locations, often after a single trial. The mechanism for rapid path planning remains unknown, though evidence suggests that the CA3 region in the hippocampus is important, with a potential role for “preplay” of navigation–related activity. Here, we consider a neural attractor network model of the CA3 region, and show how this model can be used to represent spatial locations in realistic environments with walls and obstacles. The synaptic weights in the network model are optimized for stable bump formation, so that neurons tend to excite other neurons with nearby place field centers and inhibit neurons with distant place field centers. Using these simple assumptions, we initialize the activity in the network to represent an initial location in the environment, and weakly stimulate the network with a bump at an arbitrary goal location. We find that, in networks representing large place fields, the network properties cause the bump to move smoothly from its initial location to the goal location along the shortest path, around obstacles or walls. Reward–modulated Hebbian plasticity during the first visit to a goal location enables a later activation of the goal location with a broad, unspecific external stimulus, representing input to CA3. These results illustrate that an attractor network that produces stable spatial memories, when augmented to represent large scale spatial relationships, can be parsimoniously extended to rapid path planning. |
11:30 | Mirroring Autobiographical Memory by Cognitive Architecture SPEAKER: Junya Morita ABSTRACT. Assuming that photographs accumulated on a personal computer reflect the life history of a person, a model of that person's autobiographical memory could be constructed. Such a model would be useful to overcome memory problems caused by factors such as aging. On the basis of this idea, we constructed an image recommender system comprising an ACT-R model. We built the model using a private photo library, consisting of 3,202 photos, and ran a simulation manipulating the activation noise of the declarative chunks. The noise was found to strongly influence memory retrieval. When the noise level was low, the model retrieved a few memory items that occurred recently. On the other hand, when the noise level was high, the retrieval process resembled a random walk over a memory network, with repeated recalls of old photos. The results suggest that the noise condition of an ACT-R model can facilitate mental time travel into the distant past. |
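The role of activation noise in retrieval can be illustrated with ACT-R's base-level learning equation, where a chunk's activation is the log of summed, decaying traces of its past uses plus logistic noise. This is a toy sketch with assumed photo names and timestamps, not the actual model:

```python
import math
import random

def base_level_activation(use_times, now, decay=0.5):
    """ACT-R base-level learning: recent, frequent uses give high activation."""
    return math.log(sum((now - t) ** -decay for t in use_times))

def retrieve(chunks, now, noise_sd=0.0, rng=random.Random(0)):
    """Return the chunk with the highest activation plus logistic noise."""
    def noisy_activation(use_times):
        a = base_level_activation(use_times, now)
        if noise_sd > 0:
            u = rng.random()
            a += noise_sd * math.log(u / (1 - u))   # logistic noise sample
        return a
    return max(chunks, key=lambda name: noisy_activation(chunks[name]))

# two "photo" chunks with their past viewing times
photos = {"recent_trip": [95.0, 98.0], "old_wedding": [10.0]}
best = retrieve(photos, now=100.0)   # zero noise: recency dominates
```

With zero noise the recent, frequently viewed photo always wins; raising `noise_sd` increasingly lets weakly activated old photos be retrieved, producing the random-walk-like recall of the distant past described in the abstract.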
10:30 | Reservoir Computing Properties of Neural Dynamics in Prefrontal Cortex SPEAKER: Peter Dominey ABSTRACT. Primates adapt flexibly to novel situations. A key to adaptation is the capacity to represent these situations. It has been proposed that mixed selectivity may universally represent any situation defined by a combination of the current stimuli, and that mixed selectivity is readily obtained in randomly connected networks. In the reservoir computing framework, networks are both random and recurrent, thus allowing them to recombine present and past stimuli that are reverberated thanks to recurrent connectivity. We argue that reservoir computing is a suitable framework to model the generation of complex and dynamic representations locally in the cortex, whose common property is its highly recurrent connectivity. Training a reservoir to perform a complex cognitive task, we demonstrate its rich representational power and compare it to monkey data. |
11:00 | Structuring Autobiographical Experience for a Narrative Companion Robot SPEAKER: Grégoire Pointeau ABSTRACT. To free today’s robots from their classical history-log representation of the past, the robot of tomorrow should be able to represent and express its life-story in a more human-like narrative format. To do so, we present here a cognitive system for a humanoid robot, based on the structure of human memory (with a division between episodic and semantic memory), coupled to a language system based on reservoir computing, inspired by the neuronal systems of the cortex and basal ganglia. The novelty of the present study is the linking of a bio-inspired memory module that encodes experience over the robot’s lifetime, a reasoning system that creates knowledge based on this memory content, and a language processing module that provides a natural language interface to this human-like memory. We can consider the resulting system in terms of Neisser’s Narrative or Temporally Extended Self. |
10:30 | The Distributed Adaptive Control of Consciousness in Animals and Machines SPEAKER: Paul Verschure ABSTRACT. The brain evolved to maintain a dynamic equilibrium between an organism and its environment. We can define the fundamental questions that such a brain has to solve in order to deal with the how of action in a physical world as: why (motivation), what (objects), where (space), when (time). I call this the H4W problem. After the Cambrian explosion a second factor became of great importance for survival: who, and brains then adapted to the H5W challenge. I will present the hypothesis that consciousness evolved to enhance fitness in the face of H5W. The Distributed Adaptive Control (DAC) theory of mind and brain shows how H5W can be solved through interaction across multiple layers of neuronal organization, and assigns a specific role to consciousness in the optimization of the real-time control of action. DAC makes specific predictions on both the structure and function of the neuronal correlates of consciousness, which I will discuss with respect to memory, decision making and attentional processing. Each example will be illustrated by means of concrete robot experimentation. |
11:00 | The need for high level compilers for generating low level behaviors SPEAKER: Frank Ritter ABSTRACT. There is a need for high-level languages to help create low-level BICA behaviour. I'll present an example compiler for creating ACT-R models from hierarchical task analyses for a non-iterative, 30-minute task, for which we created models of 11 levels of expertise in an afternoon. The models start with about 600 rules each, and learn roughly another 600 rules over 100 trials. We compared these models to human data over four trials (N=30), and on both the aggregate and individual data the novice model fit best (or nearly best). This work shows that high-level compilers can help manage the complexity of large models. I'll then note some future work, including microgenetic analysis and modeling of learning curves on the individual subtasks, and also look at forgetting of these tasks after delays ranging from 6 to 18 days. |
11:30 | Using a Distributional Semantic Vector Space with a Knowledge Base for Reasoning in Uncertain Conditions SPEAKER: Douglas Summers-Stay ABSTRACT. The inherent inflexibility and incompleteness of commonsense knowledge bases (KB) has limited their usefulness. We describe a system called Displacer for performing KB queries extended with the analogical capabilities of the word2vec distributional semantic vector space (DSVS). This allows the system to answer queries with information which was not contained in the original KB in any form. By performing analogous queries on semantically related terms and mapping their answers back into the context of the original query using displacement vectors, we are able to give approximate answers to many questions which, if posed to the KB alone, would return no results. We also show how the hand-curated knowledge in a KB can be used to increase the accuracy of a DSVS in solving analogy problems. In these ways, a KB and a DSVS can make up for each other's weaknesses. |
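The displacement-vector idea can be illustrated with a toy vector space: subtract one word's vector from its analogue, add the resulting displacement to a new query word, and return the nearest vocabulary vector. The vectors below are constructed by hand so the analogy holds; a real system such as Displacer would use trained word2vec embeddings:

```python
import numpy as np

def nearest(vocab, query):
    """Return the word whose vector is most cosine-similar to the query."""
    def cos(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    return max(vocab, key=lambda w: cos(vocab[w], query))

# hand-built toy vectors: "king" = "man" + a royalty direction,
# "queen" = "woman" + the same direction, so the analogy holds exactly
vocab = {
    "man":   np.array([1.0, 0.0, 0.0]),
    "woman": np.array([0.0, 0.0, 1.0]),
    "king":  np.array([1.0, 1.0, 0.0]),
    "queen": np.array([0.0, 1.0, 1.0]),
}
displacement = vocab["king"] - vocab["man"]       # the "royalty" direction
answer = nearest(vocab, vocab["woman"] + displacement)
```

The same displacement, applied to KB terms semantically related to an unanswerable query, is what lets answers for the related terms be mapped back into the original query's context.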
14:00 | Does the cerebral cortex exploit the computational power of delay coupled recurrent networks? SPEAKER: Wolf Singer ABSTRACT. A hallmark of cortical architectures is the dense and specific reciprocal coupling among distributed feature-specific neurons. This network engages in high-dimensional non-linear dynamics characterized by oscillatory activity in widely differing frequency ranges and the transient synchronisation of neuronal discharges. Analysis of simultaneously recorded neuronal responses to sequences of light stimuli suggests that visual cortex shares features with liquid state machines, such as fading memory and superposition of information from different stimuli. A major difference is that the coupling connections among cortical neurons are susceptible to activity-dependent modifications of their synaptic gain, which allows the network to store priors about the statistical contingencies of the outer world. It is proposed that the cerebral cortex exploits the high-dimensional dynamic space offered by recurrent networks for the encoding, classification and storage of information. |
14:30 | Deciphering the brain's navigation system SPEAKER: Dori Derdikman ABSTRACT. Recently there have been major leaps in the scientific understanding of the brain's internal navigation system. Several related cell types have been discovered in the brain: place cells, grid cells, head-direction cells and border cells. These cells are believed to be part of a cognitive map responsible for the representation of the brain's internal sense of space. This brain system exemplifies one of the rare cases in which the internal algorithm of a mammalian neural network could be deciphered. While the phenomenology of these cells is now quite well understood, many questions remain: How are these cells connected into a network? How are they generated? How could they be read out? In this lecture I will describe these major questions and suggest some avenues connecting the theory of these cells with the growing body of experimental evidence about them. |
15:00 | How a naïve agent can construct the notion of space SPEAKER: unknown ABSTRACT. As noted by Poincaré, Helmholtz and Nicod, the only way our brains can know about the existence, dimensionality, and structure of physical space is by sampling the effects of our actions on our senses. In this talk we show how a simple algorithm based on coincidence detection can naturally extract the notion of space. It can do this without any a priori knowledge about how the brain is connected to the sensors or body, and for arbitrary sensors and effectors. Such a mechanism may be the method by which animals’ brains construct spatial notions during development, or it may have arisen over evolutionary time to allow animals to act in the world. The algorithm has applications for self-repairing robotics and sensor calibration in unknown, hostile environments. |
15:30 | No Direct Ontological Access – The Feature we Share SPEAKER: Peter Boltuc ABSTRACT. Thematic areas: Learning: how can a system that has no direct ontological access to reality construct knowledge about reality based on regularities of interaction? Fundamental academic, practical and theoretical questions in BICA research and technology. All cognitive architectures – humans, other animals and robots – have merely indirect ontological access to reality, but levels of such access differ. This problem is well known to philosophical epistemology; in AI it piggybacks on the symbol-grounding problem. Strictly reactive (stimulus-response) systems may be said to have the most direct ontological access; yet they are hardly worth the name of cognitive architectures. Indirectly reactive systems (such as muscular memory and other simple neural networks) have close indirect access to ontological reality – such access is mediated by simple memory and other structural features of the network. As more specific cognitive features emerge in more complex systems, the level of mediation becomes more and more indirect. Simple artificial intelligence systems usually operate at the level of syntactic structures (Searle). Biological and non-biological advanced cognitive architectures create ‘mind maps’ (Damasio), which should count as the basis of semantics. The specificities of any given syntactic and semantic structures provide further distance from direct ontological access. In human-level cognitive systems, various intermediary cognitive structures, such as scientific theories and paradigms, provide an additional filter. Importantly, creativity – understood as transformation of reality in accordance with some goals (Thaler) – provides further distance from ontological access to reality. 
Paradoxically, the more advanced a given cognitive function is, the sharper its epistemic grasp of certain features of reality, the more pronounced its ontological independence, and thereby its remoteness from the so-called ontological reality. Epistemic capacities, the capacities to grasp the structure of reality, require ontological remoteness mediated through the elements specific to the cognitive apparatus and its theoretical underpinnings. The above claims, however important, rely on a philosophically pre-critical view of reality. Critical philosophers, from Kant, through Hegel, to Marx, view reality as always already mediated through the epistemic, historical and social interactions between the subject and the world. Hence, the notion of direct access to ontological reality is only the limit [mathematical limes] of an idealization, never actually reachable. In this context we notice that the gap erected by Searle, and most dualists, between the direct access possessed by humans and the supposedly merely syntactic access available to computing machines is an ill-grounded hypothesis. This conclusion, if correct, supports the gist of BICA philosophy, which I understand to be the idea that there is hardly anything in human and animal cognitive architectures that cannot be instantiated (not merely replicated, whatever that difference is supposed to mean) in a sufficiently advanced biologically inspired cognitive architecture. I end my presentation with the idea of first-person consciousness as hardware. I build on my BICA 2014 presentation, where I defined non-reductive consciousness. Now I develop my recent argument that the stream of consciousness is more like a stream of light, a film-tape or the canvas of a painting than like the content put on that canvas or tape. The first-person stream of awareness, or so-called non-reductive consciousness, is in fact a biological product of CNR. 
However, once we learn how it is generated in animal brains we should be able to engineer it in other cognitive systems. |
16:00 | Modeling Biological Agents Beyond the Reinforcement Learning Paradigm SPEAKER: Olivier Georgeon ABSTRACT. It is widely acknowledged that biological agents are not Markov: they do not receive an effective representation of their environment’s state as input data. We claim that they cannot recognize rewarding Markov states of their environment either. Therefore, we model their behavior as if they were trying to perform rewarding interactions with their environment (interaction-driven tasks), rather than as if they were trying to reach rewarding states of their environment (state-driven tasks). We review two interaction-driven tasks, the AB and AABB tasks, and implement a non-Markov Reinforcement Learning (RL) algorithm based upon historical sequences and Q-learning. Results show that this RL algorithm takes significantly longer than a constructivist algorithm implemented previously by Georgeon, Ritter, & Haynes (2009). This is because the constructivist algorithm directly learns and repeats hierarchical sequences of interactions, while the RL algorithm spends time learning Q-values. Along with theoretical arguments, these results support the constructivist paradigm for modeling biological agents. |
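A minimal version of Q-learning over historical sequences can be written for an alternation task loosely inspired by the AB task, where the "state" is simply the previously enacted interaction rather than an observable environment state. The rewards and parameters here are illustrative assumptions, not the paper's setup:

```python
# Non-Markov Q-learning sketch: the agent never sees an environment state;
# it conditions on its own interaction history (here, just the last one).
# In this toy task an interaction is rewarded only if it differs from
# the previous one, so the optimal behavior is to alternate A and B.
actions = ["A", "B"]
Q = {(prev, a): 0.0 for prev in actions for a in actions}
alpha, gamma = 0.5, 0.9

def reward(prev, a):
    return 1.0 if a != prev else 0.0

for _ in range(200):                       # sweep all transitions to convergence
    for prev in actions:
        for a in actions:
            target = reward(prev, a) + gamma * max(Q[(a, b)] for b in actions)
            Q[(prev, a)] += alpha * (target - Q[(prev, a)])

policy = {prev: max(actions, key=lambda a: Q[(prev, a)]) for prev in actions}
```

After convergence the greedy policy alternates interactions, which is the rewarded pattern; richer tasks like AABB need longer histories as states, and the growth of that history space is one reason this approach learns more slowly than the constructivist algorithm.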
16:20 | Constructing Phenomenal Knowledge in an Unknown Noumenal Reality SPEAKER: Florian Bernard ABSTRACT. In 1781, Immanuel Kant argued that cognitive agents cannot know the underlying structure of their world "as such" (the noumenal reality), and can only know phenomenal reality (the world "as it appears" through their experience). We introduce design principles to implement these theoretical ideas. Our agent's input data is not a direct function of the environment's state, as it is in most symbolic or reinforcement-learning models. The agent is designed to discover and learn regularities in its stream of experience and to construct knowledge about phenomena whose hypothetical presence in the environment explains these regularities. We report a proof-of-concept experiment in which the agent constructs categories of phenomena and exploits this knowledge to satisfy innate preferences. This work suggests a new approach to cognitive modeling that focuses on the agent's internal stream of experience. We argue that this approach complies with theories of embodied cognition and enaction. |
16:40 | Managing the observation of agents' activity as an interpretation process: a Modeled Traces approach SPEAKER: Alain Mille ABSTRACT. One way to assess a cognitive architecture is to implement it in an agent and observe the level of intelligence exhibited by this agent in various activities. Observing the activities of agents, however, is a complex task: what elements of the activity can we observe? How do we interpret the observed activity? How do we account for time and space? How do we describe and report the observation? How do we demonstrate the validity of the observation? How do we manage datasets of observations? For several years, we have been developing an original approach to make the process of observing an activity explicit. This work led us to develop a theory of "Modeled Traces". A Modeled Trace is a trace of activity formally encoded in a knowledge-based system. The theory of Modeled Traces allows us to design software tools that facilitate the process of observation and, perhaps above all, to consider an observation as an interpretation of the observed activity according to a specific expertise of observation (as opposed to considering the observation an "objective" fact). This talk is an opportunity to show the principles, the theory, the models, and the tools that we have been developing. We explain how modeled-trace systems can help design and assess biologically inspired cognitive architectures. |
17:10 | Autonomous object modeling based on affordances in a dynamic environment SPEAKER: Simon Gay ABSTRACT. We present an architecture that allows a self-motivated agent to generate behaviors in a dynamic environment according to its possibilities of interaction. Some interactions have predefined valences that specify inborn behavioral preferences. Over time, the agent learns to recognize affordances in its surrounding environment in the form of structures called signatures of interactions. The agent keeps track of enacted interactions in a spatial memory to generate a completed context in which it can use these signatures to recognize and localize distant possibilities of interaction, and generates behaviors that satisfy its motivational principles. |
16:00 | How could the enactive paradigm inspire computer science? SPEAKER: Pierre De Loor ABSTRACT. In this presentation, I will give a short overview of the origins of the enactive paradigm - the work of Francisco Varela and Humberto Maturana - and position it within the field of embodied cognition. I will then present several studies in neuroscience and psychology that are in line with this paradigm. The second part of my presentation will focus on the implications of this paradigm for research in computer science, along two orientations. The first concerns artificial intelligence and artificial life; in particular, the enactive field could provide directions for developmental approaches. The second concerns interactive systems: the enactive paradigm could help us design interactive systems that are better coupled with humans, favoring an enactive loop and thereby increasing the relevance of technological progress. I will illustrate these points with examples from my research group in Brest, France. |
16:30 | Origins and Evolution of Enactive Cognitive Science: Toward an Enactive Cognitive Architecture SPEAKER: Leonardo Lana De Carvalho ABSTRACT. This paper presents a historical perspective on the origin of the enactive approach to cognitive science, starting chronologically from cybernetics, with the aim of clarifying its main concepts, such as enaction, autopoiesis, structural coupling and natural drift, and showing their influence on computational approaches and models of cognitive architecture. Works of renowned authors, as well as those of their main commentators, are reviewed to trace the development of the enactive approach. We indicate that the enactive approach transcends its original context within biology, and later within connectionism, changing the understanding of the relationships established so far between body and environment, as well as the conceptual relationships between mind and body. Its influence on computational theories is of great importance, leading to new artificial intelligence systems as well as the proposition of complex, autopoietic and living machines. Finally, the article stresses the importance of the enactive approach in the design of agents, arguing that previous approaches rest on very different cognitive architectures and that a prototypical model of an enactive cognitive architecture is one of the greatest challenges today. |
17:00 | Evolving Conceptual Spaces for Symbol Grounding in Language Games SPEAKER: Ricardo Gudwin ABSTRACT. A standard approach in the simulation of language evolution is the use of language games to model communicative interactions between intelligent agents. Usually, in such language games, the assignment of meaning to symbols -- mapping them to parts of the reality comprising the agents' environment -- is simplified and given "a priori" to the experiment. In this paper, we develop an approach in which the decomposition of reality into meaningful experiences co-evolves with lexicon formation in the language games, offering insights into how meaning might be assigned to symbols in a dynamic, continuously changing environment experienced by an agent. To do so, we use Barsalou's notion of mental simulation and Gardenfors' notion of conceptual spaces, together with ESOM neural networks, to develop a cognitive architecture in which mental concept formation and lexicon formation co-evolve during a language game. The performance of our cognitive architecture is evaluated, and the results show that it fulfills its semantic function by allowing a population of agents to exchange the meaning of linguistic symbols during a naming game, without relying on an "a priori" categorization scheme provided by an external expert. Beyond providing evidence on how symbols may acquire meaning in a biologically realistic way, these results open a set of possibilities for further uses of conceptual spaces on a much more complex problem: the grounding of a grammatical language. |
17:30 | Multi-dimensional memory frames and action generation in the MHP/RT cognitive architecture SPEAKER: Muneo Kitajima ABSTRACT. We have developed a cognitive architecture, MHP/RT, that is capable of simulating people's daily action-selection processes. It consists of processes for generating behavior and of multi-dimensional memory frames, MD-frames, that store the results of behavior and are used to generate it. The behavior-generation processes include the autonomous perceptual system, associated with sensory neurons, and the autonomous motor system, associated with motor neurons. Between them are interneurons that process the input from the perceptual system through either the conscious decision-making process or the unconscious automatic action-selection process. Each process in behavior generation is associated with an MD-frame. As such, the behavior-generation processes and the MD-frames are intimately connected: the contents stored in the MD-frames accumulate incrementally as time goes by, and the stored entities are strongly influenced by the detailed experience each individual has at each moment. The types of information memorized are not the 4-dimensional values of objects but a set of differential features of objects associated with strong variations, phase-change points, or boundaries, together with the mutual relationships among them. Quantities such as time and distance, direct derivatives of the 4-dimensional values of the objects, are reconstructed from memory when the memory of the objects is needed, in such a way that they are consistent with the environmental conditions at that time. What people would do in a specified situation is dictated by the contents stored in the MD-frames, and how they would do it is determined by the behavior-generation processes of MHP/RT.
In this paper, we address such issues as how consciousness is formed and how language emerges by tracing MHP/RT's simulation of people's interaction processes with their environment. |
16:00 | Narrative Effects and Lessons for BICA SPEAKER: Mark Finlayson ABSTRACT. Narrative is a ubiquitous language phenomenon that engages cognitive capabilities at multiple levels. I outline a number of observed effects that narrative has on cognitive processing, including improvements in comprehension, memory, and logical reasoning. Furthermore, the ability to understand narrative is critical to social reasoning. I connect these capabilities to recent results from the computational study of narrative, and derive a number of suggestions for biologically inspired cognitive architectures related to potential task domains, measurements of validity, and predicted cross-interactions among cognitive architectural components. |
16:30 | Towards narratologically inspired cognitive architectures SPEAKER: Nicolas Szilas ABSTRACT. For several decades, the hypothesis that narrative is not only a prominent form of human communication but also a fundamental way to represent knowledge and to structure the mind has been proposed and discussed. Surprisingly, however, this has not yielded any NICA (narratologically inspired cognitive architecture), and the hypothesis remains a fuzzy one with limited implications. Perhaps this is because the few attempts to bridge the gap from narrative theory to cognitive architectures, namely scripts and cases in artificial intelligence (AI), considered only a small set of the facets of narrative. Indeed, when AI and cognitive science researchers tackled narrative in the 70s and 80s, they tended to reinvent narrative theories, ignoring the centuries of studies in the domain. In this contribution, we propose to study this hypothesis further by identifying differentiating features of narratives that contrast with classical problem-solving AI and that may inspire new cognitive architectures. Potential applications of NICAs include better-communicating machines, improved intelligent tutoring systems and robust knowledge bases. |
17:00 | Evolution-inspired Construction of Stories: Iterative Refinement of Narrative Drafts as a Social Cycle SPEAKER: Carlos León ABSTRACT. Narrative creation happens not only as an internal process in the writer's mind, but also as a social phenomenon in which several individuals influence each other by creating, telling and evaluating the stories told in the community. As such, stories evolve over time under the influence of many activities: inventing new parts or rejecting old ones, changing the discourse, telling the plot in a different way, and changing the way the story is understood and accepted, possibly through other changes in the society. We propose a formal computational model, grounded in the ICTIVS model, of the cognitive behavior of individuals inventing, telling and refining narrative structures. This new version of the model, Social-ICTIVS, adapts the previous one by considering each of its steps and re-defining them as social activities of narrative evolution. |