BICA 2017: 2017 ANNUAL INTERNATIONAL CONFERENCE ON BIOLOGICALLY INSPIRED COGNITIVE ARCHITECTURES
PROGRAM FOR THURSDAY, AUGUST 3RD

08:00-10:00 Session 11A: Fierces F9
Location: Vladimir
08:00
The Control System Based on Extended BCI for a Robotic Wheelchair

ABSTRACT. In most cases, wheelchair movement is controlled by the disabled person using a joystick or by an accompanying person. Severely disabled patients need alternative control methods that do not rely on the wheelchair joystick, because joystick use is undesirable or impossible for them. In this article we present a robotic wheelchair built on a powered wheelchair that is controlled not by the joystick but by an on-board computer that receives and processes data from an extended brain-computer interface (extended BCI). By this term we mean a robotic control system with simultaneous, independent, alternative control channels. In this version of the robotic wheelchair, the BCI works alongside voice and gesture control channels.
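The core idea above — several independent control channels feeding one on-board controller — can be sketched as a simple arbiter. This is a hypothetical illustration, not the authors' design: the channel names, the fixed-priority scheme, and the `arbitrate` function are all assumptions made for the sake of the example.

```python
# Hypothetical sketch of an "extended BCI" arbiter: gesture, voice and
# BCI channels each report a command (or None); the on-board computer
# picks one by fixed priority, with a fail-safe default.
PRIORITY = ["gesture", "voice", "bci"]   # highest priority first (assumed)

def arbitrate(readings):
    """readings: dict channel -> command or None; returns (channel, command)."""
    for channel in PRIORITY:
        cmd = readings.get(channel)
        if cmd is not None:
            return channel, cmd
    return None, "stop"                  # no channel active: stop safely

# Example: voice and BCI are both active; the voice channel wins.
choice = arbitrate({"voice": "turn_left", "bci": "forward"})
```

A real system would also need per-channel confidence estimates and time-outs, but the point here is only that the channels stay independent and interchangeable.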

08:30
A Robust Cognitive Architecture for Learning from Surprises

ABSTRACT. Learning from surprises is a cornerstone for building bio-inspired cognitive architectures that can autonomously learn from interactions with their environments. However, distinguishing true surprises -- from which useful information can be extracted to improve an agent's world model -- from environmental noise arising from uncertainty is a fundamental challenge. This paper proposes a new and robust approach for actively learning a predictive model of discrete, stochastic, partially-observable environments based on a concept called the Stochastic Distinguishing Experiment (SDE). SDEs are conditional probability distributions over the next observation given a variable-length sequence of ordered actions and expected observations up to the present that partition the space of possible agent histories, thus forming an approximate predictive representation of state. We derive this SDE-based learning algorithm and present theoretical proofs of its convergence and computational complexity. Theoretical and experimental results in small environments with important theoretical properties demonstrate the algorithm's ability to build an accurate predictive model from one continuous interaction with its environment without requiring any prior knowledge of the underlying state space, the number of SDEs to use or even a bound on SDE length.
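The SDE described above — a conditional distribution over the next observation given a variable-length suffix of actions and expected observations — can be sketched as a small data structure. This is an illustrative reading of the abstract, not the authors' implementation; the class name, the suffix matching, and the empirical-count predictor are assumptions.

```python
from collections import defaultdict

# Sketch of a Stochastic Distinguishing Experiment (SDE): keyed by an
# ordered suffix of (action, observation) pairs, it accumulates counts
# of the observation that followed, yielding an empirical conditional
# distribution over the next observation.
class SDE:
    def __init__(self, suffix):
        self.suffix = tuple(suffix)      # ordered (action, observation) pairs
        self.counts = defaultdict(int)   # next-observation counts

    def matches(self, history):
        """True if the agent's history ends with this SDE's suffix."""
        k = len(self.suffix)
        return tuple(history[-k:]) == self.suffix

    def update(self, next_obs):
        self.counts[next_obs] += 1

    def predict(self):
        """Empirical distribution over the next observation."""
        total = sum(self.counts.values())
        return {o: c / total for o, c in self.counts.items()}

# Toy usage: after acting "forward" and seeing "wall", the agent saw
# "wall" twice and "open" once on the following step.
sde = SDE([("forward", "wall")])
for obs in ["wall", "wall", "open"]:
    sde.update(obs)
```

A set of such SDEs whose suffixes partition the space of histories then acts as an approximate predictive representation of state, in the spirit of the abstract.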

09:15
Representing Social Intelligence: an Agent-Based Modeling Application

ABSTRACT. Intelligent systems are composed of autonomous components that interact with each other, with and through the environment, in order to provide intelligent support for reaching specific objectives. In such systems the environment is an active part of the system itself and provides input for runtime change and adaptation. Modeling and representing such systems is a hard task. In this paper we propose a biologically inspired approach that, combined with Agent-Based Modeling, creates a means for analyzing emergent needs of the system at runtime and converting them into useful intelligent services. The experiment proposed for validating and illustrating the approach concerns the construction of a smart university campus.

08:00-10:00 Session 11B: Demos
Location: Suzdal-Palekh

126: A.Kotov

126: A.Filatov

126: L.Zaidelman

165: D.Azarnov

165: A.Chubarov

019: E.Chepin
10:00-10:30 coffee, posters
10:30-12:30 Session 12A: Fierces F10
Location: Vladimir
10:30
Semantic Comprehension System for F-2 Emotional Robot
SPEAKER: Artemy Kotov

ABSTRACT. Within the F-2 personal robot project we are designing a system for automatic text comprehension (a parser). It enables the robot to choose “relevant” emotional reactions (output speech and gestures) to an incoming text – currently in Russian. The system executes morphological and syntactic analysis of the text and then constructs its semantic representation. This is a shallow representation in which a set of semantic markers (lexical semantics) is distributed over a set of semantic roles – the structure of the situation (fact). This representation may be used as (a) a fact description – to search for facts with a given structure – and (b) a basis for invoking emotional reactions (gestures, facial expressions and utterances) to be performed by the personal robot within a dialogue. We argue that the execution of a relevant emotional reaction can be considered a characteristic of text comprehension by computer systems.
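The shallow representation described above — semantic markers distributed over semantic roles of a situation — can be pictured with a small example. The role names, marker names, and the `matches` pattern search below are invented for illustration; they are not the F-2 system's actual vocabulary.

```python
# Hypothetical fact representation: a predicate plus semantic roles,
# each carrying a set of lexical-semantic markers.
fact = {
    "predicate": "hit",
    "roles": {
        "agent":   {"markers": {"human", "animate"}},
        "patient": {"markers": {"object", "fragile"}},
    },
}

def matches(fact, pattern):
    """Search use (a): does the fact's role structure carry all the
    markers required by the pattern?"""
    return all(
        required <= fact["roles"].get(role, {}).get("markers", set())
        for role, required in pattern.items()
    )

# Reaction use (b): a rule could fire an emotional reaction (gesture,
# facial expression, utterance) whenever a pattern matches, e.g. an
# alarmed expression when the patient role is marked "fragile".
alarmed = matches(fact, {"patient": {"fragile"}})
```

The same structure thus serves both as a searchable fact description and as a trigger condition for the robot's emotional reactions.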

11:00
Rethinking BICA’s R&D challenges: Grief revelations of an upset revisionist

ABSTRACT. Biologically Inspired Cognitive Architectures (BICA) is a subfield of Artificial Intelligence aimed at creating machines that emulate human cognitive abilities. What distinguishes BICA from other AI approaches is that it is based on principles drawn from biology and neuroscience. There is a widespread conviction that nature has a solution for almost all the problems we face today: we have only to pick up the solution and replicate it in our designs. However, Nature does not easily give up her secrets, especially when it comes to deciphering the human brain. For that reason, large brain research initiatives have been launched around the world. They will provide us with knowledge about brain activity in neuron assemblies and their interconnections. But what is being conveyed via those interconnections the research programmes do not disclose. It is implied that what flows in the interconnections is information. But what is information? That remains undefined. Given BICA's interest in these matters, the paper will try to clarify the issues.

11:30
Deep, largely unnoticed, gaps in current AI, and what Alan Turing might have done about them.
SPEAKER: Aaron Sloman

ABSTRACT. Gaps at present include the inability of current AI systems to make discoveries made by Euclid, Archimedes and other ancient mathematicians, including discoveries in geometry and topology, long before the development of modern algebra, formal logic, and proof theory. The information-processing abilities required seem to be closely related to the abilities of pre-verbal human toddlers to make proto-mathematical discoveries, including topological discoveries, and also forms of intelligence in nest-building birds, squirrels, elephants, orangutans and other species. Current AI vision systems cannot support the uses of vision in the discovery of deep mathematical features of geometry and topology, including discovery of impossibilities and necessary connections (related to but different from perception of positive and negative action-affordances). They also lack the meta-cognitive, reflective abilities required to organise, communicate and defend such discoveries if challenged -- precursors to mathematical proof. Current AI language *learning* mechanisms cannot match the language *creating* mechanisms used by young humans, demonstrated dramatically by deaf children in Nicaragua. (See https://www.youtube.com/watch?v=pjtioIFuNf8) Current AI models of emotion and motivation support only shallow mimicry of affective states: they lack the depth and variety of biological mechanisms involved in passionate interest in mathematics, long-term grief, deep patriotism, finding something hilariously funny, and many other short- and long-term states and processes relating to things cared about. The CogAff project attempts to address some of these issues. (http://www.cs.bham.ac.uk/research/projects/cogaff/) This is very different from, and much more difficult than, producing machines with shallow mimicry of human responses.
Moreover, the emphasis on "embodied cognition", "enactivism", and "situated cognition" focuses on real but shallow products of evolution, ignoring requirements for increasingly *disembodied* forms of cognition to meet increasingly complex and varied challenges in sophisticated organisms inhabiting complex, extended, multi-faceted terrain. The emphasis on embodiment also ignores requirements to apply meta-cognitive processes to oneself and to others, and abilities to invent, implement, test, debug, and modify novel and increasingly complex engineering solutions to practical problems. Designers of ancient pyramids could not plan a new creation by physically interacting with the materials, tools, labourers and temporary structures used during construction. It is likely that the vast majority of important evolutionary transitions in information processing, and the products in animal brains, have not yet been discovered, and that some of them cannot be detected by current scanning mechanisms (which don't reveal subneural processes, e.g. the chemistry of a synapse). Inspired by a conjecture about what Turing might have worked on if he had not died two years after publishing his paper on morphogenesis, the tutorial will present the conjectured roles of both the fundamental construction kit provided by physics and chemistry and multiple derived construction kits produced by evolution, often straddling different species. Without the later construction kits, current species could not exist. Some of the construction kits are required mainly for physical (physiological) structures. Others are required for construction of new forms of information processing. There may be important examples that human scientists have not yet (re-)discovered.

10:30-12:30 Session 12B: Demos
Location: Suzdal-Palekh

126: A.Kotov

126: A.Filatov

126: L.Zaidelman

165: D.Azarnov

165: A.Chubarov

019: E.Chepin
12:30-14:00 lunch (Baltschug), posters
14:00-15:00 Session 13A: Plenary BICA P1
Location: Vladimir
14:00
BICA 2017 Opening
SPEAKER: V.V. Uzhva
14:30
Pavlov Principle in Neuronic Systems, Updated

ABSTRACT. We use the label "neuronic systems" for networks of both real and model neurons. Recent successes of artificial neural systems in acquiring abilities previously considered human-only (complex pattern recognition, translation between languages, generation of figure captions, etc.) call for the formulation of general principles explaining the smartness potential of neuronic systems. Until the end of 2014 there was a general conviction that in artificial neural systems only error backpropagation (known to be physiologically implausible) could yield smart functions. The work of Timothy Lillicrap et al. [1] has effectively overturned that idea. Following these impressive results, we proposed in [2, 3] a general formula, the Pavlov Principle (PP), which summarizes contemporary knowledge of the abilities of neuronic systems. In this lecture we consider further justification of the PP and discuss its consequences for understanding brain mechanisms and for building new AI systems. The work is supported by RFBR grant № 16-07-01059 and by the National Technological Initiative Project "Artificial Neural Intelligence iPavlov".

References:
1. Lillicrap T.P., Cownden D., Tweed D.B., Akerman C.J. Random feedback weights support learning in deep neural networks. arXiv:1411.0247v1 [q-bio.NC], 2 Nov 2014, 27 p.
2. Solovyeva K.P., Shchukin T.N., Ivashchenko A.A., Dunin-Barkowski W.L. Basic Principles of Neural Processing. III All-Russian Conference with International Participation “Hippocampus and Memory: Norm and Pathology”, September 7-11, 2015, Pushchino, Russia, pp. 33-34.
3. Dunin-Barkowski W.L., Solovyeva K.P. Pavlov principle in problems of brain reverse engineering. XVIII International Conference Neuroinformatika 2016. Proceedings, Part 1. MEPHI, 2016, pp. 11-23.
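The result of Lillicrap et al. [1] referred to above is that learning does not require the backward pass to use the transpose of the forward weights: a fixed random feedback matrix suffices. A minimal sketch of this "feedback alignment" idea follows; the network sizes, learning rate and toy regression task are illustrative assumptions, not the lecture's material.

```python
import numpy as np

# Feedback alignment sketch: the hidden-layer error signal is computed
# with a fixed random matrix B instead of W2.T, yet the network still
# reduces its error on a simple regression task.
rng = np.random.default_rng(0)

n_in, n_hid, n_out = 4, 16, 1
W1 = rng.normal(0.0, 0.5, (n_hid, n_in))    # input -> hidden weights
W2 = rng.normal(0.0, 0.5, (n_out, n_hid))   # hidden -> output weights
B = rng.normal(0.0, 0.5, (n_hid, n_out))    # fixed random feedback weights

X = rng.normal(size=(200, n_in))
y = X[:, :1] + X[:, 1:2]                    # toy target: x0 + x1

def forward(X):
    h = np.tanh(X @ W1.T)                   # hidden activations
    return h, h @ W2.T                      # linear output layer

h, out = forward(X)
mse0 = float(np.mean((out - y) ** 2))       # error before training

lr = 0.05
for _ in range(500):
    h, out = forward(X)
    e = out - y                             # output error
    dh = (e @ B.T) * (1.0 - h ** 2)         # B replaces W2.T here
    W2 -= lr * (e.T @ h) / len(X)           # exact gradient for W2
    W1 -= lr * (dh.T @ X) / len(X)          # feedback-alignment update for W1

h, out = forward(X)
mse = float(np.mean((out - y) ** 2))        # error after training
```

Despite the backward weights never being learned, the forward weights gradually align with them, which is what makes the physiologically implausible weight-transport requirement of backpropagation unnecessary.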

14:00-16:00 Session 13B: Demos
Location: Suzdal-Palekh

126: A.Kotov

126: A.Filatov

126: L.Zaidelman

165: D.Azarnov

165: A.Chubarov

019: E.Chepin
16:00-16:30 coffee, posters
16:30-18:45 Session 15: Plenary BICA P2
Location: Vladimir
16:30
Context-driven Active-sensing for Repair Tasks

ABSTRACT. The CART project aims to enable robotic helpers for repair missions: it uses models of repair missions to guide active perception, supporting the understanding of the steps being taken by a human operator and offering assistance in a timely manner based on story understanding.

18:00
Tests, metrics and challenges for HLAI

ABSTRACT. This discussion panel will brainstorm key steps on the roadmap to HLAI in terms of metrics and challenges. Here HLAI stands for "human-level AI", or rather "human-like artificial intelligence". In particular, we will focus on the ultimate goal: this topic appears to be largely dismissed in recent AI and AGI research. The BICA Challenge put forward by this community is just not specific enough: one cannot tell precisely whether the goal has been reached. In contrast, the best-known AI challenge, the unlimited Turing test, is specific and remains the ultimate goal in the field, even though it has not worked efficiently as a driver of research and has been harshly criticized numerous times. Most importantly, "passing" it amounts to a negative result: the null hypothesis cannot be rejected. This does not sound like proof of a breakthrough at all. Nor does it guarantee that the outcome will remain negative in a bigger sample. Therefore, it seems we need a better ultimate goal, formulated as a positive outcome. It turns out that the Turing Test paradigm can be slightly modified to satisfy this demand. The result can be called "The Overman Challenge", because the idea is to create an artifact that is more human than the average human, and eventually an artifact that is more human than any human. Is this possible at all? How can this outcome be validated? These and other questions will be discussed.