FIERCES ON BICA 2016: FIRST INTERNATIONAL EARLY RESEARCH CAREER ENHANCEMENT SCHOOL ON BIOLOGICALLY INSPIRED COGNITIVE ARCHITECTURES
PROGRAM FOR SUNDAY, APRIL 24TH

09:00-10:30 Session 17
Chair:
Lubov Podladchikova (A.B. Kogan Research Institute for Neurocybernetics at Southern Federal University, Russian Federation)
Location: Alekseevsky hall
09:00
Witali Dunin-Barkowski (Scientific Research Institute for System Analysis, Russian Federation)
Ksenia Solovyeva (Moscow Institute of Physics and Technology, Russian Federation)
Data Formats Inside the Brain

ABSTRACT. Technically, the brain is an "end-to-end" system (ETES): it monitors the state of the external world through numerous receptors and acts back on the world via multiple effectors, steering the world's states toward those that satisfy the needs and goals of the brain (i.e., of the host organism). In an ETES, the concrete workings of particular parts and elements of the system do not matter per se, as long as the system's needs are reasonably satisfied. Taking into consideration the "kluge" principles of brain construction (Marcus, 2009), it seems quite improbable that the brain contains schemes that are cleanly treatable mathematically. Most probably, its processes are strongly "entangled" (in a non-quantum-mechanical sense). Examples of such entanglement of natural-language word representations have recently been found in deep-learning-based machine translation systems, and similar properties should be expected in the brain. Along with that, simple data formats are also likely to be found in natural neural systems, especially if they have valuable properties and are easy to obtain in simple neural networks (neural models). Examples of such data formats will be presented in the talk. The work was supported by the Russian Foundation for Basic Research, grant no. 16-07-01059.
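
The "entanglement" of word representations mentioned in the abstract can be pictured with a toy example: in the distributed vector spaces learned by deep-learning translation systems, semantically related words share representation dimensions and end up close together. The minimal Python sketch below uses invented 4-dimensional vectors (not taken from any real system) purely for illustration.

```python
# Toy illustration of "entangled" word representations: related words occupy
# nearby directions in a shared embedding space. Vectors are invented for the
# example only.
import numpy as np

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

embeddings = {
    "king":   np.array([0.9, 0.8, 0.1, 0.0]),
    "queen":  np.array([0.8, 0.9, 0.2, 0.0]),
    "pebble": np.array([0.0, 0.1, 0.9, 0.8]),
}

print(cosine(embeddings["king"], embeddings["queen"]))   # high (~0.99): strongly overlapping representations
print(cosine(embeddings["king"], embeddings["pebble"]))  # low (~0.12): little shared structure
```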

09:45
Vladimir Golovko (Brest State Technical University, Belarus)
Current Trends and Advances in Deep Neural Networks

ABSTRACT. Over the last decade, deep neural networks have become a powerful tool in the domain of machine learning. In the general case, a deep neural network consists of multiple layers of neural units and can build a deep hierarchical representation of its input data: the first layer extracts low-level features, the second layer detects higher-level features, and as a result the network performs a deep non-linear transformation of the input data into a more abstract level of representation. An important problem is the training of deep neural networks, because learning in such networks is much more complicated than in shallow ones, owing to the vanishing gradient problem, poor local minima, and the unstable gradient problem. Therefore, many deep learning techniques have been developed that overcome some limitations of conventional training approaches. Since 2006, unsupervised pre-training of deep neural networks has been used. In this case, training consists of two stages: pre-training of the network using a greedy layer-wise approach, and fine-tuning of all its parameters using back-propagation or the wake-sleep algorithm. The pre-training of a deep neural network is based on either the restricted Boltzmann machine (RBM) or the auto-encoder approach. At present, stochastic gradient descent (SGD) with the rectified linear unit (ReLU) activation function is used to train deep neural networks in a supervised manner. There exist different types of deep neural networks, namely deep belief networks, deep convolutional neural networks, deep recurrent neural networks, deep autoencoders, and so on. Deep neural networks currently provide the best performance on many problems in image, video, and speech recognition and in natural language processing. This lecture provides an overview of deep neural networks and deep learning. Different deep learning techniques, including both well-known and new approaches, are discussed. For instance, a new technique called "REBA" for the training of deep neural networks, based on the restricted Boltzmann machine, is demonstrated. In contrast to Hinton's traditional approach to the training of restricted Boltzmann machines, which is based on linear training rules, the proposed technique is founded on nonlinear training rules. Experiments demonstrate the high potential of deep neural networks in real applications.
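
As a concrete illustration of the two-stage scheme described in the abstract, the sketch below shows the unsupervised pre-training stage for a single layer, implemented as a restricted Boltzmann machine trained with standard CD-1 contrastive divergence. This is the classical Hinton-style rule, not the REBA technique mentioned in the lecture; layer sizes, learning rate, and data are invented placeholders.

```python
# Minimal sketch of greedy layer-wise pre-training: one RBM layer updated with
# CD-1 contrastive divergence. Sizes and data are placeholders for illustration.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_step(v0, W, b_v, b_h, lr=0.01):
    """One in-place CD-1 update on a mini-batch v0 of shape (batch, n_visible)."""
    h0 = sigmoid(v0 @ W + b_h)                        # positive phase: hidden probabilities
    h_s = (rng.random(h0.shape) < h0).astype(float)   # sample hidden states
    v1 = sigmoid(h_s @ W.T + b_v)                     # reconstruct visible units
    h1 = sigmoid(v1 @ W + b_h)                        # negative phase
    batch = v0.shape[0]
    W += lr * (v0.T @ h0 - v1.T @ h1) / batch         # data statistics minus reconstruction statistics
    b_v += lr * (v0 - v1).mean(axis=0)
    b_h += lr * (h0 - h1).mean(axis=0)

n_visible, n_hidden = 784, 256
W = 0.01 * rng.standard_normal((n_visible, n_hidden))
b_v, b_h = np.zeros(n_visible), np.zeros(n_hidden)

# Toy usage on random "data"; real pre-training would use actual input vectors.
for _ in range(10):
    cd1_step(rng.random((32, n_visible)), W, b_v, b_h)
```

In a full greedy layer-wise procedure, the hidden activations produced by this layer would become the training data for the next layer's pre-training, after which all weights would be fine-tuned with back-propagation.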

10:30-11:00 Coffee Break
11:00-12:30 Session 18
Chair:
Witali Dunin-Barkowski (Scientific Research Institute for System Analysis, Russian Federation)
Location: Alekseevsky hall
11:00
Lubov Podladchikova (A.B. Kogan Research Institute for Neurocybernetics at Southern Federal University, Russian Federation)
Dmitry Shaposhnikov (A.B. Kogan Research Institute for Neurocybernetics at Southern Federal University, Russian Federation)
An approach to the study of the internal functional structure of cortical columns

ABSTRACT. At present, cortical columns attract the attention of scientists and developers in various areas of neuroscience and neuroengineering. In this report, results obtained in experimental and modelling studies are considered. Problems of columnar organization that remain unsolved are identified, namely: (i) what are appropriate criteria for identifying functional columns; (ii) can cortical functional columns be considered relatively discrete units; (iii) what are the principles of the intrinsic functional structure of the columns; (iv) what are the cooperative functions of a column compared with the single-neuron level; (v) what are the dynamic operations within the columns. An approach to the study of the internal functional structure of cortical columns is presented.

11:45
Kate Jeffery (UCL, UK)
Neural representation of complex space
SPEAKER: Kate Jeffery

ABSTRACT. How the brain forms a representation of space, called a "cognitive map", for use in navigation and memory has been a topic of intensive investigation for several decades. It began with John O'Keefe's discovery of place cells, which fire at high rates when an animal ventures into particular parts of the environment. Subsequent decades saw the discovery of additional types of spatially encoding neurons, including head direction cells and grid cells. The study of these neurons has been very informative in understanding how rats, and we think other mammals including humans, form cognitive maps. More recently, attention has turned to how these neurons function in the complex real world, which has multiple compartments and in which animals are not always constrained to a horizontal surface. This talk will review the spatially encoding neurons and discuss recent research into the encoding of complex space.
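
As a simple illustration of what "firing at high rates in particular parts of the environment" means quantitatively, the sketch below implements the textbook Gaussian place-field model of a place cell's rate map. The parameters are invented and the model is not taken from the talk.

```python
# Textbook Gaussian place-field model: the cell's firing rate peaks when the
# animal is at the field centre and falls off with distance. Parameters are
# invented for illustration only.
import numpy as np

def place_cell_rate(pos, centre, peak_rate=20.0, width=0.15):
    """Firing rate (Hz) of a place cell with a Gaussian field.

    pos, centre : 2-D positions in metres; width is the field's spatial scale.
    """
    d2 = np.sum((np.asarray(pos) - np.asarray(centre)) ** 2)
    return peak_rate * np.exp(-d2 / (2.0 * width ** 2))

# The cell fires strongly inside its field and is nearly silent outside it.
print(place_cell_rate([0.50, 0.50], centre=[0.5, 0.5]))  # ~20 Hz at the field centre
print(place_cell_rate([1.00, 1.00], centre=[0.5, 0.5]))  # near 0 Hz far from the field
```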

12:30-13:00 Session 19: Discussion Panel 5: "B" in BICA

Panelists will include speakers of the session and others by invitation.

Chair:
Witali Dunin-Barkowski (Scientific Research Institute for System Analysis, Russian Federation)
13:00-14:00 Lunch Break
14:00-15:15 Session 20

Session summaries, Discussion Panel 6: Roadmap, Hackathon awards, Adjourn

Chair:
Frank Ritter (PSU, USA)
Location: Alekseevsky hall