MOCO 2017: 4TH INTERNATIONAL CONFERENCE ON MOVEMENT AND COMPUTING
PROGRAM FOR WEDNESDAY, JUNE 28TH

09:00-12:30 Session 2A: Workshop: Training, Sharing, and Enacting: Somaesthetics of Rhythmic Interaction
Location: SIML
09:00
Training, Sharing, and Enacting: Somaesthetics of Rhythmic Interaction
SPEAKER: unknown

ABSTRACT. Designing for and through movement in creative ways is becoming increasingly important in media technology. We propose a workshop for developing and designing sketches of creative applications, where the quality of movement and rhythms are of paramount importance. At MOCO’17, we aim to bring together performers, movers, students, and creative coders for design exploration. We will reuse observations, experience, and data from a previous workshop, but ensure a first-hand account of somaesthetic experience through movement and creative coding.

09:00-12:30 Session 2B: Workshop: Workshop on Experience Explicitation Interviews for Movement Researchers and Practitioners
Location: PSH 302
09:00
Workshop on Experience Explicitation Interviews for Movement Researchers and Practitioners.

ABSTRACT. Accessing the user, the audience or the participant’s lived experience of a movement-based interaction, installation or artwork is a challenge that movement researchers and practitioners face during the making process and the evaluation of their work. In psychology, experience explicitation interviews developed by psychologist and researcher Pierre Vermersch make it possible to bring the user back to a moment of evocation of a past activity in order to access different dimensions of her lived experience that may not be immediately present to her consciousness. We propose a workshop for movement researchers and practitioners that introduces techniques from experience explicitation interviews in order to bridge the gap between human embodied experiences and movement and computing.

09:00-12:30 Session 2C: Doctoral Symposium
Location: PSH 305
09:00
The Effect of Control-Display Ratio in Handheld Motion Controllers on Player Immersion: A Pilot Study

ABSTRACT. Motion controls are experiencing a renaissance in current Virtual Reality (VR) gaming applications. While there is significant work on how motion controllers affect player experience in games compared to other controllers, relatively little is known about the experiential effects of concrete, low-level design of motion controls. Therefore, this study explores the relationship between immersion and control-display ratio in motion controllers in a VR setup. A pilot experiment compared a 1:1 ‘natural’ mapping ratio with a decelerated ratio. While quantitative results were inconclusive, interviews showed that novelty might be playing an important role in the results.
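As a purely illustrative sketch of the manipulation under study (not the study's actual implementation), a control-display ratio can be applied by scaling the tracked hand displacement before it drives the virtual hand; the function names and the 0.5 ratio below are assumptions.

```python
import numpy as np

def apply_cd_ratio(controller_pos, anchor_pos, cd_ratio=1.0):
    """Map a tracked controller position to a displayed position.

    cd_ratio = 1.0 reproduces the 'natural' 1:1 mapping; values below 1.0
    decelerate the displayed motion relative to the physical hand.
    """
    displacement = np.asarray(controller_pos) - np.asarray(anchor_pos)
    return np.asarray(anchor_pos) + cd_ratio * displacement

# Example: a 10 cm hand movement along x, rendered with a decelerated ratio.
hand_start = [0.0, 1.2, 0.4]          # metres, in tracking space
hand_now   = [0.1, 1.2, 0.4]
print(apply_cd_ratio(hand_now, hand_start, cd_ratio=0.5))  # virtual hand moves only 5 cm
```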

09:00
Towards Automatic Unsupervised Segmentation of Music-Induced Arm Gestures from Accelerometer Data
SPEAKER: unknown

ABSTRACT. This article presents an ongoing study towards the automatic unsupervised segmentation of gestures induced by music.

09:00
Body-brain-avatar Interface: a tool to study sensory-motor integration and neuroplasticity
SPEAKER: unknown

ABSTRACT. We introduce a body-brain-avatar interface (BBAI), which integrates physiological signals from multiple layers of the peripheral nervous system (brain, heart rate, temperature) collected in real time. We also present new analytical methods amenable to selectively enhance motor control. To be precise, we first characterize the stochastic signatures of biorhythms harnessed from the nervous systems of the individual end user employing a unifying statistical platform for personalized analyses and inference. Then we pair the end user with an avatar endowed with the person's biorhythms and with their noisy variants. Gradually, we perturb the avatar with the person's noise from different nervous systems levels. We evaluate the outcome of these visuo-motor manipulations in real time to change the noise patterns and inform the interface about the person's nervous systems' reactions. In closed loop, we co-adapt the end user and the avatar driving the coupled performance of their body parts with various noise regimes. We then evaluate the person's resting state involuntary micro-movements and iterate the training until we find noise regimes with a tendency to increase the person's bodily awareness leading as well to highly predictive statistical patterns of the involuntary micro-movements across the body. We discuss our results as we further evaluate the person's performance across multiple levels of control, ranging from voluntary to automatic to autonomic.

09:00
Sentimental Soft Robotics as Companion Artefacts

ABSTRACT. In my practice-led PhD research I explore new dimensions of human-machine relations in the affective space through making and through engaging social interaction. I make robotic artefacts whose movements are responsive and, later, are intended to become adaptive to human emotional cues. Soft robotic mechanisms and materials are adopted as the enabling technology because they contribute to the pursuit of artefacts that integrate computational intelligence and aesthetic serendipity.

These artefacts offer organic appearances and sensual movements. The interactive movements are programmable and, at the same time, constitute data about human interactivity. Their aesthetic value works to attract people to interact with them. Through interaction, the embedded sensors collect data on interactors’ behaviour. Through data analysis, the aim is to find patterns that infer affective status, so that the artefacts can later adapt their own behaviour accordingly.

Their sensual properties, namely the organic appearance and sensual movements, help establish connection on an emotional level. Unlike computational artefacts, which often come with pre-defined interaction guidelines and a prescribed ‘emotional status’, these artefacts enable a process of relation forming at the individual level, through accumulated interaction. The design perceives emotion as interaction (Kirsten et al., 2007) and supports the understanding, interpretation and experiencing of emotions. The research and practice naturally fall into three stages. Stage one: learning the material and kinetic properties of soft robotic artefacts through studio making. Stage two: engaging human interaction through public-facing workshops to collect observation and feedback. Stage three: developing artefacts in a specific context. My research is at the end of stage two and is actively seeking a context to bring the research and practice to stage three. This paper briefly describes the development of the early research stages, projects the later stages, and hopes to collect feedback and comments.

09:00
Truly performative within a distributed network

ABSTRACT. This paper investigates the changing nature of performance art within a distributed network by examining the shift in the perception of public space towards a technologically mediated form of space. This situation instigates critical performative artworks, which straddle the boundaries between art and life even more than the medium's historical predecessors did. The condition is linked to the emergence of a new praxis among artists who utilise this new form of public space, or mediated space, as their studio site and research site, as well as their site of display.

09:00
VIMOs: Enabling Expressive Mediation and Generation of Embodied Musical Interactions
SPEAKER: unknown

ABSTRACT. This paper presents a recently-started doctoral project towards a new framework, called Virtual Intelligent Musical Objects (VIMOs). VIMOs aim at combining features of interactive machine learning tools, autonomous agents, and collaborative media into the creation of motion-based, user adaptable, shareable interactive music systems. We propose two models for stylistic motion learning and generation under a "design through performance" interactive workflow. We discuss further human and computer-related research challenges involving expressiveness rendering and social musical interaction, as well as novel artistic and educational applications to be led within the scope of VIMOs.

09:00
Fibres, Fabrics and F-Formations.
SPEAKER: unknown

ABSTRACT. Body movements contain a great deal of information about patterns of participation in conversation. For example, speakers and addressees move their hands in systematically different ways. Existing approaches to identifying patterns in social interaction typically employ relatively complex sensing devices such as fixed cameras or mobile phones. With this work, a new, non-intrusive method for sensing patterns of social interaction using only fabrics is introduced. The use of a textile surface as a sensing material for capturing body movement will be discussed further within the scope of this project.

09:00
Embodied contemplative practices and interactive music.

ABSTRACT. Human movement has become more and more central in contemporary music performance, in particular because of an increasing use of interactive technologies that allow the body to generate and control music. In order to better understand this kind of musical expression, we need to understand the body and its perception. Embodied contemplative practices (like Qigong or Yoga) have developed a knowledge of the body that can be helpful for understanding human perception: phenomenology and embodied cognition have studied and adopted this knowledge in order to scientifically consider first-hand experiences without reductionism. If the role of the body is so important for interactive music performance, we need to consider and include this perceptual knowledge in interactive music practices. The goal of this doctoral research is to develop an interactive music system that supports and helps the development of specific abilities of embodied contemplative practices: proprioception, self-observation and introspection.

09:00
ConDiS – Conducting Digital System

ABSTRACT. This paper presents the Conducting Digital System (ConDiS), which is designed to enable a conductor not only to control the overall sound of the performing musicians but also to control a digitally processed version of the performers' sound in real time. This means that the conductor should be able to “grab” a digitally processed sound from one or more instruments, change its volume and sonority, and move it around the hall, all in real time, with his/her conducting gestures. In other words, conducting the overall balance/volume, timbre and location in space between the instrumental signal and the computer-generated sound signal. ConDiS is directed toward the interaction, the expressions, the musical gestures and movements of the classical conductor. Are we capable of building a system that feels “natural” to the conductor? Can we build a gesture-recognition system that allows the conductor to use his/her natural way of expressive conducting to add the same expressiveness to live interactive electronic sounds? Through analyzing conducting gestures, testing various sensor techniques and, most importantly, composing and performing music, I seek to find answers to these questions.

09:00
Co-adaptive Tools to Support Expert Creative Practice

ABSTRACT. I am interested in designing interactive systems to support expert creative practice. During my thesis I will propose novel tools, grounded in creative professionals’ processes, that will allow them to explore complex creative concepts. With this goal in mind, I studied the practices of designers and built a tool called StickyLines. StickyLines allows users to appropriate the concept of alignment and distribution relationships in graphical layout. My main focus for the rest of my thesis is on choreographers, as they work not only with spatial but also temporal and more general relationships, presenting an interesting challenge from a Human-Computer Interaction (HCI) perspective. I interviewed contemporary choreographers about their creative process and conducted workshops to explore how they express creative ideas. As a result, we proposed a framework for articulating the high-level patterns that emerge from their practice, and presented a set of implications for the design of interactive tools for choreographers. I am currently prototyping Knotation, a tool that will let them explore their choreographic ideas by sketching and linking multimedia files at different levels of abstraction. I plan to iterate on the design with users through participatory design workshops and to evaluate it with qualitative studies.

09:00
Amateur User-Centered Software for Creative, Diverse 3D Human Avatar Design

ABSTRACT. This doctoral research designs 3D human avatar generation software for amateur creative users. Currently available software relies on limiting the range of possible bodies that the user is able to create, within the boundaries of normative physicality, in order to simplify interaction for users without 3D modeling skills. Rather than artificially limiting user output, I am creating open-source software that expands the range of bodies able to be represented in the program, following a user-centered design process to implement direct manipulation techniques extrapolated from artistic practice. This paper describes the background context, aims, and current research activities related to creating this software as a PhD project.

09:00
Embodied Consciousness During Meditative Moving - neurocognitive theories
SPEAKER: Aska Sakuta

ABSTRACT. My doctoral research lies at the intersection between movement phenomenology, Eastern philosophy, and cognitive science. I see the mover’s embodied phenomenon as one of the key resources in developing new theories on how our consciousness relates to motor control. My work aims to re-articulate the phenomenon of movement – as perceived by somatic movement practitioners – in a language and framework appropriate for future scientific and empirical investigation into human cognition and movement. In my presentation, I will introduce the idea of meditation in movement, wherein a mover experiences the state of “no mind” (Yuasa, 1993), a sense of mental tranquillity and ‘nothing-ness,’ during instances of Eastern movement practices. The state of “no mind” is often associated with the idea of “flow” (Csikszentmihalyi, 2013), a feeling of being “in the zone,” in which the individual’s intuition sharpens, and he/she exhibits “peak performance” (Privette, 1983). My research asks the question: Is there a relationship between the phenomenon of “no mind” and ‘optimal’ movement performance? My theory addresses the notion that the state of “no mind” (or “flow”) can be neurologically represented by a deactivation in some of the executive functions in the brain, thereby potentially giving way to more primitive functions including intuitive motor control (Austin, 2010). By introducing a brief theoretical outline and then inviting the listeners to participate in a short movement session, I would like to open up an interdisciplinary discussion on ideas such as the neurophenomenology of meditative moving, and movement efficiency enabled through the deactivation of executive functions.

References:
Austin, J. H. (2010). The thalamic gateway: How the meditative training of attention evolves toward selfless transformations of consciousness. Effortless Attention: A New Perspective in the Cognitive Science of Attention and Action, 373-407.
Csikszentmihalyi, M., & Csikszentmihalyi, I. S. (Eds.). (1992). Optimal experience: Psychological studies of flow in consciousness. Cambridge University Press.
Privette, G. (1983). Peak experience, peak performance, and flow: A comparative analysis of positive human experiences. Journal of Personality and Social Psychology, 45(6), 1361.
Yuasa, Y. (1993). The body, self-cultivation, and ki-energy. SUNY Press.

10:00-10:30 Coffee Break
12:30-14:00 Lunch Break
13:00-14:00 Session 3A: Installations and Demos
Location: Weston Atrium
13:00
Collaborative 2D center of mass serious game
SPEAKER: Mehdi Hamamda

ABSTRACT. We have implemented a 2D serious game based on collaboration between players rather than a competitive scenario. It is based on controlling the players' Center of Mass, a physical concept which links participants in real time. We will explain the main pedagogical impacts of this collaborative movement from K-12 to university.
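A minimal sketch of the underlying physical concept, assuming the game tracks each player's 2D position and an approximate mass; the player values below are illustrative, not taken from the game.

```python
def collective_center_of_mass(positions, masses):
    """2D centre of mass of all players: the mass-weighted mean of their positions."""
    total_mass = sum(masses)
    x = sum(m * p[0] for p, m in zip(positions, masses)) / total_mass
    y = sum(m * p[1] for p, m in zip(positions, masses)) / total_mass
    return x, y

# Three players tracked on a 2D plane (metres) with illustrative masses (kg).
players = [(0.0, 0.0), (2.0, 1.0), (1.0, 3.0)]
masses  = [60.0, 75.0, 80.0]
print(collective_center_of_mass(players, masses))  # the shared point the group steers together
```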

13:00
Maria Montessori meets gesture tracking technologies
SPEAKER: Adrien Husson

ABSTRACT. This paper presents our recent work on the relevance of introducing digital technologies into the education field, especially at kindergarten level. Specifically, we focus on digital technologies that allow any kind of movement tracking, and on how they can enhance teaching and learning potential in various fields. The prototype we submit is the first of a series focusing on writing and reading education. We present the various influences that led us to this prototype and describe the perspectives for further experimentation. We also mention how this early work can motivate similar studies in other fields dealing with body gestures.

13:00
The Box: A Game About Two-Handed 3D Rotation

ABSTRACT. The Box is a prototype 3D puzzle game to study two-handed motion control schemes for spatial rotation. Using two handheld motion tracking devices, players are tasked to rotate a maze cube to roll a small sphere towards a goal inside the cube. We designed the game to iteratively observe how people would spontaneously use controls and develop and refine a ‘natural’ control scheme from that. Initial results indicate no immediate clear principles of best practices.

13:00
Wired 2
SPEAKER: Karen Wood

ABSTRACT. The Stream Project’s founding members, two dancers and a neuroscientist, explored the possibilities of using the dancer’s physiological information to create a series of works, called Wired, that disrupts and informs the viewer’s understanding of their own physiological state. Brain wave states, heart rate variability and respiratory rate were used to create a series of artistic dance works. The project focuses on bringing scientific exploration into a creative environment, taking full advantage of the visual and auditory possibilities already being used within the field. Wired is an exciting collaboration between dance performance, neuroscience, film, sound and lighting, culminating in a live-feed multi-media performance.

The project worked with a creative coder to develop an installation that uses the audience member's heart rate to select different sections of dance film footage. The footage shows dancer Genevieve Say dancing on a bridge in the Peak District. The audience member holds an object with heart-rate sensors embedded in it, and their heart rate then dictates which section of the film is shown. The installation was shown as part of the Wired series at FACT (Foundation for Art and Creative Technology), Liverpool, in 2015. The footage has also been used to make a dance film.
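A simplified, hypothetical sketch of this selection logic, mapping the measured heart rate to one of several film sections; the BPM ranges and section labels are assumptions for illustration only, not the installation's actual thresholds.

```python
# Hypothetical BPM ranges mapped to sections of the dance film.
SECTIONS = [
    (0, 60, "slow section"),
    (60, 80, "medium section"),
    (80, 999, "fast section"),
]

def select_section(heart_rate_bpm):
    """Return the film section corresponding to the current heart rate."""
    for low, high, name in SECTIONS:
        if low <= heart_rate_bpm < high:
            return name
    return SECTIONS[-1][2]

print(select_section(72))  # -> "medium section"
```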

We would like to propose to show the Wired 2 installation at MOCO17 as we think that it would be a good opportunity for us to show the work and get feedback on areas for development.

13:00
Becoming Light
SPEAKER: Timothy Wood

ABSTRACT. Becoming Light is an immersive world made for live performance and for virtual reality. As a virtual reality installation, participants are free to interact with the world of light and sound on a path through memory and dream-like space. The motions of the body are remembered within the world and re-encountered as ghost-like storytellers along the journey. As the pathway through the world unfolds, voices and recorded poetry are discovered, revealing an ethereal narrative. The shape, timing, and velocity of the body change the way the story is experienced.

As a performance piece, the virtual reality headset is replaced with projectors and a stage. A solo dancer guides the audience through the world of light while following an improvisational somatic movement score.

13:00
Jinn-Ginnaye
SPEAKER: Kirk Woolford

ABSTRACT. Jinn-Ginnaye is an exploration of movement in place. It is a collection of dance pieces exploring issues of bringing western dance performance to the United Arab Emirates, where local modesty laws influence how women can be shown in public. The pieces use video compositing, motion capture, and Virtual Reality techniques to remove the body of the dancer, but leave behind the dance, and the traces of the desert in which it was created.

13:00-14:00 Session 3B: A Space To Wonder
Location: SIML
13:00
A Space to Wonder: Movement and Sound Interaction through Biosensor Technology
SPEAKER: unknown

ABSTRACT. This interdisciplinary workshop researches shared experiences of embodied knowledge and kinesthetic empathy in creative social environments by exploring the benefits of combining the disciplines of the Arts and Sciences. Utilizing biosensor technology and cognitive science research within a movement workshop designed for public engagement with participants who have limited dance experience opens a discussion on the facilitation of shared embodied knowledge and kinesthetic empathy.

Presenting this in a black-box dance studio invites participants’ senses to be heightened whilst encouraging trusting relationships. Introducing biosensor interfaces, through wearable breath-bands, allows the participants’ breathing patterns to be sonified as the soundscape throughout the session, giving the group a layered experience of their bodily movement. The sound landscape, combined with facilitated somatic movement practices, highlights movement and sound interaction explorations through movement with others, whilst offering a collaborative environment transferable to a variety of communities.
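One simple way to sonify a breath-band signal of the kind described above is to map the band's normalised stretch onto pitch and loudness; the value ranges and mapping below are illustrative assumptions, not the workshop's actual implementation.

```python
def map_range(value, in_min, in_max, out_min, out_max):
    """Linearly rescale a sensor value from one range to another, with clamping."""
    value = max(in_min, min(in_max, value))
    return out_min + (value - in_min) * (out_max - out_min) / (in_max - in_min)

def breath_to_sound(stretch):
    """Map a normalised breath-band stretch (0 = fully exhaled, 1 = fully inhaled)
    to a pitch in Hz and an amplitude in 0..1 for the soundscape."""
    pitch_hz = map_range(stretch, 0.0, 1.0, 110.0, 440.0)
    amplitude = map_range(stretch, 0.0, 1.0, 0.2, 0.8)
    return pitch_hz, amplitude

print(breath_to_sound(0.5))  # mid-inhale -> (275.0, 0.5)
```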

This collaborative research project, facilitated by Abigail and Klara, explores personal embodied knowledge and the role of kinaesthetic empathy, when understanding others through embodied knowledge. This workshop will emulate an Interactive Installation whilst furthering discussions about somatic movement, and the use of biosensors for movement and sound interaction in research applying technology to support and understand human movement practices.

13:00-14:00 Session 3C: Schrifttanz Zwei
Location: PSH 332
13:00
Schrifttanz Zwei
SPEAKER: unknown

ABSTRACT. This proposed demonstration/workshop will use a phenomenological perspective as well as a production-oriented approach to the work/research on which we are collaborating (‘Schrifttanz Zwei’). Although digital technologies support this multi-disciplinary project that combines archival research, choreography, music composition, animation creation, and video projection, our participation in MOCO will centre around the collaborative process and how we access and move through the digital and analog spaces in which we each work. The hope is that, through this discussion, attendees -- and we as collaborators -- will come to better understand the place of the Arts in the Digital world while exploring the possibilities and challenges inherent in the reconstruction/reimagining of a dance score.

We will share our processes with MOCO attendees in a demonstration. The Schrifttanz Zwei team would then like to invite them to explore their own process of reconstructing/reimagining Bartenieff’s score in a workshop setting. This would involve teams working together in a given timespan (either throughout the conference or in a ½ or full-day workshop, we are open) to create collaborative products to be shown to the full gathering at some point during the conference. They may work with our music and dance files using their own technologies, or play with Indigo. Attendees might work with the score itself, or any process they desire. It is our hope that we can inspire others to find ways to access other disciplines through collaborative, multi-disciplinary activities.

Full 2-page demo proposal is attached.

14:00-14:30 Session 4: Welcome
Location: LG02 Lecture Theatre
14:30-15:30 Session 5: Opening Keynote: Mark Coniglio
Location: LG02 Lecture Theatre
14:30
Troika Ranch
SPEAKER: Mark Coniglio

ABSTRACT. Media artist, composer and programmer Mark Coniglio is widely considered to be a pioneering force in the exploration of dance and interactive media. Beginning with Troika Ranch (http://troikaranch.org), the media intensive dance company he co-founded with Dawn Stoppiello, and later as the creator of the media software Isadora (http://troikatronix.com), he has spent nearly three decades enmeshed in the relationship of movement, media, and computing.

Coniglio's talk will begin with an overview of his artistic practice, showing how Troika Ranch's early works – where sensory systems, responsive media software and live performers combined to produce an interactive "reflection" of the body – evolved into an approach where technology actively intervened in the creation, rehearsal and performance of movement. He will conclude by championing potential future interventions, where computational artificial intelligence will be placed into conflict with human bodily knowledge, to provoke the invention of new movement, unforeseen choreographic structures, and compelling relationships between the body and media.

15:30-16:00Coffee Break
16:00-18:00 Session 7: Sensors
Location: LG02 Lecture Theatre
16:00
Initial Investigations into Characterizing DIY E-Textile Stretch Sensors
SPEAKER: unknown

ABSTRACT. This paper evaluates three electronic textile (e-textile) stretch sensors commonly constructed for bespoke applications: fabric knit with a stainless steel and polyester yarn, and knit fabric coated with a conductive polymer. Two versions of the knit stainless steel and polyester yarn sensor, one hand knit and one machine knit, are evaluated. All of the materials used in the construction of the sensors are accessible to designers and engineers, and are commonly used in wearable technology projects, particularly in arts performance. However, the properties of each sensor have not previously been formally analysed. We evaluate the sensors' performance when being stretched and released.

16:15
Methods for Tracking Dynamically Coupled Brain-Body Activities during Natural Movement
SPEAKER: unknown

ABSTRACT. A fundamental property of movement is its dynamically changing variability and its adaptive nature. These features seem to be connected to the cognitive control of our actions by the brain. However, it has been a challenge to connect cognitive neuroscience and movement science in developing a framework amenable to studying the coupled dynamics of the brain and body during natural movements. Part of the problem has been the lack of proper sensors to probe both activities in tandem. Fortunately, contemporary advances in wireless technology with high sampling resolution have paved the way to address this challenge. In this paper, we make use of wireless wearable sensors and a new statistical platform to study the dynamic interactions of the brain and body during natural walking. To examine the influence of cognitive tasks that are either spontaneous or deliberate, we combine the use of a metronome to impose a passive/spontaneous task, and the use of specific instructions on paced breathing to impose a deliberate task. This paper presents a new platform for individualized behavioral analyses, which incorporates a new set of data types and visualization tools, to quantify the outcome of such an experimental paradigm. We discuss our results and suggest that these new methods and paradigm may serve to unify and advance the fields of cognitive neuroscience and neuro-motor control.

16:40
Critical Appropriations of Biosensors in Artistic Practice
SPEAKER: unknown

ABSTRACT. In this article we discuss the ethical and aesthetic implications of the appropriation of biomedical sensors in artistic practice. The concept of cross-disciplinary appropriation is elaborated with reference to Guattari’s ethico-aesthetic paradigms, and Barad’s metaphor of diffraction as methodology. In reviewing existing artistic projects with biosensors, we consider ways in which the recontextualization of technologies, and likewise techniques, can both propagate and violate disciplinary expectations and approaches. We propose that by way of critical appropriations of biosensors in artistic practice—that is to say, de- and re-contextualizations of biosensors that acknowledge the shift of ecology and epistemology—artists have a vital role to play in troubling reductive representations of bodies, and furthermore, destabilizing the ethico-aesthetic boundaries of differently constituted disciplines.

17:05
Digital oxymorons: From ordinary to expressive objects using tiny wireless IMUs
SPEAKER: unknown

ABSTRACT. In this paper we discuss the potential of ordinary objects acting as human-computer interfaces with an Inertial Measurement Unit, the Twiz, used to capture a body's orientation and acceleration. The motivation behind this research is to develop a toolkit that enables end users to quickly prototype custom interfaces for artistic expression through movement. Through an iterative design process we have enhanced existing technical implementations such as wireless data transfer, battery lifespan, two-way communication and data analysis, including machine-learning techniques. We conducted object-making sessions and developed software prototypes for audio and visual feedback. We explored a range of experiments related to visual arts, dance, and music by attaching the Twiz to different types of objects to allow users to carry out impromptu interactions. As a result of this process we have gained a better understanding of an object’s expressive potential whilst capturing and analyzing its movement.
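As a rough, hypothetical illustration of how data from a small IMU such as the Twiz might drive an expressive parameter (this is not the authors' toolkit), a static accelerometer reading can be reduced to a single tilt-based control value; the sensor values and function names below are assumptions.

```python
import math

def tilt_from_accelerometer(ax, ay, az):
    """Estimate tilt angle (radians from vertical) from a static
    3-axis accelerometer reading, using gravity as the reference."""
    g = math.sqrt(ax * ax + ay * ay + az * az)
    return math.acos(max(-1.0, min(1.0, az / g)))

def tilt_to_parameter(ax, ay, az):
    """Map tilt (0 = upright, pi = upside down) to a 0..1 expressive control."""
    return tilt_from_accelerometer(ax, ay, az) / math.pi

# Object lying roughly on its side: gravity mostly along the x axis.
print(tilt_to_parameter(9.6, 0.5, 1.0))  # close to 0.5
```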

17:30
Assessing the Accuracy of an Algorithm for the Estimation of Spatial Gait Parameters Using Inertial Measurement Units: Application to Healthy Subject and Hemiparetic Stroke Survivor
SPEAKER: Federico Visi

ABSTRACT. We have reviewed and assessed the reliability of a dead reckoning and drift correction algorithm for the estimation of spatial gait parameters using Inertial Measurement Units (IMUs). In particular, we are interested in obtaining accurate stride-length measurements in order to assess the effects of a wearable haptic cueing device designed to assist people with neurological health conditions during gait rehabilitation. To assess the accuracy of the stride-length estimates, we compared the output of the algorithm with measurements obtained using a high-end marker-based motion capture system, adopted here as a gold standard. In addition, we introduce an alternative method for detecting initial impact events (i.e. the instants at which one foot contacts the ground, used here for delimiting strides) using accelerometer data. Our method, based on a kinematic feature we named 'jerkage', has proved more robust than detecting peaks on raw accelerometer data. We argue that the resulting stride-length measurements are accurate enough to provide the trend data needed to support worthwhile gait rehabilitation applications. This approach has the potential to assist physiotherapists and patients without access to fully equipped movement labs. More specifically, it has applications for collecting data to guide and assess gait rehabilitation both outdoors and at home.
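The abstract does not define 'jerkage' formally. A plausible reading, sketched below as an assumption rather than the authors' actual algorithm, is the magnitude of the time derivative of acceleration, with initial contact events detected as threshold crossings of that signal; the sampling rate, threshold and synthetic data are illustrative only.

```python
import numpy as np

def jerk_magnitude(accel, fs):
    """Magnitude of the time derivative of a 3-axis acceleration signal.

    accel: (N, 3) array of accelerometer samples; fs: sampling rate in Hz.
    """
    jerk = np.diff(accel, axis=0) * fs           # finite-difference derivative
    return np.linalg.norm(jerk, axis=1)          # per-sample magnitude

def detect_initial_contacts(accel, fs, threshold):
    """Indices where the jerk magnitude rises above a threshold,
    a simple stand-in for detecting foot-ground contact events."""
    jm = jerk_magnitude(accel, fs)
    above = jm > threshold
    rising = np.where(above[1:] & ~above[:-1])[0] + 1
    return rising

# Synthetic example: a quiet signal with one sharp impact-like spike.
fs = 100.0
accel = np.zeros((200, 3))
accel[100] = [0.0, 0.0, 5.0]                     # brief spike on the vertical axis
print(detect_initial_contacts(accel, fs, threshold=200.0))
```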

18:00-19:30 Session 8: Corpus Nil
Location: SIML
18:00
Corpus Nil
SPEAKER: unknown

ABSTRACT. Corpus Nil is a body art performance for biophysical sensing technologies, surround sound and interactive light. In this piece, we have applied findings from our research on the notion of gesture power in terms of control and multimodal muscle sensing. We have implemented a set of algorithms which autonomously govern sound and light parameters in response to the performer's muscular activity. Aesthetically, the performance recalls a ritual of birth for a modified human body. It is a tense and sensual choreography between a human performer and a machine of hardware and software, forcedly embracing the audience through thresholds of movement, sound and light.
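As a rough, hypothetical sketch of the kind of autonomous mapping described (not the piece's actual algorithms), a muscle-sensing signal can be smoothed into an envelope and compared against thresholds that gate sound and light parameters; all names and values below are assumptions.

```python
def emg_envelope(samples, alpha=0.1):
    """Rectify and exponentially smooth a raw muscle signal into an activity envelope."""
    env = 0.0
    envelope = []
    for s in samples:
        env = (1 - alpha) * env + alpha * abs(s)
        envelope.append(env)
    return envelope

def map_to_output(env_value, sound_threshold=0.2, light_threshold=0.5):
    """Gate sound and light intensities against illustrative activity thresholds."""
    sound = max(0.0, env_value - sound_threshold)
    light = 1.0 if env_value > light_threshold else env_value / light_threshold
    return sound, light

raw = [0.0, 0.1, -0.3, 0.6, -0.8, 0.9, 0.2]      # toy muscle-signal fragment
for value in emg_envelope(raw):
    print(map_to_output(value))
```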