Registration table open, day 3

09:00-10:30 Session 15: Movement Recognition


Force & Motion: Conducting to the Click

ABSTRACT. We present preliminary results from an ongoing project at the University of Southampton that aims to develop protocols for motion capture of music conducting. These protocols will facilitate the study of conducting gestures, provide a high-quality open-access data set recorded with professional conductors, and offer a platform for developing machine learning for conductor-following systems. In this paper we explore the potential use of force-plate data to track conductors' beats as a non-intrusive method for conductor following. Three conductors were captured directing the same piece of music, and we analysed a section of the piece in which the conductors work with a click track, so that the intended beats share the same timing and there is an inherent ground truth. We then compared the data from the force plate and from high-end optical marker tracking against observer beat tapping and the click audio to determine whether force-plate data could serve as a useful analogue in conductor following. The results suggest that, with simple analysis of the data, beats can be extracted with timing accuracy comparable to optical marker tracking.
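The paper's signal analysis is not detailed in this abstract; as an illustration of the "simple analysis" it mentions, beat extraction from a vertical ground-reaction-force trace can be sketched as threshold-gated peak picking. Everything below (sampling rate, threshold, the synthetic 120 BPM trace) is a hypothetical stand-in, not the authors' pipeline.

```python
import numpy as np

def detect_beats(force, fs, threshold, min_gap_s=0.3):
    """Return times (seconds) of local maxima above threshold,
    separated by at least min_gap_s seconds."""
    min_gap = int(min_gap_s * fs)
    beats = []
    for i in range(1, len(force) - 1):
        if force[i] > threshold and force[i] >= force[i-1] and force[i] > force[i+1]:
            if not beats or i - beats[-1] >= min_gap:
                beats.append(i)
    return np.array(beats) / fs

# Synthetic force trace: 120 BPM click, i.e. a pulse every 0.5 s, at fs = 1000 Hz
fs = 1000
t = np.arange(0, 4, 1/fs)
force = np.zeros_like(t)
for b in np.arange(0.5, 4, 0.5):
    force += np.exp(-((t - b)**2) / (2 * 0.01**2))  # short pulse at each beat
beat_times = detect_beats(force, fs, threshold=0.5)
print(beat_times)  # ≈ [0.5, 1.0, 1.5, ..., 3.5]
```

Comparing such extracted beat times against the click audio and observer taps is then a matter of per-beat timing differences.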

Imagery and metaphors: from movement practices to digital and immersive environments

ABSTRACT. Imagery is a common practice applied in areas such as sports, rehabilitation, therapy and dance. In dance especially, students of all ages are encouraged by their teachers to use imagery to improve their performance or to clearly understand the form and quality of a movement. Mental imagery aims to stimulate thinking with the body, mostly through metaphoric pictures that usually trigger the sense of sight. These visual or kinesthetic images may include handling imaginary objects, imagining being in particular environments, and many other possible imaginary bodily states and shapes. Nowadays, motion sensing, augmented reality and virtual reality technologies offer powerful tools for implementing interactive, real-time visualizations, merging the two worlds of mental imagery and immersive technology into a new range of opportunities. In this work we raise the question of how the design and translation of certain types of imagery into digital experiences might assist dance training. Within an embodied interactive experience, a mental imagery metaphor can be transformed, through visual or other modalities, into a representation that constitutes effective and/or creative feedback. We first examine existing imagery approaches in movement practices and discuss their characteristics. We then survey existing applications proposed by researchers in the field of interactive dance and categorize them by the modalities they use and the types of metaphors they relate to. Based on the existing literature on augmented performances and reported metaphors, we propose a practical map for implementing self-practice tools, reflection tools and learning environments.

K-Multiscope: Combining Multiple Kinect Sensors into a Common 3D Coordinate System

ABSTRACT. We present a method for combining data from multiple Kinect motion-capture sensors into a common coordinate system. Kinect sensors offer a cheaper, potentially less accurate alternative for full-body motion tracking. By incorporating multiple sensors into a multiscopic system, we address potential accuracy and recognition flaws caused by individual sensor conditions, such as occlusions and space limitations, and increase the overall accuracy of skeletal data tracking. We merge data from multiple Kinects using a custom calibration algorithm, called K-Multiscope.  K-Multiscope generates an affine transform for each of the available sensors, and thus combines their data into a common 3D coordinate system. We have incorporated this algorithm into Kuatro, a skeletal data pipeline designed earlier to simplify live motion capture for use in music interaction experiences and installations.  In closing, we present Liminal Space, a live duet performance for cello and dance, which utilizes the Kuatro system to transform dance movements into music.
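The calibration details of K-Multiscope are not given in this abstract, but the core idea it describes -- fitting an affine transform per sensor so that every Kinect's skeleton data lands in a common reference frame -- can be sketched as a least-squares fit over corresponding joint positions. The joint count and the noise-free simulated transform below are illustrative assumptions, not the Kuatro implementation.

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares affine transform: returns A (3x3) and t (3,)
    such that dst ≈ src @ A.T + t, from corresponding 3D points."""
    n = src.shape[0]
    X = np.hstack([src, np.ones((n, 1))])        # (n, 4) homogeneous coordinates
    M, *_ = np.linalg.lstsq(X, dst, rcond=None)  # (4, 3)
    return M[:3].T, M[3]

# Simulate a second Kinect seeing the same joints rotated and shifted
rng = np.random.default_rng(0)
joints_ref = rng.uniform(-1, 1, size=(25, 3))    # skeleton in the reference frame
theta = np.deg2rad(30)
R = np.array([[np.cos(theta), -np.sin(theta), 0],
              [np.sin(theta),  np.cos(theta), 0],
              [0, 0, 1]])
t_true = np.array([0.5, -0.2, 2.0])
joints_other = joints_ref @ R.T + t_true         # same skeleton, other sensor's frame

A, t = fit_affine(joints_other, joints_ref)      # maps other sensor -> reference frame
mapped = joints_other @ A.T + t
print(np.max(np.abs(mapped - joints_ref)))       # ~0 for this noise-free example
```

With one such transform per sensor, all skeletons can be merged into the shared coordinate system before downstream tracking.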

The Mirrored Body: Sensation, Agency and Expression in a Video Processed World

ABSTRACT. Real-time video processing in multimedia performances can create a video double that recalibrates understandings of both on-screen and physical bodies. This mirrored bodily exchange affords new experiences of agency, sensation and expression in the context of theatrical performances. Careful attention is paid to creating a life-sized mirror image that places the processed body in close relationship with the living body. These video processing techniques find their purpose as characters in storytelling for theatre and dance.

First Steps in Dance Data Science: Educational Design

ABSTRACT. We report results of a design-research effort to develop a culturally-relevant educational experience that can engage high school dancers in statistics and data science. In partnership with a local high school and members of its step team, we explore quantitative analysis of both visual and acoustic data captured from student dance. We describe prototype visualizations and interactive applications for evaluating pose precision, tempo, and timbre. With educational goals in mind, we have constrained our design to using only interpretable features and simple, accessible algorithms.

10:50-11:00 Coffee Break
12:15-13:30 Lunch Break
13:30-14:45 Session 18: Movement Analysis and Representation

Movement Representation 

Functional Data Analysis of Rowing Technique Using Motion Capture Data

ABSTRACT. We present an approach to analyzing the motion capture data of rowers using bivariate functional principal component analysis (bfPCA). The method was applied to data from six elite rowers rowing on an ergometer. Analysis of upper- and lower-body coordination during the rowing cycle revealed significant differences between the rowers, even though the data were normalized to account for differences in body dimensions. We make an argument for the use of bfPCA and other functional data analysis methods for the quantitative evaluation and description of technique in sports.
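bfPCA as the authors apply it is not specified beyond the abstract; a minimal discretized sketch of the idea -- stacking two coordinate functions per cycle on a common time grid and extracting principal modes of variation via SVD -- might look like this. The toy hip/knee curves and the component count are assumptions for illustration only.

```python
import numpy as np

def bivariate_fpca(curves_x, curves_y, n_components=2):
    """Discretized bivariate functional PCA: each row of curves_x / curves_y
    samples one cycle's two coordinate functions on a common time grid."""
    data = np.hstack([curves_x, curves_y])        # (n_cycles, 2 * n_samples)
    mean = data.mean(axis=0)
    centered = data - mean
    U, S, Vt = np.linalg.svd(centered, full_matrices=False)
    scores = U[:, :n_components] * S[:n_components]   # per-cycle scores
    components = Vt[:n_components]                    # principal modes of variation
    explained = S**2 / np.sum(S**2)
    return scores, components, explained

# Toy example: 20 "rowing cycles" as phase-shifted hip/knee angle pairs
grid = np.linspace(0, 2*np.pi, 100)
rng = np.random.default_rng(1)
phase = rng.normal(0, 0.2, size=20)
hip  = np.array([np.sin(grid + p) for p in phase])
knee = np.array([np.cos(grid + p) for p in phase])
scores, comps, explained = bivariate_fpca(hip, knee)
print(scores.shape, explained[:2])   # (20, 2); the first mode captures the phase variation
```

The per-cycle scores are what one would compare across rowers to describe differences in technique.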

Evaluating movement qualities with visual feedback for real-time motion capture

ABSTRACT. The focus of this paper is to investigate how the design of visual feedback on full-body movement affects the quality of the movements. Informed by the theory of embodiment in interaction design and media technology, as well as by the Laban theory of effort, a computer application was implemented in which users are able to project their movements onto two visuals ('Particle' and 'Metal'). We investigated whether the visual designs influenced movers through an experiment in which participants were randomly assigned to one of the visuals while performing a set of simple tasks. Qualitative analysis of participants' verbal movement descriptions, together with analysis of quantitative movement features, combines several perspectives on describing the differences and changes in movement qualities. The qualitative data show clear differences between the groups. The quantitative data indicate that all groups move differently when visual feedback is provided. Our results contribute to the design of the visual modality in movement-focused design of extended realities.

The Calder Effect - Embodied Knowledge Through Moving Images
PRESENTER: Jan Schacher

ABSTRACT. How can a contemporary technological apparatus reproduce the space of a gesture that is 40,000 years old? Through an iconic (or iconological) process, the project 'Les Mains Négatives' attempts to combine intensive archaeological, anthropological, iconological and phenomenological research on rock art painting with technological tools and re-mediation processes from media arts. Focused on a teleological approach, our research addresses the means of communication and interaction: what happens to one's consciousness when the body moves? Considering that rock art and cave ornamentations are not simply pictorial art, but a place for technical transmission, our aim is to show how rock art has been a medium for teaching via the medium of performance. The 'mise-en-oeuvre' of our apparatus generates an artistic object whose primary goal is knowledge production through aesthetic experience.

14:45-15:00 Coffee Break
15:00-16:15 Session 19: Machine Learning


Beyond Imitation: Generative and Variational Choreography via Machine Learning

ABSTRACT. Our team of dancers, physicists, and researchers has developed machine-learning tools that generate sequences of movements, and variations on movements, using recurrent neural network and autoencoder architectures trained on a dataset of choreography captured as 55 3D points.

Learning Movement through Human-Computer Co-Creative Improvisation

ABSTRACT. Computers that are able to collaboratively improvise movement with humans could have an impact on a variety of application domains, ranging from improving procedural animation in game environments to fostering human-computer co-creativity. Enabling real-time movement improvisation requires equipping computers with strategies for learning and understanding movement. Most existing research focuses on gesture classification, which does not facilitate the learning of new gestures, thereby limiting the creative capacity of computers. In this paper, we explore how to develop a gesture clustering pipeline that facilitates reasoning about arbitrary novel movements in real-time. We describe the implementation of this pipeline within the context of LuminAI, a system in which humans can collaboratively improvise movements together with an AI agent. A preliminary evaluation indicates that our pipeline is capable of efficiently clustering similar gestures together, but further work is necessary to fully assess the pipeline's ability to meaningfully cluster complex movements.
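The LuminAI clustering pipeline itself is not described in this abstract; a hypothetical minimal version of real-time gesture clustering -- assign each incoming gesture feature vector to the nearest centroid, or open a new cluster when no centroid is close enough -- could look like the following. The feature dimensionality, distance threshold, and running-mean update are illustrative choices, not the paper's method.

```python
import numpy as np

class IncrementalGestureClusterer:
    """Threshold-based online clustering of fixed-length gesture feature
    vectors: assign to the nearest centroid, or start a new cluster."""
    def __init__(self, threshold):
        self.threshold = threshold
        self.centroids = []   # running mean per cluster
        self.counts = []

    def observe(self, features):
        if self.centroids:
            dists = [np.linalg.norm(features - c) for c in self.centroids]
            k = int(np.argmin(dists))
            if dists[k] < self.threshold:
                # update the cluster's running mean with the new gesture
                self.counts[k] += 1
                self.centroids[k] += (features - self.centroids[k]) / self.counts[k]
                return k
        # no centroid is close enough: this is a novel gesture
        self.centroids.append(features.astype(float).copy())
        self.counts.append(1)
        return len(self.centroids) - 1

rng = np.random.default_rng(2)
clusterer = IncrementalGestureClusterer(threshold=1.0)
wave  = rng.normal(0.0, 0.05, size=(10, 8))   # gestures near one prototype
reach = rng.normal(5.0, 0.05, size=(10, 8))   # gestures near another
labels = [clusterer.observe(g) for g in np.vstack([wave, reach])]
print(labels)  # first ten in cluster 0, last ten in cluster 1
```

Unlike a fixed gesture classifier, such a scheme lets the agent accommodate movements it has never seen, at the cost of sensitivity to the threshold.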

Information Augmentation for Human Activity Recognition and Fall Detection using Empirical Mode Decomposition on Smartphone Data
PRESENTER: Selçuk Sezer

ABSTRACT. In this paper, we propose a novel design that reduces the number of sensors used in activity recognition and fall detection by applying empirical mode decomposition (EMD) along with gravity filtering, so as to untangle the useful information gathered from a single sensor, the accelerometer. We focus on reducing the number of sensors by augmenting the information obtained from the accelerometer alone, given that the accelerometer is the most common and most accessible sensor on smartphones. To do so, one gravity component and three intrinsic mode functions (IMFs) are extracted from the accelerometer signal. To assess how informative each component is, the raw components are used directly for classification, i.e. without hand-crafting statistical features. The extracted signal components are then individually fed into parallelized random forest (RF) classifiers. The proposed design is evaluated on the publicly available MobiAct dataset. The results show that by using only accelerometer data within the proposed scheme, it is possible to reach the performance of two sensors (accelerometer and gyroscope) used in a conventional manner. This study provides an efficient and convenient-to-use solution for smartphone applications in the human activity recognition domain.
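A full EMD sifting implementation is beyond a short sketch (libraries such as PyEMD are typically used for the IMF extraction), but the gravity-filtering step the abstract pairs with it is simple: a low-pass filter isolates the slowly varying gravity component, and the residual approximates linear acceleration. The exponential filter and synthetic trace below are illustrative assumptions, not the authors' exact design.

```python
import numpy as np

def split_gravity(acc, alpha=0.02):
    """Separate a raw 3-axis accelerometer trace into a slowly varying
    gravity estimate (exponential low-pass) and a linear-acceleration residual."""
    gravity = np.empty_like(acc)
    gravity[0] = acc[0]
    for i in range(1, len(acc)):
        gravity[i] = (1 - alpha) * gravity[i-1] + alpha * acc[i]
    return gravity, acc - gravity

# Synthetic trace: constant gravity on z plus a one-second burst of motion on x
fs = 50
t = np.arange(0, 10, 1/fs)
acc = np.zeros((len(t), 3))
acc[:, 2] = 9.81
burst = (t > 4) & (t < 5)
acc[burst, 0] += 3.0 * np.sin(2*np.pi*5*t[burst])
gravity, linear = split_gravity(acc)
print(gravity[-1])  # ≈ [0, 0, 9.81]: gravity recovered, motion removed
```

The gravity estimate and the residual (or, in the paper's design, the IMFs of the residual) can then be fed to separate classifiers.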

16:30-17:00 Session 21A: Inverted Narratives(IN): Design Through Practice


Inverted Narratives(IN): Design Through Practice

ABSTRACT. Inverted Narratives (IN) is an interactive system and performance practice designed to facilitate improvised solo performance by combining hand-balancing, computer music composition, and embodied interaction. It fosters experimental play, using primarily the legs and feet to construct characters and narratives. Further, IN is a web-camera-based system that invites critical inquiry into self-quantification, surveillance, agency, and self-expression. This proposal articulates the artistic themes and intentions of IN, and offers to present to the MOCO community a performative demonstration of IN as a creative practice and system still under development.

16:30-17:00 Session 21B: Mubone: An Augmented Trombone and Movement-Based Granular Synthesizer


Mubone: An Augmented Trombone and Movement-Based Granular Synthesizer

ABSTRACT. The mubone is an incremental evolution of the slide trombone based on a system of electronic augmentations which allow the spatial orientation of the instrument to be incorporated into the control of algorithmic sound and music synthesis. We present our conceptual design for this new instrument, detail its practical implementation with a spatial granular synthesizer, and discuss initial creative uses.

16:30-17:15 Session 21C: A Machine

Practice Works - Performance

A Machine

ABSTRACT. “A Machine” is an interactive dance performance exploring metaphors about machines and movement. Participating audience members are seated inside a gridded space on cushions numbered in a base-16 (hexadecimal) numbering system, which references the numbering systems often used in assembly languages, where commands correspond directly to transistor-based hardware. Prior to being seated, each of these audience members has provided their phone and phone number, which are used as interactive elements in the performance, controlled by a computer script. Excerpts from academic papers by Alan Turing (computer science) and Catherine Elgin (philosophy of dance) are incorporated into the show alongside live performance and interaction with audience members, who are queried by the performer. The piece evokes the dark, rigidly structured internal world of computers, made visible and experiential through embodiment. Thus, the piece is a metaphor for robots in factories -- and humans in dance studios -- where movement tasks are systematized.

17:00-19:00 Session 22: Double Agent: the dancer in the machine


Double Agent: the dancer in the machine

ABSTRACT. The subtitle of this presentation, 'the dancer in the machine', evokes Gilbert Ryle’s critique of René Descartes’s mind-body dualism as the "ghost in the machine". [1] Ryle argued that Cartesian dualism depends on a model of the body-mind relationship that posits the mind as a 'ghost' within, or 'puppeteer' of, the physical body. Ryle's is an embodied concept of cognition, where agency is considered enacted not from a central control system but as distributed, akin to what Gregory Bateson subsequently described as an "ecology of mind". [2] In the recent artistic project 'Double Agent' the authors have been exploring dual modalities of agency in the moving body. 'Double Agent' [3] employs machine-learning and computational representation of human movement alongside algorithmic interaction with, and responses to, live human movement. Double Agent is an interactive augmented performance environment where people (interactors) physically interact with a virtual 'agent' within a large-scale three-dimensional projection. The 'agent' is an emergent phenomenon determined by the behaviour of numerous small invisible virtual elements that are both drawn to and repelled by the movement of human bodies in the installation space. The 'agent' is formed from the totality of this behaviour as a complex three-dimensional visual structure that is both tensile and fluid. Interaction with the 'agent' encourages exploration by interactors of the system's tensional polarity and the sense of physical extension it allows. Double Agent was developed as a collaboration between artist Simon Biggs, computer scientists Mark McDonnell and Samya Bagchi and dance artists Sue Hawksley and Tammy Arjona. The project incorporates a software agent within the system that has learned how to dance.
The title Double Agent evokes the two-fold agency of the work, wherein a computationally generated agent interacts with a live interactor whilst another computationally generated agent simultaneously 'dances' based on what it has learned. Employing over 8 hours of recorded dance data, acquired through the live motion-capture of two dancers improvising within the work, the software agent has learned to improvise dance movements in response to the live actions of interactors. The software agent moves in ways similar to the dancers but also possesses a host of novel moves. This novelty could be considered a form of creative agency emergent from the machine-learning process. Double Agent employs a Long Short-Term Memory Recurrent Neural Network (LSTM-RNN) [4]. LSTM-RNNs allow computational systems to evolve models of complex behaviour in an unsupervised manner, without reference to pre-existing datasets. The system learns by identifying patterns in the data in what could be conceived of as an idealised non-verbal or non-linguistic experiential framework. Such computational systems can acquire the capacity to generate novel datasets that follow similar patterns; in the case of Double Agent, humanoid movement data replicating similar, but not identical, behaviour to that found in the original motion-capture data. In Double Agent we witness the emergence of a software-generated co-interactor that cohabits a virtual installation space with human interactors, contributing to the collective construction and experience of the work. This software agent is not unaware of its immediate environment. The agent monitors the activity of human interactors and conditions its own behaviour in response, as an inverse correlate: the more active the human interactors, the less active the software agent, and vice versa.
Here the installation, the software, computers, sensors and interactors (both human and computer-generated) function as a contingent assemblage that, from moment to moment and state to state, instantiates itself as a dynamic heterogeneous subject. Double Agent raises questions about the role of agency within complex distributed systems, whether human, machine or hybrid. In Double Agent there is no 'dancer in the machine'. The system as a whole, including the machine and the human, is the dancer.
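The abstract cites an LSTM-RNN [4] without architectural detail; for readers unfamiliar with the model, the recurrence a single LSTM cell applies to each incoming pose frame can be written out directly. The pose dimensionality (a hypothetical 22 joints in 3D) and hidden size below are arbitrary illustrative choices, not the Double Agent network.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h, c, W, U, b):
    """One LSTM step: x is the current pose frame, (h, c) the carried state.
    W (4H x D), U (4H x H) and b (4H,) stack the input, forget and output
    gates and the candidate cell state."""
    H = h.shape[0]
    z = W @ x + U @ h + b
    i = sigmoid(z[0*H:1*H])    # input gate
    f = sigmoid(z[1*H:2*H])    # forget gate
    o = sigmoid(z[2*H:3*H])    # output gate
    g = np.tanh(z[3*H:4*H])    # candidate cell state
    c_new = f * c + i * g      # blend old memory with new candidate
    h_new = o * np.tanh(c_new)
    return h_new, c_new

# Feed a random pose sequence (22 joints x 3D = 66 dims) through the cell
rng = np.random.default_rng(3)
D, H = 66, 64
W = rng.normal(0, 0.1, (4*H, D))
U = rng.normal(0, 0.1, (4*H, H))
b = np.zeros(4*H)
h, c = np.zeros(H), np.zeros(H)
for frame in rng.normal(size=(30, D)):
    h, c = lstm_step(frame, h, c, W, U, b)
print(h.shape)  # (64,)
```

Trained on recorded pose sequences, a stack of such cells can predict a plausible next pose from the poses so far, which is the generative mechanism the abstract describes.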

17:30-18:45 Session 23: A Shared Gesture and Positioning System for Smart Environments


Exploring Embodied Sonic Meditation through time and space

ABSTRACT. This project artistically explores new ways of using human movement data to manipulate auditory and visual feedback in real time in a group performance. Three different digital musical instruments, or audio-visual systems, are implemented using body-sensing technology and body-audio-visual mapping strategies. Based on Wu's (2017) previous performance research, it connects Eastern philosophy to cognitive science and mindfulness meditative practice through body expression, voice, electric sound, and data visualization. It augments multidimensional spaces, art forms, and human cognitive feedback. It disrupts the boundary between cultural identities, machine intelligence, and universal human meaning. The collaborative performance initiates a dialogue between Eastern and Western media artists who design and use their own technologies and artistic tools. Through this performance, we look into how individual artists and their systems can work as a whole across diverse cultural backgrounds through their gestures and musical expressions.

A Shared Gesture and Positioning System for Smart Environments
PRESENTER: Chris Ziegler

ABSTRACT. Simultaneous localization and mapping (SLAM) describes the problem, in applications such as robotics, self-driving vehicles, and unmanned aerial vehicles, of mapping an environment while simultaneously tracking agents and navigating within it. Existing research has focused on increasing the speed and quality of this mapping process while reducing cost. In this paper we propose a simple framework for gathering location and gestural tracking of users via their mobile devices. We present an Internet of Things (IoT) framework for building creative environments with existing low-cost technologies, and describe how to use this framework to design a shared gesture space. We focus on solving an interfacing problem of interactive movement-based performances, where the audience watches performers interacting with the stage via technology embedded in the stage or installation.

Augmented Violin Performance: A Model-Free Personal Instrument

ABSTRACT. My augmented violin is a personal instrument consisting of an acoustic violin, real-time signal processing, a custom sensor glove, and a violin shoulder rest embedded with voice coils for haptic feedback. Feature construction in software is not model-based but experimental, continuously refined by trial and error. My approach to movement analysis is not based on classifying formalized styles of bowing but on continuous tracking of bowing movement and variation. These techniques are co-developed with novel sonification methods ad libitum, conditioning the performance space without schematizing possible gestures a priori. Furthermore, ongoing development of the instrument, which includes novel haptic feedback elements, effectively symmetrizes sensory feedforward and feedback paths—the enactive loop between action and perception—yielding refined instrumental dynamics, yet the signal processing decisions entail no claim to universal or scientific validity. I improvise with this instrument in live performance, unfurling its possibilities and resingularizing my own technique(s).

Body, Full of Time

ABSTRACT. Body, Full of Time Practice Abstract: 6th International Conference on Movement & Computing

Project Description

Body, Full of Time is a solo choreographic work performed and created by dance media artist Scotty Hardwig in collaboration with visual artist Zach Duer. Using motion capture, projection, and interactive avatar designs, the work presents a chimeric vision of the human body fragmented in the cyber age, examining the relationship between physical and digital versions of self. The dance emerges in the space between the human and the virtual, with the body both as active sensor and passive recipient to technological forces. In choreography, stage design, and sound composition, the work draws upon old and new ways of making, melding ancient ways of creating and dancing with more contemporary currents in digital culture.

This performance integrates inertial motion capture technology with custom software to freeze, record, and playback portions of a controlled avatar linked to the movement performer. In this way, the choreography is re-coded in digital space so that two simultaneous performances are happening: the movements of the live body alongside the “digital choreography” of the avatar and animations. This is a hybrid performance work that draws together the visual languages of dance, choreography, stage design, sound, visual art, 3D and 2D animation techniques, and contemporary digital aesthetics. It is in this blending of art forms that we are researching embodied fragmentation and multiplicity in three-dimensional virtual space. One of the goals is to investigate this hybridity of form between traditional choreographic arts and the potentialities provided by digital technology. In form and in content, the work investigates the relationship between physical and cyber forms of embodiment.

The work follows a somewhat classical structure in three movements. In the first movement, we see the human body in a raw form (a very physical, highly choreographic section with minimal projections). In the second, we see a duet between the performer and an avatar coded to respond to the performer's movements in various ways. In the third, we see a pacified, passive body being actively "scanned" by projection-mapping software. The three movements take us on a surreal journey from the body as active, into a hybrid technological space, and finally into a passive body subsumed by technological forces.


We are currently in development of three versions of this piece - the full-length work will premiere in the Moss Arts Center here at Virginia Tech April 25-27, after which full-length documentation will be available. We would prefer to present the full-length piece, but it does require a blackbox theater and thrust-stage model. Our current tech rider materials are attached for this. That version involves rear and floor projections, as well as scenic designs which require white marley on the floor. This version would probably work best as some kind of keynote performance or opening night welcome performance; we’re also willing to partner with colleagues within the ASU Dance Department to co-organize a performance of this nature or have it coincide with some manner of residency activities there. Liz Lerman was recently visiting here at Virginia Tech and expressed interest in bringing a work like this. If that kind of co-presentation might be possible, we would definitely embrace that.

The second iteration we are developing specifically for conference-style performances, which is a more crafted version of Movement II (the section in the supporting media), and will involve only back projection. The technical needs for this version are a bit more flexible. We are also waiting to hear back about funding for a third iteration which would be designed entirely for VR, a completely digital, immersive choreographic version using the avatar designs from Movement II. We are slated to be creating this during August 2019, pending funding approval. Ideally, this VR version would be available in lobby displays pre and post-show.

Creative Team

Performance & Choreography: Scotty Hardwig

Visual Direction & Computing: Zach Duer

Music Composition & Performance: Caleb Flood

Animation Designs: Nate King

Stage, Scenic & Costume Design: Estefania Perez-Vera


Software used: Unity, Maya, Ableton Live, Max/MSP, After Effects and hand rotoscoping; 3D-printed mask modeled in Maya and adapted in David

Length of full work: 40-45 minutes [Presentation Version One]

Length of Movement II excerpt: 15 minutes [Presentation Version Two]

Work-in-progress video documentation (password: Drafts): [Movement II Visuals from Rehearsals] // [Movement I Visual Design Sketches] // [Movement I back-projected visuals draft] // [Movement I - Movement II choreography and sound design, without projections -- a good overview of the movement style of the work, in draft form]

More studio and rehearsal videos are also available. We are currently in the process of weaving together the technology and the choreographic elements, so those rehearsal videos are somewhat separate (movement and interactive visuals are in separate videos at the moment), but we can send those extra materials upon request.

19:00-21:00 Session 24: Qualia

ABSTRACT. Qualia is a participatory event that engages MOCO conference attendees in both subtle and overt ways, using simple technology that is readily at hand – their own phones. Qualia is a term that refers to the internal and subjective component of sense perceptions. The overall goal of this work is to forge physical, psychological, aural, and visual connections through embodied means, via a contemporary update on the postmodern task-based movement score.

I propose that the work take place during the opening reception for the conference, when people register and expect to interact with one another anyway. At registration they are invited to text the word “Affect” to an SMS short code on their phones, from which they will receive text message prompts for movement tasks throughout the evening. “Affect” can be read as both a noun and a verb. As such, it implies both an emotional state and the fact that our behavior can influence others. Initially, participants will be prompted to do simple, individual tasks, such as “Sit on the couch,” that they might do anyway, tailored to the specific environment based on consultation with the conference organizers. As they become more comfortable, they will be asked to interact: “Share your food with someone,” or “Introduce yourself with your partner’s last name.” As the participants move around the space, they make self-determined choices to comply with or ignore the prompts they receive. They listen, follow, lead, move, or recline—becoming part of the imagery of the work in ways that collude, conflict, and conjoin with one another. As the evening continues, the prompts will invite the participants to interact more actively with each other, with instructions such as “Gesticulate,” or “Mirror someone.” Some prompts, like “Reject convention,” are abstract. While not physically actionable, these prompts are intended to create an imaginal effect, a form of qualia.

By spontaneously interacting with the environment and one another, the participants generate their own phenomenological experiences. Thus, this work employs the concept of qualia as embodied experience to expand our understanding of one-on-one relationship to a broader cultural one. Varied interpretations of meaning in Qualia evoke questions of what is real, and how our perceptions of reality are influenced by others’ views. The intention is to both reveal these lines of difference, and also to sew them together, creating interstitial, communal bonds in a way that still acknowledges individual difference. Movement happens in the in-between, creating interstitial connections from one place and one person to another. It is the leap that must be made between two entities that creates connection. By prioritizing movement as the means of communication, the participants can foster intangible yet viscerally remembered connections as both spectators and participants.
