MOCO2019: INTERNATIONAL CONFERENCE ON MOVEMENT AND COMPUTING
PROGRAM FOR FRIDAY, OCTOBER 11TH

09:00-10:15 Session 4: Movement and HCI

PAPER SESSION 1 - FRIDAY: Movement and Design

09:00
Performance, Art, and Cyber-Interoceptive Systems (PACIS)
PRESENTER: Kate Digby

ABSTRACT. This paper provides a survey of the research-creation activities of the collaboration Performance, Art, and Cyber-Interoceptive Systems (PACIS). PACIS has been exploring how technology can help us create deeper connections with the world around us, each other, and ourselves by combining bioinformatic sensing technology with physiological awareness techniques found in The Batdorf Technique (TBT). Primary outputs of this research involve the development of novel interfaces that integrate complex physiological data in performance and computational-art contexts. The focus of this endeavour is the sharing of knowledge and exchange of ideas across disciplines, the training of students and emerging scholars in both the technological and the somatic techniques, and the creation of workshops, articles, and performances.

09:25
Using Training Technology Probes in Bodystorming for Physical Training

ABSTRACT. A promising role for wearable technology in physical training is as an assistive tool that helps people access and act upon their proprioceptive and vestibular senses. To design these tools in close relation to a targeted training practice, we propose an embodied design activity using Training Technology Probes (TTPs). These are pieces of technology with a simple interactivity that augments and exteriorizes cues from those senses. Here, we investigate how to design new TTPs and explore the usefulness of existing TTPs as design material to spur creativity in embodied design ideation methods. We report on an embodied co-creation design workshop to generate new technology ideas for an ongoing technology-supported circus training course for children with motor difficulties. We characterize the resulting design concepts, elaborating on three that were implemented as TTP prototypes, and show their relevance in several physical training domains. We also bring forward a novel form of technology-supported bodystorming, adding to previous bodystorming methods.
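The paper does not tie TTPs to a specific implementation, but a probe in this spirit can be very small: a single sensor whose reading is exteriorized as an audible cue. The sketch below (Python; the dead zone, pitch range, and simulated readings are hypothetical, not taken from the paper) sonifies trunk tilt estimated from an accelerometer, staying silent while the wearer is balanced:

    import math

    def tilt_angle(ax, ay, az):
        """Tilt of the gravity vector away from vertical, in degrees."""
        g = math.sqrt(ax * ax + ay * ay + az * az)
        return math.degrees(math.acos(max(-1.0, min(1.0, az / g))))

    def cue_pitch(tilt_deg, dead_zone=5.0, f_min=220.0, f_max=880.0):
        """Silent inside the dead zone; rising pitch as tilt grows."""
        if tilt_deg < dead_zone:
            return None  # balanced: no cue
        t = min((tilt_deg - dead_zone) / (45.0 - dead_zone), 1.0)
        return f_min * (f_max / f_min) ** t  # exponential pitch ramp

    # Synthetic accelerometer samples (in g); a real probe would read hardware.
    for ax, ay, az in [(0.0, 0.0, 1.0), (0.1, 0.0, 0.99), (0.5, 0.2, 0.84)]:
        f = cue_pitch(tilt_angle(ax, ay, az))
        print("cue:", "silent" if f is None else f"{f:.0f} Hz")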

09:50
Shifting Spaces: Using Defamiliarization to Design Choreographic Technologies That Support Co-Creation

ABSTRACT. Choreography includes much improvisation and situated decision-making. These embodied abilities have inspired recent trends in HCI to design systems for nuanced experiences of movement [1]. While there are many systems that support aspects of choreographic composition, such as generating rules for improvisation [6] or stringing together poses to create a dance [7], this trend suggests there is room for developing interactive technologies that can more deeply support creative procedures from an embodied perspective. We look to defamiliarization as one tactic that can enable new perspectives in the creative process during otherwise familiar experiences. Defamiliarization can be described as an attentional technique designed to engage a human user's decision-making capabilities to provoke unfamiliar approaches to their own creativity. We became interested in more deeply analyzing the interaction between mover and choreographic system when we discovered limitations while attempting to choreograph collaboratively with a variety of existing systems. This led us to look more closely at how defamiliarization has been used in existing human-computer interaction projects, to understand provocative interaction in a different domain. We then applied the same analysis to choreographic technology projects to confirm the creative options enabled by this framework. This paper presents a framework for co-creative systems that proposes the analytical components of Disorientation, Open-Play, Closed-Exploration, and Balanced Creativity. These components focus on design for choreography, yet they can be applied to many creative domains. To test our framework, we also present a variety of speculative designs that would engage a choreographer in the sensory exploration, movement generation, and composition processes.

10:15-10:30 Coffee Break
10:30-12:00 Session 5: Sound and Movement Computing

Sound and Movement Computing

10:30
Gesture-Ink-Sound: Linking Calligraphy Performance with Sound
PRESENTER: Jan Schacher

ABSTRACT. In calligraphy, a brush stroke is rooted in an inner image, breath, and the uninterrupted flow of movement. The same can be said of a bow stroke on a string instrument or a note sounded on a wind instrument. This article documents the encounter between a specific, two-person form of calligraphic performance, movement analysis techniques, and the mapping of brush gestures to sound processes. It shows how the link between gesture and sound is established from data obtained in motion-capture sessions. This enables the exploration of different models of sound processes, their specific modes of operation, and an understanding of what makes a stroke. Questions and issues arising from this concrete work are collected, and a reflective analysis is carried out via a diagrammatic process. A discussion of critical limitations and possible extensions of this configuration concludes the article.
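The article's mapping pipeline is not reproduced here, but one plausible link of the kind it describes (a hedged sketch; the frame rate, speed ceiling, and grain-density range are assumptions) derives brush-tip speed from motion-capture positions and maps it exponentially onto a granular-synthesis density:

    import numpy as np

    def tip_speed(positions, fps=120.0):
        """Frame-to-frame speed (m/s) of the brush tip.

        positions: (N, 3) array of mocap marker positions in metres."""
        return np.linalg.norm(np.diff(positions, axis=0), axis=1) * fps

    def speed_to_grain_density(speed, lo=2.0, hi=200.0, v_max=2.0):
        """Faster strokes -> denser grain texture (grains per second)."""
        t = np.clip(speed / v_max, 0.0, 1.0)
        return lo * (hi / lo) ** t  # exponential curves read as more natural

    # Toy trajectory: a hesitant start, then a fast flick (hypothetical data).
    traj = np.array([[0, 0, 0], [0.001, 0, 0], [0.003, 0, 0], [0.02, 0.01, 0]])
    print(np.round(speed_to_grain_density(tip_speed(traj)), 1))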

10:55
Bodily Signals Entrainment in the Sound of Music

ABSTRACT. As a user listens to music, their bodily biorhythms can entrain with the music's rhythms. This work describes a human-computer interface used to characterize the evolution of the stochastic signatures of physiological rhythms across the central and peripheral nervous systems in the presence (or absence) of music. We track the variability of heart, EEG, and kinematic signals under different music-driven conditions to identify the parameter manifold and context with maximal signal-to-noise ratio, as well as regions of maximal and minimal statistical co-dependence of present events on past events.
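The abstract's "stochastic signatures" are not defined further here, but a common recipe in this literature (a sketch of one such estimator, not necessarily the authors' exact pipeline) fits a Gamma distribution to moment-to-moment fluctuations of a signal; the fitted (shape, scale) pair then locates each condition on a parameter plane, with scale acting as a noise-to-signal index:

    import numpy as np
    from scipy import stats

    def stochastic_signature(series):
        """Gamma (shape, scale) of moment-to-moment fluctuations.

        For a Gamma distribution, variance / mean = scale, so a larger
        scale indicates a noisier signal."""
        x = np.asarray(series, float)
        fluct = np.abs(np.diff(x))   # moment-to-moment fluctuations
        fluct = fluct[fluct > 0]
        shape, _, scale = stats.gamma.fit(fluct, floc=0.0)
        return shape, scale

    # Toy inter-beat intervals (s) under two hypothetical listening conditions.
    rng = np.random.default_rng(0)
    silence = 0.80 + 0.02 * rng.standard_normal(500)
    music = 0.80 + 0.05 * rng.standard_normal(500)
    for label, s in [("silence", silence), ("music", music)]:
        a, b = stochastic_signature(s)
        print(f"{label}: shape={a:.2f} scale={b:.4f} (higher scale = noisier)")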

11:20
Instruments of Articulation: Signal Processing in Live Performance

ABSTRACT. Building on the first author’s hybrid/augmented violin practice and the second author’s work with responsive media environments, we build and reflect on collectively-played room-scale instruments that afford the precision and nuance of an individually-played real-time gestural media system. We consider gestural instruments designed for the interplay of action and perception at the sensorimotor level, bypassing tokenization of features of activity and sensors. Our gesturally-modulated media instruments are based not on models or a priori schemata but driven by continuous adaptation to contingent activity and state of the event, as well as compositional intent. We think of such performable, expressive systems as instruments of articulation rather than of representation. Our work is motivated by a progression from phenomenological interpretations of individually-played instruments, through non-anthropocentric notions of lived experience, to ecosystemic approaches to ensembles of real-time instruments, people, and processes concurrently co-articulating an event.

11:45
The Airborne Instruments nUFO: a Movement Based Musical Instrument for Possibility Space Exploration
PRESENTER: Isak Han

ABSTRACT. The Airborne Instruments nUFO (nontrivial/new flying object) is a new movement-based digital musical instrument designed for maximal motional freedom of the performer. A handheld wireless Interactor digitizes large-scale movement via a 9-axis IMU, together with a set of 8 touch-sensitive pads for fine-motor finger action. The corresponding software, nUFO_App, applies elaborate meta-mapping strategies (called Influx) to the movement data to inform a number of sound processes, such that even very simple movements create complex changes in the sound; this frees players from distracting technical concerns and empowers them to focus on playing by listening and intuitive motion.
The nUFO distills 15 years of research into complex sound synthesis, just-in-time programming, modal control, and meta-control strategies with a physical Interactor, ergonomically designed from scratch for intuitively exploring the possibility spaces of such systems.
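Influx itself lives in the SuperCollider world; the principle behind such meta-mapping, though, is compact: a dense weight matrix fans a few control dimensions out onto many synthesis parameters, so even a tiny gesture moves every parameter a little. A minimal Python sketch of that idea (dimensions and weights are illustrative, not the nUFO_App's actual values):

    import numpy as np

    class ManyToMany:
        """Influx-style meta-mapping sketch: n_in controls (-1..1)
        fan out to n_out synthesis parameters."""

        def __init__(self, n_in=9, n_out=16, seed=1):
            rng = np.random.default_rng(seed)
            # Dense random weights: every input nudges every output.
            self.w = rng.uniform(-1.0, 1.0, size=(n_out, n_in))

        def __call__(self, x):
            return np.tanh(self.w @ np.asarray(x))  # soft-clip to -1..1

    mapper = ManyToMany()                     # nine IMU-derived controls in
    pose_a = [0.1, 0, 0, 0, 0, 0, 0, 0, 0]
    pose_b = [0.2, 0, 0, 0, 0, 0, 0, 0, 0]   # one axis barely changes...
    print(np.round(mapper(pose_b) - mapper(pose_a), 3))  # ...all 16 params move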

12:00-13:30 Lunch Break
12:00-13:30 Session 6A: Poster Session

Poster Session #1 (6 posters)

12:00-12:30 Session 6C: Intimidatrix
12:00
Intimidatrix

ABSTRACT. Intimidatrix is a performative installation for Exo.Rosie, a custom-made electronic instrument I designed and built. Exo.Rosie is a roughly body-sized frame inspired by mecha (wearable robot suits such as Iron Man). Infrared proximity sensors of varying active distances demand choreography on micro (e.g. slight lean while standing) and macro scales (e.g. falling to the floor). This motion input translates directly to audio output via self-designed integrated-circuit oscillators. Intimidatrix, a piece for Exo.Rosie, foregrounds the performative risk inherent to the full-body playing style of this instrument, and interrogates the unique possibilities of occupying versus wearing technology.
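Exo.Rosie's sound is generated by analog integrated-circuit oscillators, but the proximity-to-pitch relationship can be sketched digitally (all distances and frequency ranges below are hypothetical placeholders, not the instrument's actual calibration):

    import numpy as np

    SR = 44100  # sample rate

    def proximity_to_freq(dist_cm, d_near=5.0, d_far=80.0,
                          f_lo=60.0, f_hi=1200.0):
        """Closer body -> higher pitch; beyond the active range the
        oscillator rests at its lowest frequency."""
        t = np.clip((d_far - dist_cm) / (d_far - d_near), 0.0, 1.0)
        return f_lo * (f_hi / f_lo) ** t

    def render(dist_readings, hop=0.05):
        """One oscillator tracking a stream of sensor readings."""
        phase, out = 0.0, []
        for d in dist_readings:
            f = proximity_to_freq(d)
            t = np.arange(int(SR * hop)) / SR
            out.append(np.sin(2 * np.pi * f * t + phase))
            phase += 2 * np.pi * f * hop
        return np.concatenate(out)

    # A slow lean in (80 cm -> 20 cm), then a fall to the floor (5 cm).
    audio = render(np.concatenate([np.linspace(80, 20, 40), np.full(10, 5.0)]))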

14:45-15:00 Coffee Break
15:00-16:05 Session 8: Robots

Robotics

15:00
Toward Expressive Multi-Platform Teleoperation: Laban-Inspired Concurrent Operation of Multiple Joints on the Rethink Robotics Baxter Robot in Static and Dynamic Tasks

ABSTRACT. Human motion calls upon embodied strategies, which can be difficult to replicate in teleoperation architectures. This paper presents a teleoperation method that centers on the Space component of Laban Movement Analysis and may improve the dynamic complexity of teleoperation commands, allowing a trained user to command multiple joint angles at one time via a large database of stored poses indexed by Space parameters. The method is compared to a benchmark joint-by-joint mode of control on a Rethink Robotics Baxter with compliant limbs, using a Microsoft Xbox controller. Across four tasks with a trained operator, analyses of the number of simultaneously active joints and of time to completion emphasize the utility of the proposed method. In particular, for the two presented static tasks, the average number of joint angles moving at one time improves and completion times are reduced under the proposed method. Plots of behavior show additional qualitative differences in operator strategies and resulting motion, which are also discussed. Future work will extend this initial demonstration to more formal trials with multiple operators. This method may help achieve more fluid, continuous, and improvised motion in teleoperation of robots via gamepads, as currently used in disaster-response platforms.
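The paper's pose database is not published with the abstract; the lookup it describes can nonetheless be sketched (a toy version with made-up directions and joint values) as a nearest-neighbour match between the operator's commanded spatial pull and stored Space-indexed poses, each of which commands every joint at once:

    import numpy as np

    # Hypothetical database: a Laban Space direction (unit-ish vector for the
    # hand's spatial pull) paired with a full 7-DOF arm pose (radians).
    POSES = {
        "forward-high": ([0.0, 0.7, 0.7],  [0.0, -0.6, 0.0, 1.2, 0.0, -0.6, 0.0]),
        "side-middle":  ([1.0, 0.0, 0.0],  [0.8,  0.0, 0.0, 0.9, 0.0,  0.0, 0.0]),
        "forward-low":  ([0.0, 0.7, -0.7], [0.0, -0.2, 0.0, 1.8, 0.0,  0.4, 0.0]),
    }

    def pose_for(pull):
        """Stored joint vector whose Space direction best matches the
        commanded pull (cosine similarity)."""
        pull = np.asarray(pull) / np.linalg.norm(pull)
        name, (_, joints) = max(
            POSES.items(),
            key=lambda kv: float(pull @ (np.asarray(kv[1][0])
                                         / np.linalg.norm(kv[1][0]))))
        return name, joints  # all seven joints commanded together

    print(pose_for([0.2, 0.9, 0.5]))  # gamepad stick read as a pull vector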

15:25
Tonight We Improvise! Real-time tracking for human-robot improvisational dance

ABSTRACT. One challenge in robotics is to develop motion planning tools that enable mobile robots to move safely and predictably in public spaces with people. Navigating public spaces requires a high degree of information about context and environment, which can be partly identified by understanding how people move. Motivated by research in dance and robotics, we developed an exploratory study of improvisational movement for dance performance. Dance forms have recognizable styles and specific interaction patterns (turn-taking, tempo changes, etc.) that reveal important information about behavior and context. Following extensive iterations with expert dancers, we developed a sequence of basic motion algorithms based on improvisation exercises to generate three unique, original performances between a robot and human performers trained in various dance styles. We developed a novel method for tracking dancers in real time using inputs to generate choreography for non-anthropomorphic robots. Although the motion algorithms were identical, the individual dancers generated vastly different performances and elicited unexpected motions and choreographies from the robot. We summarize our study and identify some challenges of devising performances between robots and humans, and outline future work to experiment with more advanced algorithms.
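The study's algorithms are described only at a high level, but one of the classic improvisation exercises it builds on, mirroring, reduces to a small control loop: each tick, the robot eases toward the dancer's tracked position reflected through a reference point. A hedged sketch (the gain and geometry are invented for illustration):

    def mirror_step(dancer_xy, robot_xy, center=(0.0, 0.0), gain=0.15):
        """One tick of a mirroring exercise: the robot eases toward the
        dancer's position reflected through the stage centre."""
        target = (2 * center[0] - dancer_xy[0], 2 * center[1] - dancer_xy[1])
        return (robot_xy[0] + gain * (target[0] - robot_xy[0]),
                robot_xy[1] + gain * (target[1] - robot_xy[1]))

    robot = (1.0, 1.0)
    for dancer in [(2.0, 0.0), (2.0, 0.5), (1.5, 1.0)]:  # tracked positions
        robot = mirror_step(dancer, robot)
        print(f"robot -> ({robot[0]:.2f}, {robot[1]:.2f})")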

15:50
Embodied Intention: Robot Spinal Initiation to Indicate Directionality

ABSTRACT. This paper explores how a spine-inspired robot can be designed to provide embodied cues to performers, indicating its intention to change direction through its shifting of weight. Our prior work has explored the importance of utilizing body-level cues such as breath and center of gravity to provide more intuitive information about how a robot, media, or physical agent can act in collaboration with human performers. We detail the design and implementation process of the spinal robot and situate this exploration in a body of work on embodied interaction.

16:05-16:15 Coffee Break
16:15-17:45 Session 9: Generative tension in cross-disciplinary collaboration: Call for provocations and panelists at MOCO 2019

Panel

16:15
Generative tension in cross-disciplinary collaboration: Call for provocations and panelists at MOCO 2019

ABSTRACT. We pose a question of great significance to the MOCO community, that is: what aspects of your practice/research are invisible to your collaborators? We seek responses from individuals and teams engaged in cross-disciplinary research and collaboration spanning the broad array of practices implicated in the field of ‘movement and computing’. Responses may be in the form of a succinct online provocation (in image, sound, writing, video, or other), and may be submitted independently or collectively. In addition, a group of panelists will be invited to convene for a public discussion at MOCO 2019, drawing on the body of provocations to address the notion of generative tension in cross-disciplinary collaboration. The intention of this panel is to draw out and mobilize critical differences between the motives and methods of various disciplinary communities as a source of mutual inspiration and innovation.

17:55-18:30 Session 10: Body and Embodiment in Dance Performance

Performance

17:55
Body and Embodiment in Dance Performance

ABSTRACT. In this paper, we describe the modalities of the body as interpreted and utilized over time by various practitioners, including choreographers, artists, and architects. The definition of the body has been extended by concepts from other disciplines, such as philosophy, where some define it by the potential of its actions. Inspired by these philosophical ideas of the body, Skin-awareness, an interactive dance performance, was developed to explore and experiment with the body as a self-aware entity, embodying and interacting with artefacts and an immersive environment. The technical and choreographic design are introduced, followed by a discussion of the composition of the work.

18:15-19:00 Session 12A: Embodying Notation: Scoring Movement in Augmented Reality

Time-based Demo

18:15
Embodying Notation: Scoring Movement in Augmented Reality

ABSTRACT. How can we capture dance? Once captured, how can we analyze dance movement? Further, how can other dancers embody this captured movement? These are some of the questions we seek to answer with our augmented reality (AR) dance notating tool LabanLens. LabanLens is an application for the Microsoft mixed-reality HoloLens headset. It uses two forms of dance notation: Labanotation, an internationally recognized system for documenting and analyzing dance through dance “scores,” or written representations of movement similar to music notation; and Motif scoring, which describes a dance’s general characteristics and can also be used to generate new dances and analyze existing ones. LabanLens projects new or existing movement notation scores into the user’s field of view, which enables the user to perform them in an immersive videogame-style environment. LabanLens engages innovative digital practices responding to diverse intelligences and abilities to immerse students in a digital-kinesthetic experience with dance analysis. It introduces a modality to re-envision how we teach and employ Laban-based scoring tools, and an immersive environment for composing dances, analyzing them, and understanding historical embodiments. The AR experience engenders students’ articulate analysis of dances, which deepens their kinesthetic-empathetic engagement with each other and their world. LabanLens expands existing dance scoring methods’ capacities to advance knowledge for accessing and analyzing new and existing dances, with broader applications for motion analysis in the athletics and health fields. It has additional applications in diverse educational environments.
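LabanLens's internal data model is not published in this abstract; still, the essentials a Labanotation or Motif score must carry translate naturally into a timeline of symbols, each with a staff column, a direction sign, a start beat, and a length (in Labanotation, symbol length encodes timing). A hypothetical Python sketch of such a score and its playback query:

    from dataclasses import dataclass

    @dataclass
    class Symbol:
        column: str      # staff column, e.g. "right support", "left arm"
        direction: str   # direction sign, e.g. "forward-high"
        start: float     # beat at which the movement begins
        length: float    # duration in beats (symbol length = timing)

    # A two-beat toy phrase (illustrative, not an exported LabanLens score).
    score = [
        Symbol("right support", "forward-middle", 0.0, 1.0),
        Symbol("left arm",      "side-high",      0.0, 2.0),
        Symbol("left support",  "forward-middle", 1.0, 1.0),
    ]

    def active_at(beat):
        """Symbols the dancer (or a playback avatar) embodies at a beat."""
        return [s for s in score if s.start <= beat < s.start + s.length]

    for beat in (0.5, 1.5):
        print(beat, [(s.column, s.direction) for s in active_at(beat)])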

Addressing MOCO 2019 themes of Movement Imaginaries, we propose a demonstration of LabanLens in the “practice work” category that engages the topics of movement notation systems and computation, movement generation, and movement analysis through digital technology. This lecture-demonstration will introduce audiences to LabanLens and its user interface and user experience. We propose to present a demonstration of LabanLens that displays the capabilities of Phase 1 of the application, which includes scoring and playback. We will do a live demo of the application, followed by an explanation of the project and its future directions.

Our project’s framework is based on three aspects: historical embodiment, creative process, and quantitative reasoning. LabanLens re-envisions how we teach and employ Laban-based scoring tools through a digital environment for composing dances, analyzing them, and understanding history through the body. Since reading a Labanotation score requires the user to perform the dance, users embody how dancers performed the work when the notator recorded it, giving the user a sense of historical subjectivities (Watts, 2013). By dancing with avatars performing existing repertory, as we propose in future versions of LabanLens, users can match the movement qualities of the dancing avatars to similarly feel in their bodies what it was like to perform that dance at that time (Thompson et al., 2016-2017). LabanLens’s immersive digital environment enables dancers to embody and perform dances from notated scores, and to create and notate their own dances. The integration of symbol palettes into the user’s field of view provides possibilities for dancers to choose tools and options for making dances. Additional project goals include furthering users’ creative processes by enabling them to analyze (score) their work in situ, see avatar playback of their choreography, and collaborate with others; and enhancing users’ quantitative reasoning skills by harnessing Laban frameworks’ quantification of movement and building language skills through the application’s interactive scoring. In addition, reading scores of existing repertory, generating scores for new dances, and analyzing dances through the tools in LabanLens quickens users’ processes of analyzing and embodying movements.

In LabanLens’s AR environment, users dance in the real space of their room or studio alongside digital projections, as opposed to virtual reality (VR), where users are in a completely digital landscape. LabanLens’s mixed digital-physical characteristics enable the digital elements to help users in real-world spaces. The HoloLens is untethered, enabling users to dance fully in the space, hands-free. Users are limited neither by needing to read a score from paper nor by cables tied to a battery pack. To notate, the user moves holographic symbols using hand gestures (with an optional Xbox controller), so that the act of scoring is kinesthetic and users remain in their dancing space to score dances. LabanLens capitalizes on fields of knowledge from dance and digital technologies to expand dance scoring capacities for research, teaching, and health applications.

LabanLens furthers digital applications for Laban-based movement notation. It is in conversation with desktop, tablet, motion-capture, and virtual-reality projects. Researchers have developed desktop applications for notating Laban-based scores in LabanWriter (Venable, 1989), LabanPad (Griesbeck, 1996), Calaban (Adamson, 1991), and LabaNotator (Bezjak, 2012); search-based score analysis in Labanatory (Fügedi & Misi, 2005); tablet-based notation editing in KineScribe (Kosstrin, 2014); generation of three-dimensional dancing figures from Labanotation scores in LabanEditor (Kojima et al., 2002), LabanDancer (Wilke et al., 2005), and a Unity 3D application for notating Thai dance (Tongpaeng et al., 2017); generation of Labanotation scores from motion-captured data in GenLaban (Choensawat et al., 2015); and recognition of movements through Laban Movement Analysis using Kinect (Bernstein et al., 2015) and bipeds (Truong et al., 2016). LabanLens expands on LabanWriter’s and KineScribe’s functionality and generates holographic scores; future phases of LabanLens will generate three-dimensional figures from input scores. LabanLens is currently unique as an augmented-reality application. Our perspective of pursuing this technology for education and research to engender creative processes, historical embodiment, and quantitative reasoning builds on previous research while pursuing new directions.

REFERENCES
Adamson, A. (1991). Calaban. [Software].
Bernstein, R., Shafir, T., Tsachor, R., Studd, K., & Schuster, A. (2015). Laban movement analysis using Kinect. International Journal of Computer, Electrical, Automation, Control, and Information Engineering, 9(6), 1574-1578.
Bezjak, P. (2012). LabaNotator. [Software]. Available from http://www.labanotator.com/
Choensawat, M., Nakamura, M., & Hachimura, K. (2015). GenLaban: A tool for generating Labanotation from motion capture data. Multimedia Tools and Applications, 74(23), 10823-10846. doi: 10.1007/s11042-014-2209-6
Fügedi, J., & Misi, G. (2005). Labanatory. [Software]. Available from http://www.labanatory.com/
Griesbeck, C. (1996). LabanPad. [Software]. Available from http://user.uni-frankfurt.de/~griesbec/CHOREOE.HTML
Guest, A. H. (2005). Labanotation: The system of analyzing and recording movement (4th ed.). New York: Routledge.
Guest, A. H., & Curran, T. (2008). Your move: The Language of Dance approach to the study of movement and dance (2nd ed.). New York: Routledge.
Kojima, K., Hachimura, K., & Nakamura, M. (2002). LabanEditor: Graphic editor for dance notation. In Proceedings of the 11th IEEE International Workshop on Robot and Human Interactive Communication. doi: 10.1109/ROMAN.2002.1045598
Kosstrin, H. (2014). KineScribe. [Software]. Available from http://www.kinescribe.org
Maletic, V. (2005). Dance dynamics: Effort and phrasing. Columbus, OH: Grade A Notes.
Thompson, J., Berezina-Blackburn, V., & Udakandage, L. (2016-2017). Motion capture and VR for physical theatre training. Retrieved from https://accad.osu.edu/research/projectgallery/motion-capture-and-virtual-reality-physical-theatre-training
Tongpaeng, Y., Rattanakhum, M., Sureephong, P., & Wicha, S. (2017). Implementing a tool for translating dance notation to display in 3D animation: A case study of traditional Thai dance. In S. Benferhat, K. Tabia, & M. Ali (Eds.), Advances in Artificial Intelligence: From Theory to Practice. doi: 10.1007/978-3-319-60045-1_3
Truong, A., Boujut, H., & Zaharia, T. (2016). Laban descriptors for gesture recognition and emotional analysis. The Visual Computer: International Journal of Computer Graphics, 32(1), 83-98. doi: 10.1007/s00371-014-1057-8
Venable, L. (1989). LabanWriter. [Software]. Available from https://dance.osu.edu/research/dnb/laban-writer
Watts, V. (2013). Archives of embodiment: Visual culture and the practice of score reading. In M. Bales & K. Eliot (Eds.), Dance on Its Own Terms: Histories and Methodologies (pp. 363-388). New York: Oxford University Press.
Wilke, L., Calvert, T., Ryman, R., & Fox, I. (2005). From dance notation to human animation: The LabanDancer project. Computer Animation and Virtual Worlds, 16(3-4), 201-211.

18:15-19:15 Session 12B: Skeleton Conductor: an interactive real time, movement-based VR experience

Time-based Demo/Performance

18:15
Skeleton Conductor: an interactive real time, movement-based VR experience

ABSTRACT. Skeleton Conductor (SC) is a cross-disciplinary research and development project that aims toward a new immersive and interactive VR experience, one that positions the perceiver and their body as an active agent within the virtual realm while creating and interacting with the sensorial input displayed in a head-mounted display (HMD). We propose to share the current state of the Skeleton Conductor project and to reflect on the many research and technology-development questions this project has raised. The aim of the practice work is to present preliminary results from our user-experience target groups and to reflect upon some of the key research areas that explore the significance of embodied experience. We plan to further define the modes and characteristics of full-body real-time interactivity within this particular virtual environment. In practical terms, this means setting up volumetric capturing equipment and a VR device (Vive, Oculus) and allowing a participant to enter the SC experience while other participants watch from a large monitor or video projector. With this practice work, we hope to enable knowledge exchange and feedback from our peers, artists, and other researchers.

19:00-19:20 Session 14: DATURA: Kymatocarpa – The Ephemeral Performance Garden

Performance

19:00
Performance Proposal by DATURA: Kymatocarpa – The Ephemeral Performance Garden

ABSTRACT. A DATURA Performance: Kymatocarpa – The Ephemeral Performance Garden

DATURA proposes to perform a 12-to-20-minute work titled Kymatocarpa – The Ephemeral Performance Garden.

DATURA is an interdisciplinary performance ensemble formed to develop a multidisciplinary approach to improvisational performance and to explore the boundaries and crossovers of digital and analog media and processes in live performance settings. The current lineup includes five dancers, three composer/musicians, and one visual artist/musician, all creating work at a professional level.

The ensemble works from an event-based score devised by the group. The musicians perform using a combination of analog patch synthesizers, laptop digital synthesis, processed woodwinds, and homemade amplified percussion instruments. Some dancers wear analog accelerometers that connect to a digitally-modeled Moog synthesizer tuned to 31 tones per octave. Other dancers wear digital accelerometers that feed into an analog patch synthesizer. In this way, during certain parts of the piece the dancers provide all of the sound as well as the movement. Projected images used in the performance are processed live using input from the sensors as well as a combination of live digital and analog processing.
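A 31-tone-per-octave tuning divides the octave into equal steps of ratio 2^(1/31), i.e. f(n) = f0 · 2^(n/31). As a rough illustration of the accelerometer-to-synthesizer link described above (the g-range, reference pitch, and quantization here are assumptions, not DATURA's actual patch), acceleration magnitude can be quantized onto that scale:

    def edo31_freq(step, f0=261.63):
        """Frequency of the nth 31-EDO step above a reference pitch f0."""
        return f0 * 2 ** (step / 31)

    def accel_to_step(magnitude_g, g_min=0.9, g_max=3.0, steps=31):
        """Quantize accelerometer magnitude (in g) to one octave of 31 steps."""
        t = max(0.0, min((magnitude_g - g_min) / (g_max - g_min), 1.0))
        return round(t * steps)

    for g in (1.0, 1.6, 2.8):  # stillness, a sway, a sharp gesture
        n = accel_to_step(g)
        print(f"{g:.1f} g -> step {n:2d} -> {edo31_freq(n):7.2f} Hz")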



Datura kymatocarpa has the largest flowers of any Datura species; the flowers open for only one night and wither the next day. The Ephemeral Performance Garden explores the impermanence and beauty of the fleeting elements of our lives. Relationships, work, family, living spaces – often the things we most take for granted – are beautiful one day (one night) and gone the next.

For this work we will create a Performance Garden in the performance space. The experience is a guided walk-through: participants are led, in groups, on a tour of the space, viewing and becoming immersed in the emergent elements of the piece as they traverse it.

This is a practice work. It is a performance that can be easily accommodated by the Nelson Fine Arts Center room 122. We will provide our own technology, but will require basic lighting, live sound reinforcement in QUAD, and video projection. A floor suitable for dance is also needed. 

This is a new work that has not yet been documented.
Video of previous work: https://youtu.be/FFQIGOL5Mxg
Video of sensor rehearsal for Kymatocarpa 2-24-2019: https://youtu.be/fFXiEdsqMq8
Website with complete list of performances and workshops: datura-phx.org