09:00 | Recognizing and altering the emotional body SPEAKER: Nadia Bianchi-Berthouze ABSTRACT. Increasingly popular wearable sensing and feedback technology is starting to consider the emotional body as a means for creating affect-competent real-life applications. In this talk, I will discuss the opportunities offered by the sensed emotional body and the challenges that need to be addressed for real-world ubiquitous applications. The discussion will be grounded in our work on supporting chronic pain physical rehabilitation in everyday activity and on altering one’s body perception in healthy populations and in people with body dysmorphic disorder. |
13:00 | Jinn-Ginnaye SPEAKER: Kirk Woolford ABSTRACT. Jinn-Ginnaye is an exploration of movement in place. It is a collection of dance pieces exploring issues of bringing western dance performance to the United Arab Emirates, where local modesty laws influence how women can be shown in public. The pieces use video compositing, motion capture, and Virtual Reality techniques to remove the body of the dancer, but leave behind the dance, and the traces of the desert in which it was created. |
13:00 | Collaborative 2D center of mass serious game SPEAKER: unknown ABSTRACT. We have implemented a 2D serious game based on collaboration between players rather than a competitive scenario. It is based on controlling the players’ Center of Mass, a physical concept which links participants in real time. We will explain the main pedagogical impacts of this collaborative movement from K-12 to university. |
13:00 | Maria Montessori meets gesture tracking technologies SPEAKER: unknown ABSTRACT. This paper presents our recent work on the relevance of introducing digital technologies into the education field, especially at kindergarten level. Specifically, we focus on digital technologies that allow any kind of movement tracking, and on how they can enhance teaching and learning potential in various fields. The prototype we submit is the first of a series focusing on writing and reading education. We present the various influences that led us to this prototype and describe the perspectives for further experimentation. We also mention how this initial work can inspire similar studies in other fields dealing with body gestures. |
13:00 | Becoming Light SPEAKER: Timothy Wood ABSTRACT. Becoming Light is an immersive world made for live performance and for virtual reality. As a virtual reality installation, participants are free to interact with the world of light and sound on a path through memory and dream-like space. The motions of the body are remembered within the world and re-encountered as ghost-like storytellers along the journey. As the pathway through the world unfolds, voices and recorded poetry are discovered, revealing an ethereal narrative. The shape, timing, and velocity of the body change the way the story is experienced. As a performance piece, the virtual reality headset is replaced with projectors and a stage. A solo dancer guides the audience through the world of light while following an improvisational somatic movement score. |
13:00 | The Box: A Game About Two-Handed 3D Rotation SPEAKER: Carlos Gonzalez Diaz ABSTRACT. The Box is a prototype 3D puzzle game to study two-handed motion control schemes for spatial rotation. Using two handheld motion tracking devices, players are tasked to rotate a maze cube to roll a small sphere towards a goal inside the cube. We designed the game to iteratively observe how people would spontaneously use controls and develop and refine a ‘natural’ control scheme from that. Initial results indicate no immediate clear principles of best practices. |
13:00 | Wired 2 SPEAKER: unknown ABSTRACT. The Stream Project’s founding members, two dancers and a neuroscientist, explored the possibilities of using dancers’ physiological information to create a series of works, called Wired, that disrupts and informs the viewer’s understanding of their own physiological state. Brain wave states, heart rate variability and respiratory rate were used to create a series of artistic dance works. The series focuses on bringing scientific exploration into a creative environment, taking full advantage of the visual and auditory possibilities already being used within the field. Wired is an exciting collaboration between dance performance, neuroscience, film, sound and lighting, culminating in a live-feed multimedia performance. The project worked with a creative coder to develop an installation in which an audience member's heart rate selects different sections of dance film footage. The footage shows dancer Genevieve Say dancing on a bridge in the Peak District. The audience member holds an object with embedded heart rate sensors, and their heart rate dictates the section of the film shown. The installation was shown as part of the Wired series at FACT (Foundation for Art and Creative Technology), Liverpool in 2015. The footage has also been used to make a dance film. We propose to show the Wired 2 installation at MOCO17, as it would be a good opportunity for us to show the work and get feedback on areas for development. |
13:00 | The EU ICT H2020 WHOLODANCE dance learning applications SPEAKER: Stefano Piana ABSTRACT. We present a platform to assist dance students, teachers and choreographers in learning, practicing and teaching dance principles and in creating new choreographies. The aim of this work is to present the ongoing progress of the WHOLODANCE EU ICT project. Our demonstrations will show different applications developed within the framework of the project, including: an online repository of dance sequences from different dance styles captured with motion capture; a browsing and visualization interface for the repository that makes use of augmented reality and holographic displays; a “movement sketch” application in which participants’ movements are recorded using low-spec technology, analysed and used to retrieve similar examples from the existing repository; and an authoring tool that blends and merges two different dance sequences into a new one. |
13:00 | Collective interactive machine learning using mobiles SPEAKER: Joseph Larralde ABSTRACT. We demonstrate our prototype, called COMO, which allows for distributed and collective gesture recognition. Gestures can be recorded using mobiles, thanks to the motion-sensing capabilities of smartphones. All recorded gestures are then available on a server and can be retrieved on any other connected mobile. The recognition algorithm can run in webpages on each mobile. During the demonstration, users will be able to test the system using their own mobiles and participate in collective scenarios, including gesture design and music playing using user-defined gestures. |
13:00 | Bodytraces: Embodied Movement Exploration Via Feedforward Visualizations SPEAKER: Shu-Yuan Hsueh ABSTRACT. A phenomenological approach to interaction design puts the body at the center of inquiry. When designing body-centric interfaces, reflective awareness of how the body moves is an important aspect for consideration. This paper presents a full-body interactive system that allows end users to explore movements using dynamic feedforward visualizations of movement pathways. We propose a demonstration of the system as an interactive installation through which participants are encouraged to move in order to interact with visual representations of their movement characteristics. The system captures each participant’s movement in terms of spatial trajectories and dynamic qualities. It then encourages the participants to improvise using existing movement ideas by embodying, appropriating, and varying them. The global visual canvas makes visible the movement traces of the participants over a specified period of time as feedback, and the future possibilities and movement potential as feedforward. |
13:00 | Polytropos Project SPEAKER: Christina Mamakos ABSTRACT. Polytropos Project is a set of experiments designed to explore aspects of creativity through the blending of elements from multiple mediums of expression and communication including sound, language, image and movement. The theory underlying and directing the process of blending is Joseph Goguen’s computational account of conceptual blending complemented by an understanding of style as a choice of blending principles. In our view, such experiments allow us to explore the creative possibilities of computational conceptual blending in the field of multimedia art practice presenting possibilities of self-propagating AI modes of creativity. |
15:30 | Four-way mirror game: developing methods to study group coordination SPEAKER: unknown ABSTRACT. The mirror game is an improvisation exercise for two people, where one person moves and the other acts as their mirror. In the game, the roles of leader and follower can be switched, and eventually the roles can be abolished so that the pair shares leadership, both mutually mirroring each other. The mirror game has been adapted to scientific research, where it has been simplified to a 1D version with buttons on sliders, and a 2D version where participants move their hands as if drawing in the air. In these studies, the condition of joint leadership has been found to produce movements that are better synchronised and smoother than those in the leader-follower conditions. We extended this game to four people, and are investigating it as a) a method for studying group dynamics in movement coordination, and b) a measure of intersubjectivity. We use optical motion capture to record these four-player games. Participants stand in a circle, with their right arm and index finger extended towards the centre. Participants are instructed to mirror each others' hand movements, which are most accurately tracked by a reflective marker on the index finger. Other markers on participants' upper body joints allow a whole-body analysis of motion. The average velocity of all the markers, or the quantity of motion of each player, can then be cross-correlated with those of the other players, producing a correlation matrix for the game that shows the dynamics of following and leading in the group. Our pilot results suggest that the four-person game gives rise to "conflicts" where a performer must make a quick decision about which other player to align their behaviour with, and as a consequence, which other player not to align with. This makes the four-player game very interesting from a social psychological point of view. 
When two games were compared, one played before and one after a different group improvisation exercise, the latter game produced more group synchrony and facilitated the introduction of larger movements. This indicates that the four-player game has potential as an intersubjectivity measure. In this ongoing research project, more data will be collected and analysed during spring 2017. |
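The cross-correlation analysis of players' quantity of motion described above can be sketched as follows. This is a minimal illustration only, not the authors' actual pipeline; the function names, lag window and alignment convention are assumptions:

```python
import numpy as np

def quantity_of_motion(marker_positions, dt):
    """Per-frame average marker speed for one player.
    marker_positions: (frames, markers, 3) array of mocap positions."""
    velocities = np.diff(marker_positions, axis=0) / dt
    return np.linalg.norm(velocities, axis=2).mean(axis=1)

def lagged_corr(x, y, lag):
    """Pearson correlation of x[t] with y[t + lag]."""
    if lag >= 0:
        a, b = x[:len(x) - lag], y[lag:]
    else:
        a, b = x[-lag:], y[:len(y) + lag]
    return float(np.corrcoef(a, b)[0, 1])

def coordination_matrix(qoms, max_lag):
    """Peak cross-correlation and its lag for every player pair.
    qoms: list of equal-length 1D quantity-of-motion series, one per player.
    lag[i, j] > 0 means player j's motion trails player i's."""
    n = len(qoms)
    corr, lag = np.eye(n), np.zeros((n, n), dtype=int)
    for i in range(n):
        for j in range(i + 1, n):
            rs = [lagged_corr(qoms[i], qoms[j], l)
                  for l in range(-max_lag, max_lag + 1)]
            k = int(np.argmax(rs))
            corr[i, j] = corr[j, i] = rs[k]
            lag[i, j] = k - max_lag
            lag[j, i] = -lag[i, j]
    return corr, lag
```

The sign of the peak lag gives a simple leader/follower reading for each pair, which is where the "conflict" situations in the four-player game would show up as inconsistent lag signs across pairs.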
15:30 | Kinetic predictors of spectators' segmentation of a live dance performance SPEAKER: unknown ABSTRACT. We present a pilot study that explores the connection between accelerations in dance movements and the temporal segmentation perceived by spectators during a live performance. Our data set consists of recorded accelerations from two 7-minute duo dances that were annotated by 12 spectators in real time. The annotations were indications of perceived starts and endings in the dance. We were able to create an acceleration-based predictor that has a significant correlation with the pooled subjective annotations. Our approach can be useful in the analysis of improvised dance, where segmentation cannot rely on repetitive patterns of steps. We also present suggestions for future development of acceleration-based dance analysis. |
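The kind of acceleration-based predictor the abstract describes could be sketched as below. This is a hypothetical reconstruction under stated assumptions (the paper does not specify its feature set): candidate boundaries are taken where smoothed acceleration magnitude changes sharply, and spectators' pooled annotations are turned into a density curve for correlation:

```python
import numpy as np

def boundary_strength(acc, fs, win_s=1.0):
    """Hypothetical segmentation predictor: frames where the smoothed
    acceleration magnitude changes sharply are candidate starts/endings.
    acc: (frames, 3) accelerometer samples; fs: sampling rate in Hz."""
    mag = np.linalg.norm(acc, axis=1)
    win = max(1, int(win_s * fs))
    smooth = np.convolve(mag, np.ones(win) / win, mode="same")
    return np.abs(np.gradient(smooth))  # change in overall activity level

def pooled_annotations(events, n_frames, fs, sigma_s=0.5):
    """Turn spectators' annotation times (seconds) into a density curve
    by placing a Gaussian bump at each reported start/ending."""
    t = np.arange(n_frames) / fs
    density = np.zeros(n_frames)
    for e in events:
        density += np.exp(-0.5 * ((t - e) / sigma_s) ** 2)
    return density
```

Correlating `boundary_strength(...)` with `pooled_annotations(...)` via `np.corrcoef` then gives one plausible form of the predictor-to-annotation comparison.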
15:30 | Syntax Error - Choreographic Coding SPEAKER: unknown ABSTRACT. The aim of the Syntax Error installation is to create a generative, ever-transforming sculptural representation from real-time recorded motion data of a dancer, blurring the borders between physical space and the digital realm through an interactive feedback loop that constantly re-informs the dancer with generated audiovisual feedback via projection mapping. The digital sculpture's aesthetics are defined by the dancer's constant motion and the continuously variable relationships between moments, creating new input for the physical dance performance. Syntax Error is a metaphor for real-life processes influenced by the precision and fine mechanism of choreography's informal and temporal patterns, which emerge as a digital experience translated back into physical space. The virtual tectonics, accompanied by an interactive noise field, capture the fragility of the dancers’ movements, showing the beauty of human inaccuracy in the syntax of a programmed dance sequence. The digital sculpture is a representation of individual human interpretation and implies the attributes that distinguish human behavior from mechanical perfection. A custom algorithm weighs incoming datasets from the choreography and creates different tectonics and subdivisions that represent persistence and change at the same time, just as one dancer subjectively interprets the sound, music and directions of a performance differently. The dancer moves to an interactive projection and generated noise fields, where a simple modification of the random seed can iteratively create new versions of the performance. Kinect cameras are used to assemble the intersection of the images into a three-dimensional volume. In a second, further setup the project becomes an audiovisual, real-time performance: an interactive, reactive system between audio and image, between human and machine. 
The refined algorithm creates geometry defined by the velocity of multiple visitors and mixes it with the sound information at the time of the recording, producing a 3D-printable geometry output for each individual visitor. A digital representation was created using several CAD software packages for post-processing, to create a non-representational collage of the whole performance as a physical, 3D-printed model. The sculpture captures the motion of the visitors as well as the music played, which directly influenced the CAD data output. The dancers' representations are printed and handed out to the performers in an effort to create a common and shared memory of each participant's individual actions and motions. |
15:30 | Musical Skin: Fabric Interface for Expressive Music Control SPEAKER: Cedric Honnet ABSTRACT. We demonstrate a soft, malleable fabric controller. Attendees can use it to explore sounds and collaboratively create soundscapes and music. This soft input device - Musical Skin - senses where it is touched and how much pressure is exerted on it, using a method consisting entirely of fabric components. With this textile matrix sensor, the performer's role changes from manipulating a rigid device to engaging with a malleable material. The sensor pushes performers to explore how the motion of their body maps to sound, changing not only the performer's experience but also engaging the audience beyond what typical electronic musical input devices would. In this extended abstract, we discuss the sensing mechanism and describe the installation we envision the Musical Skin being used in. |
15:30 | P.A.C. (Performative Auditory Creation) SPEAKER: Ioannis Sidiropoulos ABSTRACT. This practice as research (PaR) paper examines the effects of auditory perception on a dance-physical performance and how internal and external sounds affected the performer’s movement and behaviour during the whole process of creation. This PaR combines research on physical theatre and dance practices with cognitive neuroscience. Throughout this paper, an interdisciplinary approach is established by investigating the auditory perception of the human brain. The orientation of the performer’s body helped to direct the movements in space, enabling the performer to pace their actions to sounds, especially during dance-physical actions. Likewise, this paper includes cognitive psychology, an integral part of cognitive neuroscience, demonstrating how emotional response changes through different sounds and gestures. For this experimental and practical qualitative research on auditory perception, the main methodology incorporated the use of sound technology. Two different types of microphone were used: a regular microphone with its stand, and ten contact microphones. These were used to enhance the performer’s internal sounds, as a means of exploring the different levels of sound and providing high-quality hearing perception. The contact microphones were connected to three different platforms (rostra), creating three ‘sound stages’ on which the sounds of dance-physical movements were amplified and exaggerated, leading to further explorations. These movements involved full-body actions, as the PaR examines dance-physical theatre performance and practice. Through this experimental research and the use of sound technology, a specific methodology was defined, providing binaural audio within the performance space. At the beginning of this PaR, internal and external sounds are investigated. 
Starting with the body and exploring its potential for inner and outer sounds, physical sounds of varying pitch, such as the breath, voice, teeth and nails, portrayed how each sound can affect the performer’s physicality. Afterwards, this paper demonstrates how external sounds, such as an ambulance siren or an aircraft, can interrupt gestures, movements, and even silence or immobility, affecting the quality of the motion and emotion. Finally, this PaR focuses on how these sounds and their effects can influence the performer in the creation and execution of different movement patterns. Combining the theoretical and practical research of the experimental period, this paper then analyses the findings of this process, which were presented through a solo performance. The findings provide a clear understanding of how amplification, combined with the contrast between fluid, organic movements and jerky, fragmented gestures, created depth and variety of tone for the final practice, resulting in a multi-dimensional performance. Lastly, this paper offers propositions on how these results can inform further research on auditory perception combined with the performing arts. More specifically, new questions have emerged from this paper that will be expanded in PhD research, such as how internal and external sounds affect the performer in the creation and execution of different movements during the creative process of a performance. Furthermore, the ultimate goal of this research is the establishment of a specific methodology and tools for use in the creation of performances, examining how different sounds affect the creation of movement patterns. The paper therefore suggests developing this practice with the use of an Electroencephalogram (EEG) headset, which can provide analysis of brainwave activity and behaviour under different acoustic stimulations. 
Also, a final suggestion for establishing the methodology is to analyse movements with the Microsoft Kinect system. Finally, with the use of the EEG headset and Microsoft Kinect, the research hypothesis can provide quantitative results that will contribute to gaining further insight into the complex mechanisms of the human brain involved in the perception and processing of auditory information. |
15:30 | Measuring Impact of Social Presence Through Gesture Analysis in Musical Performances SPEAKER: unknown ABSTRACT. Immersive Virtual Environments combined with motion capture systems have been used as experimental set-ups for studying the influence of the presence of an audience on musicians' performances. This study highlights that musicians playing in different expressive manners move differently, increasing their kinetic energy and body twisting. These factors are increased or decreased by the presence of an audience, depending on the difficulty of the task. Such behavior fits Zajonc's theory of social facilitation. |
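As an illustration of the kind of kinetic measures this abstract mentions, here is a minimal sketch of a kinetic-energy proxy and a body-twist angle computed from motion capture markers. The marker layout, the per-marker mass assignment and the floor-plane angle definition are assumptions for illustration, not the study's actual method:

```python
import numpy as np

def kinetic_energy(markers, masses, dt):
    """Per-frame kinetic-energy proxy: sum of 0.5 * m * |v|^2 over markers.
    markers: (frames, n, 3) positions; masses: (n,) mass assigned to each marker."""
    v = np.diff(markers, axis=0) / dt
    return 0.5 * (masses * (v ** 2).sum(axis=2)).sum(axis=1)

def body_twist(shoulders, hips):
    """Angle (radians) between the shoulder axis and the hip axis,
    projected onto the floor plane (x, y).
    shoulders, hips: (frames, 2, 3) left/right marker positions."""
    def horizontal_axis(p):
        a = p[:, 1, :2] - p[:, 0, :2]  # left-to-right vector, floor projection
        return a / np.linalg.norm(a, axis=1, keepdims=True)
    s, h = horizontal_axis(shoulders), horizontal_axis(hips)
    cos = np.clip((s * h).sum(axis=1), -1.0, 1.0)
    return np.arccos(cos)
```

Comparing these time series between audience and no-audience conditions would be one straightforward way to quantify the "increased kinetic energy and body twisting" effect the abstract reports.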
15:30 | Pinoke: Using human movement to guide a robot’s dance SPEAKER: unknown ABSTRACT. There are many approaches to generating movement for a robot: programming motors, keyframing poses, recording hand-manipulated sequences (puppeteering), recorded motion capture sequences, live motion capture, and using motion capture to train a neural network, to name a few. This research centres on the performance project Pinoke, in which a number of methods for enabling a robot to dance, with and without a human partner, were explored. |
15:30 | Algorithmic Reflections on Choreography SPEAKER: unknown ABSTRACT. In 1996, Pablo Ventura turned his attention to the choreography software Life Forms to find out whether this then-revolutionary tool for the creation of dance movements could lead to new possibilities of expression in contemporary dance. Over the next two decades, he devised choreographic techniques and custom software to create dance works that highlight the operational logic of computers, accompanied by computer-generated dance and media elements. This article provides a firsthand account of how Ventura’s engagement with algorithmic concepts guided and transformed his choreographic practice. The text describes the methods that were developed to create computer-aided dance choreographies. Furthermore, it illustrates how choreographic techniques can be applied to correlate formal and aesthetic aspects of movement, music, and video. Finally, the text emphasizes how Ventura’s interest in the wider conceptual context has led him to explore, with choreographic means, fundamental issues concerning the characteristics of humans and machines and their increasingly profound interdependencies. |
15:30 | Symbiosis - Interspecific Associations in Collaborative Performance SPEAKER: Manoli Moriaty ABSTRACT. This paper presents an investigation into interdisciplinary collaboration between practitioners of distinct performative art forms, which was carried out with the aim of developing a framework for collaboration informed by the biological phenomenon of symbiosis. Symbiosis is a pervasive occurrence in nature, describing the close and persistent interaction among organisms of different species. The aim of symbiosis is for at least one of the interacting organisms, or symbionts, to extract benefit from their association, with the different types of symbiotic interactions – mutualism, commensalism, and parasitism – denoting the fitness outcome for each of the symbionts. Over the years, the fine details of symbiosis have been the subject of controversy among researchers of General Biology and General Ecology. However, nowadays there is consensus on the phenomenon’s ubiquity, and its importance in accelerating the rate of many species’ evolutionary process. Through observing the manner in which diverse organisms interact with each other within symbiotic relationships, I have developed a framework which aims to facilitate collaboration with artists of different disciplines in developing live performance works. By interpreting the different types of symbiotic interactions, as well as their key observable traits – interspecificity, closeness, and persistence – the framework provides artists with a set of actions and precepts that can be employed during all stages of the collaborative practice, including authorship, hierarchy in creative control, aesthetics, development, and live interaction interfaces. The development of the framework draws insight from the findings emerging from my own practice, which focuses on the collaboration between disciplines utilising sound and physical movement as their predominant mediums of expression. 
Furthermore, key theories in similar collaborative practices have also contributed to supporting the framework’s development, such as the long-term collaboration between John Cage and Merce Cunningham, as well as contemporary precedents from practitioners such as Jo Hyde, Sophy Smith, and Marco Donnarumma. Further to the theoretical presentation of the framework, I also present a number of performance works which activate the notion of symbiosis within collaborative practice. |
15:30 | MoFeat: The Movement Features Database SPEAKER: unknown ABSTRACT. With the continual proliferation of new devices and techniques for motion capture, there is an essential need for the formalization of high-level semantic features describing human movement. The large variety of available sensors provides various perspectives on movement information, but it also comes at the cost of a large disparity of data representations that complicates movement analysis and interaction design. This paper describes a collaborative initiative aiming at formalizing current knowledge in movement signal processing. We introduce the Movement Features Database, an open online repository that collects and formalizes movement feature extraction techniques. |
15:30 | Tangible Tiles for Laban Movement Analysis SPEAKER: unknown ABSTRACT. Laban Movement Analysis (LMA) is an expert-based method by which Certified Movement Analysts observe, analyze, describe and write movement. LMA is increasingly used in Human Computer Interaction because it articulates a precise language for describing movement expression. In this paper we propose Motif Tiles, a tangible tool for analysing human movement with the help of physical tiles that allow for both analyzing and generating temporal patterns of movement. Through our design research experiments, we unfold how the tangible tool engages movement experts and dance professionals with the analysis and embodiment of movement and a collaborative focus on human movement patterns. |
15:30 | An Educational Experience with Motor Planning and Sound Semantics SPEAKER: unknown ABSTRACT. This paper reports an educational experience with 75 graduate students on action preparation as a function of sound semantics. In 6 hours of lessons and a group project, students were able to investigate modulations in motor preparation timing induced by sounds falling within their peri-personal space (PPS) in a non-visual virtual reality setting. The original modus operandi of our experimental approach allowed us to study and analyse human motor planning of a simple action against potential threats, which were virtual sound sources with different emotional ratings (ranging from 4.4 to 5.4 in arousal and from 2.1 to 5.7 in valence) rendered via headphones. Results from this experience suggest that semantics differently modulates the process of PPS estimation due to auditory stimulation, in terms of pre-motor reaction time and distance perception. |
15:30 | Methods to Track Dynamically Coupled Bodies in Artistic Motion SPEAKER: unknown ABSTRACT. Partner dancing requires skilled coordination of synergies between dancers. It is very challenging to quantify the unfolding of this dynamically coupled rhythm in ways that can provide dancers and choreographers immediate feedback on their performance. In particular, there is a paucity of methods that automatically reveal synergies within the body parts of each participating dancer while also providing metrics of coupled synergies. In this paper, we introduce a new platform for the tracking of coupled dynamical systems with a direct application to partner dancing. We present visualisation tools of "togetherness" in body parts and profile the stochastic signatures of each individual dancer along with those of the coupled components of their bodies moving in tandem. We use complex dancing segments and non-dancing segments of rehearsal snippets or staged poses to illustrate the use of our methods. Further, we suggest possible ways to quantify the inherent variability in the subtle motion fluctuations that, although seemingly invisible, help the dancers entrain from moment to moment. We hope that these tools are of use to the movement computing community. |
20:00 | 0⏎ SPEAKER: unknown ABSTRACT. We propose the presentation of 0⏎, a performance that is both a game and an experiment in real-time computer generated performance. 0⏎ uses audio and projection to direct performers in a series of escalating actions and movements determined by the computer during the performance in real-time. Performers attempt--and often fail--to carry out the computer’s instructions, which range from the specific and simple to the complex and figurative but are always new and unexpected. The result is an off-balance, hilarious, and occasionally arresting experience that questions the nature of human/computer relationships and interfaces. As our lives are increasingly governed by algorithms, what do the new structures of power and control look like, and what are the distinctions--if there are any--between human and artificial identity? 0⏎ is performed by a core group of three performers, accompanied by two or three additional performers that are “drafted” from the local community. The new performers rehearse the piece once or twice before the performance but are otherwise left to respond naturally to the computer’s instructions. MOCO’s assistance in identifying the additional performers and securing a rehearsal space would be appreciated, but is not necessary. We would like to present 0⏎ as a forty-five minute to one hour performance. The performance can be staged anywhere from a proscenium stage to a gallery, but it works best when presented in an intimate setting where audience members are free to sit, stand, and move around the space. The performance area must have a projection screen or wall at the back, at least 1.7m x 3m in size. The space must also have a sound system. The projector and sound system are run from a laptop. 0⏎’s performance system is designed for flexibility and portability, so it can be installed quickly and easily and adapted to a variety of presentation formats. 
Post-performance, we will engage in a brief discussion with audience members who will then be invited to try out the rules of the game for themselves. The piece is inspired by the collaborators’ decades of working with dance and computation: experimenting with sensors, machine learning, computer vision and other emerging technologies and interfaces. 0⏎ interrogates the ethical and cultural issues raised by this work in a bidirectional manner. On the one hand, how does approaching computation from a dance perspective inform our use of technology by problematizing issues that are often overlooked by the scientific and engineering mainstreams? On the other hand, how does approaching dance through computer code influence our choreographic and movement practices? MOCO’s community and topics provide an ideal environment for raising and discussing these questions, and we look forward to the possibility of presenting our work. |
20:30 | still, moving SPEAKER: unknown ABSTRACT. There is no stillness in human movement. Even when simply standing, the body is constantly falling, catching itself, and subtly adapting to the environment. And the experience of this seeming stillness is filled with a myriad of inner bodily sensations that remain unseen to the eye. Micro-movements induced by breath and weight shifts connect us to the force of gravity and an organic flow of embodied rhythms through time. still, moving is a performance for two dancers in which an interactive sound environment responds to subtle changes in muscular activity, disclosing and extending the inner bodily experiences of the performers. We explore how the sonification of micro-movements can increase or disrupt the performers' kinesthetic awareness, and how it affects the kinesthetic empathy between the performers and the audience. The performers are equipped with two Myo Armbands that capture physiological signals such as muscle tension and subtle accelerations. The system mediates the relation between movement and sound through interactive machine learning, in a design that evolves over time, in response to the interplay between the performers. |
21:00 | Intersubjective Soundings SPEAKER: Doug Van Nort ABSTRACT. This piece is written for MYO armbands, electro-acoustic ensemble and gestural recognition of soundpainting conducting. In the piece, soundpainting-inspired gestures guide the performers, as does a graphic/textual score which defines the sonic palette and instrumental gestures available to the players. Tensions are negotiated between acoustic and electronic sources, and between bottom-up structured improvisation and top-down guiding via conducting. These continuums are amplified and explored through another layer of shared articulation: machine learning has been applied to recognition of the composer/conductor’s gestures, with symbolic recognition opening up channels of electronic processing, which then acts upon the acoustic players at moments in the piece. Continuous mappings between conducted motion and sonic transformations have also been learned, creating a tension between the symbology of conducted instruction and that of continuously co-constructed sound, as the conductor and performers share signals and intentional resonance in performance. This piece was created for my Electro-Acoustic Orchestra (EAO) ensemble at York University, a mixed electronic/acoustic ensemble comprised of York students and Toronto-area professional musicians. The piece was premiered in a concert at my DisPerSion Lab at York University. For this updated version for MOCO, my proposition is to telematically conduct the EAO, which would be performing from my lab in Toronto. |
21:30 | WebPage in Three Acts SPEAKER: Joana Chicau ABSTRACT. For MOCO 2017 I propose to present a live coding performance, WebPage in Three Acts: an assemblage of graphic experiments into a hybrid form of composition, combining principles of choreography with the formal structures of web coding. Like choreography, web design also deals with space, time and movement qualities. It has been defining ways of moving, collectively or individually, through fluid yet complex landscapes of information displays, networked spaces, and multimedia environments. The performance presented here, and the notion of ‘choreographic coding’, is a technical as much as a social, cultural and aesthetic experiment, one which can be expanded both at the level of web design and at that of choreography. |