SI15: 2ND INTERNATIONAL SYMPOSIUM ON SOUND AND INTERACTIVITY
PROGRAM FOR THURSDAY, AUGUST 20TH


08:00-09:00 Session 6: Registration
Location: Theatre @ The Nest
09:00-09:15 Session 7: Proceedings Opening & Welcome
Location: Theatre @ The Nest
09:00
Welcome remarks from the Si15 Chair

ABSTRACT. Welcome to Si15!

09:15-10:15 Session 8: Stefania Serafin Keynote: Sonic Interactions in Multimodal Virtual Environments

In this studio report we present the research and teaching activities of the Multisensory Experience Lab at Aalborg University Copenhagen. We also briefly describe the Sound and Music Computing education programme started at Aalborg University in September 2014.

Stefania Serafin is a full professor at Aalborg University Copenhagen, Denmark, with special responsibilities in “sound for multimodal environments”. She teaches and conducts research on sound models and sound design for interactive media and multimodal interfaces. Her book Sonic Interaction Design (MIT Press, 2013) deals with the exploitation of sound as one of the principal channels to convey information, meaning, and aesthetic-emotional qualities in interactive contexts (http://imi.aau.dk/~sts/).

With support from the Italian Institute of Culture in Singapore.

Location: Theatre @ The Nest
10:15-10:30 Coffee/Tea
10:30-12:00 Session 9: New Interfaces for Multimodal Expression 1
Location: Theatre @ The Nest
10:30
Taiwanese Virtual Singer by Singing Synthesizer
SPEAKER: unknown

ABSTRACT. This paper uses singing synthesis to build a Taiwanese virtual singer, similar to Japan's famous virtual singer Hatsune Miku, but singing in Taiwanese. The song is composed by the author, Chih-Fang Huang, and the voice is entirely synthesized: the goal is a singer created by computer rather than a recording of a human, offering a new form of entertainment. First, the basic syllables of Taiwanese are analyzed to build a text-to-singing (TTSI) synthesis system based on the STRAIGHT algorithm. Second, the synthesizer is used to render a song, and backing soundtracks are added to the synthesized voice to form a complete music file. Third, a listener questionnaire is conducted to evaluate the quality of the results.

11:00
EEG & Music Performance

ABSTRACT. Active music listening is a way of listening to music through active interactions. In this paper we present an expressive brain-computer interactive music system for active music listening, which allows listeners to manipulate expressive parameters in music performances using their emotional state, as detected by a brain-computer interface. The proposed system is divided into two parts: a real-time system able to detect listeners' emotional state from their EEG data, and a real-time expressive music performance system capable of adapting the expressive parameters of the music based on the detected emotion. We comment on an application of our system as a music neurofeedback system to alleviate depression in elderly people.

11:30
Assembling Music

ABSTRACT. In this paper I report on three projects (a sound installation, new musical instruments, and a new platform for computer music) whose common key concept is "assembling music". Musical performance is a kind of extension of human bodily activity. I have developed many kinds of musical instruments as part of my compositions, and in these projects I focused on the human "assembling" action in musical performance. Musical composition is, of course, essentially a process of assembling musical parts, but I want to extend the concept of "assembly" beyond composition to performance as well. I hope this report expands the possibilities of interaction design for media art, and I look forward to discussing both the technical details and the artistic approach.

12:00-13:00 Lunch

Local-style buffet.

(For participants registered for the Soundislands Festival or the Si15 Symposium.)

13:00-14:30 Session 10: New Interfaces for Multimodal Expression 2

Interfaces between humans and machines in the service of music have become a rich area of exploration, incorporating all kinds of sensors, actuators, algorithms, and mechanical systems. The three speakers in this session each present new interfaces they have developed representing three radically different points in this space of systems mediating the relationship between human actions and sound.

Location: Theatre @ The Nest
13:00
Beyond the Keyboard and Sequencer: Strategies for Interaction between Parametrically-Dense Motion Sensing Devices and Robotic Musical Instruments
SPEAKER: Jingyin He

ABSTRACT. The proliferation and ubiquity of sensor, actuator, and microcontroller technology in recent years have propelled contemporary robotic musical instruments and digital music controllers to become more parametrically dense than their predecessors. Prior projects have focused on creating interaction strategies for relatively low-degrees-of-freedom input and output schemes. Drawing upon prior research, this paper explores schemes for interaction between parametrically-dense motion-based control devices and contemporary parametrically-dense robotic musical instruments. The details of two interaction schemes are presented: those consisting of one-to-one control (allowing the actions of a performer to directly affect an instrument) and those consisting of a recognition system wherein user-created gestures result in output patterns from the robotic musical instrument. The implementation of the interaction schemes is described, and a performance utilizing these schemes is presented.

13:30
Interactive Computation of Timbre Spaces for Sound Synthesis Control

ABSTRACT. Effective sonic interaction with sound synthesizers requires the continuous control of a high-dimensional space, and the relationship between synthesis variables and the timbre of the generated sound is typically complex or unknown to users. We previously introduced a generic, unsupervised mapping method based on machine-listening and machine-learning techniques, which addresses these challenges by providing a low-dimensional, perceptually related control space. The mapping was implemented in a fully automated system requiring little input from users. With the improved method and optimized implementation we present in this paper, the time required for analysis and mapping computation is drastically reduced. We introduce the use of extreme learning machines for the regression between control and timbre spaces, improving efficiency and accuracy, and we include an interactive approach in which the analysis of the synthesizer's sonic response is performed as users explore the parameters of the instrument. This work enables the computation of customized synthesis mappings through timbre spaces, reducing the time and complexity needed to obtain a usable system.

14:00
Having a Ball with The Sphere
SPEAKER: unknown

ABSTRACT. Pitch and rhythm are often considered to be the defining elements of music, upon which a hierarchy is superimposed: rhythm, the masculine element, dominating pitch, the feminine element. These two elements also trap the creative imagination in the confines of centuries of constructs. The Sphere and the Strombophone, new e-instruments created by Dirk Stromberg, allow for the exploration of new expressive relationships between performer, sound, movement, and audience. Each instrument focuses on physical interactions with timbre and a gestural approach to the creation of music and sound. The result is an intuitive, tactile approach to making sound in which the emphasis shifts from pitch to timbre. The Sphere player concentrates on creating sound gestures and rhythms without having to be concerned with pitch production; the sound engineer at the mixing desk concentrates on the sound processing and timbral transformations without having to be concerned with rhythm. In this presentation, e-luthier and composer Dirk Stromberg will explain the software and hardware he created for the Sphere. Conductor Dr Robert Casteels will demonstrate sounds whilst sharing how playing the Sphere changes music-making with fellow performers. Dirk and Robert will conclude with a short composition for solo Sphere.

14:30-15:15 Session 11: Performance Demos & Posters

This is a session of short presentations of some of the artworks in the Festival program. The presentations will focus on aesthetic, critical, and technical aspects of the works, aiming to provide inside information about how they were made, and possibly to influence the way they are perceived.

14:30
About When We Collide
14:40
Performing Sounds Using Ethnic Chinese Instruments with Electronic Pedals: A Case Study on SA
SPEAKER: unknown

ABSTRACT. Using a first-person research method, from the perspective of the Singaporean experimental Chinese music group SA, this paper discusses the motivations for incorporating technology with ethnic Chinese musical instruments from a socio-cultural perspective; how this incorporation of technology is carried out and how it has affected the musicians' perception of sound performance; and what the initial challenges were and how they were overcome.

14:50
Illustration of the Resonating Spaces Collaborative Process and Presentation of Two Sonic Artworks
SPEAKER: Paul Fletcher

ABSTRACT. Resonating Spaces (live audiovisual performance) is a collaborative research project, "reconstructing the familiar", that addresses the sonic possibilities of making the familiar unfamiliar. It involves collaborative interdisciplinary and transdisciplinary research, exchange, and translation between music, animation, and location by researchers and artists Mark Pollard and Paul Fletcher. The collaboration has resulted in a collection of sound-and-vision artworks and installations that interrelate with the topography, intent, history, and surrounds of a particular location. Today we will illustrate two works: Gridlife (2013) and We notice raindrops as they fall (2015). Gridlife is based on structural grid patterns observed in city apartments and was created for the Ian Potter Museum of Art in Melbourne. It was the first "reconstructing the familiar" research project and has been selected for screening at Animex, London (2015) and the Punto y Raya Visual Music Festival, Iceland (2014). We notice raindrops as they fall is a new work that continues this approach to making: it examines and responds to observed, reimagined, and remapped time-based patterns of falling rain, exploring the unique trans-sensory (e.g. converging sound, visual, haptic, kinetic) characteristics of both single and multiple raindrops in descent. We use this process to create a live performance with new perspectives and an altered experience of familiar natural phenomena. Together we will notice raindrops as they fall.

15:15-15:30 Coffee/Tea
15:30-16:30 Session 12: Music Perception and Cognition

After more than forty years of experimentation with the art of performance, the works presented in this session make it clear that we are ready to take stock of it and move on with increased awareness and new tools for creation. This stimulating and exciting open territory already has some emerging features: a deeper connection between art and the cognitive sciences, the recognition that performance is by its own nature multimodal and multisensory, and an increasing overlap, enhanced by technology, between the discourse and practice of (user-centred) design and artistic exploration. It is another sign that, after the digital overdose of the recent past, we are ready to go back to the human dimension of every experience.

Location: Theatre @ The Nest
15:30
Evaluating a Method for the Analysis of Performance Practices in Electronic Music
SPEAKER: unknown

ABSTRACT. Electronic music has established very different performance practices, ranging from the suppression of any correspondence between a performer's appearance and the sounds, to fully embodied interfaces. The authors of this paper consider a concert performance an audiovisual event; the choice of a particular performance practice therefore has to be considered part of the presented work. Given the huge range of choices for the performance of electronic music, a method is needed to analyze and better understand the aesthetic implications of a particular choice. This paper discusses the application of a method for the analysis of performance practices in electronic music that was originally presented in the paper "Towards an Aesthetic of Electronic Music Performance Practice" (xxx 2014), and provides a brief summary of a revised version of this model. In the 2014/15 semester, a group of 60 students used this method to analyze five different performance situations, including performances by DJ QBert, A. Schubert/F. Aulbert, N. Collins, M. Donnarumma, and C. M. von Hausswolff. Altogether, more than 180 analyses were generated, and their results have been compared in detail in order to evaluate the functionality and usefulness of the analysis method. The outcome is discussed in the paper.

16:00
Same Time, Same Place, Keep it Simple, Repeat: Four Rules for Establishing Causality in Interactive Audiovisual Performances
SPEAKER: Yago de Quay

ABSTRACT. Recent consumer-grade technologies that extract physiological biosignals from users are being introduced into interactive live performances and are transforming their practice. However, the relationship between these signals and the responsive audiovisual content is often not understood by the audience. Recent discoveries in neuroscience can address this issue by identifying perceptual cues that help us connect the things we see and hear in our environment. Drawing from neuroscience, and more specifically the theory of crossmodal binding, this paper proposes four rules that govern the mechanism by which causality is attributed between audiovisual elements: same time, same place, keep it simple, repeat. Intended as a set of guidelines for artists, these rules help the audience unify what they see and hear in a performance and understand its underlying cause. The last section describes a brainwave-based performance called Ad Mortuos that applies the four rules. A video of the performance is available at www.tiny.cc/admortuos.

17:00-18:00 Session: Bus to ASM

Bus transport from Innovation Centre (NTU) to ArtScience Museum (Marina Bay Sands). Be a tourist!

(For participants with full registration for the Soundislands Festival.)

20:00-21:00 Session 13: Ryoji Ikeda Performance: Supercodex [live set]

Sonic artist, visual artist, electronic composer, and computer musician in one, Ryoji Ikeda has gained international renown for his provocative work combining visuals and sound. Supercodex [live set] reworks musical concepts from Ikeda's 2013 piece superposition, which investigates how we understand the reality of nature on an atomic scale, taking its starting point in mathematical models and quantum mechanics. From a subtle beginning of digital noise, blips, and bass drones, there gradually emerge the core elements of techno and dance music, as Ikeda uses raw data and mathematical models to generate music and projections. Don't miss this extraordinary live performance, where the accompanying visuals add to the rich textures to merge the senses, creating an unforgettable experience. Prepare to be immersed in a multi-sensorial extravaganza. (http://www.ryojiikeda.com/archive/concerts/#supercodex_live_set)

  • Concept & composition: Ryoji Ikeda
  • Computer graphics & programming: Tomonaga Tokuyama

Produced by ArtScience Museum with support from Soundislands Festival. (http://www.marinabaysands.com/museum/exhibitions-and-events/artscience-late.html)

Location: ASM Expression