SMC 2023 / SMAC 2023: SOUND AND MUSIC COMPUTING CONFERENCE 2023 TOGETHER WITH STOCKHOLM MUSIC ACOUSTICS CONFERENCE 2023
PROGRAM FOR FRIDAY, JUNE 16TH

09:30-11:00 Session SMC Papers 3
Location: Kungasalen
09:30
A Web-Based MIDI 2.0 Monitor

ABSTRACT. This paper presents a publicly available MIDI monitor. The application provides a web interface to list both MIDI 1.0 and MIDI 2.0 messages and aims to offer an easy-to-read interpretation of standardized message parts. In this sense, particular attention is paid to the SysEx messages that implement MIDI-CI communication in the context of MIDI 2.0. The MIDI monitor was developed through the Web MIDI API, a standard proposal by the W3C Audio Group currently supported by many browsers. Possible uses range from diagnosing issues with MIDI devices and connections to investigating MIDI 2.0 concepts in educational scenarios.
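To illustrate the kind of interpretation such a monitor performs, the following Python sketch decodes MIDI 1.0 channel-voice status bytes. The actual application runs in the browser on the Web MIDI API; this is illustrative only and not the authors' code.

    STATUS_NAMES = {
        0x8: "Note Off",
        0x9: "Note On",
        0xA: "Polyphonic Key Pressure",
        0xB: "Control Change",
        0xC: "Program Change",
        0xD: "Channel Pressure",
        0xE: "Pitch Bend",
    }

    def describe_midi1(msg: bytes) -> str:
        """Return a human-readable description of a MIDI 1.0 channel-voice message."""
        status, data = msg[0], msg[1:]
        kind = STATUS_NAMES.get(status >> 4, "System/Unknown")
        channel = (status & 0x0F) + 1  # channels are 1-based for display
        return f"{kind} (ch {channel}): data={list(data)}"

    print(describe_midi1(bytes([0x90, 60, 100])))  # Note On (ch 1): data=[60, 100]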

09:35
A comparative analysis of latent regressor losses for singing voice conversion

ABSTRACT. Previous research has shown that established techniques for spoken voice conversion do not perform as well when applied to singing voice conversion (SVC). We propose an alternative loss component in a loss function that is otherwise well established for VC tasks, and show that it improves our model's SVC performance. We first trained a singer identity embedding (SIE) network on mel-spectrograms of singer recordings to produce singer-specific variance encodings using contrastive learning. We then trained a well-known autoencoder framework (AutoVC) conditioned on these SIEs and measured differences in SVC performance when using different latent regressor loss components. We found that computing this loss w.r.t. SIEs leads to better performance than w.r.t. bottleneck embeddings: converted audio is more natural and more specific to target singers. The inclusion of this loss component has the advantage of explicitly forcing the network to reconstruct with timbral similarity, and it also negates the effect of poor disentanglement in AutoVC's bottleneck embeddings. We observe a notable divergence between computational and human evaluations of singer-converted audio clips, which highlights the necessity of both. We also propose a pitch-matching mechanism between source and target singers to ensure that these evaluations are not influenced by differences in pitch register.
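A minimal PyTorch sketch of the two latent regressor loss variants being compared; the module names (autoencoder, sie_net) and the L1 form of the loss are assumptions for illustration, not the authors' code.

    import torch.nn.functional as F

    def latent_regressor_loss(autoencoder, sie_net, mel, sie, use_sie_target=True):
        """Sketch of the two compared loss variants (hypothetical module names).

        AutoVC-style latent regressor loss: re-encode the converted output and
        penalise drift either w.r.t. the bottleneck codes (baseline) or, as
        proposed, w.r.t. the singer identity embedding (SIE).
        """
        codes = autoencoder.encode(mel, sie)      # bottleneck content codes
        mel_hat = autoencoder.decode(codes, sie)  # converted/reconstructed mel
        if use_sie_target:
            # proposed variant: regress the SIE of the output onto the target SIE
            return F.l1_loss(sie_net(mel_hat), sie)
        # baseline variant: regress re-encoded bottleneck codes onto the originals
        return F.l1_loss(autoencoder.encode(mel_hat, sie), codes)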

09:40
Citation is not Collaboration: Music-Genre Dependence of Graph-Related Metrics in a Music Credits Network

ABSTRACT. We present a study of the relationship between music genres and graph-related metrics in a directed graph of music credits built using data from Spotify. Our objective is to examine crediting patterns and their dependence on music genre and artist popularity. To this end, we introduce a node-wise index of reciprocity, which could be a useful feature in recommendation systems. We argue that reciprocity allows distinguishing between the two types of connections: citations and collaborations. Previous works analyse only undirected graphs of credits, making the assumption that every credit implies a collaboration. However, this discards all information about reciprocity. To avoid this oversimplification, we define a directed graph. We show that, as previously found, the most central artists in the network are classical and hip-hop artists. Then, we analyse the reciprocity of artists to demonstrate that the high centrality of the two groups is the result of two different phenomena. Classical artists have low reciprocity and most of their connections are attributable to citations, while hip-hop artists have high reciprocity and most of their connections are true collaborations.
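One plausible reading of a node-wise reciprocity index, sketched with networkx (the paper's exact definition may differ): the fraction of an artist's outgoing credits that are reciprocated, so 1.0 suggests collaborations and 0.0 suggests citations.

    import networkx as nx

    def node_reciprocity(G: nx.DiGraph, n) -> float:
        """Fraction of n's outgoing credits that are reciprocated."""
        out = set(G.successors(n))
        if not out:
            return 0.0
        return sum(1 for m in out if G.has_edge(m, n)) / len(out)

    # toy credits network: mutual hip-hop feature vs. one-way classical citation
    G = nx.DiGraph([("hiphop_a", "hiphop_b"), ("hiphop_b", "hiphop_a"),
                    ("modern_x", "classical_y")])
    print(node_reciprocity(G, "hiphop_a"))  # 1.0 -> collaboration-like
    print(node_reciprocity(G, "modern_x"))  # 0.0 -> citation-like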

09:45
A Microcontroller-Based Network Client Towards Distributed Spatial Audio

ABSTRACT. Audio spatialisation techniques such as wave field synthesis call for the deployment of large arrays of loudspeakers, typically managed by dedicated audio hardware. Such systems tend to be costly, inflexible, and limited by the computational demands and high throughput requirements of centralised, highly multichannel digital signal processing. The development of a distributed system for audio spatialisation based on Audio over Ethernet represents a potential easing of the infrastructural burdens posed by traditional, centralised approaches.

This work details the development of a networked audio client, supporting the popular JackTrip audio protocol and running on a low-cost microcontroller. The system is applied to the case of a wave field synthesis installation, with a number of client instances forming a distributed array of signal processors. The problems of client-server latency and inter-client synchronicity are discussed, and a mitigation strategy is described. The client software and hardware modules could support large-scale audio installations, and could also serve as self-contained interfaces for other networked audio applications.
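A minimal sketch of such a client's receive path in Python: a UDP socket feeding a small jitter buffer that absorbs network timing variation before blocks reach the audio callback. The packet layout (a 16-byte header), block size, and buffer depth are assumptions for illustration, not JackTrip's actual format; only the default port 4464 is taken from JackTrip.

    import socket
    from collections import deque

    PORT, BLOCK_BYTES, DEPTH = 4464, 256, 4    # 4464 is JackTrip's default port

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("0.0.0.0", PORT))
    jitter_buffer = deque(maxlen=DEPTH)        # when full, the oldest block is dropped

    def audio_callback() -> bytes:
        """Called by the audio driver once per block; underruns play silence."""
        return jitter_buffer.popleft() if jitter_buffer else b"\x00" * BLOCK_BYTES

    while True:                                # receive loop (runs indefinitely)
        packet, _ = sock.recvfrom(2048)
        jitter_buffer.append(packet[16:])      # assumed 16-byte header before audio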

09:50
Our sound space (OSS) – an installation for participatory and interactive exploration of soundscapes

ABSTRACT. This paper describes the development of an interactive tool which allows playing different soundscapes by mixing diverse environmental sounds on demand. The tool, titled Our Sound Space (OSS), has been developed as part of an ongoing project in which we test methods and tools for the participation of young people in spatial planning. As such, OSS is meant to offer new opportunities to engage youth in conversations about planning, placemaking, and more sustainable living environments. In this paper, we describe an implementation of OSS that we are using as an interactive soundscape installation sited in a public place visited daily by people from a variety of institutions (e.g., a university, a gymnasium, a restaurant, start-ups). The OSS installation is designed to allow simultaneous activation of several pre-recorded sounds broadcast through four loudspeakers. The installation is interactive, meaning that it can be activated and operated by anyone via smartphones, and it is designed to allow interaction among multiple people at the same time and in the same space.

09:55
Resurrecting the violino arpa: a museum exhibition

ABSTRACT. This paper presents a project aimed at digitally resurrecting and presenting the unique Violino Arpa instrument from the collection of The Danish Music Museum, part of the National Museum of Denmark. The project endeavors to create an interactive installation that provides visitors with an understanding of the impact of different violin body designs through co-design. The paper offers a comprehensive examination of the project's development and execution, including a review of relevant literature in the field of interactive museum installations and physical modeling of string instruments. The design, implementation, hardware, software, and challenges encountered, as well as the solutions adopted, are described in detail. Furthermore, an evaluation of the project was conducted by museum personnel at The Danish Music Museum. Observations of the participants revealed that the installation was engaging and interactive. However, some participants found aspects of the installation confusing, while others enjoyed the experience, found it aesthetically pleasing, and perceived clear differences between the sound of the Violino Arpa and the classical violin.

10:00
Salient Sights and Sounds: Comparing Visual and Auditory Stimuli Remembrance using Audio Set Ontology and Sonic Mapping

ABSTRACT. In this study, we explore how store customers recall their perceptual experience with a focus on comparing the remembrance of auditory and visual stimuli. The study was carried out using a novel mixed-methods approach that involved Deep Hanging Out, field study and interviews, including drawing sonic mind maps. The data collected was analysed using thematic analysis, sound classification with the Audio Set ontology, counting occurrences of different auditory and visual elements attended in the store, and rating the richness of their descriptions. The results showed that sights were more salient than sounds and that participants recalled music more frequently compared to the Deep Hanging Out observations, but remembered fewer varieties of sounds in general.

10:05
What is the color of choro? Color preferences for an instrumental Brazilian popular music genre

ABSTRACT. This project explores how a synesthetic experience related to music perception and color association varies across cultures, and whether music with more energetic expressions elicits richer color responses. A total of 206 participants took part in a survey using a customized web page. The participants listened to excerpts of Brazilian music in the genre Choro and chose one or more colors that best matched the music. The excerpts were chosen based on their portrayal of the emotions joy, tenderness, and sorrow. The results showed differences in color preferences for each emotional expression studied across different groups. Furthermore, a correlation was observed between the subjective intensity of the excerpt (considering that, in terms of intensity, Joy $>$ Tenderness $>$ Sorrow) and the variety of colors chosen by the participants. In general, the results support previous research in this field, in which happiness or joy is often correlated with the color yellow and sorrow with the color blue. For the excerpts that portrayed tenderness, most participants chose the color yellow, though non-Brazilians also chose green. Due to the limits of the study, the results are not conclusive. More research is needed to better understand the impact of using color combinations rather than single colors to match music or emotional expressions.

10:10
“Video Accompaniment”: Synchronous Live Playback for Score-Aligned Animation

ABSTRACT. We developed a “video accompaniment” system capable of closely aligning a pre-made music video to a live, tempo-varying musical performance. Traditionally, this level of video-to-audio synchrony is only achievable with a musician-restricting system like a click track, or with the musician actively controlling the visual content. Our system automatically aligns the video to a live music performance. It uses the Informatics Philharmonic automatic accompaniment software to 1) follow a musician’s score position in real time and 2) predict when the next score position will occur. These position predictions are used to stretch the video such that animated gestures align with their musical counterparts. We worked with clarinetist and animator Nikki Pet to adapt two musical works by composer Joan Tower to this new medium. These works were performed at multiple venues with our video accompaniment system. This paper describes both the design details of the video accompaniment system and our performance and development experience. We end by discussing the artistic ramifications of, and future improvements for, this technology.
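The core rate-control idea can be sketched as follows (a hypothetical function, not the authors' implementation): given where the score follower predicts the performance will be shortly, choose the video playback rate that makes animation and music converge.

    def playback_rate(video_pos_s, predicted_pos_s, lookahead_s=0.5):
        """Choose the video rate that lands the video on the predicted position.

        predicted_pos_s: where the score follower predicts the performance
        will be in lookahead_s seconds, mapped to video time.
        """
        rate = (predicted_pos_s - video_pos_s) / lookahead_s
        return max(0.5, min(2.0, rate))  # clamp to avoid visible jumps

    # video at 10.0 s, performer predicted to reach material at 10.6 s in 0.5 s:
    print(playback_rate(10.0, 10.6))  # 1.2 -> video speeds up to catch up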

11:00-12:00 Session SMC Keynote 2: Oliver Bown

  KEYNOTE   Oliver Bown, UNSW, Sydney, Australia

Location: Kungasalen
12:00-12:45 Session SMC Concert 2

  CONCERT  

Note: For the exact times of the pieces, please refer to the concert schedule.

12:00
Community Dialogue: A participatory performance inspired by Pauline Oliveros’ Sonic Meditations and scientific experiment with live EEG data collection

ABSTRACT. Fifty years ago, composer-improviser Pauline Oliveros was interested in the science of music-centered meditation. She engaged musicians and non-musicians over ten weeks with meditative listening and participatory vocalizations from her Sonic Meditations, along with other movement exercises. Before and after the project, she recorded participants’ electroencephalography (EEG) at two occipital electrode sites with a paper-based system. Oliveros hypothesized that alpha activity (8-13 Hz) would change after the ten weeks (trait effects), but never analyzed the data quantitatively. We digitized her paper-based EEG data, but the task markings were inconsistent during the recording, preventing us from examining task-related changes (state effects). While we found no trait effects, recent literature suggests reliable alpha power changes during meditation (Deolindo, 2020) and during music improvisation (Lopata, 2017), so Oliveros’ thinking may still be valid. Therefore, we designed a follow-up study with tasks adapted from Oliveros’ "Sonic Meditation XIII: Environmental Dialogue", to examine EEG alpha power during listening, focusing on breath, humming, or imagining humming.

In autumn 2022, we collected data from 20 solo participants, with preliminary results showing that alpha power was greater while imagining humming compared to resting with eyes open. Because community performance was an integral part of Oliveros’ practice, we would like to take this study into a more ecological setting and collect EEG data from 2 participants during a group performance at SMC. The actual performance will take no longer than 11 minutes. It will consist of a 10-minute guided meditation using a fixed media track containing simple cues spaced at 1-minute intervals, preceded by a brief explanation of the tasks. A simple text score (shown below) outlines these cues for the audience. We will bring two high-quality wireless EEG headsets, which will be set up prior to the concert to collect simultaneous data from myself and another colleague from CCRMA. The ideal space for the performance would be the Nathan Milstein chamber music hall, since we only require stereo sound playback, but I am flexible if the organizers think there is a better venue or audience.
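A standard analysis sketch for the alpha-power measure described above, using SciPy's Welch estimator; epoching and artifact handling from the authors' actual pipeline are omitted.

    import numpy as np
    from scipy.signal import welch

    def alpha_power(eeg: np.ndarray, fs: float) -> float:
        """Mean PSD in the 8-13 Hz alpha band for one EEG channel."""
        freqs, psd = welch(eeg, fs=fs, nperseg=int(2 * fs))  # 2 s windows
        band = (freqs >= 8) & (freqs <= 13)
        return float(psd[band].mean())

    fs = 256.0
    t = np.arange(0, 60, 1 / fs)
    toy = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.randn(t.size)  # 10 Hz "alpha"
    print(alpha_power(toy, fs))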

Text score:

COMMUNITY DIALOGUE

We will perform the following tasks together as a group. Each task will last 1 minute unless otherwise specified, and an audio track will provide the cues for each task.

1. Close your eyes (30 sec)
2. Open your eyes and just rest (30 sec)
3. Focus on your breath: focus your attention on each breath in and out. If you lose focus, gently return your attention to the breath.
4. Listen to all sounds: expand your field of awareness to include all sounds in the environment.
5. Make sound: with each breath out, hum any pitch that you choose.
6. Imagine: with each breath out, imagine humming any pitch you choose. This should feel very similar to making sound, except you’re not going to engage your vocal cords. Please do not make sound.
7. Make sound
8. Imagine
9. Listen to all sounds
10. Focus on your breath
11. Open your eyes and just rest (30 sec)
12. Close your eyes (30 sec)

References:

Deolindo, C. S., Ribeiro, M. W., Aratanha, M. A., Afonso, R. F., Irrmischer, M., & Kozasa, E. H. (2020). A critical analysis on characterizing the meditation experience through the electroencephalogram. Frontiers in Systems Neuroscience, 14, 53.

Lopata, J. A., Nowicki, E. A., & Joanisse, M. F. (2017). Creativity as a distinct trainable mental state: an EEG study of musical improvisation. Neuropsychologia, 99, 246-258.

12:05
Der hohle Zahn

ABSTRACT. “Der hohle Zahn” is inspired by Wim Wenders’ 1987 film “Wings of Desire”. “Der hohle Zahn” (“the hollow tooth”) is the nickname Berliners gave the Kaiser-Wilhelm-Gedächtniskirche after it was bombed in 1943. It is from the ruins of the bell tower, the hollow tooth, that Damiel the angel observes the movement of human life: the chaotic and magmatic flux of souls corrupted by their own carnality. The entire composition presents exclusively acoustic materials of the flugelhorn. The natural sound of the instrument symbolises the vitality and conflictual nature of humankind, while the manipulated sounds represent the gaze of the angel staring back at humanity, in the wideness of its vision. A few instrumental gestures [movements], like the few, essential causes of man's insatiable crisis and his recurrent mal de vivre. The incipit and conclusion of the composition overlap; the circle is born and dies within the same instant and at the same point. When does time start? Where does space end?

12:10
"Elevator Pitch" for clarinet and electronics (2022)

ABSTRACT. An elevator opens onto three wars of our present. The voices are those of a group of very young witnesses. Who cries out their plight? Who is listening to their "elevator pitches"? "Elevator Pitch" uses methods from auditory phonetics to analyse and heighten the emotional content of speech of traumatised young boys and girls from three war-torn countries: Nagorno-Karabakh, Ukraine, and Syria. The clarinet ideally moves along the designed soundscape as an empathetic companion to wounded souls.

N.B. Upon selection, financial support from the Italian Cultural Institute will be requested to cover the performer's expenses.

12:45-14:00 Lunch Break
Location: Oktav
14:00-15:30 Session SMC Papers 4
Location: Kungasalen
14:00
Post-mix vocoding and the making of All You Need Is Lunch

ABSTRACT. A six-minute audiovisual presentation, All You Need Is Lunch, was produced using a novel vocoding technique in which the spectrum of a sung word is altered from within a finished stereo mix, avoiding the need for blind source separation. In the piece, snippets of pop music tunes containing the word "love" are altered to say "lunch" instead, as in "where is lunch", "tainted lunch", "saving all my lunch for you", etc. To do this, the utterance "lunch" is analyzed using an additive-synthesis model, and the musical recording to be altered is selectively filtered in specific, time-varying frequency ranges and left untouched elsewhere. The depth of alteration is frequency-dependent and time-varying. The target ("lunch") utterance must be time-morphed to fit optimally onto each individual source utterance ("love"). It proved particularly important, and often difficult, to either suppress or hide the sibilant portion of the "v" consonant. Since over 100 occurrences of the source word were altered, production tools were developed for editing and managing the many time-varying parameters that had to be chosen through critical listening and, ultimately, painstaking trial and error. This paper is proposed as a companion to our artistic submission of All You Need Is Lunch.
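The underlying mechanism, selective time-varying filtering of a finished mix, can be sketched with an STFT mask. This is illustrative only; the piece's additive-analysis-driven masks and time-morphing are far more elaborate.

    import numpy as np
    from scipy.signal import stft, istft

    def selective_filter(x, fs, regions):
        """Scale the mix inside each (t0, t1, f0, f1, gain) time-frequency box,
        leaving everything else untouched."""
        f, t, Z = stft(x, fs=fs, nperseg=1024)
        for t0, t1, f0, f1, gain in regions:
            ti = (t >= t0) & (t <= t1)
            fi = (f >= f0) & (f <= f1)
            Z[np.ix_(fi, ti)] *= gain
        _, y = istft(Z, fs=fs, nperseg=1024)
        return y

    # attenuate 2-4 kHz by 12 dB between 1.0 and 1.2 s (e.g. hiding a sibilant):
    # y = selective_filter(x, 44100, [(1.0, 1.2, 2000, 4000, 10 ** (-12 / 20))])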

14:05
WebChucK IDE: A Web-Based Programming Sandbox for ChucK

ABSTRACT. WebChucK IDE is a web-based integrated development environment for writing and running ChucK code. WebChucK IDE provides tools and workflows for developing and running ChucK on-the-fly and in any web browser, on desktop and mobile devices. This environment integrates ChucK development with visualization and code-based generative web UI elements to offer an accessible and playful way to program computer music. In this paper, we detail the design and implementation of WebChucK IDE and discuss its various affordances and limitations as a sandbox for learning, experimentation, and art making.

14:10
An artistic audiotactile installation for augmented music

ABSTRACT. Music is an art form that organizes and presents vibrations that travel to the ears through air conduction. The range of audible vibrations (10 Hz to 20 kHz) partly overlaps with the range of perceivable tactile vibrations (10 Hz to 1 kHz), making it possible to feel the vibration of music as a pleasant by-product. In this project, we designed a series of vibrotactile installations that respond to specific note ranges. Musicians were then asked to compose a piece of music specifically for this installation to create a piece that could be heard and felt. Each composition was performed in front of a live audience. After the concert, the audience filled out a questionnaire about their experience. The results clearly indicate the different efficiencies of each installation and will help in designing better devices to optimize the tactile musical experience.

14:15
Embodied Tempo Tracking with a Virtual Quadruped

ABSTRACT. Dynamic attending theory posits that we entrain to time-structured events in a similar way to synchronizing oscillators. Hence, a tempo tracker based on oscillators may replicate humans' ability to rapidly and robustly identify musical tempi. We demonstrate this idea using virtual quadrupeds, whose gaits are controlled by oscillatory neural circuits known as central pattern generators (CPGs). The quadruped CPGs were first optimized for flexible gait frequency and direction, and then an additional recurrent layer was optimized for entrainment to isochronous pulses. Using excerpts of musical pieces, we find that the motion of these agents can rapidly entrain to simple rhythms. Performance was found to be partially predicted by pulse entropy, a measure of the sample’s rhythmic complexity. Notably, in addition to having wide tempo ranges, the best performing agents can also entrain to rhythms that are periodic but not quantized on a grid. Our approach offers an embodied alternative to other dynamical systems-based approaches to entrainment, such as gradient-frequency arrays. Such agents could find use as participants in virtual musicking environments, or as real-world musical robots.
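A much-reduced sketch of oscillator-based tempo tracking in the spirit of dynamic attending: a single adaptive phase oscillator, not the paper's CPG-driven quadrupeds. The coupling constants and schedule are assumptions.

    import numpy as np

    def entrain(pulse_times, f0=1.5, dt=0.01, dur=20.0):
        """Phase oscillator nudged at each pulse onset so that its phase and
        frequency drift toward the stimulus tempo."""
        phase, freq = 0.0, f0
        pulses = iter(sorted(pulse_times))
        next_pulse = next(pulses, None)
        for step in range(int(dur / dt)):
            t = step * dt
            phase = (phase + freq * dt) % 1.0
            if next_pulse is not None and t >= next_pulse:
                err = np.sin(2 * np.pi * phase)     # ~0 when locked to the beat
                freq *= 1.0 - 0.1 * err             # slow frequency adaptation
                phase = (phase - 0.05 * err) % 1.0  # fast phase correction
                next_pulse = next(pulses, None)
        return freq

    print(entrain(np.arange(0, 20, 0.5)))  # drifts from 1.5 Hz toward the 2 Hz pulses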

14:20
Generating symbolic music using diffusion models

ABSTRACT. Denoising Diffusion Probabilistic models have emerged as simple yet very powerful generative models. Unlike other generative models, diffusion models do not suffer from mode collapse or require a discriminator to generate high-quality samples. In this paper, a diffusion model that uses a binomial prior distribution to generate piano rolls is proposed. The paper also proposes an efficient method to train the model and generate samples. The generated music has coherence at time scales up to the length of the training piano roll segments. The paper demonstrates how this model is conditioned on the input and can be used to harmonize a given melody, complete an incomplete piano roll, or generate a variation of a given piece. The code is publicly shared to encourage the use and development of the method by the community.
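A sketch of what a forward noising step for binary piano rolls might look like under a binomial prior, with an assumed linear flip-probability schedule (the paper's exact parameterisation may differ); the reverse model would be trained to undo these flips.

    import numpy as np

    def forward_flip(roll, t, rng=np.random.default_rng()):
        """One forward noising step: flip each cell with probability growing
        with diffusion time t in [0, 1], reaching a maximally uninformative
        roll (flip probability 0.5) at t = 1."""
        flips = rng.random(roll.shape) < 0.5 * t
        return np.where(flips, 1 - roll, roll)

    rng = np.random.default_rng(0)
    roll = (rng.random((88, 64)) < 0.05).astype(int)  # sparse toy piano roll
    noisy = forward_flip(roll, t=0.3)                 # partially corrupted roll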

14:25
musif: a Python package for symbolic music feature extraction

ABSTRACT. In this work, we introduce musif, a Python package that facilitates the automatic extraction of features from symbolic music scores. The package includes the implementation of a large number of features, which have been developed by a team of experts in musicology, music theory, statistics, and computer science. Additionally, the package allows for the easy creation of custom features using commonly available Python libraries. musif is primarily geared towards processing high-quality musicological data encoded in MusicXML format, but also supports other formats commonly used in music information retrieval tasks, including MIDI, MEI, Kern, and others. We provide comprehensive documentation and tutorials to aid in the extension of the framework and to facilitate the introduction of new and inexperienced users to its usage.
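The kind of custom feature musif lets users plug in can be sketched with music21, one of the "commonly available Python libraries" the abstract mentions; musif's own registration API is not shown here, so consult its documentation for the actual entry points.

    from music21 import converter, note

    def pitch_range_feature(path: str) -> dict:
        """Pitch range (in semitones) of a MusicXML score as a custom feature."""
        score = converter.parse(path)
        midi = [n.pitch.midi for n in score.recurse().notes
                if isinstance(n, note.Note)]
        return {"pitch_range_semitones": max(midi) - min(midi) if midi else 0}

    # print(pitch_range_feature("aria.musicxml"))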

14:30
Heat-sensitive sonic textiles: increasing awareness of the energy we save by wearing warm fabrics

ABSTRACT. In this paper we describe the development of two heat- and movement-sensitive sonic textile prototypes. The prototypes interactively sonify in real time the bodily temperature of the person who wears them, complementing the user’s felt experience of warmth. The main aim is to make users aware of the heat exchanges between the body, the fabric, and the surrounding environment through nonintrusive and creative sonic interactions. After describing the design challenges and the technical development of the prototypes - in terms of textile fabrication, electronics, and sound components - we discuss the results of two user experiments. In the first experiment, two different sonification approaches were evaluated, allowing us to select the most appropriate for the task. The prototypes’ user experience was explored in the second experiment.
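A toy parameter-mapping sonification of a temperature reading, illustrating the general approach; the prototypes' actual mapping, temperature range, and synthesis are assumptions here.

    import numpy as np

    def sonify_temperature(temp_c, fs=44100, dur=0.25):
        """Return one short audio grain: warmer fabric -> higher, brighter tone."""
        freq = np.interp(temp_c, [20.0, 37.0], [220.0, 880.0])  # assumed range
        t = np.arange(int(fs * dur)) / fs
        tone = np.sin(2 * np.pi * freq * t)
        tone += 0.3 * (temp_c - 20.0) / 17.0 * np.sin(2 * np.pi * 2 * freq * t)
        return tone * np.hanning(t.size)  # fade in/out to avoid clicks

    grain = sonify_temperature(30.0)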

14:35
Playing the Design: Creating Soundscapes through Playful Interaction

ABSTRACT. This study takes inspiration from provocative design methods to gain knowledge of sound preferences regarding future vehicles' designed sounds. A particular population subset was a triggering component of this study: people with hearing impairments. To that end, we have developed a public installation that simulates a hypothetical futuristic city square. It includes three electric vehicles whose sound can be designed by the visitor. The interface allows the user to interact and play with a number of provided sonic textures within a real-time web application, thus "playing" the design. This opens a design space of three distinct sounds that are mixed into an overall soundscape presented in a multichannel immersive environment. The paper describes the design processes involved.

15:30-16:00 Session SMC Concert 3

  CONCERT  

Note: For the exact times of the pieces, please refer to the concert schedule.

Location: Lilla Salen
15:30
Glitch Mass

ABSTRACT. “Glitch Mass” is an acousmatic performance based on sacredness as a concept of absence; simplifying, "God" is everything that does not exist and will never exist. Hence the use of residual material (hence the term "glitch") derived from common audio restoration processes (de-hum, de-reverb, de-esser) applied to recordings of music from the sacred tradition, such as the five sequences authorized by the Council of Trent, or to field recordings of places such as churches and of elements related to spirituality. From a technical point of view, the performance will be developed in the Ambisonic format so as to recreate a liturgical environment that gives space to the musical ritual.

15:35
La Mer Émeraude

ABSTRACT. La Mer Émeraude (2018)

Let us imagine a small invented world, a micro universe where everything exists... matter, energy, spirit, telluric movements, mysteries, natural and supernatural forces. That world is whole and from afar, whoever watches, sees it as a living ocean. This work was composed in the Musiques-Recherches studio and is dedicated to Annette Vande Gorne and Francis Dhomont. It received the second prize at the SIME Competition 2019, the first prize at the Città di Udine Competition 2020, the first prize at the Destellos Competition 2020, and the first prize at the Chicago Composers Consortium Competition.

16:00-17:30 Session SMC Papers 5: Online presentations

The papers are presented remotely, and a moderated panel session with all authors and the on-site participants follows directly in the lecture hall.

Location: Kungasalen
16:00
Temporality Across Three Media: Inner Transmissions

ABSTRACT. With time inextricable from music, as well as theatre and film, creating effective audiovisual works in digital media relies on an understanding of the medium’s effect on the work’s temporality. There is a need to analyze the interaction between virtual environments and time perception within a musical context. This paper discusses the sound installation Inner Transmissions as a case study for examining this issue. The work compares three media: physical space using radio transmission, web-based 360° photo and video, and headset virtual reality (VR). The core concept of the installation remained the same across formats, designed to allow the issue of temporality to be addressed explicitly. Listener feedback suggests expectations, physical interactivity, and environmental movement are important factors in determining how temporality varies between media, posing as affordances for some and limitations for others. The outcomes of this case study provide an example of how temporality functions in musical works created for virtual environments, serving as a starting point for future research in this area.

16:05
Design Process in Visual Programming: Methods for Visual and Temporal Analysis

ABSTRACT. Visual programming languages, such as Pure Data (Pd) and Max/MSP, have been prevalent in computer music for nearly three decades. However, few shared and consistent research methods have emerged for reproducibly studying how digital musical instrument (DMI) designers use these languages. In this paper, we introduce straightforward methods for extracting design process data from Pd usage through automated version control and protocol-based annotation. This data enables visual and temporal analysis, which can reveal patterns of DMI design cognition and collaboration processes. Although our focus is on design, we believe that this approach could also benefit creativity studies and musicological analysis of the compositional process. We present the outcomes of a study involving four groups of DMI designers in a one-hour closed activity and demonstrate how these analysis methods can be used to gain additional insight by comparing them against participant survey data. In discussing how these methods could be enhanced and further developed, we address validity, scalability, replicability, and generalisability. Lastly, we examine motivations and challenges for DMI design cognition research.

16:10
Sculpting Algorithmic Pattern: Informal and Visuospatial Interaction in Musical Instrument Design

ABSTRACT. In live coding, the concept of algorithmic patterns is employed to characterise the improvisation of artistic structures. This paper presents a digital musical instrument (DMI) design study that led to the development of our understanding of proto-algorithmic thinking. The study focused on tools for manually sculpting digital resonance models using clay, with participants following brief technical instructions. The resulting thought process was grounded in systematic embodied interaction with the clay, giving rise to a form of algorithmic thinking that precedes the conceptual formalisation of the algorithm. We propose the term `proto-algorithmic pattern' to encompass implicit, tacit, gestural, and embodied practices that lack a formalised notational language. In conclusion, we explore the implications of our findings for interfaces and instruments in live coding and identify potential avenues for future research at the intersection of live coding and DMI design.

16:15
Conditional sound effects generation with regularized WGAN

ABSTRACT. Over recent years, generative models utilizing deep neural networks have demonstrated outstanding capacity in synthesizing high-quality and plausible human speech and music. The majority of research in neural audio synthesis (NAS) targets speech or music, whereas general sound effects such as environmental sounds or Foley sounds have received less attention. In this work, we study the generative performance of NAS models for sound effects with a conditional Wasserstein GAN (WGAN) model. We train our models conditioned on different classes of sound effects and report on their performances in terms of quality and diversity. Many existing GAN models use magnitude spectrograms, which require audio reconstruction using phase estimation after training. The often imperfect reconstruction of the audio signal has led us to propose an additional audio reconstruction loss term for the generator. We show that this additional loss term improves the quality of the audio generation considerably, with a small sacrifice in diversity. The results indicate that a conditional WGAN model trained on log-magnitude spectrograms, paired with an appropriately weighted reconstruction loss, is capable of synthesizing highly plausible sound effects.
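One plausible form of the regularised generator objective, sketched in PyTorch; the module names (critic, invert, spec) and the exact reconstruction term are assumptions, not the paper's definition.

    import torch.nn.functional as F

    def generator_loss(critic, fake_spec, invert, spec, lam=10.0):
        """Wasserstein generator term plus a reconstruction penalty that keeps
        the generated log-magnitude spectrogram consistent with the spectrogram
        of its own phase-reconstructed audio."""
        adv = -critic(fake_spec).mean()          # standard WGAN generator term
        audio = invert(fake_spec)                # e.g. Griffin-Lim-style inversion
        rec = F.l1_loss(spec(audio), fake_spec)  # audio-reconstruction penalty
        return adv + lam * rec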

16:20
Interactive music score completion

ABSTRACT. The autocompletion paradigm is applied to computer-aided music composition. Integrated into an interactive score editor prototype, a data-driven model is trained on an initial corpus defined by the composer. When the completion function is called, the model suggests possible continuations of musical phrases by superimposing them on the score. The user can then accept or reject a quotation from the corpus, and then modify it. Through interactive, online learning, the modifications made, once validated, are reinserted into the training corpus. Progressively, the completion suggestions become more and more personalized, according to the curatorial and editorial choices made by the composer. The score editor becomes a personal companion for the composer in classical tasks such as composition, harmonization, arrangement, and orchestration.
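A toy data-driven completion model in the spirit of the prototype (the paper's actual model is not specified here): an order-2 Markov chain over pitches, retrained as validated edits flow back into the corpus.

    import random
    from collections import defaultdict

    class CompletionModel:
        """Order-2 Markov chain over pitches, retrained from validated edits."""

        def __init__(self):
            self.table = defaultdict(list)

        def learn(self, phrase):                          # phrase: list of pitches
            for a, b, c in zip(phrase, phrase[1:], phrase[2:]):
                self.table[(a, b)].append(c)

        def suggest(self, context, length=4):
            out = list(context[-2:])
            for _ in range(length):
                options = self.table.get(tuple(out[-2:]))
                if not options:
                    break
                out.append(random.choice(options))
            return out[2:]

    m = CompletionModel()
    m.learn([60, 62, 64, 65, 67, 65, 64, 62, 60])  # corpus phrase (C major)
    print(m.suggest([60, 62]))                     # -> [64, 65, 67, 65]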

16:25
A Qualitative Investigation of Binaural Spatial Music in Virtual Reality

ABSTRACT. Virtual reality (VR) games and applications strive to create as believable an illusion of “being there” in the virtual environment as possible. This implies fidelity in all aspects of technical reproduction, including the spatial qualities of audio content. “Spatial music” is a phenomenon that has incited growing interest among researchers, sound engineers and composers. Even though VR offers a perfect environment for spatial music experiences and experiments, most music soundtracks are typically presented to the player without situational spatial positioning or other game-scene-related spatial cues. This article studies how people experience spatial sound and music in a multisensory VR context, using multidisciplinary technical and artistic methodologies. It examines a VR experience with two music conditions: a normal stereo music soundtrack, and a spatially auralised music soundtrack. Qualitative interviews were conducted to gather information about the subjective experiences of music spatialisation in the VR experience, and the data gathered was evaluated using qualitative content analysis. The data reveals advantages and disadvantages of music spatialisation in VR, as well as the potential to increase immersion and to affect players’ perceptions of virtual spaces. The findings indicate that spatial music could enhance the illusion of “being there” in the virtual environment.

16:30
Comparing various sensors for capturing human micromotion

ABSTRACT. The paper presents a study of the noise level of accelerometer data from a mobile phone compared to three commercially available IMU-based devices (AX3, Equivital, and Movesense) and a marker-based infrared motion capture system (Qualisys). The sensors are compared in static positions and for measuring human micromotion, with larger motion sequences as reference. The measurements show that all but one of the IMU-based devices capture motion with an accuracy and precision that is far below human micromotion. However, their data and representations differ, so care should be taken when comparing data between devices.
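A sketch of a static-position noise measure consistent with the study's description (the authors' exact metric may differ): the standard deviation of the acceleration magnitude while the sensor lies still.

    import numpy as np

    def noise_floor(acc):
        """Standard deviation of acceleration magnitude while the sensor is still.

        acc: (n_samples, 3) array of x/y/z readings in m/s^2.
        """
        return float(np.linalg.norm(acc, axis=1).std())

    rng = np.random.default_rng(1)
    still = rng.normal([0.0, 0.0, 9.81], 0.002, size=(10_000, 3))  # toy recording
    print(noise_floor(still))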

17:30-18:30 Session SMC Concert 4

  CONCERT  

Note: For the exact times of the pieces, please refer to the concert schedule.

Location: Lilla Salen
17:30
Morphogenesis

ABSTRACT. Morphogenesis is the process through which cells differentiate to form an organism, and the genesis of this piece mirrors exactly this biological function. Starting from a limited pool of similar samples, all the sounds were crafted and organised to fit a specific place in the overarching form of the composition. As the biological process gives shape to an organism out of many unspecialised cells, so did the compositional process shape the piece using only a few samples and some of the most basic manipulation processes. The sounds employed, field recorded, fall into the category of found sounds and are usually considered noise. Important, therefore, was the minimalistic approach to the composition of this piece concerning the "poor" sonic material used, the basic manipulations employed, and the bottom-up way of working, which valued each sound object as the foundation of the composition. When we look at a living organism—for example, our body—we don't think about the basic building blocks, the cells of which it is composed, somehow overlooking them. Morphogenesis is, in this sense, an attempt to blend the importance of the overall structure while recognising the fundamental role played by the atomic units of which it is composed, which are identifiable throughout the whole composition.

17:35
"Backlash": an idiomatic live-electronics performance to the KMH Klangkupolen

ABSTRACT. “Backlash” is an attempt at an idiomatic live-electronics performance for the KMH Klangkupolen in which the sound system is no longer a mere spatialisation tool but becomes the medium of a site-specific dynamical system generating sound that evolves and self-adapts in a decentralised manner. Each of the 29 loudspeakers of the Klangkupolen is associated with a specific deformation of the temporal development of the sound, with a view to fostering the emergence of spatial and temporal perceptual phenomena that do not correspond to the sum of the individual elements but rather to the collective result of the entire system. In “Backlash”, the models of sound generation in micro-temporality coincide with the models of musical organisation and temporal development, whereby the conformation of the acoustic space and the sound reproduction system in question are at the centre of these models. The idea of developing the spatialisation of a sound image, in which the idea of a sound pre-existing it is implicit, is not present here; rather, the spatial qualities of the composition result from the complex interactions of the system, which shapes the sound as a constantly changing frame. The non-linear nature of the system means that the two performers are no longer the dominators of a system meant to be controlled; rather, they favour unpredictable behaviour, and a situation of constant interaction is established.

17:40
CATÁSTROFES (2023) FOR MULTICHANNEL SYSTEM ARRAY

ABSTRACT. Catástrofes (2023) is a musical work composed for a multichannel system array, entirely programmed in Max/MSP, combining different types of sound synthesis and ambisonics spatialization. The piece is malleable regarding the spatialization possibilities, considering ambisonics orders, 2D or 3D sound fields, and multichannel diffusion (i.e. the number of loudspeakers), and is intended to be performed at the Klangkupolen speaker dome. The term “catástrofes” refers to the “catastrophes” concept of René Thom, i.e. continuous dynamic systems that present ruptures when critical points are reached, such as continuity breaks resulting in qualitative perceptual leaps. The piece is conceived as a sound continuum in which the textures and sound masses are gradually built up from the addition of up to 26 voices employing different synthesis techniques. Three main sound masses are juxtaposed, formed by different superpositions of partials, and their transitions are performed gradually through timbre interpolation. From this structure, psychoacoustic phenomena such as roughness and beats emerge as saliences and ruptures in listening. This composition was conceived in multidisciplinary research using computer tools, combining artistic and technological methodologies. The relationship between perception and action was strongly employed in the creative process through interactive and operational listening.