Exploring the Expressive VR performance of Csound Instruments in Unity
ABSTRACT. Electronic music instruments have revolutionized musical performance. These devices allow musicians to perform using sound synthesis, unlocking limitless possibilities from countless algorithms, from the thoroughly explored to the cutting edge. However, as sound synthesis technologies evolve, digital instruments must be reimagined alongside them. An instrument that fully utilizes the capabilities of modern synthesizer technology should allow the performer not only to produce novel sounds, but also to be more expressive in their performance. This paper explores such expressiveness through an instrument created in VR, the Laser Synth.
Exploring Interactive Composition Techniques with CsoundUnity and Unity
ABSTRACT. This paper presents different techniques and systems that were used to create an interactive composition using Csound, Unity and CsoundUnity. The paper discusses the creation of compositional and performative systems designed by combining the synthesis power of Csound with the interactive game mechanisms of Unity. These systems include: generative music driven by C# logic and played through Csound instruments, trigger-based control systems mimicking MIDI note on/off events using Unity's collision and rigidbody mechanics, and transform objects and controllers functioning as real-time controls like knobs and sliders. Taking advantage of both systems, it became possible to create the game-like composition la foret.
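As a rough illustration of the trigger- and transform-based control the abstract describes, the following C# sketch shows how a Unity collision might be turned into a Csound score event and an object's height into a continuous control. The instrument number, channel names and value ranges are illustrative assumptions, not taken from the paper; it assumes the CsoundUnity component's SetChannel and SendScoreEvent methods.

```csharp
using UnityEngine;

// Minimal sketch (not the paper's code): a collider that triggers a Csound
// note on impact, and a transform whose height is mapped to a control channel.
// Instrument 1 and the "amp"/"cutoff" channel names are assumptions.
[RequireComponent(typeof(CsoundUnity))]
public class CollisionNotePlayer : MonoBehaviour
{
    private CsoundUnity csound;

    void Awake()
    {
        csound = GetComponent<CsoundUnity>();
    }

    // Mimic a MIDI note-on: fire a short score event when something hits us.
    void OnCollisionEnter(Collision collision)
    {
        // Impact speed scaled to 0..1 and exposed as an amplitude control,
        // followed by a fixed half-second note on instrument 1.
        float velocity = Mathf.Clamp01(collision.relativeVelocity.magnitude / 10f);
        csound.SetChannel("amp", velocity);
        csound.SendScoreEvent("i1 0 0.5");
    }

    // Use the object's height like a slider: map it to a k-rate channel.
    void Update()
    {
        float cutoff = Mathf.Lerp(200f, 5000f, Mathf.InverseLerp(0f, 3f, transform.position.y));
        csound.SetChannel("cutoff", cutoff);
    }
}
```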
Csound in the MetaVerse – From Cabbage to CsoundUnity and Beyond: Developing a Working Environment for SoundScapes, SoundCollages, and Collaborative SoundPlay
ABSTRACT. Csound in the MetaVerse is an immersive multiplayer system built in Unity for Meta Quest XR headsets that supports new ways to interact with Csound instruments and effects. Players are colocated in shared physical or virtual spaces, either locally, playing together in the same physical space, or remotely, joining other players over the internet. In these VR and AR worlds, sounds appear as physical objects that players can hit, grab, stretch, squeeze or toss away while they continue sounding and wandering freely on their own. One can also 'connect' to the sounds via 'cords' and control individual or multiple parameters with buttons or physical gestures. This system offers new ways to play with sound in time, to play with sounds in space, and to play with each other's sounds. In this paper, we highlight small excerpts from the code that provides the means for some of the more exciting, unique and important features that enhance the capabilities of CsoundUnity and make possible some of the uniquely powerful modes of interaction and collaboration that our Csound in the MetaVerse environment offers.
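A hedged sketch of how such a graspable, stretchable "sound object" might report its state to Csound each frame (not the authors' code; the "orb.width" and "orb.amp" channel names and the mapping ranges are assumptions for the example):

```csharp
using UnityEngine;

// Illustrative only: a grabbed orb's scale and distance from the listener
// are forwarded to Csound control channels via CsoundUnity.SetChannel.
public class SoundOrbController : MonoBehaviour
{
    public CsoundUnity csound;   // reference to the CsoundUnity component
    public Transform listener;   // e.g. the player's head/camera

    void Update()
    {
        // Stretching the orb changes its local scale; map that to a "width" control.
        float width = Mathf.Clamp01((transform.localScale.x - 0.1f) / 2f);
        csound.SetChannel("orb.width", width);

        // Tossing the orb away changes its distance; map distance to amplitude.
        float distance = Vector3.Distance(transform.position, listener.position);
        float amplitude = 1f / (1f + distance);
        csound.SetChannel("orb.amp", amplitude);
    }
}
```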
Face Tracking with CsoundUnity: Converting Smiles into Sounds
ABSTRACT. Csound has been widely used for sound synthesis and live performance. While much exploration has been done in expanding the potential of music-making with Csound, few studies have looked into developing Csound-based music-making tools for people with physical conditions and/or disabilities. This paper presents a preliminary design and implementation of a face tracking-based musical expression system utilizing CsoundUnity's sound design capabilities for real-time musical performance. This development aims to provide alternative methods for people with limb motor impairment to express music through facial gestures. Users can control parameters of Csound instruments through facial movements such as, but not limited to, opening their mouths and winking. The paper also discusses observations from user testing sessions with patients at a rehabilitation facility.
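A minimal, hypothetical sketch of the kind of mapping the abstract describes, assuming normalized face-tracking coefficients (e.g. ARKit-style "jawOpen" and "eyeBlinkLeft" blendshapes) are supplied by whatever tracking SDK is in use; the channel name, instrument number and thresholds are illustrative only:

```csharp
using UnityEngine;

// Hypothetical sketch, not the paper's implementation: forwards face-tracking
// coefficients (0..1) to Csound. The mouth opening drives a continuous control,
// while a wink acts as a gated trigger for a note event.
public class FaceToCsound : MonoBehaviour
{
    public CsoundUnity csound;

    private bool winkHeld;   // prevents a held wink from retriggering notes

    // Called by the face-tracking layer each frame with normalized coefficients.
    public void OnFaceUpdate(float jawOpen, float leftWink)
    {
        // Mouth opening sweeps a filter cutoff between 200 Hz and 4 kHz.
        csound.SetChannel("cutoff", Mathf.Lerp(200f, 4000f, jawOpen));

        // A wink crossing the threshold triggers a one-second note.
        if (leftWink > 0.8f && !winkHeld)
        {
            csound.SendScoreEvent("i1 0 1");
            winkHeld = true;
        }
        else if (leftWink < 0.5f)
        {
            winkHeld = false;   // hysteresis so one wink = one note
        }
    }
}
```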
10:40-11:00 Coffee Break + Installation Session (Heintz) – AW K0101
Opening mind by opening architecture: analysis strategies
ABSTRACT. In numerical signal processing for electroacoustic composition, the progressive loss of dedicated development and research environments, caused by the increasing use of commercial digital tools, has favoured the dominance of the closed-architecture audio processor model. This model, while powerful, allows output data to be described in terms of its perceived characteristics, but at the cost of ignoring the internal processes and interacting systems, which become complex, powerful environments sealed in an inscrutable black box, a loss we must consider. Any digital signal processing technique tells a story. Just as the words of a language incorporate social, historical and technical polysemic layers, a signal processor has its own story of implementation, a gradual technological achievement with its inevitable aesthetic consequences. Through the looking-glass of the literature, one can access those environments with renewed awareness by re-establishing a scientific method and a research attitude. In this specific case, starting from the case study of Manfred Schroeder's historical reverbs, we illustrate the process of building analytical evaluation tools, as well as practical implementations, that form the basis of a conscious study path.
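For reference, a minimal C# sketch of the classic Schroeder reverberator topology that serves as the case study: four parallel feedback comb filters summed and fed through two allpass filters in series. The delay lengths (in samples) and gains are typical textbook values, not figures from the paper.

```csharp
// Schroeder reverberator sketch: parallel feedback combs into series allpasses.
public class SchroederReverb
{
    private readonly Comb[] combs =
    {
        new Comb(1557, 0.84f), new Comb(1617, 0.83f),
        new Comb(1491, 0.82f), new Comb(1422, 0.81f)
    };
    private readonly Allpass[] allpasses =
    {
        new Allpass(225, 0.7f), new Allpass(556, 0.7f)
    };

    public float Process(float x)
    {
        float y = 0f;
        foreach (var c in combs) y += c.Process(x);     // parallel combs
        foreach (var a in allpasses) y = a.Process(y);  // series allpasses
        return y;
    }

    // Feedback comb: y[n] = x[n-D] + g * y[n-D]
    private sealed class Comb
    {
        private readonly float[] buf; private int i; private readonly float g;
        public Comb(int delay, float gain) { buf = new float[delay]; g = gain; }
        public float Process(float x)
        {
            float y = buf[i];        // value written D samples ago
            buf[i] = x + g * y;
            i = (i + 1) % buf.Length;
            return y;
        }
    }

    // Schroeder allpass: v[n] = x[n] + g*v[n-D]; y[n] = -g*v[n] + v[n-D]
    private sealed class Allpass
    {
        private readonly float[] buf; private int i; private readonly float g;
        public Allpass(int delay, float gain) { buf = new float[delay]; g = gain; }
        public float Process(float x)
        {
            float delayed = buf[i];
            float v = x + g * delayed;
            buf[i] = v;
            i = (i + 1) % buf.Length;
            return -g * v + delayed;
        }
    }
}
```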
Integrating Csound into Unreal Engine for Enhanced Game Audio
ABSTRACT. Unreal Engine is one of the most widely used game engines in the current market, thanks to its exceptional flexibility and strong graphical capabilities. Recently, the development team has introduced a new tool called MetaSounds, designed to facilitate sound synthesis, digital signal processing and sound design natively, within a node-based interface.
Despite its user-friendly interface, MetaSounds still lacks certain functionalities present in older sound engines such as Csound or SuperCollider. Currently, integrating Csound into Unreal requires middleware such as FMOD or Wwise, along with Cabbage to export Csound code as a VST. However, a MetaSounds node that inherently incorporates Csound, without external dependencies and with MetaSounds' adaptable, intuitive and powerful graphical interface, would be a significant advancement. Because Unreal Engine supports C++ implementations and lets developers craft their own MetaSounds nodes, it is possible to integrate Csound within a MetaSounds node through the Csound C++ API.
The advantages of multi-dimensional interfaces for the future of Csound
ABSTRACT. Present-day micro-controllers allow many physical properties to be translated to and from the digital domain. As non-trivial sound synthesis encompasses a large number of controlling variables, the necessary properties of an effective interface are discussed. The dichotomy between the analytic approach of computer-mediated electro-acoustics and Gestalt-based integrated human perception is shown. Special emphasis is laid on the importance of simultaneous multi-modal presentation for sensory integration and the vital role played by haptic and proprioceptive feedback. Comparisons are made between the established conventions of analog electronic equipment and the relative pioneering status of computer synthesis. Three interface designs are presented, illustrating practical steps on a possible path forward and the implications these would have for Csound's further development.
ABSTRACT. Richard Boulanger's CsoundScapes in the MetaVerse (2024) uses colocation to immerse local and remote players, wearing Quest 3 XR headsets, into virtual performance spaces where they conjure and share Csound SoundObjects (orbs) that they hit, squeeze, stretch, and toss about. The 10-minute piece is a structured improvisation whose score consists of a number of Boulanger's generative, imitative, procedural, percussive, modeled, textural, ambient, drone, environmental, random, rhythmic, FM, AM, RM, granular, waveguide, scanned, sample-hold, and sample-based Csound instruments converted to Cabbage and imported into Unity for use with the CsoundUnity API. The sliders and triggers from the Cabbage UI have been mapped to buttons, grips, triggers, joysticks and physical hand gestures on the Quest, and they appear as editable and assignable controls in the innovative and versatile Csound in the MetaVerse system designed by Hung Vo. In Strong Bear's Csound MetaVerse, players can see each other and collaborate with each other in the creation, modification, and performance of their Csounds and CsoundOrbs. This system and this work represent a new way to play, design, and explore unique soundworlds together, locally and remotely, in mixed reality. A unique feature of this system is how the remote players appear in the local performance space.
ABSTRACT. The composition attempts to tell an imaginary story through a "sound fable". A female child with beautiful eyes is incarcerated alone in a huge prison, completely dark and without windows. She is unable to speak; the only glimmer of communication is the sound she hears when striking one of the steel bars of her suspended room. Through this sound, transforming it in her mind, she embarks on a dreamlike journey; along the way, her imagination gains strength and, trying to contain it, she builds a "sound mosaic" that slowly falls apart to gently lead her into a parallel reality, removing the emptiness of her perception, finally returning to her prison with her life altered. She doesn't fight; she simply teaches who she is. And the "sound fable" continues...
The composition is inspired by a recurring dream and is dedicated to my dear friend Ottavia.
ABSTRACT. “Ordinary Rehearsals” is an electroacoustic piece that utilizes digital techniques inspired by the traditional workflows of tape studio recording. It uses Csound for sound synthesis through sampling, articulating complex sound gestures from initially contrasting materials. These materials are designed to evolve into a dialogue, seeking moments of equilibrium. Scores are algorithmically generated within a computer algebra system, ensuring a sophisticated integration of computational precision with artistic expression. The piece intricately explores the tension and dialogue between disparate sound elements.
ABSTRACT. Ripples in the Fabric of Space-Time constitutes the fourth movement of Jon Christopher Nelson’s The Persistence of Time and Memory. The work was composed making extensive use of Csound physical modeling opcodes driven by MPE data from a ROLI Seaboard controller. It explores correlations between modulated physical sound models and sampled sound environments.