ABSTRACT. Whether your musical journey begins in the family recorder quartet or in a wedding band, a college choir, or a community orchestra, making and playing music with others is one of life’s greatest and most memorable pleasures. For many years now, the authors have collaborated on compositions and enjoyed performing together on concert stages in the US, Asia, and Europe. For the past six years, they have been co-writing a new set of pieces in which, over the web, they are accompanied by and interact with a generative algorithmic computer ensemble, both playing and controlling Csound instruments on each other’s laptops and in each other’s home studios. The code, design, research, and advice presented in this paper are the result of realizing the most recent of these ‘long-distance’ Web duets: the composition "Eleven Questions". In it, the authors share how compositional goals lead to design solutions and how those design solutions steer the work in new and different directions, often far beyond what they had originally imagined. The ideas, instruments, and algorithms shared here will hopefully be of use to other Csounders wishing to travel similar creative paths.
Frequency Modulation with Feedback in Granular Synthesis
ABSTRACT. The paper investigates audio synthesis with frequency modulation feedback in granular synthesis, comparing it with regular FM feedback. The combination of these two classic synthesis techniques opens some promising areas of exploration. As a full exploration of this potential is beyond the scope of this paper, we instead give insight into some initial experiments and share the tools used, encouraging the reader to dive deeper into parameter combinations not yet described.
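The core of the technique can be sketched compactly. Below is a minimal Python illustration of a single grain of single-oscillator feedback FM (the previous output sample fed back into the phase, shaped by a Hann window acting as the grain envelope); the function name and default parameters are illustrative, not those of the paper's actual tools.

```python
import math

def feedback_fm_grain(freq=220.0, beta=1.2, dur=0.02, sr=44100):
    """One grain of single-oscillator feedback FM:
    y[n] = sin(2*pi*freq*n/sr + beta * y[n-1]),
    shaped by a Hann window acting as the grain envelope."""
    n_samples = int(dur * sr)
    out = []
    prev = 0.0
    for n in range(n_samples):
        phase = 2 * math.pi * freq * n / sr
        prev = math.sin(phase + beta * prev)  # feedback term
        env = 0.5 - 0.5 * math.cos(2 * math.pi * n / (n_samples - 1))  # Hann window
        out.append(env * prev)
    return out

grain = feedback_fm_grain()
```

In a granular context, many such grains would be scheduled with overlapping onsets and per-grain variations of `freq` and the feedback amount `beta`, which is where the parameter space described above opens up.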
ABSTRACT. This paper discusses the creation of organic generative structures in Csound, exemplified by a concrete artistic example. After a discussion of the properties of an organic generative structure, the example is described in its fundamental aspects: sounds, interdependency, and development. Implementation details are described and shown in code examples. Finally, some open possibilities of such an artistic approach are discussed.
ABSTRACT. Integrated into our daily lives, online systems such as the Web provide essential services and support a wide range of functions and tasks. Among these, Web Audio applications have revolutionized the production, streaming, and exploration of digital audio, offering advanced tools directly accessible from web browsers without the need for third-party software installations.
This paper presents the implementation of realtime convolution reverb using Csound’s engine within a web page container. The source code uses HTML and CSS for interface styling, and JavaScript for the Csound API implementation.
Through this project, our aim is to illustrate how Csound can be employed in crafting audio and multimedia devices for the web, fostering the development of versatile environments for technical and artistic exploration, as well as for educational inclusiveness and accessibility.
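The operation at the heart of convolution reverb can be shown in a few lines. The sketch below is plain-Python direct convolution for clarity only; a realtime engine such as Csound (e.g. its `ftconv` or `pconvolve` opcodes) uses partitioned FFT convolution instead, but the output is mathematically identical.

```python
def convolve(dry, ir):
    """Direct-form convolution of a dry signal with an impulse response.
    Each input sample launches a scaled copy of the IR into the output."""
    out = [0.0] * (len(dry) + len(ir) - 1)
    for i, x in enumerate(dry):
        for j, h in enumerate(ir):
            out[i + j] += x * h
    return out

wet = convolve([1.0, 0.0, 0.5], [0.8, 0.4])
# wet == [0.8, 0.4, 0.4, 0.2]
```

A unit impulse at the input reproduces the impulse response itself, which is why a recorded room response convolved with a dry signal places that signal "in" the room.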
cloud-5: A System for Composing and Publishing Cloud Music
ABSTRACT. The advent of the World Wide Web, adequate support for computer graphics and audio in HTML, and the introduction of WebAssembly as a low-level language and browser-hosted runtime for any number of computer language compilers, have now created an environment well suited to the online production, publication, and presentation of music, visual music, and related media at a professional standard of technical quality. A piece of music on the World Wide Web no longer need be merely a link to a downloadable soundfile or video, or even to a stream. A piece can, indeed, be its own “app” that is live code running at near native speed in the listener’s Web browser. I call this kind of music cloud music because it exists only in the “cloud,” the omnipresent computing infrastructure of the Web. I argue that this creates an entirely new environment for music that, in the future, should be developed with its own social context and to function as an alternative means of disseminating music in addition to live performances, discs, streams, and downloads. Here, I present and demonstrate cloud-5, a system of Web components for producing cloud music including, among other things, fixed medium music, music that plays indefinitely, visuals that generate music, music that generates visuals, interactive music, and live coding. cloud-5 includes a WebAssembly build of the sound programming language and software synthesis system Csound, a WebAssembly build of the CsoundAC library for algorithmic composition including chords, scales, and voice-leading, the live coding system Strudel, and supporting code for menus, event handlers, GLSL shaders, and more. A cloud-5 piece thus exists as an HTML page that embeds Csound code and/or score generation code and/or Strudel code and/or GLSL code, in the context of a static Web site that can be served either locally (for composing and performing) or remotely on the World Wide Web (for publication).
cloud-5 differs from related online music systems not only by incorporating Csound and CsoundAC, but even more by being designed primarily as a new medium of presentation, performance, and publication.
ABSTRACT. With the development of Bare-metal Csound, chips, microcontrollers, or boards with ARM-based CPUs can now be targeted to run Csound audio programs. This installation will demonstrate the potential of this development through an interactive, generative Csound piece running on a Digilent Zybo Z7020 board, which contains a Xilinx Zynq 7000 SoC. Csound’s generative and synthesis capabilities will be interfaced with motion-sensing through LIDAR sensors to capture and convert motion in any of the common spaces of the conference into varied ambient sonic results. The purpose of this installation is to create an interactive ambience for a common space and to showcase the potential and portability of Bare-metal Csound.
ABSTRACT. A meditation on Csound as living software and reflections on living with this program exploring sound and music. In this talk, I will look at Csound 7, the newest generation of our software, and discuss what it offers us today as users and as a community. I will discuss where we are today, as well as short- and long-term plans, and offer some thoughts on what we can do to nurture this program to keep it vibrant and healthy for the days ahead.
ABSTRACT. In April of this year, JUCE announced a new end-user license agreement. While the updated license doesn't signify the immediate demise of Cabbage in its current form, it has presented a unique opportunity to reassess the project as a whole. Consequently, a new version of Cabbage is currently under development from the ground up. The end-user experience will remain largely unchanged: the familiar Cabbage syntax will persist, users will retain access to a wide array of widgets, and they will still be able to export to all popular plugin formats. However, the bulk of the new work will occur behind the scenes. This redesigned version will feature a significantly reduced codebase. Moreover, it will leverage the power of VS Code, providing developers with more options to create modern, responsive, and dynamic user interfaces.
A tracker-based Csound frontend for musicians
ABSTRACT. Music for Csound can be written directly as orchestra and score code, but frontends such as Blue, CsoundQt, and jo_Tracker simplify the creation of musical projects. Csound offers a wide array of opcodes and language features useful for writing complex musical pieces, and with the help of frontends the process of composition is significantly improved, making Csound suitable for projects of any complexity, requiring only patience and work to achieve the desired artistic vision. Useful as Csound's features are, it would still be desirable for a frontend to offer artists a flexible, structured way of editing musical events, so that specific musical features, such as musical segments, can easily be changed or replaced when needed. This paper proposes a new frontend that gives artists a significantly improved workflow for their musical projects, favoring the organized editing of musical segments and instrument code, while also serving as an environment for Csound projects, similar to an IDE for programmers but oriented toward musical composition in the Csound language, with its diverse palette of opcodes for applying a myriad of sound synthesis techniques and musical tricks.
ABSTRACT. Creating envelopes is a valuable resource for giving movement to sound. Here we present a tool that facilitates the creation of complex envelopes thanks to a graphical interface in which the user can quickly draw the curves necessary for the most varied musical purposes. Four typical needs in envelope creation are identified and discussed: management of the overall profile, tremolo, looping, and a random component. The output of this software is Csound code. Designed particularly for beginners learning Csound, this tool facilitates understanding of the envelope in the context of the parameter to which it is applied, and provides ready-made code that is especially useful for very complex shapes.
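The translation from drawn curve to Csound code can be sketched simply. The function below is a hypothetical illustration (the actual tool's output format may differ): it converts a list of (time, value) breakpoints into a `linseg` opcode line, which takes an initial value followed by duration/target pairs, so absolute times must be converted to segment durations.

```python
def breakpoints_to_linseg(points):
    """Turn (time, value) breakpoints into a Csound linseg line.
    Example input: the points a user might draw for an attack-decay shape."""
    t0, v0 = points[0]
    args = [f"{v0:g}"]
    for (t_prev, _), (t, v) in zip(points, points[1:]):
        args.append(f"{t - t_prev:g}")  # segment duration
        args.append(f"{v:g}")           # segment target value
    return "kenv linseg " + ", ".join(args)

line = breakpoints_to_linseg([(0, 0), (0.1, 1), (0.5, 0.6), (2, 0)])
# line == "kenv linseg 0, 0.1, 1, 0.4, 0.6, 1.5, 0"
```

Tremolo, looping, and randomization would layer further processing on top of such a breakpoint list before code generation.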
Cordelia, crafting a method while live coding in Csound
ABSTRACT. This paper introduces Cordelia, a domain-specific language at the intersection of live coding and contemporary composition practices. Designed to generate Csound and other code on demand, Cordelia integrates diverse musical elements such as envelopes, tuning systems, and various instrument types. By prioritising resource efficiency and flexibility, it enables seamless transitions between live coding sessions, DAW scripting in environments like Reaper, and graphical scoring. The paper highlights some particular features of Cordelia and its architecture, suggesting its potential to broaden the creative possibilities of contemporary composition.
ABSTRACT. This paper introduces a Python-based TCP socket server designed for collaborative live coding sessions using the Csound engine, aimed at enhancing group music creation. The server facilitates real-time, multi-client connectivity, allowing users to dynamically create and manipulate custom Csound instruments. The system is equipped with an internal loop mechanism that manages quantized events and chord transitions, providing a rhythmic backbone for musical compositions.
Participants can engage concurrently, using a suite of commands that interact intelligently with ongoing chord changes to modify specific p-fields of the Csound instruments produced. This feature ensures that musical expressions are both responsive and adaptive to the evolving sonic environment. Additionally, the system offers a variety of tools that support user interactions. Users can query and identify various components such as instruments, channels, and buses within the system. This transparency facilitates an intuitive understanding of the shared musical workspace.
Moreover, the architecture allows for the manipulation of loop events tied to the server's clock. Users can easily subscribe, modify, or remove their events, enabling a fluid and dynamic compositional process. By supporting direct manipulation of musical elements in a live setting, the server not only fosters individual creativity but also enhances collaborative efforts among users.
Designed as a fun and innovative project, this server is an excellent platform for both novice and experienced musicians to experiment with collaborative composition and live performance in a digital setting. It provides a playful yet robust framework for musical exploration and interaction.
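The multi-client architecture described above can be sketched with Python's standard library. The command names (`define`, `list`) and the shared instrument registry below are illustrative assumptions, not the actual server's protocol; a real implementation would additionally forward compiled instrument code and scheduled events to the Csound engine.

```python
import socket
import socketserver
import threading

class LiveCodingHandler(socketserver.StreamRequestHandler):
    """Handles one client connection; one thread per client."""
    def handle(self):
        for raw in self.rfile:
            cmd = raw.decode().strip()
            if cmd.startswith("define "):
                # register a named instrument in the shared registry
                name = cmd.split()[1]
                self.server.instruments[name] = cmd
                self.wfile.write(b"ok\n")
            elif cmd == "list":
                # let any client inspect the shared workspace
                names = ",".join(sorted(self.server.instruments)) or "-"
                self.wfile.write(names.encode() + b"\n")
            else:
                self.wfile.write(b"err\n")

class LiveCodingServer(socketserver.ThreadingTCPServer):
    allow_reuse_address = True
    def __init__(self, addr):
        super().__init__(addr, LiveCodingHandler)
        self.instruments = {}  # registry visible to all connected clients

server = LiveCodingServer(("127.0.0.1", 0))
threading.Thread(target=server.serve_forever, daemon=True).start()

# a client defines an instrument, then queries the workspace
with socket.create_connection(server.server_address) as sock:
    f = sock.makefile("rwb")
    f.write(b"define pluck\nlist\n"); f.flush()
    reply1, reply2 = f.readline(), f.readline()

server.shutdown()
```

The quantized loop mechanism would run in a further thread, ticking on the server's clock and emitting the subscribed events at each beat boundary.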
ABSTRACT. Csound has developed in many ways in the past two decades, not only in terms of its language and the different usage cases, but also in the structure of development and community. I would like developers and users to discuss some of these items.
ABSTRACT. This installation will showcase four projects created and programmed in CsoundUnity by Professor Richard Boulanger’s Electronic Production and Design students at the Berklee College of Music in Boston. Individual players and small groups will be able to choose from and enter immersive VR, AR, and XR worlds where they can: 1. Wander through Zhong’s beautiful generative Sound Garden (La forét) and play some classic Chinese instruments; 2. Design and play expressive Csound instruments in Kobayashi’s Sound Lab (Laser Synth); 3. Turn a smile into a sound with Liu’s Face Tracing system; or 4. Colocate to see and collaborate with multiple local and remote players as you and they create, hit, stretch, squeeze, contort, reshape, grab, pass, catch and launch Vo’s “SoundOrbs” and “SoundWanders,” under the stars, on the beach, over the rooftops, and under the sea (Collaging in the MetaVerse with CsoundMeta). These CsoundUnity worlds will be installed on a number of Meta Quest 2+3 MR headsets and screencast onto multiple laptops. This will allow many to explore and play simultaneously while others can watch them play as they wait for an available headset to immerse themselves in these powerful, versatile, and fun VR soundworlds.
ABSTRACT. The paper discusses the piece “Silence(d)” (2020) for voice and electronics by the composer Marijana Janevska. It includes a program note about the piece, a biography of the composer-performer, a technical rider, information about the duration, and a link to a realization of the piece.
ABSTRACT. “Space, the final frontier..”
Since my youth, I have been fascinated by the imaginative influence the stars have on our culture and society.
All the planets of our solar system influence each other, and as soon as a sufficiently heavy object enters their gravitational field, they change their behavior and path.
Similar things happen between us humans. We enter each other's lives, exert an influence, and then we leave (or get kicked out).
ABSTRACT. The composition Cstück Nr. 2 (2015) was created using CsoundQt and comprises two principal sound sources. It oscillates between sound and noise, thereby creating a morphing between the timbres and characters of voices and brass instruments, which rise and fall in new sound fields.
ABSTRACT. This work is based on three words selected from poems by the Argentine poet Alejandra Pizarnik: Errancia, Resolar, and Grismente. All three are portmanteaux, that is to say, words composed out of several other words through processes of imbrication and combination. Other sonic materials are also used in the work, as the poems make explicit reference to them, and their sounds are familiar and very ostensible: water, birds, and wind. Sometimes these three elements are combined to produce a kind of abstract sonic landscape while various vocal sounds are presented, either in the form of unintelligible sequences or in sequences that expand parts of the three selected words. The work was composed entirely using the Csound program, plus several other general tools for mixing and mastering. The author has made the technical details, as well as the Csound code of the spatial-spectral granulation resources he used, available in several publications.
ABSTRACT. The piece is spatially encoded in 7th-order Ambisonics. The sounds were generated exclusively with the sound synthesis program "Csound" and the composition editor "blue". The spatialisation in 7th-order Ambisonics was likewise done with my own Csound code within my blue environment. The piece is about the generating forces of nature.
ABSTRACT. The "Gendy Cloud" (2022) is a real-time, networked, multichannel music piece performed by WORC, a telematic ensemble. Members control their instruments remotely via a web interface, manipulating one or more instances of a Csound-based software instrument implementing Xenakis's GENDYN algorithm. Inspired by "Xenakis22: The Centenary Symposium," this project commemorates Xenakis, enabling collaborative music-making across distances. The piece was performed at Xenakis Networked Music Marathon (Athens, 2022), Sonified Symposium (Istanbul, 2022) and presented as a demo at the Internet of Sounds Symposium (Pisa, 2023), also showcased at the Csound website.
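Xenakis's GENDYN (dynamic stochastic synthesis) defines a waveform by a set of breakpoints whose amplitudes and durations each perform a bounded random walk from cycle to cycle; Csound offers this family of generators as the `gendy` opcodes. The Python sketch below illustrates only the amplitude walk with mirror reflection at the boundaries; the parameter names and values are illustrative, not those of the WORC instrument.

```python
import random

def gendy_step(amps, amp_step=0.1, mirror=1.0):
    """One GENDYN-style update: each breakpoint amplitude takes a bounded
    random step and is reflected ('mirrored') back into [-mirror, +mirror].
    Durations evolve the same way in the full algorithm."""
    out = []
    for a in amps:
        a += random.uniform(-amp_step, amp_step)
        while abs(a) > mirror:  # mirror reflection at the boundaries
            a = 2 * (mirror if a > 0 else -mirror) - a
        out.append(a)
    return out

random.seed(0)
amps = [0.0] * 12          # twelve breakpoints per waveform cycle
for _ in range(200):       # let the waveform drift for 200 cycles
    amps = gendy_step(amps)
```

Audio is then produced by interpolating between the breakpoints each cycle, so the timbre continuously wanders; in the networked piece, remote performers steer parameters such as the step sizes and barriers.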
ABSTRACT. Traverse: for Recorder and Electronics is an eight-minute electroacoustic composition for acoustic recorder and electronics. All sounds are created through live improvisational melodies performed by the composer on soprano and alto recorders, then processed with a range of Csound and Cabbage plugins, such as those from the McCurdy Collection and Cabbage plugins made by the composer. The piece depicts the composer’s journey of walking away from a place that is filled with sadness. The electronic sounds created through processing the recorder performance convey the memory flashbacks and swirling emotional disorientation. The piece eventually returns to the theme, depicting the composer’s feelings when returning to the same place after years. With varying articulations in the recorder performance, processed with effects like spectral delay using Csound, the ending of the piece conveys the composer, though still agitated, learning to be at peace with the past. The composer is the performer of the piece, and will attend the conference to perform it live if accepted.
ABSTRACT. “Caibleadh, voices you hear in the distance at sea..especially on calm night in the mouth of the bay…They say not everyone can hear these things, but you have to be there at the right time”.
ABSTRACT. REEHD is not based on sounds of real instruments, but on sounds generated by physical modeling. Physical modeling makes it possible to go beyond the limits imposed by real instruments as well as those imposed by human players. This can result in certain sounds no longer having any relation to known instrumental sounds. In REEHD, sound objects interact as sound gestures as well as textures in a concept of composed spatial counterpoints in virtual spaces.
"But no one should be afraid that looking at signs leads us away from things; on the contrary, it leads us into the innermost of things." (Gottfried Wilhelm Leibniz, 1646-1716)
ABSTRACT. ‘Eleven Questions’ (2024) is an 8-channel internet duet with an ‘ensemble’ consisting of 4 ‘generative’ computer players (the ‘choir’) and 2 live ASCII players – one playing on stage in the concert hall and the other playing remotely over the Web via OSC and ZeroTier. The remote player is projected into the concert hall via Zoom. The live coding of the on-stage performer is projected onto another screen. Both hear the entire work as it is realized in real-time; both send and receive 'text-print' messages as feedback, informing each other about what motives (questions) they are selecting, what transpositions and tempi they are setting, what chords and timbres they are playing, and how they might be affecting the sounds of the computer players and each other. Over the course of the 7-minute piece, the ‘tunings’ of the computer harmonies and the melodic motives move from 59-tone to 12-tone. Each motivic ‘question’ and every note from the ‘choir’ comes from a discrete location, and the live performers have complete control over the timing, the tempo, the register, the dynamics, and the overall mix of all the elements in the piece. As they listen to each other, and to the computer, they question and answer, accompany and lead, complement and contradict; in some ways, “Eleven Questions” could be considered a structured internet Csound jam, as it is never exactly the same, but the players are all always ‘reading’ from the same algorithmic ‘lead-sheet’.