ABSTRACT. Over the past decade, the growth of the Csound user base in Iran has had a profound impact not only on electronic music but on the general music scene as well. The software has empowered young composers to articulate their creative visions more effectively and to perform their pieces more easily, thereby contributing to a vibrant and evolving music scene. The accessibility of Csound, its open-source nature, and the boundless creative opportunities it offers have made it a favorite companion for composers. There is also a noticeable increase in the number of composers who use Csound. This not only benefits composers by providing them with new tools and possibilities but also introduces fresh perspectives for listeners: audiences are attending live electronic music performances in growing numbers each year, which has enriched the connection between composers and their public. Given these observations, Csound has clearly had a unique and valuable influence on the contemporary Iranian music landscape. The aim of this article is therefore to highlight the significance of Csound in Iranian music. To better understand its widespread appeal in Iran, a set of questions was put to musicians who have experience with Csound; their responses, presented in the following, shed light on the positive impact Csound has had on their artistic journeys.
Using SOFA HRTF Files with Csound Binaural Opcodes
ABSTRACT. The Csound HRTF opcodes were initially written for use with a generic 'dummy head' dataset of location measurements. More recently, the field of binaural processing has enjoyed a renaissance through the proliferation of virtual loudspeaker processing. In parallel, the SOFA file format has been developed to store HRTF datasets in a defined manner. This paper discusses a method to allow the Csound HRTF opcodes to use any SOFA HRTF dataset. The outlined approach (available as a command-line tool) takes any given SOFA HRTF dataset and preprocesses it to work with the existing opcodes; it essentially stores HRTFs for each location defined in the original 'dummy head' dataset used. A rigorous interpolation algorithm is used to derive HRTFs for non-measured locations where necessary.
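A minimal sketch of how such a preprocessed dataset would be used with the existing hrtfstat opcode. The file names mysofa-left.dat, mysofa-right.dat and source.wav are hypothetical stand-ins for the tool's output and an arbitrary mono source; the stock datasets shipped with Csound are hrtf-44100-left.dat and hrtf-44100-right.dat.

<CsoundSynthesizer>
<CsInstruments>
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1

instr 1
  iAz   = p4                          ; azimuth in degrees
  iElev = p5                          ; elevation in degrees
  aSrc  diskin2 "source.wav", 1       ; any mono source file (hypothetical name)
  ; static binaural placement using a preprocessed (SOFA-derived) dataset
  aL, aR hrtfstat aSrc, iAz, iElev, "mysofa-left.dat", "mysofa-right.dat"
        outs aL, aR
endin
</CsInstruments>
<CsScore>
i 1 0 5  90  0    ; hard right
i 1 5 5 -45 30    ; front-left, raised
</CsScore>
</CsoundSynthesizer>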
ABSTRACT. Csound is able to target several platforms across desktop, mobile, web and embedded environments. This enables its vast audio processing capabilities to be leveraged in a wide range of sonic and musical contexts. In particular, embedded platforms provide great portability and flexibility for users to design custom interfaces and signal processing chains for applications such as installations and live performance. Until now, however, embedded support for Csound has been restricted to operating-system-based platforms such as the Raspberry Pi and Bela. This paper presents our work on the development of Bare-metal Csound, extending embedded support to ARM-based microcontrollers. We highlight the benefits and limitations of such systems and present two platforms on which we have conducted experiments: the Electrosmith Daisy and the Xilinx Zynq 7000 FPGA System-on-Chip. We also discuss potential use cases for Bare-metal Csound as well as future directions for this work.
ABSTRACT. The proposed event is more a performance than an installation, but it shares with an installation the fact that visitors can come and go, move closer or stay at a distance, and take their own time. It can happen between other events, in a corridor or a corner, as we did in Montevideo at ICSC 2015. It should be noted that the text is in German; however, the proposed event is not about "understanding" the text but about experiencing the musical space.
ABSTRACT. As everyone attending this conference knows very well, creative coders today have, more than ever, a breadth of options for making music programmatically: from specialised software old and new to general computer languages with expanding toolsets, many visions of what a good art-enabling coding environment should be cohabit and cross-pollinate. While trends rise and fall, and communities wax and wane along the way, the artworks survive as best they can, and the artist-programmer tries to strike a balance between inspired mastery and catching up.
But is there value in this multitude of opportunities? Are new proposals diluting energies and foci? Are there commonalities that would be better sorted out once and for all? What values does each of these interfaces defend, consciously or not? And what about the underlying metaphors they employ to create bridges between practices and disciplines?
In this presentation, the author will muse on these questions around the design of software environments that are foundational to artistic research through creative coding. He will try to ascertain their value, and the affordances and responsibilities of such an enabling endeavour, by sharing his early-career personal experience of Csound and the emergence of the FluCoMa ecosystem.
ABSTRACT. This workshop introduces users to the tools, processes, and practices involved in building and developing Csound. Attendees will use popular IDEs and editors to build, explore, debug, and optimize the Csound codebase.
17:30-18:00 Coffee Break/Fingerfood + Installation Session (Boulanger) – AW M0107
Decay -- An AI-assisted Electroacoustic composition
ABSTRACT. Decay is a short electroacoustic audio/visual piece inspired by the work of Jonty Harrison. The composition makes use of AI-generated visuals, which serve as an interpretive visual score for the audio. The audio content comprises recordings of matches being lit, extinguished and broken, together with some additional nature sounds. The piece makes extensive use of the Grain3FilePlayer granular synth written by Iain McCurdy in Csound.
ABSTRACT. My Studi I-VIII are inspired by Karlheinz Stockhausen's Klavierstücke I-VIII. If Klavierstücke I-IV (1952-53) represent a kind of sketch for the electronic pieces to come, Klavierstücke V-VIII (1954-55) reveal a new attention to time, which at once 'stretches' the form according to “statistical form criteria” and allows the author to build different timbres. In my studies I wanted to recreate the colour of the electronic sounds of those years. The many different timbres are obtained with modal synthesis applied to audio signals produced by a Julia set (implemented in Csound by Hans Mikelson, 1999). Each sound is conceived as a momentform.
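A rough illustration of modal synthesis applied to an exciter signal, not the piece's actual instrument: a decaying noise burst stands in for the Julia-set signal, and the resonant frequencies and Q values are arbitrary.

<CsoundSynthesizer>
<CsInstruments>
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1

instr 1
  aEnv  expseg 1, p3, 0.001                ; decaying envelope
  aExc  rand   0.3
  aExc  =      aExc * aEnv                 ; decaying noise burst as exciter
  a1    mode   aExc, 220, 200              ; three modal resonators
  a2    mode   aExc, 587, 300
  a3    mode   aExc, 1312, 500
  aOut  =      (a1 + a2 + a3) * 0.3
        outs   aOut, aOut
endin
</CsInstruments>
<CsScore>
i 1 0 4
</CsScore>
</CsoundSynthesizer>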
ABSTRACT. A scene from the Cotswolds (UK), composed as a sonic cross-section of a broadleaf woodland habitat. Processing for the piece relied heavily on random modulation of grain opcode parameters, with Csound used as a kind of granular ‘rain generator’ to augment and build upon existing field recordings of precipitation. Rain-shower width was adjusted via random modulation of the pan2 opcode. Towards the end of the piece, hrtfstat was used to position a small stream binaurally, along the composed forest floor (audible from 04:10 onwards).
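A minimal sketch of a granular 'rain generator' in this spirit, assuming arbitrary parameter ranges rather than those of the piece: grain density and pitch deviation drift under random modulation, and the shower width is set by random modulation of pan2.

<CsoundSynthesizer>
<CsInstruments>
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1

giSine ftgen 0, 0, 8192, 10, 1            ; grain waveform
giWin  ftgen 0, 0, 8192, 20, 2, 1         ; Hanning window for grain envelope

instr 1
  kDens   randomi 50, 400, 0.3            ; grains per second, drifting slowly
  kPchOff randomi 0, 2000, 0.5            ; random pitch deviation (Hz)
  aRain   grain   0.2, 3000, kDens, 0.1, kPchOff, 0.02, giSine, giWin, 0.05
  kPan    randomi 0.2, 0.8, 0.4           ; shower width via random panning
  aL, aR  pan2    aRain, kPan
          outs    aL, aR
endin
</CsInstruments>
<CsScore>
i 1 0 30
</CsScore>
</CsoundSynthesizer>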
"Franz Strauss – Five Etudes" (2021) for natural horn and electronics
ABSTRACT. “Franz Strauss – Five Etudes” is a composition for natural horn and live electronics using Csound. The piece contrasts the very vulnerable practicing situation of a beginner horn player with the noisy, merciless intervention of the electronics, and the utter predictability of the etudes with the unexpectedness created by the computer algorithm.
ABSTRACT. A fashionable nightclub is a live electronic music performance spatialized over variable loudspeaker arrays. With this immersive creation, Jean-Basile Sosa delivers an ethereal, phantasmatic version of some of the electronic music played in American nightclubs in the '80s and '90s...
ABSTRACT. Sievert is an audiovisual music work for Csound and GLSL shaders, inspired by decay in nuclear physics. The sievert is the international unit of equivalent dose of nuclear radiation, and the work was created using radiation values of uranium ore picked up by GM counters. It was realised with Csound, ATS and Magic (the front-end graphical interface for the GLSL shader), as an audiovisual piece built through the synthesis and decomposition of sound.
ABSTRACT. Recording of a live performance. The proposal is to perform this piece live on stage at the conference. The piece is a fullscreen GLSL shader that is translated, with interactive controls, into Csound notes.
ABSTRACT. This piece experiments with the concept of intrasensory synesthesia: instead of perceiving one sense as another, we perceive one sound feature as another. So instead of hearing colors, we perceive time as timbre, or pitch as dynamics. To do so, I exploit the way the perception of one musical feature affects the perception of the others. The perception of one parameter is determined by the other ones, especially at the thresholds of perception. Using these thresholds, and the way one parameter can determine the perception of another, we can build parametric interdefinitions. For example, a pulsing of sound that is perceived as a temporal object can be converted into a timbral object by accelerating the pulse rate: beyond 16-20 Hz it is no longer perceived as a pulse but as a pitched sound. I call this parametric morphing, in which a sound object is first perceived in one way and then, by changing only one of its features, is perceived around a different parametric centrality.
You can see it in the video of the second movement where these concepts are more obvious.
https://youtu.be/9JiVon85I0c
In each section of the work I experiment with a different way in which one parameter can be perceived by means of another.
This work was produced entirely with Csound, using phase-vocoding resynthesis and fof granular opcodes; the synthesized sound was mixed in Pro Tools.
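A minimal sketch of the pulse-to-pitch morph described above, with assumed rates and durations: an impulse train glides from 4 Hz (heard as rhythm) past the 16-20 Hz threshold up to 120 Hz, where it is heard as a pitched buzz.

<CsoundSynthesizer>
<CsInstruments>
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1

instr 1
  kRate  expon  4, p3, 120           ; pulse rate in Hz, exponential glide
  aPulse mpulse 0.5, 1/kRate         ; one impulse every 1/kRate seconds
  aOut   tone   aPulse, 4000         ; gentle low-pass to soften the clicks
         outs   aOut, aOut
endin
</CsInstruments>
<CsScore>
i 1 0 20
</CsScore>
</CsoundSynthesizer>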
A detailed explanation of the work can be found in the author's paper:
Intrasensory synesthesia in musical composition
https://www.academia.edu/36689384/Intrasensory_Synesthesia_in_Musical_Composition
The work is available at:
https://soundcloud.com/busevin/three-chants-for-computer