Next week I will take part in the Wilding AI Lab:
this four-day lab will assemble a group of participants, selected via open call, to learn about the application of generative AI in spatial audio and to collectively explore the wilder territories of AI. Mixing theoretical and hands-on components, the lab runs from 23 to 25 January and culminates in a public presentation session on Sunday 26 January.
The lab will take place at MONOM, a unique performance venue and spatial audio studio housed at Funkhaus Berlin. The studio consists of a 3D array of omnidirectional speakers arranged on columns, with subwoofers placed underneath the audience, beneath an acoustically transparent floor.
I am pretty excited about experimenting with the Sophtar there, particularly given that each of its eight strings and its audio synthesis machine learning models can be routed to separate output channels, making it possible to place each sound source at a different point in space.
I look forward to meeting the other participants in person. Engaging with a community of practitioners, sharing skills and ideas, and taking part in an open critical discussion on AI- and data-driven tools is crucial to addressing the rapid changes affecting our cultural landscape. I believe it is also very important to open the process to the public in order to demystify the black boxes, promote critical thinking, spread knowledge, and offer new narratives, so I am excited at the prospect of an open lab on the fourth day. As Maurice Jones put it during our first collective Zoom meeting, “the design of how we gather is essential.”
Participants
- Daniel Limaverde
- Evangeline Y Brooks
- Federico Visi
- Gadi Sassoon
- Hyeji Nam
- Irini Kalaitzidi
- Nico Daleman
- Ninon and Jun Suzuki
- SENAIDA
- Three Amps
- Transient Cat
- TWEE