Category: Events

  • Embedding Algorithms Workshop

    Embedding Algorithms Workshop

    The Embedding Algorithms Workshop is an informal meeting of researchers and practitioners working with embedded systems in the fields of musical instrument design, wearable computing, interaction design, and performing arts. During the workshop, the participants will show and discuss:

    • designing physical objects (instruments, wearables, and more) that incorporate some algorithmic process that is central to their function or behaviour;
    • designing and implementing embedded algorithms;
    • using/performing/practising with objects with embedded algorithms;
    • and more.

    Time: Friday, 26th July 2024, 10:00-18:00
    Place: Universität der Künste Berlin – Berlin Open Lab – Mixed Reality Space (aka BOL2) – Einsteinufer 43, 10587 Berlin.
NB: There is a large construction site in front of Einsteinufer 43. Once you have found your way around it and entered the building, go past the reception and through the glass doors. Turn left at the end of the corridor and go past another set of glass doors. BOL2 is at the very end of the long corridor you will find ahead of you, last door on the right.

    The event is open and free to attend. Please register here just so we can get a better idea of how many people to expect.
    More info: mail(at)federicovisi(dot)com

    Overall schedule:
    10:00 – 12:30: presentations/discussions (session 1)
    12:30 – 13:30: exhibitions introduction and lunch break
    13:30 – 16:00: presentations/discussions (session 2)
    16:00 – 16:30: coffee break, exhibitions
    16:30 – 18:00: performances and closing remarks

    Presenters

(15 min + 10 min Q&A each, distributed in 2 × 2.5-hour sessions)

    Talks

    In no particular order:

    Alberto de Campo & Bruno Gola: “From intuitive playing to Absolute Relativity – How tiny twists open possibilities for Multi-Agent Collaborative performance”

This talk discusses the sequence of small twists in the long-term development of the NTMI project that opened up new perspectives. These turning points include replacing analytical control with intuitive influence, opening the interface options from a single custom device to various common interfaces, and enabling these influence sources to work simultaneously.
    The latest step, adapting the influence mechanisms to be consistently relative, creates further options for multi-agent performance, including networks of influence sources and destinations, which may extend to nonhuman actors. 

    Eliad Wagner: “Modular synthesisers as cybernetic objects in musical performance”

This presentation (a performance demonstration accompanied by discussion) is the result of ongoing artistic research on the topic. Its focus is twofold: the material algorithms that govern the cybernetic machine behaviour (sound creation) and the human gestures (including the points of control that allow them) that facilitate form and meaning.
Central to the examination is the embedding of patch algorithms within the modular synthesiser, an instrument that lacks established techniques, canon, or prescribed usage. Each such interaction with the synthesiser reinvents its structure and interface, resulting in a constant process of instrumental grammatization. In this context, improvisation emerges as a useful and even necessary method for responding to the uncertainty and unpredictability of the machine. In a sense, the embedded algorithm informs the behaviour of both counterparts – machine and human – and potentially transforms the identity of the human component, from control to participation.

    Echo Ho: “Can Ancient Qin Fingering Methods Inspire AI Engineering for New Musical Expressions?”

This short presentation explores the convergence of ancient Chinese qin-fingering methods and modern AI/ML techniques. The PhD project “qintroNix” reinterprets historical qin learning methods for contemporary art and music. The qin, a seven-stringed instrument with a history spanning three millennia, uses classifiers inspired by natural phenomena to depict fingering techniques. These classifiers, representing animals and plants, serve as mnemonic devices and metaphorical maps, linking physical movements with musical expression and philosophical models.
The project speculates on integrating these ancient techniques into modern AI/ML frameworks. Just as the ancient classifiers distilled natural phenomena into musical gestures, AI/ML algorithms extract and model features from large datasets. This parallel provides a framework for developing AI/ML systems with human-interpretable metaphors, enhancing pattern recognition and classification transparency.
    Exploring phenological phenomena in qin music fosters an ecological consciousness, connecting musicians with an other-than-human world. This entangled approach aligns with contemporary efforts to bridge organic and digital realms; combining archaic knowledge with contemporary technology can unlock new possibilities for imagination, innovation, and a more profound understanding of music’s role in our world.

    Viola Yip: “Liminal Lines” 

In her latest solo work “Liminal Lines”, Viola Yip developed a solo performance with her self-built electromagnetic feedback dress. The dress is made of non-insulated audio cables attached to a soft PVC fabric, which allow audio signals to pass through. The signal is first captured by an electromagnetic microphone; the captured signals then pass through guitar pedals and a mixer, and eventually back to the dress to complete the feedback loop. Wearing the dress, her body introduces a wide range of distances, pressures, and speeds through physical engagement (touching, squeezing, stretching, etc.). These body-and-instrument interactions physically manipulate the interferences and modulations of the electromagnetic fields, allowing various complex sonorities to emerge and modulate over time.
For this workshop, she is developing a new lecture performance with the dress, in which she will dive into her journey with physical touch, materialities, and spaces within and surrounding the wearable instrument and the performer’s body.

    Andrew McPherson: “Of algorithms and apparatuses: entangled design of digital musical instruments”

Digital musical instruments are often promoted for their ability to reconfigure relationships between actions and sounds, or for those relationships to incorporate forms of algorithmic behaviour. In this talk, I argue that these apparently unlimited possibilities actually obscure strong and deeply ingrained ideologies which lead to certain design patterns appearing repeatedly over the years. Such ideological decisions include the reliance on spatial metaphors and unidirectional signal flow models, and the supposition that analytical representations of music can be inverted into levers of control for creating it. I will present some work in progress theorising an alternative view based on Karen Barad’s agential realism, particularly Barad’s notions of the apparatus and the agential cut. In this telling, algorithmic instruments are not merely measuring and manipulating stable pre-existing phenomena; they are actively bringing those phenomena into existence. On closer inspection, the boundaries between designer and artefact, instrument and player, materiality and discourse, are more fluid than they first appear, which offers an opening for new approaches to the design of algorithmic tools within and beyond music.

    Nicola L. Hein: “Cybernetic listening and embodiment in human-machine improvisation”

    In this talk, I will show and discuss several pieces of mine which operate within the domain of human-machine improvisation using musical agent systems. The focus of this talk will be the changing parameters of cybernetic listening, interaction, and embodiment between human and machine performers. By using my own works, which employ purely software-based musical agents, visual projections and light, robots, and varying perceptual modalities of musical agents as a matrix of changing parameters of the situated human-machine interaction, I will argue for the importance of embodiment as a central concern in musical human-machine improvisation. The term cybernetic listening will help to further elaborate on the systemic components of musical performance and interaction with musical agent systems.

    Federico Visi: “The Sophtar: a networkable feedback string instrument with embedded machine learning”

The Sophtar is a tabletop string instrument with an embedded system for digital signal processing, networking, and machine learning. It features a pressure-sensitive fretted neck, two sound boxes, and controlled feedback capabilities by means of bespoke interface elements. The design of the instrument is informed by my practice with hyperorgan interaction in networked music performance. I discuss the motivations behind the development of the instrument and describe its structure, interface elements, and the hyperorgan and sound synthesis interaction approaches it implements. Finally, I reflect on the affordances of the Sophtar and its differences from and similarities with other instruments, and outline future developments and uses.

    Teresa Pelinski and Adam Pultz Melbye: “Building and performing with an ensemble of models”

How, then, can we treat machine learning models as design material when building new musical instruments? Many of the tools available for embedding machine learning in musical practice have fixed model architectures, so the practitioners’ involvement with the models is often limited to the training data. In this context, the design material appears to be the data. From a developer’s perspective, however, when training a model for a specific task, a ‘model’ does not refer to a single entity but to an ensemble: the architecture is trained while varying the number of layers, blocks, or embedding sizes (the hyperparameters), as well as training-specific parameters such as learning rate or dropout.
For the last month we have been working on a practice-based research project involving the FAAB (feedback-actuated augmented bass), the ai-mami system (AI-models as materials interface), and a modular synthesiser. In this talk, we will discuss our process of building this performance ensemble, our insights into using machine learning models as design material, and, finally, the technical infrastructure of the project.
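As a loose illustration of this ensemble view – a minimal Python sketch with hypothetical hyperparameter names and values, not the actual ai-mami or FAAB code – a grid search yields one trained variant per hyperparameter combination:

    from itertools import product

    # Hypothetical search space; every combination yields one trained variant.
    search_space = {
        "num_layers": [2, 4, 8],
        "embedding_size": [64, 128],
        "learning_rate": [1e-3, 1e-4],
        "dropout": [0.0, 0.2],
    }

    def train(config):
        # Placeholder for a real training run; returns a handle to one variant.
        return {"config": config, "weights": None}

    keys = list(search_space)
    ensemble = [train(dict(zip(keys, values)))
                for values in product(*search_space.values())]
    print(len(ensemble), "candidate models")  # 3 * 2 * 2 * 2 = 24

Choosing which members of such an ensemble to keep, compare, or perform with then becomes a design decision in itself, not a purely technical one.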

    John MacCallum: “OpenSoundControl Everywhere”

In this talk/demo I will preview OSE, a lightweight, dynamic OpenSoundControl (OSC) server designed for rapid prototyping in heterogeneous environments. Since its development over 25 years ago, OSC has been a popular choice for those looking to expose software and hardware to dynamic control. OSC itself, however, is not dynamic: seamless integration of new nodes into an existing network is challenging, and OSC servers are not typically extensible at runtime. This lack of dynamism increasingly stands in the way of truly rapid prototyping in fields such as digital musical instrument design. OSE addresses this challenge by implementing a general-purpose OSC server that is itself controlled and configured via OSC, allowing for dynamic control and configuration. I will briefly discuss some implications for design with such a system, the state of the current work, and its future.
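OSE itself is not shown here; as a rough sketch of the general idea – an OSC server whose address space can be extended at runtime by OSC messages – here is a minimal example using the python-osc package, with a made-up /register address and an echo-only handler:

    from pythonosc.dispatcher import Dispatcher
    from pythonosc.osc_server import BlockingOSCUDPServer

    dispatcher = Dispatcher()

    def register(address, *args):
        # Extend the address space while the server is running: the incoming
        # message carries the new OSC address to serve (handler just echoes).
        new_address = str(args[0])
        dispatcher.map(new_address, lambda addr, *a: print(addr, a))
        print("registered", new_address)

    dispatcher.map("/register", register)

    server = BlockingOSCUDPServer(("127.0.0.1", 9000), dispatcher)
    server.serve_forever()

Sending a message such as /register /synth/cutoff while the server runs adds a new route without restarting anything, which is the kind of runtime extensibility the talk addresses.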

    Exhibitions

    (the exhibitions will be briefly introduced right before the lunch break)

    • Zora Kutz and Stratos Bichakis: Beads Beats
    • Friederike Fröbel, Malte Bergmann: Student projects, MA Interface Cultures, Kunstuniversität Linz

    Zora Kutz & Stratos Bichakis: “Beads Beats”

As one of the earliest forms of fabric decoration, found independently in cultures across the world, beading provides a rich history of techniques and materials that fit well with creating interactive textiles. Sensitive to touch and pressure, capacitive sensing allows for a range of input that can then be used for a variety of applications. In this project, we wish to create an artefact not unlike a musical instrument, used to control both synthetic soundscapes and the spatialisation of audio material. Throughout the process, we are documenting our findings on both the types of interaction we are researching and the material and construction specifications.

    Friederike Fröbel & Malte Bergmann: “Experiments with textile interfaces and found sounds”

This exhibition presents projects that explore meaningful sound interactions through textile-tangible interfaces using digital sound processing techniques. They were developed in two courses of the Masters Programme Interface Cultures at the University of Art and Industrial Design in Linz, Austria. The annual course explores the relationship between technology, fashion, craft, and design, with a particular focus on the idea of dynamic surfaces and soft circuits and how these can be realised through various textile processing techniques such as knitting, weaving, embroidery, and many more. The course consisted of experimenting with textile processing techniques using capacitive sensors and designing novel interaction prototypes with soft interfaces. Students worked on projects using Bela Mini, Trill Craft, and Pure Data. Each student brought a personal field recording and a piece of clothing or textile to which they had a particular attachment. The materials students brought ranged from fish leather and stomach rumbles to suitcases and the rattling sounds of railway bridges. These materials were then ‘upcycled’ as part of the group work, where they were collectively combined, using soft textile sensing, to generate expressive and playful ways of using clothing as a sound interface.

    Performances

    (~1 hour in total at the end of the workshop)

    • Echo Ho and Alberto de Campo
    • Adam Pultz Melbye and Teresa Pelinski
    • Eliad Wagner
    • Federico Visi
    • Absolute Relativity – flatcat, Kraken, nUFO, and more, group and audience welcome to play
    • Nicola Hein
  • Talk at the “Mapping Social Interaction through Sound” symposium, Humboldt University, Berlin

    Talk at the “Mapping Social Interaction through Sound” symposium, Humboldt University, Berlin

I was invited to participate in the Mapping Social Interaction through Sound symposium on 27-28 November 2020. The symposium is organised by Humboldt University, Berlin and – as is customary these days – will take place on Zoom.

    This is the abstract of my talk.

    Building and exploring multimodal musical corpora:
    from data collection to interaction design using machine learning

Musical performance is a multimodal experience, for performers and listeners alike. A multimodal representation of a piece of music can contain several synchronised layers, such as audio, symbolic representations (e.g. a score), videos of the performance, physiological and motion data describing the performers’ movements, as well as semantic labelling and annotations describing expressivity and other high-level qualities of the music. This delineates a scenario where computational music analysis can harness cross-modal processing and multimodal fusion methods to shift the focus toward the relationships that tie together different modalities, thereby revealing the links between low-level features and high-level expressive qualities.

I will present two concurrent projects focussed on harnessing musical corpora for analysing expressive instrumental music performance and designing musical interactions. The first project is centred on a data collection method – currently being developed by the GEMM research cluster at the School of Music in Piteå – aimed at bridging the gap between qualitative and quantitative approaches. The purpose of this method is to build a data corpus containing multimodal measurements linked to high-level subjective observations. By applying stimulated recall (a common qualitative research method in education, medicine, and psychotherapy), the embodied knowledge of music professionals is systematically included in the analytic framework. Qualitative analysis through stimulated recall is an efficient method for generating higher-level understandings of musical performance. Initial results suggest that this process is pivotal in building our multimodal corpus, providing insights that would be unattainable using quantitative data alone.

The second project – a joint effort with the Computing Department at Goldsmiths, University of London – consists of a sonic interaction design approach that makes use of deep reinforcement learning to explore many mapping possibilities between large sound corpora and motion sensor data. The design approach adopted is inspired by the ideas established by the interactive machine learning paradigm, as well as by the use of artificial agents in computer music for exploring complex parameter spaces. We refer to this interaction design approach as Assisted Interactive Machine Learning (AIML). While playing with a large corpus of sounds through gestural interaction by means of a motion sensor, the user can give feedback to an artificial agent about the gesture-sound mappings proposed by the latter. This iterative process results in an interactive exploration of the corpus, as well as a way of creating and refining gesture-sound mappings.
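As a toy sketch of this feedback loop – not the actual AIML implementation, which uses deep reinforcement learning over motion-sensor data – an epsilon-greedy agent proposing sounds from a corpus and updating on user ratings looks roughly like this:

    import random

    corpus = [f"sound_{i:03d}" for i in range(100)]  # hypothetical sound corpus
    scores = {s: 0.0 for s in corpus}
    EPSILON = 0.3  # exploration rate

    def propose():
        # The agent either explores the corpus or exploits what the user liked.
        if random.random() < EPSILON:
            return random.choice(corpus)
        return max(corpus, key=scores.get)

    for step in range(10):
        sound = propose()
        # In AIML the user performs gestures with a motion sensor and rates the
        # proposed gesture-sound mapping; here the rating is typed in by hand.
        rating = float(input(f"rate mapping to {sound} (-1..1): "))
        scores[sound] += rating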

    These projects are representative of how the development of methods for combining qualitative and quantitative data, in conjunction with the use of computational techniques such as machine learning, can be instrumental in the design of complex mappings between body movement and musical sound, and contribute to the study of the multiple facets of embodied music performance.

    Further reading

Visi, F. G., Östersjö, S., Ek, R., & Röijezon, U. (2020). Method development for multimodal data corpus analysis of expressive instrumental music performance. Frontiers in Psychology, 11:576751. doi: 10.3389/fpsyg.2020.576751
    Download PDF (pre-print)

    Visi, F. G., & Tanaka, A. (2021). Interactive Machine Learning of Musical Gesture. In E. R. Miranda (Ed.), Handbook of Artificial Intelligence for Music: Foundations, Advanced Approaches, and Developments for Creativity. Springer Nature, forthcoming.
    View on arXiv.org
    Download PDF (pre-print)

    Visi, F. G., & Tanaka, A. (2020). Towards Assisted Interactive Machine Learning: Exploring Gesture-Sound Mappings Using Reinforcement Learning. In ICLI 2020 – the Fifth International Conference on Live Interfaces.
    Download PDF

    Presentation slides
    Download PDF

  • Physically Distant #2: more online talks on telematic performance

    Physically Distant #2: more online talks on telematic performance

    Tuesday 28 July 2020, 14:00 – 19:00 CEST
    Anyone can join upon registration using this online form: https://forms.gle/zzLV46NbvgqAtJ7t7

Performing live with physically distant co-performers and audiences through audio, video, and other media shared via telematic means has been part of the work of artists and researchers for several decades. Recently, the restrictions put in place to cope with the COVID-19 pandemic have required performing artists to find solutions to practise their craft while maintaining physical distance between themselves, their collaborators, and their audience. In this second edition of Physically Distant, we wish to continue discussing telematic performance from perspectives suggested by the following questions:

    What are the opportunities and challenges of telematic performance?
What are the implications for how performing arts are conceived, developed, and experienced?
    How are research and practice being reconfigured?
    How is telematic performance suggesting different understandings of the role of instruments, gesture and acoustic spaces?
    How might telematic performance contribute to reconfiguring our understanding of music in societal and political perspectives?

    We wish to highlight two threads from the previous discussions. First, how telematic performance can be conceived of as protest, and second, the potential for telematic performance to expand the artistic and social potential in intercultural arts. Both of these threads imply a discussion of accessibility.

Once again, GEMM))) – the Gesture Embodiment and Machines in Music research cluster at the School of Music in Piteå, Luleå University of Technology – has invited a group of artists, researchers, and scholars to instigate an open, interdisciplinary discussion on these themes. The talks will happen online, on Tuesday 28 July 2020.
The sessions will be organised in 1-hour time slots. Each slot will include two 15-min presentations; the remaining time will be dedicated to questions and discussion.

    We are very happy to host a telematic performance by the Female Laptop Orchestra (FLO). The practice of this group is discussed in the talks by Franziska Schroeder and Nela Brown. A presentation of the conceptual backdrop for the performance can be found below.

    The structure of the event includes short breaks in between the sessions in order to avoid Zoom fatigue and allow for informal chats and continued discussion over a drink (not provided by the organisers). There will be a plenary at the end of the day, during which we will be discussing issues and opportunities that have emerged during the other sessions.

    28 July 2020 schedule (all times are CEST):

    • 14:00 Session 0: Introduction, results of the survey that followed Physically Distant #1
    • 14:30 Session 1: Ximena Alarcon, Franziska Schroeder
• 15:30-15:50 — Performance by FLO (Female Laptop Orchestra) —
• 15:50-16:00 — 10-min Break —
• 16:00 Session 2: Nela Brown, Rebekah Wilson
    • 17:00 — 30-min Break —
    • 17:30 Session 3: OvO, Kaffe Matthews
    • 18:30 Session 4: Plenary
    • 19:00 — END —

    Moderators / instigators: Federico Visi, Stefan Östersjö

    Anyone can join upon registration using this online form: https://forms.gle/zzLV46NbvgqAtJ7t7 

    We will send you a link to join a Zoom meeting on the day of the talks.
    NOTE: the talks will be recorded.

A follow-up event is planned for the 2020 Piteå Performing Arts Biennial, taking place online on 26-27 October 2020.

    Further info: mail@federicovisi.com

    Telematic performance by the Female Laptop Orchestra (FLO)

    Absurdity (concept by Franziska Schroeder and Matilde Meireles)
    A distributed performance using LiveSHOUT with members from the Female Laptop Orchestra (FLO).

    “Absurdity” is based around a short excerpt from one of Portugal’s most mysterious, elusive and peculiar writers, Fernando Pessoa. Pessoa’s multiplicities and his ways of thinking about life, engendering ideas that can feel manic-depressive, filled with buckets of self-pity, while being able to scratch the innermost parts of one’s soul, lie at the heart of this distributed performance.

Members of FLO will stream sounds from several distributed places, including Crete, Italy, Brazil, and the UK, while Franziska and Matilde deliver fragmented excerpts (in both English and Portuguese) alongside the LiveSHOUT streams. The idea of distributed creativity, in which we combine sounds from several sites – inspired by Pessoa’s plurality of thoughts and philosophies, his multiplicities, his fictionality, and his self-alienation – will lead to a performance that aims to be absurd, dispersed, fragmented, and multiple.

    “I’ve always belonged to what isn’t where I am and to what I could never be”.  (Pessoa In: Ciuraru, 2012).

    The FLO performers are:
    Franziska Schroeder – LiveSHOUT and Pessoa reading (English)
    Matilde Meireles – LiveSHOUT and Pessoa reading (Portuguese)
    Maria Mannone – LiveSHOUT streams of piano improv from Palermo
    Maria Papadomanolaki – LiveSHOUT streams of sounds from Crete
    Anna Xambó – LiveSHOUT streams of sounds from Sheffield
    Nela Brown – LiveSHOUT streams of sounds from London
Ariane Stolfi – LiveSHOUT streams of sounds from Porto Seguro and playsound.space

Female Laptop Orchestra (FLO), a music research project established in 2014 by Nela Brown, connects female musicians, sound artists, composers, engineers, and computer scientists globally, through co-located and distributed collaborative music creation. Each FLO performance is site-specific and performer-dependent, mixing location-based field recordings, live coding, acoustic instruments, voice, sound synthesis, and real-time sound processing using Web Audio APIs and VR environments with audio streams arriving from different global locations (via the internet and mobile networks). From stereo to immersive 3D audio (and everything in between), FLO is pushing the boundaries of technology and experimentation within the context of ensemble improvisation and telematic collaboration.

    Female Laptop Orchestra: https://femalelaptoporchestra.wordpress.com/

    LiveSHOUT: http://www.socasites.qub.ac.uk/distributedlistening/liveSHOUT/

    Locus Sonus soundmap: https://locusonus.org/soundmap/051/

    Presenters Bios:

    Ximena Alarcón Díaz is a sound artist researcher interested in listening to in-between spaces: dreams, underground public transport, and the migratory context. She creates telematic sonic improvisations using Deep Listening, and interfaces for relational listening. She has a PhD in Music, Technology and Innovation from De Montfort University (2007), and is a Deep Listening® certified tutor. Her project INTIMAL is an “embodied” physical-virtual system for relational listening in telematic sonic performance (RITMO-UiO, 2017-2019, Marie Skłodowska Curie Individual Fellowship). She is currently a Senior Tutor in the online Deep Listening certification program offered by the Center for Deep Listening (RPI), and works independently in the second phase of the INTIMAL project that involves: an “embodied” physical-virtual system to explore sense of place and presence across distant locations; and a co-creation laboratory for listening to migrations with Latin American migrant women.
    http://ximenaalarcon.net

Franziska Schroeder is an improviser and Reader based at the Sonic Arts Research Centre, Queen’s University Belfast, where she mainly teaches performance and improvisation.
In 2007 she was the first AHRC Research Fellow in the Creative/Performing Arts to be awarded a 3-year grant to carry out research into virtual/network performance environments. Her writings on distributed creativity have been published by Routledge, Cambridge Scholars, and Leonardo. In 2016 she co-developed the distributed listening app LiveSHOUT.
    Within her research group “Performance without Barriers”, which she founded in 2016, Franziska currently designs VR instruments with and for disabled musicians.
    https://pure.qub.ac.uk/en/persons/franziska-schroeder

Rebekah Wilson is an independent researcher, technologist, and composer. Originating from New Zealand, she studied instrumental and electroacoustic music composition and taught herself computer technology. In the early 2000s she was artistic co-director at STEIM, Amsterdam, where her passions for music, performance, and technology became fused. Since 2005 she has been co-founder and technology director of Chicago’s Source Elements, developing services that exploit the possibilities of networked sound and data for the digital sound industry while continuing to perform and lecture internationally. She holds a master’s degree in networked music performance, and her current research on the topic can be found on the Latency Native forum.
    https://forum.latencynative.com

Nela Brown is an award-winning Croatian sound artist, technologist, researcher, and lecturer living in London, UK. She studied jazz and music production at Goldsmiths, University of London, followed by a BA (Hons) in Sonic Arts at Middlesex University London. Since graduating in 2007, she has worked as a freelance composer and sound designer on award-winning international projects including theatre performances, dance, mobile, film, documentaries, and interactive installations. In 2014, she started the Female Laptop Orchestra (FLO). In 2019, as part of the prestigious Macgeorge Fellowship Award, she was invited to join the Faculty of Fine Arts & Music at the University of Melbourne, Australia, to deliver talks and workshops about collaborative music-making, laptop orchestras, and hack culture, as well as a number of performances with FLO. She is currently doing a PhD in Human-Computer Interaction and lecturing at the University of Greenwich in London.
    http://www.nelabrown.com/

Italian noise-rock duo OvO has been at the centre of the worldwide post-rock, industrial-sludge, and avant-doom scenes for nearly two decades. Their “always-on-tour” mentality, coupled with a DIY ethic, fearless vision, and pulverising live shows, has made them the Jucifer of Europe: impossible to categorise, but always there, appearing in your hometown, like a ghostly omnipresence. OvO’s fiercely independent ethos and grinding live schedule have earned the band a significant worldwide fanbase that has come to expect nothing but the most daring and innovative dark music presentations.
OvO were on the road for their 20th anniversary European tour when the COVID-19 pandemic hit the continent. The band was forced to cancel the remaining gigs of the tour and drive back to their home country, which was suffering one of the worst health emergencies in its recent history. In the midst of the lockdown, they performed live on the stage of the Bronson club in Ravenna, Italy, and professionally live-streamed the entire concert on DICE.fm.
    http://ovolive.blogspot.com

Kaffe Matthews is a pioneering music maker who works live with space, data, things, and place to make new electroacoustic compositions. The physical experience of music for the maker and listener has always been central to her approach, and to this end she has also invented some unique interfaces – the sonic armchair, the sonic bed, and the sonic bike – that not only enable new approaches to composition for makers but also give immediate ways into unfamiliar sound and music for wide-ranging audiences.
    Kaffe has also established the collectives Music for Bodies (2006) and The Bicrophonic Research Institute (2014) where ideas and techniques are developed within a pool of coders and artists using shared and open source approaches, publishing all outcomes online.
    During COVID times, Kaffe has produced new music by collaborating with other music makers through streaming platforms and has hosted live-streaming parties in her apartment in Berlin.
    https://www.kaffematthews.net

    Tuesday 28 July 2020, 14:00 – 19:00 CEST
    Anyone can join upon registration using this online form: https://forms.gle/zzLV46NbvgqAtJ7t7

  • Physically distant: online talks on telematic performance

    Physically distant: online talks on telematic performance

    Wednesday 3 June 2020, 13:30 – 21:00 CEST

Performing live with physically distant co-performers and audiences through audio, video, and other media shared via telematic means has been part of the work of artists and researchers for several decades. Recently, the restrictions put in place to cope with the COVID-19 pandemic required performing artists to find solutions to practise their craft while maintaining physical distance between themselves, their collaborators, and their audience. This scenario brought many questions related to telematic performance to the fore: what are the opportunities and challenges of telematic performance? What are the implications for how performing arts are conceived, developed, and experienced? How are research and practice being reconfigured? How is telematic performance suggesting different understandings of the role of instruments, gesture, and acoustic spaces? How might telematic performance contribute to reconfiguring our understanding of music in societal and political perspectives?

GEMM))) – the Gesture Embodiment and Machines in Music research cluster at the School of Music in Piteå, Luleå University of Technology – has invited a group of artists, researchers, and scholars to instigate an open, interdisciplinary discussion on these themes. The talks will happen online, on Wednesday 3 June 2020.

The sessions will be organised in 1-hour time slots. Each slot will include two 15-min presentations; the remaining time will be dedicated to questions and discussion. After each slot there will be a 30-min break in order to avoid “Zoom fatigue.” There will be a plenary at the end of the day, during which we will be discussing issues and opportunities that have emerged during the other sessions.

    Schedule (all times are CEST):
13:30 – 14:00 Session 0: Welcome and introduction
14:00 – 15:00 Session 1: Roger Mills; Shelly Knotts
15:00 – 15:30 Break
15:30 – 16:30 Session 2: Gamut inc./Aggregate; Randall Harlow
16:30 – 17:00 Break
17:00 – 18:00 Session 3: Alex Murray-Leslie; Atau Tanaka
18:00 – 19:00 Dinner break (1 hr)
19:00 – 20:00 Session 4: Chris Chafe; Henrik von Coler
20:00 – 21:00 Plenary

    Moderators: Federico Visi, Stefan Östersjö

    Anyone can join upon registration using this online form: https://forms.gle/1goB2TcjGKjL6nkT8  
    We will send you a link to join a Zoom meeting on the day of the talks.
    NOTE: the talks will be recorded.

An additional networked performance curated by GEMM))) is taking place on Tuesday 2 June, followed by a short seminar and discussion. Everyone is welcome to also join this event; we will circulate details to the registered email addresses and via social media.

    Programme (all times are CEST):
    14:00 – 14:15 networked performance with the Acusticum Organ: Robert Ek, Mattias Petersson, Stefan Östersjö
    14:15 – 14:30 Vong Co: networked performance with The Six Tones: Henrik Frisk, Stefan Östersjö & Nguyen Thanh Thuy
    14:40 – 15:00 Paragraph – a live coding front end for SuperCollider patterns: Mattias Petersson
    15:00 – 15:20 Discussion

A follow-up event is planned for the 2020 Piteå Performing Arts Biennial, taking place online on 26-27 October 2020.

    Further info: write me.

  • Workshop and Performance at Harvestworks, New York City

    Workshop and Performance at Harvestworks, New York City

I recently ran a workshop and performed at Harvestworks in New York City. The workshop was done in collaboration with Andrew Telichan Phillips from the Music and Audio Research Laboratory at NYU Steinhardt. The amazing Ana García Caraballos joined me to perform my piece 11 Degrees of Dependence on alto sax, Myo armbands, and live electronics. Here’s a video:

  • Performances at Peninsula Arts Contemporary Music Festival 2016

    Performances at Peninsula Arts Contemporary Music Festival 2016

    Very excited to be performing two pieces at this year’s Peninsula Arts Contemporary Music Festival.

The super talented Esther Coorevits will once again join me to perform an updated version of Kineslimina at the Gala Concert on Saturday night; it will feature some of the technologies I started working on while I was in New York last summer.

On Sunday, the amazing Dr. Katherine Williams will play soprano sax and motion sensors for my new piece 11 Degrees of Dependence. Her movements will control the parameters of a synthetic flute.

Check out the rest of the programme; there are some very exciting works you won’t be able to hear anywhere else.

  • Paper presentation at CMMR 2015

    Paper presentation at CMMR 2015

    Here you can download the paper I presented at CMMR 2015 in collaboration with Esther Coorevits from IPEM, Ghent University, and Rodrigo Schramm from Federal University of Rio Grande do Sul.

It’s titled Instrumental Movements of Neophytes: Analysis of Movement Periodicities, Commonalities and Individualities in Mimed Violin Performance. Here is the abstract:

Body movement and embodied knowledge play an important part in how we express and understand music. The gestures of a musician playing an instrument are part of a shared knowledge that contributes to musical expressivity by building expectations and influencing perception. In this study, we investigate the extent to which the movement vocabulary of violin performance is part of the embodied knowledge of individuals with no experience in playing the instrument. We asked people who cannot play the violin to mime a performance along with an audio excerpt recorded by an expert. They do so using a silent violin, specifically modified to be more accessible to neophytes. Preliminary motion data analyses suggest that, despite the individuality of each performance, there is a certain consistency among participants in terms of overall rhythmic resonance with the music and movement in response to melodic phrasing. Individualities and commonalities are then analysed using Functional Principal Component Analysis.

[Figure: Periodic Quantity of Motion (PQoM) analysis]
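As a loose illustration of a PQoM-style measure – a minimal numpy sketch on synthetic data; the published method differs in detail – one can estimate how much of a movement signal’s spectral energy lies near a beat-related frequency:

    import numpy as np

    fs = 100.0                            # assumed motion-capture rate (Hz)
    t = np.arange(0, 10, 1 / fs)
    # Synthetic velocity trace: a 2 Hz oscillation (e.g. bowing) plus noise.
    velocity = np.sin(2 * np.pi * 2.0 * t) + 0.2 * np.random.randn(t.size)

    power = np.abs(np.fft.rfft(velocity - velocity.mean())) ** 2
    freqs = np.fft.rfftfreq(velocity.size, 1 / fs)

    beat_hz = 2.0                         # e.g. quarter notes at 120 BPM
    band = (freqs >= beat_hz - 0.25) & (freqs <= beat_hz + 0.25)
    pqom_like = power[band].sum() / power.sum()
    print(f"motion energy near {beat_hz} Hz: {pqom_like:.2f}")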

  • Kineslimina: a study for guitar, viola and motion sensors

    Kineslimina: a study for guitar, viola and motion sensors

I’m putting the final touches on a piece for viola, guitar, motion sensors, and live electronics that I have been working on as part of my PhD research project. It will be premiered during the Gala Concert of the 11th International Symposium on Computer Music Multidisciplinary Research (CMMR) on Tuesday, 16th June 2015, performed by Esther Coorevits and me.

    Here’s an excerpt from the programme notes:

    Kineslimina is a piece for viola, electric guitar and live electronics that explores the use of the musicians’ instrumental gestures and movements as an expressive medium. Such gestures merge with the other musical features and become an integral part of the score. While playing their instruments, the musicians wear an armband fitted with motion sensors, which tracks their movements and sends the motion data to a computer. The computer then processes the movement data and sound, responding with a wide range of dynamics: from subtle timbral alterations that follow the movements of the bow during string changes to deeper resonances when more overt gestures are performed by the musicians.

    Inspired by the studies of musical gestures and embodied music cognition, the piece requires the performers to exceed the usual boundaries of their instrumental gestures, thus creating new challenges as well as new possibilities of expression and interplay.
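As a minimal sketch of the kind of mapping described above – thresholds, units, and parameter names are invented for illustration, not taken from the piece – motion-sensor energy might steer the processing like this:

    import numpy as np

    def map_motion(accel):
        # accel: one 3-axis accelerometer sample from the armband, in g.
        energy = float(np.linalg.norm(accel))
        if energy < 1.2:    # near rest: roughly gravity alone
            return {"timbre_mod": 0.0, "resonance": 0.0}
        if energy < 2.0:    # bowing-scale movement: subtle timbral alteration
            return {"timbre_mod": (energy - 1.2) / 0.8, "resonance": 0.0}
        # overt gesture: drive the deeper resonances
        return {"timbre_mod": 1.0, "resonance": min((energy - 2.0) / 2.0, 1.0)}

    print(map_motion([0.1, 0.9, 0.3]))   # at rest
    print(map_motion([1.0, 1.2, 0.4]))   # subtle
    print(map_motion([2.5, 1.5, 0.5]))   # overt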

  • Motion and Music Workshop at CMMR15

    Motion and Music Workshop at CMMR15

I’m co-organising the Motion and Music Workshop that will take place at Plymouth University, Plymouth, UK, on 15 June 2015. It will be a satellite event of the 11th International Symposium on Computer Music Multidisciplinary Research – CMMR 2015: Music, Mind, and Embodiment, which will be held at Plymouth University on 16-19 June 2015.

More info on the workshop webpage.

  • Unfolding | Clusters presented at the Peninsula Contemporary Music Festival 2015

    Unfolding | Clusters presented at the Peninsula Contemporary Music Festival 2015

    Unfolding | Clusters will be presented at the Peninsula Arts Contemporary Music Festival this weekend at the Immersive Vision Theatre, Plymouth University.

The Motor Neurone Disease Association (MNDA) will be there to present their work and collect donations (be generous!), and will introduce the work together with me, Duncan Williams, and Giovanni Dothel during the presentation on Friday at 7pm.

Here is the full schedule; check also the full festival programme.

    Friday 27 February

    19:00 (introduction)

    19:30 (performance)

    Saturday 28 February

    17:00 (performance)

    Sunday 1 March

    14:00 (performance)