Category: Research

  • Computer Music Journal article: “An End-to-End Musical Instrument System That Translates Electromyogram Biosignals to Synthesized Sound”

    I finally got my hands on the physical copy of the Computer Music Journal issue on Human-AI Cocreativity, which includes our open-access article An End-to-End Musical Instrument System That Translates Electromyogram Biosignals to Synthesized Sound.

    There we report on our work on an instrument that translates the electrical activity produced by muscles into synthesised sound. The article also describes our collaboration with Chicks on Speed on the performance piece Noise Bodies (2019), which was part of the Up to and Including Limits: After Carolee Schneemann exhibition at Muzeum Susch.

    I co-authored the article with Atau Tanaka, Balandino Di Donato, Martin Klang, and Michael Zbyszyński. Here is the abstract:

    This article presents a custom system combining hardware and software that senses physiological signals of the performer’s body resulting from muscle contraction and translates them to computer-synthesized sound. Our goal was to build upon the history of research in the field to develop a complete, integrated system that could be used by nonspecialist musicians. We describe the Embodied AudioVisual Interaction Electromyogram, an end-to-end system spanning wearable sensing on the musician’s body, custom microcontroller-based biosignal acquisition hardware, machine learning–based gesture-to-sound mapping middleware, and software-based granular synthesis sound output. A novel hardware design digitizes the electromyogram signals from the muscle with minimal analog preprocessing and treats it in an audio signal-processing chain as a class-compliant audio and wireless MIDI interface. The mapping layer implements an interactive machine learning workflow in a reinforcement learning configuration and can map gesture features to auditory metadata in a multidimensional information space. The system adapts existing machine learning and synthesis modules to work with the hardware, resulting in an integrated, end-to-end system. We explore its potential as a digital musical instrument through a series of public presentations and concert performances by a range of musical practitioners.
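
    As a rough illustration of the signal chain the abstract describes – EMG windows to gesture features, a learned mapping, and granular synthesis parameters – here is a minimal Python sketch. It is not the EAVI EMG system: the feature set, the scikit-learn regressor standing in for the interactive machine learning middleware, and the grain parameter names are all assumptions made for the example.

    ```python
    # Minimal sketch of the chain described in the abstract: EMG windows ->
    # gesture features -> learned mapping -> granular synthesis parameters.
    # Illustrative stand-in only, not the EAVI EMG hardware/software.
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    def emg_features(window: np.ndarray) -> np.ndarray:
        """Common time-domain EMG features for one analysis window."""
        mav = np.mean(np.abs(window))        # mean absolute value
        rms = np.sqrt(np.mean(window ** 2))  # root mean square
        zc = np.sum(np.diff(np.signbit(window).astype(int)) != 0)  # zero crossings
        return np.array([mav, rms, zc], dtype=float)

    # Interactive-ML style training: a few demonstrated gestures paired with
    # the grain parameters the musician wants each gesture to produce.
    demo_windows = [np.random.randn(256) * a for a in (0.1, 0.5, 1.0)]
    X = np.stack([emg_features(w) for w in demo_windows])
    y = np.array([
        [0.10, 0.02, 0.0],   # [grain position, grain size (s), pitch shift]
        [0.50, 0.05, 0.3],
        [0.90, 0.10, 0.7],
    ])
    mapping = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000).fit(X, y)

    # At performance time, each incoming EMG window is mapped to synth parameters.
    live = np.random.randn(256) * 0.4
    position, size, shift = mapping.predict([emg_features(live)])[0]
    print(f"grain position={position:.2f}, size={size:.3f}s, pitch shift={shift:.2f}")
    ```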

  • Organised Sound – Call for papers: Embedding Algorithms in Music and Sound Art

    Excited to announce that I am co-editing with Thor Magnusson a thematic issue of Organised Sound on the topic of Embedding Algorithms in Music and Sound Art.
    Please find the full call below and on the journal’s website.

    The idea for a guest-edited journal issue emerged from the Embedding Algorithms Workshop that took place at the Berlin Open Lab in July 2024.

    Feel free to share this call with others who might be interested in contributing.

    We look forward to receiving your submissions!

    Call for Submissions – Volume 31, Number 2
    Issue thematic title: Embedding Algorithms in Music and Sound Art
    Date of Publication: August 2026
    Publishers: Cambridge University Press
    Issue co-ordinators: Federico Visi (mail@federicovisi.com), Thor Magnusson (thormagnusson@hi.is)
    Deadline for submission: 15 September 2025

    Embedding Algorithms in Music and Sound Art
    Embedding algorithms into physical objects has long been part of sound art and electroacoustic music practice, with sound artists and researchers creating tools and instruments that incorporate some algorithmic process that is central to their function or behaviour. Such practice has evolved profoundly over the years, touching many aspects of sound generation, electroacoustic composition, and music performance.

    Whilst closely linked to technology, the practice of embedding algorithms into tools for sound making also transcends it. Different forms and concepts emerge in domains that can be analogue, digital, pertaining to recent or ancient technologies, tailored around the practice of a single individual or encompassing the behaviours of multiple players, even non-human ones.

    The processes and rules that can be inscribed into instruments may include assumptions about sound and music, influencing how these are understood, conceptualised, and created. Algorithms can encode complex behaviours that may make the instrument adapt to the way it is being played or give it a degree of autonomy and agency. These aspects can have profound effects on the aesthetics and creative processes of sound art and electroacoustic music, as artists are given the possibility of delegating some of the decisions to their tools.

    In the practice of embedding algorithms, knowledge and technology from other disciplines are appropriated and repurposed by sound artists in many ways. Sensors and algorithms developed in other research fields are used for making instruments. Similarly, digital data describing various phenomena – from environmental processes to the global economy – have been harnessed by practitioners for defining the behaviour of their sound tools, resulting in data-driven sound practices such as sonification.

    More recently, advances in artificial intelligence have made it possible to embed machine learning models into instruments, introducing new aesthetics and practices in which curating a dataset and training a model are part of the artistic process. These trends in music and sound art have sparked a broad discourse addressing notions of agency, autonomy, authenticity, authorship, creativity and originality. There are aesthetic, epistemological, and ethical implications arising from the practice of building sound-making instruments that incorporate algorithmic processes. How do intelligent instruments affect our musical practice?

    This demands an interdisciplinary critical enquiry. For this special issue of Organised Sound we seek articles that address the aesthetic and cultural implications that designing and embedding algorithms has for electroacoustic music practice and sound art. We are interested in the impact of this new technological context on musical practice.

    Please note that we do not seek submissions that describe a project or a composition without broad contextualisation and an underlying central question. We instead welcome submissions that address one or more specific issues of relevance to the call and to the journal’s readership, and which may include works or projects as examples.

    Topics of interest include:

    • Aesthetics in performance of instruments with embedded algorithms
    • Phenomenology of electroacoustic performance with algorithmic instruments
    • Agency, autonomy, intentionality, and “otherness” of algorithmic instruments
    • Co-creation and authorship in creative processes involving humans and algorithmic instruments
    • Dynamics of feedback, adaptivity, resistance, and entanglement in player-instrument-algorithm assemblages
    • Human-machine electroacoustic improvisation
    • Critical reflections on the role of algorithms in the creation of sound tools and instruments and their impact on electroacoustic music practice
    • Meaning making, symbolic representations, and assumptions on music and sound embedded within instruments and sound tools
    • Knowledge through instruments: epistemic and hermeneutic relations in algorithmic instruments
    • Appropriation, repurposing, and “de-scripting” of hardware, algorithms, and data by sound artists
    • Exploration of individual and collective memory through algorithmic instrumental sound practice
    • Roles and understandings of the machine learning model as a tool in instrumental music practice
    • Use of algorithmic instruments in contexts informed by historical and/or non-Western traditions
    • Ethics, sociopolitics, and ideology in the design of algorithmic tools for electroacoustic music
    • Legibility and negotiability of algorithmic processes in electroacoustic music practice
    • Non-digital approaches: embedding algorithms in non-digital instruments
    • Postdigital and speculative approaches in electroacoustic music practice with algorithmic instruments
    • Data as instrument in electroacoustic music practice: curating and embedding data into instruments
    • Aesthetics and idiosyncrasies of networked, site-specific, and distributed instruments and performance environments
    • Entanglement, inter-subjectivity, and relational posthumanist paradigms in designing and composing with algorithmic instruments
    • More-than-human, inter-species, and ecological approaches in algorithmic instrumental practice

    Furthermore, submissions unrelated to the theme but relevant to the journal’s areas of focus are always welcome.

    SUBMISSION DEADLINE: 15 September 2025

    SUBMISSION FORMAT:
    Notes for Contributors including how to submit on Scholar One and further details can be obtained from the inside back cover of published issues of Organised Sound or at the following url: https://www.cambridge.org/core/journals/organised-sound/information/author-instructions/preparing-your-materials.

    General queries should be sent to: os@dmu.ac.uk, not to the guest editors.

    Accepted articles will be published online via FirstView after copy editing prior to the full issue’s publication.

    Editor: Leigh Landy; Associate Editor: James Andean

    Founding Editors: Ross Kirk, Tony Myatt and Richard Orton†

    Regional Editors: Liu Yen-Ling (Annie), Dugal McKinnon, Raúl Minsburg, Jøran Rudi, Margaret Schedel, Barry Truax

    International Editorial Board: Miriam Akkermann, Marc Battier, Manuella Blackburn, Brian Bridges, Alessandro Cipriani, Ricardo Dal Farra, Simon Emmerson, Kenneth Fields, Rajmil Fischman, Kerry Hagan, Eduardo Miranda, Garth Paine, Mary Simoni, Martin Supper, Daniel Teruggi, Ian Whalley, David Worrall, Lonce Wyse

  • The Sophtar

    The Sophtar is a tabletop string instrument with an embedded system for digital signal processing, networking, and machine learning. It features a pressure-sensitive fretted neck, two sound boxes, and controlled feedback capabilities by means of bespoke interface elements. The design of the instrument is informed by my practice with hyperorgan interaction in networked music performance.

    I built the Sophtar in collaboration with Sukandar Kartadinata. I presented and performed with it at NIME 2024; here is the paper from the conference proceedings.

    At IIL I am working on extending the Sophtar with actuators and machine learning models to make it respond to my playing in ways that are not easy to predict yet meaningful and inspiring. In particular, I am working on:

    • an extension that allows the instrument to self-play by means of solenoids,
    • embedding Notochord models,
    • per-string filtering for harmonic feedback (see the sketch below).
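
    To make the last point more concrete, here is a minimal Python sketch of per-string filtering for harmonic feedback. It is illustrative only: the Sophtar’s actual embedded DSP is not described here, and the string tunings, chosen partial, and filter Q are assumptions.

    ```python
    # Sketch of per-string filtering for harmonic feedback: a narrow bandpass
    # in each string's feedback path emphasises one chosen partial, steering
    # what the feedback loop latches onto. Tunings, partial, and Q are assumed.
    import numpy as np
    from scipy.signal import butter, lfilter

    SR = 48000  # sample rate (Hz)

    def harmonic_bandpass(f0: float, partial: int, q: float = 30.0):
        """Design a narrow bandpass around the chosen partial of a string."""
        fc = f0 * partial
        low, high = fc * (1 - 1 / (2 * q)), fc * (1 + 1 / (2 * q))
        return butter(2, [low, high], btype="bandpass", fs=SR)

    string_f0 = [82.4, 110.0, 146.8, 196.0]  # placeholder fundamentals (E2 A2 D3 G3)
    filters = [harmonic_bandpass(f0, partial=3) for f0 in string_f0]

    def feedback_block(pickup: np.ndarray, string_idx: int, gain: float = 0.8):
        """Filter one block of a string's pickup signal before re-amplification."""
        b, a = filters[string_idx]
        return gain * lfilter(b, a, pickup)

    # Feeding a noisy block through string 0's path makes its 3rd partial dominate.
    out = feedback_block(np.random.randn(1024), string_idx=0)
    ```

    In a real-time loop the filter state would also persist across blocks (e.g. via lfilter’s zi argument) rather than being reset on each call.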

    I see the research and development work on the Sophtar as a way to probe and engage with broader research questions on musical improvisation and co-creativity with machines and algorithms.

    I will present the Sophtar at an Open Lab on Friday, 27th September, 15:00-17:00.

    Playing the Sophtar at LNDW 2024. Photo: Christian Kielmann.

  • Intelligent Instruments Lab, Iceland

    My family and I have temporarily left Berlin and relocated to Reykjavik, where I am going to join the Intelligent Instruments Lab (IIL) at the University of Iceland. There, I am going to work on extending the capabilities of the Sophtar, the musical instrument I designed to address the needs stemming from my artistic practice involving hyperorgans, networked music performance, and interactive machine learning. I built the Sophtar in collaboration with Sukandar Kartadinata and describe it in more detail in this NIME 2024 paper. I very much look forward to working with Thor Magnusson and the other super talented researchers at IIL, starting in just a few days!

  • Embedding Algorithms Workshop

    The Embedding Algorithms Workshop is an informal meeting of researchers and practitioners working with embedded systems in the fields of musical instrument design, wearable computing, interaction design, and performing arts. During the workshop, the participants will show and discuss:

    • designing physical objects (instruments, wearables, and more) that incorporate some algorithmic process that is central to their function or behaviour;
    • designing and implementing embedded algorithms;
    • using/performing/practising with objects with embedded algorithms;
    • and more.

    Time: Friday, 26th July 2024, 10:00-18:00
    Place: Universität der Künste Berlin – Berlin Open Lab – Mixed Reality Space (aka BOL2) – Einsteinufer 43, 10587 Berlin.
    NB: There is a large construction site in front of Einsteinufer 43. Once you have found your way around it and entered the building, go past the reception and through the glass doors. Turn left at the end of the corridor and go past another set of glass doors. BOL 2 is at the very end of the long corridor you will find ahead of you, last door on the right.

    The event is open and free to attend. Please register here just so we can get a better idea of how many people to expect.
    More info: mail(at)federicovisi(dot)com

    Overall schedule:
    10:00 – 12:30: presentations/discussions (session 1)
    12:30 – 13:30: exhibitions introduction and lunch break
    13:30 – 16:00: presentations/discussions (session 2)
    16:00 – 16:30: coffee break, exhibitions
    16:30 – 18:00: performances and closing remarks

    Presenters

    (15 min + 10 min Q&A each, distributed in 2 x 2.5-hr sessions)

    Talks

    In no particular order:

    Alberto de Campo & Bruno Gola: “From intuitive playing to Absolute Relativity – How tiny twists open possibilities for Multi-Agent Collaborative performance”

    This talk discusses the sequence of small twists in the long-term development of the NTMI project that opened up new perspectives. These turning points include replacing analytical control with intuitive influence, opening the interface options from a single custom device to various common interfaces, and enabling these influence sources to work simultaneously.
    The latest step, adapting the influence mechanisms to be consistently relative, creates further options for multi-agent performance, including networks of influence sources and destinations, which may extend to nonhuman actors.

    Eliad Wagner: “Modular synthesisers as cybernetic objects in musical performance”

    This presentation (a performance demonstration accompanied by discussion) is the result of ongoing artistic research on the topic. Its focus is twofold: the material algorithms that govern the cybernetic machine behaviour (sound creation) and the human gestures (including the points of control that allow them) that facilitate form and meaning.
    Central to the examination is the embedding of patch algorithms within the modular synthesiser, an instrument that lacks established techniques, canon, or prescribed usage. Each such interaction with the synthesiser reinvents its structure and interface, resulting in a constant process of instrumental grammatization. In this context, improvisation emerges as a useful and even necessary method for responding to the uncertainty and unpredictability of the machine. In a sense, the embedded algorithm informs the behaviours of both counterparts – machine and human – and potentially transforms the identity of the human component, from control to participation.

    Echo Ho: “Can Ancient Qin Fingering Methods Inspire AI Engineering for New Musical Expressions?”

    This short presentation explores the convergence of ancient Chinese qin-fingering methods and modern AI/ML techniques. The PhD project “qintroNix” reinterprets historical qin learning methods for contemporary art and music. In the tradition of the qin, a seven-stringed instrument with a three-millennia history, fingering techniques are depicted through classifiers inspired by natural phenomena. These classifiers, representing animals and plants, serve as mnemonic devices and metaphorical maps, linking physical movements with musical expression and philosophical models.
    The project speculates on integrating these ancient techniques into modern AI/ML frameworks. Just as the ancient classifiers distilled natural phenomena into musical gestures, AI/ML algorithms extract and model features from large datasets. This parallel provides a framework for developing AI/ML systems with human-interpretable metaphors, enhancing pattern recognition and classification transparency.
    Exploring phenological phenomena in qin music fosters an ecological consciousness, connecting musicians with an other-than-human world. This entangled approach aligns with contemporary efforts to bridge organic and digital realms; combining archaic knowledge with contemporary technology can unlock new possibilities for imagination, innovation, and a more profound understanding of music’s role in our world.

    Viola Yip: “Liminal Lines” 

    In her latest solo work “Liminal Lines”, Viola Yip has developed a performance with her self-built electromagnetic feedback dress. The dress is made of non-insulated audio cables attached to a soft PVC fabric, through which audio signals pass. The signal is first captured by an electromagnetic microphone; the captured signals then pass through guitar pedals and a mixer, and eventually back to the dress to complete the feedback loop. When wearing the dress, her body engages it through a wide range of distances, pressures, and speeds (touching, squeezing, stretching, etc.). These body-and-instrument interactions physically manipulate the interferences and modulations of the electromagnetic fields, allowing various complex sonorities to emerge and modulate over time.
    For this workshop, she is developing a new lecture performance with the dress, in which she will dive into her journey with physical touch, materialities, and spaces within and surrounding the wearable instrument and the performer’s body.

    Andrew McPherson: “Of algorithms and apparatuses: entangled design of digital musical instruments”

    Digital musical instruments are often promoted for their ability to reconfigure relationships between actions and sounds, or for those relationships to incorporate forms of algorithmic behaviour. In this talk, I argue that these apparently unlimited possibilities actually obscure strong and deeply ingrained ideologies which lead to certain design patterns appearing repeatedly over the years. Some such ideological decisions include the reliance on spatial metaphors and unidirectional signal flow models, and the supposition that analytical representations about music can be inverted into levers of control for creating it. I will present some work-in-progress theorising an alternative view based on Karen Barad’s agential realism, particularly Barad’s notions of the apparatus and the agential cut. In this telling, algorithmic instruments are not merely measuring and manipulating stable pre-existing phenomena; they are actively bringing those phenomena into existence. On closer inspection, the boundaries between designer and artefact, instrument and player, materiality and discourse, are more fluid than they first appear, which offers an opening for new approaches to design of algorithmic tools within and beyond music.

    Nicola L. Hein: “Cybernetic listening and embodiment in human-machine improvisation”

    In this talk, I will show and discuss several pieces of mine which operate within the domain of human-machine improvisation using musical agent systems. The focus of this talk will be the changing parameters of cybernetic listening, interaction, and embodiment between human and machine performers. By using my own works, which employ purely software-based musical agents, visual projections and light, robots, and varying perceptual modalities of musical agents as a matrix of changing parameters of the situated human-machine interaction, I will argue for the importance of embodiment as a central concern in musical human-machine improvisation. The term cybernetic listening will help to further elaborate on the systemic components of musical performance and interaction with musical agent systems.

    Federico Visi: “The Sophtar: a networkable feedback string instrument with embedded machine learning”

    The Sophtar is a tabletop string instrument with an embedded system for digital signal processing, networking, and machine learning. It features a pressure-sensitive fretted neck, two sound boxes, and controlled feedback capabilities by means of bespoke interface elements. The design of the instrument is informed by my practice with hyperorgan interaction in networked music performance. I discuss the motivations behind the development of the instrument and describe its structure, its interface elements, and the hyperorgan and sound synthesis interaction approaches it implements. Finally, I reflect on the affordances of the Sophtar and its differences from and similarities with other instruments, and outline future developments and uses.

    Teresa Pelinski and Adam Pultz Melbye: “Building and performing with an ensemble of models”

    How can we treat machine learning models as design material when building new musical instruments? Many of the tools available for embedding machine learning in musical practice have fixed model architectures, so the practitioners’ involvement with the models is often limited to the training data. In this context, the design material appears to be the data. From a developer’s perspective, however, when training a model for a specific task, a ‘model’ does not refer to a single entity but to an ensemble: the architecture is trained while varying the number of layers, blocks, or embedding sizes (the hyperparameters), as well as training-specific parameters such as learning rate or dropout.
    For the last month we have been working on a practice-based research project involving the FAAB (feedback-actuated augmented bass), the ai-mami system (AI-models as materials interface), and a modular synthesiser. In this talk, we will discuss our process in building this performance ensemble, our insights into using machine learning models as design material, and, finally, the technical infrastructure of the project.
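
    As a toy illustration of this point – one architecture, many trained variants – the following sketch sweeps a small hyperparameter grid. The grid values and the scikit-learn regressor are assumptions; this is not the ai-mami system.

    ```python
    # Toy illustration of a "model" as a family of trained variants rather than
    # a single artefact. The grid and the scikit-learn regressor are stand-ins.
    from itertools import product

    import numpy as np
    from sklearn.neural_network import MLPRegressor

    X = np.random.randn(200, 8)   # placeholder sensor features
    y = np.sin(X).sum(axis=1)     # placeholder target

    grid = {
        "hidden_layer_sizes": [(16,), (32, 32)],  # depth/width variants
        "learning_rate_init": [1e-2, 1e-3],       # training-specific parameter
        "alpha": [1e-4, 1e-2],                    # regularisation (dropout stand-in)
    }

    ensemble = []
    for sizes, lr, alpha in product(*grid.values()):
        model = MLPRegressor(hidden_layer_sizes=sizes, learning_rate_init=lr,
                             alpha=alpha, max_iter=500).fit(X, y)
        ensemble.append(model)

    # The practitioner can then audition each variant as a different "instrument".
    print(f"{len(ensemble)} trained variants of one architecture")
    ```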

    John MacCallum: “OpenSoundControl Everywhere”

    In this talk/demo I will preview OSE, a lightweight, dynamic OpenSoundControl (OSC) server designed for rapid prototyping in heterogeneous environments. Since its development over 25 years ago, OSC has been a popular choice for those looking to expose software and hardware to dynamic control. OSC itself, however, is not dynamic: seamless integration of new nodes into an existing network is challenging, and OSC servers are not typically extensible at runtime. This lack of dynamism stands increasingly in the way of truly rapid prototyping in fields such as digital musical instrument design. OSE addresses this challenge by implementing a general-purpose OSC server that is itself controllable and configurable via OSC. I will briefly discuss some implications of designing with such a system, the state of the current work, and its future.
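
    The following sketch is not OSE (whose code is not shown in this post), but it illustrates the kind of runtime dynamism described above using the python-osc package: a server that extends its own OSC address space in response to an OSC message.

    ```python
    # Not OSE itself, just a sketch of runtime-extensible OSC dispatch using
    # the python-osc package: the server's address space grows when a client
    # sends it a /register message.
    from pythonosc.dispatcher import Dispatcher
    from pythonosc.osc_server import BlockingOSCUDPServer

    dispatcher = Dispatcher()

    def register(address, new_address):
        """Meta-handler: '/register <new_address>' adds a handler at runtime."""
        dispatcher.map(str(new_address), lambda addr, *args: print(addr, args))
        print(f"now listening on {new_address}")

    dispatcher.map("/register", register)

    # Any client can now reshape the server's address space while it runs, e.g.
    # sending '/register /synth/cutoff' creates a live /synth/cutoff endpoint.
    server = BlockingOSCUDPServer(("127.0.0.1", 9000), dispatcher)
    server.serve_forever()
    ```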

    Exhibitions

    (the exhibitions will be briefly introduced right before the lunch break)

    • Zora Kutz and Stratos Bichakis: Beads Beats
    • Friederike Fröbel, Malte Bergmann: Student projects, MA Interface Cultures, Kunstuniversität Linz

    Zora Kutz & Stratos Bichakis: “Beads Beats”

    As one of the earliest forms of fabric decoration, found independently in cultures across the world, beading provides a rich history of techniques and materials that fit well with creating interactive textiles. Sensitive to touch and pressure, capacitive sensing allows for a range of input that can then be used for a variety of applications. In this project, we wish to create an artefact not unlike a musical instrument, used to control both synthetic soundscapes and the spatialisation of audio material. Along the way, we are documenting our findings on both the types of interaction we are researching and the material and construction specifications.

    Friederike Fröbel & Malte Bergmann: “Experiments with textile interfaces and found sounds”

    This exhibition presents projects that explore meaningful sound interactions through textile-tangible interfaces using digital sound processing techniques. They were developed in two courses of the Masters Programme Interface Cultures at the University of Art and Industrial Design in Linz, Austria. The annual course explores the relationship between technology, fashion, craft, and design, with a particular focus on the idea of dynamic surfaces and soft circuits and how these can be realised through various textile processing techniques such as knitting, weaving, embroidery, and many more. The course consisted of experimenting with and exploring textile processing techniques using capacitive sensors and designing novel interaction prototypes with soft interfaces. Students worked on projects using Bela Mini, Trill Craft, and Pure Data. Each student brought a personal field recording and a piece of clothing or textile to which they had a particular attachment. The materials students brought ranged from fish leather and stomach rumbles to suitcases and the rattling sounds of railway bridges. These materials were then ‘upcycled’ and collectively combined as part of the group work, using soft textile sensing to generate expressive and playful ways of using clothing as a sound interface.

    Performances

    (~1 hour in total at the end of the workshop)

    • Echo Ho and Alberto de Campo
    • Adam Pultz Melbye and Teresa Pelinski
    • Eliad Wagner
    • Federico Visi
    • Absolute Relativity – flatcat, Kraken, nUFO, and more (group and audience welcome to play)
    • Nicola Hein
  • AI in Music symposium

    I look forward to taking part in the AI in Music – Agency, Performance, Production and Perception symposium organised by members of the KISS project at the University of Music Trossingen. The event will run for two days, 15 and 16 December 2023. The programme looks very interesting, with keynotes, panels, and concerts addressing different aspects of the use of AI in music. I will contribute to the panel on AI in performance together with Thor Magnusson and Anna Xambó. Here is the panel abstract:

    The panel discusses how Artificial Intelligence can offer novel possibilities for music performance. The panel examines the utilization of algorithms as co-performers and machine learning as a means of enhancing the interface between human bodies and sound production. Furthermore, the panel considers how machine learning itself can become a central element of performance and how bodies of data can be made performatively perceptible. It gathers viewpoints aimed at understanding the impact of creative AI on our interactions with technology, social dynamics, and knowledge creation.

    Talking with Anna and Thor about these topics is going to be such a treat.

  • Interwoven Sound Spaces

    Interwoven Sound Spaces is an interdisciplinary project that brought together telematic music performance, interactive textiles, interaction design, and artistic research. A team of researchers collaborated with two professional contemporary music ensembles based in Berlin, Germany, and Piteå, Sweden, and four composers, with the aim of creating a telematic distributed concert taking place simultaneously in two concert halls and online. Central to the project was the development of interactive textiles capable of sensing the musicians’ movements while playing acoustic instruments and of generating data the composers used in their works. Musicians, instruments, textiles, sounds, halls, and data formed a network of entities and agencies that was reconfigured for each piece, showing how networked music practice enables distinctive musicking techniques.
    https://www.interwovensoundspaces.com

  • Successful funding application

    The news is in: our proposal titled “Music of the Indeterminate Place: telematic performance and composition intersecting physical and network spaces” was awarded a three-year artistic research grant from the Swedish Research Council! This will support a substantial amount of hyperorgan and networked music performance work at Luleå University of Technology, with international collaborations involving other organ halls and research institutions. We will build upon the work we have been doing at the GEMM))) Gesture Embodiment and Machines in Music research cluster since 2019, including the performances of the TCP/Indeterminate Place global hyperorgan quartet and the gesture-organ interactions developed in collaboration with Opera Mecatronica. There will be networked organ sounds!

  • Talk at the “Mapping Social Interaction through Sound” symposium, Humboldt University, Berlin

    I was invited to participate in the Mapping Social Interaction through Sound symposium on 27-28 November 2020. The symposium is organised by Humboldt University, Berlin, and – as is customary these days – will take place on Zoom.

    This is the abstract of my talk.

    Building and exploring multimodal musical corpora:
    from data collection to interaction design using machine learning

    Musical performance is a multimodal experience, for performers and listeners alike. A multimodal representation of a piece of music can contain several synchronised layers, such as audio, symbolic representations (e.g. a score), videos of the performance, physiological and motion data describing the performers’ movements, as well as semantic labelling and annotations describing expressivity and other high-level qualities of the music. This delineates a scenario where computational music analysis can harness cross-modal processing and multimodal fusion methods to shift the focus toward the relationships that tie together different modalities, thereby revealing the links between low-level features and high-level expressive qualities.
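
    As a hypothetical sketch of such a layered representation, the following Python data structure collects synchronised modalities and time-spanning annotations; all field names are illustrative, not the format of an actual corpus.

    ```python
    # Hypothetical sketch of the layered structure described above; the GEMM
    # corpus format is not specified in the talk, so all field names here are
    # illustrative (Python 3.9+).
    from dataclasses import dataclass, field
    import numpy as np

    @dataclass
    class MultimodalRecording:
        audio: np.ndarray          # (samples, channels) audio of the performance
        sample_rate: int           # e.g. 48000
        motion: np.ndarray         # (frames, markers * 3) motion-capture data
        motion_rate: int           # e.g. 240 (Hz)
        score_events: list[tuple[float, str]]  # (time in s, symbolic event)
        annotations: list[tuple[float, float, str]] = field(default_factory=list)
        # each annotation: (start s, end s, high-level expressive label)

        def annotations_at(self, t: float) -> list[str]:
            """Expressive labels whose span contains time t: the hook linking
            low-level features to high-level qualities."""
            return [lab for start, end, lab in self.annotations if start <= t <= end]
    ```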

    I will present two concurrent projects focussed on harnessing musical corpora for analysing expressive instrumental music performance and designing musical interactions. The first project is centered on a data collection method – currently being developed by the GEMM research cluster at the School of Music in Piteå – aimed at bridging the gap between qualitative and quantitative approaches. The purpose of this method is to build a data corpus containing multimodal measurements linked to high-level subjective observations. By applying stimulated recall (a common qualitative research method in education, medicine, and psychotherapy), the embodied knowledge of music professionals is systematically included in the analytic framework. Qualitative analysis through stimulated recall is an efficient method for generating higher-level understandings of musical performance. Initial results suggest that this process is pivotal in building our multimodal corpus, providing insights that would be unattainable using quantitative data alone.

    The second project – a joint effort with the Computing Department at Goldsmiths, University of London – consists of a sonic interaction design approach that makes use of deep reinforcement learning to explore many mapping possibilities between large sound corpora and motion sensor data. The design approach adopted is inspired by the ideas established by the interactive machine learning paradigm, as well as by the use of artificial agents in computer music for exploring complex parameter spaces. We refer to this interaction design approach as Assisted Interactive Machine Learning (AIML). While playing with a large corpus of sounds through gestural interaction by means of a motion sensor, the user can give feedback to an artificial agent about the gesture-sound mappings proposed by the latter. This iterative process results in an interactive exploration of the corpus, as well as in a way of creating and refining gesture-sound mappings.
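
    The following toy sketch mirrors only the interaction pattern just described – an agent proposes mappings, and the user’s feedback steers subsequent proposals. The actual AIML system uses deep reinforcement learning over sound corpora; here a simple (1+1)-style search and a simulated rating stand in for both.

    ```python
    # Toy, bandit-style reduction of the AIML loop: an agent proposes
    # gesture-sound mappings, the musician rates each proposal, and the agent
    # drifts toward well-rated regions of the mapping space. The "musician"
    # is simulated here by a hidden preferred mapping.
    import numpy as np

    rng = np.random.default_rng(1)
    DIM = 4                      # toy: 4 gesture features -> 4 synth params

    target = rng.standard_normal((DIM, DIM))  # what the simulated musician likes
    mean = np.zeros((DIM, DIM))  # the agent's current best mapping
    step = 1.0                   # exploration magnitude

    def musician_feedback(mapping: np.ndarray) -> float:
        """Stand-in for the user's rating: higher when closer to the target."""
        return -np.linalg.norm(mapping - target)

    for round_ in range(50):
        proposal = mean + step * rng.standard_normal((DIM, DIM))
        if musician_feedback(proposal) > musician_feedback(mean):
            mean = proposal      # keep proposals the musician prefers
        step *= 0.97             # gradually refine the search

    print("distance to preferred mapping:", np.linalg.norm(mean - target))
    ```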

    These projects are representative of how the development of methods for combining qualitative and quantitative data, in conjunction with the use of computational techniques such as machine learning, can be instrumental in the design of complex mappings between body movement and musical sound, and contribute to the study of the multiple facets of embodied music performance.

    Further reading

    Visi, F. G., Östersjö, S., Ek, R., & Röijezon, U. (2020). Method development for multimodal data corpus analysis of expressive instrumental music performance. Frontiers in Psychology, 11(576751), doi: 10.3389/fpsyg.2020.576751
    Download PDF (pre-print)

    Visi, F. G., & Tanaka, A. (2021). Interactive Machine Learning of Musical Gesture. In E. R. Miranda (Ed.), Handbook of Artificial Intelligence for Music: Foundations, Advanced Approaches, and Developments for Creativity. Springer Nature, forthcoming.
    View on arXiv.org
    Download PDF (pre-print)

    Visi, F. G., & Tanaka, A. (2020). Towards Assisted Interactive Machine Learning: Exploring Gesture-Sound Mappings Using Reinforcement Learning. In ICLI 2020 – the Fifth International Conference on Live Interfaces.
    Download PDF

    Presentation slides
    Download PDF

  • Physically Distant #3: the network, the pandemic, and telematic performance

    PD#3 will be part of Ecology, Site And Place – Piteå Performing Arts Biennial 2020. Participation in the conference is free, but registration is compulsory. Register by sending an email to piteabiennial@gmail.com

    After the two previous editions in June and July, the third Physically Distant Talks will take place on 26 and 27 October 2020. The talks are going to be part of the Ecology, Site And Place – Piteå Performing Arts Biennial online event.

    The format will be different this time: there are going to be more telematic performances, and the talks will be structured in three panels. Each panel member is invited to prepare a 3-minute provocation/reflection related to the topic. This collection of provocations from the panelists will set the tone for an open discussion in the style of the previous Physically Distant talks. As in the previous editions, Stefan Östersjö and I will be moderating the discussion.

    Programme (all times are CET)

    Monday, 26 October 2020

    17:30 Introduction. Stefan Östersjö and Federico Visi
    17:40 Simon Waters and Paul Stapleton: Musicking online: your technical problem is actually a social problem. A performative conversation.

    18:00-19:00 Panel I. Instrumentality in Networked Performance
    Panelists: Nela Brown, Nicholas Brown, Juan Parra Cancino, Franziska Schroeder, Henrik von Coler.

    19:00-19:45 Telematic Performance: A concert hall organ in the network.
    Live-streaming from Studio Acusticum. Telematic performances with the University Organ remotely controlled from several locations.
    Robert Ek, clarinet, performing in Piteå (SE)
    Mattias Petersson, live-coding, performing in Piteå (SE)
    Federico Visi, electronics, electric guitar, performing in Berlin (DE)
    Scott Wilson, live coding, performing in Birmingham (UK)
    Stefan Östersjö, electric guitar, performing in Stockholm (SE)

    19:45-20:00 Break

    20:00-21:00 Panel II. Network ecology: Communities of practice for the digital arts
    Panelists: Shelly Knotts, Thor Magnusson, Mattias Petersson, Rebekah Wilson, Scott Wilson.

    Tuesday, 27 October 2020

    17:45-18:00 Marcin Pączkowski: rehearsing music online: possibilities and limitations

    18:00-19:00 Panel III. The network as place
    Panelists: Ximena Alarcón Díaz, David Brynjar Franzson/Angela Rawlings/Halla Steinunn Stefánsdóttir, Chicks on Speed (Melissa Logan, Alex Murray-Leslie), Maja Jantar, Marcin Pączkowski, Roger Mills, Luca Turchet.

    19:00-19:30 Telematic Performance: iða
    David Brynjar Franzson, technical concept and streaming (US)
    Maja Jantar, performer and composer of visual score (BE)
    Angela Rawlings, performer and composer of visual score (IS/CA)
    Halla Steinunn Stefánsdóttir, performer and composer of visual score (SE)

    19:30-20:00 Break

    20:00-21:00 Where do we go from here? (plenary discussion)

    For more details on the Ecology, Site And Place – Piteå Performing Arts Biennial 2020 online event, download the book of abstracts.