Tag: music

  • Computer Music Journal article: “An End-to-End Musical Instrument System That Translates Electromyogram Biosignals to Synthesized Sound”

    I finally put my hands on the physical copy of the Computer Music Journal issue on Human-AI Cocreativity that includes our open-access article An End-to-End Musical Instrument System That Translates Electromyogram Biosignals to Synthesized Sound.

    There we report on our work on an instrument that translates the electrical activity produced by muscles into synthesised sound. The article also describes our collaboration with the Chicks on Speed on the performance piece Noise Bodies (2019), which was part of the Up to and Including Limits: After Carolee Schneemann exhibition at Muzeum Susch.

    I co-authored the article with Atau Tanaka, Balandino Di Donato, Martin Klang, and Michael Zbyszyński. Here is the abstract:

    This article presents a custom system combining hardware and software that senses physiological signals of the performer’s body resulting from muscle contraction and translates them to computer-synthesized sound. Our goal was to build upon the history of research in the field to develop a complete, integrated system that could be used by nonspecialist musicians. We describe the Embodied AudioVisual Interaction Electromyogram, an end-to-end system spanning wearable sensing on the musician’s body, custom microcontroller-based biosignal acquisition hardware, machine learning–based gesture-to-sound mapping middleware, and software-based granular synthesis sound output. A novel hardware design digitizes the electromyogram signals from the muscle with minimal analog preprocessing and treats it in an audio signal-processing chain as a class-compliant audio and wireless MIDI interface. The mapping layer implements an interactive machine learning workflow in a reinforcement learning configuration and can map gesture features to auditory metadata in a multidimensional information space. The system adapts existing machine learning and synthesis modules to work with the hardware, resulting in an integrated, end-to-end system. We explore its potential as a digital musical instrument through a series of public presentations and concert performances by a range of musical practitioners.
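
    To give a concrete (if greatly simplified) picture of the signal chain the abstract describes, here is a minimal Python sketch of the general idea: amplitude features extracted from windowed EMG are mapped by a small regression model to granular synthesis parameters. The feature set, the regression model, and the parameter names are illustrative assumptions of mine, not the actual EAVI EMG hardware or software described in the article.

```python
# Minimal sketch (not the EAVI EMG implementation): windowed EMG samples are
# reduced to simple amplitude features, and a small regression model maps the
# features to granular synthesis parameters.
import numpy as np
from sklearn.neural_network import MLPRegressor

def emg_features(frame):
    """Extract simple amplitude features from one window of raw EMG samples."""
    rms = np.sqrt(np.mean(frame ** 2))                   # overall contraction level
    zc = np.mean(np.abs(np.diff(np.sign(frame)))) / 2.0  # zero-crossing rate
    return np.array([rms, zc])

# Toy training data: feature vectors paired with synthesis parameters
# (grain size in ms, playback position 0-1, amplitude 0-1). In practice these
# pairs would come from the performer demonstrating gestures.
X = np.array([emg_features(np.random.randn(256) * g) for g in (0.1, 0.5, 1.0, 2.0)])
y = np.array([[80, 0.1, 0.2], [60, 0.4, 0.5], [40, 0.7, 0.8], [20, 0.9, 1.0]])

mapper = MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000, random_state=0).fit(X, y)

# At performance time, each incoming EMG window is turned into features and
# mapped to parameters that would drive a granular synthesizer.
live_frame = np.random.randn(256) * 0.8
grain_size_ms, position, amp = mapper.predict([emg_features(live_frame)])[0]
print(f"grain size: {grain_size_ms:.1f} ms, position: {position:.2f}, amp: {amp:.2f}")
```

    In the system the article reports on, acquisition happens on dedicated class-compliant hardware and the mapping layer is trained interactively by the performer; the sketch above only illustrates the feature-to-parameter mapping idea.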

  • Talk at the “Mapping Social Interaction through Sound” symposium, Humboldt University, Berlin

    I was invited to participate in the Mapping Social Interaction through Sound symposium on 27-28 November 2020. The symposium is organised by Humboldt University, Berlin and – as is customary these days – will take place on Zoom.

    This is the abstract of my talk.

    Building and exploring multimodal musical corpora:
    from data collection to interaction design using machine learning

    Musical performance is a multimodal experience, for performers and listeners alike. A multimodal representation of a piece of music can contain several synchronized layers, such as audio, symbolic representations (e.g. a score), videos of the performance, physiological and motion data describing the performers' movements, as well as semantic labelling and annotations describing expressivity and other high-level qualities of the music. This delineates a scenario where computational music analysis can harness cross-modal processing and multimodal fusion methods to shift the focus toward the relationships that tie together different modalities, thereby revealing the links between low-level features and high-level expressive qualities.

    I will present two concurrent projects focussed on harnessing musical corpora for analysing expressive instrumental music performance and designing musical interactions. The first project is centered on a data collection method – currently being developed by the GEMM research cluster at the School of Music in Piteå – aimed at bridging the gap between qualitative and quantitative approaches. The purpose of this method is to build a data corpus containing multimodal measurements linked to high-level subjective observations. By applying stimulated recall (a common qualitative research method in education, medicine, and psychotherapy), the embodied knowledge of music professionals is systematically included in the analytic framework. Qualitative analysis through stimulated recall is an efficient method for generating higher-level understandings of musical performance. Initial results suggest that this process is pivotal in building our multimodal corpus, providing insights that would be unattainable using quantitative data alone.

    The second project – a joint effort with the Computing Department at Goldsmiths, University of London – consists of a sonic interaction design approach that makes use of deep reinforcement learning to explore many mapping possibilities between large sound corpora and motion sensor data. The design approach is inspired by the ideas established by the interactive machine learning paradigm, as well as by the use of artificial agents in computer music for exploring complex parameter spaces. We refer to this interaction design approach as Assisted Interactive Machine Learning (AIML). While playing with a large corpus of sounds through gestural interaction by means of a motion sensor, the user can give feedback to an artificial agent about the gesture-sound mappings the agent proposes. This iterative process results in an interactive exploration of the corpus, as well as in a way of creating and refining gesture-sound mappings.

    These projects are representative of how the development of methods for combining qualitative and quantitative data, in conjunction with the use of computational techniques such as machine learning, can be instrumental in the design of complex mappings between body movement and musical sound, and contribute to the study of the multiple facets of embodied music performance.
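
    To illustrate the interaction cycle behind the AIML approach outlined above, here is a toy Python sketch in which an agent proposes gesture-to-sound mappings and keeps those the performer rates highly. The published approach uses a deep reinforcement learning agent and large sound corpora; the hill-climbing agent, the dimensionalities, and the simulated 0-1 rating below are simplifications assumed purely for illustration.

```python
# Toy stand-in for the Assisted Interactive Machine Learning (AIML) loop: an
# agent proposes gesture-to-sound mappings and refines them from the
# performer's feedback. This is NOT the published deep reinforcement learning
# system; it only illustrates the human-in-the-loop cycle.
import numpy as np

rng = np.random.default_rng(0)
N_GESTURE_DIMS = 6   # e.g. features derived from a motion sensor
N_SOUND_DIMS = 3     # e.g. audio descriptors indexing a sound corpus

# Hidden "ideal" mapping used here only to simulate the performer's preferences.
target = rng.normal(size=(N_SOUND_DIMS, N_GESTURE_DIMS))

def user_feedback(candidate):
    """Stand-in for the performer's 0-1 rating of a proposed mapping.
    In the real workflow the musician plays with the mapping and rates it."""
    return float(np.exp(-np.linalg.norm(candidate - target)))

def propose(mapping, scale=0.3):
    """The agent proposes a new candidate mapping near the current best one."""
    return mapping + rng.normal(0.0, scale, mapping.shape)

mapping = rng.normal(size=(N_SOUND_DIMS, N_GESTURE_DIMS))
best = user_feedback(mapping)
for step in range(50):                # a short interactive session
    candidate = propose(mapping)
    score = user_feedback(candidate)  # performer tries the candidate mapping
    if score >= best:                 # keep mappings the performer prefers
        mapping, best = candidate, score

# The resulting matrix projects gesture features onto corpus descriptors:
# sound_point = mapping @ gesture_features
print(f"final rating: {best:.3f}")
```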

    Further reading

    Visi, F. G., Östersjö, S., Ek, R., & Röijezon, U. (2020). Method development for multimodal data corpus analysis of expressive instrumental music performance. Frontiers in Psychology, 11, 576751. doi: 10.3389/fpsyg.2020.576751
    Download PDF (pre-print)

    Visi, F. G., & Tanaka, A. (2021). Interactive Machine Learning of Musical Gesture. In E. R. Miranda (Ed.), Handbook of Artificial Intelligence for Music: Foundations, Advanced Approaches, and Developments for Creativity. Springer Nature, forthcoming.
    View on arXiv.org
    Download PDF (pre-print)

    Visi, F. G., & Tanaka, A. (2020). Towards Assisted Interactive Machine Learning: Exploring Gesture-Sound Mappings Using Reinforcement Learning. In ICLI 2020 – the Fifth International Conference on Live Interfaces.
    Download PDF

    Presentation slides
    Download PDF

  • Physically Distant #3: the network, the pandemic, and telematic performance

    PD#3 will be part of Ecology, Site And Place – Piteå Performing Arts Biennial 2020. Participation in the conference is free, but registration is compulsory. Register by sending an email to piteabiennial@gmail.com

    After the two previous editions in June and July, the third Physically Distant Talks will take place on 26 and 27 October 2020. The talks will be part of the online event of Ecology, Site and Place – Piteå Performing Arts Biennial.

    The format will be different this time: there will be more telematic performances, and the talks will be structured in three panels. Each panel member is invited to prepare a 3-minute provocation or reflection related to the topic. This collection of provocations from the panelists will set the tone for an open discussion in the style of the previous Physically Distant talks. As in the previous editions, Stefan Östersjö and I, Federico Visi, will moderate the discussion.

    Programme (all times are CET)

    Monday, 26 October 2020

    17:30 Introduction. Stefan Östersjö and Federico Visi
    17:40 Simon Waters and Paul Stapleton: Musicking online: your technical problem is actually a social problem. A performative conversation.

    18:00-19:00 Panel I. Instrumentality in Networked Performance
    Panelists: Nela Brown, Nicholas Brown, Juan Parra Cancino, Franziska Schroeder, Henrik von Coler.

    19:00-19:45 Telematic Performance: A concert hall organ in the network.
    Live-streaming from Studio Acusticum. Telematic performances with the University Organ remotely controlled from several locations.
    Robert Ek, clarinet, performing in Piteå (SE)
    Mattias Petersson, live-coding, performing in Piteå (SE)
    Federico Visi, electronics, electric guitar, performing in Berlin (DE)
    Scott Wilson, live coding, performing in Birmingham (UK)
    Stefan Östersjö, electric guitar, performing in Stockholm (SE)

    19:45-20:00 Break

    20:00-21:00 Panel II. Network ecology: Communities of practice for the digital arts
    Panelists: Shelly Knotts, Thor Magnusson, Mattias Petersson, Rebekah Wilson, Scott Wilson.

    Tuesday, 27 October 2020

    17:45-18:00 Marcin Pączkowski: Rehearsing music online: possibilities and limitations

    18:00-19:00 Panel III. The network as place
    Panelists: Ximena Alarcón Díaz, David Brynjar/Angela Rawlings/Halla Stefánsdóttir, Chicks on Speed (Melissa Logan, Alex Murray-Leslie), Maja Jantar, Marcin Pączkowski, Roger Mills, Luca Turchet.

    19:00-19:30 Telematic Performance: iða
    David Brynjar Franzson, technical concept and streaming (US)
    Maja Jantar, performer and composer of visual score (BE)
    Angela Rawlings, performer and composer of visual score (IS/CA)
    Halla Steinunn Stefánsdóttir, performer and composer of visual score (SE)

    19:30-20:00 Break

    20:00-21:00 Where do we go from here? (plenary discussion)

    For more details on the Ecology, Site And Place – Piteå Performing Arts Biennial 2020 online event, download the book of abstracts.