I recently ran a workshop and performed at Harvestworks in New York City. The workshop was done in collaboration with Andrew Telichan Phillips from the Music and Audio Research Laboratory at NYU Steinhardt. The amazing Ana García Caraballos performed my piece 11 Degrees of Dependence with me, on alto sax, Myo armbands, and live electronics. Here’s a video:
Here are some experiments I did together with Andrew Telichan Phillips and some very nice and talented musicians at NYU Steinhardt and at The Sweatshop. We used Myo sensor armbands and machine learning to adapt control parameters to the movements of musicians playing different musical instruments.
Credits:
Alto Sax: Ana García
Drums: Kim Deuss
Tenor Sax: Timo Vollbrecht
Flute: Rachel Bittner
Related posts: pt. 1, pt. 2, pt. 3, pt. 4. This project is supported by Santander Universities and is a collaboration between Federico Visi, who is currently carrying out his doctoral research at the Interdisciplinary Centre for Computer Music Research (ICCMR), Plymouth University (UK), under the supervision of Prof. Eduardo Reck Miranda, and Andrew Telichan Phillips, who is currently carrying out his doctoral research at NYU under the supervision of Dr. Tae Hong Park.
In the past couple of weeks we used two Myos at the same time to estimate higher-level movement features such as symmetry, contraction, and even full-body weight shifting. Combining and comparing the orientation data from the two IMUs inside the Myos worked surprisingly well for this.
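To give an idea of what computing these features from orientation data alone could look like, here is a minimal Python sketch. The formulas, mirroring convention, and normalisation ranges are illustrative assumptions, not what the actual Max patch does.

```python
import math

def symmetry_index(left_rpy, right_rpy):
    """Rough left/right symmetry: close to 1.0 when the two arms mirror each
    other, approaching 0.0 as their (mirrored) orientations diverge.
    left_rpy, right_rpy: (roll, pitch, yaw) in radians from each Myo's IMU."""
    # Mirror the right arm's roll and yaw so that symmetric poses line up.
    mirrored_right = (-right_rpy[0], right_rpy[1], -right_rpy[2])
    diff = sum(abs(l - r) for l, r in zip(left_rpy, mirrored_right))
    return max(0.0, 1.0 - diff / (3 * math.pi))

def weight_shift(left_rpy, right_rpy):
    """Crude weight-shift estimate in [-1, 1] from the difference in arm pitch:
    negative = leaning towards the left arm, positive = towards the right."""
    return max(-1.0, min(1.0, (right_rpy[1] - left_rpy[1]) / math.pi))

# Example: both arms raised symmetrically -> high symmetry, no weight shift.
left = (0.2, 1.1, -0.5)
right = (-0.2, 1.1, 0.5)
print(symmetry_index(left, right), weight_shift(left, right))
```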
In addition to that, we used machine learning in Max to map the data from the Myos to a complex granular sampler with resonators. We used the excellent ml.lib library for Max to quickly map arm postures and muscular effort to multiple parameters of the granular engine, in order to control real-time processing of the audio signal coming from an electric guitar. The cool thing about this approach is that you don’t have to spend time mapping and rescaling individual control values, since you can easily map complex expressive gestures to multiple synthesis parameters at once. Check the video above for a demo.
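The ml.lib objects take care of this inside Max, but purely to illustrate the idea of regressing from a gesture feature vector to several synthesis parameters at once, here is a hypothetical sketch using scikit-learn. The feature layout and parameter names are made up for the example.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Training examples: a few recorded postures (roll, pitch, yaw, mean EMG level)
# paired with the granular-engine settings we want for each of them.
postures = np.array([
    [0.0, 0.0, 0.0, 0.1],   # arm relaxed, low muscular effort
    [0.0, 1.2, 0.0, 0.3],   # arm raised
    [1.0, 0.5, 0.8, 0.9],   # twisted arm, high muscular effort
])
synth_params = np.array([
    # grain_size, density, resonance, pitch_shift (all normalised 0..1)
    [0.2, 0.1, 0.0, 0.5],
    [0.6, 0.5, 0.3, 0.7],
    [0.9, 0.9, 0.8, 0.2],
])

# A small multi-output regressor learns the whole posture -> parameters mapping.
model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000, random_state=0)
model.fit(postures, synth_params)

# At performance time, each incoming sensor frame is mapped to all four
# parameters at once -- no per-parameter scaling to maintain by hand.
live_frame = np.array([[0.5, 0.8, 0.4, 0.6]])
print(model.predict(live_frame))
```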
Speaking of inspiring music I’ve been listening to lately, check out this amazing performance by Colin Stetson and Sarah Neufeld. They are great musicians, and the way they interact and move on stage is really compelling. I listened to their record pretty much every day for a couple of weeks on my subway commute between Manhattan and Greenpoint.
I’m currently in New York, and over the past few weeks I have designed a set of Max objects that use the motion data obtained from 9DoF IMUs for musical purposes. Antonio Camurri and his colleagues at InfoMus – Casa Paganini have made extensive use of various motion descriptors throughout the years, and I have tried to adapt their concepts to the data obtained from the IMUs. In this paper you can find an interesting overview of some of the techniques they employ for analysing movement expressivity.
Since at the moment I’m mostly using Thalmic Labs’ Myo, I have also further developed part of the MuMyo Max patch that Kristian Nymoen, Mari Romarheim Haugen, and Alexander Refsum Jensenius from fourMs (University of Oslo) presented at NIME this year. For example, I added a way to centre the yaw orientation value in Max, as shown in the video below. Being able to easily centre the yaw value is also useful because the Myo’s orientation data is affected by yaw drift. I haven’t experienced a massive amount of drift when using the device, so periodically re-centring seems like an acceptable solution in my case; however, it might be worth implementing algorithms that dynamically compensate for yaw drift, such as Madgwick’s filter.
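In Max this boils down to storing the current yaw as an offset and subtracting it from subsequent readings. Here is a tiny Python sketch of the same idea, assuming yaw arrives in degrees, with the wrap-around at ±180° handled:

```python
def recenter_yaw(raw_yaw_deg, offset_deg):
    """Subtract a stored reference yaw and wrap the result back into [-180, 180),
    so that 'straight ahead' reads as 0 regardless of where the performer faces."""
    return (raw_yaw_deg - offset_deg + 180.0) % 360.0 - 180.0

# When the performer triggers the 'centre' action, store the current yaw...
offset = 170.0
# ...and every subsequent reading is expressed relative to it.
print(recenter_yaw(175.0, offset))   # 5.0
print(recenter_yaw(-175.0, offset))  # 15.0 (wraps correctly across the seam)
```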
Andrew (my collaborator here at NYU) is working on a real-time DSP/synthesis engine that we will control through musicians’ movements sensed by the Myo. I look forward to trying it myself and with other musicians I’ve met here at NYU. I have also involved Rodrigo Schramm, with whom I had the pleasure of working several times before. He has recently completed his brilliant PhD thesis on computational analysis of music-related movements and I’m very happy to collaborate with him again.
I used some simple maths to convert the orientation data to XY position in a 2D space, which comes in handy when using some sort of XY pad like Max’s [nodes] object to control musical parameters. In addition to the orientation, I also mapped two subsets of the EMG data to the size of the nodes, which creates some interesting global effects when increasing the effort in a movement.
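As a rough illustration of that conversion, here is a sketch that maps yaw and pitch into the unit square for an XY-pad style control, and scales a node’s size with average EMG activity. The angle ranges and scaling are arbitrary assumptions, not the values used in my patch.

```python
import math

def orientation_to_xy(yaw, pitch, yaw_range=math.pi / 2, pitch_range=math.pi / 2):
    """Map yaw and pitch (radians, 0 = centred) into XY coordinates in [0, 1],
    suitable for driving an XY-pad style controller such as Max's [nodes]."""
    x = min(1.0, max(0.0, 0.5 + yaw / (2 * yaw_range)))
    y = min(1.0, max(0.0, 0.5 + pitch / (2 * pitch_range)))
    return x, y

def emg_to_node_size(emg_subset, min_size=0.1, max_size=0.4):
    """Scale a node's size with the average effort on a subset of EMG channels,
    assuming each channel has already been rectified and normalised to [0, 1]."""
    effort = sum(emg_subset) / len(emg_subset)
    return min_size + (max_size - min_size) * min(1.0, effort)

print(orientation_to_xy(0.3, -0.2))
print(emg_to_node_size([0.2, 0.6, 0.4, 0.1]))
```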
I also built a patch dedicated to recording the sensor data synced with audio sources, which will be very useful for research and analysis. The recorder will also come in handy when using various machine learning techniques to recognise certain movements. I’m particularly interested in recording whole performances and comparing a recording with the real-time data stream during a live performance. To do so I’ll use Baptiste Caramiaux’s Gesture Variation Follower, which is available both for Max and C++.
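Here is a minimal sketch of the logging idea: each incoming sensor frame is written out with a timestamp expressed in audio samples, so it can later be lined up with an audio recording started at the same moment. The sample rate, file format, and column layout are assumptions made for the example, not the actual patch.

```python
import csv
import time

class SensorRecorder:
    """Log incoming sensor frames with a timestamp expressed in audio samples,
    so the data can later be aligned with an audio recording started at the
    same moment."""

    def __init__(self, path, sample_rate=44100):
        self.sample_rate = sample_rate
        self.start = time.monotonic()
        self.file = open(path, "w", newline="")
        self.writer = csv.writer(self.file)
        self.writer.writerow(["audio_sample", "accel_x", "accel_y", "accel_z",
                              "yaw", "pitch", "roll"]
                             + ["emg_%d" % i for i in range(8)])

    def write_frame(self, accel, orientation, emg):
        # Convert elapsed time since the recording started into a sample position.
        sample_pos = int((time.monotonic() - self.start) * self.sample_rate)
        self.writer.writerow([sample_pos, *accel, *orientation, *emg])

    def close(self):
        self.file.close()

rec = SensorRecorder("take_01_sensors.csv")
rec.write_frame((0.0, 0.1, 0.98), (12.0, -3.5, 1.2), [0.1] * 8)
rec.close()
```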
Enough with the technicalities for this post, let’s talk about music. I’ve been going to The Stone every week since I arrived here; it is a unique venue for amazing, mind-blowing, genre-defying music and a constant source of inspiration for what I’m doing. Every Sunday at 3pm different musicians perform a selection of new compositions by John Zorn called “The Bagatelles”. In small venues such as The Stone it is possible to hear (or should I say “feel”) every single detail of the performance, and appreciate the texture of the sound, the presence of the performers, their movements, and their interplay. Highly recommended.
I will soon post some more “musical” tests, also involving other musicians!
For the next few weeks I will be in New York working on a collaborative project with the Music and Audio Research Laboratory (MARL) at NYU Steinhardt School of Music and Performing Arts Practice.
The goal of this project is to develop software tools that harness wearable sensor technologies for body movement research and interactive music performance. In particular, the project will focus on the use of 9 Degrees of Freedom Inertial Measurement Units (9DoF IMUs) coupled with a form of muscle sensing, such as electromyography (EMG) or mechanomyography (MMG).
This project has a twofold purpose. One is to develop dedicated applications that process the motion data from the sensors in real time, allowing performers to interact with music and with each other through their movements and extending the possibilities of their musical instruments. The other is to give researchers a useful tool for studying body motion and collecting sensor data for analysis.
During the first week we focused on obtaining a stable stream of data from the Myo armband. The Myo features a 9DoF IMU which provides three-dimensional acceleration and angular velocity, in addition to orientation data obtained by sensor fusion, in both Euler angle and quaternion format. Along with the IMU data, the Myo provides 8-channel EMG data, which is a unique feature of the device. I will work with other sensors in the near future, since I don’t want to limit the software I’m working on to the Myo. I’ve already tried other IMUs and I look forward to working with Marco Donnarumma’s new version of the Xth Sense, which is currently being tested and will soon be available through xth.io. However, at the moment the Myo provides a good hardware platform for prototyping algorithms and trying out ideas, since it is a fairly well-engineered and compact device.
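For readers unfamiliar with the device, this is roughly the shape of one frame of the data described above, sketched as a plain Python data structure. The field names and units are mine, not Thalmic’s API.

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass
class MyoFrame:
    """One frame of Myo data as described above; names and units are illustrative."""
    acceleration: Tuple[float, float, float]              # 3-axis accelerometer (g)
    angular_velocity: Tuple[float, float, float]          # 3-axis gyroscope (deg/s)
    orientation_quat: Tuple[float, float, float, float]   # sensor-fusion orientation (w, x, y, z)
    orientation_euler: Tuple[float, float, float]         # same orientation as yaw, pitch, roll
    emg: Tuple[int, ...]                                   # 8 channels of raw EMG

frame = MyoFrame(acceleration=(0.0, 0.02, 0.99),
                 angular_velocity=(1.5, -0.3, 0.0),
                 orientation_quat=(1.0, 0.0, 0.0, 0.0),
                 orientation_euler=(0.0, 0.0, 0.0),
                 emg=tuple([0] * 8))
print(frame.emg)
```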
I started working on real-time implementations of movement descriptors traditionally used with optical motion capture systems, such as Quantity of Motion and Smoothness. These descriptors make it possible to extract expressive features from the movement data, which are useful for interactive music applications and movement analysis. The main challenge here is to adapt the ideas behind these descriptors to the data provided by the wearable sensors, which is completely different from the data obtained by optical devices such as the Kinect and marker-based MoCap systems.
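As a rough illustration of the kind of adaptation involved: without positional data, Quantity of Motion can be approximated from the accelerometer magnitude over a sliding window, and smoothness from how much that signal fluctuates (a jerk-like measure). The sketch below shows one possible approach; it is not the implementation used in my Max objects, and the window size and sample rate are arbitrary.

```python
import math
from collections import deque

class IMUDescriptors:
    """Running estimates of Quantity of Motion and smoothness computed from
    accelerometer data alone (no positional tracking available)."""

    def __init__(self, window=64, sample_rate=50.0):
        self.window = deque(maxlen=window)
        self.dt = 1.0 / sample_rate

    def update(self, accel):
        # Keep a sliding window of acceleration magnitudes.
        self.window.append(math.sqrt(sum(a * a for a in accel)))

    def quantity_of_motion(self):
        # Mean acceleration magnitude: more (or faster) movement -> higher value.
        return sum(self.window) / len(self.window) if self.window else 0.0

    def smoothness(self):
        # Inverse of the mean absolute jerk (rate of change of acceleration):
        # jerky movement -> value near 0, smooth movement -> value near 1.
        if len(self.window) < 2:
            return 1.0
        samples = list(self.window)
        jerk = [abs(b - a) / self.dt for a, b in zip(samples, samples[1:])]
        return 1.0 / (1.0 + sum(jerk) / len(jerk))

descriptors = IMUDescriptors()
for accel in [(0.0, 0.0, 1.0), (0.1, 0.0, 1.1), (0.5, 0.2, 1.4)]:
    descriptors.update(accel)
print(descriptors.quantity_of_motion(), descriptors.smoothness())
```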
In addition to this rather technical work, I will test the software in actual music performances, collaborating with other musicians. I believe this is a vital and essential part of the research, without which the project might steer too far away from what it is actually all about: music. While in New York, I will also try to take advantage of the vibrant and inexhaustible offering of live music this city has always had, which is a great source of inspiration for what I’m doing. I’ve already been to a few excellent concerts and performances of various kinds, and observing the behaviours of the performing musicians has already led to some ideas I want to try in the coming days.
I will try to write more posts about our progress if time allows it, possibly including videos and pictures. Alright, now back to work.