Author: _FV

  • New modosc objects for EMG & MoCap processing in Max

    During November and December 2018, I had the opportunity to spend five weeks as a visiting researcher at RITMO Centre for Interdisciplinary Studies in Rhythm, Time and Motion, an amazing centre of excellence recently inaugurated at the University of Oslo. In mid-November, at the beginning of my stay, Luke Dahl and I presented modosc, our Max library for real-time motion capture analysis, to the attendees of the RITMO International Motion Capture Workshop. The library is the result of a collaboration between Luke and myself, and it has been presented at various conferences in 2018, including NIME (paper) and MOCO (paper).

    While in Oslo, I had the chance to spend time in the RITMO Motion Capture lab and use their Delsys Trigno wireless EMG system synchronised with their Qualisys cameras. With that gear, I coded three new modosc objects for real-time processing of EMG signals synchronised with MoCap:

    • mo.qtmSig: binds data from QTM analog boards to new signal addresses in the modosc namespace (under /modosc/signals);
    • mo.zcr: calculates the zero-crossing rate of a signal (a useful feature for classification tasks);
    • mo.tkeo: calculates the Teager-Kaiser energy-tracking operator (TKEO) of a signal (useful for onset detection, among other things; to learn more, check out Eivind Kvedalen’s PhD thesis: http://folk.uio.no/eivindkv/ek-thesis-2003-05-12-final-2.pdf). I got the idea of implementing this interesting feature from Geert Roks, a student at Goldsmiths, University of London currently collaborating with Prof Atau Tanaka. A rough sketch of both features appears after this list.
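
    The Max objects operate sample by sample on incoming signals; as a rough offline illustration of what the two features compute, here is a minimal buffer-based Python/numpy sketch (not a drop-in equivalent of the modosc abstractions):

    ```python
    import numpy as np

    def zero_crossing_rate(x):
        """Fraction of consecutive samples whose sign changes."""
        x = np.asarray(x, dtype=float)
        signs = np.signbit(x)
        return np.mean(signs[1:] != signs[:-1])

    def tkeo(x):
        """Teager-Kaiser energy operator: psi[n] = x[n]^2 - x[n-1] * x[n+1]."""
        x = np.asarray(x, dtype=float)
        psi = np.empty_like(x)
        psi[1:-1] = x[1:-1] ** 2 - x[:-2] * x[2:]
        psi[0], psi[-1] = psi[1], psi[-2]  # pad the edges
        return psi
    ```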

    Here are some video tutorials to get you started with modosc.

  • modosc: Mocap & Max video tutorials

    These are some introductory video tutorials about processing motion capture data in real time in Max using the modosc library.

    Modosc is a set of Max abstractions designed for computing motion descriptors from raw motion capture data in real time.


  • Building a swarm poly synth using Max 8’s new MC objects

    I just downloaded the new Max 8 and here is a simple synth I built using the new MC (multichannel) objects. Each voice has 32 sawtooth oscillators, so with 6-voice polyphony you can get up to 192 oscillators playing at the same time. The dials control pitch spread and “deviated” release (meaning that each oscillator inside each voice will have a slightly different release time).
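
    The patch itself uses Max’s MC objects, but the pitch-spread idea is easy to sketch offline. Here is a minimal numpy version with naive (non-band-limited) sawtooths; the oscillator counts match the patch, while the spread and frequencies are just placeholder values:

    ```python
    import numpy as np

    SR = 44100

    def saw_swarm(freq, spread_cents=15.0, n_osc=32, dur=1.0, seed=0):
        """One 'voice': n_osc sawtooth oscillators detuned within +/- spread_cents."""
        rng = np.random.default_rng(seed)
        t = np.arange(int(dur * SR)) / SR
        detune = rng.uniform(-spread_cents, spread_cents, n_osc)
        freqs = freq * 2.0 ** (detune / 1200.0)      # cents -> frequency ratio
        phases = (freqs[:, None] * t) % 1.0          # naive sawtooth phase ramps
        return (2.0 * phases - 1.0).mean(axis=0)

    # 6-voice "polyphony": six swarms -> up to 192 oscillators at once
    chord = sum(saw_swarm(f, seed=i) for i, f in enumerate([110.0, 138.6, 164.8, 220.0, 277.2, 329.6]))
    ```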

    Since a few people on social media asked me to share the patch, I made it available for download here. EDIT: I moved the files to GitHub: https://github.com/federicoVisi/max_mc_swarm_polysynth

    NOTE: the patch is a quick and dirty experiment I did to try out the sound synthesis capabilities of the MC objects in Max 8. It is not a finished instrument and has some inconsistencies that should be fixed. You’re very welcome to edit the patch and get in touch to share ideas, although be aware that I might not have the time to provide technical support.

  • Workshop and Performance at Harvestworks, New York City

    I recently ran a workshop and performed at Harvestworks in New York City. The workshop was done in collaboration with Andrew Telichan Phillips from the Music and Audio Research Laboratory at NYU Steinhardt. The amazing Ana García Caraballos joined me to perform my piece 11 Degrees of Dependence on alto sax, Myo armbands, and live electronics. Here’s a video:


  • Testing the XTH Sense with Physical Models and Machine Learning

    I recently had the chance to play with a prototype version of the new XTH Sense. I met up with Marco Donnarumma and Balandino Di Donato at Integra Lab in Birmingham and we spent a couple of days experimenting with this interesting, as yet unreleased device. It is a small, wireless, wearable unit that comprises a mechanomyogram (MMG) sensor for capturing the sounds produced by muscular activity and a 9DoF IMU, which returns various motion features such as acceleration, angular velocity, and orientation.

    I had already been working with 9DoF IMU data during my research collaboration at NYU Steinhardt in New York and for previous performances, so I knew what I could expect in that department. However, one of the main peculiarities of the XTH Sense is the MMG sensor. While in New York, I had worked with Thalmic Labs’ Myo, which employs electromyography (EMG) for muscle sensing. I won’t go too deep into the technical differences between MMG and EMG; suffice it to say that EMG senses the electrical impulses sent by the brain to cause muscle contraction, while MMG consists of the sounds that your muscles produce during contraction and extension. (If you want to learn more, Marco covered these topics thoroughly in this article written with Baptiste Caramiaux and Atau Tanaka; plus, here is another article that compares the two technologies from a biomedical point of view.) In terms of expressive interaction, what I find interesting about the MMG sensor of the XTH Sense is the distinctive way it responds to movements and gestures. Unlike EMG, the control signals obtained from the XTH Sense peak at movement onsets and remain relatively low if you keep your muscles contracted. This is neither better nor worse than EMG; it’s different.

    While adapting my code, I started noticing how the response of the XTH Sense made me interact differently with the machine learning and physical modelling patches I had previously built using the Myo. I guess that with a fair deal of signal processing I could make the two devices behave in a virtually similar way, but in my opinion this would be rather pointless. One of the exciting things about dealing with a new device is embracing its interface idiosyncrasies and exploring their expressive potential. As a simple example, in the physical modelling patch I built for the rain stick demo we filmed in Birmingham, the amount of excitation sent to the model depended on one of the MMG control features. Had I used EMG, I would have obtained a steady excitation signal by firmly squeezing the stick, while the response of the MMG required me to perform a more iterative gesture — like repeatedly tapping my fingers on the stick — if I wanted to obtain a louder sound. This somehow reminded me of the gestures involved in playing a wind instrument, and this idea influenced the whole interaction design I eventually implemented.
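
    As a minimal sketch of that idea (not the actual Max patch, just an assumed envelope-follower formulation in Python, with placeholder sample rate and time constants): rectify the MMG and track it with a fast-attack, slow-decay envelope. Since MMG amplitude is high while the muscle state is changing and low while it is held still, the envelope peaks at onsets and can be used to drive the model’s excitation input.

    ```python
    import numpy as np

    def mmg_excitation(mmg, sr=1000, attack_hz=20.0, decay_hz=2.0):
        """Fast-attack / slow-decay envelope of the rectified MMG signal."""
        x = np.abs(np.asarray(mmg, dtype=float))
        a_up = np.exp(-2.0 * np.pi * attack_hz / sr)   # attack smoothing coefficient
        a_dn = np.exp(-2.0 * np.pi * decay_hz / sr)    # decay smoothing coefficient
        env = np.zeros_like(x)
        for n in range(1, len(x)):
            a = a_up if x[n] > env[n - 1] else a_dn
            env[n] = a * env[n - 1] + (1.0 - a) * x[n]
        return env  # feed this to the physical model's excitation parameter
    ```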

    I will soon be back in New York for a workshop and a performance at Harvestworks on May 8th, where I’ll show some of the tools and methodologies I use in my research and practice, including those I experimented with while playing with the new XTH Sense for the first time. If you’re in the area and want to attend, register here; if you just want to know more about it, drop me a line.

  • Performances at Peninsula Arts Contemporary Music Festival 2016

    Very excited to be performing two pieces at this year’s Peninsula Arts Contemporary Music Festival.

    The super talented Esther Coorevits will once again join me to perform an updated version of Kineslimina, which will be presented at the Gala Concert on Saturday night and will feature some of the technologies I started working on while I was in New York last summer.

    On Sunday, the amazing Dr. Katherine Williams will play soprano sax and motion sensors for my new piece 11 Degrees of Dependence. Her movements will control the parameters of a synthetic flute.

    Check out the rest of the programme; there are some very exciting works you won’t be able to hear anywhere else.

  • At New York University to work on sensors for music performance – pt. 5: tests with musicians

    Some experiments I did together with Andrew Telichan Phillips and some very nice and talented musicians at NYU Steinhardt and at The Sweatshop.
    We used Myo sensor armbands and Machine Learning to adapt control parameters to the movements of musicians playing different musical instruments.

    Credits:
    Alto Sax: Ana Garcia
    Drums: Kim Deuss
    Tenor Sax: Timo Vollbrecht
    Flute: Rachel Bittner

    Related posts: pt. 1, pt. 2, pt. 3, pt. 4.
    This project is supported by Santander Universities and is a collaboration between Federico Visi, who is currently carrying out his doctoral research at the Interdisciplinary Centre for Computer Music Research (ICCMR), Plymouth University (UK), under the supervision of Prof Eduardo Reck Miranda, and Andrew Telichan Phillips, who is currently carrying out his doctoral research at NYU under the supervision of Dr. Tae Hong Park.

  • At New York University to work on sensors for music performance – pt. 4: Talk at NYU Steinhardt

    Tomorrow I am going to deliver a talk at the NYU Music and Audio Research Laboratory about my research at the Interdisciplinary Centre for Computer Music Research (ICCMR) in Plymouth.

    Click on the poster below to learn more.
    [Poster: Poster_20150910_Visi]


    Related posts: pt. 1, pt. 2, pt. 3.

    This project is supported by Santander Universities and is a collaboration between Federico Visi, who is currently carrying out his doctoral research at the Interdisciplinary Centre for Computer Music Research (ICCMR), Plymouth University (UK), under the supervision of Prof Eduardo Reck Miranda, and Andrew Telichan Phillips, who is currently carrying out his doctoral research at NYU under the supervision of Dr. Tae Hong Park.

  • At New York University to work on sensors for music performance – pt. 3: Machine Learning


    In the past couple of weeks we used two Myos at the same time to evaluate higher-level movement features such as symmetry, contraction, and even full-body weight shifting, which worked surprisingly well when combining and comparing the orientation data of the IMUs inside the two Myos.
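
    As a purely illustrative example (an assumed formulation, not the descriptor we actually used), a naive left/right symmetry index can be computed by mirroring one arm’s orientation and measuring how closely the two match:

    ```python
    import numpy as np

    def symmetry_index(euler_left, euler_right):
        """Naive symmetry: mirror one arm's yaw and roll, then score how closely
        the two orientations match (1 = mirror-symmetric, 0 = maximally different)."""
        l = np.asarray(euler_left, dtype=float)                  # [yaw, pitch, roll] in degrees
        r = np.asarray(euler_right, dtype=float) * np.array([-1.0, 1.0, -1.0])
        diff = np.abs((l - r + 180.0) % 360.0 - 180.0)           # wrapped angular difference
        return 1.0 - np.mean(diff) / 180.0
    ```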

    In addition, we used machine learning in Max to map the data from the Myos to a complex granular sampler with resonators. We used the excellent ml.lib library for Max to quickly map arm postures and muscular effort to multiple parameters of the granular engine in order to control real-time processing of the audio signal coming from an electric guitar. The cool thing about this approach is that you don’t have to spend time mapping and rescaling control values, since you can easily map complex expressive gestures to multiple synthesis parameters at once. Check the video above for a demo.
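
    ml.lib runs inside Max, but the underlying idea is a many-to-many regression from a gesture feature vector to a set of synthesis parameters. Here is a rough Python analogue using scikit-learn; the feature layout, parameter count, and network size are all placeholder assumptions:

    ```python
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    # Hypothetical training set: each row pairs a recorded posture/effort vector
    # (e.g. 8 EMG channels + yaw, pitch, roll) with hand-set synth parameters.
    X_train = np.random.rand(40, 11)   # stand-in for recorded feature vectors
    y_train = np.random.rand(40, 5)    # stand-in for 5 granular-engine parameters

    model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000).fit(X_train, y_train)

    def map_gesture(features):
        """Map a live feature vector to all synthesis parameters at once."""
        return np.clip(model.predict(np.atleast_2d(features))[0], 0.0, 1.0)
    ```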

    Speaking of inspiring music I’ve been listening to lately, check out this amazing performance by Colin Stetson and Sarah Neufeld. They are great musicians, and the way they interact and move on stage is really compelling. I listened to their record pretty much every day for a couple of weeks on my subway commute between Manhattan and Greenpoint.

    Related posts: pt. 1, pt. 2.

    This project is supported by Santander Universities and is a collaboration between Federico Visi, who is currently carrying out his doctoral research at the Interdisciplinary Centre for Computer Music Research (ICCMR), Plymouth University (UK), under the supervision of Prof Eduardo Reck Miranda, and Andrew Telichan Phillips, who is currently carrying out his doctoral research at NYU under the supervision of Dr. Tae Hong Park.

  • At New York University to work on sensors for music performance – pt. 2: Making sense of IMU Motion Data

    I’m currently in New York and in the past weeks I have designed a set of Max objects that make use of the motion data obtained from 9DoF IMUs for musical purposes. Antonio Camurri and his colleagues at InfoMus – Casa Paganini have made extensive use of various motion descriptors throughout the years. I tried to adapt their concepts to the data obtained from the IMUs. In this paper you can find an interesting overview of some of the techniques they employ for analysing movement expressivity.

    Since at the moment I’m mostly using Thalmic Labs’ Myo, I also further developed part of the MuMyo Max patch that Kristian Nymoen, Mari Romarheim Haugen, and Alexander Refsum Jensenius from fourMs (University of Oslo) presented at NIME this year. For example, I added a way to centre the yaw orientation value in Max, as shown in the video below. Being able to easily centre the yaw value is also useful because the orientation data of the Myo is affected by yaw drift. I haven’t experienced a massive amount of drift when using the device, so periodically re-centring seems like an acceptable solution in my case; however, it might be worth implementing algorithms that dynamically compensate for yaw drift, such as Madgwick’s filter.
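
    The centring step itself boils down to storing the current yaw as a reference and wrapping the difference back into range. A minimal sketch of that logic in Python (the actual implementation lives in the Max patch):

    ```python
    yaw_offset = 0.0  # reference stored when the performer re-centres

    def centre(current_yaw):
        """Store the current yaw (degrees) as the new reference, e.g. on a button press."""
        global yaw_offset
        yaw_offset = current_yaw

    def centred_yaw(current_yaw):
        """Yaw relative to the stored reference, wrapped to [-180, 180) degrees."""
        return (current_yaw - yaw_offset + 180.0) % 360.0 - 180.0
    ```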

    Andrew (my collaborator here at NYU) is working on a real-time DSP/synthesis engine that we will control through musicians’ movements sensed by the Myo. I look forward to trying it myself and with other musicians I’ve met here at NYU. I also involved Rodrigo Schramm, with whom I have had the pleasure of working several times before. He has recently completed his brilliant PhD thesis on computational analysis of music-related movements, and I’m very happy to collaborate with him again.

    I used some simple maths to convert the orientation data to an XY position in a 2D space, which comes in handy when using some sort of XY pad, like Max’s [nodes] object, to control musical parameters. In addition to the orientation, I also mapped two subsets of the EMG data to the size of the nodes, which creates some interesting global effects when increasing the effort in a movement.
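
    One possible version of that maths (an illustrative assumption, not necessarily the exact mapping in my patch) is to clamp the centred yaw and pitch to a comfortable range and rescale them to normalised XY coordinates:

    ```python
    def orientation_to_xy(yaw, pitch, yaw_range=90.0, pitch_range=60.0):
        """Map centred yaw/pitch (degrees) to XY coordinates in [0, 1]."""
        x = 0.5 + max(-1.0, min(1.0, yaw / yaw_range)) * 0.5
        y = 0.5 + max(-1.0, min(1.0, pitch / pitch_range)) * 0.5
        return x, y
    ```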

    I also built a patch dedicated to recording the sensor data synced with audio sources, which will be very useful for research and analysis. The recorder will also come in handy when using various machine learning techniques to recognise certain movements. I’m particularly interested in recording whole performances and comparing a recording with the real-time data stream during a live performance. To do so I’ll use Baptiste Caramiaux’s Gesture Variation Follower, which is available both for Max and C++.
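
    The logging side of such a recorder is simple in principle: timestamp every incoming sensor frame relative to the moment the audio recording starts, so the two streams can be aligned afterwards. A minimal Python sketch of that idea (the actual recorder is a Max patch; the file format and fields here are assumptions):

    ```python
    import csv
    import time

    class SensorRecorder:
        """Write timestamped sensor frames to a CSV file for later alignment with audio."""

        def __init__(self, path):
            self.file = open(path, "w", newline="")
            self.writer = csv.writer(self.file)
            self.t0 = time.monotonic()  # start the clock when audio recording starts

        def write_frame(self, values):
            # One row per frame: seconds since recording start, then the sensor values.
            self.writer.writerow([time.monotonic() - self.t0, *values])

        def close(self):
            self.file.close()
    ```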

    Enough with the technicalities for this post; let’s talk about music. I’ve been going to The Stone every week since I arrived here, which is a unique venue for amazing, mind-blowing, genre-defying music and a constant source of inspiration for what I’m doing. Every Sunday at 3pm, different musicians perform a selection of new compositions by John Zorn called “The Bagatelles”. In small venues such as The Stone it is possible to hear (or should I say “feel”) every single detail of the performance, and to appreciate the texture of the sound, the presence of the performers, their movements and their interplay. Highly recommended.

    I will soon post some more “musical” tests, also involving other musicians!

    This project is supported by Santander Universities and is a collaboration between Federico Visi, who is currently carrying out his doctoral research at the Interdisciplinary Centre for Computer Music Research (ICCMR), Plymouth University (UK), under the supervision of Prof Eduardo Reck Miranda, and Andrew Telichan Phillips, who is currently carrying out his doctoral research at NYU under the supervision of Dr. Tae Hong Park.