Blog

  • Building a swarm poly synth using Max 8’s new MC objects

    I just downloaded the new Max 8 and here is a simple synth I built using the new MC (multichannel) objects. Each voice has 32 sawtooth oscillators, so with 6-voice polyphony you can get up to 192 oscillators playing at the same time. The dials control pitch spread and “deviated” release (meaning that each oscillator inside each voice will have a slightly different release time).
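
    For anyone who doesn’t use Max, here is a rough, non-real-time sketch in Python/NumPy of what a single swarm voice does. It is only an illustration of the idea; the detune range, the naive sawtooth, and the linear release envelope are my own assumptions, not what the patch actually contains:

```python
import numpy as np

SR = 44100  # sample rate in Hz

def swarm_voice(freq, dur=2.0, n_osc=32, spread_cents=25.0, rel=1.5, rel_dev=0.5):
    """Render one voice: n_osc detuned sawtooths, each with its own release time."""
    t = np.arange(int(dur * SR)) / SR
    out = np.zeros_like(t)
    rng = np.random.default_rng(0)
    for _ in range(n_osc):
        # pitch spread: random detune in cents around the base frequency
        detune_cents = rng.uniform(-spread_cents, spread_cents)
        f = freq * 2.0 ** (detune_cents / 1200.0)
        saw = 2.0 * ((t * f) % 1.0) - 1.0              # naive (aliasing) sawtooth
        # "deviated" release: each oscillator decays at a slightly different rate
        release = rel * (1.0 + rng.uniform(-rel_dev, rel_dev))
        env = np.clip(1.0 - t / release, 0.0, 1.0)     # simple linear release
        out += saw * env
    return out / n_osc

voice = swarm_voice(110.0)  # one swarm voice on A2; six of these give 192 oscillators
```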

    Since a few people on social media asked me to share the patch, I made it available for download here. EDIT: I moved the files to GitHub: https://github.com/federicoVisi/max_mc_swarm_polysynth

    NOTE: the patch is a quick and dirty experiment I did to try out the sound synthesis capabilities of the MC objects in Max 8. It is not a finished instrument and has some inconsistencies that should be fixed. You’re very welcome to edit the patch and get in touch to share ideas, although be aware that I might not have the time to provide technical support.

  • Workshop and Performance at Harvestworks, New York City

    I recently ran a workshop and performed at Harvestworks in New York City. The workshop was run in collaboration with Andrew Telichan Phillips from the Music and Audio Research Laboratory at NYU Steinhardt. The amazing Ana García Caraballos performed my piece 11 Degrees of Dependence with me, on alto sax, Myo armbands, and live electronics. Here’s a video:

     

  • Testing the XTH Sense with Physical Models and Machine Learning

    I recently had the chance to play with a prototype version of the new XTH Sense. I met up with Marco Donnarumma and Balandino Di Donato at Integra Lab in Birmingham and we spent a couple of days experimenting with this interesting, as-yet-unreleased device. It is a small, wireless, wearable unit that comprises a Mechanomyogram (MMG) sensor, which captures the sounds produced by muscular activity, and a 9DoF IMU, which returns various motion features such as acceleration, angular velocity, and orientation.

    I had already been working with 9DoF IMU data during my research collaboration at NYU Steinhardt in New York and for previous performances, so I knew what I could expect in that department. However, one of the main peculiarities of the XTH Sense is the MMG sensor. While in New York, I had worked with Thalmic Labs’ Myo, which uses electromyography (EMG) for muscle sensing. I won’t go too deep into the technical differences between MMG and EMG; suffice it to say that EMG senses the electrical impulses sent by the brain to cause muscle contraction, while MMG consists of the sounds that your muscles produce during contraction and extension[ref]If you want to learn more, Marco covered these topics thoroughly in this article written with Baptiste Caramiaux and Atau Tanaka, plus here is another article that compares the two technologies from a biomedical point of view.[/ref]. In terms of expressive interaction, what I find interesting about the MMG sensor of the XTH Sense is the distinctive way it responds to movements and gestures. Unlike EMG, the control signals obtained from the XTH Sense peak at movement onsets and remain relatively low if you keep your muscles contracted. This is neither better nor worse than EMG; it’s just different.

    While adapting my code, I started noticing how the response of the XTH Sense made me interact differently with the machine learning and physical modelling patches I had previously built using the Myo. I guess that with a fair deal of signal processing I could make the two devices behave in virtually the same way, but in my opinion this would be rather pointless. One of the exciting things about dealing with a new device is embracing its interface idiosyncrasies and exploring their expressive potential. As a simple example, in the physical modelling patch I built for the rain stick demo we filmed in Birmingham, the amount of excitation sent to the model depended on one of the MMG control features. Had I used EMG, I would have obtained a steady excitation signal by firmly squeezing the stick, while the response of the MMG required me to perform a more iterative gesture — like repeatedly tapping my fingers on the stick — if I wanted to obtain a louder sound. This somehow reminded me of the gestures involved in playing a wind instrument, and that idea influenced the whole interaction design I eventually implemented.
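
    To make the difference concrete, here is a hedged sketch (my own simplification in Python, not the actual Max patch) of how an MMG-style control feature can drive the excitation of a physical model. Because the MMG signal itself is quiet during a static hold, the excitation decays unless new movement onsets keep feeding it:

```python
import numpy as np

def excitation_from_mmg(mmg, sr=200, attack=0.01, release=0.15):
    """One-pole envelope follower on the rectified MMG control signal.

    Since raw MMG amplitude is low while a muscle is held contracted, the
    envelope falls back towards zero between movement onsets, so sustained
    excitation requires repeated gestures (e.g. tapping) rather than squeezing.
    """
    a_att = np.exp(-1.0 / (attack * sr))
    a_rel = np.exp(-1.0 / (release * sr))
    env = np.zeros(len(mmg))
    prev = 0.0
    for i, v in enumerate(np.abs(mmg)):
        coeff = a_att if v > prev else a_rel   # fast rise, slower fall
        prev = coeff * prev + (1.0 - coeff) * v
        env[i] = prev
    return env  # scale this to drive the physical model's excitation input
```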

    I will soon be back in New York for a workshop and a performance at Harvestworks on May 8th, where I’ll show some of the tools and methodologies I use in my research and practice, including those I experimented with while playing with the new XTH Sense for the first time. If you’re in the area and want to attend, register here; if you just want to know more about it, drop me a line.

  • Performances at Peninsula Arts Contemporary Music Festival 2016

    Very excited to be performing two pieces at this year’s Peninsula Arts Contemporary Music Festival.

    The super talented Esther Coorevits will once again join me to perform an updated version of Kineslimina at the Gala Concert on Saturday night. This version will feature some of the technologies I started working on while I was in New York last summer.

    On Sunday, the amazing Dr. Katherine Williams will play soprano sax and motion sensors for my new piece 11 Degrees of Dependence. Her movements will control the parameters of a synthetic flute.

    Check out the rest of the programme; there are some very exciting works you won’t be able to hear anywhere else.

  • At New York University to work on sensors for music performance – pt. 5: tests with musicians

    Some experiments I did together with Andrew Telichan Phillips and some very nice and talented musicians at NYU Steinhardt and at The Sweatshop.
    We used Myo sensor armbands and Machine Learning to adapt control parameters to the movements of musicians playing different musical instruments.

    Credits:
    Alto Sax: Ana Garcia
    Drums: Kim Deuss
    Tenor Sax: Timo Vollbrecht
    Flute: Rachel Bittner

    Related posts: pt. 1, pt. 2, pt. 3, pt. 4.
    This project is supported by Santander Universities. It is a collaboration between Federico Visi, who is currently carrying out his doctoral research at the Interdisciplinary Centre for Computer Music Research (ICCMR), Plymouth University (UK), under the supervision of Prof Eduardo Reck Miranda, and Andrew Telichan Phillips, who is currently carrying out his doctoral research at NYU under the supervision of Dr. Tae Hong Park.

  • At New York University to work on sensors for music performance – pt. 4: Talk at NYU Steinhardt

    Tomorrow I am going to deliver a talk at the NYU Music and Audio Research Laboratory about my research at the Interdisciplinary Centre for Computer Music Research (ICCMR) in Plymouth.

    Click on the poster below to learn more.
    [Poster: Poster_20150910_Visi]

     

    Related posts: pt. 1, pt. 2, pt. 3.

    This project is supported by Santander Universities. It is a collaboration between Federico Visi, who is currently carrying out his doctoral research at the Interdisciplinary Centre for Computer Music Research (ICCMR), Plymouth University (UK), under the supervision of Prof Eduardo Reck Miranda, and Andrew Telichan Phillips, who is currently carrying out his doctoral research at NYU under the supervision of Dr. Tae Hong Park.

  • At New York University to work on sensors for music performance – pt. 3: Machine Learning

     

    In the past couple of weeks we have used two Myos at the same time to evaluate higher-level features of the movement, such as Symmetry, Contraction, and even full-body weight shifting, which worked surprisingly well when combining and comparing the orientation data of the two IMUs inside the two Myos.
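
    As an example of the kind of feature I mean, here is a very crude symmetry index sketched in Python. It is my own simplification for illustration only (the real patches work on the full orientation data inside Max):

```python
import numpy as np

def symmetry_index(pitch_l, roll_l, pitch_r, roll_r):
    """Crude symmetry measure from the pitch/roll angles (radians) of two Myos.

    The left arm's roll is mirrored so that mirror-symmetric postures give small
    differences; the result is squashed into 0 (asymmetric) .. 1 (symmetric).
    """
    d_pitch = pitch_l - pitch_r
    d_roll = -roll_l - roll_r          # mirror the left arm's roll
    distance = np.hypot(d_pitch, d_roll)
    return float(np.clip(1.0 - distance / np.pi, 0.0, 1.0))

print(symmetry_index(0.3, 0.5, 0.3, -0.5))  # mirrored posture -> 1.0
```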

    In addition to that we used Machine Learning in Max to map the data from the Myos to a complex granular sampler with resonators. We used the excellent ml.lib library for Max to quickly map arm postures and muscular efforts to multiple parameters of the granular engine in order to control real-time processing of the audio signal coming from an electric guitar. The cool thing about this approach is that you don’t have to spend time mapping and rescaling control values since you can easily map complex expressive gestures to multiple synthesis parameters at once. Check the video above for a demo.
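
    The gist of the mapping, outside of Max, looks something like the sketch below. I’m using scikit-learn here purely as a stand-in for ml.lib (which is a Max library), and the feature and parameter names are made up for the example:

```python
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

# training examples: posture/effort features -> granular engine parameters
# input rows: [yaw, pitch, roll, mean_emg]; output rows: [grain_size, density, resonance]
X_train = np.array([[ 0.0,  0.1,  0.0, 0.2],
                    [ 1.2, -0.4,  0.3, 0.8],
                    [-0.9,  0.6, -0.2, 0.5]])
y_train = np.array([[0.05, 10.0, 0.1],
                    [0.30, 80.0, 0.9],
                    [0.15, 40.0, 0.5]])

# one regressor maps a whole gesture to all synthesis parameters at once,
# so there is no per-parameter rescaling to do by hand
model = KNeighborsRegressor(n_neighbors=1).fit(X_train, y_train)
grain_size, density, resonance = model.predict([[1.0, -0.3, 0.2, 0.7]])[0]
```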

    Speaking of inspiring music I’ve been listening to lately, check this amazing performance by Colin Stetson and Sarah Neufeld. They are great musicians and the way they interact and move on stage is really compelling. I listened to their record pretty much every day for a couple weeks on my subway commute between Manhattan and Greenpoint.

    Related posts: pt. 1, pt. 2.

    This project is supported by Santander Universities. It is a collaboration between Federico Visi, who is currently carrying out his doctoral research at the Interdisciplinary Centre for Computer Music Research (ICCMR), Plymouth University (UK), under the supervision of Prof Eduardo Reck Miranda, and Andrew Telichan Phillips, who is currently carrying out his doctoral research at NYU under the supervision of Dr. Tae Hong Park.

  • At New York University to work on sensors for music performance – pt. 2: Making sense of IMU Motion Data

    I’m currently in New York, and over the past few weeks I have designed a set of Max objects that make use of the motion data obtained from 9DoF IMUs for musical purposes. Antonio Camurri and his colleagues at InfoMus – Casa Paganini have made extensive use of various motion descriptors throughout the years, and I have tried to adapt their concepts to the data obtained from the IMUs. In this paper you can find an interesting overview of some of the techniques they employ for analysing movement expressivity.

    Since at the moment I’m mostly using Thalmic Labs’ Myo, I also further developed part of the MuMyo Max patch that Kristian Nymoen, Mari Romarheim Haugen, and Alexander Refsum Jensenius from fourMs (University of Oslo) presented at NIME this year. For example, I added a way to centre the yaw orientation value in Max, as shown in the video below. Being able to quickly centre the yaw value is also useful because the Myo’s orientation data is affected by yaw drift. I haven’t experienced a massive amount of drift when using the device, so periodically re-centring seems like an acceptable solution in my case; however, it might be worth implementing algorithms that dynamically compensate for yaw drift, such as Madgwick’s filter.
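
    The yaw-centring idea itself is simple; here is a minimal reconstruction in Python (my own sketch, not the MuMyo code): store the current yaw as an offset when the performer hits re-centre, then report yaw relative to that offset, wrapped back into the -180/180 degree range:

```python
class YawCentre:
    """Re-centre incoming yaw readings (degrees) around a user-chosen reference."""

    def __init__(self):
        self.offset = 0.0

    def recentre(self, current_yaw):
        # store the yaw the performer is facing right now as the new "forward"
        self.offset = current_yaw

    def centred(self, yaw):
        # subtract the offset and wrap the result back into [-180, 180)
        return (yaw - self.offset + 180.0) % 360.0 - 180.0

yc = YawCentre()
yc.recentre(95.0)          # performer faces "forward" and hits re-centre
print(yc.centred(100.0))   # -> 5.0 degrees to the right of forward
```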

    Andrew (my collaborator here at NYU) is working on a real-time DSP/synthesis engine that we will control through musicians’ movements sensed by the Myo. I look forward to trying it out myself and with other musicians I’ve met here at NYU. I also involved Rodrigo Schramm, with whom I have had the pleasure of working several times before. He has recently completed his brilliant PhD thesis on computational analysis of music-related movements and I’m very happy to collaborate with him again.

    I used some simple maths to convert the orientation data to XY position in a 2D space, which comes in handy when using some sort of XY pad like Max’s [nodes] object to control musical parameters.  In addition to the orientation, I also mapped two subsets of the EMG data to the size of the nodes, which creates some interesting global effects when increasing the effort in a movement.
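
    The orientation-to-XY conversion is nothing more sophisticated than scaling and clipping; here is a small sketch of the idea (the ranges are arbitrary examples, not the values I actually use):

```python
import numpy as np

def orientation_to_xy(yaw_deg, pitch_deg, yaw_range=90.0, pitch_range=60.0):
    """Map yaw/pitch (degrees) to normalised XY coordinates in 0..1,
    suitable for an XY-pad-style controller such as Max's [nodes] object."""
    x = 0.5 + 0.5 * np.clip(yaw_deg / yaw_range, -1.0, 1.0)
    y = 0.5 + 0.5 * np.clip(pitch_deg / pitch_range, -1.0, 1.0)
    return float(x), float(y)

print(orientation_to_xy(45.0, -30.0))  # -> (0.75, 0.25)
```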

    I also built a patch dedicated to recording the sensor data synced with audio sources, which will be very useful for research and analysis. The recorder will also come in handy when using various machine learning techniques to recognise certain movements. I’m particularly interested in recording whole performances and comparing a recording with the real-time data stream during a live performance. To do so I’ll use Baptiste Caramiaux’s Gesture Variation Follower, which is available both for Max and C++.
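
    The recorder boils down to timestamping each sensor frame against the same clock used to start the audio recording, so the two streams can be aligned afterwards. A rough sketch of that idea (the actual patch does this inside Max, and the column names here are just an example):

```python
import csv
import time

def record_sensor_stream(frames, path="session_sensors.csv", t0=None):
    """Write timestamped sensor frames to CSV.

    frames: iterable of (ax, ay, az, yaw, pitch, roll) tuples; t0 should be the
    same monotonic clock reading taken when the audio recording was started.
    """
    t0 = time.monotonic() if t0 is None else t0
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["t", "ax", "ay", "az", "yaw", "pitch", "roll"])
        for frame in frames:
            writer.writerow([time.monotonic() - t0, *frame])
```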

    Enough with the technicalities for this post; let’s talk about music. I’ve been going to The Stone every week since I arrived here, which is a unique venue for amazing, mind-blowing, genre-defying music and a constant source of inspiration for what I’m doing. Every Sunday at 3pm different musicians perform a selection of new compositions by John Zorn called “The Bagatelles”. In small venues such as The Stone it is possible to hear (or should I say “feel”) every single detail of the performance, and appreciate the texture of the sound, the presence of the performers, their movements, and their interplay. Highly recommended.

    I will soon post some more “musical” tests, also involving other musicians!

    This project is supported by Santander Universities. It is a collaboration between Federico Visi, who is currently carrying out his doctoral research at the Interdisciplinary Centre for Computer Music Research (ICCMR), Plymouth University (UK), under the supervision of Prof Eduardo Reck Miranda, and Andrew Telichan Phillips, who is currently carrying out his doctoral research at NYU under the supervision of Dr. Tae Hong Park.

  • Video: Kineslimina performed at CMMR 2015

    While I’m in New York working on motion sensors for music performance, here is a video of my piece Kineslimina performed last June at the 11th International Symposium on Computer Music Multidisciplinary Research (CMMR) in Plymouth, UK. I’m not 100% happy with the sound quality and I couldn’t access the raw footage to edit it myself, but the guys filming the CMMR performances did a great job nevertheless.

    The piece was performed by Esther Coorevits and me, and I can’t stress enough how important Esther’s contribution to this piece was. Her feedback during rehearsals was vital for the development of the piece, and her performance was superlative. The piece will be performed again at the 2016 Peninsula Arts Contemporary Music Festival in February next year.

  • At New York University to work on sensors for music performance – pt. 1

    For the next few weeks I will be in New York, working on a collaborative project with the Music and Audio Research Laboratory (MARL) at NYU Steinhardt School of Music and Performing Arts Practice.

    The goal of this project is to develop software tools that harness wearable sensor technologies for body movement research and interactive music performance. In particular, the project will focus on the use of 9 Degrees of Freedom Inertial Measurement Units (9DoF IMUs) coupled with a form of muscle sensing, such as electromyography (EMG) or mechanomyography (MMG).

    The project has a twofold purpose. The first is to develop dedicated applications that process the motion data from the sensors in real time, allowing performers to interact with music and with each other through their movements and extending the possibilities of their musical instruments. The second is to provide researchers with a useful tool to study body motion and collect sensor data for analysis.

    During the first week we focused on obtaining a stable stream of data from the Myo armband. The Myo features a 9DoF IMU, which provides three-dimensional acceleration and angular velocity in addition to orientation data obtained by sensor fusion, in both Euler angle and quaternion format. Along with the IMU data, the Myo provides 8-channel EMG data, which is a unique feature of the device. I will work with other sensors in the near future, since I don’t want to limit the software I’m working on to the Myo. I’ve already tried other IMUs and I look forward to working with Marco Donnarumma’s new version of the XTH Sense, which is currently being tested and will soon be available through xth.io. However, at the moment the Myo provides a good hardware platform for prototyping algorithms and trying ideas out, since it is a fairly well-engineered and compact device.
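
    For reference, converting the quaternion the Myo streams into Euler angles is the standard formula below, sketched in Python. As noted above, both representations are already available, so this is just to show what the numbers mean:

```python
import math

def quat_to_euler(w, x, y, z):
    """Standard quaternion -> (roll, pitch, yaw) conversion, angles in radians."""
    roll = math.atan2(2.0 * (w * x + y * z), 1.0 - 2.0 * (x * x + y * y))
    pitch = math.asin(max(-1.0, min(1.0, 2.0 * (w * y - z * x))))  # clamp for safety
    yaw = math.atan2(2.0 * (w * z + x * y), 1.0 - 2.0 * (y * y + z * z))
    return roll, pitch, yaw

print(quat_to_euler(1.0, 0.0, 0.0, 0.0))  # identity orientation -> (0.0, 0.0, 0.0)
```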

    I started working on real-time implementations of movement descriptors traditionally used with optical motion capture systems, such as Quantity of Motion and Smoothness. These descriptors make it possible to extract expressive features from the movement data, which are useful for interactive music applications and movement analysis. The main challenge here is to adapt the ideas behind these descriptors to the data provided by the wearable sensors, which is completely different from the data obtained by optical devices such as the Kinect and marker-based MoCap systems.
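
    As an example of what such an adaptation can look like, here is one possible (and deliberately simple) Quantity-of-Motion-like descriptor computed from accelerometer data instead of the silhouette pixels used in optical systems. This is my own sketch, not the final implementation:

```python
import numpy as np

def quantity_of_motion(accel, window=32):
    """A simple QoM-like curve from wearable accelerometer data.

    accel: (N, 3) array of accelerometer samples; the descriptor is the
    acceleration magnitude smoothed over a short moving-average window.
    """
    mag = np.linalg.norm(accel, axis=1)          # overall intensity of movement
    kernel = np.ones(window) / window            # moving-average smoothing
    return np.convolve(mag, kernel, mode="same")
```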

    In addition to this rather technical work, I will test the software in actual music performances, collaborating with other musicians. I believe this is a vital and essential part of the research, without which the project might steer too far away from what it is actually all about: music. While in New York, I will also try to take advantage of the vibrant and inexhaustible offering of live music this city has always had, which is a great source of inspiration for what I’m doing. I’ve already been to a few excellent concerts and performances of various kinds, and observing the behaviours of the performing musicians has already led to some ideas I want to try out in the coming days.

    I will try to write more posts about our progress if time allows, possibly including videos and pictures. Alright, now back to work.

    Read pt.2 here.

    This project is supported by Santander Universities. It is a collaboration between Federico Visi, who is currently carrying out his doctoral research at the Interdisciplinary Centre for Computer Music Research (ICCMR), Plymouth University (UK), under the supervision of Prof Eduardo Reck Miranda, and Andrew Telichan Phillips, who is currently carrying out his doctoral research at NYU under the supervision of Dr. Tae Hong Park.