I recently had the chance to play with a prototype version of the new XTH Sense. I met up with Marco Donnarumma and Balandino Di Donato at Integra Lab in Birmingham and we spent a couple of days experimenting with this interesting and yet unreleased device. It is a small, wireless, wearable unit that comprises a Mechanomyogram (MMG) sensor, which captures the sounds produced by muscular activity, and a 9DoF IMU, which returns motion features such as acceleration, angular velocity, and orientation.
I had already been working with 9DoF IMU data during my research collaboration at NYU Steinhardt in New York and for previous performances, so I knew what I could expect in that department. However, one of the main peculiarities of the XTH Sense is the MMG sensor. While in New York, I had worked with Thalmic Labs’ Myo, which employs Electromyogram (EMG) for muscle sensing. I won’t go too deep into the technical differences between MMG and EMG; suffice it to say that EMG senses the electrical impulses sent by the brain to cause muscle contraction, while MMG captures the sounds that your muscles produce during contraction and extension[ref]If you want to learn more, Marco covered these topics thoroughly in this article written with Baptiste Caramiaux and Atau Tanaka, plus here is another article that compares the two technologies from a biomedical point of view.[/ref]. In terms of expressive interaction, what I find interesting about the MMG sensor of the XTH Sense is the distinctive way it responds to movements and gestures. Unlike EMG, the control signals obtained from the XTH Sense peak at movement onsets and remain relatively low if you keep your muscles contracted. This is neither better nor worse than EMG; it’s simply different.
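To make that behaviour concrete, here is a minimal Python sketch of one way an onset-weighted control signal could be derived from a raw MMG stream. This is purely my own illustration, not the XTH Sense’s actual feature extraction; the function name, sample rate, and smoothing window are all assumptions.

```python
import numpy as np

def mmg_onset_feature(mmg, sr=1000, smooth_ms=50):
    """Turn a raw MMG stream into a control signal that peaks at
    movement onsets and stays low during a held contraction.
    Sketch only: rectify, smooth with a moving average, then keep
    just the positive rate of change of the envelope."""
    win = max(1, int(sr * smooth_ms / 1000))
    envelope = np.convolve(np.abs(mmg), np.ones(win) / win, mode="same")
    onset = np.maximum(np.diff(envelope, prepend=envelope[0]), 0.0)
    return onset / max(onset.max(), 1e-9)  # normalise to 0..1

# Synthetic test: loud muscle noise when a contraction begins at
# t = 0.5 s, then a much quieter sustained phase.
sr = 1000
t = np.arange(2 * sr) / sr
mmg = np.random.randn(2 * sr) * np.where((t > 0.5) & (t < 0.7), 1.0, 0.15)
feature = mmg_onset_feature(mmg, sr)
print(f"feature peaks at t = {t[feature.argmax()]:.2f} s")  # near the onset
```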
While adapting my code, I started noticing how the response of the XTH Sense made me interact differently with the machine learning and physical modelling patches I had previously built using the Myo. I guess that with a fair deal of signal processing I could make the two devices behave in a virtually identical way, but that would, in my opinion, be rather pointless. One of the exciting things about dealing with a new device is embracing its interface idiosyncrasies and exploring their expressive potential. As a simple example, in the physical modelling patch I built for the rain stick demo we filmed in Birmingham, the amount of excitation sent to the model depended on one of the MMG control features. Had I used EMG, I would have obtained a steady excitation signal by firmly squeezing the stick, whereas the response of the MMG required me to perform a more iterative gesture, like repeatedly tapping my fingers on the stick, if I wanted to obtain a louder sound. This somehow reminded me of the gestures involved in playing a wind instrument, and that idea influenced the whole interaction design I eventually implemented.
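One way to picture that kind of mapping is as a leaky integrator: each onset spike from the MMG tops up the model’s excitation energy, which then drains away on its own. The toy sketch below illustrates the idea; it is not my actual patch, and the function name and decay constant are hypothetical.

```python
import numpy as np

def excitation_energy(feature, decay=0.995):
    """Leaky integrator mapping an onset-weighted MMG feature to the
    excitation energy of a hypothetical physical model. A single spike
    dies away on its own; only repeated gestures, like tapping the
    stick, keep the energy (and hence the sound) up."""
    energy = 0.0
    out = np.empty_like(feature, dtype=float)
    for i, f in enumerate(feature):
        energy = energy * decay + f  # each tap adds energy, then it leaks
        out[i] = energy
    return out

# Three taps a quarter-second apart build the energy up, and it has
# almost vanished one second after the last tap.
sr = 1000
taps = np.zeros(2 * sr)
taps[[250, 500, 750]] = 1.0
out = excitation_energy(taps)
print(f"peak energy {out.max():.2f}, energy at the end {out[-1]:.4f}")
```

The point of the leaky decay is that rhythm replaces pressure: a single firm squeeze fades out, while repeated taps sustain the sound, which is exactly the wind-instrument-like feel described above.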
I will soon be back in New York for a workshop and a performance at Harvestworks on May 8th, where I’ll show some of the tools and methodologies I use in my research and practice, including those I experimented with while playing with the new XTH Sense for the first time. If you’re in the area and want to attend, register here; if you just want to know more about it, drop me a line.