Back in Berlin after a few days of work down in the R1 Reaktorhallen at KTH Royal Institute of Technology for a unique opera piece: The Tale of the Great Computing Machine, a project led by Åsa Unander-Scharin and Carl Unander-Scharin. I took care of designing the gestural interactions with the Skandia pipe organ inside R1, the interactions with a set of speakers mounted on motorised winches (which we call “the Suspended Choir”), as well as the interactions between the organ and the robots that will perform alongside humans. It’s all going to be live, and several other talented collaborators are taking care of live visuals, lights, sound, and more. We had to network quite a few computers to make everything work in such a big and unique space.
The opera is based on the novel “The Tale of the Big Computer” written by Olof Johannesson in the 1960s. The book describes the rise of an intelligent network of computers and its relationship with humans. Olof Johannesson is actually a pseudonym of Hannes Alfvén, a physicist who would win the Nobel Prize for his work on magnetohydrodynamics just a few years after the book was published.
Premiere on the first of December, and apparently many shows are already sold out!
Hyperorgan interactions, suspended choirs, and The Tale of the Great Computing Machine
Talk at the “Mapping Social Interaction through Sound” symposium, Humboldt University, Berlin
I was invited to participate in the Mapping Social Interaction through Sound symposium on 27-28 November 2020. The symposium is organised by Humboldt University, Berlin, and – as is customary these days – will take place on Zoom.
This is the abstract of my talk.
Building and exploring multimodal musical corpora:
from data collection to interaction design using machine learning

Musical performance is a multimodal experience, for performers and listeners alike. A multimodal representation of a piece of music can contain several synchronized layers, such as audio, symbolic representations (e.g. a score), videos of the performance, physiological and motion data describing the performers’ movements, as well as semantic labelling and annotations describing expressivity and other high-level qualities of the music. This delineates a scenario where computational music analysis can harness cross-modal processing and multimodal fusion methods to shift the focus toward the relationships that tie together different modalities, thereby revealing the links between low-level features and high-level expressive qualities.
I will present two concurrent projects focussed on harnessing musical corpora for analysing expressive instrumental music performance and designing musical interactions. The first project is centered on a data collection method – currently being developed by the GEMM research cluster at the School of Music in Piteå – aimed at bridging the gap between qualitative and quantitative approaches. The purpose of this method is to build a data corpus containing multimodal measurements linked to high-level subjective observations. By applying stimulated recall (a common qualitative research method in education, medicine, and psychotherapy), the embodied knowledge of music professionals is systematically included in the analytic framework. Qualitative analysis through stimulated recall is an efficient method for generating higher-level understandings of musical performance. Initial results suggest that this process is pivotal in building our multimodal corpus, providing insights that would be unattainable using quantitative data alone.
The second project – a joint effort with the Computing Department at Goldsmiths, University of London – consists of a sonic interaction design approach that makes use of deep reinforcement learning to explore many mapping possibilities between large sound corpora and motion sensor data. The design approach adopted is inspired by the ideas established by the interactive machine learning paradigm, as well as by the use of artificial agents in computer music for exploring complex parameter spaces. We refer to this interaction design approach as Assisted Interactive Machine Learning (AIML). While playing with a large corpus of sounds through gestural interaction by means of a motion sensor, the user can give feedback to an artificial agent about the gesture-sound mappings proposed by the latter. This iterative process results in an interactive exploration of the corpus, as well as in a way of creating and refining gesture-sound mappings.
These projects are representative of how the development of methods for combining qualitative and quantitative data, in conjunction with the use of computational techniques such as machine learning, can be instrumental in the design of complex mappings between body movement and musical sound, and contribute to the study of the multiple facets of embodied music performance.

Further reading
Visi, F. G., Östersjö, S., Ek, R., & Röijezon, U. (2020). Method development for multimodal data corpus analysis of expressive instrumental music performance. Frontiers in Psychology, 11(576751). doi: 10.3389/fpsyg.2020.576751
Download PDF (pre-print)

Visi, F. G., & Tanaka, A. (2021). Interactive Machine Learning of Musical Gesture. In E. R. Miranda (Ed.), Handbook of Artificial Intelligence for Music: Foundations, Advanced Approaches, and Developments for Creativity. Springer Nature, forthcoming.
View on arXiv.org
Download PDF (pre-print)

Visi, F. G., & Tanaka, A. (2020). Towards Assisted Interactive Machine Learning: Exploring Gesture-Sound Mappings Using Reinforcement Learning. In ICLI 2020 – the Fifth International Conference on Live Interfaces.
Download PDF

Presentation slides
Download PDF

Physically Distant #3: the network, the pandemic, and telematic performance
PD#3 will be part of Ecology, Site And Place – Piteå Performing Arts Biennial 2020. Participation in the conference is free, but registration is compulsory. Register by sending an email to piteabiennial@gmail.com
After the two previous editions in June and July, the third edition of the Physically Distant talks will take place on 26 and 27 October 2020. The talks are going to be part of the online event of Ecology, Site and Place – Piteå Performing Arts Biennial.
The format will be different this time, as there are going to be more telematic performances and the talks will be structured in three panels. Each panel member is invited to prepare a 3-minute provocation/reflection related to the topic. This collection of provocations from the panelists will set the tone for an open discussion in the style of the previous Physically Distant talks. As in the previous editions of the talks, Stefan Östersjö and I will be moderating the discussion.
Programme (all times are CET)
Monday, 26 October 2020
17:30 Introduction. Stefan Östersjö and Federico Visi
17:40 Simon Waters and Paul Stapleton: Musicking online: your technical problem is actually a social problem. A performative conversation.

18:00-19:00 Panel I. Instrumentality in Networked Performance
Panelists: Nela Brown, Nicholas Brown, Juan Parra Cancino, Franziska Schroeder, Henrik von Coler.

19:00-19:45 Telematic Performance: A concert hall organ in the network.
Live-streaming from Studio Acusticum. Telematic performances with the University Organ remotely controlled from several locations.
Robert Ek, clarinet, performing in Piteå (SE)
Mattias Petersson, live-coding, performing in Piteå (SE)
Federico Visi, electronics, electric guitar, performing in Berlin (DE)
Scott Wilson, live coding, performing in Birmingham (UK)
Stefan Östersjö, electric guitar, performing in Stockholm (SE)

19:45-20:00 Break
20:00-21:00 Panel II. Network ecology: Communities of practice for the digital arts
Panelists: Shelly Knotts, Thor Magnusson, Mattias Petersson, Rebekah Wilson, Scott Wilson.

Tuesday, 27 October 2020
17:45-18:00 Marcin Pączkowski: Rehearsing music online: possibilities and limitations
18:00-19:00 Panel III. The network as place
Panelists: Ximena Alarcón Díaz, David Brynjar Franzson/Angela Rawlings/Halla Steinunn Stefánsdóttir, Chicks on Speed (Melissa Logan, Alex Murray-Leslie), Maja Jantar, Marcin Pączkowski, Roger Mills, Luca Turchet.

19:00-19:30 Telematic Performance: iða
David Brynjar Franzson, technical concept and streaming (US)
Maja Jantar, performer and composer of visual score (BE)
Angela Rawlings, performer and composer of visual score (IS/CA)
Halla Steinunn Stefánsdóttir, performer and composer of visual score (SE)

19:30-20:00 Break
20:00-21:00 Where do we go from here? (plenary discussion)
For more details on the Ecology, Site And Place – Piteå Performing Arts Biennial 2020 online event, download the book of abstracts.
Physically Distant #2: more online talks on telematic performance
Tuesday 28 July 2020, 14:00 – 19:00 CEST
Anyone can join upon registration using this online form: https://forms.gle/zzLV46NbvgqAtJ7t7

Performing live with physically distant co-performers and audiences through audio, video, and other media shared via telematic means has been part of the work of artists and researchers for several decades. Recently, the restrictions put in place to cope with the COVID-19 pandemic have required performing artists to find solutions to practice their craft while maintaining physical distance between themselves, their collaborators, and their audience. In this second edition of Physically Distant, we wish to continue discussing telematic performance from perspectives suggested by the following questions:
What are the opportunities and challenges of telematic performance?
What are the implications for how performing arts are conceived, developed, and experienced?
How are research and practice being reconfigured?
How is telematic performance suggesting different understandings of the role of instruments, gesture and acoustic spaces?
How might telematic performance contribute to reconfiguring our understanding of music in societal and political perspectives?

We wish to highlight two threads from the previous discussions: first, how telematic performance can be conceived of as protest, and second, the potential for telematic performance to expand the artistic and social potential of intercultural arts. Both of these threads imply a discussion of accessibility.
Once again, the GEMM))) Gesture Embodiment and Machines in Music research cluster at the School of Music in Piteå, Luleå University of Technology has invited a group of artists, researchers, and scholars to instigate an open, interdisciplinary discussion on these themes. The talks will happen online, on Tuesday 28 July 2020.
The sessions will be organised in 1-hour time slots. Each slot will include two 15-min presentations; the remaining time will be dedicated to questions and discussion.

We are very happy to host a telematic performance by the Female Laptop Orchestra (FLO). The practice of this group is discussed in the talks by Franziska Schroeder and Nela Brown. A presentation of the conceptual backdrop for the performance can be found below.
The structure of the event includes short breaks in between the sessions in order to avoid Zoom fatigue and allow for informal chats and continued discussion over a drink (not provided by the organisers). There will be a plenary at the end of the day, during which we will be discussing issues and opportunities that have emerged during the other sessions.
28 July 2020 schedule (all times are CEST):
- 14:00 Session 0: Introduction, results of the survey that followed Physically Distant #1
- 14:30 Session 1: Ximena Alarcón, Franziska Schroeder
- 15:30-15:50 — Performance by FLO (Female Laptop Orchestra) —
- 15:50-16:00 — 10-min Break —
- 16:00 Session 2: Nela Brown, Rebekah Wilson
- 17:00 — 30-min Break —
- 17:30 Session 3: OvO, Kaffe Matthews
- 18:30 Session 4: Plenary
- 19:00 — END —
Moderators / instigators: Federico Visi, Stefan Östersjö
Anyone can join upon registration using this online form: https://forms.gle/zzLV46NbvgqAtJ7t7
We will send you a link to join a Zoom meeting on the day of the talks.
NOTE: the talks will be recorded.

A follow-up event is planned for the 2020 Piteå Performing Arts Biennial, taking place online on 26-27 October 2020.
Further info: mail@federicovisi.com
Telematic performance by the Female Laptop Orchestra (FLO)
Absurdity (concept by Franziska Schroeder and Matilde Meireles)
A distributed performance using LiveSHOUT with members from the Female Laptop Orchestra (FLO).

“Absurdity” is based around a short excerpt from one of Portugal’s most mysterious, elusive and peculiar writers, Fernando Pessoa. Pessoa’s multiplicities and his ways of thinking about life – engendering ideas that can feel manic-depressive and filled with buckets of self-pity, while scratching the innermost parts of one’s soul – lie at the heart of this distributed performance.
Members of FLO will stream sounds from several distributed places, including Crete, Italy, Brazil and the UK, while Franziska and Matilde will be delivering fragmented excerpts (in both English and Portuguese) alongside the LiveSHOUT streams. The idea of distributed creativity, where we combine sounds from several sites, inspired by Pessoa’s plurality of thoughts and philosophies; his multiplicities, his fictionality and his self alienation, will lead to a performance that aims to be absurd, dispersed, fragmented and multiple.
“I’ve always belonged to what isn’t where I am and to what I could never be”. (Pessoa In: Ciuraru, 2012).
The FLO performers are:
Franziska Schroeder – LiveSHOUT and Pessoa reading (English)
Matilde Meireles – LiveSHOUT and Pessoa reading (Portuguese)
Maria Mannone – LiveSHOUT streams of piano improv from Palermo
Maria Papadomanolaki – LiveSHOUT streams of sounds from Crete
Anna Xambó – LiveSHOUT streams of sounds from Sheffield
Nela Brown – LiveSHOUT streams of sounds from London
Ariane Stolfi – LiveSHOUT streams of sounds from Porto Seguro and playsound.space

Female Laptop Orchestra (FLO), a music research project established in 2014 by Nela Brown, connects female musicians, sound artists, composers, engineers and computer scientists globally through co-located and distributed collaborative music creation. Each FLO performance is site-specific and performer-dependent, mixing location-based field recordings, live coding, acoustic instruments, voice, sound synthesis and real-time sound processing using Web Audio APIs and VR environments with audio streams arriving from different global locations (via the internet and mobile networks). From stereo to immersive 3D audio (and everything in between), FLO is pushing the boundaries of technology and experimentation within the context of ensemble improvisation and telematic collaboration.
Female Laptop Orchestra: https://femalelaptoporchestra.wordpress.com/
LiveSHOUT: http://www.socasites.qub.ac.uk/distributedlistening/liveSHOUT/
Locus Sonus soundmap: https://locusonus.org/soundmap/051/
Presenters’ Bios:
Ximena Alarcón Díaz is a sound artist researcher interested in listening to in-between spaces: dreams, underground public transport, and the migratory context. She creates telematic sonic improvisations using Deep Listening, and interfaces for relational listening. She has a PhD in Music, Technology and Innovation from De Montfort University (2007), and is a Deep Listening® certified tutor. Her project INTIMAL is an “embodied” physical-virtual system for relational listening in telematic sonic performance (RITMO-UiO, 2017-2019, Marie Skłodowska Curie Individual Fellowship). She is currently a Senior Tutor in the online Deep Listening certification program offered by the Center for Deep Listening (RPI), and works independently in the second phase of the INTIMAL project that involves: an “embodied” physical-virtual system to explore sense of place and presence across distant locations; and a co-creation laboratory for listening to migrations with Latin American migrant women.
http://ximenaalarcon.net

Franziska Schroeder is an improviser and Reader based at the Sonic Arts Research Centre, Queen’s University Belfast, where she mainly teaches performance and improvisation.
In 2007 she was the first AHRC Research Fellow in the Creative/Performing Arts to be awarded a 3-year grant to carry out research into virtual/network performance environments. Her writings on distributed creativity have been published by Routledge, Cambridge Scholars, and Leonardo. In 2016 she co-developed the distributed listening app LiveSHOUT.
Within her research group “Performance without Barriers”, which she founded in 2016, Franziska currently designs VR instruments with and for disabled musicians.
https://pure.qub.ac.uk/en/persons/franziska-schroeder

Rebekah Wilson is an independent researcher, technologist and composer. Originating from New Zealand, she studied instrumental and electroacoustic music composition and taught herself computer technology. In the early 2000s she held the role of artistic co-director at STEIM, Amsterdam, where her passions for music, performance and technology became fused. Since 2005 she has been co-founder and technology director of Chicago’s Source Elements, developing services that exploit the possibilities of networked sound and data for the digital sound industry, while continuing to perform and lecture internationally. She holds a master’s degree in the field of networked music performance; her current research on the topic can be found on the Latency Native forum.
https://forum.latencynative.com

Nela Brown is an award-winning Croatian sound artist, technologist, researcher and lecturer living in London, UK. She studied jazz and music production at Goldsmiths, University of London, followed by a BA (Hons) in Sonic Arts at Middlesex University London. Since graduating in 2007, she has worked as a freelance composer and sound designer on award-winning international projects including theatre performances, dance, mobile, film, documentaries and interactive installations. In 2014, she started the Female Laptop Orchestra (FLO). In 2019, as part of the prestigious Macgeorge Fellowship Award, she was invited to join the Faculty of Fine Arts & Music at the University of Melbourne, Australia, to deliver talks and workshops about collaborative music-making, laptop orchestras and hack culture, as well as a number of performances with FLO. She is currently doing a PhD in Human-Computer Interaction and lecturing at the University of Greenwich in London.
http://www.nelabrown.com/

Italian noise-rock duo OvO has been at the center of the worldwide post-rock, industrial-sludge, and avant-doom scenes for nearly two decades. Their “always-on-tour” mentality, coupled with a DIY ethic, fearless vision, and pulverizing live shows, has made them the Jucifer of Europe: impossible to categorize, but always there, appearing in your hometown like a ghostly omnipresence. OvO’s fiercely independent ethos and grinding live schedule have earned the band a significant worldwide fanbase that has come to expect nothing but the most daring and innovative dark music presentations.
OvO were on the road for their 20th anniversary European tour when the COVID-19 pandemic hit the continent. The band was forced to cancel the remaining gigs of the tour and drive back to their home country, which was suffering one of the worst health emergencies of its recent history. In the midst of the lockdown, they performed live on the stage of the Bronson club in Ravenna, Italy, and professionally live-streamed the entire concert on DICE.fm
http://ovolive.blogspot.com

Kaffe Matthews is a pioneering music maker who works live with space, data, things, and place to make new electroacoustic compositions. The physical experience of music for the maker and listener has always been central to her approach, and to this end she has invented some unique interfaces – the sonic armchair, the sonic bed and the sonic bike – that not only enable new approaches to composition for makers but give immediate ways in to unfamiliar sound and music for wide-ranging audiences.
Kaffe has also established the collectives Music for Bodies (2006) and The Bicrophonic Research Institute (2014) where ideas and techniques are developed within a pool of coders and artists using shared and open source approaches, publishing all outcomes online.
During COVID times, Kaffe has produced new music by collaborating with other music makers through streaming platforms and has hosted live-streaming parties in her apartment in Berlin.
https://www.kaffematthews.net

Tuesday 28 July 2020, 14:00 – 19:00 CEST
Anyone can join upon registration using this online form: https://forms.gle/zzLV46NbvgqAtJ7t7

Physically distant: online talks on telematic performance
Wednesday 3 June 2020, 13:30 – 21:00 CEST
Performing live with physically distant co-performers and audiences through audio, video, and other media shared via telematic means has been part of the work of artists and researchers for several decades. Recently, the restrictions put in place to cope with the COVID-19 pandemic required performing artists to find solutions to practice their craft while maintaining physical distance between themselves, their collaborators, and their audience. This scenario brought many questions related to telematic performance to the fore: what are the opportunities and challenges of telematic performance? What are the implications for how performing arts are conceived, developed, and experienced? How are research and practice being reconfigured? How is telematic performance suggesting different understandings of the role of instruments, gesture and acoustic spaces? How might telematic performance contribute to reconfiguring our understanding of music in societal and political perspectives?
The GEMM))) Gesture Embodiment and Machines in Music research cluster at the School of Music in Piteå, Luleå University of Technology has invited a group of artists, researchers, and scholars to instigate an open, interdisciplinary discussion on these themes. The talks will happen online, on Wednesday 3 June 2020.
The sessions will be organised in 1-hour time slots. Each slot will include two 15-min presentations; the remaining time will be dedicated to questions and discussion. After each slot, there is going to be a 30-min break in order to avoid “Zoom fatigue.” There will be a plenary at the end of the day, during which we will be discussing issues and opportunities that have emerged during the other sessions.
Schedule (all times are CEST):
13:30 – 14:00 Session 0: Welcome and introduction
14:00 – 15:00 Session 1: Roger Mills; Shelly Knotts
15:00 – 15:30 Break
15:30 – 16:30 Session 2: Gamut inc./Aggregate; Randall Harlow
16:30 – 17:00 Break
17:00 – 18:00 Session 3: Alex Murray-Leslie; Atau Tanaka
18:00 – 19:00 Dinner break (1 hr)
19:00 – 20:00 Session 4: Chris Chafe; Henrik von Coler
20:00 – 21:00 Plenary

Moderators: Federico Visi, Stefan Östersjö
Anyone can join upon registration using this online form: https://forms.gle/1goB2TcjGKjL6nkT8
We will send you a link to join a Zoom meeting on the day of the talks.
NOTE: the talks will be recorded.

An additional networked performance curated by GEMM))) is taking place on Tuesday 2 June, followed by a short seminar and discussion. Everyone is welcome to join this event as well; we will circulate details to the registered email addresses and via social media.
Programme (all times are CEST):
14:00 – 14:15 networked performance with the Acusticum Organ: Robert Ek, Mattias Petersson, Stefan Östersjö
14:15 – 14:30 Vong Co: networked performance with The Six Tones: Henrik Frisk, Stefan Östersjö & Nguyen Thanh Thuy
14:40 – 15:00 Paragraph – a live coding front end for SuperCollider patterns: Mattias Petersson
15:00 – 15:20 Discussion

A follow-up event is planned for the 2020 Piteå Performing Arts Biennial, taking place online on 26-27 October 2020.
Further info: write me.
Towards Assisted Interactive Machine Learning
In a sentence: Assisted Interactive Machine Learning (AIML) is an interaction design method based on deep reinforcement learning that I started developing for the purpose of exploring the vast space of possible mappings between gesture and sound synthesis.
I am presenting a research paper and a live multimedia performance on AIML at ICLI 2020 – the fifth International Conference on Live Interfaces taking place at the Norwegian University of Science and Technology in Trondheim, Norway.
The paper (PDF)
We present a sonic interaction design approach that makes use of deep reinforcement learning to explore many mapping possibilities between input sensor data streams and sound synthesis parameters. The user can give feedback to an artificial agent about the mappings proposed by the latter while playing the synthesiser and trying the new mappings on the fly. The design approach we adopted is inspired by the ideas established by the interactive machine learning paradigm, as well as by the use of artificial agents in computer music for exploring complex parameter spaces.
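To make the propose-feedback loop concrete, here is a deliberately minimal sketch of the interaction cycle the abstract describes. It is emphatically not the system from the paper: the mapping is a toy linear layer, the “agent” is a simple stochastic search nudged by user ratings rather than a deep reinforcement learning agent, and all names (propose_mapping, apply_mapping) are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
N_SENSORS, N_PARAMS = 8, 4  # e.g. sensor features in, synth parameters out

def propose_mapping(mean, std):
    # Sample a candidate sensor-to-synth mapping from the agent's
    # current search distribution.
    return mean + std * rng.standard_normal(mean.shape)

def apply_mapping(W, sensor_frame):
    # Map one frame of sensor data to synthesis parameters in [0, 1].
    return 1.0 / (1.0 + np.exp(-(sensor_frame @ W)))

mean = np.zeros((N_SENSORS, N_PARAMS))  # agent's belief about good mappings
std = 1.0

for step in range(10):
    W = propose_mapping(mean, std)
    # ...the performer would now play the synth using mapping W...
    demo_frame = rng.standard_normal(N_SENSORS)  # stand-in for live sensors
    print(f"step {step}: params ->", np.round(apply_mapping(W, demo_frame), 2))
    reward = float(input("Rate this mapping (-1, 0, or 1): "))
    mean += 0.3 * reward * (W - mean)  # pull the search toward liked mappings
    std *= 0.95                        # narrow exploration as feedback accrues
```

The actual AIML system operates on live sensor streams and a real synthesiser, but the shape of the loop is the same: the agent proposes, the user plays and rates, and the proposals drift toward mappings the user prefers.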
About the performance (PDF)
“My phone beeps. A notification on the home screen says “You have a new memory”. It happens at times, unsupervised learning algorithms scan your photos and videos, look at their features and metadata, and then you get a nice slideshow of that trip to South America, or those shows you went to while you were in Hamburg or London. There is something ridiculous about this (the music they put on the slideshows, for example) as well as something eerie, something even slightly distressing perhaps.”
“You Have a New Memory” (2020) makes use of the AIML interaction paradigm to navigate a vast corpus of audio material harvested from messaging applications, videos, and audio journals recorded on the author’s mobile phone. This corpus of sonic memories is then organised using audio descriptors and navigated with the aid of an artificial agent and reinforcement learning.
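As a rough illustration of what “organised using audio descriptors” can mean in practice, here is a sketch that indexes a corpus by a few descriptors and retrieves the nearest sounds to a query point. The tool choice (librosa), the file names, and the descriptor set are my assumptions for illustration, not details of the piece.

```python
import numpy as np
import librosa  # descriptor extraction; the tool choice is illustrative

def describe(path):
    # Summarise one audio file as a small descriptor vector.
    y, sr = librosa.load(path, mono=True)
    return np.array([
        librosa.feature.spectral_centroid(y=y, sr=sr).mean(),  # brightness
        librosa.feature.spectral_flatness(y=y).mean(),         # noisiness
        librosa.feature.rms(y=y).mean(),                       # loudness
    ])

# Hypothetical corpus of "sonic memories", one descriptor vector each.
paths = ["memo_001.wav", "memo_002.wav", "memo_003.wav"]
corpus = np.stack([describe(p) for p in paths])
# Standardise descriptors so no single one dominates the distances.
corpus = (corpus - corpus.mean(axis=0)) / (corpus.std(axis=0) + 1e-9)

def nearest(query, k=1):
    # Navigate the corpus: indices of the k sounds closest to a query
    # point in descriptor space (e.g. a point proposed by an agent).
    return np.argsort(np.linalg.norm(corpus - query, axis=1))[:k]

print(nearest(np.zeros(3), k=2))  # the two most "average" sounds
```

In the piece, an artificial agent proposes positions in this kind of descriptor space and the performer’s feedback steers subsequent proposals.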
The title of the piece – “You Have a New Memory” – refers to the notifications that a popular photo library application occasionally sends to mobile devices, prompting users to check an algorithmically generated photo gallery that collects images and videos related to a particular event or series of events in their lives.

I started developing these concepts in Summer 2019 in Berlin, after a few informal meetings with Atau Tanaka, then Edgard-Varèse guest professor at TU Berlin. Development took place during a 1-month postdoc at Goldsmiths, University of London, in September 2019, and continued with Stefan Östersjö and the GEMM))) Gesture Embodiment and Machines in Music research cluster at the School of Music in Piteå, Luleå University of Technology, Sweden.
Paper presentation at ICLI2020, Trondheim, Norway:
NIME 2019 Music Proceedings
As one of the NIME 2019 Music co-chairs, I promoted the establishment of Music Proceedings:
Since NIME began nearly two decades ago, this is the first event where composers and creators of the music pieces in the concert programme have been invited to publish an extended abstract of their work. These documents, describing the aesthetic and technical characteristics of the music pieces, are collected here, in the Music Proceedings.
We believe Music Proceedings are an important step towards a consistent and richer means of documenting the performances taking place at NIME. This will be a useful resource for researchers, and provides an alternative voice for contributors to speak about their artistic practice in NIME research.
Download the PDF here.
SloMo study #2
This piece was performed at NIME 2018 (both at Virginia Tech’s Moss Arts Center and at the NIME performance night organised by the University of Virginia in Charlottesville) and at MOCO 2018, held at InfoMus – Casa Paganini.
I composed SloMo study #2 to explore the use of slow and microscopic body movements in electronic music performance, and the role of rhythmic visual cues and breathing in the perception of movement and time. To do so, I used wearable sensors (the EMG sensors and IMUs found in Myo armbands), variable-frequency stroboscopic lights, an electronic stethoscope, and a body-worn camera for face tracking.
Here is a short video excerpt that I used to accompany my NIME and MOCO submissions. Unfortunately the effects of slowly changing the frequency of the strobes cannot be captured in videos with standard frame rates.
Speaking of NIME, I’m going to be a Music co-chair for NIME 2019 and I’m really looking forward to seeing what NIME artists have come up with this year.
New modosc objects for EMG & MoCap processing in Max
During November and December 2018, I had the opportunity to spend 5 weeks as a visiting researcher at RITMO Centre for Interdisciplinary Studies in Rhythm, Time and Motion, an amazing centre of excellence recently inaugurated at the University of Oslo. In mid-November, at the beginning of my stay, Luke Dahl and I presented modosc, our Max library for real-time motion capture analysis, to the attendees of the RITMO International Motion Capture Workshop. The library is the result of a collaboration between Luke and myself, and has been presented at various conferences in 2018, including NIME (paper) and MOCO (paper).
While in Oslo, I had the chance to spend time in the RITMO Motion Capture lab and use their Delsys Trigno wireless EMG system synchronised with their Qualisys cameras. With that gear, I coded three new modosc objects for real-time processing of EMG signals synchronised with MoCap:
- mo.qtmSig: binds data from QTM analog boards to new signal addresses in the modosc namespace (under /modosc/signals);
- mo.zcr: calculates the zero crossing rate of a signal (useful feature for classification tasks);
- mo.tkeo: calculates the Teager-Kaiser energy-tracking operator (TKEO) of a signal (useful for onset detection and other things; to learn more, check out Eivind Kvedalen’s PhD thesis: http://folk.uio.no/eivindkv/ek-thesis-2003-05-12-final-2.pdf). I got the idea of implementing this feature from Geert Roks, a student at Goldsmiths, University of London, currently collaborating with Prof Atau Tanaka. A rough sketch of what mo.zcr and mo.tkeo compute is shown below.
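The modosc objects themselves are Max abstractions, but the two features are easy to state outside Max. Here is an offline Python sketch of what mo.zcr and mo.tkeo compute, using the standard textbook definitions (the real objects process live signals frame by frame):

```python
import numpy as np

def zcr(x):
    # Zero-crossing rate: fraction of adjacent sample pairs whose sign
    # differs; higher values indicate noisier/higher-frequency content.
    s = np.sign(x)
    return np.mean(s[:-1] != s[1:])

def tkeo(x):
    # Teager-Kaiser energy operator: psi[n] = x[n]^2 - x[n-1] * x[n+1].
    # Tracks instantaneous signal energy; handy for EMG onset detection.
    return x[1:-1] ** 2 - x[:-2] * x[2:]

# Toy example: noise that switches on halfway, vaguely like an EMG burst.
t = np.linspace(0.0, 1.0, 1000)
x = np.random.default_rng(1).standard_normal(1000) * (t > 0.5)
print("ZCR:", zcr(x), "peak TKEO:", tkeo(x).max())
```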
Here are some video tutorials to get you started with modosc.
modosc: Mocap & Max video tutorials
These are some introductory video tutorials about processing motion capture data in real time in Max using the modosc library.
Modosc is a set of Max abstractions designed for computing motion descriptors from raw motion capture data in real time.