Body movement constitutes an essential part of music performance. Although physical awareness in instrumental learning is mostly approached from technical and injury-prevention perspectives, research conducted over the past two decades has evidenced musicians' ability to connect and express themselves through their moving bodies. Gestural elements may support structural and expressive intentions (e.g., singers bend forward to execute loud passages), establish communication with the audience and co-performers (e.g., flutists and clarinettists nod rhythmically for coordination purposes), and illustrate musical characters and atmospheres (large, exposed gestures as opposed to contained, introverted ones) (Bishop & Goebl, 2018; Davidson, 2001, 2012). The fact that the biomechanical features of musical instruments restrict performers' ability to move freely leads to the development of motion cues specific to each instrument type, which brings us to the main questions of this research. How do saxophonists move while performing? What gestures form the core of the saxophonist's body language, and what intentions do they carry? Can we enhance the meaningfulness of performances by matching our body language with the musical result we practise for countless hours? In this paper we present the early stage of an ongoing PhD research project, involving multimodal data collection and processing of saxophone performances. A hybrid presentation is intended, combining theoretical discussion with video extracts of motion-capture sessions demonstrating several data-gathering procedures in the studio. Eleven saxophone players were invited to play five excerpts of standard saxophone repertoire (including works by Creston, Debussy, Ibert and Glazounov) in two different modes: immobile and with projected expressiveness.
Each individual session combined audio and video recording with motion capture from an optical passive system comprising ten infrared cameras, which allows precise movement tracking and mapping into computational 3D models. These models translate into featureless animations showing only the performer's movements (no visual appearance attached), from which parameters such as trajectory or amplitude may be analysed. An integrated analysis of synchronised motion and audio data will subsequently be conducted to compare interpretations of each excerpt in the two performative modalities and to understand how bodily movements relate to musical parameters. We expect this research to yield new insights into the development of an expressive gesture vocabulary for saxophone performance and pedagogy.
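To make the kind of kinematic parameters mentioned above concrete, the sketch below shows how trajectory length and movement amplitude might be computed from a single motion-capture marker's 3D coordinates. This is an illustrative example only, not the study's actual analysis pipeline: the marker placement, frame rate, and toy trajectory are all assumptions introduced for the demonstration.

```python
import numpy as np

# Hypothetical setup: one marker (e.g., on the saxophone bell) sampled as
# (x, y, z) positions in metres at an assumed frame rate.
FRAME_RATE = 120  # frames per second (assumed value, not from the study)

def trajectory_length(positions: np.ndarray) -> float:
    """Total path length: sum of frame-to-frame Euclidean displacements."""
    steps = np.diff(positions, axis=0)            # displacement between consecutive frames
    return float(np.linalg.norm(steps, axis=1).sum())

def movement_amplitude(positions: np.ndarray) -> float:
    """Amplitude: largest extent of the bounding box enclosing the trajectory."""
    extents = positions.max(axis=0) - positions.min(axis=0)
    return float(extents.max())

# Toy trajectory: a marker swaying sideways (about 20 cm peak-to-peak)
# while drifting slightly forward over one second of capture.
t = np.linspace(0, 2 * np.pi, FRAME_RATE)
positions = np.column_stack([0.1 * np.sin(t), 0.001 * t, np.zeros_like(t)])

print(f"path length: {trajectory_length(positions):.3f} m")
print(f"amplitude:   {movement_amplitude(positions):.3f} m")
```

Comparing such values between the "immobile" and "projected expressiveness" conditions, per marker and per excerpt, is one straightforward way to quantify the difference between the two performative modalities.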
12/07/21 – 17/07/21