Arshia Cont

Date: Tuesday, March 10, 2015
Speaker: Arshia Cont
Venue: Salzburg

Time: 14:30. Google Hangouts on Air link:

A system capable of undertaking automatic musical accompaniment with human musicians should, at a minimum, be able to listen in real time to incoming music signals from the human musicians and to synchronize its own actions in real time with theirs according to a music score. To this, one must add the following requirements to assure correctness: fault tolerance to human or machine listening errors, and best-effort (in contrast to optimal) strategies for synchronizing heterogeneous flows of information.

Our approach in Antescofo consists of a tight coupling of real-time machine listening with reactive and timed-synchronous systems. The machine listening in Antescofo is in charge of encoding the dynamics of the outside environment (i.e. the musicians) in terms of incoming events, tempo, and other parameters extracted from the incoming polyphonic audio signal, whereas the synchronous timed and reactive component is in charge of assuring the correctness of the generated accompaniment. The novelty of the Antescofo approach lies in its focus on time as a semantic property tied to correctness rather than as a performance metric. Creating automatic accompaniment out of symbolic (MIDI) or audio data follows the same procedure, with explicit attributes in the language for synchronization and fault-tolerance strategies that might vary between different styles of music.

In this sense, Antescofo is a cyber-physical system featuring a tight integration of, and coordination between, heterogeneous systems with human musicians in the loop of computing. We will present current research problems in this setting and showcase some challenges around the embedding of automatic accompaniment procedures.
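To make the coupling concrete, the following is a deliberately minimal Python sketch of the two ingredients named above: a follower that tracks position in a symbolic score from detected events, estimates tempo from elapsed beats versus wall-clock time, and tolerates missed or spurious detections by searching a small window ahead (best-effort rather than optimal). This is an illustration only; it is not Antescofo's actual listening algorithm, language, or API, and all names (`ToyFollower`, `on_event`, the `lookahead` parameter) are invented for this sketch.

```python
class ToyFollower:
    """Toy score follower: illustration only, not Antescofo's algorithm."""

    def __init__(self, score):
        # score: list of (pitch, beat_position) pairs, in order.
        self.score = score
        self.pos = -1            # index of the last matched score event
        self.tempo_bpm = 120.0   # running tempo estimate
        self.last_time = None    # wall-clock time of the last match

    def on_event(self, pitch, now, lookahead=3):
        """Match a detected pitch against upcoming score events.

        Fault tolerance, best-effort style: search a small window ahead
        so a missed detection does not stall the follower; input that
        matches nothing in the window is ignored as a spurious
        detection. Returns the matched score index, or None."""
        stop = min(self.pos + 1 + lookahead, len(self.score))
        for i in range(self.pos + 1, stop):
            if self.score[i][0] == pitch:
                self._update_tempo(i, now)
                self.pos = i
                self.last_time = now
                return i
        return None

    def _update_tempo(self, i, now):
        # Tempo from elapsed beats vs. elapsed wall-clock seconds,
        # smoothed to avoid jumps (a crude stand-in for Antescofo's
        # coupled tempo inference).
        if self.last_time is not None and self.pos >= 0:
            beats = self.score[i][1] - self.score[self.pos][1]
            seconds = now - self.last_time
            if beats > 0 and seconds > 0:
                instant = 60.0 * beats / seconds
                self.tempo_bpm = 0.7 * self.tempo_bpm + 0.3 * instant


# MIDI pitches with beat positions; the performer skips the second note.
f = ToyFollower([(60, 0), (62, 1), (64, 2), (65, 3)])
f.on_event(60, 0.0)
f.on_event(64, 1.0)   # matches index 2: the missed 62 is skipped over
print(f.pos, round(f.tempo_bpm))  # → 2 120
```

In Antescofo itself, the tempo estimate would drive the timing of scheduled accompaniment actions, and the synchronization and error-handling behavior is declared per score element in the language rather than hard-coded as here.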

Posted in RiSE Seminar