
meet the musical future!



Remember the wonderful liner notes on the back of Gould's album of the Liszt piano transcription of the Beethoven symphony? The Dakota psychiatrist accused GG of megalomania for wanting to be an entire symphony orchestra. And the Socialist reviewer criticized him for stealing the bread from the mouths of 60 musicians and their families.
 
On "Saturday Night Live," the talentless lounge singer Bill Murray used to point to his cheesy, annoying little percussion machine and ask the audience to give a big round of applause to "the Univox 4000."
 
We all laughed that a box might ever replace a human musician (even a drummer, a stretch both for "human" and "musician").
 
Let laughter cease. I present, without further comment, The Future:
 
==================
 
Amherst College
(Amherst, Massachusetts USA)
Mathematics and Computer Science Colloquium
 
Professor Chris Raphael
University of Massachusetts, Amherst [USA]
 
Music Plus One
 
I discuss my ongoing work in creating a computer system that plays the role of a sensitive musical accompanist in a non-improvisatory composition for soloist and accompaniment.
 
An accompanist must synthesize a number of different sources of information. First of all, the accompanist must perform a real-time analysis of the soloist's acoustic signal, enabling the accompanist to "hear" the soloist. The accompanist must also understand the basic template for musical performance that is described in the musical score (notes, rhythms, etc.), thereby allowing the system to "sight-read" (perform with no training) credibly. However, the accompanist must also be able to improve over successive rehearsals, much as live musicians do; thus the accompanist must be capable of learning from training data.
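
To make the second requirement concrete, here is a minimal Python sketch of the kind of score "template" the system would sight-read from; the class and field names are invented for illustration and are not taken from Music Plus One.

    # Illustrative sketch only; names and values are invented, not from Music Plus One.
    from dataclasses import dataclass

    @dataclass
    class ScoreNote:
        pitch: int             # MIDI pitch number
        onset_beats: float     # nominal onset position in the score, in beats
        duration_beats: float  # nominal duration, in beats

    # A hypothetical opening of a solo line; "sight-reading" means playing
    # from nothing more than this nominal timing information.
    solo_score = [
        ScoreNote(pitch=69, onset_beats=0.0, duration_beats=1.0),  # A4
        ScoreNote(pitch=71, onset_beats=1.0, duration_beats=0.5),  # B4
        ScoreNote(pitch=72, onset_beats=1.5, duration_beats=1.5),  # C5
        ScoreNote(pitch=74, onset_beats=3.0, duration_beats=3.0),  # D5
    ]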
 
I present a probabilistic model -- a Bayesian Belief Network -- that represents these disparate knowledge sources in a coherent framework. Nodes in the network represent observable variables, such as estimated note onset times, and unobservable variables, such as local tempo and rhythmic stress. The connectivity of the graph expresses various conditional independence assumptions which are key to making the computations feasible in real time.
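
As a rough illustration of how such a network can be shaped, here is a small Python sketch of a chain-structured generative model under assumptions of my own (Gaussian noise, one local-tempo variable per note); it is not claimed to be Raphael's actual model. Each variable depends only on its immediate predecessors, which is the kind of conditional independence that keeps real-time inference tractable.

    # Illustrative sketch only; the model structure and numbers are assumptions.
    import random

    def simulate_onsets(note_lengths_beats, start_tempo=0.5,
                        tempo_sigma=0.01, onset_sigma=0.02, seed=0):
        # tempo is seconds per beat; it drifts as a random walk (local tempo),
        # and each observed onset gets independent timing noise (rhythmic stress).
        rng = random.Random(seed)
        tempo = start_tempo
        onset = 0.0
        observed = []
        for length in note_lengths_beats:
            observed.append(round(onset + rng.gauss(0.0, onset_sigma), 3))
            onset += length * tempo               # next latent onset time
            tempo += rng.gauss(0.0, tempo_sigma)  # tempo random walk
        return observed

    print(simulate_onsets([1.0, 0.5, 1.5, 3.0]))

Because the dependencies form a chain, the belief about tempo and the next onset can be updated note by note instead of being recomputed from scratch.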
 
In a series of rehearsals the model is trained from both solo and accompaniment data to represent a rhythmic interpretation for a specific piece of music. During live performance, the accompanist "listens" to the soloist by using a hidden Markov model and makes principled real-time decisions that incorporate all currently available information. I will provide a live demonstration of my system on several examples, including Robert Schumann's 1st Romance for Oboe and Piano.
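
The abstract does not spell out the hidden Markov model, but a toy version of the "listening" step might look like the following Python sketch: states are positions in the solo part, observations are detected pitches, and one forward-filtering step per observation maintains a real-time belief about where the soloist is. The pitches and probabilities are invented for illustration.

    # Illustrative sketch only; pitches and probabilities are assumptions.
    score_pitches = [69, 71, 72, 74]   # MIDI pitches of the solo line, one state per note

    def observation_prob(state, detected_pitch, p_correct=0.8):
        # Probability of the detector reporting detected_pitch given the true
        # score position (a crude uniform-confusion model).
        if detected_pitch == score_pitches[state]:
            return p_correct
        return (1.0 - p_correct) / 10.0  # spread over other plausible pitches

    def forward_step(belief, detected_pitch, p_advance=0.6):
        # One real-time update: predict (stay on the current note or advance
        # to the next one), reweight by how well the new observation fits
        # each position, then renormalize.
        n = len(belief)
        predicted = [0.0] * n
        for s, p in enumerate(belief):
            predicted[s] += p * (1.0 - p_advance)
            if s + 1 < n:
                predicted[s + 1] += p * p_advance
            else:
                predicted[s] += p * p_advance    # nowhere left to go
        updated = [predicted[s] * observation_prob(s, detected_pitch)
                   for s in range(n)]
        total = sum(updated) or 1.0
        return [u / total for u in updated]

    belief = [1.0] + [0.0] * (len(score_pitches) - 1)  # start on the first note
    for pitch in [69, 71, 71, 72, 74]:                 # a stream of pitch detections
        belief = forward_step(belief, pitch)
        print([round(b, 2) for b in belief])

The accompaniment's own decisions (when to play its next note) would then condition on this belief together with the rhythmic interpretation learned in rehearsal.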
 
Wednesday 27 March 2002, 4 p.m.
Seeley Mudd 207
 
Refreshments will be served in Seeley Mudd 208 at 3:30 p.m.
 
[NOTE: I don't know if Raphael is the oboe or the piano.]