SPAA - AN AGENT BASED INTERACTIVE COMPOSITION
Michael Spicer
Singapore Polytechnic
School of Info-Communications Technology

ABSTRACT
An interactive composition for flute and computer is
presented, entitled SPAA. This piece uses an interactive
composition environment that makes use of an
intelligent agent to realize the computer part. This agent
plays back samples of the live flute, with appropriate
transformations, in order to achieve a predetermined
musical structure. The agent decides which samples to
play, and what signal processing to apply, by analyzing
its recent output combined with the live flute part.

1. INTRODUCTION

SPAA is the first piece in a suite of interactive
compositions that feature the interactions between
synthetic performers and an improvising flutist. (The flute was chosen because I play it, and it has a pure tone that I thought would be easy to analyze; it is obviously also possible to use other instruments.) The synthetic
performer is implemented as an autonomous agent
written in C++. SPAA stands for Signal Processing
Autonomous Agent. The design of this agent is loosely
derived from the basic agent design used in two other
goal-driven, agent-based interactive composition
environments I have built, AALIVENET [1] and
AALIVE [2]. The biggest difference is that instead of
the agent controlling a MIDI synthesizer, as in the
previous two systems, the agent in SPAA manipulates
samples of the live flute performance. This is not a
purely free form improvisation environment. The overall
form of the piece is pre-determined, which provides a set
of goal states to guide the agent in its decision making
process. The live performer and agent performer
combine to try and realize this form as closely as
possible. The synthetic performer may cooperate
(reinforce) or may interfere (contradict) with the live
performer in order to achieve the goals as they change
throughout the duration of the piece. My motivation for
this piece was an interest in the interplay between the
live flute player and the agent as they try to realize the
form.

2. STRUCTURAL OUTLINE OF THE PIECE

The structure of the piece is articulated as a set of goal
states, indicating how various musical dimensions change
over the course of the piece. These dimensions include:
• Average pitch
• Average note rate
• Timbre (amount of high frequencies present)
• Average loudness

The states are stored in an array of C++ objects that encapsulate a four-dimensional vector. The performance of the flute is periodically analyzed, as is the output
of the agent. An error is calculated, based on the values
of these four dimensions. The agent then chooses which
buffer to play so as to minimize the magnitude of this
error.
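
A minimal sketch of how such a goal-state array might look in C++ follows; the type, field names, and values here are illustrative, not taken from the actual SPAA source:

```cpp
// Sketch of a four-dimensional goal state; one instance per point in the form.
struct GoalState {
    float pitch;     // normalized average pitch
    float noteRate;  // normalized average note rate (attacks per buffer)
    float timbre;    // normalized amount of high-frequency energy
    float loudness;  // normalized average amplitude
};

// The structural outline of the piece: the agent steers its output toward
// whichever of these states is currently active.
static const GoalState kForm[] = {
    { 0.3f, 0.2f, 0.1f, 0.4f },   // quiet, sparse opening
    { 0.7f, 0.6f, 0.8f, 0.9f },   // dense, bright middle section
    { 0.4f, 0.3f, 0.2f, 0.3f },   // subdued close
};
```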

3. AGENT DESIGN

The agent is based on a sample playback system, which
could be considered a descendant of the tape delay/
digital delay looping systems that have been commonly
used over the last forty years. The structure of the agent
is shown in Figure 1 below.

Figure 1. Overall structure of the SPAA agent.

The agent has a number of C++ objects (currently ten,
but the final version will have many more), each
containing a buffer of audio input from the live flute
performance, lasting about two seconds. The agent
program, invoked by the PortAudio callback function,
chooses which buffer to play, and how the audio in each
buffer can be processed during playback. These
decisions are made by comparing the combination of the last live sample the agent has “heard” (analyzed) and its own output with the current goal state.
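
As a rough illustration, the audio side of this callback might look like the following sketch, which assumes the PortAudio V19 API, a mono float stream, a single capture buffer, and hypothetical names such as gCapture and gPlayback:

```cpp
// Sketch of a PortAudio callback that records the live flute into a capture
// buffer and plays back whichever buffer the agent has currently selected.
#include "portaudio.h"

static const int kBufferFrames = 88200;      // roughly two seconds at 44.1 kHz
static float gCapture[kBufferFrames];        // live flute audio being recorded
static const float* gPlayback = 0;           // buffer chosen by the agent
static int gRecPos = 0;
static int gPlayPos = 0;

static int spaaCallback(const void* input, void* output,
                        unsigned long frames,
                        const PaStreamCallbackTimeInfo*,
                        PaStreamCallbackFlags,
                        void*)
{
    const float* in  = static_cast<const float*>(input);
    float*       out = static_cast<float*>(output);

    for (unsigned long i = 0; i < frames; ++i) {
        if (gRecPos < kBufferFrames)          // capture the live input
            gCapture[gRecPos++] = in[i];

        float sample = 0.0f;                  // play the agent's chosen buffer
        if (gPlayback != 0 && gPlayPos < kBufferFrames)
            sample = gPlayback[gPlayPos++];
        out[i] = sample;
    }
    return paContinue;                        // keep the stream running
}
```

The main thread would reset gRecPos and swap gPlayback when a new buffer is chosen; that hand-off is sketched in section 4.
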
The agent's percepts are derived from analysis
of the incoming audio stream. Each of the C++ objects
containing the audio also has methods used for this analysis and attributes to store the results. These buffers
are filled in the PortAudio callback function that runs in
its own thread. The analysis is done asynchronously in
the main thread. To do the analysis, the buffer is
divided into a number of windows, each 1024 samples
long. The R.M.S. and F.F.T. of each window are
calculated, and from these, a normalized measure of the
average amplitude, average pitch, average amount of
high frequencies present, and number of note attacks, can
be calculated. These values are stored in a four-dimensional vector that represents the flute performance held in the buffer.
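
A sketch of this windowed analysis for the loudness and note-attack measures follows; the pitch and timbre measures, which come from the FFT of each window, are omitted here, and the attack threshold is an arbitrary illustration:

```cpp
// Sketch of the per-window analysis: the two-second buffer is split into
// 1024-sample windows, and an RMS value is computed for each window.
#include <cmath>
#include <cstddef>

const std::size_t kWindowSize = 1024;

float windowRMS(const float* window, std::size_t n)
{
    float sum = 0.0f;
    for (std::size_t i = 0; i < n; ++i)
        sum += window[i] * window[i];
    return std::sqrt(sum / n);
}

// Average the per-window RMS values to get the loudness component
// of the buffer's four-dimensional feature vector.
float averageLoudness(const float* buffer, std::size_t length)
{
    std::size_t numWindows = length / kWindowSize;
    float total = 0.0f;
    for (std::size_t w = 0; w < numWindows; ++w)
        total += windowRMS(buffer + w * kWindowSize, kWindowSize);
    return numWindows ? total / numWindows : 0.0f;
}

// Count note attacks as windows whose RMS jumps well above the previous
// window's RMS.
int countAttacks(const float* buffer, std::size_t length, float threshold)
{
    std::size_t numWindows = length / kWindowSize;
    int attacks = 0;
    float prev = 0.0f;
    for (std::size_t w = 0; w < numWindows; ++w) {
        float rms = windowRMS(buffer + w * kWindowSize, kWindowSize);
        if (rms - prev > threshold)
            ++attacks;
        prev = rms;
    }
    return attacks;
}
```
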
The analysis result is stored in the same form (a four-dimensional vector) as the goal states that are used to specify the desired evolution of the piece, so the current error is simple to calculate: the sum of the analysis results for the most recent input and the currently playing buffer is subtracted from the goal state. The
buffer with the smallest magnitude error vector is chosen
as the next playback buffer.
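
A sketch of that selection step, with illustrative types and names (the vectors correspond to the four dimensions listed in section 2):

```cpp
// Sketch of choosing the playback buffer: each candidate buffer's stored
// feature vector is combined with the analysis of the most recent live
// input, and the buffer whose combination lies closest to the goal wins.
#include <cmath>

struct StateVector { float v[4]; };   // pitch, note rate, timbre, loudness

static float distance(const StateVector& a, const StateVector& b)
{
    float sum = 0.0f;
    for (int i = 0; i < 4; ++i) {
        float d = a.v[i] - b.v[i];
        sum += d * d;
    }
    return std::sqrt(sum);
}

int chooseBuffer(const StateVector bufferFeatures[], int numBuffers,
                 const StateVector& liveInput, const StateVector& goal)
{
    int best = 0;
    float bestError = 1e30f;
    for (int i = 0; i < numBuffers; ++i) {
        StateVector combined;
        for (int d = 0; d < 4; ++d)
            combined.v[d] = liveInput.v[d] + bufferFeatures[i].v[d];
        float err = distance(goal, combined);
        if (err < bestError) {
            bestError = err;
            best = i;
        }
    }
    return best;
}
```
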
One area currently being explored is the use of simple real-time DSP techniques to reduce the errors in the timbre and average note rate dimensions.
Waveshaping and amplitude modulation/ring modulation
can cheaply add more high frequency components. Low
frequency amplitude modulation with suitably shaped
ramp waveforms can create the effect of more note
attacks.
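
The following is a minimal sketch of these two transformations, assuming mono float buffers at 44.1 kHz; the carrier and ramp rates are illustrative parameters, not values from the piece:

```cpp
// Ring modulation and low-frequency ramp modulation as cheap ways to raise
// the timbre and note-rate measures of a playback buffer.
#include <cmath>

const float kSampleRate = 44100.0f;
const float kPi = 3.14159265f;

// Multiply the signal by a sine carrier, producing sum and difference
// frequencies and therefore more high-frequency energy.
void ringModulate(float* buffer, int numFrames, float carrierHz)
{
    for (int i = 0; i < numFrames; ++i)
        buffer[i] *= std::sin(2.0f * kPi * carrierHz * i / kSampleRate);
}

// Amplitude-modulate with a slow falling ramp: each ramp cycle starts loud
// and decays, which the analysis stage tends to read as an extra attack.
void rampModulate(float* buffer, int numFrames, float rateHz)
{
    for (int i = 0; i < numFrames; ++i) {
        float cyclePos = std::fmod(rateHz * i / kSampleRate, 1.0f);
        buffer[i] *= 1.0f - cyclePos;
    }
}
```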

4. IMPLEMENTATION

The system is written in C++, and developed on a G4 Macintosh PowerBook using Xcode. In order to make the program as portable as possible, I have used no platform-specific APIs. PortAudio is used to handle the audio I/O, OpenGL for the display, and GLUT for handling the user interface (keyboard, mouse and menus). All of the analysis of the input buffers is done during the GLUT idle function. I am using some global boolean variables as semaphores, to prevent the analysis and callback threads from interfering with each other. This seems to be working well so far.
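
A sketch of that flag-based hand-off, with hypothetical variable names, is shown below; it assumes the callback only signals the main thread and never blocks:

```cpp
// Sketch of coordinating the PortAudio callback and the GLUT idle function
// with simple boolean flags instead of real locks.
static volatile bool gBufferReady  = false;   // set by the audio callback
static volatile bool gAnalysisBusy = false;   // set while the idle function works

// Called from the audio callback once a capture buffer has been filled.
void markBufferReady()
{
    if (!gAnalysisBusy)       // don't hand over a buffer the main thread is reading
        gBufferReady = true;
}

// Registered with glutIdleFunc(); runs in the main (GLUT) thread.
void idleFunction()
{
    if (gBufferReady) {
        gAnalysisBusy = true;
        // analyzeCapturedBuffer();   // windowed RMS/FFT analysis (section 3)
        gAnalysisBusy = false;
        gBufferReady = false;
    }
}
```
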
In order to get a clean signal, without a lot of spill, I am close-miking the flute with a dynamic microphone. It is important to set the gain of the microphone correctly, so that the amplitude measurements correspond appropriately with the system goal states.

5. FUTURE WORK AND CONCLUSION

The current state of the software (June 2005) is at the “proof of concept” stage. All the parts are in place, and
they work, but are quite crude. Much work needs to be
done on the analysis stage, so as to extract higher level
knowledge from the raw input signal. The agent function
could be improved by adding some classification
capability. The DSP routines used to enhance the sample
playback are very rudimentary. Also, the user interface is
not very friendly. Even so, the system as it stands is
usable and shows the potential for using the agent
approach for building interactive composition systems.

6. REFERENCES

[1] Spicer, M.J. "AALIVENET: An Agent Based Distributed Interactive Composition Environment." In Proceedings of the International Computer Music Conference, Miami, USA, 2004.

[2] Spicer, M.J., Tan, B.T.G. and Tan, C.L. "A Learning Agent Based Interactive Performance System." In Proceedings of the International Computer Music Conference, pp. 95–98. San Francisco: International Computer Music Association, 2003.
