Max Kliegle
Professor Harfenist
Physics of Music
4/4/2014
Electronic Sound Design and Synthesis
Music made solely with acoustic instruments has died. All commercially available music,
and almost all home-recorded music made this year, will be processed on a computer and
altered by digital means to better suit the artist's vision. While some old-school producers and
engineers are reluctant to embrace the new tools available, most dive right in and begin
finding ways to use these tools beyond their original purpose, manipulating sounds past what
the human mind could have imagined one hundred years ago. Artists today employ several
different methods of sound design. The main ones are additive synthesis, subtractive
synthesis, sampling, FM synthesis, granular synthesis, physical modeling synthesis, and
wavetable synthesis.
Additive synthesis is the stacking of sine waves, or other wave types depending on
the synthesizer, to create an entirely new wave.

This illustration shows the sum wave produced by additive synthesis from six sine waves at
different frequencies.
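As a minimal sketch of that stacking (the frequencies and amplitudes below are illustrative, not taken from the figure), additive synthesis is just a sum of sines:

```python
import math

def additive_wave(partials, t):
    """Sum sine partials; `partials` is a list of (frequency_hz, amplitude) pairs."""
    return sum(amp * math.sin(2 * math.pi * freq * t) for freq, amp in partials)

# Approximate a square-ish wave from its first three odd harmonics at 110 Hz.
partials = [(110, 1.0), (330, 1 / 3), (550, 1 / 5)]
sample_rate = 44100
samples = [additive_wave(partials, n / sample_rate) for n in range(sample_rate)]
```

Adding more partials with the right amplitudes sharpens the approximation, which is exactly how additive synthesizers build complex timbres from simple parts.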
A synthesizer is a sound or wave source that hosts these different types of
synthesis. Essentially it is an instrument that can live inside a computer (a software synthesizer)
or exist as its own machine with its own keys (a hardware synthesizer). The sound is then fed
(or, depending on the synthesizer's capability, sometimes in between the different wave additions)
through effects such as amplification (to make it louder), distortion (analog hard-clipping
emulation, which is essentially playing the sound too loud so that the higher harmonics begin to
fuzz; at low levels, distortion can enrich the sound), flanger (which takes the original audio
source, delays it, and detunes it in a sweeping manner), phaser (which does the same thing as a
flanger but on a much smaller time scale, for a different-sounding effect), and finally some sort
of filter (which controls the volume of different portions of the frequency spectrum).
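The hard clipping that distortion emulates is easy to sketch; the threshold value here is an arbitrary illustration:

```python
def hard_clip(sample, threshold=0.5):
    """Hard clipping: flatten any peak above the threshold.
    The flattened peaks add higher harmonics, heard as fuzz."""
    return max(-threshold, min(threshold, sample))

clipped = [hard_clip(s) for s in (-1.0, -0.2, 0.3, 0.9)]
```

Samples inside the threshold pass through untouched; only the loud peaks are squared off, which is why light clipping merely enriches the sound while heavy clipping fuzzes it.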
Subtractive synthesis starts with a waveform rich in harmonics (the simplest being a
pulse, saw, or square wave), which is fed through a filter to shape the harmonics down
(by turning down the volume of certain frequency bands) and then, eventually, through
the effects typical of an additive synthesizer.
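A minimal sketch of the subtractive idea, assuming a naive sawtooth source and a simple one-pole low-pass filter (both illustrative choices, not any particular synthesizer's design):

```python
import math

def saw(freq, t):
    """Naive sawtooth: rich in harmonics, rolling off at 1/n."""
    return 2 * ((freq * t) % 1.0) - 1.0

def one_pole_lowpass(samples, alpha=0.1):
    """One-pole low-pass: each output eases toward the input,
    which turns down the upper harmonics."""
    out, y = [], 0.0
    for x in samples:
        y += alpha * (x - y)
        out.append(y)
    return out

sample_rate = 44100
raw = [saw(220, n / sample_rate) for n in range(1024)]
filtered = one_pole_lowpass(raw)
```

The filtered wave keeps the sawtooth's pitch but loses its buzzy edge: the harmonics were subtracted, not added.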


This picture represents graphically how a filter manipulates sound. The lighter grey
area is silent, the darker grey area is present, and the dotted line shows where the frequency
cutoff is set on the filter. In this particular graphic the resonance is boosted as well, which
means that at the cutoff point the volume of those frequencies is higher than anywhere else.
Today most software synthesizers combine the capabilities of both: you first create
your waveform with an additive synthesizer, then feed it through subtractive filters.
Sampling (which grew out of musique concrète, discussed later) is the method of taking
real-life recorded sounds or musical phrases and manipulating them by adding effects or by
changing other aspects of the recording, such as pitch or timing.
FM synthesis (frequency modulation synthesis) is more complicated than the previous
sound design methods; even many producers who are veterans of sound design and sampling
cannot master FM synthesis as fully as they can the others. FM synthesis works by starting
with a sine wave (early FM synthesis used sine waves exclusively, but again, today's
synthesizers can use many different wave types, some even custom waveforms) and then
taking another waveform and using it to modulate the frequency of the first. When the
modulation itself runs at a very high frequency, those rapid oscillations no longer sound like
vibrato; instead they change the timbre of the wave.

This figure shows the waveforms that modulate each other in frequency modulation.
One wave is the carrier, the main wave that is heard to start with. The modulating oscillator
changes the frequency of the carrier at the frequency of the modulating wave, which results
in the bottom wave. What makes this different from additive synthesis is that the modulating
wave has a direct effect on the frequency of the carrier wave, rather than the two waves
simply being heard at the same time.
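The modulation described above can be sketched directly; the carrier frequency, modulator frequency, and modulation index below are arbitrary illustrations:

```python
import math

def fm_sample(t, carrier_hz=440.0, modulator_hz=110.0, index=3.0):
    """Basic FM: the modulator wiggles the carrier's phase.
    A larger modulation index means a brighter, more harmonic-rich tone."""
    return math.sin(2 * math.pi * carrier_hz * t
                    + index * math.sin(2 * math.pi * modulator_hz * t))

sample_rate = 44100
tone = [fm_sample(n / sample_rate) for n in range(1024)]
```

Note that the output is still a single wave bounded between -1 and 1; the modulator is never heard directly, only through the sidebands it stamps onto the carrier.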
Sounds created by FM synthesizers are often highly distinctive (although with today's
software synthesizers the lines are being blurred). These sounds tend to be very rich in
harmonics, and often very heavy. FM synthesis's ability to create sounds not heard in nature
makes it popular with composers, especially for sound effects in movies, television, or
video games.
Granular synthesis is a more recent addition to the sound design world. It slices a
source sound into tiny "grains" that can be time-shifted, pitch-shifted, and otherwise
processed before being layered back together. Granular synthesis is also popular in the
cinematic world as a way to create artificial background soundscapes.
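A minimal sketch of the grain idea, assuming a Hann-window fade on each grain and random grain placement (illustrative choices, not any particular granular synthesizer's design):

```python
import math
import random

def granulate(source, grain_len=256, n_grains=20, seed=0):
    """Chop a source into short windowed grains and overlap them at
    random positions to build a new texture from the same material."""
    rng = random.Random(seed)
    out = [0.0] * (grain_len * n_grains // 2)
    for _ in range(n_grains):
        start = rng.randrange(len(source) - grain_len)  # where to read a grain
        place = rng.randrange(len(out) - grain_len)     # where to write it
        for i in range(grain_len):
            window = 0.5 - 0.5 * math.cos(2 * math.pi * i / grain_len)  # Hann fade
            out[place + i] += window * source[start + i]
    return out

source = [math.sin(2 * math.pi * 220 * n / 44100) for n in range(4096)]
cloud = granulate(source)
```

Because grains are read and written independently, the output's length and pitch content can drift freely from the source's, which is exactly what makes the technique useful for time-shifting and ambient textures.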
Physical modeling synthesis has somewhat hit the back burner, but some innovative new
synths have been trying to modernize this technique for the 21st-century composer. Physical
modeling works by using an algorithm to simulate the physical behavior of a sound source,
such as a vibrating string or a resonating tube, and so recreate its sound synthetically. The
result is then fed through effect chains similar to those found on a subtractive or additive
synthesizer.
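One classic physical-modeling algorithm is Karplus-Strong plucked-string synthesis, which models a string as a burst of noise circulating through a delay line with averaging (the averaging plays the role of the string's energy loss); a minimal sketch:

```python
import random

def karplus_strong(freq_hz, sample_rate=44100, n_samples=2048, seed=0):
    """Karplus-Strong plucked string: the delay-line length sets the pitch,
    and averaging adjacent samples models the string's damping."""
    rng = random.Random(seed)
    delay = int(sample_rate / freq_hz)
    buf = [rng.uniform(-1, 1) for _ in range(delay)]  # the "pluck": white noise
    out = []
    for i in range(n_samples):
        out.append(buf[i % delay])
        # Average adjacent samples: a gentle low-pass, i.e. energy loss per cycle.
        buf[i % delay] = 0.5 * (buf[i % delay] + buf[(i + 1) % delay])
    return out

string = karplus_strong(220)
```

The noisy attack decaying into a clear pitched tone is what gives the result its plucked, physical character, despite no recording of a real string being involved.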
Wavetable synthesis is a very popular form of synthesis now, due largely to Native
Instruments' software synthesizer "Massive." A wavetable synthesizer works like a combined
subtractive and additive synthesizer, but the waves used are not the standard pulse, saw,
square, triangle, or sine waves; rather, they are combinations of two of these waves, or even
combinations of more complex waves created in the synthesizer's own additive, FM, or
subtractive engine. The starting wavetable sound is one wave set against another, with a knob
to morph between the two along the wavetable.

Best shown visually: here are the two waves that make up the popular "Modern Talking"
wavetable in Native Instruments' "Massive" synthesizer. If the wavetable position knob were
all the way to the left, only the wave shown on the left would be heard; all the way to the
right, only the right wave. What makes wavetable synthesis unique is that you can crossfade
between the two waves: if the knob is more toward the left you hear more of the left wave,
and if it is more toward the right you hear more of the right wave (similar to the crossfader
on a DJ's turntable).
At one end of the spectrum you hear solely the first wave; as you scroll toward the
second, the waves are morphed until you reach the other side, where only the second wave
on the wavetable is present. This sound is then fed through effect chains to produce the
end result.
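The morphing knob described above is a crossfade; a minimal sketch, using an illustrative sine-to-square pair rather than Massive's actual tables:

```python
import math

def wavetable_morph(position, wave_a, wave_b):
    """Crossfade two single-cycle waves: position 0.0 gives only wave_a,
    1.0 gives only wave_b, and values in between blend the two."""
    return [(1.0 - position) * a + position * b for a, b in zip(wave_a, wave_b)]

n = 64
sine = [math.sin(2 * math.pi * i / n) for i in range(n)]
square = [1.0 if i < n // 2 else -1.0 for i in range(n)]

halfway = wavetable_morph(0.5, sine, square)
```

Sweeping `position` over time is what produces the evolving, animated timbres wavetable synths are known for; a static knob setting just picks one blend.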
Sound design beyond acoustic instruments took off in the 1930s, when composers
such as Paul Hindemith and Ernst Toch began experimenting with found sounds (sounds
recorded by the composer with a field recorder in the natural environment, not an
instrument or a sound purposely made to be musical) and tape manipulation [Chadabe 28].
The first purely electronic instrument, however, was the Theremin, originally called the
Aetherphone, developed in Moscow in 1920 by Leon Theremin [Chadabe 8].

The Theremin was revolutionary not only for being the first playable purely electronic
instrument, but also because the manner in which the performer played it seemed almost
magical at the time. The Theremin used two antennas, one for amplitude (volume) and one
for frequency (pitch). By waving their hands in the air, performers controlled these two
qualities, adjusting the height of one hand for pitch and the distance of the other hand from
the instrument for volume. The result is a sound synonymous with retro horror movies of the
twentieth century. Before long there were performers playing Bach on the Theremin on
stage, with accompaniment, and even a Theremin concerto [Martin].

After the Theremin's introduction to the general public, kits became available for kids
(and adults) to build their own Theremins. A young Robert Moog discovered these kits and
began experimenting with them on his own, eventually building Theremins without a kit
[Fjellestad]. Soon Moog was discussing his ideas with others who were mesmerized by the
capabilities that electronics could offer for making music. In 1964 he created what we know
as the first voltage-controlled audio filter [Chadabe 142]. Throughout the 1960s a boom
occurred in the synthesizer industry, though it is important to note that these synthesizers
were expensive, experimental, and oversized, larger than a concert organ, and required a
vast amount of knowledge to maintain and operate. Nonetheless, Moog kept at it, designing
synthesizers and bringing to life ideas and specifications written by others in the field. In
the mid-1960s the envelope generator (known today as the ADSR, or
attack-decay-sustain-release envelope) was invented, and it was around this time that Moog
shifted his focus from academic music to commercial music [Chadabe 142].

Moog changed the game when he released the Minimoog. It replaced the confusing,
cluttered patch cords that made creating sounds feel like working a telephone switchboard
with simple, easy-to-use knobs; Moog said that music dealers still questioned his sanity over
all the knobs when he first tried to introduce it. Thanks to the efforts of David Van
Koevering, and to Keith Emerson's amazing Moog solo on the hugely popular song "Lucky
Man," the instrument caught on, and very soon after it was difficult to be a success unless
you were proficient with a Moog synthesizer [Chadabe 155].

Right around the Moog boom, a man named John Chowning was unsatisfied with the
sounds of the day's synthesizers. While searching for a richer sound and experimenting
heavily with vibrato, he noticed that at very high modulation frequencies the pitch
modulation became instead a change in timbre. He shopped this discovery around to
companies but was turned away over and over again. Eventually Yamaha licensed his
technique in 1974, and he received a patent in 1977 [Chadabe 117]. The technique became
known as FM synthesis and flew largely under the radar until the Yamaha DX7 came out in
1983 [Chadabe 197]. It was one of the most capable synthesizers yet to hit the market, able
to play 16 notes at the same time, with striking timbre changes built into the FM system.
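The ADSR envelope generator mentioned above shapes a note's loudness over time; a minimal piecewise-linear sketch (all segment lengths and levels below are arbitrary illustrations, measured in samples):

```python
def adsr(n, attack=100, decay=200, sustain_level=0.6, release=300, note_len=1000):
    """Piecewise-linear ADSR envelope value at sample n."""
    if n < attack:                         # attack: ramp up from silence
        return n / attack
    if n < attack + decay:                 # decay: fall to the sustain level
        return 1.0 - (1.0 - sustain_level) * (n - attack) / decay
    if n < note_len:                       # sustain: hold while the key is down
        return sustain_level
    if n < note_len + release:             # release: fade out after the key lifts
        return sustain_level * (1.0 - (n - note_len) / release)
    return 0.0                             # silence

envelope = [adsr(n) for n in range(1400)]
```

Multiplying any oscillator's output by this envelope, sample by sample, turns a steady tone into a note with a beginning, middle, and end.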
These days it takes very little knowledge of voltage control or programming to make
sounds with a software synthesizer. There is a multitude of synthesizers on the market today,
hardware and pure software alike, some even available for free. It is because of this that
music today is at its most exciting. Artists have so many tools at their disposal, each working
differently, that people who may not get excited about an acoustic piano or guitar can be
genuinely motivated by sounds beyond what is natural. Artists today push the limits,
combining every form of sound design. The ability to manipulate any recorded sound by
processing it through effects has enabled one of the biggest changes in the history of music.
Sonically, today's music has a higher perceived loudness, and is busier, than that of any
previous generation, because artists took their favorite qualities of music and found easier
digital ways to achieve them and go beyond them.
Bibliography
Berg, Richard E., and David G. Stork. The Physics of Sound. Upper Saddle River, NJ: Pearson
Prentice-Hall, 2005. Print.

Chadabe, Joel. Electric Sound: The Past and Promise of Electronic Music. Upper Saddle River,
NJ: Prentice Hall, 1997. Print.

Hosken, Daniel W. Music Technology and the Project Studio: Synthesis and Sampling. New
York: Routledge, 2012. Print.

Moog. Dir. Hans Fjellestad. Perf. Charlie Clouser, Herbert Deutsch, Keith Emerson. ZU33,
2004. DVD.

Theremin: An Electronic Odyssey. Dir. Steven M. Martin. By Steven M. Martin. Perf. Leon
Theremin and Robert Moog. Kaga Bay, 1994. Videocassette.

http://www.planetoftunes.com/synthesis/sy_media/subtractive/resonance.jpg
http://www.planetoftunes.com/synthesis/sy_media/types/additive.gif
https://documentation.apple.com/en/logicstudio/instruments/Art/L00/L0005_FMSynthesis.png

