My
history of working with guitar
synthesizers goes back to a
guitar-following device called the EWE,
which stands for Electro Wagnerian
Emancipator. There's only one of them; it
was designed for me by Bob Easton at 360
Systems. If you played a single note, all
12 notes of the chromatic scale would be
ringing, and you could make a decision as
to which of those 12 to leave on and
which to leave off - and thereby select a
chord that would follow parallel to
whatever you played on the guitar. It
worked; the only problem was the timbre
of the synthesizer sound that came out
was, I would say, fairly unattractive - a
real square wave sound. That is now
gathering dust in the warehouse. I tried
to use it on "Big Swifty" from Waka/Jawaka
- Hot Rats, but it didn't end up on
the final track. On "Be In
My Video" [Them or Us], a
couple of cuts from Thing Fish,
and the entire Francesco Zappa
album, I used the Synclavier with no
sampling - just the synthesizer sounds.
The first time I used the polyphonic
sampling was on Frank Zappa Meets the
Mothers of Prevention; all of the
material on side two, except for the
little instrumental section with Thing
Fish talking over it, was done with the
machine. And on side one, "Yo
Cats," which sounds like a little
jazzbo lounge group, has real drums and
Ike Willis' voice, but everything else is
out of the Synclavier.
I
recently tried out the Modulus Graphite
controller, and while it felt better as a
guitar, it had problems similar to the
guitar controllers I've played before.
It seemed to be a better instrument than
the Roland I tried out originally with
the Synclavier. The difference is that
the Modulus Graphite's neck is supposed
to be more stable, and you're supposed to
have better isolation, less false
triggering, and so forth. There was less
false triggering, but for the way I play,
there was still too much. I've only tried
the early Roland model they had with the
Synclavier, and it just wasn't right for
me. If it were, I'd have the thing up and
running right now. I wouldn't dissuade
somebody else from buying one -
apparently there are other people who can
play it and make it do wonders - but for
the way I play the guitar, and for the
uses I want it for, it just seemed wrong
to buy it. Instead, I enter all the data
through the keyboard or the typewriter.
The
Roland electronics won't read things like
hammering on the string with a pick; it
just chokes on that. The way in which I
finger the instrument apparently is too
slovenly for it to read. I don't mute
every string after I play it; there's no
Berklee technique involved here - I grab
it and whack it. You can adjust the
sensitivity within certain parameters,
but if you turn the sensitivity down to
avoid the false triggering, it's going
to pick up fewer nuances - so where do
you draw the line?
Depending
on what data I'm entering, it can be
inconvenient to have to do it all on
keyboard or typewriter. For example, I
can't just sit there and play a solo on
the keyboard, because I don't think in
those terms - certainly not in real time,
although I can slow the sequence down and
play stuff and get some styling and
phrasing. But the way in which the data
is entered has a "drier" feel
than if it had been played on the guitar.
For one thing, on the guitar, you get to
wiggle the intonation a little bit to
make more subtle things happen. The
tradeoff there, though, is that as you
wiggle the strings and intonate it - all
that nuance stuff - you generate masses
and masses of numbers that fill your
sequencer up very fast. That's certainly
a liability because you can't do as many
tracks of information. The bulk of the
sequence is being filled up with
inaudible data dealing with microtonal
pitch adjustments. So a single line you
might play, a minute and a half in
duration, will fill up a 30,000-note
sequence if you're using raw pitch.
Whereas if you type the pitches in,
there's no dynamic or nuance pitch
information entered that way. So you can
have many more notes in your sequence and
fill up all 16 tracks and still have
space left over.
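The tradeoff being described is easy to put in numbers. This is only an illustrative sketch, not the Synclavier's actual data format; everything except the 30,000-note capacity mentioned above is an assumption for the example.

```python
# Illustrative sketch: why continuous pitch "wiggle" fills a
# fixed-size sequence so much faster than typed-in pitches.
# Note counts and the nuance update rate are assumed values.

CAPACITY = 30_000          # events the sequence can hold (per the text)

def events_for_line(notes, seconds, nuance_rate_hz=0):
    """Count stored events: one per note, plus continuous pitch
    updates at nuance_rate_hz over the whole duration."""
    return notes + int(seconds * nuance_rate_hz)

# A minute-and-a-half line at roughly 8 notes per second:
typed  = events_for_line(notes=8 * 90, seconds=90)
played = events_for_line(notes=8 * 90, seconds=90, nuance_rate_hz=200)

print(typed, played)        # 720 vs 18720 events
print(CAPACITY // typed)    # ~41 typed lines fit in the sequence
print(CAPACITY // played)   # only 1 line of raw guitar pitch fits
```

Under these assumed numbers, the raw-pitch version of the same line costs about 26 times as much sequencer memory as the typed version.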
Allan
Holdsworth came by the house with his
SynthAxe, and that had some similar
problems as well as some different ones,
for me. One similarity is the string
delay. The delay in the SynthAxe is
caused by the MIDI delay, since the
instrument doesn't have to count
frequency like the Roland does. One set
of strings tells you that the note has
been initiated, and the other set tells
you what the pitch is. It's insignificant
delay on that end, but the MIDI delay is
something I can feel. This doesn't bother
some people, but for my ear and the way I
want to use it, it just seemed
inconvenient. Any time I can hear the
sound of the pick and then after it the
sound of whatever's supposed to come out,
it bothers me. Even though it's just a
few milliseconds apart, it makes me feel
awkward.
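The "few milliseconds" he can feel are a consequence of MIDI's serial transmission rate, which is fixed by the MIDI 1.0 specification at 31,250 baud with 10 bits per byte (start bit, 8 data bits, stop bit). The chord size below is just an example.

```python
# Back-of-envelope for the MIDI delay being described.

BAUD = 31_250
BITS_PER_BYTE = 10                     # start bit + 8 data bits + stop bit

byte_ms = 1000 * BITS_PER_BYTE / BAUD  # time to send one byte
note_on_ms = 3 * byte_ms               # Note On = status + key + velocity

print(round(byte_ms, 2))               # 0.32 ms per byte
print(round(note_on_ms, 2))            # 0.96 ms per Note On message
print(round(6 * note_on_ms, 2))        # 5.76 ms before the last note
                                       # of a 6-note chord gets sent
```

So a single note is delayed by about a millisecond before any synthesis even begins, and the notes of a chord are necessarily staggered on the wire.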
I would
have preferred it if the angle of the
SynthAxe's neck had been adjustable by
some sort of pivot. Everyone's body is
different - arm lengths are different;
trunk lengths are different. The
engineering principle is interesting and
has a lot of merit, but it would be more
useful if you could readjust the neck
angle.
The
main advantage for me with the Synclavier
is that I can imagine rhythms that human
beings have difficulty contemplating, let
alone executing. When I'm writing for a
live band, I'm constantly limited by the
physical liabilities of the people who
are going to play the parts. On the one
hand, you can say, "You just keep
practicing, be insistent, and you'll
eventually get the rhythm." The
truth of the matter is, the more you
practice, the more the musician hates it;
you're never really going to get it spot
on if the person is suffering to play the
rhythm. Why subject the musician to that
punishment and torture when you can just
type it in and get the thing
mathematically exact? I'd say that a
musical ideal would be a combination of
the things live musicians do best and the
things the machine does best blended into
a type of composition that lets each
element shine.
In
terms of the sounds, the samples we use
are mostly homemade. They're pure digital
samples done in my studio. Often the
factory samples originally start off on
analog tape, and sometimes you can play a
sample up and down the keyboard and hear
this residual, unwanted, blurry gunk
traveling with it. In the case of the
drum set, we recorded each drum and
cymbal in isolation, so that there's no
residual hardware ring from other objects
on the drum set. And we recorded each
drum in stereo - the snare, the kick
drum, each tom, everything. The net
result when you put this together into a
patch and play it back on the keyboard is
something surrealistically clean and
perfect. I also have samples of lots of
different types of guitars - a really
nice classical guitar, several
steel-string guitars done with pick and
with fingers all in stereo, both ambient
and close-up recording. I've got tenor
sax done with full-length, eight-second
tones, with natural vibrato, in stereo,
with the Ben Webster "flaw
attack" [laughs] - a really
unctuous tenor sax sound. We also have
samples of whole orchestral chords, from
the London Symphony Orchestra - Zappa
album. The LSO album was done on
a 24-track digital multitrack, so you can
single out string, brass, or wind
sections to get really high-quality,
isolated digital specimens.
For the
sounds that don't resemble any other
instruments, we have a whole
classification of noises: one being the
Evolver, where a sound starts off to be
one type of an instrument, and by the
time the note is finished, it's been
turned into maybe two or three other
instruments, all with a smooth
transition. Then we have Resolvers, where
different types of resynthesized vocal or
instrumental timbres are located on each
of the four partials, and by depressing a
single key on the keyboard, you get a
four-note chord that is actually four
independent melody lines that resolve
against each other to a final payoff.
Then if you depress the key at the end of
the payoff, you get a bonus of another
bend. So you can have little melismas,
little eight-note melodies, that occur,
and all you do is push the key down, and
it sings some kind of Renaissance cadence
or whatever. Two of the partials could be
resynthesized voices, one could be a
resynthesized violin, and the other a
resynthesized bassoon. Instant
Renaissance ensemble when you hit each
key. So you can imagine what happens when
you play a chord [laughs] - it
gets very absurd. It enables you to
write things that you couldn't deal with
under any other circumstance.
An
example of one of the
"impossible" things I've done
is: While one instrument is keeping a
steady pulse at 120, another instrument
is doing a ritard, where each successive
note is five milliseconds later than the
note before, over a period of time.
There's no accurate way to notate that,
but it has an interesting feel to it
because it slows down so gradually and so
mathematically. You can do things like
that within a bar - little accelerandos
and ritardandos in five-millisecond
increments inside of one bar.
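The "impossible" ritard described above can be sketched as two onset lists: one voice at a steady pulse, and another whose gap between notes grows by 5 ms each time. The note counts are arbitrary; only the 120 BPM pulse and the 5 ms step come from the text.

```python
PULSE_MS = 500            # quarter-notes at 120 BPM
RITARD_STEP_MS = 5

def steady(n, interval=PULSE_MS):
    """Onset times (ms) of n notes at a fixed pulse."""
    return [i * interval for i in range(n)]

def ritard(n, start_interval=PULSE_MS, step=RITARD_STEP_MS):
    """Onset times where each successive gap is `step` ms longer."""
    times, t, gap = [], 0, start_interval
    for _ in range(n):
        times.append(t)
        t += gap
        gap += step
    return times

print(steady(5))    # [0, 500, 1000, 1500, 2000]
print(ritard(5))    # [0, 500, 1005, 1515, 2030]
```

The two voices drift apart by an amount that grows quadratically, which is why the slowdown feels so gradual and so mathematical.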
Eventually,
everything goes to tape because that's
how the album is manufactured, but prior
to that you can do all sorts of things;
there's no question that that's a big
advantage. Most of the editing I do after
the basic composition is entered is done
on what the Synclavier calls the "G
Page," which shows you three tracks,
each with three columns of information.
One column tells you the start time,
another tells you the pitch, and another
tells you the duration. Once you learn
how to read that, you can edit on that
page very fast, and you still have access
to playing things on the keyboard while
you're doing your editing - which you
don't have when you're editing in the
music-printing function. The keyboard is
disabled when you're dealing with that;
you have to do it either on the screen or
in your head, or keep bouncing back and
forth. If you do it on the G Page, you
don't actually see any notes or staff or
anything - you see numbers that represent
in fractions where the beat is located in
the bar.
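A toy model of the page layout being described: each track is just a list of rows holding start time, pitch, and duration, and editing is list manipulation. The field names and the beat-based display are assumptions for the sketch, not the Synclavier's actual representation.

```python
from dataclasses import dataclass

@dataclass
class Note:
    start: float      # in beats; 1.0 = downbeat, 5.0 = next bar in 4/4
    pitch: str
    duration: float

track = [
    Note(1.0,   "C4", 0.5),
    Note(1.005, "E4", 0.5),   # a hair after the downbeat
    Note(2.0,   "G4", 1.0),
]

# Editing on this "page" is adding, deleting, and re-sorting rows:
track.append(Note(3.0, "B3", 0.25))            # add a note
track = [n for n in track if n.pitch != "E4"]  # delete one
track.sort(key=lambda n: n.start)

for n in track:                                # the three-column view
    print(f"{n.start:<8} {n.pitch:<4} {n.duration}")
```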
When
you're typing in the music printing
section, there's a process called
tupletization. It's not a real word, but
it's an accurate description of what the
machine does. You push a button, and then
it says "tuplet." Then it asks
you what kind of a tuplet you want. You
give it a flavor, like 11. Then you tell
the machine 11 over how many beats - for
instance, 11 over 3. Let's say it's all
in 4/4: You type in tuplet 11 over 3
beats, and hit Return. The screen
re-draws, and now the first three beats
of a 4/4 bar have been restructured. The
way you deal with entering information in
the music printing is, each bar of music
is divided into what they call "edit
blocks." For example, if you're in
4/4 and choose edit resolution 32, every
time you move the cursor one degree to
the right, you're moving it one 32nd-note
edit block to the right. So in 4/4 with
edit resolution 32, there are 32 edit
blocks in the bar. If you do this tuplet
thing that I just described, there are
now probably 44 edit blocks over the
first three beats. If you want to enter
an 11-tuplet in there, you just give a
couple of commands and it locates pitches
inside this imaginary framework of an
11-tuplet over 3. And you're not limited
to just entering 11 notes; you could
enter four 32nd-notes for each
eighth-note in the 11-tuplet, if you
wanted, thereby winding up with a
44-tuplet. Also, if you decided that
right in the middle of your 11-tuplet you
wanted to have a quintuplet that began on
the third note of the 11-tuplet, you give
a second instruction and it gives you a
second-level tuplet. We're building what
is described as a nested polyrhythm - one
polyrhythm living inside of another
polyrhythm. With this machine you can
nest three of them. After you've entered
your quintuplet starting on the third
beat of the 11-tuplet, you could then
decide you wanted to have a septuplet
that began on the second beat of the
quintuplet inside of the 11-tuplet. Or
any kind of tuplet you wanted, up to the
maximum resolution of the machine. After
you've typed in all the stuff, you push
the Play button, and by golly, there it
comes - and it's on time.
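The nesting logic can be sketched in a few lines: a tuplet spreads n equal notes over a time span, and a nested tuplet re-divides the span belonging to one of those notes. Times here are in beats; the function names are mine, and the assumption that the inner quintuplet occupies exactly one 11-tuplet note's span is illustrative.

```python
def tuplet(start, span, n):
    """Onset times of an n-tuplet spread evenly over `span` beats."""
    return [start + i * span / n for i in range(n)]

# 11-tuplet over the first 3 beats of a 4/4 bar:
outer = tuplet(start=0.0, span=3.0, n=11)

# A quintuplet nested inside it, beginning on the 3rd note of the
# 11-tuplet and (as an assumption) lasting one 11-tuplet note:
note_len = 3.0 / 11
inner = tuplet(start=outer[2], span=note_len, n=5)

print([round(t, 3) for t in outer[:4]])  # [0.0, 0.273, 0.545, 0.818]
print([round(t, 3) for t in inner])
```

A third call against one of the inner onsets would give the three-deep nesting the machine allows.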
If you
really want to get abstract and build
your composition just on the G Page,
instead of dealing with tuplets, you can
deal in milliseconds. The rhythm on the G
Page is determined by the start time of
the note - that's the data that lives in
the left-hand column. So you can read
data on the G Page in three different
modes: in terms of seconds, beats, or
SMPTE [time code] numbers. If you're
looking at a 4/4 bar at 120 in the mode
that shows you beats per bar, beat 1 is
the first quarter-note, beat 2 is the
second quarter-note, etc., and beat 5 is
the downbeat of the next bar. But inside
of that, you have resolution down to five
milliseconds. You can add and delete
notes on the G Page. So you can build a
list that would say: There's a note on
beat 1 and there's another note on beat
1.005. Then the next one could be on any
arbitrary number - you can just enter any
kind of numerical scheme you want for the
rhythm. For some of the kinds of rhythms
I type in, my G Page tuplets look like
beat 1, beat 1.07, beat 1.14 - in other
words, this whole series of notes is
going to be 70 milliseconds apart. You
can have a series of nine notes 70
milliseconds apart, followed by a series
of five notes 90 milliseconds apart,
followed by any number of notes you want
any number of milliseconds apart. You
don't even need to worry about tuplets
anymore - just go for the flow. You can
have these notes be totally distinct from
one another, or you can have them
overlapping each other to make chordal
arpeggios, just by changing the duration
in the far right-hand column. In other
words, if you want the notes to overlap -
if they are 70 units apart, and you want
every three of them to overlap - on the
page it would look like the third note
would last 70 units, the second note
would last 140, and the first would last
210. The first note that plays would last
the longest. The effect is like a little
three-note arpeggio. That's what I do for
12 and 14 hours a day - sit there and
deal with those kinds of numbers. It is
the only way to write that kind of music.
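The millisecond approach described above reduces to two lists: onsets built from arbitrary runs of gaps, and durations chosen so each group of three notes overlaps into an arpeggio (210/140/70, as in the text). The helper names are mine.

```python
def onset_series(runs, start=0):
    """runs = [(count, gap_ms), ...] -> flat list of onset times."""
    times, t = [], start
    for count, gap in runs:
        for _ in range(count):
            times.append(t)
            t += gap
    return times

# Nine notes 70 ms apart, followed by five notes 90 ms apart:
onsets = onset_series([(9, 70), (5, 90)])

def arpeggio_durations(n_onsets, gap=70, group=3):
    """First note of each group sustains longest: 210, 140, 70, ..."""
    return [(group - i % group) * gap for i in range(n_onsets)]

durs = arpeggio_durations(len(onsets))
print(onsets[:4])   # [0, 70, 140, 210]
print(durs[:6])     # [210, 140, 70, 210, 140, 70]
```

Every note in a group of three is still sounding when the third one starts, which is what produces the little chordal arpeggio effect.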
MIDI

needs to be faster, because if I'm
writing in those small increments of time
just for the rhythm, with the types of
delays that are built into using MIDI to
hook things together - it gets a little
tedious when things [i.e., synth modules]
talk late.
With
any of the systems that are
pitch-to-voltage, before the voltage
counter can figure out what frequency the
string is vibrating at, it has to wait
until the decay of the blast of white
noise that occurs when the pick hits the
string goes away completely. There's no
way around that that I know of. The
problem with fret-switching is that it's
taking us all the way back to the
Guitorgan. If the fret-switching is going
to tell you what the pitch is, then
you've kind of got what's happening in
the SynthAxe.
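The wait being described is bounded by physics: a frequency counter needs at least a full cycle of the string's vibration (usually more) after the pick noise dies away before it can report a pitch. The string frequencies below are standard guitar tuning; the cycle count is an assumption.

```python
low_E = 82.41      # Hz, open 6th string
high_E = 329.63    # Hz, open 1st string

def min_wait_ms(freq_hz, cycles=2):
    """Minimum time to observe `cycles` full periods of a string."""
    return 1000 * cycles / freq_hz

print(round(min_wait_ms(low_E), 1))    # 24.3 ms on the low string
print(round(min_wait_ms(high_E), 1))   # 6.1 ms on the high string
```

That is why pitch-to-voltage tracking always feels slowest on the low strings, before any MIDI delay is even added on top.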
FZ