Tutorial: Using Your Brain (Part 1)

Woody Allen famously said that his brain is his second favorite organ. When people ask me what instrument I play I’m beyond tempted to paraphrase that line. The question of what instrument a musician is proficient at usually reveals a fundamental misunderstanding of what creating music is all about.

While there is no formula for how music is created, here is, in my very humble breakdown, the way it should happen:

  1. Your right-brain gets an impulse to express something (magic happens here)
  2. Your left-brain maps a series of aural textures and frequency intervals that it knows will lead to the expression of that impulse (discipline happens here)
  3. Your nervous system sends a signal to your hands to drag the mouse, hit the MIDI keyboard, pluck the string, drag the bow, etc. (major expenditure at Guitar Center happens here)

Step (1) is the nebulous mushy part that no one can describe or calculate, although it seems to help to be from a broken home or (even better) an orphanage, to have been ridiculed or (even better) beaten up in school, to have survived a life-threatening trauma, or (best of all) to have been misunderstood by your suburban parents, combined with a pimple on prom night.

Because the listening public only sees step (3) they assume that’s where the music is. Don’t believe it.

Ironically, most musicians I see struggling with music think they are having trouble with step (3). Which they might be; after all, this site is only successful if finding the right software and working with computer music is hard. (God knows the self-promos aren’t what’s bringing people back here.)

This article is about step (2): helping you unblock the fundamentally unnatural process of translating emotion into sound, and making it second nature.

The human ear, especially of the Western variety, has been conditioned to associate certain sounds with certain emotions. You are welcome to ignore this, but if your intention is for your music to have an impact on the listener, it’s best to pay attention to that relationship. They call it “distortion” for a reason. White noise is, well… noise. Organized white noise is still noise. If the only sound that comes out of your scratching is white noise, then you should be prepared for people’s reaction. Ideally you will have anticipated their reaction because you were trying to express “noisy” emotions. If you were trying for “romantic” with white noise, then you’re Brian Eno and you’re not reading this.

There are several characteristics of musical sound but I’m going to focus on pitches (a.k.a. tones, frequencies, notes).

Musicologists have labeled all the relationships between pitches. They don’t always agree on the name or notation. If you plan to converse regularly with musicologists, then you should get one of those books with all the permutations of the 12 notes in the Western scale and start memorizing. If you skip this step you’ll be in decent company: John Lennon and Strictly Kev. And Brian Eno.

This step is optional because really, who cares what the relationship is called? “When I put my fingers on this string at that fret it sounds tense” is just as valid as saying “When I play a dominant chord with a raised 9th it sounds tense.” The only thing that matters is that you were trying for “tension” and you hear what that sounds like; specifically, you hear in your head what the notes sound like that elicit a tense response from listeners. If you also happen to know the sound you’re hearing is a dom7+9, that’s great, but that’s another mapping, irrelevant to the creation of music. I associate that chord with the opening riff of Purple Haze. Just as valid, because the sound is in my head and that’s all that matters.

If what you hear is specifically a C7+9, that’s yet another mapping, called “perfect pitch,” also irrelevant to making good music.
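For the code-minded, the mapping being described here can be sketched in a few lines. A dom7+9 chord is just a fixed pattern of semitone intervals above a root; relative hearing carries that pattern from any root, while “perfect pitch” pins it to one absolute root. The interval spelling below is the standard one, but the `spell` helper is my own illustrative construction, not anything from the article:

```python
# A dominant 7th chord with a raised 9th (the "Purple Haze" chord),
# as semitone intervals above the root: root, major 3rd, perfect 5th,
# minor 7th, raised (augmented) 9th.
DOM7_SHARP9 = [0, 4, 7, 10, 15]

NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def spell(root, intervals):
    """Relative hearing: the same interval pattern works from any root."""
    start = NOTE_NAMES.index(root)
    return [NOTE_NAMES[(start + i) % 12] for i in intervals]

print(spell("C", DOM7_SHARP9))  # ['C', 'E', 'G', 'A#', 'D#'] -- a C7+9
print(spell("E", DOM7_SHARP9))  # ['E', 'G#', 'B', 'D', 'G'] -- Hendrix's E7+9
```

The point of the sketch is that the chord name, the interval pattern, and “that Purple Haze sound” are three labels for one set of pitches; none of the labels is the music.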

If you can map that sound to an instrument, or to plucking notes out on a MIDI piano roll in Sonar, then congratulations: you’re on to step (3).

If you can sing Do Re Mi (relatively in tune) then you are already mapping notes to a naming convention, albeit not the one used by musicologists. Still, you’ve mapped the pitches of a diatonic major scale to a naming scheme. See? Doesn’t help you make emotive, impactful music.
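That solfège mapping can be written out as a tiny lookup table: seven syllables to seven semitone offsets above “Do.” Anchoring Do at MIDI note 60 (middle C) is my assumption for the sketch; any Do would do, which is exactly the article’s point about relative naming schemes:

```python
# Solfege syllables mapped to semitone offsets above "Do" -- the seven
# degrees of a diatonic major scale. A naming scheme, nothing more.
SOLFEGE = {"Do": 0, "Re": 2, "Mi": 4, "Fa": 5, "Sol": 7, "La": 9, "Ti": 11}

def sing(syllables, do_midi=60):
    """Map syllables to MIDI note numbers, with middle C (60) as Do."""
    return [do_midi + SOLFEGE[s] for s in syllables]

print(sing(["Do", "Re", "Mi"]))  # [60, 62, 64]
```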

There’s a common meme amongst street musicians that when a musician learns “all that theory stuff” she “loses her feel.” Just don’t tell this to Quincy Jones or Frank Zappa. (Although that would explain why having just learned how to read and score music for a ballet, Elvis Costello turned in one of his weakest rock albums ever last year.)

What’s actually happening in that case, however, is that the musician has allowed herself to get bogged down in the irrelevant mapping of sounds to note names, and it is interfering with her ability to map straight from emotion to sound. This doesn’t have to happen; it just does, all the time.

Having made the case for ignoring music theory here, in Part 2 of this article I cover the parts that you should (sorry) actually know.

7 thoughts on “Tutorial: Using Your Brain (Part 1)”

  1. Bob

    I make no claim to being a music engineer. I am a church musician, violin, piano, voice, and pertaining to this inquiry, a bell tower keeper.
    I write my own songs to play on the bell tower, or synthesize others from MIDI sources, recording them eventually to an audio file that will play on a computer sound card. Once in the sound card, the carillon amplifier takes it from there and plays the music on the bell tower speakers. The problem is that, apparently, when the masterminds made the 128 MIDI instruments, they saw no need to make a pretty-sounding set of church bells (all the way from deep boomers to tinkle bells) for carillon songs.

    I came here looking for a live human being, and hopefully, one or more download programs, that will simply let me compose a song in Noteworthy composer, and use some sort of a sound font player, to serve as a MIDI instrument (in place of the GS wavetables, etc.) If I can get it playing in bell voices, I am able to do the recording, audio file synthesis, editing, CD burning, etc. Been there, done that.

    Any help?


    (plays “first chair steeple” in this town.)

  2. victor

    er, not exactly on-topic now are we bob? ;)

    The site you want sounds like http://kvraudio.com

    look for the sfz sound font player. The ‘pro’ version can hold 16 different soundfonts (or 16 instances of the same one.)

    It’s a VST and there are 100s of hosts. If you load sfz into multiple tracks you’ll have n*16 instruments where n is the number of tracks.

    Good luck finding the soundfont, though.

    I’ve written up a few places to look here


  3. Pingback: beatmixed

  4. naturally yours


    I’m a “street wise” musician who used to be so intimidated by my Berklee-ed perfectly-pitched seniors. Not anymore!

    Thanks for the resonance.

    Off to part II….

  5. Lucas Gonze

    I love this piece. Great idea, well expressed.

    A thing about the Purple Haze chord being also the dom #9 is that those are somewhat different ideas. Hendrix laid down on the chord as a tonic and then went to the flat 3 major rather than using it as a passing V chord on the way to I. So the concept changes with the name, to some extent anyway.

  6. fourstones

    sure, to the same extent that the idea changes when I use the same hue of red to represent both blood and a sunset. this type of ear-training is aimed at recognizing similar sounds in different contexts – assuming uncomplicating things is a goal.
