I’m partway through the first track on FOURIER’S ALGORITHM by connect_icut (free download). A Fourier algorithm is a mathematical operation to do with (I think) signal processing that I do not understand. It’s entirely possible that I’m missing something crucial because of this lack of mathematical comprehension. Perhaps this deconstruction of the first four seconds of the Velvet Underground’s "Sunday Morning" is being run through an emulated Fourier process. The first four seconds of one of those songs that made generations run out (or stay in) and form a band. Subjected to what almost seems to me like an Alvin Lucier protocol, an "I Am Sitting In A Room" slow-motion shattering of the original audio.

In Lucier’s "I Am Sitting In A Room," the audio is played into a room, recorded, and replayed. Some pieces of the audio survive the process. Some frequencies embed themselves in the walls, become lost to the recorder. Imagine the first four seconds of "Sunday Morning" played in a bedroom, the bedroom that all bedroom indiepop will be made in after the playing of "Sunday Morning." Hear the chimes survive the process. Ringing down the years, even as all else fades and the sounds of vinyl crackle and CD bitrot get louder.

*(I grow ever more convinced that "I Am Sitting In A Room" is a multivalent metaphor for half the things in this ghostridden time we’re floating in (sitting in) right now.)*

The basic idea of the Fourier transform is that every possible waveform can be represented as a whole bunch of simple sine waves added together, and the mathematical operation converts any waveform into the individual frequencies that make it up. If you whistle, the time-based information is a simple sine wave; its Fourier transform is just a line indicating the frequency of the sine wave. A more complex piece of music is harder to see from its waveform, but its Fourier transform will show you where the fundamentals and chords are. Richard David James (Aphex Twin) used the algorithm in the other direction to put images into his music: http://www.moillusions.com/2006/10/aphex-twins-devil-face-illusion.html
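For anyone who wants to poke at this, here is a minimal NumPy sketch of the whistle example (the 440 Hz pitch and the sample rate are just values I picked): a pure tone goes in, and a single spike at its frequency comes out.

```python
import numpy as np

sample_rate = 8000                         # samples per second (my choice)
t = np.arange(sample_rate) / sample_rate   # one second of time
whistle = np.sin(2 * np.pi * 440 * t)      # a pure "whistle" at 440 Hz

# The Fourier transform of the whistle: magnitude per frequency.
spectrum = np.abs(np.fft.rfft(whistle))
freqs = np.fft.rfftfreq(len(whistle), d=1 / sample_rate)

peak_hz = freqs[np.argmax(spectrum)]
print(peak_hz)  # 440.0
```

A chord or a full mix would show several spikes instead of one, which is the "where the fundamentals and chords are" part.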

Can I come over and hang out? (I will be good, I’ll just sit in a corner, honest.)

There’s a really easy way to think about them: it’s the relationship between the music you hear, and the music you see written down on a piece of sheet music. On one side, you think of the sound as a bunch of vibrations in the air, and on the other, you think of it as a bunch of musical (or not so musical) notes.

Technically, there are a lot of reasons why that’s not quite right. What Max said above is *far* more correct — this is just the version a mentor of mine once told me, “in words I can explain to my mom.”

The *algorithm* part of it is that, as it turns out, there are ways to compute these transformations very, very quickly.
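A toy illustration of that speed difference, assuming NumPy: the textbook O(N²) definition and the fast algorithm produce the same numbers, the FFT just gets there in O(N log N).

```python
import numpy as np

def naive_dft(x):
    # The textbook definition, X[k] = sum_n x[n] * exp(-2j*pi*k*n/N):
    # O(N^2) multiplications -- fine for toy inputs, hopeless for long audio.
    n = np.arange(len(x))
    return np.exp(-2j * np.pi * np.outer(n, n) / len(x)) @ x

rng = np.random.default_rng(0)
signal = rng.standard_normal(256)

# np.fft.fft computes the same transform via the fast algorithm.
print(np.allclose(naive_dft(signal), np.fft.fft(signal)))  # True
```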

The Aphex Twin easter egg is really neat, though if I remember right it’s very oddly scaled on the frequency axis.

You can also apply it in two dimensions. Here’s an intro to that, with graphic examples: http://sharp.bu.edu/~slehar/fourier/fourier.html
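In the same spirit, a tiny two-dimensional example (the grating and its size are my own toy choices, not from that page): a stripy "image" transforms into a spike at the stripes' frequency.

```python
import numpy as np

# A tiny 32x32 "image": vertical stripes, 4 cycles across the width.
h, w = 32, 32
x = np.arange(w)
image = np.sin(2 * np.pi * 4 * x / w) * np.ones((h, 1))

# The 2D transform: energy lands at horizontal frequency 4, vertical 0.
spectrum = np.abs(np.fft.fft2(image))
fy, fx = np.unravel_index(np.argmax(spectrum), spectrum.shape)
print(fy, fx)  # 0 4
```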

That thought’s occurred to me about “I Am Sitting In A Room.” The text would just be typical minimalist-era literal description except for that “…different from the one you are in now.” Is it? Thought patterns echoing out into infinity like everything else. Gets me every time.

All of a sudden I’m back in my engineering mathematics class at university and my head is hurting. Thanks…

I first encountered Fourier via my interest in synthesizers and quickly learned that adding up sine waves to create a semi-new sound with which to play music was like giving yourself a colonoscopy with a rope. I’m not seeking to trash the results entirely, because many sounds so derived are beautiful and useful, but the process is so labor-intensive that diminishing returns kick in rather early. Adjusting 256 partials to get a bass sound is near-bogus and a poor way to spend a week. There are similar and better ways to reach that general tonal family, such as using analog sources, fancy effects processing, resynthesis and sampling. Computer power is such that you can now do that additive sine thing as easily as showing yer arse on ArseFacebook, but a cost/benefit analysis will usually lead a serious musician to more immediate tools.
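For the record, the additive-sine thing being described, summing partials one by one, looks something like this in NumPy (the 55 Hz fundamental, the 1/k amplitudes, and the 256-partial count are just illustrative choices; real additive patches give you a separate envelope per partial, which is where the week goes).

```python
import numpy as np

sample_rate = 44100
t = np.arange(sample_rate) / sample_rate   # one second of time
f0 = 55.0                                  # a low A; bass territory

# Sum 256 sine partials with 1/k amplitudes: 256 knobs' worth of
# tweaking that converges on... a plain old sawtooth wave.
tone = np.zeros_like(t)
for k in range(1, 257):
    tone += np.sin(2 * np.pi * k * f0 * t) / k

tone /= np.max(np.abs(tone))               # normalise to +/-1
```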

Also interesting re Fourier transforms: the cochlea (that is, the physical structure in your ear which translates incoming sound waves into neural impulses) is basically a biological structure which performs a Fourier transform. This Fourier-domain representation established by the cochlea is maintained throughout the auditory processing stream, from the brainstem up through primary auditory cortex and beyond.

Which, long story short, means that whenever you listen to that bit in Windowlicker, Richard D. James is drawing a picture of himself with your brainwaves.

Fourier came up with his algorithm in the same paper as the transform; it’s simply the most painful way ever devised to break some data down into energy bands within a distribution (sometimes music, or a full Dolby 7.1 mix; sometimes images, where we know H.264 and/or JPEG 2000 have much more to profit by). (It was also no problem at the time, because it worked and thus was good enough to get a degree! And on we went!)

That thing about ears is simplifying a bit; plenty of distortion and flanging are ours for a mere twitch of the sensibilities, as opposed to being serious data-decimation work. Hence having to will and practice a Golden Ear in order to have one.

The algorithm was miserable because (unlike any wavelet work, e.g. that dating from 15 seconds before Fourier published) it only worked well on sines in Gaussian media, and it had needless sections that canceled because Fourier was unsure they weren’t part of carrying it over to other media. Water or air, for example, get complex (not least by the hectare). We shall listen with different ears to weather, memes, gluons or superstrings. And order the right BD format of that edition of _Gravikords, Whirligigs and Pyrophones_, I hope.