
Part 1 of my latest work, One Nation Under God.  The piece is not fully mastered, so it might be best to hear it on really good headphones.

The Age Of Computer Music

Innovation has happened throughout human history, and there have been major changes over time.  In Japan the samurai class eventually became extinct thanks to the gun.  Horses and buggies were a common sight for thousands of years, but were later replaced by cars.  For trains, the steam locomotive was the engine used throughout most of railroad history, only to be replaced by the diesel around the mid 20th century.  In the past people who traveled from places like Europe to the Americas had to use boats, but now almost everyone uses planes.  The typewriter was a very common tool through much of the first three quarters of the 20th century, only to be replaced later by the computer.

The same thing happened in the music world as well.  In some regards, composers of the past were able to adapt to new technology much more easily than composers do today.  The first fortepiano was made in 1698, but it wasn’t until the second half of the 18th century that it started to replace the harpsichord.  For centuries composers didn’t have as much of a problem adapting to the technology of their time as they do today.  Composers such as Mozart, C. P. E. Bach, and Haydn all started off with the harpsichord, but later in life they were writing for the fortepiano.

Today it is a lot harder for composers living in the 21st century to adapt to 20th/21st century technology, because they are ill-equipped to handle the radical difference between 19th century technology and that of the 20th and 21st centuries.  Most composers today, including those currently in their 20s, would much rather stick with what their teachers know best (19th century European instruments) than embrace the computer technology they grew up around, which is embraced in so many other fields.

The piano played a very crucial role in bringing music from the Classical era through the Romantic era up to the Impressionist era.  We would not have gotten brilliant masterpieces for the piano from composers like Chopin and Debussy if they had not had access to, and taken advantage of, the technology of their day, as they fortunately did.  Liszt himself was very popular when he was young and audiences were stunned, because the piano was still a new instrument (having grown out of its predecessor, the fortepiano).  The Donaueschingen Festival and many other festivals that label themselves as ‘new’ music festivals hardly play any electronic music, which I find absurd, especially given that our age and its technology are so radically different from those of the early 20th century and earlier.

Many if not most professional composers today most likely started music at a young age.  Many of them start off playing the flute, piano, guitar, etc. as a child and are part of the band, choir, or orchestra in school or sometimes church.  During their time in school they learn nothing about how to work with synthesizers, create music on the computer, and so on.  Today there are very simple and basic programs that can be used in electronic music, like Granulab or Audacity, which can be downloaded for free on the Internet.  Yet, for the most part, there aren’t any required courses on them in most music colleges.

In college, composers can get away without knowing anything about electronic music or ever incorporating it in their own music.  One of the problems is that many of these composers have spent countless hours reading music scores and learning how to write for certain instruments.  Many are not required to learn about the acoustics and science behind the sounds.  The adjustment is very difficult because they have to completely change their mindset in order to write for computers.  Many will have to learn how to use a sound editor like Pro Tools instead of putting dots on a music score, or how to write something like a Csound score.  Countless musicians are like the Luddites in that they have been hostile towards integrating computers into music, and the few who do use them will only use them as a background instrument.  A lot of this close-mindedness is similar to how people are raised in religion: they are taught a certain religion at a young age and keep the religion their parents raised them in despite it being very archaic, contradictory, and nonsensical.  As a joke I often refer to composers who are resistant to 20th/21st century technology as Amish composers.  The term ‘Amish composer’ could also refer to composers who write music that sounds like it was written in a previous era.  The parallels between the Amish and most composers are, sadly, very close in many ways.

Most of the instrumental music that is written uses conventional instruments like the piano, violin, trumpet, saxophone, etc.  You hardly hear composers writing for other acoustical instruments like the banjo, jaw harp, or sitar.  One of the big reasons for this is that composers have performers at their disposal.  A performer, ensemble, etc. commissions a composer to write something, and the composer then writes for that medium.  Many composers have successfully made livings by appeasing performers.  Orchestras in places like Germany are subsidized by the government so that this 19th century medium can prosper.  It’s as if the Luddites had gained power and given money to politicians so they could keep practicing their trades on government subsidies.

Serious music seems to be an area that is uniquely resistant to technology.  When I had art class back in elementary school we were encouraged to experiment, and we did things like make collages, drawings, and paintings.  In fifth or sixth grade we learned to use a drawing program on the Apple II computer and tried to make an image of a castle.  In high school we were taught to use paint programs like Photoshop.  In music it was a different story.  In elementary school, high school, and even my first two years of college, all we did was perform music.  There weren’t even any computers in the classrooms!  Of course computers were less common in the 1990s, but we never got to use them the way we did with art or even writing.  Music on computers is mostly associated with commercial music, while music on antique instruments is associated with art music.  Many of the colleges around the United States have very small electronic music programs and may only have one professor on staff.  There are probably many similarities in other developed countries as well.

Xenakis pioneered the UPIC, an electronic instrument that could be used by children, yet to my recollection the descendants of that program are hardly ever exposed to children in elementary school music programs.  Elementary schools that still have music programs (some schools don’t have any, thanks to budget cuts) ought to offer children the opportunity to learn a simple piece of music software like Audacity, the same way they teach children to draw using drawing programs or teach them to sing or play an instrument.  Children should be given the chance to do something like record themselves with a program like Audacity and then learn how to speed up a sound, or take a recording of someone giving a speech and distort it, along with many other basic techniques.  They should learn about the basic building blocks of sound, like the sine wave, white noise, and the sawtooth wave.  In college, music students pursuing a degree should be required to take not one but several classes covering at least the very basics of working with programs like Csound or Max/MSP.
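To give a sense of how little machinery those building blocks actually require, here is a minimal sketch in Python (my own choice for illustration; the essay only names tools like Audacity, Csound, and Max/MSP) that generates one second each of a sine wave, white noise, and a sawtooth wave and writes them to WAV files that can be opened in any sound editor.  The file names and the 440 Hz pitch are arbitrary, and the sketch assumes NumPy plus the standard library.

# Minimal sketch: one second each of a sine wave, white noise, and a
# sawtooth wave, written as 16-bit mono WAV files.
# Assumes NumPy; file names and the 440 Hz pitch are arbitrary choices.
import wave
import numpy as np

RATE = 44100                     # samples per second
t = np.arange(RATE) / RATE       # one second of sample times

sine = np.sin(2 * np.pi * 440.0 * t)            # 440 Hz sine wave
noise = np.random.uniform(-1.0, 1.0, t.shape)   # white noise
saw = 2.0 * (440.0 * t % 1.0) - 1.0             # 440 Hz sawtooth wave

def write_wav(name, samples):
    # Write a mono float signal in [-1, 1] as 16-bit PCM.
    data = (samples * 32767).astype(np.int16)
    with wave.open(name, "wb") as f:
        f.setnchannels(1)
        f.setsampwidth(2)
        f.setframerate(RATE)
        f.writeframes(data.tobytes())

write_wav("sine.wav", sine)
write_wav("noise.wav", noise)
write_wav("saw.wav", saw)

A child who can run something like this, then load the three files into Audacity and stretch, reverse, or layer them, has already touched more of the raw material of sound than most conservatory curricula ever ask of a composition student.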

We’re not living in the fifties or sixties, when one had to pay $200 an hour ($200 in the money of that time!) just to use a computer, as Max Mathews did.  Many of us are lucky not to be old enough to remember what Max Mathews had to experience on a day-to-day basis.  When Ligeti was working on the three electronic pieces he wrote during the late 50s he also went through a great deal and had to have other people help him.  It’s no surprise that he eventually abandoned electronic music altogether.  His Pièce électronique no. 3 had to wait 40 years to be fully realized because the technology of the time was too primitive.  Today things have changed greatly for the better: one can simply download free programs that are far superior to what Ligeti was forced to work with.  Very few people would ever want to go through all of the intense labor just to write a short 3’15” piece, as Stockhausen did with his work Etude back in 1952.  If Stockhausen were still around and writing the same piece it would be a lot easier for him (although still not easy overall).  When he worked on it back in 1952 he was only able to use the studio once a week, for only an hour.  Today he could just download Audacity off the Internet for free, grab a $5 microphone (which I don’t recommend!) from his local computer store, and start recording the prepared low piano strings that he used in the piece.  He wouldn’t need a separate machine for transposing notes, because a free downloadable program like Audacity already includes that, plus more than what Stockhausen had at the time.  He also wouldn’t have to physically cut the tape and measure it carefully with a ruler.  Instead he could just move sound objects around on the computer screen and measure the sound to the millisecond.  There is no worry about destroying the sound object on the computer.  Of course one has to make sure to save one’s work and, for safety, make a copy of the sound files in case the computer or flash drive dies.
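To make the contrast with tape splicing concrete, here is another minimal Python sketch of the kind of operation that once needed a dedicated transposition machine: shifting a recording up an octave simply by resampling it at double speed, the digital equivalent of running the tape faster.  The input file name is a placeholder, not a file from this essay, and the sketch assumes a mono 16-bit WAV recording.

# Minimal sketch: transpose a mono 16-bit WAV file up an octave by
# playing its samples back at double speed (resampling to half length),
# the digital counterpart of speeding up a tape machine.
# "input.wav" is a placeholder name; SPEED = 0.5 would transpose down.
import wave
import numpy as np

SPEED = 2.0

with wave.open("input.wav", "rb") as f:
    rate = f.getframerate()
    samples = np.frombuffer(f.readframes(f.getnframes()), dtype=np.int16)

# Take every SPEED-th sample position, interpolating between neighbours.
positions = np.arange(0, len(samples), SPEED)
shifted = np.interp(positions, np.arange(len(samples)), samples).astype(np.int16)

with wave.open("output.wav", "wb") as f:
    f.setnchannels(1)
    f.setsampwidth(2)
    f.setframerate(rate)
    f.writeframes(shifted.tobytes())

What took Stockhausen a week of rationed studio hours, razor blades, and a ruler is now a few lines anyone can run at home, and the original file is never harmed.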

A simple free program like Audacity, Pd, SuperCollider, or Csound is more powerful than an orchestra containing every acoustical instrument on the planet combined.  I haven’t even mentioned the programs that cost money.  The computer even has the power to emulate acoustical instruments, while acoustical instruments do a far worse job of emulating electronic music.  Composers who are resistant to writing electronic music will often attempt to make their instrumental music sound electronic, with results as awkward as trying to play an entire Mahler symphony on a single guitar.  I wouldn’t recommend using a computer to imitate acoustical instruments playing classical music unless the technology gets so advanced that you can’t tell the difference, or unless there aren’t enough resources.  Today the electronic piano seems to be replacing the acoustical piano in many areas of music.  The electronic piano is more convenient: it is portable, doesn’t need to be tuned, is cheaper, and so on.  Also, many if not most electronic pianos can imitate other keyboard instruments like the harpsichord and the organ.  Most people prefer the sound of a real organ, but I suppose it’s convenient to have something much lighter to move around.  After all, in the 1800s there weren’t always orchestras available, and so people like Liszt transcribed a lot of orchestral music for the piano.  Many people during that time had their first exposure to Beethoven’s 5th Symphony by hearing Liszt play it on the piano.

Many times I’ll run across musicians who say that I should write for acoustical instruments.  I’ve had people talk about how electronic music is missing “a certain element”.  Many have rejected electronic music altogether; I often hear things like “I don’t give a shit about music that comes out of speakers” or “I don’t care about music that is recorded”.  Many people have told me they are not interested in electronic music concerts because they’re “not live” and they could just listen to a recording instead.  Although, to be frank, I haven’t found much live acoustical music to be any more rewarding either.  I’ve heard Stockhausen’s electronic music and his instrumental music played live with the composer present, and I personally found his electronic music more rewarding to hear live than his instrumental music.  Many will argue that I’m taking away the human element of music by not having performers.  Why don’t these same people who protest that electronic music isn’t live also protest that movies aren’t ‘live’?  Aren’t movies dehumanizing everything as well?  Why don’t they say, “Well, I can just watch the movie at home, so I don’t need to go to a movie theater”?  Some will complain that electronic music concerts lack a visual element.  In the pop music scene many of today’s pop stars lip-synch concerts and people still enjoy them.  I’m not necessarily against lip-synching (as long as the performers don’t lie about it).  Pop music today is so heavily computer-edited that the computer is needed for nearly everything, producing sounds that would be impossible to get in real life.  Movie directors are doing the same thing, only with visuals.  Many of the effects you hear in mainstream pop music, for example, would be impossible to replicate live unless you were using live electronic programs.  An electronic musician can just go on stage with a laptop.  Does it really make that much of a difference whether the person is pressing a lot of buttons or not pressing anything at all?  Seeing a saxophone player’s face turn red on stage while they press buttons may seem like fun to some, but to many it’s like visual masturbation.  Someone dancing, acting, or doing gymnastics would probably be more appealing visually to most people than watching a person press buttons on a musical instrument.  I’m not going to do a survey, but my guess is that most people would much rather hear music with no visual element than watch a video of a person playing with no sound coming out.

I’m not in favor of throwing away acoustical instruments completely.  The harpsichord is a far less important instrument today than it was during the Baroque era, but pieces of some importance have still been written for it, like Ligeti’s Continuum.  I do think that electronics need to play a far more crucial role in today’s contemporary music scene, since acoustical instruments are very old-fashioned and limited and cannot play the role they played when they were new.  Electronics have a lot more to offer in continuing the music tradition, with the unlimited range of sound sources one can draw on, and computer technology is becoming more accessible, cheaper, and better, and is often free if you have a computer handy.  I think that many contemporary music institutions are doing a horrific job as far as technology goes.  Composers in college are not taught to work more with electronic programs, even though writing music on a multitrack, and writing electronic music in general, is an entirely different process from writing out scores for conventional antique instruments.  All too often electronic music is thought of as the techno heard in car commercials.  Many professors did not have much opportunity to learn this when they were in college, and so they spread their technological ignorance to the next generation, as I have already witnessed (even among people who were born in the 1990s).

One of the big problems that electronic music composers face today is that many have to work alone in the medium.  That can be a real problem for some.  When composers write for acoustical instruments they have performers ready to play their works.  A performer will spend hours and hours trying to master someone’s music.  Composers often write very demanding scores, and it may take the performers six months or possibly years to learn to play the music.  Composers who write electronic music often do not have sound engineers or anyone like that at their disposal, ready to help them.  There is hardly a network for them compared to what composers for the orchestra have.  Some electronic music composers hate working alone, miss the whole process of interaction, and have turned back to old-fashioned musical instruments.  I think there need to be bigger and better networks.  Who knows?  Maybe in the future someone will commission an electronic composer to write a piece using their favorite parrot as the main sound source.  There are already many collaborations in electronic music going on.  There are laptop ensembles, duos, etc.  We have yet to see more of what the future will hold.