Expanding the Fixed Electroacoustic Medium

Since ancient times people have worked together in art media.  In ancient Greek theater there were several actors, a chorus, and an orchestra; a great playwright like Sophocles had many people performing in his plays.  Throughout the history of Western classical music you will find a single composer writing out parts for performers, whether for a duo, trio, quartet, band, chorus, or orchestra.  The composer may be able to play a few instruments, but they will generally write for instruments they don't know how to play.  A composer who writes for the orchestra doesn't need to know how to play the flute; they only need an idea of the limitations of the flute and the flutist.

Since the 20th century we have seen the rise of the movie theater.  The early silent filmmaker D. W. Griffith began his career as a playwright before switching to film, and Jean Cocteau likewise started out writing plays before creating films.  The analogy is clear: movies came out of plays, and today you can find many movies based on plays.  While plays are still around, most people attend movies rather than plays.  One obvious advantage a movie has over a play is that filmmakers get many takes to get a scene right.  Movies can also portray events that are nearly impossible to stage; films like Metropolis or 2001: A Space Odyssey could hardly exist as plays, and with CGI there is almost no limit to what directors can come up with.  Of course, like plays, movies need crews.  Stanley Kubrick was not alone when he created 2001: A Space Odyssey; the film was widely known for its stunning special effects, but it was Douglas Trumbull who created them, and many other people worked with Kubrick on the film.

There are countless examples of people working together to create a film; I am only scratching the surface here.  One classic film made on a very limited budget of only $7,000, with a much smaller crew than most films, is Shane Carruth's Primer.  Carruth was perhaps in a low-budget situation similar to that of most electroacoustic composers today: he was not only the director but also the producer, writer, star, composer, and editor of Primer, his first feature film.  It is common for electroacoustic composers to work this way, the main difference being that Carruth's film has a few actors in it.  In his next feature, Upstream Color, you will find more people working on the film; judging from the film itself, my guess is that its budget was bigger.  He is still the director, producer, writer, star, composer, and editor, but this time there are more producers, another editor, and more actors.  The director hasn't become lazier; he has gained the resources to add more people to his films.

The analogy I drew about movies can also be made between instrumental music and electronic music: electronic music came out of instrumental music the way film came out of plays.  The first fixed electroacoustic music appeared in the 1940s, when composers began creating music on tape, and many of the early fixed electronic works of the 1950s had several people working on them.  Karlheinz Stockhausen and György Ligeti, for example, both had Gottfried Michael Koenig helping them with their pieces.  Collaborators were very common throughout the '50s and '60s, but became less common later on.  Stockhausen, of course, was well funded enough that he always had collaborators at his disposal.

Today most electroacoustic composers work entirely by themselves.  Most of the collaboration in electronic music happens in live electronics; laptop orchestras and duos are very common.  It is no longer common to see collaboration in fixed electroacoustic music as it was in the 1950s and 1960s.  Today electroacoustic composers very often work at home, and if they work in a studio at their college, they will usually be working on their pieces alone.  While I think it is necessary for electroacoustic composers to be able to create music by themselves, I do think a crew can do a lot.

As I mentioned before, composers who write for the orchestra don't need to know how to play every instrument in it to write an orchestral piece.  Electroacoustic composers, by contrast, are limited to what they themselves are capable of.  As I said before, Kubrick was not the main special effects expert on 2001; it was Trumbull who created the effects, though Kubrick directed them.  I think there is a lot of potential in electroacoustic composers having a crew, where one person masters the music, another creates special effects, and so on.

The proposal I would like to make is to have a crew of people working on an electroacoustic piece, much as movies have crews working on a movie.  Instrumental composers already have a crew of sorts: a composer writing for an ensemble will meet with the performers, the performers will play the piece, and the composer may correct the performers or make corrections in the score.  I propose something similar.  The roles in an electroacoustic music crew would resemble those of a movie crew: the composer would be like the director, and there might be people doing sound design, engineering, performing, and so on.  Today the crew can meet online through something like Skype or Google Hangouts, and the files used in the piece can all be uploaded to a cloud drive so that everyone working on it has access.

For myself, I have mostly written pieces on my own, but one of my future goals is to have people collaborate with me on future works.  There is one safe project I have in mind that is entirely possible: an electroacoustic transcription of Ligeti's Atmosphères.  I am usually against turning instrumental works into electroacoustic music, but this piece has more potential than most, because it was directly inspired by electroacoustic music and written a few years after Ligeti's time in the studio.  There are challenges to the project.  How do we convert a score into an electroacoustic work?  How many people are needed for such a project?  What programs will be used to realize it?

If this project is done right, I think it will bring a lot of publicity, because it is a piece by a famous composer rather than an unknown one, and it will show the potential of using a crew in electroacoustic music.  It doesn't have to be the first project, but I think the electroacoustic medium will have more potential when more people work on a piece, just as many people work on a film.

The Age of Computer Music

Innovation has happened throughout human history, bringing major changes over time.  In Japan the samurai class eventually died out thanks to the gun.  The horse and buggy was a common sight for thousands of years before being replaced by the car.  The steam locomotive was used throughout most of railroad history, but during the second half of the 20th century it was replaced by the diesel.  People who traveled from Europe to the Americas once had to use boats and spend several weeks crossing, but now almost everyone flies.  The typewriter was a common tool through much of the first three quarters of the 20th century, only to be replaced by the computer.

The same thing happened in the music world.  In the past, composers often adapted to new technology more readily than they do today.  The first forte-piano was built around 1698, but it wasn't until the second half of the 18th century that it began to replace the harpsichord.  J. S. Bach wrote mostly for the harpsichord, but towards the end of his life he wrote The Musical Offering, whose theme he first improvised on one of Frederick the Great's forte-pianos.  For centuries composers had less of a problem adapting to the technology of their time than they do today: Mozart, C. P. E. Bach, Haydn, and others all started off with the harpsichord but later in life were writing for the forte-piano.

Today it is much harder for composers living in the 21st century to adapt to 20th- and 21st-century technology, because they are left ill-equipped by the radical difference between 19th-century and modern technology.  Most composers today, including those currently in their 20s, would rather stick with what their teachers know best, the 19th-century European instruments, instead of embracing the computer technology they grew up around and that is embraced in so many other fields.

The piano played a crucial role in carrying music from the Classical era through the Romantic era and into the Impressionist era.  We would not have brilliant piano masterpieces from composers like Chopin and Debussy if they had not had access to, and taken advantage of, the technology of their day.  Liszt was enormously popular in his youth, and audiences were stunned in part because the piano was still a relatively new instrument, having only recently developed out of its predecessor, the forte-piano.  Today there is less of this fascination in the art music world.  The Donaueschingen Festival and many other festivals that label themselves 'new' music festivals play hardly any electronic music, which I find absurd, given that our age and our technology are so radically different from those of the early 20th century and earlier.

Most professional composers today started music at a young age.  Many of them began by playing the flute, piano, or guitar as children and played in a band, choir, or orchestra at school or sometimes at church.  During their time in school they learn nothing about working with synthesizers or creating music on the computer.  Today there are simple, basic programs for electronic music, like Csound or Audacity, that can be downloaded for free from the Internet; yet for the most part there are no required courses on these programs in most music colleges.

Most of the instrumental music written today uses conventional instruments like the piano, violin, trumpet, and saxophone.  You hardly hear composers writing for other acoustic instruments like the banjo, jaw harp, or sitar.  One big reason is that composers write for the performers at their disposal: a performer or ensemble commissions a composer, and the composer writes for that medium.  It is much less common to find composers writing for the instruments I just mentioned, which I find strange.

In college, composers can get away without knowing anything about electronic music, let alone incorporating it into their own work.  Part of the problem is that many of these composers have spent countless hours reading scores and learning to write for certain instruments, and most are never required to learn the acoustics and science behind the sounds.  Adjusting is difficult, because writing for computers demands a completely different mindset: working with timelines instead of staves is an entirely different process, and learning a sound editor like Pro Tools or Adobe Audition is nothing like learning Sibelius.  Countless musicians remain hostile to integrating computers into music, and the few who do use them often treat the computer as a background instrument.  I once read an interview in which a composer was criticized for writing computer music because it was supposedly a tool of capitalism.  A lot of this close-mindedness resembles how people are raised in a religion: taught a certain faith at a young age, they try to keep it relevant today despite it being archaic, contradictory, nonsensical, and bigoted.

Serious music seems uniquely resistant to technology.  In my elementary school art classes we were encouraged to experiment: we made collages, drawings, paintings, and more.  In fifth or sixth grade we learned to use a drawing program on the Apple II computers, and in high school we were taught paint programs like Photoshop.  In music it was a different story.  In elementary school, high school, and even my first two years of college, all we did was perform music.  There were no computers involved; no one taught us to record something into a program like Adobe Audition and edit the recording.  Of course computers were less common in the 1990s, but we never used them the way we did in art or even writing.  Music on computers is mostly associated with commercial music, while music on acoustic instruments is associated with art music.  Many colleges around the United States have very small electronic music programs, often with only one professor on staff.  I went to Queens College, which had one electronic music professor; when he retired they shut down the electronic studio, and they no longer offer any classes in electronic music.  The story is probably similar in many other places.

Xenakis pioneered the UPIC, an electronic instrument that even children could use, yet to my recollection its descendants are hardly ever shown to children in elementary school music programs.  Elementary schools that still have music programs (some have none, thanks to budget cuts) ought to offer children the chance to learn simple music software like Audacity, the same way they teach children to draw with drawing programs or teach them to sing or play an instrument.  Children should be able to record themselves with a program like Audacity and then learn how to speed up a sound, or take a recording of someone giving a speech, distort it, and apply other basic techniques.  They should learn the basic building blocks of sound: the sine wave, white noise, the sawtooth wave, and so on.  In college, music students pursuing a degree should be required to take not one but several classes covering at least the basics of programs like Csound or Max/MSP.
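To give a sense of how little is needed to demonstrate those building blocks, here is a minimal sketch in Python with numpy (my own choice of tools; any language with a WAV writer would do).  The 440 Hz frequency, the two-second duration, and the file names are arbitrary values of mine, not anything prescribed by the programs mentioned above:

```python
import wave
import numpy as np

SR = 44100                                  # sample rate in Hz
DUR = 2.0                                   # duration in seconds
t = np.linspace(0, DUR, int(SR * DUR), endpoint=False)

sine = np.sin(2 * np.pi * 440 * t)          # 440 Hz sine wave
noise = np.random.uniform(-1, 1, t.shape)   # white noise
saw = 2 * ((440 * t) % 1.0) - 1             # 440 Hz sawtooth wave

def write_wav(name, signal):
    """Write a mono float signal in the range -1..1 as a 16-bit WAV file."""
    pcm = (signal * 32767).astype(np.int16)
    with wave.open(name, "wb") as f:
        f.setnchannels(1)
        f.setsampwidth(2)
        f.setframerate(SR)
        f.writeframes(pcm.tobytes())

for name, sig in [("sine.wav", sine), ("noise.wav", noise), ("saw.wav", saw)]:
    write_wav(name, sig)
```

A child could load the three resulting files into Audacity, listen, and compare the waveforms visually, which is exactly the kind of exercise I have in mind.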

We’re not living in the fifties or sixties where one has to pay $200 an hour ($200 during that time that is!) just to use a computer as Max Matthews did.  Many of us are lucky to not be old enough to remember what Max Matthews had to experience on a day to day basis.  When Ligeti was working on his three electronic pieces that he wrote during the late 50s he also experienced a lot of trouble and had to have other people help him.  It’s no surprise that he eventually abandoned electronic music altogether.   His Pièce électronique no. 3 had to wait 40 years in order to be fully realized because the technology of the time was too primitive at the time.  Today things have changed rapidly for the better where one can just download free programs that are far superior than what Ligeti was forced to work with.  Very few people would ever want to go through all of the intense labor just to write a short simple 3’15” minute piece as Stockhausen did with his work Etude back in 1952.  If Stockhausen were still around and writing the same piece it would be a lot easier for him (although still not easy overall).  When he worked on it back in 1952 he only was able to use the studio once a week for only an hour.  Today he can just go on his computer and download programs.  He doesn’t need to have a separate machine for transposing notes because he can just use software on a computer.program.  He also doesn’t have to physically cut the tape and measure the tape carefully with a ruler.  Instead he can just move sound objects on the computer screen and measure the sound by the milliseconds.  There is no worry about destroying the sound object on the computer.  Of course one has to make sure that they save their work, and for safety make a copy of the sound files just in case if the hardware dies or if they experience a problem with the cloud drive.

Simple free programs like Audacity, Pd, SuperCollider, and Csound are, in a sense, more powerful than an orchestra holding every acoustic instrument on the planet combined, and you can do more with apps on your smartphone than with an entire orchestra; I haven't even begun to mention the programs that cost money.  The computer also has the power to emulate acoustic instruments.  My college didn't have a harpsichord, so they used a synthesizer imitating one instead.  It wasn't ideal, but it sounded better than harpsichord music played on a modern piano, as I was accustomed to hearing there.  I wouldn't recommend using computers to imitate acoustic instruments in classical music unless the technology becomes so advanced that you can't tell the difference; it is better to use acoustic instruments to get a feel for what the music was like in the past.  But there are techniques that acoustic instruments simply cannot manage and a computer can.  On a computer you can make a piano sound crescendo instead of decay, and while you can't get a trumpet, organ, or clarinet to glissando naturally the way a trombone can, a computer or synthesizer imitating those instruments can produce those effects.
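The reversed piano envelope I just mentioned amounts to nothing more than multiplying the recorded note by a rising curve.  Here is a minimal sketch, again assuming a mono numpy array of samples; the exponent shaping the swell is an arbitrary choice of mine:

```python
import numpy as np

def crescendo(note, curve=2.0):
    """Impose a rising envelope on a recorded note, so a naturally
    decaying sound (like a piano tone) swells instead of fading."""
    ramp = np.linspace(0.0, 1.0, len(note)) ** curve  # slow start, strong finish
    return note * ramp
```

The result is a sound no pianist can produce at the keyboard, which is precisely the point: the computer opens up effects the instrument itself forbids.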

I’ve had times where people will talk about how electronic music is missing “a certain element”.  Many have rejected electronic music altogether where I’d often hear things like “I don’t give a shit about music that comes out of speakers” or “I don’t care about music that is recorded”.  Many people have told me they are not interested in electronic music concerts because it’s “not live” and they can hear a recording instead.  To be frank I haven’t found much acoustical music live to be any more rewarding either.  I’ve heard Stockhausen’s electronic music and instrumental music played live with the composer present and I personally found his electronic music more rewarding to hear live than his instrumental music.  The electronic music was able to fill up the room and it was a different experience hearing it with the lights off as opposed to watching a performer.   Many will argue that I’m taking away the human element of music by not having performers.  Why don’t these same people who protest about electronic music not being live also protest about movies not being ‘live’ as well?  Aren’t movies dehumanizing everything as well?   Why don’t they say “well, I can just watch the movie at home and I don’t need to go to a movie theater”?  One can record musicians playing on acoustical instruments and then do a lot of editing to the performance as is the case with movies using actors.  Some will complain that electronic music concerts lack a visual element.  Many of today’s pop stars lip synch concerts and people still enjoy their concerts.  Pop music today is heavily computer edited that it is necessary for the computer to edit nearly everything to get certain sounds that would be impossible to get in real life.  Movie directors are doing this all the time visuals.  Many of the effects that you hear in mainstream pop music for example would be impossible to replicate unless if you are using some live electronic programs.  An electronic musician can just go on stage with their laptop.  Does it really make that much of a difference if the person is pressing buttons a lot or not pressing anything at all?  Seeing a saxophone player’s face turn all red on stage while they are pressing buttons may seem like fun to some, but to many it’s not a big deal, and that the music is more important.  Someone dancing, acting, or doing gymnastics would probably be more appealing visually to most people than watching a person pressing buttons on a musical instrument.

I’m not in favor of just throwing away acoustical instruments altogether. The harpsichord is a far important instrument today than it was during the Baroque era, but even during the 20th century you still have great pieces like Ligeti’s Continuum.  There are some great acoustical works written in the 2010s and there will be some written in the 2020s.  I do think that electronics need to play a far more crucial role in today’s contemporary music scene since acoustical instruments are very outdated and limited as I explained earlier.  Electronics have a lot more to offer in continuing the rich art music tradition with the unlimited amount of sound sources, editing, etc. that was once impossible.  Computer technology is getting more accessible, cheaper, better, and many times free if you have a computer handy.  You don’t even need to own a computer anymore and can just use your tablet or smart phone to write music, if you don’t mind writing music on the tiny screens.  I think that many contemporary music institutions are right now doing a very horrific job as far as technology goes. Composers in college are not taught to work more with electronic programs since writing music on a multitrack is not taken as seriously as writing out a score.  Often at times electronic music is seen as techno music found on car commercials or music found at discos.  Many professors did not really have much of the opportunity when they were going to college and so they spread their technological ignorance to the next generation as I’ve already witnessed (even among people who were born in the 1990s).

One of the big problems electronic music composers face today is that many have to work in the medium alone, which can be a real problem for some.  When a composer writes for acoustic instruments, performers are ready to play the work; a performer will spend hours and hours mastering someone's music, and for very demanding scores it may take six months or even a year before the piece can be played.  Composers who write electronic music often have no sound engineers or anyone else at their disposal ready to help, and there is hardly a network for them compared to what orchestral composers have.  Some electronic music composers hate working alone, miss the whole interaction process, and have turned back to old-fashioned instruments.  I think there need to be bigger and better networks.  Who knows; maybe in the future someone will commission an electronic composer to write a piece using the sounds of their favorite cat as the main sound source.  There are already many collaborations in electronic music: laptop ensembles, duos, and so on.  We have yet to see what more the future holds.