Innovation has happened throughout human history, bringing major changes over time. In Japan the samurai class eventually became extinct thanks to the gun. Horses and buggies were a common sight for thousands of years, but were later replaced by cars. The steam locomotive was used throughout most of railroad history, but during the second half of the 20th century it started to be replaced by the diesel. In the past, people who traveled from places like Europe to the Americas had to use boats and it would take them several weeks to cross continents, but now almost everyone uses planes. The typewriter was a very common tool through much of the first three quarters of the 20th century, only to be replaced by the computer.
The same thing happened in the music world. In the past, composers adapted to new technology much more readily than they do today, at least in some regards. The first fortepiano was made in 1698, but it wasn’t until the second half of the 18th century that it started to replace the harpsichord. J. S. Bach wrote mostly for the harpsichord, but towards the end of his life he improvised on Frederick the Great’s new fortepianos during the visit that produced The Musical Offering. For centuries composers didn’t have as much of a problem adapting to the technology of their time as they do today. Composers such as Mozart, C. P. E. Bach, and Haydn all started off on the harpsichord, but later in life were writing for the fortepiano.
Today it is a lot harder for composers living in the 21st century to adapt to 20th/21st-century technology; the radical difference between 19th-century instruments and modern electronics leaves them ill-equipped. Most composers today, including those currently in their 20s, would much rather stick with what their teachers know best (19th-century European instruments) than embrace the computer technology they grew up around, which is embraced in so many other fields.
The piano played a crucial role in carrying music from the Classical era through the Romantic era and into the Impressionist era. We would not have brilliant piano masterpieces from composers like Chopin and Debussy if they had not had access to, and taken advantage of, the technology of their day. Liszt himself was hugely popular when he was young, and audiences were stunned partly because the piano was still a new instrument (it had only recently grown out of its predecessor, the fortepiano). Today there is far less of this fascination in the art music world. The Donaueschingen Festival and many other festivals that label themselves ‘new’ music festivals hardly play any electronic music, which I find absurd, especially given that our age, and our technology, is so radically different from that of the early 20th century and earlier.
Many if not most professional composers today started music at a young age. Many begin by playing the flute, piano, or guitar as children and join the band, choir, or orchestra at school or sometimes at a church. During their time in school they learn nothing about working with synthesizers or creating music on a computer. Today there are simple, basic programs usable for electronic music, like Csound or Audacity, that can be downloaded for free from the Internet. Yet for the most part there are no required courses on these programs in most music colleges.
Most of the instrumental music written today uses conventional instruments like the piano, violin, trumpet, and saxophone. You hardly hear composers writing for other acoustic instruments like the banjo, jaw harp, or sitar. One of the big reasons is that composers write for the performers at their disposal: a performer or ensemble commissions a composer, and the composer writes for that medium. It’s less common to find them writing for the instruments I just mentioned, which I find strange.
In college, composers can get away without knowing anything about electronic music, let alone incorporating it into their own work. Part of the problem is that many of these composers have spent countless hours reading scores and learning to write for certain instruments, but are never required to learn the acoustics and science behind the sounds. Adjusting is very difficult because they have to completely change their thinking in order to write for computers. Working with timelines instead of staves is an entirely different process, and many have to learn a sound editor like Pro Tools or Adobe Audition, which is entirely different from something like Sibelius. Even today countless musicians remain hostile to integrating computers into music, and the few who do use them often treat the computer only as a background instrument. I’ve read one interview where a composer was criticized for writing computer music because it was supposedly a tool of capitalism. A lot of this close-mindedness resembles how people are raised in a religion: taught a certain faith at a young age, they try to keep it relevant today despite it being archaic, contradictory, nonsensical, and bigoted.
Serious music seems uniquely resistant to technology. In art class back in elementary school we were encouraged to experiment: we made collages, drawings, paintings, and so on. In fifth or sixth grade we learned to use a drawing program on the Apple II computers, and in high school we were taught image programs like Photoshop. In music it was a different story. In elementary school, high school, and even my first two years of college, all we did was perform music. There were no computers involved. Nobody taught us to record something in a program like Adobe Audition and edit the recording. Of course computers were less common in the 1990s, but we never got to use them the way we did in art or even writing. Music made on computers is mostly associated with commercial music, while music on acoustic instruments is associated with art music. Many colleges around the United States have very small electronic music programs, often with only one professor on staff. I went to Queens College, which had one electronic music professor; when he retired they shut down the electronic studio and no longer offer any classes in electronic music. The story is probably similar in many other places.
Xenakis was already pioneering the UPIC, an electronic instrument that children can use, yet its descendants are hardly ever shown to children in elementary school music programs, to my recollection. Elementary schools that still have music programs (some have none, thanks to budget cuts) ought to give children the opportunity to learn simple music software like Audacity, the same way they teach children to draw with drawing programs or to sing or play an instrument. Children should get the chance to record themselves with a program like Audacity and then learn how to speed up a sound, or take a recording of someone giving a speech and distort it, along with many other basic techniques. They should learn about the basic building blocks of sound: the sine wave, white noise, the sawtooth wave, and so on. In college, music students pursuing a degree should be required to take not one but several classes covering at least the very basics of working with programs like Csound or Max/MSP.
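To show how little machinery those basic building blocks actually require, here is a minimal sketch (assuming only Python with numpy, not any particular teaching software) of generating a sine wave, a sawtooth wave, and white noise as raw sample arrays:

```python
import numpy as np

SR = 44100  # sample rate in samples per second (CD quality)

def sine(freq, dur):
    """Pure sine tone: a single frequency, the simplest building block."""
    t = np.arange(int(SR * dur)) / SR
    return np.sin(2 * np.pi * freq * t)

def sawtooth(freq, dur):
    """Sawtooth wave: bright and buzzy, rich in harmonics."""
    t = np.arange(int(SR * dur)) / SR
    return 2 * (t * freq - np.floor(0.5 + t * freq))

def white_noise(dur, seed=0):
    """White noise: roughly equal energy at all frequencies."""
    rng = np.random.default_rng(seed)
    return rng.uniform(-1.0, 1.0, int(SR * dur))

# One second of each, ready to mix or write out as a WAV file.
tone = sine(440, 1.0)
saw = sawtooth(440, 1.0)
noise = white_noise(1.0)
```

Arrays like these can be written to disk with any WAV library and then opened, sped up, or distorted in an editor like Audacity, which is exactly the kind of exercise described above.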
We’re not living in the fifties or sixties, when one had to pay $200 an hour (in the money of that time!) just to use a computer, as Max Mathews did. Many of us are lucky not to be old enough to remember what Mathews had to deal with on a day-to-day basis. When Ligeti was working on his three electronic pieces in the late 1950s he also ran into a lot of trouble and needed other people’s help; it’s no surprise that he eventually abandoned electronic music altogether. His Pièce électronique no. 3 had to wait 40 years to be fully realized because the technology of the time was too primitive. Things have since changed rapidly for the better: one can simply download free programs far superior to what Ligeti was forced to work with. Very few people today would want to go through the intense labor it took Stockhausen to write a short, simple 3’15” piece like his Etude back in 1952. If Stockhausen were still around and writing the same piece it would be a lot easier for him (though still not easy overall). When he worked on it in 1952 he could only use the studio once a week, for a single hour. Today he could just download programs to his computer. He wouldn’t need a separate machine for transposing notes, because software can do it. He also wouldn’t have to physically cut the tape and measure it carefully with a ruler; instead he could move sound objects around on screen and measure them to the millisecond. There is no worry about destroying a sound object on the computer. Of course one has to save one’s work and, for safety, keep copies of the sound files in case the hardware dies or the cloud drive has a problem.
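The classic tape operations mentioned above — cutting, transposing, splicing — reduce to a few lines of array code today. This is a toy sketch under my own assumptions (plain numpy arrays standing in for tape; the function names are mine, not from any real studio software):

```python
import numpy as np

SR = 44100  # sample rate in samples per second

def ms_to_samples(ms):
    """Tape had to be measured with a ruler; a millisecond is now just a sample count."""
    return int(SR * ms / 1000)

def cut(sound, start_ms, end_ms):
    """'Cutting the tape' becomes an array slice -- and it is non-destructive."""
    return sound[ms_to_samples(start_ms):ms_to_samples(end_ms)].copy()

def transpose(sound, ratio):
    """Crude tape-style transposition by resampling: pitch and duration change
    together (ratio 2.0 = up an octave, half as long), like speeding up a reel."""
    idx = np.arange(0, len(sound), ratio)
    return np.interp(idx, np.arange(len(sound)), sound)

def splice(*pieces):
    """Splicing segments end to end is just concatenation."""
    return np.concatenate(pieces)

# A one-second 440 Hz test tone standing in for any recorded sound.
t = np.arange(SR) / SR
tone = np.sin(2 * np.pi * 440 * t)
octave_up = transpose(tone, 2.0)              # half the length, octave higher
collage = splice(cut(tone, 0, 250), octave_up)  # first 250 ms, then the transposed copy
```

The original `tone` array is untouched throughout — the "no worry about destroying the sound object" point in practice.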
Simple free programs like Audacity, Pd, SuperCollider, and Csound can produce a wider range of sounds than an orchestra containing every acoustic instrument on the planet combined. You can do more with apps on your smartphone than with an entire orchestra, and I haven’t even begun to mention the programs that cost money. The computer can also emulate acoustic instruments. My college didn’t have a harpsichord, so they used a synthesizer imitating one instead. It’s not ideal, but it sounded better than hearing harpsichord music played on a modern piano, as I was accustomed to hearing there. I wouldn’t recommend using computers to imitate acoustic instruments in classical music unless the technology becomes so advanced that you can’t tell the difference; it’s best to use acoustic instruments to get a feel for what the music was like in the past. But there are techniques the piano, for example, is not capable of that require a computer: on a computer you can make a piano sound crescendo instead of decay. You can’t get a trumpet, organ, or clarinet to make glissandos naturally the way a trombone can, but a computer or synthesizer can imitate those instruments and produce those effects.
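A continuous glissando with a rising envelope — the two effects just described that acoustic instruments cannot combine — takes only a few lines of synthesis code. A minimal sketch, assuming numpy and a bare sine oscillator rather than any instrument model:

```python
import numpy as np

SR = 44100  # sample rate in samples per second

def gliss_crescendo(f_start, f_end, dur):
    """A sine tone that slides smoothly in pitch (glissando) while its
    amplitude rises (crescendo) -- trivial for a computer, impossible for
    a struck piano string, whose sound can only decay."""
    n = int(SR * dur)
    freq = np.linspace(f_start, f_end, n)       # linear pitch slide
    phase = 2 * np.pi * np.cumsum(freq) / SR    # integrate frequency to get phase
    amp = np.linspace(0.0, 1.0, n)              # steadily rising envelope
    return amp * np.sin(phase)

# Two seconds sliding up an octave from A3 to A4 while swelling from silence.
x = gliss_crescendo(220.0, 440.0, 2.0)
```

The same cumulative-phase trick generalizes to any pitch curve, so a "trombone-style" glissando on an organ or clarinet timbre is just a different waveform fed the same sliding frequency.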
People often tell me that electronic music is missing “a certain element”. Many reject it altogether; I’d often hear things like “I don’t give a shit about music that comes out of speakers” or “I don’t care about music that is recorded”. Many people have told me they aren’t interested in electronic music concerts because the music is “not live” and they could just listen to a recording instead. To be frank, I haven’t found much live acoustic music any more rewarding. I’ve heard Stockhausen’s electronic and instrumental music played live with the composer present, and I personally found his electronic music more rewarding to hear live: it filled the room, and hearing it with the lights off was a different experience from watching a performer. Many will argue that by removing performers I’m taking away the human element of music. But why don’t the same people who protest that electronic music isn’t live also protest that movies aren’t ‘live’? Aren’t movies dehumanizing in the same way? Why don’t they say, “Well, I can just watch the movie at home, I don’t need to go to a movie theater”? One can record musicians playing acoustic instruments and then heavily edit the performance, just as movies do with actors. Some complain that electronic music concerts lack a visual element, yet many of today’s pop stars lip-sync their concerts and people still enjoy them. Pop music today is so heavily computer-edited that the computer is needed to produce sounds that would be impossible to get in real life, and movie directors do the same thing with visuals all the time. Many of the effects you hear in mainstream pop would be impossible to replicate on stage unless you were using live electronics. An electronic musician can simply go on stage with a laptop.
Does it really make much of a difference whether the person on stage is pressing a lot of buttons or not pressing anything at all? Watching a saxophone player’s face turn red while they press keys may seem like fun to some, but to many it’s not a big deal; the music matters more. Someone dancing, acting, or doing gymnastics would probably be more visually appealing to most people than a person pressing buttons on a musical instrument.
I’m not in favor of throwing away acoustic instruments altogether. The harpsichord is a far less important instrument today than it was during the Baroque era, yet even in the 20th century we got great pieces for it like Ligeti’s Continuum. There are some great acoustic works written in the 2010s and there will be more in the 2020s. But I do think electronics need to play a far more crucial role in today’s contemporary music scene, since acoustic instruments are, as I explained earlier, dated and limited. Electronics have far more to offer in continuing the rich art music tradition, with an unlimited range of sound sources and editing possibilities that were once impossible. Computer technology keeps getting more accessible, cheaper, better, and often free if you have a computer handy. You don’t even need to own a computer anymore; you can write music on a tablet or smartphone, if you don’t mind the tiny screens. I think many contemporary music institutions are doing a horrific job where technology is concerned. Composers in college aren’t taught to work with electronic programs, since writing music on a multitrack isn’t taken as seriously as writing out a score. Electronic music is often dismissed as the techno found in car commercials or at discos. Many professors never had much opportunity to learn this when they were in college, and so they pass their technological ignorance on to the next generation, as I’ve already witnessed (even among people born in the 1990s).
One of the big problems electronic music composers face today is that many have to work alone in the medium, which can be a real problem for some. When a composer writes for acoustic instruments, performers are ready to play their works; a performer will spend hours and hours mastering someone’s music, and with very demanding scores it may take six months or a year to learn. Composers who write electronic music often have no sound engineers or anyone else at their disposal ready to help. There is hardly a network for them compared to what orchestral composers have. Some electronic music composers hate working alone, miss the whole interactive process, and have turned back to old-fashioned instruments. I think there need to be bigger and better networks. Who knows: maybe in the future someone will commission an electronic composer to write a piece using the sounds of their favorite cat as the main sound source. There are already many collaborations in electronic music, including laptop ensembles and duos. We have yet to see what more the future holds.