Working in tech with a degree in music, I have ended up noticing some interesting connections between seemingly disparate disciplines. Recently, I realized that music theory could arguably be the world’s first computer language. In both cases, a clearly defined system of rules dictates how a machine is supposed to execute a pattern, with intermediaries (performers and instruments versus compilers and linkers) taking on the translation work and adding their own flair (and opportunities for errors).
"Music theory could arguably be the world’s first computer language."
The connection between music and computers is not a revolutionary observation; terms like “Kubernetes orchestration” already acknowledge a metaphorical parallel. But to me the connections run much deeper than top-line lingo, and understanding them can help non-computer-scientists like me deepen their understanding of both worlds.
Getting into the nerdy weeds for a minute, using western tonal music theory as the baseline (while acknowledging the wealth of awesome music that exists in other musical systems, both non-western and atonal), let’s use the classic “Hello World” program in C as an allegory.
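For anyone who hasn’t seen it, the canonical version looks something like this (the trailing comment shows one common way to compile and run it):

```c
#include <stdio.h>              /* the "include" directive: pull in the standard I/O library */

int main(void)
{
    printf("Hello, World!\n");  /* the one short "melody" this little piece plays */
    return 0;
}

/* Build and run: cc hello.c -o hello && ./hello */
```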
First, the compiler sees the “include” directive, perhaps akin to setting the scope of instrumentation - are we talking a full-blown symphonic rendition of a Mozart piano concerto, or a chamber rendition with only six instruments? The directive tells it to go read the file <stdio.h>, which is analogous to setting the musical “key” - say, D minor for some mysterious melodrama. The compiler then works with a linker to create a binary that can be executed on the computer; in orchestral terms, the conductor and musicians set up, pull out their scores, and prepare to execute what is written on the page on their instruments (or machines). Then, with the baton raised, they run the binary output file, filling the air with their version of the same harmonies that filled the concert hall of the Mehlgrube Casino in 1785 Vienna. What a marvel of human invention that such a thing can happen!
So, with standard headers and libraries as the “key,” there are melodies, harmonies, counterpoint, and key changes to be played, always within the parameters specified by the key and its standard progressions in the circle of fifths. The parameters give us the foundation from which we can expand and innovate, playing with what we can evoke at the edge of tonality, or what we can get machines to do.
Looking at the history of western music, there are some additional parallels that can shed light on modern software engineering innovation. In the late 17th century, the standardization of tuning, which provided consistently executed intervals on instruments, enabled a great leap forward in musical innovation. The harpsichord and clavier (two early keyboard instruments that, to the layman, resemble a modern piano) being “well-tempered,” with standard intervals between the notes, enabled Bach to write elaborate counterpoint that remains some of the most beautiful and ingenious western music 300 years later. It is not a coincidence that as the standardized tuning of instruments improved, so too did the complexity of classical music in this system - the modern orchestra was effectively born a few decades later in the early 18th century, reaching new heights by the time Mozart and Beethoven hit the scene.
The expansion of the orchestra with an increasing diversity of instruments only works because the instruments are playing within the same system - all of them inside the proverbial guardrails of “C” headers and libraries. There is a standardized way of interacting dictated by the sheet music, even between a tuba and the timpani. They interact within the same basic language, and even when their manifestations of sound are wildly different, they somehow follow the same “functional argument,” whether it’s a concerto, sonata, or symphony, with the help of the standard rules. With the standard “key” holding them together, the instruments create something bigger, more harmonious, and more innovative than any of them could produce on their own. So, with the building blocks of tonality in place, the expansion of the orchestra enabled musical innovation of all sorts at a scale that wasn’t imaginable during Bach’s era. Sound familiar?
Cloud-native architecture is effectively the nascent 18th-century orchestra, and we are now entering a rapid evolution in which new instruments, along with their dependencies, complex harmonies, and opportunities for error, are being added to the orchestra constantly. When a lone harpsichordist messed up Bach’s counterpoint, they could pretty easily tell where they’d gone wrong and fix it. Did they skip a beat? Hit the wrong note? It was probably obvious. But with a whole modern orchestra to troubleshoot, playing a bigger, more complex piece - say, a Rachmaninoff piano concerto - figuring out what went wrong is a much harder problem.
When dependencies are flowing between instruments across the orchestra and something gets off, it can be almost impossible to figure out what went wrong. While the melody is passed from oboe to flute to bassoon and then dropped back into a lush harmonic orchestration supported by the whole ensemble of 100 musicians playing at once, the opportunities for error are vast. If the oboe misses a beat while carrying the melody, it might be easy to pinpoint, but what if the cellos come in a beat late for the grandiose finale? The violas sitting next to them hear the late entrance and become confused. In a split second, they are off too. The violins follow, then the flutes, and suddenly the timpani is the only section keeping the beat while the rest of the orchestra descends into chaos. The conductor throws his baton and walks off the stage, arms up in exasperation…
Watching it unfold on stage is easy for humans to comprehend, but the exact same thing happens when something goes wrong in a modern cloud-native stack: one service misses a beat, followed by a cascade of failing dependencies, until everything comes to a painful halt. In the theater, if you’d walked in just after the conductor stormed off, how would you possibly figure out what happened? If the orchestra is still playing in chaos, chugging along with trumpets blaring over flutes in a wild cacophony, it could be just as hard to figure out where the root cause lies - and, oh, the pressure of figuring out where it started! Perhaps the orchestra could simply restart from the beginning of the concerto, but what if the cellos miss the same beat again? Was it an error in the sheet music, or is someone just having an off day? Or maybe the piece is simply too hard for the musicians to play perfectly, and the next time around the oboe is off, the bassoons and flutes fall next, and we are back to figuring out why we can’t make it through the beautiful concerto as Rachmaninoff wrote it…

It is just as difficult to pinpoint the source of the musical error as it is to pinpoint the true source of an error in a stack of microservices, with APIs and dependencies changing as quickly as chords change in an orchestral piece. And as the orchestra grows, with more instruments, more opportunities for mistakes, and grander, more complex harmonies that challenge the individual players’ ability to execute, the chances of failure are higher and the ability to pinpoint a source and fix it for the next run-through is that much harder.
"Cloud-native architecture is effectively the nascent 18th century orchestra, and we are now entering into a rapid evolution in which new instruments, along with their dependencies, complex harmonies, and opportunity for error, are being added to the orchestra constantly."
To take this analogy one step further, there is a reason that orchestras rehearse before performing for a live audience: rehearsal is a chance to run through the piece when the stakes are lower and work out the kinks before the big debut. Each individual player can practice how their execution of the written music holds up in the real world, when they’re swimming in a rich, complex sea of sound, allowing the conductor and the players to identify any common issues. They can find the points at which the orchestra keeps falling apart or the oboe keeps missing its entrance, so that by the time the performance rolls around, the concerto flows perfectly.
This is strikingly similar to running new code in staging before it launches fully into production, and just as with orchestras, the more one can simulate the performance environment before the big debut, the better the chances that it will go off without a hitch. A dress rehearsal in front of a small, friendly audience is basically the orchestral version of a canary deploy - just as one tests new code with a small percentage of live traffic to make sure it works in that environment, an orchestra may bring in a modest crowd so players can get their feet wet and perform under minor pressure, increasing their chances of a flawless performance when it matters most.
"It is just as difficult to pinpoint the source of the musical error as it is to pinpoint the true source of an error in a stack of microservices, with APIs and dependencies changing as quickly as chords change in an orchestral piece."
I hope you are as excited as I am to see how the cloud-native equivalent of a modern orchestra pans out. If it’s anything like Rachmaninoff’s Rhapsody on a Theme of Paganini, it will be worth the ride.