PATRICK RASHLEIGH:
The Double Revolution of the Theremin:
Musical Instrument Interface Innovation in the Ages of Analog and Digital Electricity
© 1998 by PATRICK RASHLEIGH. All rights reserved.
In the October 2, 1927 issue of The New York Times, tucked away in column 6 of the second section, there was an announcement of a new musical curiosity that had been publicly unveiled in Berlin the day before. The article was headlined "ETHER WAVE MUSIC AMAZES SAVANTS" and went on to describe the new invention of a young Russian professor, Leon Theremin of the Physicotechnical Institute of Leningrad. His device, described as a "box three and a half feet wide, two feet deep and three feet high [with a] short brass rod projected up from the top at the right side and a brass ring about 8 inches in diameter from the left side", was a bizarre new musical instrument that sounded like "a violin of extraordinary beauty and fullness of tone." The sound, however, was not the most startling feature of this invention; rather, it was the fact that the instrument was played without physical contact. "Assuming a slightly affected posture, ... [Prof. Theremin] merely gestured in space. Out of the loud-speaker of the familiar radio type came the familiar strains of the Scriabine Etude". The music of the new instrument was instantly dubbed "ether music" as notes seemed to be drawn magically from the air. (New York Times, Oct. 2, 1927)
Interviewed after the concert, Theremin predicted far-reaching consequences of his machine. "My apparatus frees the composer from the despotism of the twelve-note tempered piano scale", he claimed. "The composer can now construct a scale of the intervals desired. He can have intervals of thirteenths, if he wants them." The machine also apparently had near-infinite tonal possibilities, mimicking string, wind, and brass instruments with "absolute fidelity", and giving the composer a new palette of "literally thousands of tone colours." But apart from the new resources the instrument offered the composer and listener, it also offered a more intuitive, natural means of producing music. The ether music was "created with a simplicity and a directness matched only by singing. There is no keyboard to obtrude itself, no catgut, no bow, no pedal, nothing but simple expressive gestures of the hands." This instrument, Theremin claimed, would "[open] up an entirely new field in composition." (New York Times, Oct. 2, 1927)
Paradoxically, history would both surpass and fall far short of Theremin’s optimistic claims. His enthusiasm was a product of his age, a world swept up in significant new advances in electrical technology. For many today, his instrument - called originally the Thereminvox, but eventually shortened to simply The Theremin - signaled the birth of electronic music, and the starting point of 60 years of musical exploration unlike anything that had come before. In that sense, Theremin’s claims about the future impact of his instrument were accurate. What he did not foresee was the quick fall of his instrument into near obscurity, where it languished until the 1970s, when it was effectively "rediscovered" by a new generation of electronic instrument designers, led by Robert Moog. Now, in the 1990s, Theremin’s dream is very much alive, but transformed in many significant ways. Just as Theremin’s generation was gripped by the new "cult of electricity" that foresaw the transformation of every aspect of society through electrical analog technology, since the 1980s we have been engaged in a similar giddiness over the digital revolution - and the computer has now become the new technological means to a better society. Today, the Theremin and its ideals are enjoying a rebirth in the digital age, inspiring a new generation of musical interface innovations not seen since the first rush of new designs in the 1920s and 30s. Nevertheless, the Theremin’s role today is very different from the one it played in the 1920s. In the course of this paper, I will examine how the Theremin first established a new direction in musical instrument interface design, as well as its resurgence in the 1980s and 90s as inspiration for the second revolution in electronic interface innovation. I will begin my discussion with a short historical overview of interface development.
A Brief History of the Instrument Interface
In surveying the history of musical instrument interface designs, it is remarkable how little has changed in the last 2500 years. With very few exceptions, almost all the major interfaces that we use today were fully established by the Middle Ages and have changed little if at all since then. Carlos Chavez broke the "sound-agents" of pre-electronic instruments into three categories: 1) strings (plucked and bowed), 2) columns of air (woodwinds, brasses), and 3) plates and membranes (percussion). The "procedures for obtaining vibrations from them" were generally 1) rubbing (as in bowed instruments), 2) blowing, 3) striking, or 4) plucking (Chavez, 1937:138). These "procedures" are, of course, the prime determining factor for the interface design - and Chavez pointed out that they have essentially not changed in thousands of years. "Going as far back in history as possible, we find peoples living five thousand years B.C. using the same sound-agents we use today, and vibrating them by the same means." (Chavez, 1937:139) The instruments of the orchestra are substantially the same - albeit refined - as the ones used in ancient times, the exception being the bowed instruments, whose history is less clear. In terms of instrument interface, two more exceptions come to mind: the keyboard and the valve, both of which are rare examples of unique acoustic interfaces that have arisen within recorded history. I think it worthwhile to briefly look at the development of these two interfaces, if only to put the advances of the turn of the 20th century into a greater historical perspective.
Today’s most popular interface, the keyboard, originated in classical Greek times as a series of sliders that controlled air passing from pistons and hydraulic compressors to a series of pipes. The hydraulis (‘water organ’) evolved into the Winchester monastery organ of AD 980, an instrument powered by bellows but still controlled by sliders, called linguae, that were pulled rather than pushed. By the thirteenth century, pulled sliders became pushed levers, and by the fourteenth century, paintings show black and white organ keys in the modern chromatic arrangement (Sachs, 1940:284-6). While sound production using the keyboard interface has undergone many startling and dramatic developments since, the interface itself was already established by the 1300s.
The valve is one of the few examples that I have found of a new interface design that seemed to occur with almost the same spontaneity as the early electronic instruments. However, while the valve did appear on the scene quite suddenly, it was the result of a prolonged period of experimentation in the attempt to produce a chromatic horn. Of course, non-valved horns have existed for millennia, but have suffered from a significant limitation: a fixed-length, ‘natural’ horn is only capable of producing a fundamental pitch and its overtones. On its own, the accessible portion of the harmonic series does not even form a diatonic major scale, let alone the full chromatic range. In order to access notes outside of the overtone series, various options were tried, including fingerholes, keys, and a technique called stopping, which alters notes by inserting the hand into the bell. Different ways of varying the horn’s length were also tried, including the use of ‘crooks’, or removable (and replaceable) sections of the horn. To access different notes, the existing crook was removed and replaced with another of different length, which produced a different set of overtones. This idea was taken to its extreme in the 19th century, with the omnitonic horn, which had "crooks for all transpositions solidly fixed to the instrument in a circular arrangement and connected at will by a small dial." (Sachs, 1940:426) Of course, with all its accompanying crooks, this horn was too heavy to support, and the dial interface was too clumsy and slow to be practical (Sachs, 1940:417-426).
All this experimentation, which had started in the 15th century, culminated in the 1815 invention of the valve by two German musicians - Bluhmel and Stolzel. Crooks were selected by spring-loaded valves that allowed the player to access new overtone series with ease and speed (Sachs, 1940:426). With this discovery, the truly chromatic trumpet and horn were born, as was an entirely new instrument interface.
These examples serve to highlight two main points. The first is that interface design had traditionally been a slow evolution, where each innovation built upon the advances of the last. If real change occurred at all (Chavez claimed that musical instruments had not changed substantially in seven thousand years), it was continuous and gradual. The second point, which will be explored more extensively later, is that interface design was always subject to the demands of the physical sound-producing phenomenon. The lengthy search for the valve was an attempt to find a practical way to deal with the pitch restrictions of the horn’s vibrating column of air. While the control interface had to be ergonomically viable, ergonomics was a consideration that entered only after the issues of sound production control had been resolved. The interface’s design could never be based purely on suitability to the human body in isolation from external factors. To borrow a Baroque turn of phrase: ‘the interface was the mistress of the sound producer.’
With such incremental change in interface design over the course of so many millennia, the advances made in the early part of this century were gigantic and unparalleled. Technology as a whole was making giant leaps forward, as the potential of electricity was gradually realized in such revolutionary inventions as the light bulb and radio. Contemporary accounts reveal the simultaneous excitement and fear of the impacts electrical technology could have on the future. "The most important evolutionary step in the entire span of history is without a doubt the conquest of electricity", wrote the composer Joseph Schillinger in an article entitled "Electricity, a Musical Liberator" (Schillinger, 1931:26). Schillinger asserted that composers are limited in their creativity by the technology of their time, and that as musical applications of electricity arose, composers would scramble to fill the void. Electricity would also open the possibility of the scientific, rather than subjective, analysis of musical phenomena. "[J]ust as Edison’s lamp, literally speaking, shed a flood of light, so the possibility of obtaining sound from electrical current has illuminated the dark realm of musical phenomena and given us possibilities of observing and studying sound phenomena" (Schillinger, 1931:27). As for music production, Schillinger described a "more extensive study" undertaken by Leon Theremin, which resulted in instruments with three "essential characteristics": 1) electrical means of producing acoustical oscillations, 2) methods of singling out harmonics electrically to obtain various timbres, and 3) developing "different ways of playing by means of changing the electrical constants ... and using keyboard, fingerboard or space controlled adjustments." (Schillinger, 1931:30) Like Theremin, Schillinger foresaw the new musical resources as opening new possibilities for musical expression, both for the composer and the performer.
Others, however, saw technological advances in music in a different light. Boris de Schloezer saw technology as distancing the performer from the musical product. "All those splendid mechanisms, like Theremin’s or Martenot’s apparatus, ...are in a sense negligible, since they are not animated by the thought and will of man ... The development we have seen in the last twenty-five years ... consists in gradually replacing the direct relation between performer and auditor ... by an indirect and somewhat remote relation." (de Schloezer, 1931:3) Although he foresaw the eventual widespread impact of electrical technology on music, de Schloezer also dismissed its ability to contribute to music as an art: "Strictly speaking, there is no such thing as mechanical music ... [M]usic is, and always will be, essentially spiritual ... The ‘mechanization of music’ actually means the increase in the number of intermediaries between producer of music and listener." (de Schloezer, 1931:3)
This view was the exact opposite of Theremin’s, who saw electrical apparatus as tapping more directly into the performer’s thoughts and intentions. Clearly, Theremin and de Schloezer were operating under different assumptions. De Schloezer held the view that music and its "humanity" lay in the direct, physical relationship between the player and the sound-producing device. The closer and more immediate the relationship, the better the instrument could express humanity. "Perfection for a musical device means ‘being human’", he claimed. The ideal instrument was, therefore, the human voice - an instrument inherently expressive of the human condition. He did, however, name a second best: "[B]owed instruments are incontestably the finest ... because they are in intimate contact with the human body and respond to its slightest impulses." (de Schloezer, 1931:4) Theremin also saw the ideal instrument as one sensitive to the performer’s actions, but he saw the technology not as an "intermediary" or distancing apparatus, but as a means to remove the physical restrictions of the very interfaces de Schloezer saw as in "intimate contact with the human body". Interestingly, both authors saw the voice as the ultimate expressive instrument, but came to completely opposite conclusions about technology’s ability to emulate its expressiveness. As I hope to demonstrate later, I think that six decades of development have shown that both men were essentially correct in their assessments of the potential and pitfalls of electronic interfaces.
The Theremin was the first in a series of new experiments in instrument controls that arose in the 20s and 30s, and in many ways also the most radical and significant. Many of the new interfaces were augmented versions of pre-existent acoustic interfaces, especially the piano keyboard. Chavez saw electrical technology in part as a way to potentially enhance existing interfaces. "In the case of the piano, ... the complexity of the system of hammers has prevented new dispositions of the keyboard, which might otherwise exist." (Chavez, 1937:163) Inventors obliged this suggestion. The Ondes Martenot, the Electrophon, the Dynaphone, the Hammond organ: all were electronic musical instruments controlled by a piano-type keyboard or a modified version of one. The reason for this is obvious: apart from the fact that the preceding 19th century was among the most keyboard-obsessed in the history of western music, the keyboard can be seen as nothing more than a musical configuration of a series of switches. In contrast to the keyboard, the ‘data output’ of most acoustic interfaces is hard to translate into an electrical control signal that a sound-producer can interpret. For instance, translating an action into a control signal using a guitar interface involves measuring the frequency of the guitar’s vibrating string and translating it into control data-- a complicated process that was only achieved about 50 years after the Theremin’s appearance. Translation from a keyboard action into a corresponding electrical signal is a simple one-step process, since keys that control a hammer (as in the piano) can just as easily close an electrical contact that results in a control signal to the sound producer. The acoustic keyboard therefore had close equivalents in pre-existent electrical controls (consider the similarity of the piano key to the telegraph key of the 19th century!). Moreover, because the keyboard is essentially a set of 88 independent controls, mapping control actions to expected pitches is vastly simplified.
There were experiments in varying the keyboard interface: one of the more successful was the Ondes Martenot, a keyboard with a cable that stretched from side to side across the black keys. To glide from note to note, the player could pull the cable from left to right to determine the pitch. There was also a panel on the left consisting of levers and buttons that allowed the player to control the sound’s amplitude and brightness (Moog, 1993:46). Pitch-cable notwithstanding, it is interesting to note that in 70 years, the vast majority of synthesizers’ interfaces have changed very little from the Ondes Martenot’s keyboard-with-accompanying-knobs-and-switches.
Theremin also used acoustic instruments as a basis for interface design. In 1930, Theremin produced a variation of his space-controlled instrument based on the ‘cello fingerboard, and the result was dubbed the "electric ‘cello" (Rhea, 1978:60). In Theremin’s own words, the electric ‘cello had a fingerboard, "but instead of pressing down on strings, it was necessary just to place one’s fingers in different places, thereby creating different pitches." Unlike the acoustic ‘cello interface, however, there were no strings, so amplitude had to be controlled through a lever. In addition, pitch was controlled as if there were only one string - the fingerboard only sensed movement up and down, not side to side (Mattis and Moog, 1991:51). The Theremin itself, although not comparable to any acoustic predecessor, was based on the idea of a conductor’s hand motions. In a 1991 interview with Robert Moog, Theremin said that he originally "conceived of an instrument that would create sound without using any mechanical energy, like the conductor of an orchestra." (Mattis and Moog, 1991:49) As we shall see, the idea of conductor-like motions controlling music directly, rather than via a musician, would be part of the philosophy and ideals of the Theremin legacy.
There were other interfaces in the early years of electronic music that attempted to break from the acoustic instrument tradition. Many consisted of levers, knobs, and switches - a natural choice for inventors who were not necessarily musicians themselves. One of the more popular examples was the Trautonium (fig.3), an invention of Friedrich Trautwein. The Trautonium, in its most sophisticated form, consisted of a bank of switches and knobs and a strip along which the player ran a finger to control pitch (Moog, 1993:46). Like the Theremin, the Trautonium’s pitch control was continuous, and therefore did not restrict the player to any single scale. Unlike the Theremin, the Trautonium was also designed to have extensive control over the timbral qualities of its sound.
The Theremin enjoyed a period of popularity and novelty before its difficulties became apparent. First and foremost, one of the Theremin’s biggest drawbacks had also been praised as one of its biggest assets: namely, the lack of a fixed tuning reference. Since the Theremin had no physical guide or context in which to play notes, the only feedback to the performer lay in the sound itself. Moreover, surrounding objects apart from the performer would also affect the intonation of the instrument. Tuning was therefore extremely difficult, and producing even a simple major scale was a challenge, let alone a scale that divided the octave into thirteen parts, as Theremin had originally suggested! (Rhea, 1978:60, and Chavez, 1937:163-4)
But two factors were on the side of the Theremin. First, several prominent composers had written pieces that included a Theremin in the orchestration, including Edgar Varese, Joseph Schillinger, and Percy Grainger (Mattis and Moog, 1991:48). Second, the instrument had found a virtuoso, a violinist named Clara Rockmore, who turned the Theremin into a highly expressive concert instrument. Rockmore was both a performer and a performance theorist: she devised a system of hand positions that allowed the player to raise and lower the pitch in discrete, or digital, steps (Rhea, 1978:60). By systematically associating hand and finger positions with pitch, Rockmore partially addressed the intonation and pitch reference problems inherent to the Theremin. Through her, the Theremin achieved a certain legitimacy in the eyes of the music establishment that prolonged its ‘first life’ beyond that of many of its fellow early electronic instruments.
However, both Theremin and his instrument soon fell into obscurity. Theremin himself, who had been living in New York since 1927, was taken back to Russia during the Second World War to help in the war effort. There, in his own words, he was "arrested, and ... taken prisoner. Not quite a prisoner, but they put me in a special lab in the Ministry of Internal Affairs." (Mattis and Moog, 1991:51) To the West, Theremin disappeared completely, as did the public exposure of his instrument. A listing of newspaper articles featuring the Theremin shows a gap of 54 years, between 1934 and 1988, during which no articles are listed. Theremin’s disappearance was so complete, in fact, that a history of electronic music published in 1981 claimed that Theremin had died around 1945 (Mattis and Moog, 1991 and Mackay, 1981).
During the ‘50s and ‘60s, the Theremin became primarily a ‘sound effects’ tool for radio shows and movie soundtracks, especially in the science fiction genre. Although some attempts at musical application were made in this genre (the theme to "Star Trek" and the soundtrack to "The Day the Earth Stood Still" are two examples), generally the Theremin suffered a "fall from grace" from serious Art Music performance instrument to low-brow theatrical effect. As the Los Angeles Times dubbed it in 1995, the "Thing that goes Oo-Wee-Oo" was so characteristic of ‘50s and ‘60s Sci Fi and horror that it later became the subject of nostalgia-parody through such movies as "Ed Wood" (Riemenschneider, 1995). If the Theremin’s sound had become a curiosity, its revolutionary interface was all but forgotten by the public.
However, the Theremin lived on with a cult-like following throughout this period. As electronic circuit building became easier and more affordable, occasional "Build your own Theremin" articles appeared in hobbyists’ magazines. Robert Moog, later a synth pioneer and chief advocate of the Theremin revival, built his first Theremin in 1949 (Mattis and Moog, 1991:49). Pop bands such as Led Zeppelin ("Whole Lotta Love") and the Beach Boys ("Good Vibrations") recorded hits which used the Theremin, although largely as a sound effect rather than a melodic instrument. This period served to separate Theremin users into two near-opposite camps: the slim elite of avant-garde electro-acoustic musicians, and radio-friendly (albeit adventurous) pop musicians. Even today, prominent pop musicians regularly use the Theremin in their recordings: Fishbone, Portishead, Tom Waits, Pere Ubu, The Pixies, and many others have used the Theremin in recent memory. However, the real advances and developments of this instrument, as well as the transposition of the Theremin philosophy to variants of the ‘Ether Music’ interface, have occurred within the avant-garde.
In 1982, the MIDI standard was created to address the need for a standardized communication protocol between electronic instruments. The profound impact that MIDI would have on the Theremin interface is not immediately obvious: MIDI is a heavily keyboard-biased language which sends data as a series of ‘note-on’ and ‘note-off’ messages, accompanied by a volume and pitch value. There are two significant assumptions in MIDI: 1) that volume over time is proportional to the initial attack (i.e.- ‘note on’ volume), as with a piano or plucked string instrument; 2) that pitch is non-continuous, and operates within the framework of the equal-temperament scale. MIDI is flexible enough to allow both volume change within a note and pitch variation between semitones, but not without some effort (see the sketch below). MIDI is a language "native" to the keyboard. The effect on the final musical product is somewhat analogous to translation from one language to another: almost anything can be translated, but often at the expense of the elegance and subtlety of the original statement. MIDI contrasts sharply with the Theremin, which was designed in part to undo the "tyranny of equal temperament", and to allow both pitch variation without reference to any predetermined tuning system and very accurate volume variation, independent of pitch, within a single note. Moreover, the Theremin is not "attack-oriented" like the percussive piano or the "note on/off" MIDI protocol.
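The keyboard bias is visible in the raw bytes of the protocol itself. The following minimal sketch, written in present-day Python with channel and scaling values chosen purely for illustration, builds the three messages involved in sounding one note: a note is an on/off pair carrying a single attack velocity, and anything that falls between the semitones of the equal-tempered scale requires a separate pitch-bend message.

    def note_on(note, velocity, channel=0):
        # 0x90 = note-on status byte; note and velocity are 7-bit values (0-127)
        return bytes([0x90 | channel, note, velocity])

    def note_off(note, channel=0):
        # 0x80 = note-off status byte
        return bytes([0x80 | channel, note, 0])

    def pitch_bend(semitones, channel=0, bend_range=2.0):
        # Pitch bend is a 14-bit value centred on 8192; its range in semitones
        # is set on the receiving synthesizer (commonly +/- 2), not in the message.
        value = int(8192 + 8192 * semitones / bend_range)
        value = max(0, min(16383, value))
        return bytes([0xE0 | channel, value & 0x7F, value >> 7])

    # Middle C (note 60), struck at velocity 100 and raised by a quarter tone:
    messages = [note_on(60, 100), pitch_bend(0.5), note_off(60)]

Note that the bend applies to the whole channel rather than to the single note - a small but telling example of how awkwardly continuous pitch sits in a keyboard-native language.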
However, MIDI did allow the complete separation of the sound-producing component (technically, the sound "synthesizer") from the controlling component, connected only by the MIDI cable. Since the musician interface was now an entirely independent component, this lent new significance to the control portion of the electronic instrument. And since the significance of the Theremin lay mostly in its alternate interface, this opened up a whole world of possibilities: the Theremin interface could be used to control any electronic synthesizer, not just the Theremin’s own sound-producing circuitry. The challenges faced by interface designers in the acoustic era (namely, creating an effective control of a sound-producing mechanism while keeping the control interface ergonomically usable) were now reduced to issues of suitability for the human body and the potential for musical control and expression. No longer was the compatibility of the interface and the sound-producer an issue: by establishing a common middle ground in MIDI, any equipment that was MIDI-compatible was automatically able to share information with any other MIDI-compatible equipment. The choice of an effective interface for the chromatic trumpet might have been very different had the sound-producing horn been MIDI-compatible!
In its most literal translation to the digital age, the inevitable "MIDI Theremin" was invented in the mid-90s as a controller versed in the MIDI protocol (http://www.fullerton.demon.co.uk/longwave/mcv1a/index.htm). However, the most significant (and promising) interfaces have arisen from the inspiration of the Theremin: they take the philosophies, ideals, and interface advances introduced by Theremin’s experiments and extend them to new interface designs using current technology, often based on completely different physical principles. In the next section, I will examine what those ideals were, as well as the myriad of Theremin spin-offs introduced in the 80s and 90s.
The Theoretical and Philosophical Impact of the Theremin
The Theremin was among the first to utilize electricity in a fully developed musical application. As such, it was among the first to create musical sound using a non-acoustic mechanism. The implications of the Theremin, and more generally of the introduction of electronics into musical production, were enormous. At the most fundamental level, the Theremin interface divorced the control from the means of sound production - or at least hid it to such an extent that the musician would not be aware of the processes that connected action and consequent sound. This was not an entirely new concept: the organ interface also bears little relation to the physical phenomenon that produces the sound. However, in the vast majority of cases, the control mechanisms of acoustic instruments have a direct and apparent relationship with the sound source. This can be illustrated with a simple diagram:

    Musician --> [ Interface <-> Sound Production ] --> Musical Sound
                 (a single physical instrument)
In this model, interface and sound production are housed within the instrument’s physical structure. Although decisions can be made about the interface, they must be made within the restrictions that the means of sound production sets. In most cases, this is self-evident: the guitar interface is a means to control a set of vibrating strings, the trumpet valve is a means to control the vibrating column of air within the bell. Even if a curved neck or equally spaced frets would suit the player’s body better, such changes would violate the physical restrictions placed on the interface by the nature of the vibrating string. Thus, interface and means of sound production are in many ways aspects of the same sound-producing organ, and there is an apparent link between the musician’s control actions and the resultant sound (i.e.- the physical properties of the system make for a predictable physical result).
The Theremin was an intermediate (but nevertheless gigantic) step from that model. Unlike the acoustic model above, the Theremin’s interface had less of an apparent link between the actions of the performer and the sounds that resulted: in fact, the variations in pitch and volume seemed at first surprising to Theremin users and observers. By removing the physical contact between performer and instrument, the link between control action and sound seemed less direct. In contrast, consider the violin, in which the connection between the actions of the violinist and the sound the violin makes is clearly displayed to the audience. The player is in direct physical contact with the interface, and the interface and sound-producing mechanism are visibly aspects of a single mechanism. The Theremin had an interface, and it produced sounds in response to control actions, but the means by which these control actions resulted in these sounds was not visibly obvious. The interface and sound production had made an important break in the minds of the performer / observer:

    Musician --> Interface --> (hidden electrical link) --> Sound Production --> Musical Sound
In terms of the mechanism of sound production, the Theremin was an interesting reversal of the acoustic model. Unlike acoustic instruments, the interface was not designed around the sound-producing mechanism; instead, the Theremin’s sound was largely a by-product of the controller. The Theremin senses distances through electromagnetic fields, which alter a frequency that results in the Theremin’s tone. One history described it as a modified version of the "radio squeal" sometimes heard when touching a radio antenna. Although Theremin himself introduced overtones to give the timbre a violin-like tone, the sound itself was derived from the interface (Moog, 1987:12). This is the opposite of most acoustic instruments, such as the trumpet, violin, and guitar, where the mechanism for creating sound was the starting point, and the interface was designed to allow easy control of this sound production by a human user.
However, more important in the context of interface development is the fact that the interface and the means of sound production were still intrinsically tied to each other. Thus, we can see the Theremin as an important intermediate step between interfaces that are clearly tied to the physical means of sound production, and interfaces that are completely independent of the sound producer. This trend would ultimately culminate in the establishment of MIDI in the early 80s, which can be diagrammed thus:

    Musician --> Controller --> MIDI data --> Sound Source --> Musical Sound
In terms of the performance, the Theremin’s sound production seems no less ‘remote’ from the interface than that of a MIDI setup with separate controller and sound source. In fact, when a person plays a MIDI keyboard controller and a piano or organ sound is produced, or a MIDI wind controller with a flute sound, it seems to be almost as ‘natural’ an association as an acoustic instrument. There is, however, a significant difference. As previously mentioned, the Theremin’s interface and sound source are intrinsically tied to each other - MIDI has no such association, save for the MIDI interface itself. Any interface can link up with any sound source, with only a few negligible exceptions. In terms of instrument design, this is a profound separation that goes far beyond the apparent controller/sound source separation of the Theremin. Since any MIDI-compatible interface can be linked to any sound source, the instrument is customizable within the current availability of controllers and sound producers, and physics does not dictate restrictions on which can be matched with which, or which timbre is associated with which controller.
Moreover, the simplification of music into note-defining data packets allows another intermediate level of interface flexibility. The digital age allows extremely efficient processing of MIDI data, so that real-time processing of note parameter values can occur. Music data can be modified and customized completely, so that the data coming from the controller is interpreted by a computer before being passed on to the sound producer:

    Musician --> Controller --> MIDI --> Computer (data processing) --> MIDI --> Sound Source --> Musical Sound
The implications of the performer’s ability to arbitrarily match any interface with any sound source, and to transform the note data en route, will be further explored in the upcoming discussion.
The above four diagrams serve to illustrate the relationship of interface to sound-producing body. I would now like to examine the role the interface plays in terms of the act of music making. Kvifte starts with a simple cause-and-effect chain that can be applied to any musical instrument: the performer’s control actions operate the instrument, which in turn produces sound (Kvifte, 1988:79).
From there, he suggests that music has to be seen as a feedback loop between musical intention and perception on the one hand, and the physical actions and controls necessary for producing the music on the other: the sound that results feeds back into the performer’s perception, which in turn shapes the next control actions (Kvifte, 1988:80).
Kvifte does not explicitly name the interface per se, but instead talks about the distinction between "control action and organ". The control organ comprises the parts of the instrument "responsive to the performer’s control actions." (Kvifte, 1988:80) The control actions are the movements that the musician performs, usually resulting in a sound. In playing a piano, "striking a key" implies both the control action (striking) and the organ (the key). In certain cases, Kvifte’s dichotomy of organ and action is unified - as in the case of the jaw harp, where the oral cavity acts as both controller and resonant chamber (Kvifte, 1988:80-1).
What is most useful for this discussion, I think, is the idea of musical intention and control in a feedback cycle. For my look at the Theremin, I will re-label ‘Music’ as ‘Mind’, and ‘Instrument Control’ as the ‘Interface’. For the purposes of illustration, I will use the term ‘Music’ in my diagrams to mean ‘musical sound’.
As Theremin’s remarks quoted earlier indicated, the Theremin interface was the first to signal the use of electronics to "liberate" the performer from mechanical constraints, thereby tying musical sound production more closely to the performer’s muscular impulses. Theremin’s comparison of his instrument to the human voice is a telling one: by not having to "fight" against friction, weight, string tension, and the other mechanical restrictions that acoustic instrument interfaces impose, the translation from musical thought to musical sound production was more immediate. Here is one possible representation of the musical act produced through an acoustic instrument:

    Mind --> Muscles --> Interface --> Sound Production --> Music
    (with auditory feedback from the music, and tactile feedback from the interface, returning to the mind)
The Theremin was intended to effectively bypass the control interface, thereby translating the muscular movements more directly into musical sound:

    Mind --> Muscles --> Music
    (with auditory feedback alone returning to the mind)
This second schema clearly shows that there is less of a distance between the mind and the resultant music. Notice, however, that one of the two forms of feedback to the performer is no longer present: the physical feedback of touch and of a bodily point of reference. Clara Rockmore, the acknowledged "virtuoso of the Theremin", developed a system of playing that partially addressed these problems through a system of finger and arm positions, in effect creating a ‘point of reference’ for body position through the dimensions of the fingers and arms. Despite the benefits of this system, Rockmore relied heavily on the perfect pitch she was born with. "She is constantly moving her hands, listening to the resulting pitch changes, then ‘trimming’ the precise position of her hands to home in on the desired pitch and volume ... she is able to hear [the] effect of her hand motions soon enough that her audience is rarely, if ever, aware of the aural feedback corrections that she intuitively applies." (Moog, 1984:5)
Without the "body-memory" of knowing the feel of blowing, or plucking, or hitting a middle-C, the performer is left with far less resources for ‘self-control’. Muscle control must be all the more exact, since the interface gives far less guidelines, and returns far less information to the performer. And therein lies one of the primary problems with the Theremin: the ‘liberation’ of the performer means that the music theory implied in tangible interfaces is no longer present. The performer is liberated from limitations, but is still expected to perform within the constraints of music theory. In many ways, the polar opposite is the piano interface, which is very close to a representation of a staff laid horizontally. Music theories of scale, intonation, harmony, melody, etc. are clearly built in to the keyboard and the instrument ‘aids’ the performer and composer in musical production. The Theremin interface, through its near-complete ‘liberation’ of the musician, removes that aid and visual representation.
Despite its inventor’s ideals of unifying mind and body with musical product, the Theremin did not remove the interface component at all, but merely moved it from the realm of tangible matter to one of intangible electrical fields. In that sense, the second schema represents the ideal of the Theremin rather than the reality of it. And this, in many ways, is the Theremin’s greatest long-term contribution to interface design, and perhaps the secret of its longevity: namely, that the Theremin set up new goals and expectations for interface design that could be extended beyond its simple technology and straightforward application, into new areas of application using far more advanced technology. The Theremin was as much a philosophy as it was an instrument. It is this philosophy and its impact on 1990s instrument interfaces that shapes the discussion later in this essay.
The Theremin interface was also distinct in its exact control and complete separation of note parameters. At its most fundamental level, a note can be defined in terms of certain parameters: fundamental pitch, amplitude (loudness), duration, timbre, and so forth. In acoustic instruments, physics dictates certain connections between these parameters. A guitar plucked softly sounds different than a guitar plucked hard, and not just in terms of amplitude: there is a built-in association between amplitude and timbre intrinsic to the vibrating string. Similarly, the harder-plucked string will have a slightly higher pitch than the softer one, assuming the remaining parameters are held constant. Therefore, there is also a built-in association between amplitude and pitch, albeit a subtle one. Certain instruments associate different parameters in different ways, and most (if not all) acoustic instruments have parameters that somehow are affected by each other (Kvifte, 1988).
The Theremin’s parameter control is independent both in sound production and in interface control. In general, changes in one parameter do not affect another - amplitude can be increased and decreased without change in pitch, timbre, or any other sound-shaping factor. There are exceptions common to the electronic realm. First, too much amplitude will tend to "overload" circuits and speakers, resulting in distortion that will modify timbre, sometimes significantly. Also, speakers and circuitry tend to have frequency areas of "peak" efficiency, which means that certain pitches will produce more sonic energy than others, due to the makeup of the electronics or speaker. However, the ear also has areas of peak efficiency of perception, so that certain pitches are perceived as louder, even if they are not. Likewise, as notes go higher, the upper overtones will tend not to be heard as loudly as they approach the 20,000 Hz "maximum frequency" that the ear perceives. Changes in the loudness of the partials result in changes in timbre, and we perceive a slightly different tone quality, even if in physical ‘reality’ the relative distribution of energy between harmonics is unchanged. It would appear, then, that complete parametric independence is impossible without ideal equipment (both sound-producing and sound-perceiving). With that in mind, however, it is still fair to generalize that in practical terms, electronic music allowed independent parameters in sound production.
The control of these parameters could also be similarly independent - especially in the case of the Theremin, where pitch and amplitude were determined by separate and distinct control mechanisms. In some ways, this is not an entirely new idea: the violin, the guitar, or nearly any stringed instrument has the same division, with one hand controlling pitch (by determining the length of the vibrating section of string) and the other controlling volume (be it the speed and pressure of the bow, or the strength with which the strings are plucked, etc.). However, in the case of the Theremin, the difference is the purity of the control. The left hand controls nothing but the amplitude, and the right controls nothing but the pitch. In the case of most acoustic instruments, the distinction is not so clear: the right hand of the guitarist, for instance, also selects which strings to play, which obviously has a significant role in selecting pitch. The left hand can mute strings on the fingerboard, which directly affects amplitude.
In the acoustic world, instrument controls generally play multiple roles simultaneously: Kvifte has done a systematic survey of acoustic instrument controls and their multiple connections with the various parameters that make up a sound (Kvifte, 1988:146). By contrast, electronic instruments have no such connections unless the instrument builder or user decides on them. For instance, it is common to find piano programs on keyboards that raise the pitch slightly when the player hits the keys harder, in imitation of an acoustic piano. Here is where MIDI data processing once again comes into play. It would be just as easy to decide to lower the pitch in response to harder key hits, or to introduce a tremolo, or whatever the user decides (see the sketch below). Most aspects of the controller data can be easily manipulated and linked to any note parameter, bringing a whole new dimension to interface design.
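As a minimal sketch of this arbitrariness, consider the following Python fragment, which maps the attack velocity of an incoming key hit onto some other note parameter. The scaling constants are illustrative only, not taken from any real instrument; the point is that the linkage is a free decision of the builder rather than a fact of physics.

    def velocity_to_pitch_offset(velocity, imitate_piano=True):
        # Scale attack velocity (0-127) to a pitch offset of up to 0.1 semitones.
        offset = 0.1 * velocity / 127.0
        # The builder is equally free to invert the rule, so that harder hits
        # lower the pitch instead of raising it:
        return offset if imitate_piano else -offset

    def velocity_to_tremolo_depth(velocity):
        # ...or to link velocity to an entirely different parameter, such as
        # the depth of a tremolo (0.0 to 1.0).
        return velocity / 127.0

The same few lines, pointed at a different parameter, yield what is in effect a different instrument.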
The implications of this concept can only be demonstrated with a brief exploration of possible configurations. Let us start with a MIDI Theremin, which is essentially a Theremin-type interface that produces MIDI code. The MIDI Theremin produces two ‘streams’ of data, derived from the distance of each hand from the two antennae. Usually, the two streams are linked to pitch and amplitude:

    right-hand distance --> pitch
    left-hand distance --> amplitude
However, there is nothing to stop us from reversing that configuration, so that the right hand controls amplitude and the left controls pitch. Let us also reverse the polarity of the pitch control, so that the closer the left hand gets to the antenna, the lower the pitch. With some additional simple calculations, we can determine the speed at which the hands are moving: we could easily cause the sound source to play a cymbal crash on top of the regular instrument sound every time the left hand moves above a certain speed. What about ranges of pitches? We can set up the instrument so that below middle C, the sound source will play major chords using a piano sound. Above middle C, we can have a flute sound with an organ doubling an octave above. The resulting instrument interprets the data coming from the interface in a very different way than originally intended, and in a sense produces a new interface without physically modifying it. The transformation of data within the processor becomes an intrinsic part of the interface-- or is it a part of the sound producer that the interface controls?
It is worth noting that the transformations performed here are extremely easy to implement: the math involves nothing more than subtraction, addition, and the simple comparison of numbers, as the following sketch illustrates.
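The sketch below is a hypothetical Python rendering of the entire configuration described above. It assumes the MIDI Theremin reports each hand’s distance as a value from 0 to 127 once per scan; the note numbers, speed threshold, and event names are illustrative.

    MIDDLE_C = 60          # MIDI note number for middle C
    SPEED_THRESHOLD = 20   # change in distance per scan that triggers the crash
    prev_left = None       # left-hand distance on the previous scan

    def process_scan(left_dist, right_dist):
        """Left hand: inverted pitch; right hand: amplitude."""
        global prev_left
        events = []
        pitch = 127 - left_dist    # subtraction: closer left hand = lower pitch
        amplitude = right_dist     # right hand now controls volume
        # Comparison: a fast-moving left hand adds a cymbal crash.
        if prev_left is not None and abs(left_dist - prev_left) > SPEED_THRESHOLD:
            events.append(("cymbal_crash",))
        prev_left = left_dist
        if pitch < MIDDLE_C:
            # Addition: a major chord is the root plus 4 and 7 semitones.
            events.append(("piano_chord", [pitch, pitch + 4, pitch + 7], amplitude))
        else:
            events.append(("flute", pitch, amplitude))
            events.append(("organ", pitch + 12, amplitude))  # doubled an octave above
        return events

Nothing here is beyond grade-school arithmetic, yet the result is, in effect, a new instrument.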
This example process is a direct result of the digital era. Expressing music data as a series of 0s and 1s allows easy transformation of music through a computer, and easy communication between separate modules. The transformation process described above would have been impossible before electricity, and very complicated in the analog era of Leon Theremin. This new way of perceiving music as data to be manipulated and transformed is very much a product of our digital age, and it forms the basis for the resurrection of the Theremin in the last 20 years.
Before moving on to the next section, I will briefly summarize the impact of the Theremin in terms of the new conception of interfaces and control mechanisms. First, the Theremin demonstrated the possibility of using technology to bypass the perceived restrictions of physical (acoustic) interfaces, and thereby tap more directly into the impulses and intentions of the performer’s mind. Second, the Theremin began the process of separating the controller from the sound source, so that physical action was not obviously linked to the resulting sound. Third, the Theremin allowed a complete separation of the parametric controls that made up each note. Parameters were no longer necessarily associated with one another, but could be seen as distinct entities that only came together in the final parameter ‘mix’ that resulted in a note.
Finally, by utilizing electrical fields in a musical context, Theremin posed a crucial question: if electromagnetic fields can be used to produce music data, what other non-traditional sources could be used as musical controllers? With MIDI and digital technology, that question could be fully and easily explored, since it became a relatively simple task to transform almost any phenomenon into MIDI signals. The result was (and is) almost an embarrassment of riches - the heartbeat or brain waves of the performer, the volume of cars passing on the street outside, light levels coming through a window, the humidity or temperature in the venue... close to anything can be used as a MIDI controller in a performance or studio environment. The search has now turned from the technical challenge of how to link various phenomena with musical output, to producing an interface that is truly useful for the musician.
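The technical side of that claim really is trivial. A minimal sketch, assuming only some hypothetical read_sensor function supplying raw readings (a light meter, a thermometer, a heartbeat monitor...), shows that turning any measurable phenomenon into a stream of MIDI controller values is a matter of clamping and rescaling:

    def sensor_to_controller(reading, low, high):
        # Clamp the raw reading to its expected range, then rescale it
        # to MIDI's 7-bit controller range of 0-127.
        reading = max(low, min(high, reading))
        return int(127 * (reading - low) / (high - low))

    # e.g. a light sensor reading between 0 and 1000 lux mapped to volume:
    # volume = sensor_to_controller(read_sensor(), 0, 1000)

The hard question, as the closing discussion argues, is not how to produce such data but whether it is of any musical value.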
The Theremin and its Derivatives in the 80s and 90s
The recent explosion of interface design exploration has resulted in a surprising rekindling of interest in the Theremin and its ideals. Many of the leading designers of today openly credit Theremin with inspiring their current research (including Matthews, Moog, and Buchla). The section that follows is an attempt to demonstrate the variety of research currently underway in the field of interface design, as well as the aspects of the Theremin that have been a source of inspiration to these designers.
Max Matthews is frequently dubbed the "Father of Computer Music", due to his extensive research in software design starting in the 60s. However, he has also designed a variety of well-known instrument interfaces, including the ‘Radio-Baton’ (fig. 4). The Radio-Baton is a device which tracks the motions of the tips of two batons in three-dimensional space by determining their distance from five ‘plates’. The Radio-Baton has a built-in microprocessor (a small computer) that reads the five distances and calculates the position of each tip, producing a set of Cartesian coordinates (positions along an x, y, and z-axis) (http://ccrma-www.stanford.edu/CCRMA/Overview/node44.html). This data can then be manipulated in any of a myriad of ways. One example could be the creation of invisible ‘surfaces’ that, once crossed, produce MIDI triggers; alternatively, position could be used to change the timbre of a continuous tone, so that the higher the baton, the ‘sharper’ the tone, while moving to the left produces an echo effect.
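Both mappings reduce to a few comparisons on the coordinate stream. Here is a minimal sketch in Python, assuming the Radio-Baton’s processor delivers normalized (x, y, z) coordinates once per scan; the plane height and scalings are illustrative, and this is not Matthews’ actual software:

    TRIGGER_PLANE_Z = 0.5   # height of the invisible 'surface' (0.0 to 1.0)
    prev_z = 0.0

    def process_position(x, y, z):
        global prev_z
        events = []
        # An invisible surface: crossing the plane upward produces a trigger.
        if prev_z < TRIGGER_PLANE_Z <= z:
            events.append(("midi_trigger",))
        prev_z = z
        # Continuous mappings: height sharpens the tone, leftward motion adds echo.
        brightness = z
        echo_amount = max(0.0, 0.5 - x)
        events.append(("timbre", brightness, echo_amount))
        return events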
The Radio-Baton (and the derivative ‘Radio-Drum’) is one of many ‘space-controllers’ that have arisen recently in the wake of the Theremin. Another example is the (relatively) popular Buchla Lightning, designed by Don Buchla. Lightning uses two infra-red light transmitters to track the movements of the player, whose velocity and motion are converted into MIDI signals. The transmitters are available in several forms, including a ‘drumstick’ shape, a short wand about 3 inches long, and a ring that can be worn on the body. Warren Burt, a composer who uses Lightning in his performances, says that a "full range of MIDI signals is provided by the Lightning programming, and a wide variety of placement and gesture is accommodated by it as well." (Burt, 1994) The following two examples of his work suggest the range of possible applications of Lightning and other space-controllers.
Burt’s first performance ‘composition’ involved two performers, each wearing one Lightning ring. "I programmed the Lightning so that the movement of each ring into the Lightning's detection area would turn on a single sine wave of randomly chosen pitch ... any left to right horizontal motion would be turned into MIDI pitch wheel signals which bent the pitch of the sine waves over a range of a Major 9th ... We found that even with this simple patch, we had a large range of control, and could produce a wide variety of musical material with simple gestures." (Burt, 1994) His second piece that used Lightning was a collaboration with an author/performer named Chris Mann. Noting the extent to which Mann used his hands as a means of expression while reading aloud, Burt conceived of a performance piece in which Mann would wear Lightning rings on his hands while reciting. Mann’s voice would be picked up by a microphone and put through a processor - a device capable of applying a variety of acoustic "effects", such as echo, reverberation, distortion, and so forth. Since most processors are capable of responding to MIDI data, the Buchla Lightning could be used to select which ‘effect’ would be applied to the voice at any time. "The motions of his right hand produced MIDI Program Change signals, which selected the effect being applied to his voice. The motions of his left hand produced sporadic drum sounds ... My job as composer/programmer was to program the Lightning so that it would respond to Chris' gestures in such a way that he would have to modify his motion, or even be conscious of it, as little as possible." (Burt, 1994) The ‘musical’ element of the performance was to be an unintentional by-product of the reader’s natural hand motions.
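The logic of Burt’s first patch is simple enough to sketch. The following is a hypothetical Python rendering, not Lightning’s actual programming environment: each ring is assumed to report a horizontal position from 0 to 127, or None when it is outside the detection area, and the pitch range is illustrative.

    import random

    def ring_update(x, state):
        """Process one position report for a single ring."""
        events = []
        if x is not None and state.get("note") is None:
            # Ring has entered the detection area: sine wave at a random pitch.
            state["note"] = random.randint(36, 96)
            events.append(("note_on", state["note"]))
        elif x is None and state.get("note") is not None:
            # Ring has left the area: silence the sine wave.
            events.append(("note_off", state["note"]))
            state["note"] = None
        if x is not None:
            # Left-right position bends the pitch over a Major 9th (14 semitones).
            events.append(("pitch_bend", 14.0 * x / 127.0))
        return events

Even so crude a mapping, as Burt notes, yields a surprisingly large range of control.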
Another clear Theremin-derived controller is the 3DIS (3 Dimensional Interactive Space) system, invented by Simon Veitch. The 3DIS uses several video cameras to track changes in brightness in different parts of the performance space. The tracking mechanism is quite complicated, and requires a PC-compatible computer to perform all the calculations necessary to translate positions into MIDI data. Unlike Lightning, whose forte is tracking movement, 3DIS’s emphasis is on calculating position in a 3D space: in this way the 3DIS is comparable to the Radio-Baton. However, the performer is neither required to hold a baton or wand nor to stay within a restricted area. By setting up video cameras in strategic positions, a whole stage can be used as a ‘control space’ (http://www.anzlink.com/Aprofile/cebit97.html, and Burt, 1994).
The primary inspiration these controllers take from the Theremin is, of course, the space-control interface. Taking the idea of ‘music from thin air’ and merging it with sophisticated tracking techniques and data processing, these ‘virtual’ instruments can be seen as having taken up where the Theremin left off. Where the Theremin was capable only of sensing the distance from two antennae, the information that these interfaces provide is far more descriptive. The Radio-Baton and 3DIS pinpoint the exact location of an object in space (be it performer or baton), rather than simply its distance from a point. Lightning tracks movement across a two-dimensional plane and derives data more from movement than position. Moreover, both Lightning and the Radio-Baton allow two independent controls through one control unit, whereas the Theremin needed two antennae to track two hands.
Other space-controllers have been developed with another aspect of Theremin’s ideals in mind. Picking up on Theremin’s idea of using a conductor’s gestures to produce music directly, rather than via other instrumentalists, researchers have been attempting to take gesture recognition to a new level. With its use of wands and motion-tracking, the Buchla Lightning has clear associations with conducting - and by analyzing its movements, Lightning can be used for pattern recognition. More specialized, however, is ‘The Sensor Frame’, developed by Paul McAvinney at Carnegie Mellon University. The Sensor Frame can track the position of each individual finger, while accompanying software first ‘learns’, and then recognizes, gestures that can in turn be used to adjust parameters (Dannenberg in Haus, 1993:313). The computer could then be "conducted" like an orchestra: "In computer music work, gestures are often used to control continuous parameters ... When gestures can be recognized from their initial movements, the remainder of the gesture can be tracked in real time to control one or more parameters ... For example, a conductor can hold up a palm to mean ‘play softly’, and the amount of arm extension gives some range of expression between ‘a little softer, please’" (Dannenberg in Haus, 1993:313-4).
It is worth noting that these examples are but a small sample of the remarkable range of controller research being conducted. Even non-musical applications cite Theremin’s influence: researchers at the MIT Media Lab are attempting to develop a means of transmitting data through the human body, thereby passing information between personal computers via a physical handshake. They credit Theremin’s use of electrical fields with inspiring this current project (Haus, 1993). Clearly the impact of the Theremin extends beyond the now very dated technology of the 20s. The ideas that the instrument and its inventor have spawned have had far-reaching effects on the direction of current research into interface design. But are these innovations of lasting worth, or only a passing fad of the early digital age, much as the electronic instruments of the 20s and 30s were of the analog electrical era? Are we in a period of laying groundwork for future mainstream innovations, or in a perpetually obscure side road or dead end? Certainly, only time will tell about the future relevance of current research, but it may be worth briefly evaluating the value of electronic interface innovations thus far.
So What? The Success and Failure of the Theremin’s Legacy
The Theremin has been simultaneously a smashing success and a dismal failure. In historical terms, electronic instruments have indeed transformed certain aspects of music dramatically, as synthesizers and MIDI are now firmly entrenched in the music world. As Theremin predicted, the division of the octave into thirteen parts is indeed possible today, and the range of sound sources available to the composer is vast. There is no question that electronics has transformed music production in ways beyond the wildest imaginings of Theremin or his contemporaries. The development of interfaces, however, has not matched that of sound production itself.
Early writers were quick to recognize the problems. Chavez, in a remarkable assessment of the new electronic media, raised a crucial point: "To me, this seems at present one of the most difficult points to solve: to find a medium adequate to human anatomy, and taking advantage of the infinite facility of the electric production of sound." (Chavez, 1937:164) Referring directly to the Theremin, he gives the example of its "imperfect and awkward" note attacks. Comparing it to its acoustic counterparts, "we must think of the great richness, variety and elasticity of the attack on many traditional instruments in order to see clearly the problem of the performance of electronic instruments- the enormous richness of ... so-called ‘touches’ on the piano, the widely varied attacks on the instruments with mouthpieces- in which the embouchure and the inflections of the breath produce an infinite variety." (Chavez, 1937:163-4) Herein lies both the strength and the curse of electronic instruments: they are pure and absolute, and as such the user has complete control over every aspect of their makeup, but nothing can be taken for granted. In an ironic twist of terminology, the vast amount of information produced by acoustic interfaces, as well as the highly complex mix of harmonics and random noise that the sound source produces, is "automatically" built into the instrument. In the world of computers and electronics, nothing is "automatic" - everything must be deliberately specified. Since, as Chavez indicated, the variety and subtlety of sound production in acoustic instruments is near-infinite, by selecting only certain parameters to be represented in the interface mechanism, one is forced to select the data that flows between interface and sound source, and in doing so vastly reduces the resources of the interface. If MIDI allows access to a vast palette of different sonic colours, MIDI and its interfaces have yet to control a single colour with the same subtlety of shading as the acoustic instrument.
It seems almost self-contradictory to suggest that electronic interfaces also suffer from too much data. However, my experience has been that new electronic interfaces often suffer from an inability to shape data into musical material. MIDI has allowed the transformation of almost any measurable phenomenon into ‘musical’ note data, but at some point the question must be asked: "what resources opened up to us through MIDI are truly of musical value?" Burt comments that the Buchla Lightning allows us to create musical data by flapping one’s arms, but then wonders: "well, is flapping one's arms in the air such an interesting way of making music, anyway?" (Burt, 1994) While the horizon of possibility rushes away into the distance, we must also decide which data is of musical value, and/or create a new musical aesthetic to accommodate the new resources. Either way, these new interfaces produce a flood of musical data which needs to be ‘tamed’ into a coherent, yet flexible, sonic statement. Freedom and expression do not necessarily coincide, as Theremin assumed.
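One common way of ‘taming’ such a flood, sketched below under invented assumptions (a normalized sensor stream and a fixed pentatonic scale), is simply to snap the raw readings onto a small set of musically meaningful values:

    # Raw controller readings (say, arm height, normalized to 0.0-1.0)
    # arrive far faster and finer-grained than any musical use requires.
    PENTATONIC = [60, 62, 64, 67, 69, 72]   # C major pentatonic, MIDI notes

    def position_to_note(position: float) -> int:
        # Snap a continuous reading onto the nearest scale degree,
        # deliberately throwing most of the incoming data away.
        index = min(int(position * len(PENTATONIC)), len(PENTATONIC) - 1)
        return PENTATONIC[index]

    raw_stream = [0.03, 0.45, 0.47, 0.92]              # invented samples
    notes = [position_to_note(p) for p in raw_stream]  # [60, 64, 64, 72]

The musical decisions here lie entirely in the mapping; the interface itself supplies only undifferentiated numbers.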
As mentioned previously, musician feedback has also been a weak point in interface design. With acoustic instruments, there is a variety of tactile feedback to the performer that aids in performance and gives the instrument a "presence" for the musician. In my opinion, while designers seek to remove physical ‘constraints’ from their interfaces, it is important that they provide feedback to the musician beyond simply the audio output of the sound source. The visual, auditory, and tactile messages that the acoustic instrument sends back to its user are a crucial component of instrument ‘control’, and I believe that electronic instruments should somehow address this deficit. Beyond considerations of instrument and player control, the tactile experience of playing an instrument is an important part of the pleasure of music-making, and in part explains the popularity of controllers based on acoustic interfaces, such as the MIDI keyboard and guitar, over the more experimental ‘new’ interfaces. Without popular acceptance, I believe these interface designs will have difficulty moving beyond the academic fringe of the designers and researchers themselves.
The advantages of using electronic technology to produce musical data are numerous and varied, and I have tried to address a variety of them in the course of this paper. If, as I have suggested, interface design is going through a period of "growing up", in which novelty and utility have not quite been differentiated, it is no doubt largely because the field is so new. The MIDI standard, which allowed the complete modularization of controller and sound source, has only been in existence since 1982. Many of the controllers mentioned rely on computing power that until recently was simply not available. It is only natural that a period of experimentation precedes a period of more ‘mature’ technological application. Whether these new controllers and interfaces extend beyond the sliver of avant-garde New Musicians into the ‘mainstream’ of musical society depends in large part upon the considerations being addressed by their designers. Two approaches may be taken: to create an interface designed to control a sound source in a limited, specialized way (such as the keyboard), or to create an interface so flexible that it achieves widespread acceptance through its ability to adapt to a wide range of uses (such as the Buchla Lightning). Currently, research on new interfaces seems to favour the latter. Whichever approach is taken, Chavez recognized a crucial danger in the ‘30s that still very much applies today: namely, the "necessity for the inventors to interest themselves in the practical use of their inventions, or to associate themselves with musicians in pursuing their investigations." (Chavez, 1937:165)
In any event, this is a vital and fascinating field of an emerging ‘new music’ with little precedent outside our own century, in which even the ‘antiques’ of the electronic revolution in music, such as the Theremin, still strike us as profoundly new and innovative. Despite the completely outdated technology those pioneers employed, their inventions remain in many ways revolutionary. In retrospect, the Theremin was less a generally useful musical instrument than a signpost pointing to vast new areas of musical exploration opening up for designers and musicians. Although its persistence as a novel instrument in both mainstream and avant-garde music attests to its continued appeal in itself, its widespread influence on so many areas of current interface innovation and research suggests that it is the ideals and philosophies behind the Theremin that will ultimately have the profound and widespread impact Leon Theremin foresaw in 1927.
Appendix
‘Analog’ and ‘digital’ denote two approaches to data representation and processing. Analog representation is continuous: an electrical parameter (a voltage, current, frequency, etc.) can hold any value across a spectrum, and every variation of that value, no matter how small, and whether desired or not, is taken to be significant. In contrast, digital data is discrete: each parameter can take only two values, 1 or 0 (or on/off, high/low, etc.). As a result, only a very limited range of parametric values and variations is significant to the eventual output.
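A small Python sketch may make the contrast plain; the instant sampled and the number of quantization levels are arbitrary choices for illustration:

    import math

    t = 0.3517                              # an arbitrary instant in time
    analog = math.sin(2 * math.pi * t)      # continuous: every digit of
                                            # this value is significant
    digital = round(analog * 7)             # quantized onto the 15 integer
                                            # steps from -7 to 7
    bits = format(digital & 0xF, '04b')     # '0110': the discrete level is
                                            # ultimately stored as 1's and 0's

    # analog is approximately 0.8027; digital is simply 6. The fine
    # detail of the continuous value has been deliberately discarded.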
A simple example of this is the contrast between vinyl records and compact discs. A vinyl record represents audio as ‘wiggles’ in the path of a groove. This is analog data, since the movement of the groove is continuous within a range; even the tiniest wiggles in the groove are significant and alter the eventual sound. In contrast, a CD represents audio as a stream of 0’s and 1’s, read optically by a laser reflected off the surface of the disc. Slight variations in the strength of the reflected beam are not significant unless they distort the data to the point of ‘transforming’ 1’s into 0’s, or vice-versa; of course, the physical representations of 1’s and 0’s are made as different as possible precisely to prevent this from happening.
The result of employing digital technology is twofold. First, digital data is much more difficult to corrupt or alter. Analog LP records degrade naturally over time, as scratches and wear in the groove introduce ‘pops’ and clicks into the audio signal. CD’s also get scratched and worn through use, but generally this does not affect the audio signal, since the damage must be far greater before it disrupts the stream of 1’s and 0’s. Digital data is therefore much more ‘durable’ and less susceptible to distortion than analog.
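The following sketch illustrates why, under invented noise levels: bits are recovered by comparing each received level against a threshold, so a disturbance matters only if it pushes a level across that threshold:

    import random

    bits = [1, 0, 1, 1, 0]
    levels = [1.0 if b else -1.0 for b in bits]   # transmitted as +1 V / -1 V

    # Add mild 'wear and tear': random noise of at most 0.4 V per sample.
    noisy = [v + random.uniform(-0.4, 0.4) for v in levels]

    # The receiver only asks which side of zero each level falls on,
    # so noise of this size can never flip a bit.
    recovered = [1 if v > 0.0 else 0 for v in noisy]
    assert recovered == bits

An analog signal subjected to the same noise would carry every bit of it straight into the output.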
Second, digital data can be shared between components much more easily. Since digital data consists of only two values, the specifics of those values are not crucial, and the difference between ‘high’ [1] and ‘low’ [0] is easily standardized. For instance, one could define a positive voltage as a digital ‘1’ and a negative voltage as a digital ‘0’; the exact voltage levels do not matter, since the purpose is only to define two states relative to each other. With analog systems, however, many issues come into play when trading data between components. Since there is an infinite number of states within a range, one must ensure that both components share a common point of reference: a particular value must be set to represent the ‘zero’ mark in both the transmitter and the receiver. There are also issues of scale, so that a ‘10’ sent from one component isn’t interpreted as a ‘12’ by the receiving component. Even this simple calibration can be very difficult or impossible when a synthesizer system employs hundreds of components.
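A toy model of this calibration problem, with offset and scale figures invented purely for illustration, shows how easily an analog value is misread in transit:

    def sender_encode(value: float) -> float:
        # The sender maps a parameter onto a voltage: 1 unit = 0.5 V.
        return value * 0.5

    def receiver_decode(voltage: float) -> float:
        # The receiver assumes a slightly different scale (1 unit = 0.6 V)
        # and a 0.1 V offset in its zero point: a plausible analog mismatch.
        return (voltage - 0.1) / 0.6

    received = receiver_decode(sender_encode(10.0))
    # received is about 8.17, not 10: the value has silently drifted,
    # and nothing in the signal itself reveals the error.

A digital link suffers no such ambiguity: as long as both sides agree on which state means ‘1’, the number arrives intact.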
A final example of the significance of these differences can be found in the making of the album ‘Switched-On Bach’, which employed nothing but analog synthesizers. Much of the effort behind the album went simply into keeping the synthesizers in tune, since variations in room temperature changed the electrical properties of the equipment, and thus the electrical signals themselves. Because every little variation in an analog signal is significant to the final output, temperature became a major distorting influence. The components of today’s digital synthesizers are just as susceptible to such environmental fluctuations, but the output is unaffected, thanks to the digital system’s immunity to minor signal variations.
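As a closing sketch, a toy model (with a wholly invented drift coefficient) contrasts the two cases: the analog oscillator’s pitch follows a temperature-sensitive component value, while the digital oscillator’s pitch is a stored number that temperature cannot touch:

    A4 = 440.0   # target pitch in Hz

    def analog_pitch(temp_c: float) -> float:
        # Pitch follows a component value that drifts (here, a made-up
        # 0.2% per degree Celsius away from a 20 C reference).
        return A4 * (1.0 + 0.002 * (temp_c - 20.0))

    def digital_pitch(temp_c: float) -> float:
        # The frequency is stored as a number; temperature cannot alter it.
        return A4

    # At 30 C the analog oscillator has drifted to 448.8 Hz, audibly
    # sharp, while the digital oscillator still produces exactly 440 Hz.
    print(analog_pitch(30.0), digital_pitch(30.0))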