Performance Recordings®


University of California, San Diego
Thursday, April 8, 2004

EE Special Seminar

 
Capturing Music: The Impossible Task

James Boyk
Copyright © J. Boyk 2004. All rights reserved.

(Notes April 20, 2004: 1. Three of the illustrations are in (the same) BBC Research Department report and are linked to that report. Note the number of the BBC illustration before linking so you can find it. I'm sorry that I cannot link direct to the right illustration.  2. After you read a footnote, the "back" button will, with luck, return you to your place in the text. —jb)



 

Abstract:
The real piece of music is the sound the composer heard in imagination. For the performer trying to recreate this in the listener's ears, details have profound implications. The performer operates in a feedback loop with the instrument and the room acoustics (and the audience, if one is present). A recording which alters the sound outside this loop may falsify musical meaning. Most recordings don't serve the music well because of such errors and/or because audio gear isn't good enough. The latter comes about partly for lack of scientific knowledge of the ear-brain system, and partly because standard technical "specs" don't correlate with sound quality. R. A. Belcher's "double-comb-filter" measurement holds out hope for better correlation with perception, and should be followed up, though certain aspects of audio performance may always be difficult to measure. Until we have specs known to correlate with and to tell all that's needed about audio performance, designs must be evaluated by listening. The musician's trained ears can help with this, while the engineer contributes not only the design but the technical and procedural knowledge to define conditions for meaningful listening. Listening is most revealing when carried out by comparison with an impeccable original: either live sound or a direct microphone feed of high quality. Blind tests are a "gold standard," but sometimes they are inherently impossible; in such situations, respect for the non-blind listening skills of some musicians and recording engineers is reassuring.

 


 
I'm pleased to be here, and grateful to [Prof.] Shayan Mookherjea for arranging this visit.

     Audio has more baloney than any field I know—except piano teaching. I'm involved in both fields, but I hope you won't find too much baloney in this talk. I'll say musicians when I mean anyone with a good ear for musical sound; this includes many recording engineers. And when I speak of "we" design engineers, I'm expressing sympathetic identification with the engineer's point of view, not representing myself as something I'm not. The audio we're talking about today is not public-address or sound-reinforcement, or telephones or hearing aids, but equipment and processes for recording and playing back music accurately. I'm going to suggest that a symbiosis between musicians and design engineers is necessary to advance this field.

 

 
Most music, heard by most people most of the time, is recorded, so musicians are dependent on recordings to represent them. But few recordings serve the needs of the music or the listener. Performers need help from engineers. And engineers will find this a worthy challenge, because music as a signal is uniquely demanding.

     Music is a language of emotions expressed in sound. ( Note 1 ) The real piece of music is not the printed score, but the sound the composer heard in imagination. If the music's written down, the score is just a shadow of that sound; and the performer's job is to recreate the sound from the shadow; recreate it in the listener's ear as it existed in the composer's. An ordinary shadow that's a perfect disk might be the shadow of a beach ball or a wine bottle (if it were oriented just right). With a circular hole in the center, it couldn't be either one; but it could be a piece of pipe or a Compact Disk. Each detail of the shadow has implications for the higher dimensions that are getting projected onto the shadow. The details profoundly affect your guess about the shape of the object casting the shadow.

     In music, too, details in the score have implications for the sound you imagine. The importance of details is the deep reason that musicians are perfectionists.

     A performer's understanding of a score takes the form of comprehending the implications of the details and the web of their interconnections. An interpretation is what this understanding becomes on one occasion. The great pianist Artur Schnabel said that he wanted to play only "those works that can never be played well enough." I take this to mean works whose web of interconnections in the higher-dimensional space of imagination is so complex that it can't be fully represented in any one performance in our 4-dimensional space-time. This is the deep reason that it's difficult to play the great works.

     We'll return to examples of musical details and how audio errors can hurt musical meaning; but first I want to undermine your confidence in the traditional technical specifications of audio equipment, so that you'll appreciate the ray of hope I'll point out.

 

 
The design engineer who creates a piece of audio equipment, the recording engineer who uses it at a concert, and the performer who plays the concert have something in common: For all three of them, the only way to judge success is by listening. For the musician and the recording engineer, this is the natural order of things; but design engineers want something to measure. Unfortunately, no measurement is known to predict sound quality in general, though there's one that's promising. Perhaps you'll be interested in testing it further—modifying it if need be—if I can inspire you with interest in audio.

     Luckily, audio is fascinating. It's the only kind of engineering I know that deals with a frequency range of over 10 octaves at the same time as a dynamic range of 120 decibels. ( Note 2 )
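Those two figures are easy to check with a little arithmetic. A minimal sketch, taking the audible band as roughly 20 Hz to 20 kHz:

```python
import math

# The audible band is roughly 20 Hz to 20 kHz.
octaves = math.log2(20_000 / 20)
print(f"frequency range: about {octaves:.1f} octaves")

# A 120 dB dynamic range, expressed as ratios.
amplitude_ratio = 10 ** (120 / 20)   # dB = 20*log10(amplitude ratio)
power_ratio = 10 ** (120 / 10)       # dB = 10*log10(power ratio)
print(f"amplitude ratio: {amplitude_ratio:,.0f} to 1")
print(f"power ratio: {power_ratio:.0e} to 1")
```

That is a million-to-one range in amplitude, a trillion-to-one range in power, handled simultaneously across ten octaves.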

     It must also satisfy our sense of hearing. [JB moves to one side of room.] You can see the evolutionary importance of hearing in the fact that we have eyelids, but not earlids. Throughout our history, hearing has guarded us from threats from all directions, in light and darkness; and even when we're asleep. [Confederate in audience clicks two pieces of metal.] We still cock our heads to locate unexpected sounds, as you've just done; and we turn our heads to face sound sources, as you're doing now to track me. And if you keep your heads still while I move to the other side of the room, I suggest that it takes a conscious effort. [Audience agrees.]

     Audio must also convey music's physical and emotional power and delicacy. We may think, "If we get the engineering right, the subtlety will take care of itself." This is true; but how can we know we've got it right except by evaluating how well the subtlety survives?

 

 
Audio engineering draws on many fields. Mechanical engineering contributes to design of microphones and speakers—and turntables, for those who play records. Materials science plays a role. But electrical—or electronic—engineering is the kind we'll mainly address.

     This wonderful field has given us feedback, push-pull amplifiers, and op-amps; over-sampling digital converters, and digital clocks with 80-picosecond jitter; cascode and follower circuits. Maybe you thought the follower was given by God like the integers; but someone invented it. The same man who invented stereo. ( Note 3 )

     Double-Es, with their creative ability and knowledge of existing circuits, with mathematical simulation as a double-check, can produce analog and digital designs that work with predictable precision—provided the problems are well-defined.

     Unfortunately, the problem of satisfying the ear is not well-defined. Not enough is known about what parameters of musical sound are important to the ear and brain. An old friend of mine won the Westinghouse Science Talent Search in high school with his work on hearing; and continued in the field through a doctorate and professorship; but now works on vision. "Why the change?" I asked. "Listen!" he said, "In vision, at least we know what the questions are!"

     If we knew the questions and the answers for sound, we'd know what parameters are important to the ear-brain system; and we could measure how well they were preserved by equipment we design. That would be a scientific basis for audio. But there isn't enough of this fundamental science.

     In this situation, we tend to fall back on thinking that if audio gear reproduces the waveform perfectly, that's good enough. And of course it would be. But living on Earth rather than in Heaven, we can't reproduce the waveform perfectly. And we don't know the mapping from waveform error to perceptual error. We don't know how much error, and what kind, causes how much degradation in sound quality.

     We also tend to think that if a waveform looks right to the eye, it must sound right to the ear; or if it's deformed visually to a just-noticeable degree, the audible distortion must be just-noticeable. I don't know of any support for these assumptions.

     Finally, we tend to assume that technical measurements that seem as though they ought to be meaningful, are meaningful. This is not true. The best-known distortion spec is THD: Total Harmonic Distortion. We measure it by putting a sine wave, a pure tone, into an amplifier or speaker or other component, then seeing how much of the output energy is at harmonics—overtones—of the input frequency. Ideally, there should be none.
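For the curious, the measurement just described can be sketched in a few lines of Python: synthesize a pure tone, add a known amount of second harmonic to stand in for a distorting device, and read the harmonic energy off an FFT. The signal values and the simple bin-reading scheme are illustrative, not a lab-grade procedure:

```python
import numpy as np

def thd(signal, fs, f0, n_harmonics=10):
    """Total Harmonic Distortion: RMS energy at harmonics of f0,
    relative to the energy at f0 itself, read from an FFT of an
    integer number of cycles (so no window is needed)."""
    spectrum = np.abs(np.fft.rfft(signal)) / len(signal)
    bin_hz = fs / len(signal)
    def amp(f):
        return spectrum[int(round(f / bin_hz))]
    fundamental = amp(f0)
    harmonics = np.sqrt(sum(amp(k * f0) ** 2
                            for k in range(2, n_harmonics + 2)))
    return harmonics / fundamental

fs = 48_000
t = np.arange(fs) / fs              # one second of samples => 1 Hz bins
f0 = 1_000                          # test tone, integer number of cycles
clean = np.sin(2 * np.pi * f0 * t)
bent = clean + 0.01 * np.sin(2 * np.pi * 2 * f0 * t)  # inject 1% 2nd harmonic
print(f"THD of distorted tone: {thd(bent, fs, f0):.2%}")  # ~1%, as injected
```

The ideal device would measure zero; the `bent` signal measures the one percent we put in.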

     To the listener, the effect of THD is to change our impression of the tone color of the instruments—a little more of one harmonic or another. If THD is very small, it doesn't seem that it would matter very much to our judgment of sound quality, especially if we can't compare the reproduced sound to the original. And indeed, if you rank a group of amplifiers by THD and also by sound quality, the two rankings won't in general agree. (Belcher RD 1977/40, Fig. 14.) An engineer at the BBC, Allan Belcher, showed this with two line amplifiers in the '70s. ( Note 4 ) (A line amplifier is a voltage amplifier that can "drive a line"; that is, drive a load of, say, 600 ohms in parallel with 1000 or 10,000 picofarads.) One was a discrete-transistor unit; the other was based on a 741 chip. He asked experienced listeners to give sound-quality ratings for each amp, using master tapes as sources. Actually, they rated loss of quality compared to the ideal, which was the sound they heard with the line amps out of the circuit. Then he measured the amps' THD. Finally he graphed the sound-quality loss—"subjective impairment"—versus THD in this chart, in which dots and Xs are the two line amps. A circle around either indicates that high-frequency emphasis and matching de-emphasis were used to simulate the transmission process. If THD correlated perfectly with sound quality, all the points would lie on a straight line. They don't, and it doesn't. It does better with the dots than the Xs; but when we measure an amp, there's no way to know whether it will behave like a dot or an X.

     The unsatisfactory nature of THD was recognized more than half a century ago, but the industry continues to design for low THD. ( Note 5 ) This was really pernicious in the early days of solid-state power amps. They amazed us with their high power relative to tube amps, and their far lower harmonic distortion. And they stunned us with their awful sound. They got great figures by having gobs of global feedback; but most of them were so slow they had slewing problems on transients; so musical attacks suffered. If you told the designer what you heard, you were dismissed as an enemy of progress and told you didn't know what accurate sound was. "I'm a pianist," I said, "and that ain't the sound of a piano!" I was informed that I was used to the "euphonic colorations" of vacuum tubes.

 

 
Let's look at two other standard specs. First, frequency response. We put sine waves of equal strength and at various frequencies into whatever we're testing, then observe the relative strength of what emerges. But when we test an amplifier, it's not connected to a speaker but to a "standard load": a non-inductive resistor, with or without a capacitor across it. This load is far simpler than most speakers. In our lab, for instance, we have a commercial 4-way speaker with a 22-element crossover network, and a small two-way with a 15-element network. Amplifier response may well differ into such loudspeakers from what we see into the standard load. And some speakers, for their part, when you feed them loud bursts—within their rated power capacity—expand or compress the dynamics. ( Note 6 ) This doesn't show up in standard tests.
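One way to see why the standard load can mislead: an amplifier with nonzero output impedance forms a voltage divider with whatever it drives. A toy calculation (all component values invented for illustration) shows a response that is flat into a resistor becoming frequency-dependent into even a crude reactive speaker model:

```python
import numpy as np

r_out = 1.0   # hypothetical amp output impedance, ohms
f = np.logspace(np.log10(20), np.log10(20_000), 200)   # 20 Hz .. 20 kHz

# (a) the "standard load": an 8-ohm resistor
z_resistor = np.full(f.shape, 8.0 + 0j)
# (b) a crude speaker stand-in: 6 ohms in series with 1 mH
z_speaker = 6.0 + 1j * 2 * np.pi * f * 1e-3

def response_db(z_load):
    """Level at the load terminals relative to the amp's internal source."""
    return 20 * np.log10(np.abs(z_load / (z_load + r_out)))

flat = response_db(z_resistor)
real = response_db(z_speaker)
print(f"into resistor: {flat.max() - flat.min():.2f} dB variation across band")
print(f"into speaker model: {real.max() - real.min():.2f} dB variation")
```

The same amplifier measures ruler-flat on the bench and over a decibel off into the reactive load; a real multi-element crossover would be far messier than this two-element sketch.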

     Is an amplifier's power rating meaningful? In one test, various amps drove the same speaker in turn, and a meter measured how loud the speaker could play at a given level of distortion. One amp honestly rated at 20 watts played louder than another amp honestly rated at 100 watts. So much for power ratings. ( Note 7 )

 

 
Engineers need measurements! You work for an audio company. You design a new amp. Is it "New and Improved," as the Advertising Department will claim; or merely new? You need a measurement that will tell how good the amp will sound. Someone comes in that door right now claiming to have the measurement. What will it take for you to believe it?

     [Audience: Proposed measurement must correlate with listening.] Yes; correlation with a listening test. This means doing a double-sided experiment, just as Mr. Belcher did when he showed the unsatisfactory nature of THD. We'll take amplifiers A, B, C, D, E and measure them according to the new test, and they rank in some order. Then we put them through a careful listening test, and it ranks them in the same order. So far so good. But—would that be enough to convince you that you could rely on this test?
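The agreement between the two rankings can be quantified; Spearman's rank correlation is one standard way. A small sketch with invented numbers for five amps (the data are hypothetical, not Belcher's):

```python
def ranks(values):
    """Rank positions (1 = smallest), assuming no ties."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0] * len(values)
    for pos, i in enumerate(order, start=1):
        r[i] = pos
    return r

def spearman(x, y):
    """Spearman rank correlation for untied data:
    1 - 6*sum(d^2) / (n*(n^2-1)), d = rank differences."""
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

# Hypothetical amps A..E: candidate spec (lower = better) and the
# panel's subjective-impairment score (lower = better).
measurement = [0.02, 0.10, 0.05, 0.30, 0.01]
impairment  = [1.2,  2.8,  1.9,  4.5,  1.0]
print(f"rank correlation = {spearman(measurement, impairment):+.2f}")  # +1.00
```

A useful spec would score near +1 across many such groups of components; THD, as Belcher's chart shows, does not.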

     [Audience: No.] You'd want more groups of amplifiers to give you confidence. And if it's a general audio specification, you'd want speakers, too. And recorders. Every kind of component.

 

 
Here's the bad news. No measurement has been confirmed by thorough correlation tests to predict perception across the range of audio gear.

     Here's the good news. Dr. Belcher of the BBC, in the same work that produced the THD graph we saw before, also tried a new measurement he called the "double comb filter" method, which is a kind of super-intermodulation-distortion test. (Belcher RD 1977/40, Fig. 1.) The input signal is two line spectra, each a fundamental and its harmonics, with the combined spectrum being about as busy as possible. At the output of the device being tested, the input frequencies are all removed, leaving the intermodulation products produced by the device. Ideally there should be none, of course. (Some of the products will lie at input frequencies and therefore be filtered out along with them. Enough remain that the result is not affected.)
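In spirit, the method can be sketched in a few lines: build the two harmonic combs, pass them through a mildly nonlinear stand-in for the device under test, null the input frequencies out of the output spectrum, and measure what's left. The fundamentals and the tanh nonlinearity here are my own illustrative choices, not Belcher's actual parameters:

```python
import numpy as np

fs = 48_000
n = fs                                   # one second of samples => 1 Hz bins
t = np.arange(n) / fs

# Two "combs": a fundamental plus harmonics for each of two bases.
f1, f2 = 210, 330                        # illustrative frequencies
comb_bins = set()
signal = np.zeros(n)
for f0 in (f1, f2):
    for k in range(1, 20):
        freq = k * f0
        if freq < fs / 2:
            signal += np.sin(2 * np.pi * freq * t)
            comb_bins.add(freq)
signal /= np.max(np.abs(signal))

# A mildly nonlinear "device under test" (soft clipping).
output = np.tanh(1.5 * signal)

# Null out every input frequency; the energy that remains is
# intermodulation, plus harmonics that fall off the input grid.
spectrum = np.abs(np.fft.rfft(output))
residual = spectrum.copy()
for freq in comb_bins:
    residual[freq] = 0.0                 # 1 Hz bins: index == frequency
ratio = np.sqrt((residual ** 2).sum() / (spectrum ** 2).sum())
print(f"residual (distortion) energy: {ratio:.2%} of total")
```

A perfect device would leave nothing after the nulling; the soft clipper leaves a clearly measurable residue.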

     We saw his graph of subjective impairment vs. Total Harmonic Distortion. Here's the graph for the double-comb-filter method. (Belcher RD 1977/40, Fig. 13.) Just what we're looking for. Until recently, this was all I knew about the test. But I spoke to Mr. Belcher on the phone the other day, and he told me that the test also correctly predicted sound-quality rankings of digital companders and linear PCM systems at the BBC. These results were never published; but in his doctoral thesis, he used the method with success on microphones and loudspeakers. ( Note 8 ) I don't know how the listening sides of these tests were done, but it's tempting to start trying the test on everything in sight. This will be easy, because his company sells Windows software that performs the test on both simulated and real circuits. ( Note 9 ) The IEC is making the test part of its IEC 60748 Standard, called "Interface Integrated Circuits."

     I'm eager to see whether the test would correlate with specific subtle aspects of performance. For example, my friend Doug Sax, a recording and mastering engineer, tested two line amps himself by a clever method to learn about their performance with very soft signals. He had someone talk at one end of a studio, and put a microphone at the other end, 90 feet away. The output of the mike ran to a power amp and speakers, and he could hear and understand the speech over the speakers. In the cable between the mike and the power amp was a switch. When turned to its other position, it inserted a line amp between the mike and the power amp. The line amp's volume control was adjusted so the volume didn't change. If the line amp were perfect, adding it to the chain wouldn't change anything. And that's what happened with one of the line amps, a unit Sax had used for years and liked. Then he substituted a different line amp, which had better THD and noise specs than the first. When it was switched in, he tells me, the spoken words became unintelligible. To be sure, they were very soft; but lots of things in music are soft, like the dying away of reverberation.

 

 
Reverberation is not just an added feature of musical sound.  It affects the tempo, or speed, of a performance. When I play in a reverberant hall, I may play fast passages slower to keep them from sounding muddy. If, on the other hand, the hall is "dry"—has little reverb—I may play faster so that a slow passage doesn't die on its feet or a not-so-slow passage won't sound sterile. I adjust my tempo within a feedback loop running from me to the piano, the room acoustics, and back to my ears.

     Suppose my playing is recorded and the recording engineer adds artificial reverberation. It may sound lousy, but most important, it's added outside the loop. I don't hear it, so I can't adjust for it. This might make nonsense of the tempo. Might make me sound as though I can't hear.

     What if reverberation is removed instead of added? It's done all the time, unintentionally, by using equipment which doesn't preserve low-level detail, like that line amp that made speech unintelligible. Or the wrong input transformer on a microphone preamplifier. A digital converter which is inaccurate or just doesn't have enough bits. Capacitors with the wrong dielectric. Whether reverb is added or subtracted, the tempo will still be correct, but the sense of the tempo may be lost.

 

 
In return for undermining your trust in traditional measurements and equipment, I'm offering—what? Listening. Whether we want to test the double-comb-filter method or to evaluate a new piece of equipment, we must listen. So I want to convey that for some people, sound is as vivid perhaps as vision is for most. Doug Sax tells about recording engineer Bill Schnee, who crossed through Sax's workroom each morning to get to his own studio. One morning, Sax happened to be playing a recording Schnee had heard before, but through a different preamp than usual. The preamp was not visible, by the way. Schnee took the usual few seconds to walk through the room; said, "Morning. New amp. Sounds good"; and was out the door.

     Or consider clarinetist Margaret Thornhill, who has given many formal and informal performances with me in Dabney Lounge at Caltech. After one concert, for which the piano was tuned by Robert Koning, she said, "This is the first tuning since Ken Brown retired where I could relate to the tuning from the clarinet." That is, she could tell, during a given note, where the piano's next pitch would be; and therefore where her next pitch should be. In the years between these two tuners, various others had tuned the same piano; and she hadn't been confident about the pitches.

     My soprano friend Susan Judy spotted an edit in one of my recordings. I couldn't hear it though I knew where it was. I said, "I can't hear any difference in the piano sound." She replied, "Oh, there's no difference in the piano, but the ambience changes." When I listened for the ambience, I heard the edit. And what caused the difference? A change in reverberation from a different size of audience. 220 people at one concert, 240 at the other.

     Such listeners would be useful to audio designers. But in general, designers and manufacturers don't ‘get’ it about listening. This is why most gear isn't very good. One fellow who makes very expensive speakers seemed to be bragging that he doesn't listen to his own designs. (He also claimed to be a music lover, but didn't know the make of piano in his own home!) [Audience: Laughter.]

     Another designer had the opposite attitude: he designed the microphone preamps and analog/digital converters for a half-million-dollar recording console, and wanted to listen to his circuits as part of the design process; but his employer wouldn't give him the facilities.

     A graduate of my course designed loudspeakers for a famous company in the East. When they were about to spend big money on a new building, he said, "Let's include a direct-feed listening facility like we had at Caltech, so we can listen to the live sound in one room and hear what our speakers do to it in the next room." His colleagues didn't disagree; they didn't even understand what he was talking about. He resigned.

     He had the right idea, though. Listening is most revealing when carried out by comparison with an impeccable original. If you're evaluating a microphone or a speaker, this has to be live acoustic sound. If you're evaluating a recorder or line amp, you can do what Doug Sax did, and compare input to output. But the input should come straight from a good microphone, not a recording, because the highest-resolution sound will be the most demanding and therefore the most revealing.

 

 
It's especially important to have a top-quality source when one of the things being auditioned uses a New technology. When the New corrects defects of the Old, it's easy to hear that fact. For instance, analog recording has speed instabilities called "wow" and "flutter"; digital recording does not; and this digital virtue was obvious from the beginning. What was not easy for everyone to hear were the defects in the digital of that era. Loss of resolution; difficulty with complex textures, so that a note on the flute might sound OK but a piano chord would be a mess; harsh treble that could get very unhappy on soprano or muted trumpet; odd-sounding reverberant decays; and sound vanishing into a black hole below a certain level. ( Note 10 ) Some people had to learn to listen for these defects, and learned faster if they compared against live sound or a direct mike feed. ( Note 11 )

     The New will always have defects; but learning to hear them may take a while. Or maybe not; it depends what the defects are, and how one listens. Moreover, listening-test results will always be statistical, not absolute. A defect is not either absolutely audible or absolutely inaudible; it's audible to some listeners, not to the remainder—until they learn to hear it. At Thomas Edison's laboratory, the guide told us that Edison ran "live vs. recorded" demonstrations, and that listeners found the cylinder-phonograph playback indistinguishable from the live sound! ( Note 12 ) I put those listeners' reactions in the same category with the comment by Herbert von Karajan when Compact Disks appeared. The famous conductor exclaimed, "All else is gaslight!" Like Edison's listeners, he hadn't listened to the New technology long enough to develop the mental categories that would let him hear the defects.

 

 
How music can succumb to bad audio. Many non-musicians don't realize that the sensitive listener can not only tell a clarinet from an oboe but can often identify a particular performer, if the recording is good. Often, though, we find that recordings aren't plausible as representations of whatever instrument we play. As a pianist, I notice that recorded piano attacks—the transient sounds when the hammers hit the strings—often sound plucked instead of struck; sort of an 88-key guitar. This is from incorrect miking. Sometimes, I even hear multiple attacks on each note, due to wide spacing of the microphones.

     And when we go beyond generic plausibility to the individual sound we spend a lifetime developing, this is almost never rendered correctly. Musicians despair of being understood on this subject except by other musicians. My clarinetist friend doesn't care how she's recorded, so long as she can recognize her sound and get useful feedback. No digital recording has given her what she needed; not even 96kHz recording at so-called 24 bits, usually more like 18. Yet she gets what she needs from a modest open-reel analog tape. Don't tell her that she's just used to the euphonic colorations of analog.

     Here's how a playback system damaged my Beethoven. When my first album came out, I sent a copy to a young pianist friend at a conservatory. She wrote back very embarrassed, saying that the first movement of my Beethoven had too many climaxes, and my tone was "bangy." This was crushing. When summer came, she wrote again. At home for vacation, she had listened to the album on her father's system, much better than her own dorm-room player. Now she did not hear the extra climaxes or the banging, and she loved the performance. In a flash, I realized what had been wrong. The recording has a wide dynamic range. When the music got loud enough, it had overloaded her dorm system. Any such passage came out equal in loudness to any other such passage; hence, multiple climaxes. They were graded dynamically in the playing and on the recording, but couldn't be distinguished by the system. Overloading also makes the reproduced tone ugly; but because she was thinking in musical terms, she heard the ugliness created by her system as though it were created at the piano! ( Stereophile magazine ranked this album a "Record to Die For." )

     Here's how a common problem in playback systems could damage Schubert: In the posthumous A-major sonata, the bass comes in groups of four sequential notes, with the first of each group holding through the remaining three. ( Note 13 ) But for two groups, it's not held. This contrast in texture means something to Schubert, but if there were a resonance in the audio system on either of these unheld notes, or a broad resonance in the general area of their pitch—as many cheap loudspeakers do have—these notes might seem to be held when they're not; and the textural contrast would be damaged.

     Here's how mis-design of a playback component can frustrate the listener: The KLH company made a small two-way speaker that came with a box to connect between your preamp and power amp. After calibration, when you played soft music, the speaker would go down much deeper in the bass than you would expect from its tiny woofer. As the music got louder, if the low frequencies were still present, the box reduced them electronically to prevent the woofer cone from traveling too far and damaging itself. You could play the speaker as loud as you liked, and it would always give you the most bass consistent with its own safety. This was clever; but consider its musical impact. At climaxes, things tend to be loud and full-bodied; and the speaker's generous bass on soft music led you to expect the same when the music swelled. But at precisely those moments, you did not get it! The ultimate audio tease.

     Here's how musical damage was narrowly averted in one recording: I helped out on a recording of the Kodo drummers from Japan. Their dynamic range is enormous. At our mike position 30 feet from the loudest drums, we tried three different mikes. First was a condenser with a one-inch diaphragm. At the loudest moments, the diaphragm hit the stops thup thup thup: unusable. Second was a five-eighths-inch condenser; the diaphragm didn't hit anything, but the character of the sound changed a lot between soft and loud passages. We were nervous. The third mike was a ribbon; and fortunately for us, it sailed through everything with no change of character.

     And here's a subtle example of how a recording might damage music: In Bizet's First Symphony, there's a famous oboe solo in the second movement. Its melancholy feeling gains force from the very sound of the oboe, and Bizet emphasizes this by having no woodwind sound at all up to the oboe entry. If the recording homogenizes timbres, as early digital recording did, and as some microphones do, the tone-color contrast and the feeling will be reduced.

 

 
Do such details matter? Yes. Compare the interpretive musician's task to the actor's. The actor interprets Hamlet. Each speech, each action must be effective in itself, and the totality must be convincing. We must be convinced that Hamlet, up on the stage, is indeed a person. Not trivially convinced by seeing that he's the same actor from scene to scene; but artistically convinced by the consistency of speech inflections and the way they change under the pressure of emotion; by gestures that reflect the same mind that inflects the speech; and so on. Change any of them, and the meaning may well change.

     A musical interpretation is vulnerable to misrepresentation, too, whether by a piano out of tune, an amplifier losing low-level detail or sounding ugly at high levels, or a speaker whose diaphragm stores energy and gives it back later, cluttering up the sound.

     Works of music have souls as Hamlet has a soul. The soul of the music is shown through musical gestures, the dance of the rhythm, the turns of melody, sequences of harmony, and the expectations built up by the structure. To communicate this to my audience, I may practice a thousand hours for the first performance of a concert. I also choose the piano that sounds best, and the best person to tune it. ( Note 14 ) I ask the house manager to turn off fans and air-conditioners to reduce background noise and especially to eliminate pitch in the noise. I take as my responsibility not only what I do at the keyboard, but what arrives at listeners' ears. And I do the same for my second audience—the people who listen to my concert recordings—by using my ears to help me choose the audio gear and processes to make good recordings.

     Many musicians are doing this on their own, and would benefit from symbiosis with double-Es. Musicians may not understand under what circumstances ears can judge fairly. The engineer understands that when you compare two units, polarities must be matched. If the diaphragm of one speaker comes toward you on the trumpet blast, and the other's recedes, you can't compare them 'til you reverse the connections to one of them. Levels must be matched, too, as I learned when I preferred microphone A to microphone B in a blind comparison, then found out that there was only one microphone, but version A was playing two-tenths of a decibel louder than B. (Louder sounds better, so long as nothing's clipping.)

 

 
Performing listening tests blind or double-blind is also important—when it's possible. As you know, in a blind test the listener doesn't know which sound is coming from which speaker, or amplifier, or whatever. In one that's double-blind, the operator of the test doesn't know either, and therefore can't influence the listener; and the experimenter—different from the operator—decodes the results afterwards. I tell my students, "First be able to tell two things apart without knowing which is which. Then you get to have an opinion about which one is better."

     But notice that, if you're comparing two microphones or amplifiers with different noise floors, and your listener picks up on the difference, the test won't be blind, no matter how carefully you go through the motions. My class once took a computerized audio double-blind test. One student got a perfect score, and it turned out that he had noticed that one of the sounds began with a subtle ‘click.’ For him, the test wasn't blind. But what if others unconsciously picked up on the click? Was it blind for them? We must doubt it. In fact, we can be most confident a test is blind only if the results come out negative: if no one can tell A and B apart. Then we're pretty confident that no one heard that click, consciously or unconsciously.

     But then, we must think about what negative results mean. They do not necessarily mean that there was no difference between A and B. They mean that either there was no difference, or defects in the rest of the audio system masked the difference, or none of the listeners were acute enough. A truly scientific attitude is not to say that A and B are indistinguishable, but to do another test with a higher-resolution system—and possibly better listeners.
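     The arithmetic behind this caution is simple binomial probability. This sketch (the trial counts are invented for illustration) gives the chance of scoring k or more correct out of n two-way trials by pure guessing:

```python
from math import comb

def p_by_chance(n, k):
    """Probability of k or more correct answers in n two-choice
    trials if the listener is purely guessing (binomial tail)."""
    return sum(comb(n, i) for i in range(k, n + 1)) / 2 ** n

# 12 correct out of 16 trials: unlikely to be luck.
print(p_by_chance(16, 12))   # ≈ 0.038
# 6 correct out of 10 looks "negative," but guessing alone
# produces such a score more than a third of the time.
print(p_by_chance(10, 6))    # ≈ 0.377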

     And what about the un-blind listening that musicians do every day? A musician's life's work consists of creating meaningful sound non-blind, of developing ears that are reliable non-blind, and of using those ears objectively under the most difficult conditions, namely while involved in playing. ( Note 15 ) Are we going to say that the musician's sonic judgments are worthless? It's amusing that when I talk about the sound of various pianos—Boesendorfer, Kawai, Bechstein, Steinway—my comments are rarely challenged. And when I teach my piano students to make exquisite adjustments of their tone to heighten the musical meaning, their playing sounds better. But if I comment on the same things in reproduced sound, I hear doubt from certain quarters because I didn't listen blind or double-blind.

     Don't go away and say, "Boyk doesn't believe in blind tests." I believe in them; ask anyone who's worked with me! I just recognize that sometimes they're inherently impossible; and in this situation, we mustn't fool ourselves. And I feel confident that a carefully done test, even if it's not blind, can still be meaningful. A lifetime of objective listening as a musician tells me so.

 

 
Audio passions. One of my uncles was a successful attorney with a judicial temperament, but waxed passionate about the superiority of cactus needles over steel for his acoustic phonograph. (It was in his summer cottage when I was a kid.) Thanks to him, I haven't been surprised by the passion of audio arguments. They're all the same argument, I've noticed, whether about tubes versus transistors, analog versus digital, or capacitors with different dielectrics.

     Walt Jung and Dick Marsh wrote a paper pointing out that differences in dielectric absorption and dissipation factor might account for sound-quality differences among capacitors. ( Note 16 ) Before them, some people were using the phrase "lunatic fringe" to describe those who thought that a 0.05 uF Teflon capacitor could sound different from a 0.05 uF polypropylene.

"Lunatic fringe" is a good audio put-down, like "euphonic colorations." Fifteen or twenty years after I learned about the euphonic colorations of vacuum tubes, I learned about the euphonic coloration of analog recording when I pointed out the defects of early digital. Again I was told that the new technology was more accurate. (No reproduced sound is too ugly to be called accurate.)

 

 
The most important thing to know about audio is that music is beautiful! It draws you in rather than putting you off. You want to hear more of it. No matter what audio does right, if the original was beautiful and the reproduction is not, something fundamental is wrong. For those of us whose lives are involved with this beauty, the audio situation is unsatisfactory. But that's just on Thursday, April 8, 2004. Maybe tomorrow—maybe today—we engineers will learn how to do a better job for us musicians.

 

 
Thanks to: John Atkinson, Allan Belcher, Carol Boyk, Martin Colloms, Louis Fielder, Jeff Greif, Walt Jung, Doug Sax, Peter Shelswell, Gerald J. Sussman, Julie Sussman, Peter Sutheim, Margaret Thornhill, Rick Walker—and again Shayan Mookherjea.

 

 
And thanks for your attention.

 


Notes


1.  Deryck Cooke, The Language of Music, Clarendon Paperbacks. ISBN: 0198161808


2.  Louis D. Fielder, “Dynamic-Range Issues in the Modern Digital Environment” JAES Vol. 43, No. 5, 1995 May. “Abstract: The peak sound levels of music performances are combined with the audibility of noise in sound reproduction circumstances to yield a dynamic-range criterion for noise-free reproduction of music.... A dynamic range of over 120 dB is found to be necessary in the most demanding of circumstances, requiring the reproduction of sound levels of up to 129 dB SPL. Present audio systems are shown to be challenged to yield these values.” Measuring levels at normal listening locations, Fielder found classical music peaking up to 118 dB; unamplified jazz to 127, and amplified rock up to 129.
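As a quick check of what Fielder's criterion implies for linear PCM (this calculation is mine, not his, and assumes the usual ~6.02 dB per bit):

```python
import math

# Bits of linear PCM needed for a 120 dB dynamic range,
# at 20*log10(2) ≈ 6.02 dB per bit:
bits = 120 / (20 * math.log10(2))
print(math.ceil(bits))   # 20; CD's 16 bits fall well short
```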


3.  "God made the integers; all else is the work of Man." —Kronecker


4.  R. A. Belcher, "Audio non-linearity: An initial appraisal of a double-comb-filter method of measurement." RD 1977/40, BBC Research Dept.


5.  By the early 1950s, alterations to the standard THD measurement were suggested as correlating better with subjective judgments. In "the red book," Radiotron Designer's Handbook, 4th ed., 1953, Chapter 14, "Fidelity and Distortion," part (vi), "The search for a true criterion of non-linearity," p. 610, we read, "It is generally admitted that the value of total harmonic distortion does not provide a true criterion of non-linearity between different types of amplifiers under all possible conditions...." Note on the other hand the reference to an "interesting investigation by Shorter" [D. E. L. Shorter of the BBC, co-designer of the LS3/5A speaker and the ribbon microphone now known as the Coles 4038] which "has indicated that a... drastic weighting [of harmonics by their order]... including all harmonics with at least 0.03% amplitude, showed distortion values in the correct subjective sequence." I don't know why this measurement was not adopted; whether it did not pan out, or was too much trouble to make with the equipment of that era.
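Shorter's idea is easy to sketch in code. The weighting below, the nth harmonic scaled by n²/4, is a commonly cited form of his proposal; treat the exact weight and the example amplitudes as my assumptions, with only the 0.03% inclusion floor taken from the text:

```python
import math

def weighted_thd(fundamental, harmonics, weight=lambda n: n * n / 4.0,
                 floor=0.0003):
    """Order-weighted harmonic distortion in the spirit of Shorter's
    proposal: each harmonic amplitude is scaled by a weight that grows
    with its order (n^2/4 here is an assumed form), including only
    harmonics at or above a relative amplitude floor (0.03%)."""
    total = 0.0
    for n, amp in harmonics.items():
        if amp / fundamental >= floor:
            total += (weight(n) * amp) ** 2
    return math.sqrt(total) / fundamental

# Identical unweighted THD, very different weighted figures:
low_order  = {2: 0.010, 3: 0.003}   # mostly 2nd harmonic
high_order = {7: 0.010, 9: 0.003}   # mostly 7th harmonic
print(weighted_thd(1.0, low_order))    # ≈ 0.012
print(weighted_thd(1.0, high_order))   # ≈ 0.137
```

Two amplifiers with identical unweighted THD thus measure very differently when one pushes its distortion into high-order harmonics, which is the direction of the subjective rankings the Handbook describes.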


6.  Personal communication from Mr. Martin Colloms, author of High-Performance Loudspeakers. He reported on burst tests in British audio publications. He states: "The context was gated tone burst drive of loudspeaker systems well beyond their continuous rated thermal limits but within program peak ratings. Thus a 200W rated system may only survive continuous tone at low frequencies of 50W[,] at 1k[Hz] 25 W[,] and at 5k[Hz] 8W. This was pertinent in the mid to late 1970s. [If you] tone[-]burst say on a 1:5 mark space and FFT the clean central region...you may find interesting results. For example when the series inductor to the bass low pass filter has a ferromagnetic core the low current value may be 6mH, and this defines the output say at 500Hz. Driven hard it may saturate prematurely (if not sufficiently generously rated), and on peaks will collapse to a mean, non-linear value of say 3mH. The crossover misalignment may result in both network peaking due to altered Q values and sound level increase directly due to the failing inductance value. In combination there is both accelerated distortion and a rise in level, hence the expansion of the observed tone burst amplitude. It is a difficult test to do consistently over the frequency range and for differing loudspeaker types and the magazine market eventually would not bear the cost. It did explain why some loudspeakers sounded artificially dynamic at full operating level, and most these days use non-saturable air core inductors. Another aspect of apparent 'expansion' I have related to the typical third harmonic loudspeaker distortion. This can harden the mid tonal quality shifting lower band energy into the more aurally sensitive region lending an artificial loudness. The Quad [British electrostatic speaker; nothing to do with surround-sound] does not suffer this and is a good basis of comparison for some electromagnetic transducers. 
Where I have chosen to minimise third harmonic by motor design, the 'shouting' effect is substantially lessened. Indeed some listeners[,] used to the old, hear it as a lack of dynamics! Another aspect of subjective dynamics is response smoothness/delayed resonances. Poor performance here tends to [produce] artificially boosted subjective loudness and increased listener fatigue."
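Colloms's inductor example can be put into rough numbers. Assuming a purely resistive 8-ohm load (my simplification; real loudspeaker loads are complex), a series inductor L into load R forms a first-order low-pass with corner frequency fc = R/(2πL):

```python
import math

def lowpass_gain_db(f, L, R=8.0):
    """Magnitude response at frequency f of a series inductor L
    feeding a resistive load R (first-order low-pass), in dB."""
    fc = R / (2 * math.pi * L)
    return -10 * math.log10(1 + (f / fc) ** 2)

# The 6 mH inductor vs. the same part collapsed to 3 mH on peaks,
# into an assumed 8-ohm load:
print(round(lowpass_gain_db(500, 6e-3), 2))   # -8.16
print(round(lowpass_gain_db(500, 3e-3), 2))   # -3.78
```

In this toy model the collapse from 6 mH to 3 mH raises the level at 500 Hz by about 4.4 dB, the kind of spurious 'expansion' described above, before even counting the altered crossover Q.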


7.  The 20W amp was the NAD 3020; the 100W amp was the Quad 405. I remember being struck by this test reported by Martin Colloms. In a personal communication, Mr. Colloms, whose files are in storage and unavailable, confirms that, "The NAD had better peak current and voltage into a complex load."


8.  R. A. Belcher, "An experimental investigation of test noise signals for the measurement of non-linear distortion of sound signals." Ph.D. thesis, University of Surrey, England, 1977. I have not yet read this thesis, nor the following list provided by Dr. Belcher.

Belcher, R. A. "A new distortion measurement," Wireless World, May 1978 pp. 36-41.

—, "A double comb filter method for testing of quantisers," pp. 79-82, IEE ADDA99 3rd International Conference on Advanced A/D and D/A conversion techniques and their applications, IEE Conf. pub. 466, University of Strathclyde, Scotland, July 1999.

Irons, F. H., Saucier, S. G., and Belcher, R. A., "Band separation and the noise power ratio test," pp. 149-153, IEE ADDA99 3rd International Conference on Advanced A/D and D/A conversion techniques and their applications, IEE Conf. pub. 466, University of Strathclyde, Scotland, July 1999.

Belcher, R.A., "Multi-tone testing of quantisers using time and frequency analysis," pp. 173-177, Proceedings of IMEKO TC4 4th International Workshop on ADC modelling and testing, University of Bordeaux, September 1999.

—, "ADC standardisation: the need for multi-tone testing," pp. 141-148, Computer Standards and Interfaces Journal, Vol. 22, No. 2, June 2000, Elsevier Science B.V.

—, "Multi-tone testing of quantisers using PRBS signals," pp. 269-279, Computer Standards and Interfaces Journal, Vol. 22, No. 4, October 2000, Elsevier Science B.V.

Belcher, R. A., et al., "A DSP based test system for 'NOISE-SEPARATION' measurements on ADC and DAC systems," IEE ADDA2002 4th International Conference on Advanced A/D and D/A conversion techniques and their applications, Technical University of Prague, Prague, June 2002.


9.  Signal Conversion Ltd., http://www.signalconversion.com


10.  James Boyk, "There's Life Above 20 Kilohertz! A Survey of Musical Instrument Spectra to 102.4 kHz," gives waveforms of muted trumpet and other instruments. (Self-published at request of members of standards-setting panels for high-resolution digital audio, to get results out more quickly.)


11.  —, "Rules of the Game," Hi-Fi News & Record Review (England), Jan., 1983.


12.  Confirmed in a published letter from a historical expert—possibly a historian at Greenfield Village, in Dearborn, Michigan, where Edison's laboratory is now located—but I cannot remember where the letter appeared.


13.  D[eutsch catalog] 959. The huge sonata from the composer's last year, not the "little" A major sonata, Opus 120.


14.  James Boyk, "The Endangered Piano Technician," Scientific American, December, 1995, page 100. Reprinted in Piano & Keyboard, May-June, 1996.


15.  —, To Hear Ourselves As Others Hear Us: Recording as a Tool in Music Practicing and Teaching. St. Louis: MMB Music. ISBN 0-918812-87-9. "The hardest thing in the world is to be objective while involved. Performance involves us physically, mentally, emotionally, and spiritually, so hearing ourselves objectively is four times difficult for us." —Session [Chapter] One.


16.  Walter G. Jung and Richard Marsh, "Picking Capacitors: Selection of Capacitors for Optimum Performance," Audio magazine.
Part 1: February, 1980.  Part 2: March, 1980.



 
