# Amar Bose: 6.312 Lecture 24


AMAR BOSE: It's to remind me of the number of topics I want to try to cover today. More than have ever been covered in one talk, but we'll see why. First, something left over from last time, namely, air absorption.

We took into account the absorption on the surfaces of a room, large irregular room, and got the decay of sound, from that, derived reverberation time. But we assumed that, as the wave traveled through the air, there was no loss.

Now, in fact, that's not the case. To be sure, the loss is small. You normally don't think of that in communication in the room. But when you get to have rooms that are very large, air loss, actually, turns out to be a significant factor. Not below a few hundred hertz, but when you get up to 5 kilohertz or so, it can be significant. So it's a frequency selective kind of absorption. The physics of it, which won't be of our concern today, basically come from two aspects.

One, the viscous friction of the gas, as we've seen in the tubes that have the narrow slits, and molecular absorption in the air. The amount of absorption is dependent on humidity, somewhat on temperature. There are nomograms in the text, which are interesting just to look at once. God forbid you'll need them. But if you do, they're very handy. But there's a whole set of rules, as you enter here, and you go up this way, and you then take that data, and you enter this one. And, eventually, you come out with what the absorption is, at a given environmental condition. It's absorption per meter.

Turns out that the absorption in the air, as you, by this time, have come to expect in many things, is-- the energy absorbed, when a wave travels through a unit distance in air, is proportional to the energy present. And when the energy absorbed is proportional to the energy present, you know, by now, that the curve of energy versus distance would be exponential. And it's normally expressed as e to the minus m times the distance that you go. e to the minus mx, if you wish.

d, for us, is the mean free path; remember, that's actually what we're going to need in the derivation. If this is the mean free path, then the energy, not absorbed, but the energy at one mean free path, d, would be d 0 e to the minus md. It's decreasing as you go. In a mean free path, the decrease is very, very small, a small percentage. Because a mean free path, in a big room, might be 40 feet. A room about, let's say, 100 feet by 50 by 50-- you can calculate that-- 4V over S. But take a room about 100 feet deep, 50 feet wide, 50 feet high, it'd be about 40 feet.
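The 40-foot figure and the smallness of the per-path air loss can be checked directly. A minimal sketch; the absorption coefficient m used here is an illustrative small value, not one read off the nomograms:

```python
import math

# Room dimensions from the lecture (feet): 100 deep, 50 wide, 50 high.
L, W, H = 100.0, 50.0, 50.0
V = L * W * H                      # volume, ft^3
S = 2 * (L * W + L * H + W * H)    # total surface area, ft^2

d = 4 * V / S                      # mean free path, 4V/S, in feet
print(d)                           # -> 40.0, as quoted in the lecture

# Energy surviving one mean free path of travel through air: E = E0 e^{-md}.
# m here is an assumed 0.001 per foot, just to show the loss is a few percent.
m = 0.001
print(math.exp(-m * d))            # ~0.96: the decrease per path is small
```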

By the way, why is it smaller than the minimum distance between two walls? Why is the mean free path smaller than that? Yeah?

AUDIENCE: [INAUDIBLE] from point [INAUDIBLE] one wall, at the corner, to the other one is very short.

AMAR BOSE: Exactly. Don't think of modes only existing this way, this way, and that way. From here to there is very, very short. And there's an enormous number of waves going along here, as I speak, that are just taking very, very short paths.

OK, so now, if you look in the reference text, in Beranek, air absorption will look like a very complicated thing. And there are quite involved derivations. But it turns out it's actually very simple. The reason those derivations look complicated is because, for each different case, like reverberation time, they're trying to derive a modification of the end expression as a result of air absorption. And then if you have a different type of expression, you derive for that.

It turns out that you can do this extremely simply, by simply considering, since the air absorption in one mean free path is very small, you lump it in with the wall absorption, get an equivalent absorption coefficient. Not just the alpha bar for the wall, but a new alpha bar. An alpha bar t, if you want to call it, sub t, which includes air and the wall. Because every time the energy travels through one mean free path, it hits a wall. So consider the path plus the wall as a new wall and no absorption in the air, and the whole thing becomes very simple.

Let's consider, from here to here, what happens? How much energy is absorbed? At this wall, we know, from before, that alpha bar is the energy absorbed divided by the energy incident. Now all we have to do is find out what happens in here. So right, leaving this surface, we'll say we have d 0. And out here, let's say, we have some dr.

What's transpired between these two is that we've had some loss here, which diminishes very slightly the energy that's incident on this wall. And then we have the loss due to the wall, and that's it. So if we look at it this way, alpha bar total, the new thing that we want, the new equivalent absorption for the wall that would give you the right expressions for everything you've derived for the reverberant field, for the reverberation time, as if you had no energy lost in the air, that's what alpha t is. It's the new alpha that you could replace this alpha by and have no air absorption and get the same results.

So, let's see, that's what we want to get. There's a lot of ways we can go about this. We can find the dr. You can write an expression for dr in terms of d 0. So we have energy lost in the air here. So let's see what that would be. That's d 0 times 1 minus e to the minus md, where d is the mean free path. This is the energy that entered, over here, d 0. And d0 times e to the minus md is the energy that gets to this surface, incident upon this surface. d 0 1 minus e to the minus md.

Now, that takes care of the air. Now we have to, as we did before, take care of the energy lost at this wall with alpha. So if this is-- ooh, just a moment. What I want here? Yeah, this is the energy absorbed in this. Hang on. Energy absorbed here, energy absorbed at the wall. Energy absorbed at the wall is d 0 e to the minus md. That's what arrives at the wall. I made a mistake. I'm very sorry.

d0 starts out here. d0 e to the minus md is incident on the wall. That times alpha bar is the energy absorbed by the wall. So this is, by the wall. Now, let's see if I have this right. And this is energy absorbed in the air. This is what started at this point. d0 times this is what got here. The difference is what is absorbed by air and then by wall.

OK, if we haven't made any other mistakes. That, divided by incident energy, is the definition of the total absorption. Now, md turns out-- by the way, this thing, m, is called, in acoustics, in architectural acoustics, the absorption coefficient. And, of course, its dimensions are 1 over distance. So these d 0s go out, and md is much, much, much less than 1. So e to the minus md is approximately 1 minus md.

And I can make the Taylor series expansion of the exponential. The first two terms are 1, and the next term is plus the exponent, which is a minus. So that's minus 1 minus md. Plus. If I make the Taylor series expansion of this-- everything's positive here, so I don't have to worry about two terms canceling each other-- Taylor series expansion of e to the minus md is, again, 1 minus md. But md is much, much less than 1 here, and it's multiplying the whole term. So I can put down here, for that, just md times alpha. Oh, I'm sorry. 1 times alpha.

Let me write the whole thing. 1 minus md, times alpha, is the expansion of this times this, times alpha. So this is approximately 1. I can neglect this. So this is alpha bar, the way we had, I think, normally addressed it. That's it. So, finally, alpha t is then 1 minus 1, that goes out, plus md, plus alpha. So it's alpha bar plus md. That's the whole story, as far as modeling air absorption goes. We now take this thing here, and we replace it by alpha t, with no air absorption, in all the relations that you get. Now if reverberation time doesn't explicitly have an alpha in it, remember it has a room constant. We did have a room constant too? I think so. In any case, the reverberant field also has a room constant, which is S alpha bar over 1 minus alpha bar. Just go back far enough from the end result, where you get alpha in it, replace alpha by alpha plus md, and it's all over. You've handled air absorption.
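The approximation just derived can be checked numerically. A minimal sketch, using illustrative values of alpha, m, and d (not measured ones): the exact equivalent absorption for one mean free path of air plus one wall hit is compared against the lecture's alpha + md shortcut.

```python
import math

# Exact equivalent absorption for one mean free path plus one wall bounce:
#   alpha_t = [D0(1 - e^{-md}) + D0 e^{-md} alpha] / D0
#           = 1 - e^{-md} (1 - alpha)
def alpha_t_exact(alpha, m, d):
    return 1.0 - math.exp(-m * d) * (1.0 - alpha)

# The lecture's approximation for md << 1.
def alpha_t_approx(alpha, m, d):
    return alpha + m * d

# Illustrative numbers: alpha = 0.2, m = 0.001 per foot, d = 40 ft, so md = 0.04.
a, m, d = 0.2, 0.001, 40.0
print(alpha_t_exact(a, m, d))    # ~0.2314
print(alpha_t_approx(a, m, d))   # 0.24
```

The two differ only by the dropped md-times-alpha cross term, which is second-order small.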

Now, on the first day, we drew a picture. We said this would become more meaningful towards the end of the subject. The picture was something like this. Three sets, all the points in this set represent devices, physical devices. All the points in this set represent measurements, different measurements that you make on these devices. And all the points in this set represent perception.

Now, this part that links these two, links the physical device and the measurement, is called physics. This part, which links the measurements and the perception, is called psychophysics. Now, in our case, most of it has been psychoacoustics and physical acoustics.

But everything I'm going to talk about today, like most of the subject, is not so that you learn-- I'm going to give you examples, mostly in psychoacoustics-- but I don't care, at all, about whether you learn that discipline. What I want you to keep in mind and, perhaps, one of the most important things in your career, will be the fact that most of what most of you do, in engineering, will be to design things that are ultimately destined for appreciation by a human. And where we, as engineers, get stuck, all of our education is from here to here.

And the people that have the education from here to here, very often, are ones who are going to go on and study the human, make a career in studying the inner ear, or parts of the ear, or vision. And they would study the psychovisual things, the relations between stimuli that you can measure and what you perceive. And hearing, the relationship between pressures, and incident angles, and spectrum, and what the person perceives. And so, generally, you find this discipline is taught to people who are going to work on the human. And this discipline is taught to the engineers. And there's a heck of a gap in here. In other words, nobody connects them all. And this is terrifically important, as we will see from some examples.

So whatever discipline you go in, as an engineer, don't let yourself be put in a box, and some marketing type tells you, I want to design such and such with the specifications. Ask why you want those specifications. Because you can spend a lifetime designing, from here to here, for things that are not perceptible by the end user of a product. And that isn't very satisfying.

So let's talk about some. Well, first of all, let's introduce some test methods. When we first got interested in this whole thing, and that was because, as many of you know, I happen to have purchased a stereo system. And, as an engineer, I'd finished nine years here, and I thought I understood it. And I believed all this stuff, like motherhood, flat frequency response, uniform phase, and all this and that, and it sounded terrible. And that's what actually, first, got us involved in this whole thing.

After three years of work, those of us that were involved decided to bail out, basically. And it happened, because we figured that maybe the best judges of everything, of sound, were musicians. So we hired some musicians from the Boston Symphony. Come to think of it, I guess we did hire them. I don't know whether they volunteered or we hired them. But in any case, to present different sounds to them and ask them which was more realistic. And we did this, only after having done it with groups of MIT staff and students, and discovered that you got a whole spectrum of different results. And that if you performed exactly the same experiment a month or two later, on the same group, you got different results.

And so we said, my god, there's a fundamental problem here. And the problem resides in this space up here. Here is what you would be trying for. We'll call that t, the true sound, the sound that you would have in a live performance if you went to Jordan Hall or Second Best Symphony Hall. Basically, this is the sound that will define that point as the sound. Well, when you reproduce it, now, I think you can see, from all the things that we've done during the term, that there's no hope of getting the same pressure waveform at your two ears that you got in the live performance, when you're in a room with the mean free path very, very small and completely different reverberation times. You can't do it.

So what we realized, from the fact that you could give people different tests, you get all different results, was that there's no measure on this space. And that's what you need in math: when you define a set, the first thing you want to do is get a measure on the set. So there's nobody that can tell you that this point here is closer to true than this. And it always happened that when you asked musicians, they would, finally, say-- well, you have to sort out the musicians also, because some of them, literally, only listen to their own instrument in the whole orchestral performance. We used musicians that were always playing non-amplified, non-electronic instruments, because the electronic instrument isn't defined. It has a whole pile of buttons and knobs, and they're turned differently in each case. So nobody knows what the instrument should sound like. Whereas an acoustical instrument is relatively fixed.

So what would happen is some musicians would only pay attention to that. Composers would only pay attention to the overall performance. RCA, I talked to the people who used to make the-- for the recordings, RCA had, not Leopold Stokowski, Toscanini as the conductor who they recorded for. And months, and months, and months would go by, and they would invite him in to hear the sound that they were mixing. And he didn't like it.

And they would make all sorts of changes, boost the bass, boost the treble, boost everything, cut everything, still didn't like it. And it took them a long time, they said, to realize he wasn't listening to the frequency characteristics, which were the only things that the engineers knew how to fool around with. He was listening to the performance, and that's what he didn't like. And so there was no convergence on that at all.

So you have to watch out, when you ask these questions, but, in the end, when you get the group who you think is responding to what you want, they will tell you that, look-- I think I mentioned this once before-- that what they would say to us, what you're asking us, is which is more like the lemon, the apple or the grapefruit? The apple has a size more like a lemon, it's a little smaller. The grapefruit has the texture like the lemon, in the appearance, the color. Now how are you expecting me to answer you, which one is more like the lemon? And so you get those kind of answers.

And so, at that point, we decided this was a crazy field to be spending much time in, and better get out. Because nobody seemed to have any answer. The lay person had all sorts of past experience that dictated it. The famous RCA experiment, which I think I've mentioned, was where they put thousands of people through this experiment, and-- I think I mentioned that earlier, no? Yeah? How many think I did? I didn't.

This was in the '50s. They had a podium set up. They had a little control box on here and two knobs. And they had a door there and a door there. And they had a good sound system set up. Dr. Olson, who was one of the most knowledgeable people, ever, in the field of acoustics, acoustical engineering. And they brought the people in, and they said, turn the knobs until you like the sound. There will be music playing when you enter, turn the knobs till you like the sound, leave. And the knobs only had numbers on them. Jesus, must be going back to last year or something, I have a feeling I've said this.

But anyway, turn the knobs until you like it the best and leave. And these knobs were nothing more than bass and treble controls. Raise the bass, lower the bass, one knob did. Raise the treble, lower the treble, at about 1 kilohertz center frequency.

Well, it turned out that the vast majority-- I don't remember the numbers on this, now-- but it was overwhelming. They turned the thing into a table model radio. If here's the response, it was going down at about 200. AM was the only thing at the time, basically, when they were doing this, so it went down at about 5 kilohertz.

By the way, this is important. They stopped people at the exit who had adjusted the thing to be flat. And it turned out that they had some connection to live music, even going to a play across the river-- this was done in Camden, New Jersey-- going across the river to Philadelphia and seeing a play, which, in those times, didn't have amplified sound. And they opened the thing up. That's a very significant thing that got ignored in the experiment.

And what they concluded, RCA marketing, and this is why RCA did not enter the hi-fi field. They concluded there was no market for it, because this response happened to be as close as you could get to the response of an AC/DC table model radio of the time, which people grew up with. And of which, believe it or not, according to the President of ABC, there are approximately seven per household in this country today.

So the RCA marketing said, my god, people prefer what they are brought up with. We just offered them a fantastic system, they don't like it. They said so. They adjusted it so that it sounded like the radio. So this hi-fi is for the birds. So turns out, if anybody had stood beside them-- this is 20/20 vision afterwards-- if anybody had stood beside these people on the podium and said, oh, by the way, did you notice when you turned this knob down, that bass drum, right there, disappeared? Don't you like bass drums? I just wondered why you turned it out. And you would see the person, oh, my God. Yeah, he would turn it up. Same thing with the cymbals at the other end. Don't you like the cymbals?

So it turns out that once introduced to something, it's like a painting. If you go in there. All of this stuff in the perception area, the different senses, is similar. If an amateur, an absolute amateur, goes into an art gallery and sees different works there, he can't tell a good one from a bad one. He gets an overall nice feeling, maybe, on a good one.

But if some expert came and just showed him the most elementary thing. Like, notice the shadows in this one picture, they all have the same, they're all coming from the same point of illumination. And in the other picture, they aren't. He'd go, oh. Then he would be able to go down the line and see that, on that he could now judge one parameter.

So if introduced to things, people will change. So the question is, how do you get something that comes closer to here, because the composers compose for something that will touch the human emotion. And we as engineers, horsing around with a few controls, if we can ever improve on a Mozart or a Beethoven it would be a strict miracle, I would think. And so we better try for this. But if you can't get a measure on it, what can you do?

So we came very, very close to bailing out on the whole program. And then we realized that there was something you could do. And again, I'm telling you all this, not for acoustics, but for the disciplines that you go into one day. The route around establishing a measure on this, because there's no way. I mean, if we can establish a measure on this, then we can have a machine, which looks in an art gallery and will tell us the best artist.

I think we are a little way from that, like infinity. Basically, way around it is very time consuming, very difficult, but it's there. You can learn what people can perceive and cannot perceive. In other words-- I have a feeling I mentioned this too, already-- in the video, they did this so well, when the color television was developed.

Everybody said you need 12 megahertz for color, because you need four for black and white, 4 times 3 is 12. Well, it turns out, if you do the psychovisual stuff, if you do the experiments, you hold up threads or something that subtends a narrow enough angle, it turns out that people can only tell the intensity of that. They cannot tell the color. So you get it down to 600 kilohertz of color. Now audio never did this job. And as a result, people have paid fortunes over the decades for stuff that they couldn't hear.

So what you can do is to find out what people can hear, what's important in design. For example, you can find out what, in the spectrum, is important. If you have a spectrum like this, suppose you put a little peak in here, certain width, a certain height, at that frequency. And then investigate dips. How far down can you go before they hear it, with a given width? You can do that at all frequencies. The whole object of this is to use the human mechanisms of perception without using subjective judgment. In other words, don't ask them, like we were doing initially, which one is better, which one is closer. That's using their subjective judgment, and you get all over the map.

But if you've made the exhaustive study, in this simple parameter, of how much of a dip you can have, how wide it is before you can perceive it, at a given dB down, and then the same thing up, at all different frequencies. Then you know that, maybe, it's not important to correct for a system that has things like this, that are much narrower than what you can hear.

Thank god, because you just heard, the other day in a demonstration in here, you heard all these normal modes. And you're wondering, oh my god, how come the music sounds the same to my neighbor as it sounds to me when you sit in a concert hall with 50 million, 100 million modes below 10 kilohertz? So then you know you don't have to correct for those. They can't hear it.

Everything, in engineering, that you know you don't have to do gives you freedom. Because in the end, you're going to be constrained to a design that the people can buy. When you design an airplane, for example, if somebody told you to do the best designed airplane that would never crash, nobody would have the money, very few would have the money to fly on one. So there's always a constraint. There is no engineering without cost.

So everything that you are free from, that you don't have to design for, gives you freedom to design in another dimension that is perceptible to the people. So the primary tests, in this discipline of finding out, painstakingly, what people can hear and what they can't, are psychoacoustic tests. And the most common one, the one we've had to use most, is AX.

You give a person many samples. You tell them, in advance, you're going to hear a sample, maybe it's a few seconds long. And then it'll switch to another sample, which is the unknown. And the unknown could be either A or B. It's called an AX test. You know, because you're generating the signals, when it's B and when it's A, in this whole sequence. They don't know that. They're asked just to press a button.

So you don't say what's better, what's worse. If you hear a difference, press a button, finished. And you can go right down, and you can explore these characteristics. Having explored them, does not tell you which is closer, in the end result, but it tells you what people can hear, what they can't hear. And you can then optimize what they can hear. And, with luck, that'll bring you closer here. And judgment here has to be, again, the human.
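The AX procedure above can be sketched as a simulation. This is a minimal, hypothetical model, not the actual experimental setup: the listener's detection probability `p_detect` and the 5% false-alarm rate are assumed numbers, there only to show how hit rate against false-alarm rate separates "can hear a difference" from guessing.

```python
import random

def run_ax_test(n_trials, p_detect, seed=0):
    """Simulate an AX test: each trial presents reference A, then unknown X
    drawn at random from {A, B}; the listener presses the button when a
    difference is heard. p_detect is a hypothetical listener model."""
    rng = random.Random(seed)
    hits = false_alarms = a_trials = b_trials = 0
    for _ in range(n_trials):
        x_is_b = rng.random() < 0.5        # experimenter knows; listener doesn't
        if x_is_b:
            b_trials += 1
            if rng.random() < p_detect:    # real A-vs-B difference noticed
                hits += 1
        else:
            a_trials += 1
            if rng.random() < 0.05:        # assumed small false-alarm rate
                false_alarms += 1
    return hits / b_trials, false_alarms / a_trials

hit_rate, fa_rate = run_ax_test(10000, p_detect=0.8)
print(round(hit_rate, 2), round(fa_rate, 2))   # hit rate well above false alarms
```

No "better" or "worse" judgment enters anywhere; the only datum is whether the button was pressed.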

So there's other tests. AXB, you'll give a person-- or ABX, depending on how you do it-- something and ask them whether this belongs to this. You tell them, in advance, that it's going to belong to either this or this, that X will be A or B. Now, you tell us which. Most of the ones that we have used have been always in this area. When you give people three things, you have another problem, a memory problem, and that introduces more of a variable. Now, the biggest problem, in all of this kind of testing, however, is something you don't think of in the beginning, the first time you do it. The biggest problem is to make sure that, in the test you set up, only the variable that you want to test is changing.

That sounds elementary, but, boy, it's difficult. I'll go through examples, and, maybe, it'll become clear. Yeah, I have a list. The first one I want to talk about isn't on the list. It's the first one that we got involved with. One experiment lasted four years, from '61 to '64, just to get the result of one experiment. And it was a result that we wanted to know, because knowing about the normal modes in the room, we began to wonder, for these loudspeakers that people were using, if the normal modes of the loudspeaker were even audible. The speaker cone vibration modes, which are just diaphragm vibration modes, go up and down all over the place. A negligible number compared to the room modes.

We could tell, by a simple psychoacoustics experiment, that, if you have one loudspeaker here, its normal modes were audible. Because we knew what their width was, and what their height was. And, from these other studies that we talked about, we could tell that. But we wondered, if you put a lot of speakers together, little ones, I mean this size, whether they would be audible. And there was a reason for that.

That number one, just ordinary quality, in terms of making the cone or any physical thing, would cause the modes in one of these-- I should have drawn that-- and as you know, the density goes up as the square-- Oh, no it doesn't. That's a two dimensional thing. Density goes up linearly. --of the normal modes.

If we had two units, and they each had peaks at different points and nulls at different points, then maybe you'd only be responding to the average of all of this. If we had a lot of them together, and the peaks and dips weren't lined up. Let's just make life simple, here, and talk about one peak. This is an ideal thing. We have a response. Well, let's take even a dip. It goes all the way to 0 at this frequency. Now, the next one goes all the way to 0-- that's an assumption, now. If we had another one that went like this.

Well, if you put the two together. Even if I do it at high frequencies, so that one doesn't affect the radiation impedance of the other, which, if they do, they radiate more power, as you know we talked about that. But this would now give you, just suppose, we give you this. Because when the white one dipped, the yellow one was up still. So you didn't go to 0 at all, you went halfway down. So that was a thought. And then secondly, there's another very interesting thing. If you make resonant circuits-- this is linear theory.

It was Norbert Wiener who finally proved that the exact opposite happens in nonlinear, with a certain kind of nonlinear coupling. Namely, if you have two linear systems, give you a simple example of something that's like a TV or an FM IF, two coils both tuned to, let's say, to the same frequency. Bring them close, so you have mutual coupling here or put a capacitor here. And what happens is the frequencies split off. Where you had each one had a pole right here, now you don't get two poles there. You get this. And that's how, instead of making filters that are very sharp, that's exactly how you make filters with a wider bandwidth.
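The frequency splitting described above can be checked numerically. A minimal sketch with made-up values: two identical unit-mass resonators joined by a weak coupling spring. The eigenvalues of the stiffness matrix are the squared normal-mode frequencies, and the single shared resonance splits into two.

```python
import numpy as np

# Two identical resonators (unit mass, spring k) coupled by a weak spring kc.
# Illustrative stiffness values, not from the lecture.
k, kc = 1.0, 0.1
K = np.array([[k + kc, -kc],
              [-kc,    k + kc]])

# eigvalsh returns ascending eigenvalues; their square roots are the
# normal-mode frequencies.
w = np.sqrt(np.linalg.eigvalsh(K))
print(w)   # ~[1.0, 1.095]: the pole at sqrt(k) splits; one mode moves up
```

This is exactly the double-tuned IF filter trick: coupling two circuits tuned to the same frequency yields two separated poles and hence a wider passband.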

So these, if they're close enough, could be acoustically coupled. And the resonances actually do move, if they happen to be at the same point. These modal resonances of the cone will split apart. So, on that basis, we thought, my god, maybe the way to make a loudspeaker, since one has normal modes that we know you can hear, if we make up a multiplicity of ones, and see if that would work. But that's only a hypothesis, because that's all talk.

Everything I have said is talk in this domain, not over here. So we had to find this out. Well, how do you find out whether the normal modes are audible or not? This is a real problem, because you can't build a source that doesn't have any normal modes. If you could, you could compare this source to the other one and ask people if they could hear any difference. Again, an AX test, you hear a difference, press the button, if you don't, don't.

And so what we finally came upon was that we could make something which was ideal, but it wouldn't play music. However, it would play the correct impulse response. Now that's no good for you to listen to. If you ever heard them, they're just a bunch of clicks. And you couldn't tell one from the next.

Now, it turns out, if you make a discharge from an electrical spark, this-- well, it's a long story-- but it, actually, is a doublet in there, creates a doublet. But the impulse is the integral of that doublet. So you integrate the response, and you'll get the same thing as if you had had this as your excitation. And if this is narrow enough, and it is, if you get the spark down to less than 5 kilovolts.

If this is narrow enough, it has a spectrum, which is all that you want, for the audio man. But it radiates spherically. So the idea came, well, let's put the spark in a corner. Let's make an array of loudspeakers, which fits in the corner, which is an eighth of a sphere and has all of these loudspeakers closely spaced all over the thing. So we made it. And then we could compare that to what we're going to talk about, [? degenerate ?] from the impulse response.

So we made this ball about this big, eighth of a sphere. When you put it in a corner, what does it do? To think of mirror images, by the way, always just use the fact that hard walls are to sound like mirrors are to light. Just envision an eighth of a sphere in the corner, what do you see? A full sphere, in the center of a room, how many times as big? Hmm?

AUDIENCE: [INAUDIBLE]

AMAR BOSE: Eight, yeah, 8 times as big. Because the whole room images, floor to ceiling and wall to wall. So what about this impulse? Well, the air can be a good linear system. You have the impulse response, h of tau, or h of t, of the system. Let's say, as input, you have x of t, output y of t. y of t, you can, as you know, express as a convolution integral: h of tau, x of t minus tau, d tau.

So we said, aha. All we need to do is get the impulse response of the room to a spark. Then we can put in x of t, and perform this convolution integral, and get out what would have been measured with a microphone, at the same point at which you set off a spark in a corner, and measure the impulse response. So you measure the impulse response in the room to here. And you put that in here, perform the convolution integral, lo and behold, you have this. And you could play that on headphones and ask people if there was a difference between the recording of the actual system playing the same piece of music, x of t. And you could set up an AX test. I won't go through, unless you have questions on it, I won't go through the details of why this took four years to do.
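The scheme just described is, in discrete form, a single convolution. A minimal sketch, with a toy impulse response (a direct path plus two made-up echoes) standing in for a measured room response:

```python
import numpy as np

# Measure the room's impulse response h[n] to a spark, then convolve any
# program material x[n] with it to get what a microphone at the listening
# point would have picked up.
fs = 8000                                    # sample rate, Hz (assumed)
h = np.zeros(200)
h[0], h[60], h[150] = 1.0, 0.5, 0.25         # direct sound + two weaker echoes

x = np.sin(2 * np.pi * 440 * np.arange(fs) / fs)   # 1 s of a 440 Hz tone
y = np.convolve(x, h)                        # y[n] = sum_k h[k] x[n-k]
print(len(y))                                # len(x) + len(h) - 1 = 8199
```

Done sample by sample, as Bose notes, each output point costs a full pass over the overlap, which is why the early recordings took 20 hours per second of music; fast-Fourier-transform convolution is what later made it practical.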

By the way, at the time it was done, there was only one computer in the United States that could do it. And about one person that developed the technique. I think he's the only one, Tom Stockham, who had just come on the faculty here, a colleague of mine who got interested. And this started his whole career in digital. And he predicted, as early as '62. Well, first of all, the first recordings of digital sound, quality sound, were actually made during this experiment. It used the TX-2 computer at Lincoln, which was the computer that gave birth to Digital Equipment Corporation. Done by the same folks who then went into Digital, who formed Digital.

It could do four multiplies in parallel. It was a huge, huge room full of transistors. It was the second transistorized computer, I think. But it all worked. And it could actually sample audio bandwidths. This integral, by the way, and digital computers are mortal enemies. This is a terrible function to have to evaluate, because you're sliding. The graphical interpretation of this function is you turn one of the functions around, in this case x of t, and you slide it over the other one, depending on whatever t you want out. And for each t, you have to multiply all the samples that you have for this. Move t a little bit, multiply all the samples.

First recordings, by the way, took 20 hours per second of music. And we had the computer. It was an Air Force computer. It was done for the Air Force, and they weren't particularly interested in experiments on sound reproduction. So what they did is they told us, look, if nobody signs up for this thing at night, by 4 o'clock in the afternoon, you can have it at night. And so we had it at night for four years. And very seldom was it used at night. But also a number of us had very red eyes when we came into class the next morning.

But in any case, it was able to do it. And Stockham's amazing work, which launched him on a career in digital signal processing at one of the highest levels attained by anybody, and the fast Fourier transform contributions that he made, all came out of this experiment. He got the time down by looking at the content of what audio signals really had, and some extremely clever processing. From 20 hours per second, he got it down to 7 minutes per second, and the whole thing became practical. At 20 hours per second, even having the computer every night, we'd still be there.

So anyway, what happened is, in the end result, you got the AX kind of thing. And you could then tell whether the resonances of all of this stuff were audible or not. But in doing that, remember, I told you that, in these tests, the hardest thing is to guarantee that other things don't change? The average spectrum of a spark was pretty good after you integrated it, but if the average spectrum of the speaker went down by as much as a dB over an octave, it would be a difference. People don't have to know anything about music for this. All that you ask them is, can you hear a difference? And people are very acute at sensing differences. And they press the button. So it took a long time to get everything else constant, not varying between the two. And in fact, I might tell you that to reproduce speech without a difference was far more critical than to reproduce music without a difference. The same amount of difference might be very audible on speech, not audible on music.

And let me say, the average person turned out to be as critical on speech as the highly trained professional musician on his own instrument. And it's because you spend so much time listening to it. Your friend calls you up in the morning, and soon as he says, hello, you can tell whether he had a bad night, and whether he was out too late, no sleep, flu, anything. Just you're keyed to that. And when you place a person in a room and his friend outside, you give them a microphone and you tell them to talk.

And you ask the friend, who knows him well, to adjust the volume level-- he's listening on headphones-- till it sounds the most like your friend. If your friend's talking softly, he will adjust it softly. He's in another room. If he's talking loudly-- because the whole frequency content-- at the rate we're going, we're not going to get to half of this stuff-- but the whole frequency content of the voice, as you perceive it, changes with loudness.

So it turned out that it was a heck of a job to equalize this thing. And then if you have any noise. For example, in those days, we were able to do, I think it was 14 bits. But there was a slight quantization noise. When you do sampling, if you get enough levels in it, the quantization noise becomes uncorrelated with the signal. It's just like a shh. Well, if there was that, the only fair question you can ask in the AX test is, is there a difference? People would hear that hiss. Yes, there's a difference. So this is where you have to use a lot of judgment in your engineering work. Always, even over there, but here, especially, because what we finally decided to do was quantize the signal that was recorded from the actual speaker also. So the same noise was there.
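The 14-bit quantization noise being described can be sketched numerically. This is an illustration under the usual uniform-quantizer model, not a reconstruction of the original setup; the test-tone frequency and level are arbitrary:

```python
import numpy as np

bits = 14
fs = 48000
t = np.arange(fs) / fs
x = 0.9 * np.sin(2 * np.pi * 997 * t)      # test tone near full scale

step = 2.0 / 2**bits                        # quantizer step over [-1, 1)
xq = np.round(x / step) * step              # uniform quantization
noise = xq - x                              # the "shh" -- small, broadband

snr_db = 10 * np.log10(np.mean(x**2) / np.mean(noise**2))
print(round(snr_db), "dB")                  # roughly 6 dB per bit for a sine
```

At 14 bits the hiss sits some 85 dB below the signal, far enough down that the judgment call described above, adding the same noise to both branches, could be defended.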

Now that's invalid if that noise was of a magnitude to mask the differences that you were hunting for. That's an invalid technique. But if the noise was well down below that-- and that's judgment, that's a judgment call. And, believe me, there will be more judgment calls you have to make in engineering than quantitative things that are analytically derived.

So there were a lot of things that had to be done. But when it was done, it showed us that, if you use a multiplicity of-- and these were all just the kind of driver that would have had a radio response by itself-- oh, a little better than that-- and then we equalized the whole thing. Because there were a lot of them. And all acting together, it's like a giant woofer. Because they all go in phase below fundamental resonance. So you could equalize all of this. That wasn't even done, ever, in sound systems before.

So it proved a very fundamental thing to us: that you can have a multiplicity of small speakers. Oh, and we used purposely bad speakers. Bad in the sense that there was no annulus, just paper. It had the maximum modal structure, if you want. But there were 22 of them, over the whole thing, and nobody was able to hear the difference between this and this spark. The room modes were dominant over any differences that came out. And those differences were such that your ear-- whatever we had on the board there-- wasn't sensitive to them.

Let's see. Let me pick from some things here, randomly.

Since we talked a little bit about digital signals, let me talk a little bit more about the sampling process. Now, I know that Fourier transforms aren't a requirement for this subject, but a lot of you have had them. How many have studied Fourier transforms? Yeah, good, I remembered that most of you had them. And how many of you have studied the process of sampling and what the spectrum looks like? A little less, but how does that happen? Isn't that given to you in the same subject? Huh? It is. Then it's a question of memory, not a question of study. All right.

To very quickly then review, so that what we're going to say may make some sense. If you sample a signal, which you want to do if you're going to process it digitally, you can look at that as nothing more than taking an x of t and multiplying it, if you wish, by this sampling function-- whatever we'll call it-- by an impulse train. In other words, at this point, you go up here, and you find out, how do you multiply by an impulse? What comes out of the sampling of this is an impulse. This was going in as an impulse of area 1. After you sampled this function, it became an impulse of area equal to the height, a, here, if you want to call it that.

So what came out of the sampling process were samples that looked like this, wherever these things are, et cetera. And the height of the impulse, I'm indicating, is actually the area of the impulse. It's an impulse of area proportional to that height. Now, you do this at some sampling period, t 0.

Now a very, very, very useful thing is that, with a Fourier transform pair, in either domain that you're working, time domain or frequency domain, multiplication in one domain goes into convolution in the other, convolution into multiplication.

Do this for yourself one time, since most of you have had this stuff. Just derive that fact. And all that you need to derive it is the Fourier transform and its inverse. The steps are extremely simple. They don't require a lot of thought, just one change of variable. But if you do it once yourself, the result, somehow, you have for life. If you get it presented to you, it's like so much else that you get presented to you in school. Just derive that multiplication in one domain is convolution in the other, or vice versa, whichever way you want to start it.
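For sampled, finite-length signals, the fact to be derived has a DFT counterpart that can be checked numerically: multiplication in time corresponds to circular convolution in frequency, scaled by 1 over N. A small sketch of that check (this verifies the discrete version of the theorem, not the continuous derivation itself):

```python
import numpy as np

rng = np.random.default_rng(1)
N = 16
x = rng.standard_normal(N)
y = rng.standard_normal(N)

X, Y = np.fft.fft(x), np.fft.fft(y)

# Circular convolution of the two spectra, done the slow, literal way:
# flip one, slide it, multiply, and sum, with indices wrapping mod N.
circ = np.array([sum(X[m] * Y[(k - m) % N] for m in range(N))
                 for k in range(N)])

# Multiplication in one domain <-> convolution in the other (DFT form)
lhs = np.fft.fft(x * y)
print(np.allclose(lhs, circ / N))    # True
```

The one change of variable mentioned above is exactly the index substitution hiding in the `(k - m) % N` term.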

And, of course, this fellow is an impulse train. The other thing that's useful to have done is that a periodic impulse train gives a spectrum, interestingly enough, which is also periodic in frequency. And, obviously, if you wanted to take this thing, which goes on, and on, and on, and represent it in terms of a Fourier series, it's periodic in exactly that period.

So the fundamental frequency, omega 0, times the period, t 0, would be equal to 2 pi. Or, omega 0 is 2 pi f 0, so 2 pi f 0, the sampling frequency, times t 0 is equal to 2 pi, which implies that f 0 is equal to 1 over t 0. So it comes out in the frequency domain with a spacing between these of f 0, which is equal to 1 over t 0.

Now once you know that this multiplication goes into convolution, the other thing you need to know about convolution, which comes right out of the expression itself, is that convolution is nothing more than taking one function, in this case h of tau, taking the other function you want to convolve with it, turning it around-- that's why it's minus tau-- and shifting it by t, which is the variable you want out.

If it's in frequency, it's the same thing. You take one function, flip it around, and slide it by f, for whatever f you want-- I'm sorry, and take the area under the product. So anything that you convolve with an impulse comes out to be the same thing. And anything that you convolve-- if I convolve an x of t-- with a spectrum. Let's suppose I had the spectrum of an audio signal. I'll call it a, for audio or something, a of f. Let's say it looked like this. Some frequency, whatever the bandwidth of that audio signal was, that's a of f. And now convolve that with an impulse train.

So what do I do? Let's flip this one around. It's all quite symmetrical. It's easy. Flip this around, and, after I flipped it around here, I slide it, along here, by f, to get out whatever I'm calculating the frequency, of the result, of the sampling function to be. I slide it. Well, every time I slide it, I just repeat it, because I'm multiplying by impulses. What did I call this thing here? Never mind, I didn't give a name to it.

So let's look at the spectrum of the sampled signal. The spectrum of the sampled signal is this fellow repeated. And it repeats, exactly, then every-- sorry, where the heck did I put it-- every f 0. The spectrum of this will repeat every f 0. Just a minute, I don't like this. I don't know if that's an f or a t, but it should be a t. The spacing, in the frequency domain, if I go over to the frequency domain of this, the spacing is f 0. This function is this time function multiplied by this. So the impulses just change, as we talked about. So the spectrum of the sampled signal looks exactly like this. And the distance f 0 is here-- sorry, the distance f 0 is here, at f 0, 2 f 0, 3 f 0, et cetera.

Now, if the bandwidth of the signal, this is the signal bandwidth, if the bandwidth of the signal is more than half the sampling frequency, the so-called. How did you learn this? Was it called Shannon's sampling theorem?

AUDIENCE: [INAUDIBLE]

AMAR BOSE: The Nyquist rate. Really, it dates way back. Shannon dealt with it in the time domain, but in the frequency domain, the picture is extremely simple. Everything that's on the board leads to it. So if f 0 is 2 times the bandwidth, then of course you get this. If f 0, the sampling frequency, is higher-- the sampling is faster-- then of course you get this, more separation.
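What goes wrong when the bandwidth exceeds half the sampling frequency can be shown in a few lines. A sketch with arbitrary illustrative frequencies: a 7 kHz tone sampled at 10 kHz produces exactly the same samples as a 3 kHz tone, which is the overlap of those repeated spectra:

```python
import numpy as np

f0 = 10_000.0                  # sampling frequency, Hz
t0 = 1.0 / f0                  # sampling period
n = np.arange(64)              # sample indices

# A 7 kHz tone sampled at 10 kHz: 7 kHz > f0/2, so it aliases.
above_nyquist = np.sin(2 * np.pi * 7000 * n * t0)

# Its alias sits at f0 - 7000 = 3 kHz (with a sign flip for a sine).
alias = -np.sin(2 * np.pi * 3000 * n * t0)

print(np.allclose(above_nyquist, alias))   # True: the samples are identical
```

Once the samples are identical, no later processing can tell the two tones apart, which is why the filtering discussed next has to happen around the sampling itself.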

Now, in order to avoid the aliasing problem, you don't want all this stuff in your output, if you can hear it. If your ear came along and went to 15 or so kilohertz and there was a low-pass filter that went [PLUNK] you'd be all right. But if it doesn't, you have to put a filter in here.

You could do it on that sampling. What we show says it's OK. You could do a low-pass filter. Now, you can go to the store, today, and buy CD players that have four times the sampling rate. They're oversampling. And you pay more, usually. And you wonder, what good is this?

I went to a fair, one time, a trade fair in Tokyo, and the company that was making them had an oscilloscope there, and they demonstrated the difference between sampling two or three times the required frequency and the regular.

And what they did is, they put a square wave in. And when they sampled it four times over, the square wave came out pretty good. When they sampled at twice the frequency, the square wave came out like this. And all the engineers were going by this booth, oh my god. Wow, that's the difference. Engineers are, as I said before, the biggest suckers for something like this. They get presented with something here, and they forget about this link.

Now why was this? Why did this happen? It happened because, when you sampled at just twice the highest frequency, 2 times this bandwidth, you had to put in a sharp filter. Now a sharp filter has a phase shift that goes like this, [FYOO], great big phase shift. The phase shift, not the frequency imbalance, is what gives rise to this in the response. Now, when you sample very fast, you can have a more gentle filter. And a more gentle filter has a phase shift that's much more gentle. And so you didn't have as much ringing.
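The ringing itself is easy to reproduce numerically. The following is a simplified, magnitude-only sketch, it compares an abrupt brick-wall cutoff with a gentle roll-off on a square wave, and does not model the extra phase behavior of a real reconstruction filter; all the numbers are illustrative:

```python
import numpy as np

N = 1024
t = np.arange(N)
# 8-cycle square wave (the half-sample offset avoids landing on zeros)
square = np.sign(np.sin(2 * np.pi * 8 * (t + 0.5) / N))

X = np.fft.rfft(square)
f = np.arange(X.size)                  # harmonic index, cycles per record

# Sharp, brick-wall cutoff just above the 5th harmonic of the square wave
sharp = X * (f <= 48)
# Gentle, gradual roll-off over the same region
gentle = X * np.exp(-(f / 24.0) ** 2)

y_sharp = np.fft.irfft(sharp, N)
y_gentle = np.fft.irfft(gentle, N)

# The abrupt cutoff overshoots and rings; the gentle one does not.
print(y_sharp.max() > 1.05, y_gentle.max() < 1.05)   # True True
```

Both outputs would look different on the trade-fair oscilloscope; the psychoacoustic question, taken up next, is whether anyone can hear that difference.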

Now enter psychoacoustics, and it turns out that both those systems map into the same point of perception. If I get to talk about it, monaural phase differences, you can forget. Or phase differences that are introduced into two channels that are identical, you can forget.

And so I remember going by the booth, seeing this demonstration and saying, look, do you believe-- the president of the firm was there-- do you really believe that makes a difference to what you hear? Oh, yeah, absolutely. I said, fine, would you just demonstrate it? There're headphones here. There's loudspeakers here. Would you give me an AB test? It wasn't offered. They'd probably tried it and found that it didn't do anything. But marketing departments grab a hold of things like this, and, as I say, engineers, they believe it. And so they tell all their friends. You got to do this. You got to buy this thing, because square waves are important.

While we're on the subject of square waves, I will review, for you, an experiment that, I think, I did just allude to, at least, building an all-pass. This was done for a number of the audio press. And you could select between the all-pass and a straight channel. And this could be a and b. This was audio signal coming in here, input, which could be anything, square wave or whatever you want.

The all-pass network, I told you, was a network that looked like this. Poles and zeros exactly opposite each other, so the magnitude, which is the product of the vectors from the zeros to the point omega on the axis divided by the product of the vectors from the poles, is identical. Flat frequency response. And every time you go by one of these, this vector changes-- if they're close enough to the axis, by 180 degrees. This one changes the other way, 180. So you get a phase shift, phi versus omega, or f, or whatever you want, that goes like this, 360 degrees. It just keeps going down, every pole-zero [INAUDIBLE] half.
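The flat-magnitude, accumulating-phase property can be checked in a few lines. This sketch uses a first-order digital all-pass (pole and zero mirrored across the unit circle, the discrete analogue of the pole-zero mirroring across the axis being described); the coefficient value is arbitrary:

```python
import numpy as np

# First-order digital all-pass: H(z) = (a + z^-1) / (1 + a z^-1), |a| < 1.
a = 0.7
w = np.linspace(0.01, np.pi - 0.01, 500)     # digital frequency, radians
z1 = np.exp(-1j * w)                         # z^-1 evaluated on the unit circle

H = (a + z1) / (1 + a * z1)

print(np.allclose(np.abs(H), 1.0))           # magnitude exactly flat: True
print(np.angle(H[0]), np.angle(H[-1]))       # but the phase keeps moving
```

Cascading sections like this is how the experimental all-pass box would pile up phase while leaving the frequency response untouched.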

And what that does to a square wave response is hardly describable. At some frequencies, it comes out with no identification that a square wave went in. At other frequencies, it'll come out a triangle. And, of course, if you go to 10 kilohertz, as far as you're concerned, since the ear doesn't go higher than 20, what happens is, it comes out a sine wave. If you just have a filter in there at 20 kilohertz, and you put a square wave in at 10 kilohertz, the harmonics of the square wave-- it's an odd function, so the first one above the fundamental is at 30, 3 times-- so it comes out a sine wave.
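That last point, a 10 kHz square wave through a 20 kHz low-pass comes out a pure sine, can be verified directly. A sketch using a band-limited square wave and an ideal cutoff (the sample rate and harmonic count are chosen for convenience):

```python
import numpy as np

fs = 160_000                       # sample rate, Hz
N = 160                            # exactly 10 periods of a 10 kHz wave
t = np.arange(N) / fs

# Band-limited 10 kHz square wave: odd harmonics at 10, 30, 50, 70 kHz
x = sum(4 / (np.pi * k) * np.sin(2 * np.pi * k * 10_000 * t)
        for k in (1, 3, 5, 7))

# Ideal 20 kHz low-pass: zero every spectral line above 20 kHz
X = np.fft.rfft(x)
f = np.arange(X.size) * fs / N     # bin frequencies in Hz
X[f > 20_000] = 0
y = np.fft.irfft(X, N)

# Only the 10 kHz fundamental survives: a pure sine of amplitude 4/pi
print(np.allclose(y, 4 / np.pi * np.sin(2 * np.pi * 10_000 * t)))  # True
```

Everything above the fundamental falls outside the filter, so the "square wave" reaching the ear is already a sine before perception enters at all.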

And so we had this, and we had giant oscilloscopes so all the press could see it. And we said, what would happen if somebody sent you an amplifier like this, with that square wave response? They said, ooh, we'd send it back. It's clearly a defect. We said fine.

Now, this one, of course, had a square wave response. The straight channel had a square wave response like that, perfect. And so we said, now, you can come up. This is not a standard psychoacoustic test. We'll put you on a set of headphones. We'll let you hear square waves through here and music through here. You can turn the switch, and you can turn it at your own will, and you know which is which, because you can see it. If you can hear any difference, you tell us, and then we'll turn you around and put you through a standard psychoacoustic AX test to see if you really could hear it.

Not one could hear any difference at all. So, when you bring psychoacoustics into it, you find out, all too often, that two different measurements, which may have been extremely costly, the difference between these two in your design, map into the same point of perception. And so you've contributed absolutely nothing, in terms of the end product that the consumer's going to enjoy.

By the way, there have been no questions about what's going on. Let's pick on something here. Let's see, we did some of these things, by luck. I kept all the stuff to do with. Yeah, none of that. Let me just, I'll mention one other thing.

With the AX testing, you can, just like we did here, test amplifiers, for example. You can build an amplifier which satisfies all the criteria of the old-boy acoustical experts, or hi-fi nuts, whatever you want to call them. Class A was supposed to be the best amplifier. And you can get to distortion levels that are almost immeasurable. And then you can put another amplifier-- like today, all of them are, essentially, class B-- as your b signal, and give people the test.

By the way, when you do this, in the AX test, another thing that's very important in the psychoacoustic part is you should try to make the switching within 25 milliseconds. I mentioned this once much earlier in the subject. If you leave a longer time between a and x, in sensitive experiments, where you're asking about the minimum tone differences that people can perceive, one second makes a difference. They can perceive less. They have to have a bigger difference between two frequencies to perceive it than if you have 25 milliseconds.

And no clicks in between, that completely destroys it. In other words, if the switch goes click, you've lost a lot of your ability to discern. Acoustic memory is not very good. I have met only one person that had a memory which I still can't explain. There was a person, R.L. Lee here, who is now at the University of Utah. He was a graduate of the New England Conservatory of Music working here and did a lot of electrical engineering also. But we once made two recordings in Jordan Hall and played one of them for him. And the difference between the two recordings was the position of the microphone, that's all. And six to eight months later, we went back, and we had reason to call him in again. And we played the piece of music that we recorded. Right away he said, I've never heard this before. Oh yes, you did, six months ago. No, he said. I have never heard this music before. I've never heard this recording before. And we finally scratched our heads, and oh, my god, it was the one with the different microphone. I mean, I have never seen that acuteness matched in anyone, professional musician or not.

But, in general, people don't remember. If you go to a hi-fi store, by the time you have heard the second or third loudspeaker, you forget what the first one's like. Let alone, of course, I told you the problems that one interferes with the other. Sitting there, the cone in the other one starts moving. And so there's no way of telling what's good or bad in that kind of environment.

If you get to see, I don't know what part of the tours that are there. There is no way of touring the whole company on that Friday. But you'll see that, if you have to make measurements, you remove everything from the room. You put A in, and then you move it out, put B in. But when you make AB tests, unfortunately, you have to have A and B present. Unless you want to record them, but that's somewhat artificial. It's masked.

Let's see, OK. Maybe I'll see if there are any questions so far. And what, yeah?

AUDIENCE: I was just wondering about that test up there with the eighth of a sphere. I always thought that it might be interesting to make some sort of product where you somehow do that kind of impulse response for a specific room. Like, maybe, you bring it to your living room, and you hold up like-- the first thing that you tell the customers, here's this thing. You have to hold it in the middle of your room or wherever you're sitting.

AMAR BOSE: Adjust it to the room. Yeah. A lot of thought has been given to that kind of thing among the people that you will meet on that trip, also. We actually do it for cars. You see, the biggest thing to realize, again, all of this is realization that comes from looking at the whole picture.

In the home, there are three things that you now know, from the work that we've done in the class, that are terribly important that are not under the control of the designer. Namely, what room you're going to put the loudspeaker in. Whether it's a brick room in a den, a wood paneled room, or an absorbent living room. The speaker's going to sound terrifically different in those three. Because the acoustics of the room are part of the system. Because the system goes from the musician who performed to your ear. And that's a big chain. And that's the last element in the chain.

So we can't control that. We don't know what you're going to do. Placement of loudspeakers, we can't control it. Some people like to shove them in the corner, and they all sound like a jukebox. Because, as you know, if you design in the anechoic chamber, like the textbooks tell you, and you put it against the wall-- let's idealize it. If you had a flat frequency response like that in the anechoic chamber, then you go into a room with one wall only, 6 dB, mirror image, 6 dB up. Put it on the floor, with another two surfaces, another 6 dB. Put it in the corner, another 6 dB. So you have a total of 18 dB, just worked out, right away, as you can do now, from the mirror images. So you get a terribly unbalanced frequency response.
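The arithmetic behind that 18 dB figure is worth making explicit. A minimal sketch, assuming the idealized low-frequency picture from the lecture in which each rigid boundary contributes a mirror image that doubles the pressure:

```python
import math

# Each boundary's mirror image doubles the low-frequency pressure:
# doubling pressure is 20*log10(2), about 6 dB, per surface.
def boost_db(n_surfaces):
    return 20 * math.log10(2 ** n_surfaces)

for n, placement in enumerate(["against one wall",
                               "on the floor against a wall",
                               "in a corner"], start=1):
    print(placement, round(boost_db(n), 1), "dB")
```

So a speaker equalized flat in an anechoic chamber picks up roughly 6, 12, or 18 dB of bass boost depending on placement, which is the unbalance being described.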

Now what the heck was I telling you about? I got off the track on this. Huh?

AUDIENCE: [INAUDIBLE]

AMAR BOSE: Oh yeah, placement. OK, so that's problem number one. The designer doesn't know what room you're going to put it in. Problem number two, he doesn't know where you're going to put it in the room. Problem number three, he doesn't know where you're going to sit. Most people, unfortunately, don't have chairs right down in the corner, where their head would be. But they sometimes have chairs in the corner, which gives them at least two surfaces, and another one that's close. So you're getting all sorts of funny things.

So, in a car, if you do work with the manufacturer-- not if you try the aftermarket, that's impossible to achieve. It's like the home, because you don't know what the heck the environment is going to be in there. And it turns out it's extremely critical, when we get to localization, that the loudspeaker, for the best performance, shouldn't be moved by even that much. And if there's a window-winding motor in there, where you want to put it, in the aftermarket, you're in bad shape. So working with the manufacturer, with a little bit of pushing, or a lot of pushing, you move the window motor.

So you can know the environment. They make a heck of a lot of cars the same, so you can be certain of it. They're all furnished the same, you can be certain of it. And so you can know the environment, the acoustical environment. You know where you're going to place the speakers, because you specify them. You know where the people are going to sit. And so you have this huge advantage.

And as I think I may have mentioned, Consumers Union came out with a statement, forget it, no sound in a car could ever be worthwhile. This was 1980 or so. Because the acoustics are second only to a telephone booth. The acoustics part might be true. It's a big exaggeration, but it's closer to true. However, the big difference is, it's known. And if you know it-- it's known, it's linear-- there are a lot of things, not everything, but a lot of things you can do with it.

So I still am of the belief that it's possible and has been done, to build a sound system, for an automobile, when you have the complete freedom to do what you want, that's better than 95% of the homes. Because in none of the homes could you ever, with knowledge, consider the three parameters. You wouldn't dare, because it would be made for somebody's home.

So we'll handle the rest of the topics, some of them at least, next time.