James K. Roberge: 6.302 Lecture 10


[MUSIC PLAYING]

JAMES K. ROBERGE: Hi, during the last two sessions we've looked at the problem of compensating feedback systems. The discussion to this point's been quite general. We tried to indicate the sorts of manipulations or the sorts of changes we'd like to make to the loop-transmission in order to improve the performance of a feedback system.

However, I've tried to temper that discussion by pointing out that a great deal depends on the physical reality of the system. There are certain things we're able to do easily for certain systems, other things that are virtually impossible. For example, if we attempt to alter the magnitude of the loop-transmission, we find out that in mechanical systems that's oftentimes easily done, in that we're able to add gain electronically to the loop-transmission. And that gain may have very wide bandwidth compared to the dominant poles in the loop-transmission.

Conversely, we find it oftentimes very, very difficult to increase the loop-transmission magnitude when we're considering a high-speed feedback amplifier. And any stages that we added in an attempt to increase the loop-transmission magnitude would have a profound effect on the dynamics associated with the loop.

Today, I'd like to try to emphasize this aspect of really working with an actual system, and seeing the kinds of modifications that are realistic for one particular system. The system we'll look at is an amplifier that we intend to use in a feedback connection. And we'll see how we can apply the various ideas that we've talked about to this particular amplifier.

I'd like to point out that there is an example quite similar to the one we're going to do that's presented as a problem in the text, and I assigned that problem. There's a good bit of experimental work involved as we'll see when we go through the demonstration and look at some of the responses from our actual amplifier using the equipment on the table. And you're able to do that same sort of a thing in connection with a problem that I'll assign in the book. It's a different example, but it's the same general kind of a thing as we'll be doing here.

The equipment that's necessary to conduct the experimental portion of the investigation is readily available in many, many laboratories. And so I think this is a way you'll be able to go through one of these examples yourself and actually physically compensate a system. The sorts of things I expect you to do will be very similar to what we'll do here today.

Assume that we have an operational amplifier, and that amplifier is given to us, and there's nothing we can do to change its transfer function. You're given this little box with no user serviceable parts inside, and it has a specified transfer function. And we have to make do with that amplifier in our particular application.

Our application is a non-inverting gain of 10 amplifier, as I show here. So we simply take our operational amplifier, apply the input to the non-inverting input terminal of the amplifier. And since we're trying to get a gain of 10, we take the output, attenuate it by a factor of 10 through a 9R versus 1R network, and feed the attenuated version of the output back to the inverting input of the operational amplifier. So this configuration is the very familiar non-inverting amplifier. This time we happen to have set it up for a closed-loop or an ideal closed-loop gain of 10.

We assume that the transfer function associated with the operational amplifier that we have to use in this application has an a of s equal to 5 times 10 to the fifth over three poles: one pole located at 1 radian per second, and then two much higher frequency poles located at 10 to the fourth radians per second and 10 to the fifth radians per second.
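To make that transfer function concrete, here is a minimal numerical sketch in Python (an illustration added to this transcript, not part of the lecture) that simply evaluates a(s) from the expression just described.

```python
import numpy as np

# Open-loop transfer function from the lecture:
#   a(s) = 5e5 / [(s + 1)(s/1e4 + 1)(s/1e5 + 1)]
def a(s):
    return 5e5 / ((s + 1) * (s / 1e4 + 1) * (s / 1e5 + 1))

# Sanity checks: the dc gain is 5e5, and well above 1 rad/s the phase
# sits near -90 degrees until the higher frequency poles come in.
print(abs(a(0)))                               # 500000.0
s = 1j * 100.0                                 # a mid-band frequency, rad/s
print(abs(a(s)), np.degrees(np.angle(a(s))))   # roughly 5000 and about -90 degrees
```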

As I said, we have no possibility of changing that. We assume that that's a given, which is a very realistic situation when you're dealing, for example, with an integrated circuit operational amplifier and you have to take what the manufacturer supplies to you.

I'd like to say a little bit about the choice of transfer function. The general form where we have one very low frequency pole, and then some higher frequency things that begin to give us additional negative phase shift at higher frequency, is a very, very common pattern. You see, when you design this sort of an amplifier, or in fact, when you design many mechanical systems, servomechanisms, you try to dominate the loop-transmission with a single low frequency pole. We've looked at that method of stabilizing a feedback system before. And so many, many systems of interest have a dominant pole.

However, there's then something at high frequencies that comes in that begins to deteriorate the system. And in particular, gives you additional negative phase shift. It may be several additional poles, as I'm considering in this example. It may be, for example, time delays. When you look at certain kinds of transistors, you find out that they have a time delay associated, a propagation or diffusion time delay. But anyway, these higher frequency dynamics give us additional negative phase shift, and are ultimately the thing that prevents us from getting as high crossover frequencies as we might like to have. So this general form where we have a single low frequency pole and then something else at high frequencies that gives additional negative phase shift is quite a common pattern.

I've chosen specific numbers, so that we can go through the analytic portions of the development using those numbers. But it's really not necessary that I have the expression for the open-loop gain of the amplifier or the open-loop transfer function of the amplifier available in analytic form. As we'll see when I do the manipulations, I could equally well have started with experimental evidence or experimental data.

Had we measured the open-loop gain of the operational amplifier, presented that information in either gain phase or Bode plot form as a function of frequency, we would have been able to do all the manipulations that we need to do in order to compensate the amplifier.

The form where we have the high frequency additional poles coming in at somewhere on the order of 10 to the fourth radians per second, is unrealistically slow for modern operational amplifiers. A more realistic value might be something on the order of 10 to the seventh radians per second for many of the common integrated circuit operational amplifiers. The reason we do it this way is that we make sort of a pseudo operational amplifier using several others, so that we can control the transfer function very well.

And in order to be able to do that, we'd like to have the important dynamics of our pseudo amplifier occur at frequencies well below the ultimate limitations of the components we used to build the system. So in order to be able to build a demonstration system that very accurately represents this equation, I've chosen to limit or to locate the higher frequency poles at a frequency that's considerably below the ultimate performance capabilities of modern operational amplifiers.

If we take our operational amplifier and connect it in the non-inverting gain of 10 configuration, we get a loop that's indicated by the block diagram here. The input, the output, the open-loop transfer function of the operational amplifier, or the quantity that we've introduced earlier. And then, in this particular system, we have a feedback transfer function f equal to 1/10, representing the attenuation associated with the 9R versus R resistor network.

This assumes, of course, that the feedback network doesn't load the output of the operational amplifier. And similarly, that the input of the operational amplifier does not cause any loading on the feedback network. So under those sorts of ideal conditions, this is the block diagram that results. We, of course, substitute a of s, or for a of s, the expression above.

If we, once again, look at either our operational amplifier or the block diagram that we've just seen, we recognize of course, that the loop transmission for this system is simply 1/10 of a of s since f is 1/10 and is frequency independent. We can plot that loop-transmission in Bode plot form. And this is the result.

Since a of s, itself, has a magnitude at dc of 5 times 10 to the fifth, the af product will have a magnitude of 5 times 10 to the fourth, reflecting the value of 1/10 for f. We have a first pole at 1 radian per second. We then drop off with a 1-over-s, or 1-over-frequency, roll-off until we get into the second pole at 10 to the fourth radians per second. And eventually down here somewhere, well below unity gain for the loop-transmission, we'd get the third pole, the one located at 10 to the fifth radians per second.

In the meantime, the angle associated with the af product starts originally at 0 degrees. For very low frequencies, well below 1 radian per second. We have about minus 45 degrees of phase shift at 1 radian per second, the corner frequency associated with the first pole in the af product.

We then go through a relatively wide range of frequencies where the angle stays close to minus 90 degrees, reflecting the phase shift associated with the first pole. Out at about 10 to the fourth radians per second, we pick up an additional 45 degrees of negative phase shift. So here we're down to minus 135 degrees by 10 to the fourth radians per second.

The angle then continues asymptotically of course, reaches minus 270 degrees in our particular example, since we have three poles assumed associated with the af product. This is, of course, the af plot, the negative of the loop-transmission plotted in Bode plot coordinates that we get for our uncompensated gain of 10 amplifier. We combine the a of s associated with the amplifier with the factor of 1/10 that equals f.

And if we look at the phase margin of this configuration, recall that what we do is look at the unity gain frequency, the crossover frequency. The unity gain frequency associated with the af product, which is right here. And that frequency, if we do it with some accuracy, works out to be 2.4 times 10 to the fourth radians per second. So this is about the unity gain frequency on our plot.

If we then go in and ask what the phase margin is for this particular system-- in other words, how close is the angle to minus 180 degrees at the crossover frequency?-- we can do that construction like this. We go to the unity gain frequency about here, look at the angle curve, and evaluate the angle. Or in particular, the distance between the angle and minus 180 degrees at the crossover frequency.

If we do that for this particular system, we find out that we have 14 degrees of phase margin. I've entered those values into a table where we're going to begin to tabulate results on our amplifier.

Without compensation, we have a crossover frequency of 2.4 times 10 to the fourth radians per second, and a phase margin of about 14 degrees.

Similarly, if we evaluate the gain margin for this system by looking at the magnitude of the af product at the frequency where the angle goes through minus 180 degrees, we find out that the gain margin for the system is about 2.
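As a cross-check on those graphical readings, here is a small numerical sketch I've added (the lecture works entirely from the Bode plot); it sweeps the negative of the loop transmission and pulls out the crossover frequency, phase margin, and gain margin. The values land in the neighborhood of the figures quoted above, though a numerical sweep won't match a pencil-and-paper reading exactly.

```python
import numpy as np

# Negative of the loop transmission for the uncompensated gain-of-10 amplifier:
# a(s) * f with f = 1/10
def af(s):
    return 5e4 / ((s + 1) * (s / 1e4 + 1) * (s / 1e5 + 1))

w = np.logspace(0, 6, 200000)                   # rad/s
L = af(1j * w)
mag = np.abs(L)
phase = np.degrees(np.unwrap(np.angle(L)))      # falls from near 0 toward -270 degrees

# Crossover: the frequency where |a*f| passes through unity
wc = np.interp(1.0, mag[::-1], w[::-1])         # mag decreases monotonically
pm = 180.0 + np.interp(wc, w, phase)            # phase margin, degrees

# Gain margin: 1/|a*f| at the frequency where the angle passes through -180 degrees
w180 = np.interp(-180.0, phase[::-1], w[::-1])
gm = 1.0 / np.interp(w180, w, mag)

print(f"crossover ~ {wc:.1e} rad/s, phase margin ~ {pm:.0f} deg, gain margin ~ {gm:.1f}")
```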

Well, what I'd now like to do is look at the behavior of the system under these conditions. And what we have here is a little board that's effectively our pseudo amplifier. We actually make it up out of several operational amplifiers, as you can see. We have networks that fairly precisely locate the poles associated with our pseudo amplifier, or the poles at omega equals 1 radian per second, and at omega equals 10 to the fourth radians per second, and omega equals 10 to the fifth radians per second.

And we drive our pseudo amplifier, which is connected for the gain of 10 via two feedback networks to give us the closed-loop-- or two feedback resistors to give us the closed-loop gain of 10. The resistors are actually in here. Two precision resistors equal to 9R and R down in here somewhere.

We take that amplifier and we drive it with the signal from the signal generator. And we then look at the output of our amplifier with the oscilloscope.

What we're doing now is looking at the step response of our amplifier, the signal generator is providing a square wave, but a relatively low frequency square wave compared to the dynamics of the amplifier. And the step response is plotted on the oscilloscope. And we see a rather highly oscillatory response.

We notice that the amplitude of the step on this scale is about 2 and 1/2 major divisions. The overshoot beyond the final value of the step is possibly 60% of final value. And so we have a very, very highly oscillatory system.

If we talk about the peak overshoot for this particular system, it's about, as I say, 60%. So we have a p0 of 1.6, indicating that the peak overshoot reaches a value equal to 1.6 times the final value of the step response.

We can also look at this response and get some indication that the system actually reflects the situation we have. We realize, of course, that 14 degrees of phase margin is a very low value. And so we'd anticipate considerable overshoot. And certainly our step response shows that for the uncompensated system. But let's think a little bit about the effective crossover frequency on the oscillation associated with the step response.

We said that from the Bode plot, we find that this system crosses over at about 2.4 times 10 to the fourth radians per second. If we divide that number by 2 pi, we get something like 4 kilohertz for the crossover frequency of the system.

Now suppose we had a system that had 0 phase margin and a crossover frequency of 4 kilohertz. Under those conditions, the step response would be a sinusoid or a co-sinusoid at 4 kilohertz. If we put in a step to a system that had 0 phase margin, that is, 180 degrees of negative phase shift associated with the af product at crossover, why, we'd find that the system would give us an oscillation at precisely the crossover frequency in response to that step. So under those conditions, we'd get a 4 kilohertz constant amplitude oscillation.

Here we have 14 degrees of phase margin. Close to 0, at least uncomfortably close from a feedback system point of view. And so we'd anticipate that the ring frequency associated with the step would be somewhere in the order of 4 kilohertz. And we see that. Let me speed up the time scale just a bit.

One major division horizontally, now corresponds to 100 microseconds. And if we move things around a little bit, we can estimate the frequency of oscillation. Let's see here is one peak, a second peak, a third peak. And those peaks are just a little bit under 300 microseconds apart. They're just a little bit less than three major divisions apart. And so we conclude that the dominant component associated with that ring frequency is somewhere on the order of 3 and 1/2 kilohertz or thereabouts. We get a very good estimate, as I indicated, for a lightly damped system. For a system that has relatively low phase margin, we can get a very good estimate of the system crossover frequency by simply looking at the frequency of the ring associated with the step response of that system.
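The same behavior can be reproduced in simulation. The sketch below is my own illustration (the lecture uses the pseudo-amplifier hardware); it closes the loop around a(s) with f = 1/10 and estimates the peak overshoot and the ring frequency from the simulated step.

```python
import numpy as np
from scipy import signal

# a(s) = 5e5 / [(s+1)(s/1e4+1)(s/1e5+1)] as a ratio of polynomials in s
num_a = np.array([5e5])
den_a = np.polymul(np.polymul([1.0, 1.0], [1e-4, 1.0]), [1e-5, 1.0])

# Closed loop a/(1 + a*f) with f = 1/10:  Na / (Da + f*Na)
f = 0.1
sys_cl = signal.TransferFunction(num_a, np.polyadd(den_a, f * num_a))

t = np.linspace(0, 2e-3, 20000)
t, y = signal.step(sys_cl, T=t)
final = y[-1]                          # close to the ideal closed-loop gain of 10

print("P0 ~", y.max() / final)         # around 1.6, i.e. roughly 60% overshoot

# Ring frequency from the spacing of successive peaks of the decaying oscillation
peaks, _ = signal.find_peaks(y, prominence=0.1)
period = np.mean(np.diff(t[peaks]))
print("ring frequency ~", 1.0 / period, "Hz")   # a few kilohertz
```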

Well, if we decide that that's unacceptable performance, and we certainly should, we can then begin to ask what we might do to improve the system.

I've forbidden in this particular example, our modifying a of s. We had indicated that we're given the amplifier and there's nothing we can do to change a of s. So the only things we can do are add components external to the amplifier.

Let's look at our original topology once again, and now we'd like to see if there's a way we can get a lead type transfer function into our system. Let's see.

We notice that we have a resistive attenuator at the output. It happens to be in the feedback path. But as far as stability goes, whether we put a lead transfer function in the forward path or in the feedback path, we get the same effect.

Here we see a resistive attenuator and we recall that the topology of a lead type network was, in fact, a resistive attenuator that had a capacitor included. So we can get a lead type transfer function associated with the feedback path with f by connecting a capacitor across the 9R valued resistor. So that's not quite the same in terms of closed-loop response-- there's a little bit of change in zero location-- as if we were able to get the lead type transfer function in a forward path. But certainly in terms of improving the stability of the system, we'd be able to do it via this method.

So then, the topology which allows us to add a lead type transfer function is this. We have the capacitor that we mentioned added across the 9R valued resistor.

We can form a block diagram for that system. And that block diagram, of course, has the original a of s, the one that we're not allowed to change. We have the factor of 1/10, which is the attenuation of the 9R versus R valued resistors. And now we have a lead type transfer function, actually with an alpha of 10. Notice that the overall transmission of this block at high frequencies is 1, the factor-of-10 increase from the zero-pole doublet canceling the factor of 1/10. But we recognize that must be true with our passive RC network.

So here we have the factor of 1/10 at low frequencies, and then we have the zero and the pole located at a factor of 10 higher frequencies associated with our lead type transfer function.

If you keep track of the arithmetic, why, we get 1 over 9RC as the frequency at which the zero is located. Incidentally, all of these numbers are done in considerable detail in the text, and so there's no real need to belabor the arithmetic at this level. The numbers are available in the book. And I think it's really more important to kind of get the feeling for how we approach the compensation problem than to spend a great deal of time here belaboring the arithmetic.

Well, there's our lead topology. And what we do in order to decide how we might locate the zero associated with the lead network, is basically go back to our original system Bode plot and simply try a couple of combinations. Say, well, what happens if I locate the zero here for example? I then know that I get an additional pole out, a factor of 10 beyond that. I make some modification to the angle. And that sort of playing around isn't particularly difficult to do. You can do it very quickly. And you rapidly begin to home in on a location for the zero that will result in the maximum phase margin for the system. This isn't a mathematical optimum, but you can very quickly find out where you want to locate the zero to get the maximum phase margin in the resultant system, or awfully close to the maximum phase margin in the resultant system.

And if we do that, we find out that a good location for the zero is somewhere on the order of 2.5 times 10 to the fourth radians per second. If we added a zero somewhere in here, and of course, we get a pole because of the characteristics of the feedback network, a factor of 10 above that. Why, we end up with about the maximum positive phase shift we're able to achieve. And consequently, we end up with maximum phase margin for our system.
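To tie the network values to those pole-zero locations, here is a small added sketch; the resistor value is my own assumption for illustration (the lecture leaves R unspecified), and the capacitor is picked so the feedback-path zero lands at the frequency just chosen.

```python
import numpy as np

# Feedback network with a capacitor C across the 9R resistor (lead in the feedback path):
#   f(s) = (1/10) * (9RCs + 1) / (0.9RCs + 1)
# zero at 1/(9RC), pole a factor of 10 higher, so alpha = 10.
R = 10e3                    # assumed resistor value, illustration only
wz = 2.5e4                  # zero location chosen in the lecture, rad/s
C = 1.0 / (9.0 * R * wz)    # capacitor that places the zero at wz

def f_lead(s):
    return 0.1 * (9.0 * R * C * s + 1.0) / (0.9 * R * C * s + 1.0)

print("C ~", C, "F")                                     # about 440 pF with these assumed values
print("|f| at dc:", abs(f_lead(0)))                      # 0.1
print("|f| at high frequency:", abs(f_lead(1j * 1e9)))   # approaches 1
```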

So here we've lived within the constraints. We can build a lead network with an alpha of 10; since that was the attenuation associated with the feedback network, we're able to use that as the maximum spread between the pole and zero in our lead network. And we live within that constraint. And we choose the best location for the zero in terms of improving the phase margin in the system.

When we do that, we find out that our maximum positive phase shift combines with the phase shift associated with the amplifier, itself. With the original a of s. And we find that we're able to get a phase margin of about 47 degrees. Crossover frequency with that combination is about 2.5 times 10 to the fourth radians per second. We increase crossover frequency just a bit by adding the zero close to crossover. In fact, the zero location in our lead network comes out just about at the compensated crossover frequency of the system. We can look at that Bode plot.

So here is the resultant picture when we add to our original magnitude curve a zero somewhere on the order of 2.5 times 10 to the fourth radians per second. We push the curve up a little bit. What we do is get additional positive phase shift out in here. We can now go to our new unity gain frequency. Here is the unity point in our graph. We can find our new unity gain frequency, which is 2.5 times 10 to the fourth. And at that frequency, we can look at the angle. We find out that the angle is such that we get 47 degrees of phase margin. The negative phase shift is minus 133 degrees, which corresponds to 47 degrees of phase margin.
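The same kind of numerical sweep used earlier confirms these figures; again, this is just an added cross-check sketch, while the lecture reads the values directly from the plot.

```python
import numpy as np

def a(s):
    return 5e5 / ((s + 1) * (s / 1e4 + 1) * (s / 1e5 + 1))

def f_lead(s, wz=2.5e4, alpha=10.0):
    # lead in the feedback path: 1/alpha at dc, zero at wz, pole at alpha*wz
    return (1.0 / alpha) * (s / wz + 1.0) / (s / (alpha * wz) + 1.0)

w = np.logspace(0, 7, 200000)
L = a(1j * w) * f_lead(1j * w)
mag, phase = np.abs(L), np.degrees(np.unwrap(np.angle(L)))

wc = np.interp(1.0, mag[::-1], w[::-1])
pm = 180.0 + np.interp(wc, w, phase)
print(f"lead compensated: crossover ~ {wc:.1e} rad/s, phase margin ~ {pm:.0f} deg")
```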

We can compare that Bode plot with the original one. Here is our original uncompensated curve. And here's the lead compensated magnitude placed on top of it. And we notice, of course, that nothing much happens at low frequencies. We begin to get significant departure from the uncompensated curve where we locate the zero. But we begin to get appreciable angular departure below that frequency. And in fact, by the frequency at which the zero is located, which is somewhere in here, we've gotten a very significant positive phase shift from our lead transfer function. We indicated that last time. That was what we were trying to exploit with lead compensation. And that results in a significantly improved phase margin.

Let's look at the effect that that has on the step response of the system. Here, once again, we have the same amplifier, of course. And I now, simply by operating this switch, am able to shunt in the capacitor across the feedback resistor. And so when we do that, we notice a very much improved response, at least in terms of stability. The overshoot goes from this, which is the uncompensated case, to a much, much smaller value like so.

We can now afford to bring up the amplitude scale. And we can look, for example, at the overshoot. The amplitude of the step is now about five major divisions. The overshoot is about half of a major division. So now we have a p0 for our compensated system of about 1.1. A very significant improvement compared to the overshoot associated with our uncompensated system. Again, the parameters here are, for our lead network, an alpha of 10 and a zero located at 2.5 times 10 to the fourth.

We also might measure the rise time of our step response because that's a measure, of course, of the overall bandwidth of our system, the speed of response of our system. So let's do that.

I am now at a horizontal speed of 50 microseconds per major division. The oscilloscope is set up such that if we measure between this major division and this major division, we'll be measuring the 10% to 90% rise time of our step. Final value being here, an initial value being down here. And so let's see.

We have 1 and about 1/2 major divisions between these two points, from here up to here. So we have about 1 and 1/2 major divisions. We're at 50 microseconds per major division. And so the rise time for the lead compensated amplifier is about 75 microseconds. The rise time for the uncompensated amplifier doesn't really mean much because we probably wouldn't operate the amplifier that way anyway. The system is not nearly stable enough.
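For comparison with these scope measurements, here is a simulation sketch of the same configuration (my own illustration, not from the lecture): the lead pair sits in the feedback path, so the ideal closed-loop gain is itself somewhat low pass.

```python
import numpy as np
from scipy import signal

# a(s) and the lead feedback network f(s) as polynomial ratios
num_a = np.array([5e5])
den_a = np.polymul(np.polymul([1.0, 1.0], [1e-4, 1.0]), [1e-5, 1.0])

wz, alpha = 2.5e4, 10.0
num_f = np.array([1.0 / wz, 1.0]) / alpha            # (1/10)(s/wz + 1)
den_f = np.array([1.0 / (alpha * wz), 1.0])          # (s/(10*wz) + 1)

# Closed loop A = a/(1 + a f) = (Na*Df) / (Da*Df + Na*Nf)
num_cl = np.polymul(num_a, den_f)
den_cl = np.polyadd(np.polymul(den_a, den_f), np.polymul(num_a, num_f))
sys_cl = signal.TransferFunction(num_cl, den_cl)

t = np.linspace(0, 1e-3, 20000)
t, y = signal.step(sys_cl, T=t)
final = y[-1]

print("P0 ~", y.max() / final)                          # close to the measured 1.1
t10 = t[np.argmax(y >= 0.1 * final)]
t90 = t[np.argmax(y >= 0.9 * final)]
print("10%-90% rise time ~", (t90 - t10) * 1e6, "us")   # on the order of the measured 75 us
```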

Well, so that's one kind of compensation that we can apply to our operational amplifier. What else might we do to it? We talked about lag compensation. And I'd like to look at how we might apply lag compensation to this system.

If we take the amplifier-- we had mentioned last time that when we use this configuration with resistive feedback in a non-inverting configuration, if we simply shunt a resistor between the input terminals of the amplifier, we're able to reduce the a0 f0 product. We did that as an example last time of how we might modify or compensate an amplifier, in particular this type of non-inverting configuration, by adding a resistor between the input terminals.

Now, if we had put a capacitor in series with that resistor, we'd lower the loop-transmission magnitude at frequencies where the impedance of the capacitor was small. And yet, we wouldn't change the loop-transmission magnitude at frequencies where the impedance of the capacitor is large. So that's a way of getting a lag type transfer function into our loop-transmission. We add a resistor and a capacitor across the input terminals of the amplifier. And the effect there is simply at high frequencies where the capacitor is effectively a short circuit, to shunt the input terminals, reduce the amount of signal that's fed back to the input of the amplifier.

The effect on the block diagram for the amplifier is as follows. This again parallels very closely what we did in an example, I believe, two sessions ago where we put a resistor between the terminals. We found out that that resistor provided an attenuation to the input signal, as well as to the signal fed back.

Here, because we use an RC between the input terminals, we get an attenuation or a lag type transfer function applied to both the input signal path and the feedback signal path.

We're able to slide those two boxes through the summing point and replace them as one on the other side. In other words, we originally had a lag transfer function here and a lag transfer function here. We can bring them through the summing point. And in this way, we see that this method of compensation really does end up with our lag transfer function in the forward path rather than being in the feedback path, as was the case with lead compensation. So here we have a lag type transfer function in the forward path of our system.

How do we choose the parameters that we use for a lag transfer function? In other words, the alpha and the tau. Or equivalently, the location of a zero. Well, there's a rule of thumb that oftentimes gets us a starting point for lag compensation. And what we do is think about what we're trying to accomplish with lag compensation.

What we'd like to do is attenuate the loop-transmission magnitude in the vicinity of crossover. But hopefully not too much at frequencies well below crossover. We prefer not to compromise system desensitivity at frequencies well below crossover. So we'd like to locate our lag network so that it has significant effect on the amplitude characteristics of the af product near the unity gain frequency. But not at very, very low frequencies.

Furthermore, we hope that the lag type transfer function doesn't give us considerable additional negative phase shift at crossover. And that tells us that we have to locate the lag back a ways from crossover.

If, for example, we located the zero associated with the lag network at the crossover frequency, we'd add an additional something on the order of 45 degrees of negative phase shift to our original uncompensated af product. And that would be a disastrous kind of effect. So we have to locate the lag network such that its zero is well below crossover, so that we don't get too much negative phase shift.

Well, we have a conflict. On one hand, you'd like to move the zero out close to crossover so you maintain desensitivity over as wide a range of frequencies as possible. On the other hand, in order to maintain stability, you'd like to have relatively little negative phase shift from the lag network at crossover, residual negative phase shift. And so here we have this problem that really has to do with the fact that a lag transfer function is a physically realizable kind of transfer function. It's a fundamental kind of a thing.

A starting point for that, sort of a rule of thumb that people oftentimes use as a starting point, is to locate the zero of the lag network possibly a factor of 10 below the compensated crossover frequency. The crossover frequency that will result with lag compensation. And we may choose to modify that, move it a little closer to crossover if we're really interested in desensitivity. Move it a little further back if we're really concerned about stability. But a good starting point might be this factor of 10, and then we can move around there. So I've chosen that as one of the guidelines for choosing the lag network.

Another thing I'd like to do so that we can inter-compare performance between lead and lag compensation, is shoot for the same amount of phase margin as we got in the lead compensated case. In the lead compensated case, the maximum phase margin we could achieve was really fixed. We had a given transfer function for the amplifier and we knew that the maximum value of alpha that we could get out of our lead type transfer function was 10. Because that was the attenuation associated with the feedback network. So we had relatively few degrees of freedom. The only thing we could do was locate the zero associated with the lead network.

When we do lag compensation, we can locate the zero anywhere we want to. And we can also get the alpha, make the alpha as large as we care to have it. Because we have really, two degrees of freedom, the R and the C. We're free to choose both of those. So we can locate any two quantities associated with the lag network. For example, the location of the pole and zero. Which of course, corresponds to the location of either one and the choice of alpha.

So I'm going to exploit that freedom to get as close as we can to 47 degrees of phase margin out of our lag compensated system. And the way that we do this is to go back to the original uncompensated Bode plot for the system. And our reasoning goes something as follows.

If we have the zero of the lag network located about a factor of 10 below our compensated crossover frequency, the residual negative phase shift associated with the lag network will be something on the order of 5 degrees. If the pole were very far below the zero, we'd get 5.7 degrees of residual negative phase shift. Assuming the pole is somewhat closer to the zero, we get a residual negative phase shift on the order of 5 degrees. So if we follow this rule of thumb to start our design process, we anticipate roughly 5 degrees of residual negative phase shift associated with the lag type transfer function.
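Here is the small piece of arithmetic behind that estimate, as an added sketch: for a lag transfer function (tau s + 1)/(alpha tau s + 1), the residual angle a decade above its zero is arctan(10) minus arctan(10 alpha).

```python
import numpy as np

# Residual negative phase shift of a lag network (tau*s + 1)/(alpha*tau*s + 1),
# evaluated a decade above its zero, i.e. at w*tau = 10.
for alpha in [3.0, 6.2, 10.0, 1e6]:          # 1e6 plays the role of "pole very far below the zero"
    residual = np.degrees(np.arctan(10.0) - np.arctan(10.0 * alpha))
    print(f"alpha = {alpha:g}: residual phase ~ {residual:.1f} deg")
```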

Our lead compensated system had a phase margin of 47 degrees. So what we do is look at our uncompensated angle curve, and look at the frequency where the phase shift associated with that uncompensated angle curve is minus 128 degrees. Somewhere in here. And the reason for that choice is as follows.

If we crossed over where the uncompensated angle curve was minus 128 degrees and we got another minus 5 degrees of phase shift residual from the lag network, we'd have minus 133 degrees of overall phase shift at crossover, or 47 degrees of phase margin. Fine.

We find the frequency where the angle of the af product is minus 128 degrees. And if you do this carefully, you find out that that's about 6.7 times 10 to the third radians per second. Somewhere back in here.

Furthermore, if you look at the magnitude at that frequency-- here's the frequency where the angle associated with the uncompensated transfer function is minus 128 degrees. We find out that the magnitude of the uncompensated transfer function at that frequency is 6.2. These kinds of manipulations are very easily done in Bode plot form. You can get incredibly accurate answers.

I mentioned that one of the advantages of using Bode plots for actually determining numbers in compensation is that when you draw these things, you can get accuracies to really within the width of a pencil lead without much effort at all. And so we're able to get numbers that are at least good to two places with simply a piece of graph paper and a pencil. If we have a cheap hand calculator, we can do even better. But we're able to get far better results than usually the hardware justifies with this very simple graphical technique.

So we've now concluded two things. One is that the original uncompensated system has minus 128 degrees of phase shift at a frequency of 6.7 times 10 to the third radians per second. And further that the magnitude of the af product uncompensated at that frequency is 6.2.

Well, if we follow our rule of thumb, we locate the zero of the lag network a factor of 10 below the crossover frequency. You see, we're going to make the 6.7 times 10 to the third be the compensated crossover frequency. We'll have a phase margin under those conditions of 47 degrees, the phase shift of the uncompensated system combining with our assumed 5 degrees of residual negative phase shift from the lag network. We locate our zero at 6.7 times 10 to the second radians per second, a factor of 10 below that. And we make alpha 6.2. We need to make alpha 6.2 in order to squeeze down the magnitude portion of the loop-transmission so that it goes through unity at 6.7 times 10 to the third radians per second, since 6.2 was the magnitude of the uncompensated af curve.
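Numerically, the same design point can be picked off the uncompensated curve like this; the sketch is added for illustration and reproduces the values read from the plot to within plotting accuracy.

```python
import numpy as np

# Uncompensated negative of the loop transmission: a(s)/10
def af(s):
    return 5e4 / ((s + 1) * (s / 1e4 + 1) * (s / 1e5 + 1))

w = np.logspace(0, 6, 200000)
L = af(1j * w)
phase = np.degrees(np.unwrap(np.angle(L)))

# Frequency where the uncompensated angle is -128 degrees: the new crossover
wc = np.interp(-128.0, phase[::-1], w[::-1])
alpha = abs(af(1j * wc))          # attenuation the lag must supply at that frequency
wz = wc / 10.0                    # zero a decade below the compensated crossover

print(f"compensated crossover ~ {wc:.2e} rad/s")          # around 6.7e3
print(f"alpha ~ {alpha:.1f}, zero at ~ {wz:.0f} rad/s")    # around 6.2 and 670
```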

So those are the parameters that we'll use. And when we do that to our loop-transmission, why we add a pole in here. Back at about 100 radians per second, something like that. We have the alpha of 6.2, which gets us out to 670 radians per second actually. We put in the zero there. And then crossover occurs out at about 6.7 times 10 to the third radians per second. The phase margin at crossover up in here somewhere is 47 degrees.

We notice the negative addition to the angle curve. But by crossover, that pretty much washes out.

Here again, we can compare the uncompensated loop-transmission with the compensated one, with the lag compensated one. And again, we notice that the magnitudes are identical at low frequencies. We get the effect of the lag transfer function on the magnitude curve to cause our compensated curve to drop below the uncompensated one. We get the negative lump of phase. But by crossover, by this frequency, that negative lump has largely washed out.

Let's look at the experimental results with this system. Here again, we have the lead compensated response. Let me take that out. We get our original uncompensated highly oscillatory response.

Now let me put in lag compensation by effectively shunting the RC network that we talked about between the input terminals of the amplifier. And once again, we get a very well damped response. Let's look at its characteristics.

Here we get a little bit more overshoot. We have about a one major division overshoot, about 20%. So p0 for this system is about 1.2. We can, once again, measure the 10% to 90% rise time.

Here we are at 100 microseconds per division. This time we measure between here and here. We get about 200 microseconds for the 10% to 90% rise time. So we conclude for this system, the lag compensated one, p0 is 1.2. The 10% to 90% rise time is two major divisions, or about 200 microseconds. These are with parameters alpha of 6.2 in the lag network. The zero at 6.7 times 10 to the second. That results in a 6.7 times 10 to the third radians per second crossover frequency. 47 degrees of phase margin. And if we keep track of things, we get a gain margin of about 14.
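A simulation sketch of the lag compensated loop, again my own illustration with the lag modeled in the forward path as in the block diagram, comes out in the same neighborhood as these measurements.

```python
import numpy as np
from scipy import signal

# Lag network in the forward path: p(s) = (tau*s + 1)/(alpha*tau*s + 1)
alpha, wz = 6.2, 670.0
tau = 1.0 / wz
num_p, den_p = [tau, 1.0], [alpha * tau, 1.0]

num_a = np.array([5e5])
den_a = np.polymul(np.polymul([1.0, 1.0], [1e-4, 1.0]), [1e-5, 1.0])
f = 0.1

# Forward path p*a; closed loop = p*a / (1 + p*a*f)
num_fw = np.polymul(num_p, num_a)
den_fw = np.polymul(den_p, den_a)
den_cl = np.polyadd(den_fw, f * num_fw)
sys_cl = signal.TransferFunction(num_fw, den_cl)

t = np.linspace(0, 5e-3, 50000)
t, y = signal.step(sys_cl, T=t)
final = num_fw[-1] / den_cl[-1]            # dc value, the ideal gain of about 10

print("P0 ~", y.max() / final)                          # around 1.2
t10 = t[np.argmax(y >= 0.1 * final)]
t90 = t[np.argmax(y >= 0.9 * final)]
print("10%-90% rise time ~", (t90 - t10) * 1e6, "us")   # a couple hundred microseconds
```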

Well, there's one apparent discrepancy that I'd like to spend just a little bit of time looking at or talking about. We had said that for systems with corresponding amounts of phase margin, the speed of response should scale just about directly with the crossover frequency. Furthermore, there should be a very nearly 1:1 correspondence between phase margin and any other measures of relative stability.

The unfortunate part here is that we find out that in the lag compensated system we get a little bit more overshoot than we do with the lead compensated system.

Furthermore, there's roughly a 3:1 difference in speed of response. The lag compensated system being a factor of about 3 slower. Whereas, there's roughly a 4:1 ratio in crossover frequencies. And so that's a little bit bothersome. Here there's a very nearly 4:1 ratio in crossover frequencies. Actually, a little less than a 3:1 ratio in speed of response. That's a little bit disturbing.

It'd be nice to sort of sweep this under the rug and mumble, experimental error. But it's really not that. It's more fundamental than that.

When we use these estimators of system performance, remember we're really comparing systems that had frequency independent feedback paths. Unfortunately, we're not able to do that in the case of the lead compensated system. The physical reality, the constraints that the system placed on us were such that we were only able to get lead compensation if we were willing to take it in the feedback path.

Well, you can sort of argue the effects of that. In general, if we have a feedback system with very large loop-transmission magnitude, the closed-loop gain tends to go to the reciprocal of the feedback path.

Here we have a lead network, a network that's high pass in nature, in the feedback path. That tends to make the forward-- the ideal closed-loop gain be low pass in nature. And so, as opposed to a system that had a frequency independent feedback path but the same lead transfer function somewhere in the loop, in particular in the forward path, we'd anticipate that applying the lead compensation in the feedback path would tend to slow up the response a little bit, tend to push down the peak overshoot, be a little bit low pass in nature.

In reality, we don't have a large loop-transmission magnitude at the frequency where the lead network has other than flat response. So we don't get a pronounced effect that way. But that's the effect that tends to push down the peak a little bit and tends to slow up the lead compensated response a little bit, so that the ratio of response or rise times isn't directly equal to the ratio of crossover frequencies.

While I didn't allow us to place the lead transfer function in the forward path by my original statement of the problem, our little amplifier here allows us to take some liberties that exceed those in the problem statement. And so let me go back to, first of all, our original uncompensated case, which is this. And now let me look at another point in the amplifier, which will allow me to place the lead network in the forward path. And the hope is that by doing this, we'll be able to get better correspondence between the lead and the lag compensated case.

So let me, as I say, do something which is really prohibited by my original problem statement. But I included it in the experimental setup to convince you that this method really does give us very good results.

There is the result of applying lead compensation in the forward path to the same a of s. So now all we've done is put a block that's the lead transfer function in the forward path.

When we do that we, first of all, note that we get about 20% overshoot from our step response. So now we have the p0 that's the same as the lag compensated case. We anticipate that that should be true in that these systems have identical phase margins, or as nearly identical as I could make them.

If we measure the response, the rise time-- well, let's see. Here we're at 50 microseconds per division. Let me just move things around a little bit. And we see now that we have just about a 50 microsecond rise time. So we take lead in the forward path with the same parameters as we used in our previously determined lead example, so that we're comparing comparable systems: in particular, a zero at 2.5 times 10 to the fourth radians per second and an alpha of 10. We of course get crossover at exactly the same value, and the same value of phase margin. Those quantities are dependent on the af product, not the way the dynamics are distributed between a and f. And we, in fact, get a peak overshoot 1.2 times the amplitude of the step. And we get a rise time of about 50 microseconds.
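And here is the corresponding simulation sketch (my illustration) with the same lead pair moved into the forward path and f left as a flat 1/10, so the ideal closed-loop gain is a flat 10.

```python
import numpy as np
from scipy import signal

# Same lead parameters, but the zero-pole pair now sits in the forward path.
wz, alpha = 2.5e4, 10.0
num_lead, den_lead = [1.0 / wz, 1.0], [1.0 / (alpha * wz), 1.0]

num_a = np.array([5e5])
den_a = np.polymul(np.polymul([1.0, 1.0], [1e-4, 1.0]), [1e-5, 1.0])
f = 0.1

num_fw = np.polymul(num_lead, num_a)
den_fw = np.polymul(den_lead, den_a)
den_cl = np.polyadd(den_fw, f * num_fw)
sys_cl = signal.TransferFunction(num_fw, den_cl)

t = np.linspace(0, 1e-3, 20000)
t, y = signal.step(sys_cl, T=t)
final = num_fw[-1] / den_cl[-1]

print("P0 ~", y.max() / final)                          # close to the lag compensated value
t10 = t[np.argmax(y >= 0.1 * final)]
t90 = t[np.argmax(y >= 0.9 * final)]
print("10%-90% rise time ~", (t90 - t10) * 1e6, "us")   # roughly 4 times faster than the lag case
```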

The important thing is that we get virtually the same overshoot as the lag compensated case. We hoped that would be true; if what I've been telling you is true, that should be so, because we have the same phase margin in the two cases.

Furthermore, there's about a 4:1 ratio in rise time. And that's just about the same ratio as the crossover frequencies. So all of that hangs together. And again, reinforces my contention that we're able to make very, very good predictions of behavior based on things like crossover frequency and phase margin that are very, very easy to evaluate via the kinds of graphical manipulations that we've been doing.

Let's very quickly just compare the magnitude portion of the loop-transmission, first for the lead compensated case and then for the lag compensated case, so that we see the difference in desensitivity. The desensitivity of those two systems is identical up to this frequency, and then departs, of course. Reduced a0 f0 pushes the whole curve down, where we lose desensitivity at all frequencies. And a lowered first pole, where we, again, lose desensitivity over a very wide range of frequencies. We didn't do these experimentally. We certainly could have reduced a0 f0 had we cared to. We could have lowered the first pole, effectively as an extreme case of lag compensation. This would have been a possibility as an extreme case of lag compensation. We didn't do those, but we can at least see the effect on desensitivity had we performed those experiments as well.

Let me also look at the picture from a root locus point of view. Here is the uncompensated system. And for our particular numbers, we'd be out here somewhere. We had a very low damping ratio and a considerable amount of overshoot, so for the particular numbers we use, the uncompensated system, as I say, would have sort of this situation for the dominant pole pair.

The way lead compensation improves the situation is in the following way. Here we add an additional zero. In our case, it comes in at roughly this location, somewhere above the first pole in a of s, and there's another pole associated with the lead network. The net effect, of course, is to have this branch go to the right, terminate on this zero.

Since this is a situation where we can use the average distance rule, we have three more poles than zeros. What it tells us is that this dominant pair of branches has to move back a little bit before it eventually heads over into the right-half of the s-plane.

And if we keep track of that, we find out basically, that we can tolerate considerably larger values of a0 f0 before we get to a given damping ratio for the dominant pole pair. So here in our particular system, if we kept track of how far out on the root locus diagram we had gone, we find out that we get a much better damping ratio with the lead compensated case than with the lag compensated case. Or I'm sorry, with the uncompensated case.

The lag compensated system looks like this. We have the pole at 1 radian per second, the pole at 10 to the fourth, 10 to the fifth. Those are the ones associated with a of s. And we have the pole 0 doublet associated with the lag. Here's the pole of the lag transfer function. Here's the zero at 670 radians per second. We do the root locus diagram, these two branches come together, circle around. One branch heads towards this zero. One out here, which joins this branch. They then spread off. And the pattern out in here looks very, very much like the uncompensated case.

However, if we kept track of the a0 f0 necessary to get out to a given damping ratio, we'd find out that we could tolerate considerably more a0 f0 in the lag compensated case than in the uncompensated case. In fact, about a factor of 6.2 greater a0 f0, equal to the alpha of the network that we use.
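Those root-locus conclusions can be spot-checked numerically. This is a sketch I've added, using the damping ratio of the dominant complex pole pair as the measure of relative stability: at the full a0 f0 of 5 times 10 to the fourth, the uncompensated pair is poorly damped, while the lag compensated pair is much better damped.

```python
import numpy as np

den_a = np.polymul(np.polymul([1.0, 1.0], [1e-4, 1.0]), [1e-5, 1.0])
k = 5e4                                   # a0*f0 for this design
tau, alpha = 1.0 / 670.0, 6.2             # lag network parameters

def dominant_zeta(char_poly):
    # damping ratio of the complex closed-loop pole pair
    r = np.roots(char_poly)
    pair = r[np.abs(r.imag) > 1.0]
    return float(np.min(-pair.real / np.abs(pair)))

# Uncompensated: characteristic polynomial Da(s) + k
zeta_unc = dominant_zeta(np.polyadd(den_a, [k]))
# Lag compensated: Da(s)*(alpha*tau*s + 1) + k*(tau*s + 1)
zeta_lag = dominant_zeta(np.polyadd(np.polymul([alpha * tau, 1.0], den_a),
                                    k * np.array([tau, 1.0])))

print("uncompensated dominant-pair damping ratio ~", round(zeta_unc, 2))    # roughly 0.1
print("lag compensated dominant-pair damping ratio ~", round(zeta_lag, 2))  # roughly 0.4
```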

Let me finish up by just making a couple of statements comparing the various kinds of compensations. We found, in this particular example anyway, that we got by far the widest bandwidth out of lead compensation. If we compare lead and lag compensation, we got by far the widest bandwidth out of the lead kind of compensation. And that might tend to make us believe that lead compensation must be good. We can get comparable stability to what we achieve with a lag compensator. And yet, we get considerably wider bandwidth.

Somehow, we tend to have a feeling that maybe more is better in the case of bandwidth. But that's not really true. I think any of you who have worked with digital systems recognize that anybody who solves sort of TTL kinds of problems with emitter-coupled logic deserves all the trouble he gets. And the reason for that is just that high-speed signals are difficult to deal with. Parasitics become more important. That's certainly true in the case of logic design. It's equally true in the case of analog design.

You really ought to design systems for the minimum bandwidth that will yield acceptable performance, rather than feel that more is always better. If you're building Hi-Fi systems, bandwidth much in excess of 20 kilohertz helps dogs, but not people. And so we don't really try to design for ultra wide bandwidth in Hi-Fi systems because the ear doesn't respond to anything beyond 20 kilohertz. And there's no real need to do it. All we do is get into trouble.

So fine, in many systems we find that lead type compensation does result in increased bandwidth, and sometimes we need it. In those cases, we ought to try to get the most bandwidth we can, oftentimes resulting in lead compensation.

But in other systems, we can't take advantage of the-- or we can't really use the additional bandwidth, we just get ourselves into trouble.

Furthermore, there are some systems with certain kinds of fixed elements where lead really doesn't help. For example, consider what happens if we have a system with a pure time delay in it. A time delay has a transfer function e to the minus sT, where T is the delay. And that gives us negative phase shift at all frequencies. In fact, as we go to progressively higher frequencies, we get unlimited negative phase shift out of a pure time delay.
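To see how quickly that phase accumulates, here is one last added sketch; the delay value is an arbitrary assumption for illustration, and for contrast it also shows the maximum phase boost a single lead pair can supply for a given pole-zero separation alpha.

```python
import numpy as np

# Phase of a pure time delay, e^(-sT): the angle is -w*T radians and grows
# without bound as the frequency increases.
T = 1e-4                                  # assumed delay in seconds, illustration only
for w in [1e3, 1e4, 1e5, 1e6]:
    print(f"w = {w:.0e} rad/s: delay phase = {-np.degrees(w * T):.0f} deg")

# By contrast, the maximum positive phase from a lead pair with pole-zero ratio
# alpha is arcsin((alpha - 1)/(alpha + 1)), which only approaches 90 degrees.
for alpha in [10.0, 100.0, 1000.0]:
    print(f"alpha = {alpha:g}: max lead phase = "
          f"{np.degrees(np.arcsin((alpha - 1) / (alpha + 1))):.0f} deg")
```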

Well, the lead network that can give us at most 90 degrees of phase shift for an incredible spread between the pole and zero really can't help very much to offset the unlimited negative phase shift associated with the time delay. The only thing we can do to compensate a system that includes a pure time delay is effectively lag compensation or dominant pole compensation. Because we're really not able to make up for the very, very large negative phase shift associated with that type of a transfer function. So there are certainly transfer functions, fixed element transfer functions, that don't lend themselves very well to the application of lead compensation. And there are other transfer functions where you get into at least more trouble than you would otherwise if you use lead compensation. So I think the message to remember there is lead is fine in cases where it's applicable, and where you really need the bandwidth. Otherwise, maybe alternatives are in order. Thank you.