James K. Roberge: 6.302 Lecture 09


[MUSIC PLAYING]

JAMES K. ROBERGE: Hi, last time we had looked at an introduction to compensation of feedback systems. In other words, how do we modify the loop-transmission of a feedback system in order to improve its performance? Possibly increase its speed of response, or increase desensitivity. Improve the stability of the system, those sorts of modifications that we very frequently want to make to feedback systems.

The simplest kind of modification we can make is to change the low frequency magnitude of the loop-transmission. In the notation we've been using, the a0 f0 product. And we've looked at how that influences the root locus diagram, for example. And last time we looked at how that might improve the stability of a system.

Today, I'd like to look at somewhat more complicated modifications of the loop-transmission. In particular, ones that change the dynamics associated with the loop-transmission.

The simplest thing that we can do is to add a dominant pole to the system. Suppose we start with a system that has a Bode plot as shown.

Here's the af product as a function of frequency. The magnitude and the angle in our usual Bode plot coordinates. And as I've drawn things, the system's actually unstable. This is the unity magnitude point. So this would be the crossover frequency for the system. The frequency at which the magnitude of the af product goes through unity. And we notice that by that frequency, the angle associated with the af product has gone through minus 180 degrees. Consequently, this is a system that's unstable as shown.

Now suppose I'd like to modify that system. And in fact, try to increase its phase margin possibly to 45 degrees. If I added one more pole to the system, at a sufficiently low frequency what I could do is get the magnitude to roll off and in fact get down to unity before I acquired enough phase shift to cause instability. Let's see how we might do that.

I know that if I add a single pole well below crossover, by the time I get out to the crossover frequency, the phase shift associated with that pole will be very nearly minus 90 degrees.

Suppose I look at the original uncompensated Bode plot, and find the frequency where the phase shift of the original uncompensated system is minus 45 degrees. I recognize that if I add an additional 90 degrees of negative phase shift, I'd end up with the angle at this frequency being minus 135 degrees. In other words, if I could force crossover to occur at this frequency by adding a single pole to the loop transmission at a sufficiently low frequency, I would end up with a system that has 45 degrees of phase margin. So let's see if we can do that.

All we do, assuming the physical constraints of the system allow that, is to somehow add energy storage to the system. And let's suppose, as we indicated, we'd like to cross over at this frequency. And that tells us that we'd like to have a single pole roll-off like so, in the vicinity of crossover. If I've accomplished that by adding a single pole at a sufficiently low frequency, why we'd end up with a loop-transmission magnitude that looks something like this after compensation.

Out in this region we actually have to follow the original curve on top of the single pole roll-off. So possibly we'd get a magnitude that looks something like that.

The influence on the angle associated with the af product is simply to add an additional 90 degrees of phase shift, 45 of it occurs at the location of the pole. And then, we asymptotically trail off to an additional 90 degrees of negative phase shift. So the compensated angle might look something like so.

Here we've acquired our additional 90 degrees of phase shift. And we now go and continue parallel to the original angle curve, but displaced by minus 90 degrees. So our new angle curve might look something like that. And now we've, in fact, accomplished our objective of achieving 45 degrees of phase margin for the compensated system.
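The procedure just described can be sketched numerically. Here we assume a hypothetical uncompensated loop transmission with three real poles; all numerical values are illustrative, not taken from the lecture.

```python
import math

# Hypothetical uncompensated a(s)f(s) with three real poles (illustrative values).
A0F0 = 1e4
TAUS = [1.0, 0.1, 0.01]                  # pole time constants, seconds

def mag(w):
    """|a(jw)f(jw)| of the uncompensated loop."""
    m = A0F0
    for tau in TAUS:
        m /= math.hypot(1.0, w * tau)
    return m

def phase_deg(w):
    """Unwrapped phase of the uncompensated loop, in degrees."""
    return -sum(math.degrees(math.atan(w * tau)) for tau in TAUS)

# Step 1: find the frequency where the uncompensated phase is -45 degrees.
lo, hi = 1e-4, 1e4
for _ in range(80):                      # bisection; phase is monotone in w
    mid = math.sqrt(lo * hi)
    if phase_deg(mid) > -45.0:
        lo = mid
    else:
        hi = mid
wc = math.sqrt(lo * hi)

# Step 2: place the added pole low enough that its single-pole attenuation
# (roughly wp/w well above the pole) brings the magnitude to unity at wc.
wp = wc / mag(wc)

# Step 3: check -- the compensated loop crosses over at wc with about 45
# degrees of phase margin, the added pole contributing very nearly -90 there.
comp_mag = mag(wc) / math.hypot(1.0, wc / wp)
comp_pm = 180.0 + phase_deg(wc) - math.degrees(math.atan(wc / wp))
print(comp_mag, comp_pm)                 # ~1.0, ~45 degrees
```

Note how far below the original dynamics the added pole lands (wp is several decades below wc here), which is exactly the bandwidth price discussed next.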

The price we paid, however, is a very significant one in terms of the bandwidth of the resultant system. Originally, our crossover frequency was out here. We've made a very, very major change in crossover frequency. We've lowered crossover frequency considerably by putting in our dominant pole, starting the roll-off at a much lower frequency than we originally did. We've also lost desensitivity over a very wide range of frequencies.

We notice, of course, here's the original magnitude curve. We've lowered the af product at all frequencies beyond this frequency. And so again, compared to the original uncompensated system, we've lost desensitivity. And this shaded area is a measure of the amount of desensitivity we've lost as a function of frequency.

So the disadvantage of that sort of compensation is simply the greatly narrowing effect that it has on the bandwidth of the system. But there's one important class of systems where that sort of approach is a very, very good approach to compensation.

In particular, suppose our objective is to build a regulator. That is, a system where the intent is to keep an output variable fixed. This is in contrast to the usual following kind of feedback system where we try to command an output to follow a time varying input. And under those conditions when we're trying to track a time varying input, the bandwidth of the system is, of course, very important, the closed-loop bandwidth.

However, when we're trying to get the output of a system to be time invariant, a regulator where we want the output to remain fixed for all time, adding energy storage to the output, creating a dominant pole by basically adding energy storage to the output, really improves the stiffness of the output if you will. It improves the ability of the output to resist disturbances.

As an example of that, consider what happens if we take an ordinary laboratory power supply. And we go searching around and find the largest capacitor we can find and roll that up to the power supply and just clamp it on to the output terminals. We may have some trouble starting the supply. When we first turn it on, it may take a while to charge up the large capacitor that we've added to the output of the system. But once we finally get the capacitor charged, we're in very good shape. The capacitor at the output of the feedback system or the regulator, in this case, resists disturbances that come, for example, from changes in load current. From changes in primary unregulated voltage and so forth. And really, helps improve the resistance of the system to those kinds of disturbances.

So if our intent is to build a regulator where what we want to do is keep the output of the system time invariant, then adding a dominant pole, providing we put the energy storage associated with the pole at the output of the system, at the variable that we're trying to keep constant, why we get an improvement in performance.

I do an example in the book of an electronic regulator and show how that works. What I'd like to talk about here as an alternative is a speed control system. So let's look at that a little bit.

Here we have a system where we assume that what we're trying to do is keep an output velocity, which I indicate as omega response, fixed. We might command some speed, but we assume that this command is really a fixed quantity, or at most, incredibly slowly time varying. So we put in a command, and our hope is that eventually the system reaches a speed that's about equal to that command. In the system I've shown, notice that we have unity feedback.

And from then on, the intent of the feedback system is to keep the output velocity fixed. And the disturbances that we'll consider are torque disturbances that are applied to the mechanical system. Our system consists of an amplifier driving a motor. We measure the velocity of the motor with some sort of a device called a tachometer, which allows us to feed back information concerning the output velocity. We compare that with a set point or commanded velocity, amplify the difference. And of course, apply that voltage to the motor.

We assume that the total inertia of the load, this includes the armature inertia of the motor, any rotating load that we have, we assume that that total inertia is j. And as I say, the objective of the system is to keep this velocity fixed as we apply torque disturbances to the system.

We have to start out modeling the motor. And there's a large and important class of motors that are very well modeled by this simple model. If we have a motor that has a fixed magnetic field either as a consequence of a winding excited by fixed current, or as a consequence of a permanent magnet, why we're very often able to model that kind of a motor in this way.

Here we have a resistance, which is the armature resistance of the motor. And there's a back voltage generator, whose voltage is some constant that has to do with the way the motor is constructed times the mechanical shaft velocity of the motor.

At the same time the torque that the motor supplies is simply equal to the armature current multiplied by a constant. And in fact, if we're working in a set of units such as mks units, where the product of velocity times torque gives us power in the same units as the product of current times voltage, these two K's have to be the same. So this K and this K are, in fact, identical providing we're working in mks units.
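A minimal numeric check of that unit consistency, with made-up mks values: the back voltage constant and the torque constant must be the same number K, or converted power fails to balance.

```python
# Motor model check: back voltage e = K*w and torque T = K*i share one
# constant K in mks units. All numbers below are illustrative.
K = 0.05      # N*m/A, equivalently V/(rad/s)
w = 200.0     # shaft speed, rad/s
i = 3.0       # armature current, A

e_back = K * w          # back voltage, V
torque = K * i          # electrical torque, N*m

# Electrical power delivered into the back-voltage generator must equal
# the mechanical power appearing at the shaft.
p_elec = e_back * i     # W
p_mech = torque * w     # W
print(p_elec, p_mech)   # both 30 W (to within float roundoff)
```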

Well, with that model for the motor, let's go ahead and construct a block diagram for our speed control system. We have our commanded value for speed, or shaft velocity. And we have the response of the system out here. And we subtract the two of them to develop an error signal. We amplify that error signal, and the resultant voltage is that that we apply to the motor.

But in order to complete our block diagram, we have to find out what torque the motor supplies. And we notice from our motor model that the torque is proportional to the armature current. Well, how do we find the armature current?

The armature current is equal to the voltage that we apply to the motor terminals minus the voltage of the back voltage generator. That difference is the voltage across this resistor.

And if we divide that voltage by the value of the resistance, we get the armature current. And we can do that in a block diagram basis by taking the voltage out of the amplifier, subtracting the back voltage of the motor, which we get by multiplying the motor velocity by K. So we subtract that back voltage from the voltage supplied by the amplifier.

We divide the resultant difference by R, and we get the armature current.

Once we have the armature current, we can get the torque that the motor is supplying electrically, simply by multiplying that armature current by the constant K. So at this point in the block diagram, we have the electrical torque being supplied by the motor. Or the torque that's a consequence of the energy conversion process associated with the motor.

We add to that the disturbance torque that's applied externally to the system. And the sum of these two is the net torque applied to the inertia associated with both the motor armature and additional rotating inertia in the system. The inertia j as we defined it earlier.

If we divide that net torque by j, we get acceleration of the motor and load combination. If we then integrate, multiply by 1 over s, we get the velocity of the motor load combination.

Once we've gotten to that point in the block diagram, of course, we have the variable necessary to complete the various feedback paths. We multiply motor velocity by our constant K to get back voltage associated with the motor. Similarly, we bring back velocity information probably via some sort of a transducer. As I mentioned earlier, a tachometer to the input of the system.

Well, we can now look at the properties of this sort of a feedback system. If I have a sufficiently large value for a, we have a system that has unity feedback between the output variable and our input summing point. So we have a system which has a so-called major loop that has an f for a feedback transfer function of 1. And in fact, providing a is sufficiently large, the gain from the input of the system to the output of the system is about equal to unity, the reciprocal of the feedback path. And this tells us that for a large a, the response speed should be just about equal to the commanded speed.

A real interest in this particular example, however, is how well the system rejects torque disturbances that are applied to the motor load combination. So what we really want to do is look at the response, the change in speed, that occurs for a given torque disturbance. So we want to write the closed-loop transfer function omega r over Td.

If we do that, this expression results. We find out that the dependence of output velocity on disturbing torque depends, of course, on various system parameters. The multiplying factor out in front, R over K times K plus a, we can sort of rationalize why that ought to be. If we had a larger amplifier gain, of course, we'd get greater attenuation. The system would be less sensitive to applied torque disturbances.

Similarly, if R is smaller, a given voltage applied to the motor produces greater armature current. And hence, greater torque. And so that's the reason that R appears as it does. A smaller R leading to smaller disturbances.

Similarly, if the back voltage coefficient is larger, the motor sort of intrinsically tends to minimize disturbances applied to it, velocity disturbances. So this collection of constants out in front gives us the rejection, if you will, at low frequencies. This is hopefully a small number.

And then, at some higher frequency, we get a single pole associated with this transfer function reflecting the energy storage associated with the rotating inertia. So our disturbance to control variable transfer function has some low frequency magnitude given by this expression, and then a single pole.

The important point to note is that if we increase the inertia in this expression, it doesn't change the low frequency rejection at all. But what it does is lower the frequency at which the pole comes in. And we can look at the effect on this transfer function of changing inertia in Bode plot form. And I show that over here.

Here we plot the magnitude of the speed over the disturbance in Bode plot coordinates as a function of frequency. So what we're doing is measuring the sinusoidal steady-state rejection of disturbing torques. We're proposing an experiment where we go in and apply a sinusoidally varying torque to the armature load combination. And we look at how well the motor rejects that.

There's a dc value, which was given in our earlier expression that's R over K times K plus a. So that's the low frequency value.

And then there's a corner frequency associated with the system, and it involves a collection of constants. But the important point is that that corner frequency is inversely proportional to the total inertia. It's inversely proportional to j.

And so if we add an additional j, we build up our system, we adjust K, for example, in order to control our dc sensitivity to the level that we'd like to have it, we then can, by appropriate choice of j, ensure that the system crosses over before higher order poles in the system transfer function become important. So we can ensure that we have stability, possibly as much as 90 degrees of phase margin, by forcing crossover at a sufficiently low frequency through appropriate choice of j.

And as we make j larger, the system performance improves. The rejection of the system gets better because an increasing j lowers the frequency of this pole.

For example, we might get this sort of a transfer function when we increase j. And so this then in our Bode plot, is the direction of increasing inertia. And we see that the rejection of the system-- remember, we'd ideally like to have this transfer function zero. We'd ideally like to have no change in speed for a given torque disturbance. But we push the first pole down as we increase j. And in fact, in the limit, we could increase the band of frequencies over which we got very high attenuation by using a sufficiently large rotating inertia.

So in this one important class of systems, regulators where the objective is to reduce or ideally eliminate a disturbance, why creating a dominant pole is probably the compensation form of choice. Particularly, if we can locate the dominant pole at the output. If we can put our energy storage that creates the dominant pole at the output of the system, our attempts to reduce disturbances are greatly aided by the inertia. Or in the case of the power supply, the electrical power supply, the energy storage in the capacitor that we attach to the output.

Well, suppose we have a system where we can't tolerate the bandwidth decrease that comes along with creating a dominant pole. There are other modifications that we can make to the loop transmission in order to improve the stability of the system. And typically, they retain more desensitivity than does the option of creating a dominant pole. And possibly, retain more of the system bandwidth than would occur if we forced a dominant pole. Let's look at the kinds of things we might like to do.

Here again, I've shown a Bode plot for the af product for some hypothetical feedback system. This time I've shown a stable one. The magnitude is flat at dc, possibly a single pole roll-off in this region, and then some higher order poles, or additional dynamics at higher frequencies. An associated angle might be one that was near 0 degrees at low frequencies.

Here's the 180 degree point, or minus 180 degrees point. We start out close to 0. We get about 90 degrees of phase shift reflecting this first pole.

And then at higher frequencies, again, reflecting the energy storage, or the other modes of energy storage in the system, the angle associated with the af product begins to drop off. Eventually goes through minus 180 degrees.

In this case, I've assumed that the system is stable. In other words, the way I've drawn things, we go through unity magnitude before the angle gets to minus 180 degrees. But I've also tried to show in the drawing that we have a very small amount of phase margin. In this case, we might not feel that the system has adequate stability. Let's see what we're able to do to improve that situation.

The kinds of modifications we might like to make are the following. Suppose we could do something to increase the angle associated with a transfer function, or with a loop-transmission out in this region. If we could somehow push up the angle, and do that in a way that didn't increase crossover frequency-- here we've shown the crossover frequency of the system. If we could do that in a way that didn't increase the crossover frequency, so that additional negative phase shift associated with the uncompensated system didn't hurt us, and just push up this angle while keeping the magnitude the same out here, that would certainly help.

If we could make this angle more positive, of course, we'd increase the phase margin of the system. And thereby, improve the stability. So that's one thing that we might like to do.

Another thing that we might like to do is somehow push down the magnitude, but not at all frequencies. We spoke last time about simply lowering the a0 f0 product. Which, of course, pushes down the magnitude at all frequencies. The disadvantage to that approach is that it lowers desensitivity at all frequencies.

Suppose we could find a way of modifying the loop-transmission so that we push down the magnitude in this vicinity, so that we crossed over at a lower frequency without making a significant change in the angle. If that were the case, crossover frequency would lower. We'd still have our original desensitivity at lower frequencies. Because the crossover frequency lowers, we'd end up with a greater amount of phase margin. So those are the two things we might like to do, either push up the angle portion of the Bode plot in the vicinity of the crossover frequency without making a corresponding change in the crossover frequency, or corresponding change in the magnitude. Or push down the magnitude curve in the vicinity of crossover to lower crossover without making changes, or significant changes, at much lower frequencies. And without modifying the angle associated with the transfer function.

Well, unfortunately, our ability to do that's limited. The implications of physical realizability tell us that there's a relationship between the magnitude and the angle of a transfer function. And in fact, the kinds of things we'd like to do, we really can't do with networks that contain a finite number of elements.

Consider, for example, a transfer function that does the first operation. One that gives us positive phase shift without corresponding magnitude changes. Such a transfer function is e to the plus s tau. But unfortunately, e to the plus s tau is the transfer function of an element that predicts. In other words, the impulse response of an element whose transfer function is e to the plus s tau is an impulse that occurs exactly tau seconds before we put in the original impulse. And consequently, obviously, a physically unrealizable network.

In fact, if we had such a network, there's probably far better things to do with it than waste it on feedback systems. We can get tomorrow's stock market reports and that sort of thing, and not waste it on building feedback systems. So those kinds of things we're not really completely able to do. We don't have the complete freedom we'd like to have to make the kinds of changes that would give us the greatest improvement in the stability of our feedback system.

But we do have some degrees of freedom. And, in fact, the simple transfer functions that work in this direction are those that include pole zero doublets. A single pole and a single zero are the simplest forms of networks that go in this direction, or are the simplest forms of transfer functions that go in this direction.

One of the transfer functions that we're interested in, and that we can use to improve the performance of feedback systems is called a lead transfer function. And the output over the input for that sort of system as a function of s, is one that combines a low frequency zero with a pole located at a higher frequency. In this representation, we assume that alpha is a number larger than 1. Consequently, we get a zero in the lead transfer function at a relatively lower frequency than the pole.

If I normalize the expression by bringing a 1 over alpha out in front, I get an expression that can be realized with a simple passive network. And so very frequently, a lead type transfer function is written as (1 over alpha) times (alpha tau s plus 1) over (tau s plus 1).

The lead type transfer function, as the name implies, gives us positive or leading phase shift. Actually, at all frequencies. But with the greatest positive phase shift occurring between the zero and the pole.

If we'd like to have something to look at, we can mechanize a lead type transfer function with a simple network shown here, two resistors and a single capacitor.

If we define 1 over alpha as being equal to the attenuation ratio R2 over R1 plus R2, why the low frequency gain of this network is, of course, simply 1 over alpha. At low frequencies, the capacitor has no influence on the performance of the network. At high frequencies, the gain becomes 1. And so we have a transfer function that must have a zero that causes the gain to increase from its low frequency value. We have a single pole that eventually causes the gain to flatten out. If you simply write the network expressions, you find out that you get this form, the zero, at a relatively lower frequency than the pole for the V out over V in transfer function where alpha and the time constant are as defined here.
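As a check on those identifications, we can compare the direct circuit analysis of the network against the normalized lead form. Here we assume the usual topology, R1 paralleled by the capacitor in the series arm and R2 as the shunt arm, with illustrative element values; writing out the divider gives alpha tau = R1 C and tau = (R1 R2/(R1+R2)) C.

```python
import cmath

# Lead network check (illustrative values): series arm R1 || C, shunt arm R2.
# With 1/alpha = R2/(R1+R2), the network realizes
# (1/alpha)*(alpha*tau*s + 1)/(tau*s + 1),
# where alpha*tau = R1*C and tau = (R1*R2/(R1+R2))*C.
R1, R2, C = 9e3, 1e3, 1e-6

alpha = (R1 + R2) / R2               # 10 for these values
tau = (R1 * R2 / (R1 + R2)) * C      # pole time constant

def vout_over_vin(w):
    """Direct circuit analysis of the divider."""
    z1 = R1 / complex(1.0, w * R1 * C)      # R1 in parallel with C
    return R2 / (R2 + z1)

def lead(w):
    """Normalized lead form with the identified alpha and tau."""
    s = complex(0.0, w)
    return (1.0 / alpha) * (alpha * tau * s + 1.0) / (tau * s + 1.0)

for w in (1.0, 100.0, 1e4, 1e6):
    print(w, abs(vout_over_vin(w) - lead(w)))   # agreement at all frequencies
```

The low-frequency gain is 1/alpha (capacitor open), and the high-frequency gain is 1 (capacitor shorts R1), matching the description above.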

So here's a simple electrical network that provides an output over input transfer function, which has lead type characteristics. We can look at both the magnitude and the angle associated with that sort of lead transfer function.

Here's the magnitude for several different values of alpha. Here we have an alpha equals 20, so we start out at low frequencies with a magnitude of 1/20, or 0.05. In this case, we're normalized to a common pole location for all of the networks. The pole location being one unit on the frequency axis.

In the case of alpha equals 20, the zero would occur a factor of 20 below that, at 0.05 units.

And we then get basically, a single zero rise in this region. Then we get the pole associated with the lead type network.

Similarly, if we look at the lead characteristics for alpha equals 10, why we get this characteristic: a magnitude of 1/10 at low frequencies, and a zero location at 1/10 of a unit on the frequency axis.

And similarly, for alpha equals 5. So this then, is the magnitude associated with the lead network. And we show parametrically, the behavior for various values of alpha.

We can look at the angle at the same time, or for the same kind of transfer function. And here is the angle associated with the lead type transfer function.

We notice that even for a relatively small value of alpha, we get a positive phase shift whose maximum value actually occurs at the geometric mean between the zero and the pole location. Remember that all of our networks were normalized to unity for the location of the pole. So all of our networks have the pole at this frequency.

In the case of an alpha equals 5, the zero occurs a factor of 5 below that at this frequency.

We notice the maximum positive phase shift out of our network at the geometric mean of those two frequencies, and we get something over 40 degrees of phase shift at the maximum when alpha equals 5.

Similarly, for alpha equals 10, alpha equals 20, we get up to something like 65 degrees of maximum phase shift. Again, occurring at the geometric mean of the zero and the pole in the case of an alpha equals 20. Here's the zero location for alpha equals 20 at 0.05 on our frequency scale. Again, here's the pole location. The maximum positive phase shift from our lead network occurs at the geometric mean of those two frequencies.
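The curves just described follow from two standard identities for the lead pair: the peak phase lead occurs at the geometric mean of the zero and pole frequencies, with value arcsin((alpha minus 1)/(alpha plus 1)); and the magnitude at the zero is only about the square root of 2 times the low-frequency value. A quick check, normalized as in the plots:

```python
import math

# Lead-pair identities, pole normalized to 1 rad/s as in the lecture's plots.
def lead_phase_deg(w, alpha, tau):
    """Phase of (1/alpha)(alpha*tau*s+1)/(tau*s+1) at s = jw, degrees."""
    return math.degrees(math.atan(w * alpha * tau) - math.atan(w * tau))

def lead_mag(w, alpha, tau):
    """Magnitude of the same lead transfer function."""
    return (1.0 / alpha) * math.hypot(1.0, w * alpha * tau) / math.hypot(1.0, w * tau)

tau = 1.0
for alpha in (5.0, 10.0, 20.0):
    wm = 1.0 / math.sqrt(alpha)          # geometric mean of zero and pole
    peak = math.degrees(math.asin((alpha - 1.0) / (alpha + 1.0)))
    wz = 1.0 / alpha                     # zero location
    rise = lead_mag(wz, alpha, tau) / (1.0 / alpha)
    # peak matches the arcsin identity; magnitude rise at the zero ~1.4
    print(alpha, lead_phase_deg(wm, alpha, tau), peak, rise)
```

For alpha equals 5 the peak comes out a bit under 42 degrees, and for alpha equals 20 a bit under 65 degrees, consistent with the curves described above.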

The important thing that we exploit in this sort of a transfer function, or at least one of the things that we frequently exploit with this sort of a transfer function to improve the performance of a feedback system, is that we begin to get positive phase shift before we make an appreciable change in the magnitude characteristics.

If we look at our angle curve, we notice that there is appreciable positive phase shift out of all three of the characteristics shown at the location of the zero. Here's the zero location for alpha equals 20. And we actually have more than 40 degrees of positive phase shift at that frequency.

Here's the zero location for alpha equals 10. Once again, we have just about 40 degrees of positive phase shift at that frequency.

Here's the zero location for alpha equals 5. And we have somewhere on the order of 35 degrees of positive phase shift in that case. So we begin to get a very significant angular contribution. Remember that we're oftentimes interested in getting phase margins in systems on the order of 45 degrees. And so 45 degrees of positive phase shift can make a very important improvement in the performance of a system.

Here we have transfer characteristics that give us that sort of phase shift at the location of the zero. Well, let's look back at the magnitude characteristics for a moment.

And again, let's examine the magnitude characteristics at the location of the zero. We notice that at the location of the zero, there's only about a factor of 1.4 increase in the magnitude. Here again is the zero location for alpha equals 20. The magnitude is increased from its low frequency value by about a factor of 1.4 at the zero location.

Similarly, here, here. So the important part about the lead transfer function, or at least one of the properties of a lead type transfer function that we're able to exploit, is the fact that the phase shift becomes positive by a very significant amount-- at least in terms of improving the performance of the feedback system-- before there's much change in the magnitude characteristics. And we can use that. And the way we use it is as follows.

Here's our magnitude characteristics for our uncompensated system. Let's locate the zero, at least one of the things we can do. And of course, there are many ways of using these kinds of transfer functions to improve performance. But one of the things we might do is locate the zero of the lead network somewhere in the vicinity of crossover for the uncompensated system. We might locate it somewhere out in here. Maybe like so. So our magnitude characteristics would begin to go up in this region.

What we depend on is the fact that, in this region, before the magnitude has increased appreciably in the way that would lead us to much higher crossover frequencies, we get appreciable positive phase shift out of our lead type transfer function. And so the hope would be that the modification made to the angle is something like so. Eventually, it drops off again. But our hope is that we can get enough positive phase shift in the vicinity of crossover. Notice here crossover has increased just a little bit. But the hope is that we've gotten appreciable positive phase shift in the vicinity of crossover, so that we improve the phase margin of our system, and consequently its relative stability. So that's one kind of modification we might like to make to the loop-transmission of our feedback system.
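That strategy can be sketched numerically. Here we assume a hypothetical loop transmission with a very small phase margin, and add a lead pair whose zero sits near the uncompensated crossover; we also assume the amplifier gain is raised by alpha to offset the network's low-frequency attenuation, so the loop simply acquires the zero-pole pair. All values are illustrative.

```python
import cmath
import math

# Hypothetical loop: poles at 1, 50, 50 rad/s (illustrative values).
A0F0 = 100.0
TAUS = [1.0, 0.02, 0.02]
# Lead pair: zero at 45 rad/s (near the uncompensated crossover), pole at 450.
ALPHA, TAU = 10.0, 1.0 / 450.0

def loop(w, lead=False):
    """a(jw)f(jw), optionally with the lead zero-pole pair included."""
    val = complex(A0F0)
    for tau in TAUS:
        val /= complex(1.0, w * tau)
    if lead:
        val *= complex(1.0, w * ALPHA * TAU) / complex(1.0, w * TAU)
    return val

def phase_margin(lead):
    """Bisect for the unity-magnitude frequency, then read the angle there."""
    lo, hi = 1e-2, 1e4
    for _ in range(80):
        mid = math.sqrt(lo * hi)
        if abs(loop(mid, lead)) > 1.0:
            lo = mid
        else:
            hi = mid
    wc = math.sqrt(lo * hi)
    return 180.0 + math.degrees(cmath.phase(loop(wc, lead)))

pm_u = phase_margin(False)
pm_c = phase_margin(True)
print(pm_u, pm_c)   # roughly 1 degree -> roughly 33 degrees
```

Crossover moves up only modestly, while the phase margin improves dramatically, which is the tradeoff described above.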

A second possibility is to add a transfer function to the loop-transmission that's called a lag type transfer function. And here's the expression for a lag transfer function.

Again, the output over the input is a combination this time of a pole at a relatively lower frequency than the zero. We have this sort of a pole zero diagram for a lag type transfer function. The pole occurs at a frequency 1 over alpha tau. The zero at a frequency 1 over tau. Again, assuming alpha's a number larger than 1, we get a pole located at a lower frequency than the zero.

Let's look at the characteristics of that sort of a network. Or first, let's look at how we might implement that kind of a network. And then, let's look at its characteristics.

Here's the kind of a network that will give us a lag type transfer function. We have, again, two resistors, a single capacitor. At sufficiently low frequencies, where the capacitor is an open circuit, the output over the input is simply 1. Whereas, at higher frequencies, we get an attenuation R2 over R1 plus R2. The net result is that we get an input to output transfer function of the form I showed on the blackboard, tau s plus 1 over alpha tau s plus 1.

Again, by doing a little bit of arithmetic, we're able to identify alpha as being the reciprocal of the attenuation ratio. Alpha is R1 plus R2 over R2. And tau is simply R2C for this particular network topology.
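The same kind of element-value check works for the lag network. Here we assume the usual topology, R1 in the series arm and R2 in series with the capacitor as the shunt arm, again with illustrative values:

```python
import cmath

# Lag network check (illustrative values): series arm R1, shunt arm R2 + C.
# With alpha = (R1+R2)/R2 and tau = R2*C, the circuit realizes
# (tau*s + 1)/(alpha*tau*s + 1).
R1, R2, C = 9e3, 1e3, 1e-6

alpha = (R1 + R2) / R2      # 10 for these values
tau = R2 * C                # zero time constant

def vout_over_vin(w):
    """Direct circuit analysis of the divider."""
    z2 = R2 + 1.0 / complex(0.0, w * C)     # R2 in series with C
    return z2 / (R1 + z2)

def lag(w):
    """Normalized lag form with the identified alpha and tau."""
    s = complex(0.0, w)
    return (tau * s + 1.0) / (alpha * tau * s + 1.0)

for w in (1.0, 100.0, 1e4):
    print(w, abs(vout_over_vin(w) - lag(w)))   # agreement at all frequencies
```

At low frequencies the capacitor is open and the gain is 1; at high frequencies it is a short and the gain is R2/(R1+R2), or 1/alpha, matching the description above.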

As the name implies, a lag network gives us lagging or negative phase shift at all frequencies. And let's look at the phase shift characteristics of a lag network.

For a lead network, we said that we had this sort of characteristic, positive phase shift at all frequencies. Well, a lag network basically gives us characteristics like that. This form is a little bit confusing, since we have a little problem with the axes. But it emphasizes the symmetry between a lead and a lag network, and is actually the correct form if we simply modify the axes.

Here is the actual angle characteristic for a lag network. Again, for several different values of alpha. We have negative phase shift at all frequencies, of course, running from 0 degrees down and so forth.

For alpha equals 5, we hit a maximum negative phase shift of, again, about minus 42 degrees. The same amount as we got in the lead network, only the opposite sign and so forth. So this is a curve completely identical to the one we got for the lead network, except flipped over.
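A quick numerical check of that claim for alpha equals 5, with tau normalized to 1 (an assumed normalization, matching the spirit of the plotted curves):

```python
import numpy as np

alpha, tau = 5.0, 1.0
w = np.logspace(-2, 2, 2001)                     # frequency grid, rad/s
lag = (1j*w*tau + 1) / (1j*w*alpha*tau + 1)      # lag transfer function
phase = np.degrees(np.angle(lag))

print(phase.min())   # most negative phase, about -41.8 degrees
```

The extremum sits between the pole and the zero, mirroring the lead network's peak of the same magnitude but opposite sign.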

These are all normalized to a common pole location. And we get the characteristics as we've shown them. Let's look at the corresponding magnitude characteristics.

And here are the magnitude characteristics for the lag network. Again, all normalized in this case to a pole location of 1. The zero comes out at progressively higher frequencies for increasing alpha.

The way we exploit this network, or at least one way that we can do it, is as follows. We try to use the lag network to push down the magnitude over a range of frequencies. In other words, we locate the lag type transfer function, locate the pole and the zero somewhere below crossover. And we recognize because of the attenuation characteristics associated with the lag type transfer function, that will push the magnitude curve down. But not at dc. It doesn't have the same effect as simply lowering the a0 f0 product. It pushes the magnitude down at frequencies beyond the pole associated with a lag type transfer function.
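That distinction, attenuation beyond the pole-zero pair but unity gain at dc, is easy to verify numerically. Again, tau equals 1 and alpha equals 10 are assumed illustrative values:

```python
alpha, tau = 10.0, 1.0

def H(w):
    """Lag transfer function evaluated at frequency w (rad/s)."""
    return (1j*w*tau + 1) / (1j*w*alpha*tau + 1)

print(abs(H(1e-6)))   # ~1 at dc: low-frequency desensitivity preserved
print(abs(H(1e6)))    # ~1/alpha: magnitude pushed down beyond the pair
```

So unlike simply lowering the a0 f0 product, which costs us gain everywhere, the lag network pays its attenuation only above the pole frequency.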

So the modification that we might make in the case of lag, is to possibly put the pole in here. The curve would drop down. We then add the zero, and we eventually end up with our compensated curve going parallel to the original curve.

What happens to the angle under those conditions? Well, here's the pole and here's the zero associated with our lag transfer function. And we recall from the plot of the angle characteristics of the lag function and the lead function for that matter, that the most profound effect on the angle occurs over a frequency range between the two singularities. Between the pole and the zero associated with the transfer function.

So here, in the case of a lag network placed possibly in this range, we'd anticipate that we'd put a negative lump in the phase characteristics. We'd do something like this. But the hope is that we locate the lag network so that the angle has very nearly returned to its uncompensated value before crossover. Notice that with a lag network, we've lowered crossover back to this frequency.

And if we can locate the lag network at a low enough frequency, we can ensure that the residual negative phase shift associated with the lag network has become very small. And as I've shown in this construction, we end up improving the phase margin of the system from its original uncompensated value to this value by simply pushing down the magnitude curve causing the crossover frequency to lower. And if we've located things properly, so we don't have a great deal of residual negative phase shift from the lag network, we improve the phase margin of the system in that way.
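The whole procedure can be sketched end to end. The loop below, an a0 f0 of 100 with poles at 1, 10, and 100 radians per second, is an assumed example and not the system on the blackboard, and the lag values alpha equals 10 and tau equals 1.25 are likewise assumed; the point is that placing the lag zero roughly a decade below the new crossover lowers crossover and leaves only a small residual negative phase there.

```python
import numpy as np

def loop(w):
    """Assumed uncompensated loop: a0*f0 = 100, poles at 1, 10, 100 rad/s."""
    s = 1j * w
    return 100 / ((s + 1) * (0.1*s + 1) * (0.01*s + 1))

def lag(w, tau=1.25, alpha=10.0):
    """Assumed lag compensator: zero at 0.8 rad/s, pole at 0.08 rad/s."""
    s = 1j * w
    return (tau*s + 1) / (alpha*tau*s + 1)

def phase_margin(L):
    """Find crossover on a dense grid and return (margin in degrees, wc)."""
    w = np.logspace(-2, 3, 200001)
    mag = np.abs(L(w))
    wc = w[np.argmin(np.abs(mag - 1))]          # unity-magnitude frequency
    return 180 + np.degrees(np.angle(L(wc))), wc

pm0, wc0 = phase_margin(loop)
pm1, wc1 = phase_margin(lambda w: loop(w) * lag(w))
print(pm0, pm1)   # crossover drops and the phase margin improves markedly
```

The dc loop magnitude is untouched at 100, so the low-frequency desensitivity survives even though crossover has been pulled down.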

While we've lowered desensitivity over a range of frequencies here, we haven't lowered desensitivity back in this region. So we're able to use the lag network to improve or maintain our desensitivity at lower frequencies. And yet, improve the stability of the system. And so that's the way we typically use a lag type transfer function to improve the performance of a feedback system.

One final thing that's useful for design purposes when we use these kinds of networks is the maximum phase shift that we get out of a network, or a transfer function. The maximum angle associated with either a lead or a lag transfer function-- the only difference is the sign, of course. We notice that for a lead type transfer function, we get positive phase shift. For a lag type, we get negative phase shift. The maximum angle is simply the arcsine of alpha minus 1 over alpha plus 1 for either the lead or the lag type transfer function.
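That design formula is one line of code. For alpha equals 5 it reproduces the roughly 42 degree extremum seen on the phase curves; the alpha equals 10 case is added as a second illustrative point:

```python
import math

def max_phase_deg(alpha):
    """Maximum phase magnitude of a lead or lag pair: arcsin((a-1)/(a+1))."""
    return math.degrees(math.asin((alpha - 1) / (alpha + 1)))

# The extremum occurs at the geometric mean of the pole and zero frequencies
print(max_phase_deg(5))    # about 41.8 degrees
print(max_phase_deg(10))   # about 54.9 degrees
```

In design, the formula is typically used in reverse: the phase needed at crossover fixes the minimum alpha, and the remaining freedom goes into placing tau.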

We also recognize that we have to have more degrees of freedom than simply adding these transfer functions. Very frequently, we combine a lead or a lag type transfer function with a modification to the a0 f0 product. And that combination is a very, very powerful way to improve or to modify the performance of a feedback system.

Next time I'd like to look at how we use these ideas to compensate an actual feedback system. We'll do a demonstration where we take an amplifier with given characteristics, and see how we use these kinds of concepts to improve the performance of the feedback system that includes that amplifier. Thank you.