James K. Roberge: 6.302 Lecture 06


[MUSIC PLAYING]

JAMES K. ROBERGE: Hi. Today, I'd like to continue our discussion of root locus techniques. And in particular, I'd like to start by applying it to the stability analysis of several commonly occurring loop transmissions. One of the things that always improves the stability of a system or generally improves the stability of a feedback system is to add a zero to the loop transmission. And we'll see a number of different ways of looking at the reason for this as we go through the course.

But a good general rule is, if there is a way to add a zero to the loop transmission, that generally improves the stability of the resultant feedback system. And there may be very real practical limitations to our ability to do this. And again, we'll have more to say about that later on during the course.

Let's start by looking at a second order system. In other words, assume that we have a loop transmission that has two poles associated with it. We've looked at that configuration before. And recall that we found out that, if we do a root locus diagram for that system, the two branches come together on the real axis, split off, and then further increases in A-naught, F-naught result in a progressively smaller value for the damping ratio associated with the resultant closed-loop, complex conjugate pole pair.

Now consider what happens if we add a zero to the loop transmission. Here we assume that we have, initially, poles at s equals minus 1 and s equals minus 2. And under those conditions, as I say, we're familiar with the root locus diagram. The two branches come together, split off, and you get progressively smaller damping ratios.

Suppose we add a zero possibly at s equals minus 3. How does that change the root locus diagram? Well again, we can apply the rules we know. And when we do that, we find out the branches exist on the real axis here, where there's a single pole to our right on the real axis, and here, where there are two poles and one zero to our right. Again, an odd number of total poles and zeroes to our right.

If we think about this for a while, we realize that there must be a breakaway somewhere in here. The only thing that can happen is to have a point of reentry somewhere on the negative real axis, so that one branch can terminate on this zero in the finite s-plane and the other branch terminates at a zero, at infinity actually, along the negative real axis. And so we conclude that there has to be a reentry point for the branches on the negative real axis, possibly here.

And again, if we think a little bit about how the branches can possibly get from this location, they start here, go in the indicated direction for increasing A-naught, F-naught, and eventually, one branch will be heading toward this zero. Another branch heading toward the zero at minus infinity. The only thing that can happen is for these branches to circle around in the s-plane, eventually reaching this point.

If, furthermore, we keep track of the trigonometry involved-- I think there's a proof from high school geometry that basically does this-- we find out that what we need for a trajectory, in order to keep the angle condition satisfied, is in fact a circle that's centered on the zero. So we'd anticipate trajectories like this, with this being the direction of increasing A-naught, F-naught.

We can go through the arithmetic to determine the breakaway points. If we do this for this example, we find out that this breakaway occurs at minus 1.6. Because the diagram is symmetrical and centered on the zero, why, we can conclude that this breakaway, or this reentry point, must be at minus 4.4.
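That arithmetic is easy to check numerically. Here is a short sketch, assuming the loop transmission takes the form K times s plus 3 over s plus 1 times s plus 2, with K standing for the A-naught, F-naught product; the closed-loop poles are the roots of q of s plus K p of s, and the complex branches do indeed lie on a circle centered on the zero, with breakaway and reentry at minus 3 plus or minus the square root of 2 (about minus 1.59 and minus 4.41, the values quoted here rounded to one decimal place):

```python
import numpy as np

# Loop transmission L(s) = K (s + 3) / ((s + 1)(s + 2)), with K = A0*F0.
# Closed-loop poles are the roots of q(s) + K p(s) = 0.
q = np.poly([-1.0, -2.0])          # s^2 + 3s + 2
p = np.poly([-3.0])                # s + 3

def closed_loop_poles(K):
    # q has degree 2 and p has degree 1, so pad p before adding coefficients
    return np.roots(q + K * np.concatenate(([0.0], p)))

# Complex branch: every complex closed-loop pole sits on a circle of
# radius sqrt(2) centered on the zero at s = -3.
for K in np.linspace(0.5, 5.0, 10):
    for s in closed_loop_poles(K):
        if abs(s.imag) > 1e-9:
            assert abs(abs(s + 3) - np.sqrt(2)) < 1e-9

# Breakaway and reentry points: -3 +/- sqrt(2), roughly -1.59 and -4.41
print(-3 + np.sqrt(2), -3 - np.sqrt(2))
```

Because the branches are symmetric about the zero, finding one breakaway point fixes the other, which is exactly the shortcut used in the lecture.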

So then, by doing really only one numerical calculation, which is the breakaway point, and recognizing that the trajectory is a circle centered on the zero, we're able to sketch the complete root locus diagram. The important thing, though, is that the addition of the zero has greatly improved the stability of the system compared to what happens when we don't have a zero. Without the zero, recall, the branches would have simply gone off this way. And as we increased A-naught, F-naught, we'd get progressively smaller damping ratios.

With the zero, as soon as they leave the axis, the closed-loop poles begin to pick up an increasing real part. Here they have a considerably higher damping ratio for a given value of A-naught, F-naught than they would have without the zero. Again, further increases in A-naught, F-naught actually further increase the damping of the pole pair.

If we, for example, turned up A-naught, F-naught to this point, we'd have quite a high value for damping ratio, a damping ratio of maybe 0.8 or something like that. And still further increases in A-naught, F-naught result in two real axis poles. And so now we have the overdamped situation. We have no overshoot at all in the step response, no peaking in the frequency response.

So the important point is that adding the zero, assuming we're able to do it in view of other system constraints, allows us to get much higher values of desensitivity, much larger A-naught, F-naught products without running into any stability problem. And that theme continues for many, many other loop transmission pole-zero configurations.

Here I've shown another one where we assume we start with three real axis poles. Again, we did a calculation concerning this last time. And the numbers, of course, are dependent on the exact locations of the real axis poles. But we did a calculation where we had one pole at minus 1, one at minus 2, one at minus 10. And we found out that we could get only very small amounts of desensitivity before we began to run into a stability problem.

For example, we found out the damping ratio of the closed-loop pole pair closest to the origin got as low as 0.5 for an A-naught, F-naught product of only 2.2. So here, once we had increased the desensitivity of the system to 2.2, we find that we have a damping ratio of 0.5. Further increases in desensitivity result in even smaller values of damping ratio. And in that case, we found out that, if we increased A-naught, F-naught to 19.8, the system had poles in the right-half plane.
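Those two quoted numbers can be verified with a few lines of numerical work. This sketch assumes the loop transmission is written in the time-constant-normalized form A-naught, F-naught over s plus 1 times 0.5 s plus 1 times 0.1 s plus 1, so the characteristic equation becomes s plus 1 times s plus 2 times s plus 10 plus 20 times A-naught, F-naught:

```python
import numpy as np

# Assumed normalized loop transmission:
#   L(s) = A0F0 / ((s + 1)(0.5 s + 1)(0.1 s + 1))
# Characteristic equation: (s + 1)(s + 2)(s + 10) + 20 * A0F0 = 0.
q = np.poly([-1.0, -2.0, -10.0])   # s^3 + 13 s^2 + 32 s + 20

def dominant_pair(K):
    # closed-loop pole with positive imaginary part (dominant pair member)
    roots = np.roots(q + np.array([0.0, 0.0, 0.0, 20.0 * K]))
    return [s for s in roots if s.imag > 0][0]

# A0F0 = 2.2: the dominant pole pair has a damping ratio of about 0.5
s = dominant_pair(2.2)
zeta = -s.real / abs(s)
assert abs(zeta - 0.5) < 0.01

# A0F0 = 19.8: the pair reaches the imaginary axis -- edge of instability
s = dominant_pair(19.8)
assert abs(s.real) < 1e-6
```

At 19.8 the characteristic polynomial factors exactly as s plus 13 times s squared plus 32, which is why the Routh-style calculation in the earlier lecture landed on that gain.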

Well consider what happens if we do the same thing, start out with the same pole locations. And suppose we had poles at minus 1, minus 2 and minus 10 and then added a zero. We could determine the root locus initially by ignoring the remote pole. And under those conditions, we'd get a root locus picture very much like we had above.

So we conclude that, to the extent we were able to ignore the third pole, our root locus diagram for this situation would look something like so. And a circle between these two points with the arrows indicating the direction of increasing A-naught, F-naught. Simultaneously, there's a branch going out to the left. Here's a point of reentry on the negative real axis. One branch heads toward this zero. A second branch heads out to the left, at least to the extent that we're able to ignore the third pole.

Well we can now do some further calculations. In this particular system, there's a net excess of poles over zeroes of two. We have three poles and one zero. And so we conclude that, for very large values of A-naught, F-naught, there has to be an asymptote located at angles of plus and minus 90 degrees with respect to the real axis. We can also calculate the intersection of those asymptotes on the real axis, if we know the exact location of the poles and the zero.

Let's suppose that the asymptotes are somewhere in here. And the only way we can get there is to have the branch that starts, for small values of A-naught, F-naught, at this pole move to the right.

Eventually there's a breakaway point. And we'll find that that occurs somewhere to the right of the intersection of the asymptotes. This pair of branches then breaks off and actually moves back a little bit and does something like this, asymptotically.

We can argue the reason for this pair of branches moving away by considering the average distance rule. Here we notice that, as we increase A-naught, F-naught, this branch is still moving to the right. Consequently, in order to keep the average distance of the branches from the imaginary axis fixed, these two branches have to move somewhat to the left. Once again, if we had exact numbers for all of these things, we could do a more precise calculation. But at least we see the general behavior.
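Both the asymptote calculation and the average distance rule are easy to see numerically. The lecture doesn't pin down where this zero sits, so the sketch below assumes a zero at s equals minus 3 purely for illustration; the asymptote intersection is the sum of the poles minus the sum of the zeros, divided by the pole-zero excess, and because that excess is two, the sum of the closed-loop pole positions never changes as the gain varies:

```python
import numpy as np

# Poles at -1, -2, -10 as in the lecture; the zero location isn't
# specified there, so s = -3 is an assumed value for illustration.
poles = [-1.0, -2.0, -10.0]
zeros = [-3.0]
q = np.poly(poles)                 # s^3 + 13 s^2 + 32 s + 20
p = np.poly(zeros)                 # s + 3

# Asymptote intersection: (sum of poles - sum of zeros) / (n - m)
sigma_a = (sum(poles) - sum(zeros)) / (len(poles) - len(zeros))
assert sigma_a == -5.0             # with n - m = 2, angles are +/- 90 degrees

# Average distance rule: with at least two more poles than zeros, the
# gain never touches the s^2 coefficient, so the sum of the closed-loop
# pole locations stays fixed at the sum of the open-loop poles.
for K in [0.1, 1.0, 10.0, 100.0]:
    roots = np.roots(q + K * np.concatenate(([0.0, 0.0], p)))
    assert abs(roots.sum().real - sum(poles)) < 1e-6
```

So when one branch moves right along the real axis, the complex pair is forced left by exactly the compensating amount, which is the behavior being sketched on the board.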

There's another loop transmission that's of interest that has the same basic form as this one. Once again, we start with two poles relatively near the origin and a third pole further out on the axis. We then also have a zero. This time we locate the zero relatively further out. And we'd expect considerably different behavior. To the extent that we can consider the pole and the zero remote quantities, we'd expect we can construct this portion of the root locus ignoring these higher-frequency poles and zeroes.

And under those conditions, we'd expect to have behavior something like so. However, we also know that this part of the real axis has to include a branch. And in particular, there's a branch that goes from this pole to this zero. Because we have a branch moving to the right, once again, the average distance rule holds for this case. Remember, the constraint necessary to make the average distance rule hold is that we have at least two more poles than zeroes.

Since that's satisfied in this case, why the average distance of the closed-loop pole locations from the imaginary axis has to remain fixed independent of A-naught, F-naught, since this branch is moving to the right, why these have to move back somewhat and eventually approach asymptotes possibly in here. But again, the thing that's happened via the addition of the zero in both cases is that we're able to get larger amounts of A-naught, F-naught for a given damping ratio associated with a complex conjugate pole pair here in contrast to a three-pole case, remember, where the poles simply went into the right-half plane.

When we add the zero, we end up with a system that will never be unstable. And we can get, for a given damping ratio associated with the dominant pole pair, which is now the one corresponding to this branch or these branches, a much larger A-naught, F-naught than we could without the zero. Similarly here. Because these branches move back, if we do an exact calculation, we find out that we can get greater desensitivity without paying a penalty of relative lack of stability.

It's interesting to look at the dividing line between these two apparently very different forms of behavior. We start out with two poles, a zero and then a third pole in both cases. And yet we have root locus diagrams that look very dramatically different, one of them having the circular behavior, all of the branches lying for a while on the real axis.

And in this case, we never have all of the branches lying on the real axis except for very small values of A-naught, F-naught. They look like very, very dissimilar kinds of pictures, yet we have to have some sort of a gradual change from one to the other as we move, for example, the location of this zero. I've tried to show things with the poles lined up. And the only real difference between the pictures, or between the root locus diagrams, is the location of the zero.

And we'd expect that there isn't a very dramatic, abrupt change in, at least, the behavior of the system. There may be some quirk in the mathematics that shows up in a funny way in the root locus diagram. But we'd think, as we made small changes in the location of this zero, the system behavior couldn't possibly change dramatically. The world is much more forgiving than that.

And usually, if you do a calculation that shows that a very, very small change in some system parameter gives rise to very large changes in performance, it means you've done something wrong. And so what we'd like to do is look at how we might get from one mode to the other. Let's start to move this zero in. And as we do that, the root locus diagram is as shown here.

As we move this zero closer and closer to the low-frequency poles, we get closer and closer to the circular conditions. So we go from this situation, to this situation, finally to a picture like so where the branches come down and almost touch the real axis before finally breaking away and maybe heading to asymptotes somewhere in here.

Now let's move this zero just a little bit further in. And then what happens is that the trajectories of the branches come down like so. They hit the real axis at right angles, because they always hit the real axis at right angles, and make their way, one of them inward, one out, where it now meets this one and, once again, branches away, finally, at right angles.

And so there's the very gradual transition from this case, where this pair of branches never gets back to the real axis, and this case, where the pair of branches does get back to the real axis. But again, the only difference is whether, for certain values of A-naught, F-naught, we have two branches somewhere in this general region or whether we have two out here. And again, since the distance in the s-plane between those two situations is very, very small, any measurements we made comparing those two systems would show that the behavior was very, very nearly identical. And so, again, there is a gradual transition from this situation to this one, which is what we'd anticipate.

There is an extension of the root locus approach that we've been looking at. Incidentally, we'll have considerably more to say about this later on when we begin to look at examples of actual systems. We, to this point, have been dealing simply with made-up kinds of loop transmissions. We'll certainly get more complicated ones, and ones that correspond more directly to actual physical systems, when we begin to use physical systems as the basis of our analytic efforts or as a basis of our design efforts.

There's a rather interesting extension of root locus techniques that's worth mentioning. Let's think back to what we're really doing in a root locus development. We start out with the poles and zeroes of the loop transmission and we have a single degree of freedom. We can change A-naught, F-naught, which is the magnitude of the low-frequency loop transmission. And what we get is the way the closed-loop poles of the system vary as we change the A-naught, F-naught product.

Well one can extend this analysis and the resultant extension is called root contour or the resultant diagram is called a root contour diagram. And again, the thing to focus on is that the technique allows us really to look at the variation in any single system parameter. When we look at a root locus diagram, we're varying A-naught, F-naught. But suppose we have some other system parameter, and we're interested in how the closed-loop poles change location as a function of variations in that parameter. Well we can pursue the following line of reasoning.

When we do a root locus analysis, we're starting out with a loop transmission, which is A-naught, F-naught times a ratio of polynomials. In particular, this polynomial in factored form indicates the zeroes of the loop transmission, whereas q of s in factored form indicates the poles of the loop transmission. And when we construct a root locus diagram, we're simply arguing that 1 minus the loop transmission, which is 1 plus A-naught, F-naught times our ratio of polynomials, is equal to zero. Or correspondingly, the equation that we're solving when we construct a root locus diagram, if we simply multiply through by q of s, is q of s plus A-naught, F-naught p of s equals zero.

Now suppose we have some system which has some other single parameter that we're interested in varying and observing the change in system performance. For example, we might have a time constant, a single time constant in the system, that we could change by controlling a capacitor size or something like that. We might have a location of a zero or a location of a pole or the inertia associated with a rotating member in a servo mechanism. But in any case, suppose there's a single parameter.

If that quantity enters into the system equations in a linear way, then, in fact, we can write a resulting system equation or resulting characteristic equation in the following way. We can have some q prime of s plus tau times p prime of s. In this development I'm assuming that tau is the quantity that we're going to vary. So when we write the characteristic equation, we'd find certain terms in the characteristic equation that were not multiplied by tau. We'll simply associate those terms or collect those terms together and call the resultant polynomial q prime of s.

Similarly, if the parameter of interest enters the system equations in a linear way, it will multiply, scale, certain of the terms in the characteristic equation. Once again, we'll collect all of those terms, factor out the tau, and call the resultant polynomial that multiplies tau p prime of s. Well now we have exactly the same form as we used in our root locus diagram.

Here we have a polynomial plus the variable that's going to change as we construct the root locus diagram times another polynomial. Here we have a polynomial plus the variable we're going to change to construct the root locus diagram times another polynomial being equal to zero. So once we've made this identification and factored out the variable parameter tau, we can consider a system which has poles that are determined by q prime of s, zeroes that are determined by p prime of s, and we can use exactly the same techniques for the construction now of a root contour diagram as we did for the construction of our root locus diagram.

The analytic difficulty that we run into when we do this is one that involves the difficulty of factoring the polynomials. Again, if we have machine computation, that may be a fairly simple task. But usually, when we do a root locus diagram, the polynomials p of s and q of s appear in factored form. And so we don't have to go through the drudgery of doing that factoring ourselves.

Normally, when we try to construct a root contour diagram, we find that the polynomials q prime of s and p prime of s, unfortunately, don't appear in factored form. And so there's a computational difficulty which involves reducing those polynomials. Once we've gotten over that hurdle, the construction of a root contour diagram is no more difficult than the construction of a root locus diagram.
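With machine computation that hurdle disappears. As a minimal sketch, assume a made-up system whose characteristic equation, with the terms multiplied by tau collected, is q prime of s plus tau times p prime of s with q prime equal to s plus 5 and p prime equal to s squared plus s; sweeping tau traces the root contour, with branches starting at the root of q prime and ending on the roots of p prime:

```python
import numpy as np

# Hypothetical system for illustration: characteristic equation
#   q'(s) + tau * p'(s) = 0,  with  q'(s) = s + 5,  p'(s) = s^2 + s.
qp = np.array([0.0, 1.0, 5.0])     # s + 5, padded to degree 2
pp = np.array([1.0, 1.0, 0.0])     # s^2 + s

def contour_roots(tau):
    # closed-loop singularities for this value of the parameter tau
    return np.roots(qp + tau * pp)

# Small tau: one root starts near the "pole" of the equivalent
# system, i.e. the root of q'(s) at s = -5.
assert min(abs(s + 5) for s in contour_roots(1e-4)) < 1e-2

# Large tau: the roots approach the "zeros", the roots of p'(s) at 0 and -1.
big = contour_roots(1e4)
assert min(abs(s) for s in big) < 1e-2
assert min(abs(s + 1) for s in big) < 1e-2
```

The factoring of q prime and p prime here is done by `np.roots` at each value of tau, which is exactly the computational step the transcript says is the only real added difficulty.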

Finally, I'd like to look, in our continuing discussion of root locus diagrams, at the zeroes, the closed-loop zeroes of a feedback system. We started our discussion of root locus techniques by arguing that we were only going to look for the pole locations, at least initially, the closed-loop pole locations. And as a consequence of that, we don't really have to worry about how the loop transmission is distributed between the forward path and the feedback path. However, when we're interested in system zeroes, closed-loop zeroes, it becomes important how the loop transmission is distributed. So let's look at that problem for a moment.

We can write the expression for a of s, the closed-loop transfer function. And when we do that, of course, we get our familiar form, the forward path transmission small a of s over 1 minus the loop transmission, which is 1 plus a of s f of s in our standard form. And we can now ask, where are the zeroes of a of s? In other words, for what value of s does a of s equal zero?

Well, if we look at the expression for capital A of s, we can argue that it goes to zero either when the numerator goes to zero, or when the denominator becomes infinite with the numerator remaining finite. Well the first case happens when small a of s goes to zero. In other words, when small a of s goes to zero, the numerator goes to zero, however, the denominator remains finite, or remains 1, actually, when small a of s goes to zero. Consequently, capital A of s goes to zero when little a of s goes to zero.

So we conclude that the zeroes, or one possible location for the zeroes of capital A of s, is at zeroes of small a of s, zeroes of the forward path transmission. The other possibility is to have the denominator go to infinity while the numerator remains finite. Well that occurs at poles of f of s.

In other words, at a pole of f of s, this term in the denominator becomes arbitrarily large, or actually infinite. But unless there's a corresponding pole in a of s, the numerator remains bounded. Consequently, this expression goes to zero. So the other possibility to have a zero of the closed-loop transfer function is to be at a pole of the feedback transfer function, a point in the s-plane where f of s equals infinity, where the magnitude of f of s equals infinity.
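That statement can be checked with a little polynomial arithmetic. Here is a sketch with made-up singularities: a forward path a of s with a zero at minus 4 and a feedback path f of s with a pole at minus 6; the closed-loop zeroes come out at exactly those two locations:

```python
import numpy as np

# Illustrative, made-up singularities:
#   a(s) = 10 (s + 4) / (s + 1),   f(s) = 1 / (s + 6).
# With a = Na/Da and f = Nf/Df,
#   A(s) = a / (1 + a f) = (Na * Df) / (Da * Df + Na * Nf).
Na, Da = 10.0 * np.poly([-4.0]), np.poly([-1.0])
Nf, Df = np.array([1.0]), np.poly([-6.0])

num = np.polymul(Na, Df)
den = np.polyadd(np.polymul(Da, Df), np.polymul(Na, Nf))

# Closed-loop zeros = zeros of a(s) together with poles of f(s)
zeros = sorted(np.roots(num).real)
assert np.allclose(zeros, [-6.0, -4.0])
```

Notice the closed-loop numerator is Na times Df, so its roots are, structurally, the zeroes of the forward path and the poles of the feedback path, independent of the gain.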

While we can determine the stability of a feedback system and, in fact, tell how close to instability we are and those sorts of measures of relative stability simply by looking at the closed-loop pole locations, clearly, an overall understanding of the response of the system requires the zero locations as well. And I'd like to do an example that shows the dramatically different performance we can get from a single system, in other words, a system that has the same pole locations depending on the location of its zeroes. And the example that I'd like to use is a frequency-selective amplifier that uses a network known as a Twin-T.

The basic topology of a Twin-T network is as shown. There are three capacitors and three resistors. And we can then evaluate the input to output transfer function for that sort of a network. It's possible, by appropriate choice of the relative values of the capacitors and resistors, to get a transfer function of the following form.

We can get a complex pair of zeroes. In fact, you can get a pair of zeroes out of the Twin-T network that really can be located anywhere in the s-plane. In fact, we can locate them in the right half of the s-plane if we care to. So we could conceivably, by proper component-value selection, get a complex conjugate pair of zeroes on the imaginary axis. In other words, s squared plus 1, if we normalize this to one radian per second.

If we look at the topology of the network, we recognize that, at both low and high frequencies, the gain of the network has to be unity. Notice that, at low frequencies where the capacitors are open circuits, we have a direct path between input and output by the resistive crossbar of the T. At high frequencies, we have a direct path between input and output via the capacitive crossbar of the T. Consequently, the number of poles and zeroes in the transfer function has to be equal so that the low- and high-frequency gains are both bounded. And a possible pole location would be two real axis poles at s equals minus 1.

Actually, I've taken a few liberties with this transfer function. There are three energy storage elements in the system. And consequently, we'd anticipate that there are three poles in the transfer function. Again, by the argument that I used earlier, that the number of poles and zeroes has to be the same so that the high-frequency gain is a constant with increasing frequency, why, we'd have to have three zeroes. If, once again, we select component values correctly, the third pole and the third zero can be made to cancel. In that case, we get a transfer function which includes two zeroes divided by two poles.

The only remaining liberty is that it's not possible to get the two poles exactly coincident on the real axis. There's a loading problem that ensures that the location of those poles differs slightly, even for the most optimistic selection of component values. But that minor departure from reality really doesn't influence the derivation, the development, very much, so I'll sweep that under the rug.
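For reference, the standard symmetric Twin-T, with series arms R, R and C, C and shunt legs of R over 2 and 2C, driven from a low impedance and buffered at the output, has the well-known transfer function s squared tau squared plus 1 over s squared tau squared plus 4 s tau plus 1, with tau equal to RC. A short check of its properties, including the non-coincident real poles that the lecture is sweeping under the rug:

```python
import numpy as np

# Standard symmetric Twin-T (R, R series arms and C, C series arms,
# with R/2 and 2C shunt legs), driven from a low impedance and
# buffered at the output. With tau = RC, the known result is
#   T(s) = (s^2 tau^2 + 1) / (s^2 tau^2 + 4 s tau + 1):
# imaginary-axis zeros at +/- j/tau with real-axis poles.
def T(s, tau=1.0):
    return (s**2 * tau**2 + 1) / (s**2 * tau**2 + 4 * s * tau + 1)

assert abs(T(1j)) < 1e-12                 # complete null at omega = 1/tau
assert abs(T(0) - 1) < 1e-12              # unity gain at DC
assert abs(T(1j * 1e6)) > 0.999           # unity gain at high frequency

# The two poles are real but not coincident, at s = -2 +/- sqrt(3):
# this is the loading effect the lecture mentions.
assert np.allclose(sorted(np.roots([1.0, 4.0, 1.0])),
                   [-2 - np.sqrt(3), -2 + np.sqrt(3)])
```

So the coincident poles at s equals minus 1 in the lecture's diagram are the idealization; the real network's poles land at roughly minus 0.27 and minus 3.73, which, as the transcript says, doesn't change the development in any essential way.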

Let's suppose that we build a feedback system where we have an input here and a forward path which is frequency-independent. We'll designate it as A-naught. We then use our frequency-selective Twin-T network either as part of the forward path, if we're going to the two-output, or as the feedback path if we're going to the one-output. We have two outputs from the system. One of them includes the T in the forward path. The other one includes the T in the feedback path, if we consider the transfer function between the input and Vo1.

Well first, let's find the closed-loop poles of this system. And we can do that by our usual root locus techniques. The loop transmission has two zeroes on the imaginary axis, a complex conjugate pair of zeroes, of course. And so we start out with-- I'm sorry. I have an error in this one. Let me indicate the zeroes and poles correctly.

We start out with two zeroes in the loop transmission at s equals plus and minus j. And we have two poles in the loop transmission at s equals minus 1. The root locus behavior for that thing is fairly easy to determine. We recognize that the branches associated with these two poles jump off the axis at this point.

We can, for example, evaluate the angle that they make when they reach the zeroes. We find out, in that case, that they reach the zeroes like so. So the branches terminate in this way.

And again, if we keep track of the geometry involved, we find out that the remaining part of the diagram has to be a circle or a semi-circle. And so we get a root locus diagram that looks like so for increasing values of A-naught, F-naught in that direction. So we start out with our system, which has two loop transmission zeroes on the imaginary axis, two loop transmission poles on the negative real axis, and a variable gain A-naught. And as we start at A-naught equals zero, the closed-loop poles lie here. And as we increase A-naught, why the closed-loop poles move toward the zeroes.

Suppose we consider system behavior for a value of A-naught which results in closed-loop poles lying somewhere, possibly here. So these are now our closed-loop pole locations. And then we consider the overall system transfer function, both poles and zeroes, for the two transfer functions, either V-out 1 over Vi or V-out 2 over Vi. In the case of the transfer function V-out 1 over Vi, the closed-loop poles are, of course, those given by the earlier development. We've cranked up A-naught to this point.

The closed-loop system poles lie here. They are independent of which output we consider. They're a fundamental property of the system. And so our closed-loop poles lie here.

However, for the one-output, the T network is in the feedback path. Consequently, our closed-loop zeroes, from the development we did earlier, occur at the poles of f of s. So we get closed-loop zeroes at s equals minus 1, the poles of f of s. This, then, is the overall singularity diagram, poles and zeroes, that we get for the transfer function V-out 1 over Vi because, in that system, the Twin-T, the frequency-selective network, is in the feedback path. And we can rapidly sketch the frequency response of that system with the usual vector manipulations.

We recognize that at DC the lengths of the vectors from a pole and a zero cancel. And since there are two poles and two zeroes, we'd at least have a magnitude that we might normalize to 1 at DC. Similarly, for very large values of frequency-- again, we move out along the imaginary axis to evaluate sinusoidal steady-state response-- again, the lengths of the vectors from the poles and zeroes just about cancel. And so the response of this system for both DC and for high frequencies is the same.

The only place that the frequency response differs significantly from that value is immediately in the vicinity of a pole. If we evaluate the frequency response, for example, at this frequency, well, let's see, we get the product of the lengths of these two vectors. There are actually two in here, of course, since there are two zeroes. That's the numerator.

And we divide that by the length of this vector and by the length of this vector. But this vector becomes quite small. And consequently, in the vicinity of the closed-loop pole location we get a resonance. We argued that the low- and high-frequency gains are the same. And if we assume that our complex conjugate pole pair were located at omega equals 1, we'd anticipate getting this sort of a picture, the usual sort of resonant curve where the high-frequency gain, again, reached one.

And what happens as we, for example, lower A-naught is, of course, that the resonance becomes less pronounced, since these poles move back. We can determine that from our original root locus analysis. So we'd get this kind of behavior as we lowered A-naught. And as we increase A-naught, we get a progressively steeper resonance like so. So this is the direction of increasing A-naught.
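Under the same idealized Twin-T model as before, the one-output transfer function is A-naught times s plus 1 quantity squared over s plus 1 quantity squared plus A-naught times s squared plus 1, and a quick numerical sketch confirms the shape being drawn: unity gain at DC and at high frequency, with a resonant peak of exactly A-naught at one radian per second that sharpens as A-naught grows:

```python
import numpy as np

# One-output (Twin-T in the feedback path), idealized f(s) = (s^2+1)/(s+1)^2:
#   Vo1/Vi = A0 / (1 + A0 f(s)) = A0 (s + 1)^2 / ((s + 1)^2 + A0 (s^2 + 1))
def A1(s, A0):
    return A0 * (s + 1) ** 2 / ((s + 1) ** 2 + A0 * (s**2 + 1))

A0 = 100.0
# DC and high-frequency gains are equal, A0/(1 + A0), essentially unity.
assert abs(A1(0.0, A0) - A0 / (1 + A0)) < 1e-12
assert abs(abs(A1(1j * 1e6, A0)) - A0 / (1 + A0)) < 1e-3

# At omega = 1 the feedback path transmits nothing, so the gain is the
# full A0 -- and the peak therefore grows directly with A0.
assert abs(abs(A1(1j, A0)) - A0) < 1e-9
assert abs(A1(1j, 1000.0)) > abs(A1(1j, A0))
```

That last pair of checks is the physical argument made just below: at the resonant frequency the T has a zero, so the loop is effectively open and the input-to-output gain is simply A-naught.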

We could argue that result physically by looking at the system with the T in the feedback path. At the resonant frequency, the T has a zero, no transmission, and so we get an input to output transfer function or a gain that's simply A-naught. And consequently, we'd anticipate that sort of behavior versus increasing A-naught.

If we consider this second system or the second possibility, in particular, looking at the second output as a function of input, we'd get exactly the same closed-loop poles. We must. But now the closed-loop zeroes are those associated with the T network because, in this case, the T is in the forward path. And so V-out 2 over Vi must include zeroes at the zeroes of the T network.

And so we get this sort of behavior. We get the complex conjugate pole pair. And the zeroes associated with that closed-loop transfer function are those of the T.

Again, at DC the vectors from the poles and zeroes tend to cancel. At very high frequencies, when we evaluate this transfer function for sinusoidal steady-state response at high frequencies, again, the vector lengths cancel. And the only difference occurs at, again, points near the location of the pole and zero pair.

When we look, for example, at the frequency right here and ask for the frequency response, it must go to zero. However, if we're even slightly displaced from that point, why, these two vectors just about cancel. These two vectors just about cancel. And so the gain of this network is the same at almost all frequencies except in the immediate vicinity of the zero.

And of course, at the zero, the response must go to zero. So here we'd anticipate a rejection amplifier where the response really goes to zero at 1 radian per second. And now, as we increase A-naught and bring this pole closer to the zero, the range of frequencies over which the cancellation does not occur between the vector lengths gets narrower and narrower.

And we get a much sharper null. And so this is the direction of increasing A-naught. As we increase A-naught, that becomes a narrower and narrower band of frequencies that we reject, and we can build a very, very high-quality rejection amplifier using that technique.

I have a demonstration that shows this, and I'd like to look at that for a moment. What we have here is the rejection amplifier or the frequency-selective amplifier. Here's an op-amp that provides the gain. And we have a potentiometer that allows us to adjust A-naught.

The frequency-selective network is in this region. We have one, two, three capacitors, one, two, three resistors, one of which includes a potentiometer. You can show that you can null the network. And you can cause it to have zeroes on the imaginary axis by adjusting any one of the six components.

The easiest one to do, in this case, was the resistor in the vertical leg of the T. And that's what this potentiometer does. And there's a buffer amplifier that just minimizes loading on the T network. And what we then do is drive this frequency-selective amplifier with a swept sinusoidal signal, which allows us to plot its frequency response. We look at the resultant frequency response on the oscilloscope.

For reasons of getting a clear presentation, it's easiest to do this with a sweep and without putting in the peak rectifier and all that. So what we'll see in the oscilloscope presentation is simply an envelope that includes both the positive and negative portions. What's really happening is the carrier is running along inside this envelope. And we just see the outline of the envelope.

We do a single sweep, which we store on a storage scope. The generator then retraces. So we'll look at single swept measurements of the frequency response.

Let me first erase the storage scope. And now we're running with a minimum value of A-naught. And what I will do is get a single trace. And there, this is looking at the one-output as a function of input. And this is the case where A-naught's quite small.

We notice very little peaking. Remember that, for small A-naught, the pole pair has a relatively large damping ratio. We'll now begin to increase A-naught. And now we notice just a little bit of resonance. Let's do another trace with a little bit larger value for A-naught.

We begin to see just a little bit of resonance. Another time, and some additional A-naught. Still more. And now we're beginning to get quite an observable peaking in the response.

Larger value of A-naught. Still larger value. And finally, the maximum value of A-naught that we get with our system.

Here we have a range of A-naughts that only allows us to get down to, possibly, a damping ratio of 0.1 or so for the dominant pole pair. So we don't get a very highly resonant effect, but we certainly get a very observable one. So that's the response of the system when we're looking at the one-output.

We can change our configuration, since both the one- and the two-outputs are brought out in this demonstration. So we can change our system configuration. This is the one-output, which runs to the oscilloscope. We can change that lead to the two-output and then look at the same sort of responses again.

So we'll simply move that over. And I've got to change the sensitivity on the oscilloscope, so let me do that. And let's erase our first response, in other words, the one that was the band-pass amplifier. And now let's look at these plots.

OK, there's a fairly gentle null. And let's start to increase A-naught. And we get a progressively sharper null. Again, we'll go to larger values of A-naught. And there, finally, is the maximum value of A-naught that we can achieve.

There's just a little bit of noise in that trace. Let me erase the oscilloscope and do one final transient and see if we can clean that up a little bit. And there we get a fairly good null out of our rejection amplifier.

Again, bear in mind that this amplifier only has a capability of getting closed-loop pole pair damping ratios of something like 0.1. We don't have a particularly large value for A-naught in this experimental set-up. However, if we were very careful and had a high-gain amplifier and very precisely adjusted the T network, we'd be able to get very, very sharp nulls out of the rejection amplifier. This offers a good way, for example, to build a filter that eliminates a particular disturbance, possibly 60 cycles from a set-up where that's a disturbance.
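For instance, the null frequency of the standard twin-T network is set by ω = 1/RC (series arms of R, R and C, C, with a vertical leg of R/2 and 2C). Here is a quick component sketch for a 60-cycle notch; the capacitor value is an illustrative choice, not one from the lecture set-up:

```python
import math

f_null = 60.0                        # Hz, the disturbance we want to reject
C = 100e-9                           # 100 nF, an illustrative capacitor choice
R = 1 / (2 * math.pi * f_null * C)   # twin-T null condition: omega = 1/(R*C)
print(round(R))                      # series-arm resistance, about 26.5 kOhm
```

In practice, as the lecture notes, the depth of the null depends on how precisely the six components are matched and adjusted.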

We can also look at the transient response of this sort of a system. And that's a somewhat more challenging kind of analysis than the frequency response. The sort of development I've done shows, in a fairly straightforward way, how we get the frequency response.

I'd like to ask the other question, which is, what's the step response of this system? Well, the step response of the band-pass amplifier is fairly easily determined, or at least estimated. I have a real fear of ever having to evaluate inverse transforms and that sort of thing. So I usually try to find nice ways of estimating important characteristics of responses.

And as I say, there's a fairly simple way of doing that for the band-pass amplifier. But let's look at the rejection amplifier, which is just a little bit more challenging. And it turns out what we can do to estimate the response of the rejection amplifier is exploit a property of linear systems. If we have a linear system, we're able to, conceptually at least, shuffle the blocks in a block diagram in any order that's particularly convenient for us.

And what I hope to do is choose one that's fairly easy for analysis. And so what I'd like to do is consider the overall system transfer function for the rejection amplifier, which includes a pair of zeroes on the imaginary axis and a complex conjugate pole pair. And I'd like to factor that into a term which includes the complex conjugate pole pair and then follow that by the complex conjugate zeroes.

When we do that, we get a system topology as shown behind me. What we're looking for is the step response at the two-output, in other words, the response we get with Vi of t being a unit step or a step. And I'll factor the system so that we have a block in the block diagram that gives us the complex conjugate pole pair. And then we'll follow that with the complex conjugate pair of zeroes.

Let's put a step into this box. We've looked at the step response of a lightly-damped complex conjugate pole pair several times before. It has this general form. That was the response we looked at several times ago. And as the damping ratio gets smaller and smaller, the rate of decay of this waveform gets progressively slower. We have more and more cycles of oscillation before it dies down to a given amplitude.

We apply that signal, which is very nearly a sinusoid with just a small amount of damping, to a pair of zeroes which are located at very nearly the frequency of this natural response. We have a pair of zeroes located at a frequency that's very, very close to this frequency of oscillation. The net result is they very nearly cancel out the amplitude or attenuate, almost perfectly, the amplitude of this oscillation.

Well, we can also argue that the initial value and the final value of the step response have to be equal to 1, if we had a unit step. That's a fairly simple application of the initial value theorem and the final value theorem. And by putting those things together, we conclude that the step response of the overall system, the poles plus the zeroes, is unity or very close to unity initially. It's unity again after long times. And what we get in between is just a little bit of residual of this sinusoid. It's quite well filtered out by the complex conjugate pair of zeroes.
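This estimate is easy to check numerically. Here is a short sketch (not from the lecture; it again assumes the normalized transfer function H(s) = (s² + 1)/(s² + 2ζs + 1), so both the poles and the zeroes sit near 1 radian per second):

```python
import numpy as np
from scipy import signal

# Normalized rejection amplifier: zeros at s = +/-j, poles with damping zeta.
zeta = 0.05                     # small zeta ~ large A-naught
sys = signal.TransferFunction([1, 0, 1], [1, 2 * zeta, 1])

t = np.linspace(0, 50, 2000)
t, y = signal.step(sys, T=t)

print(y[0])                     # starts at the final value: 1
print(y[-1])                    # and settles back to 1
print(np.max(np.abs(y - 1)))    # residual ringing, on the order of 2*zeta
```

The response starts at 1, settles back to 1, and the residual oscillation in between shrinks roughly in proportion to ζ, in other words, it gets smaller as A-naught increases.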

As we increase A-naught, why we'd expect the amplitude of this residual oscillation coming through, or the residual component associated with the poles, gets progressively smaller since the natural frequency of the zeroes corresponds more closely to that of the poles for larger values of A-naught. Let's see if we can look at that and go back to the low A-naught value.

We're looking at the two-output. And I have to change the generator just a little bit. We run it continuously. And what we're going to now do is put in a square wave. We'll go off the storage mode. And let me also see if we can trigger this just a little bit better.

All right, here we have the square wave response of our rejection amplifier with the lowest value for A-naught that our system allows. We can now begin to increase A-naught. And we'll see a picture that's very much like the one I've drawn on the blackboard.

As we go to progressively larger values of A-naught for the step response, we find out that the step response-- as I indicated it must, but didn't draw very well-- goes to the final value initially and to the final value, of course, for very long periods of time after the step's been applied. The initial and final value must be the same, which, as I say, is fairly easy to show via a Laplace-type analysis, an initial- and final-value kind of calculation.
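That initial- and final-value calculation can also be sketched symbolically (again assuming the normalized H(s) = (s² + 1)/(s² + 2ζs + 1), not a transfer function stated in the lecture):

```python
import sympy as sp

s, zeta = sp.symbols('s zeta', positive=True)
H = (s**2 + 1) / (s**2 + 2 * zeta * s + 1)   # normalized rejection amplifier
Y = H / s                                    # transform of the unit-step response

# Initial value theorem: y(0+) = lim_{s->oo} s*Y(s); final value: lim_{s->0} s*Y(s).
print(sp.limit(s * Y, s, sp.oo), sp.limit(s * Y, s, 0))
```

Both limits come out to 1, independent of ζ, which is exactly the statement that the step response begins and ends at the final value.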

And, in between, we have a small residual of the natural response of the complex conjugate pole pair. And the amplitude of that natural response gets progressively smaller as we increase A-naught. If we had a much larger value of A-naught, why, that amplitude would get still smaller. From here on in, the frequency of the residual component would not increase very much because, of course, the complex conjugate pole pair would simply be getting closer and closer to the zeroes, and the imaginary component would not change very much at all from here on. So the system has a rather interesting step response.

I think the important part of this development and the demonstration is that here are two systems that differ only in their zero locations. Clearly, the proximity of either system to instability is identical because the pole locations are identical for the two systems. An equivalent change in the location of the poles of either system would result in instability.

Yet, if we look at the responses, one of them is the one that we expect from a system close to instability, the resonant phenomenon. The other one, the rejection amplifier, is not one we're quite as familiar with. And so we don't identify that as being something that's potentially very close to instability. So here's an example where, in order to get an overall appreciation for the system and the difficulties we may have with it, we have to look at the zeroes as well as the poles that we get via our root locus analysis.

We'll continue using root locus as an analytic and a design tool throughout the rest of the course. But this concludes our introduction to these techniques. Next time, we'll look at another method for analyzing the stability of feedback systems. And this one will be based on frequency response techniques. Thank you.