James K. Roberge: 6.302 Lecture 07

Hi. Today I'd like to look at the stability problem in a somewhat different way. We've found, to this point, that we have two methods for evaluating the stability of the feedback system. One of them we looked at very briefly, or principally via homework problems, in particular the Routh test. And what the Routh test does is simply give us a straightforward, numerical way of determining the number of poles of a closed-loop transfer function that lie on the right half of the s-plane. And in that sense, it simply gives us a binary answer to the stability problem. In other words, the system is either stable if all of the poles of a closed-loop transfer function lie on the left half of the s-plane, or it's not stable if one or more lie on the right half.

As an alternative to that sort of stability analysis, we introduced the idea of a root-locus diagram. And this gives us considerably more insight into the performance of the system. We're able to determine relative stability. In the case where a system does have all of its closed-loop poles in the left half of the s-plane, we're able to tell how close, for example, a dominant pole pair is to the right half plane. So this becomes a much more powerful tool for the design of feedback systems.

Today we're going to look at another aspect of the stability problem, in particular, how can we determine the stability of a feedback system really based on frequency response measurements performed on the system? And we'll see that this gives us yet another tool for looking at the stability of the feedback system and, again, a very powerful design tool.

Why are we interested in an alternate technique? Root-locus certainly is a fairly general way to look at the stability problem. Well, one is just that there are certain systems which are more easily investigated by root-locus. Other systems are more readily investigated by frequency domain kinds of approaches. And we can find different kinds of information by these two approaches.

But there's another reason. And that is that there are certain sorts of systems that really don't lend themselves readily to analysis by root-locus techniques. We realize that in order to perform a root-locus analysis, we have to have the loop transmission of the feedback system expressed as a ratio of polynomials in s, and in particular, a finite ratio of polynomials in s, a finite number of poles, and zeros.

But not all transfer functions are expressible in that form. For example, a time delay has a transfer function e to the minus s Tau, where Tau is the magnitude of the time delay. And this transcendental function is not expressible as a ratio of polynomials where the polynomials themselves are finite. So there's an example of a transfer function that really can't be used directly in our root-locus analysis.
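
For reference, the frequency response of a pure time delay can be written down directly even though the transfer function itself is transcendental:

$$\left| e^{-j\omega\tau} \right| = 1, \qquad \angle\, e^{-j\omega\tau} = -\omega\tau \ \text{radians}.$$

So a delay contributes unity magnitude and a phase lag that grows without bound as frequency increases, something a frequency-response test can handle directly but no finite collection of poles and zeros can represent exactly.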

Another case of major practical interest involves the case where the information concerning the loop transmission is available only in experimentally measured form. In other words, we may have measured the frequency response associated with the loop transmission. We can, if we want to, curve fit and estimate that loop transmission as a collection of poles and zeros. But ofttimes that's a cumbersome operation and subject to inaccuracies. So again, it would be helpful if we had a technique that could work directly with the measured frequency response of a loop transmission and would allow us to evaluate the stability of the system using that information.

We start again with a system in our standard form-- an input, an output, a forward path transmission, small a of s, and a feedback path transmission, f of s. When we started looking at the stability problem, we mentioned that we can give a physical idea of why we might have instability in some systems by considering what happens if we break the loop and look at the loop transmission. In other words, we proposed a test where we apply a test generator to the system here and look at the signal the loop returns, call it V sub r. And we found out that for certain systems, in the absence of an input, we'd be able to get V sub r to be identically equal to V sub t under sinusoidal steady-state conditions for some frequency of excitation.

When that happens, we said we could then imagine oscillation, because we'd be able to reconnect the loop, reclose the loop, take away our generator, and the system would continue to oscillate forever. In particular, we looked at this situation for a system which had a loop transmission that included three coincident poles on the negative real axis. And we found out that, for that sort of a system, if we had a low frequency loop transmission magnitude-- an a0 f0 of 8-- the system, in fact, was capable of sustained, constant amplitude oscillations.

I also mentioned that there were certain problems with that sort of analysis-- that it wasn't a completely general one, and that it was possible that we could get into trouble with it. And I'd like to go back and look at another case of this and see where we might possibly get into trouble with that sort of simple-minded frequency response stability analysis.

Again, let's consider a system where we're making loop transmission measurements. And suppose we had an a of minus 10 and an f of 1 at some frequency. In other words, when we make sinusoidal steady-state measurements on the system, we make the input 0. We find that for some frequency of excitation, a has a magnitude of 10 and provides 180 degree phase shift. f has a magnitude of 1 with no phase shift. Consequently, the loop transmission, which includes one more inversion or one more 180 degree phase shift at this point in the diagram, has a value of plus 10. And we suspect that something might be wrong with that. If we put in a test signal here, get out a signal that's exactly in phase with our test signal but 10 times as large, we might suspect that there's potential difficulty with that sort of a system.

Well, there may or may not be. As it happens, we can't really tell about the stability of a feedback system by looking at its frequency response at only one frequency. We really have to look at the frequency response of the system over all frequencies in order to say anything definite about the stability of the system.

Let's consider the system that we mentioned earlier where a is equal to minus 10 and f is equal to 1. Suppose we simply go blindly ahead. We can write the closed-loop transfer function of the system at the particular omega, which gives us the a of minus 10 and the f of 1, as simply the forward path over 1 minus the loop transmission. But the forward path is now minus 10. The loop transmission is, itself, plus 10. So we get 1 minus 10 in the denominator. And we conclude that the input to output transfer function at that frequency would have a magnitude of 10/9. Let's see if that's self-consistent.

Let's consider what happens when we put in an input, for example, of 1 unit. Our equation, if it's correct, tells us that the output will then be 10/9. And let's see. When we subtract the 10/9 from the 1 unit, we'll get minus 1/9. But at this particular frequency we've said that a of s is minus 10. And when we multiply the minus 1/9 signal that exists at this point by minus 10, we, in fact, get 10/9. So the arithmetic seems to be self-consistent. And so, it's at least superficially possible for this system to behave in the manner that I've described-- in other words, behave in a stable manner-- even though at this particular frequency, it has a loop transmission of plus 10.
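
For reference, the arithmetic just described, written compactly with a equal to minus 10 and f equal to 1 at this frequency:

$$\frac{V_o}{V_i} = \frac{a}{1 + af} = \frac{-10}{1 + (-10)(1)} = \frac{-10}{-9} = \frac{10}{9},$$

and the loop checks out: the error signal is $1 - 10/9 = -1/9$, which multiplied by $a = -10$ returns the assumed output of $10/9$.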

Well, as I say, we have to look, really, at the characteristics of the loop transmission at all frequencies in order to determine stability. This simple evaluation at one frequency isn't enough and, in fact, can give us erroneous results.

The way we determine the stability of a feedback system based on frequency response measurements, in particular the frequency response associated with a loop transmission, is called a Nyquist test. And here, as we started with root-locus, we look at the characteristic equation for the system which is simply 1 minus the loop transmission. And in our standard notation, that's 1 plus a of s, f of s. And again, exactly as in the root-locus development, we recognize that closed-loop poles of the system will lie at values of s where this term goes to 0. The characteristic equation is, of course, the denominator polynomial of any closed-loop transfer function of the system. And so the closed-loop poles of the system lie at values of s where this quantity goes to 0 or, correspondingly, at values of s where a of s, f of s equals minus 1. Those, of course, are the values of s for which this quantity goes to 0.

Our system will be unstable if closed-loop poles exist for values of s with positive real parts, in other words, if the system has closed-loop poles lying in the right half of the s-plane. So our stability test is to determine if there are any values of s with positive real parts for which a of s, f of s equals minus 1. If there's one or more than one value of s with a positive real part for which a of s, f of s equals minus 1, the system is then unstable.
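
In equation form, the test just stated is

$$1 - L(s) = 1 + a(s)\,f(s) = 0 \quad\Longleftrightarrow\quad a(s)\,f(s) = -1,$$

and the system is unstable if any value of s satisfying this equation has a positive real part.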

How do we make this determination? Well, what the Nyquist test suggests that we do is the following. We perform a mapping. We evaluate the negative of our loop transmission, in particular the af product for values of s. So what we do is a mapping that takes values of s in the s-plane and transfers them to an af-plane. Remember that af of s is simply a function of s. And s is a complex variable in general with a real part and an imaginary part. And so, when we substitute into af some specific value of s corresponding to a point in the s-plane, we get a specific value for the af product, again a complex number.

And we choose to write that af product in a particular form. We choose to use a so-called gain-phase representation for the af product. In other words, here we have a gain-phase plot in the af-plane, if you will. This axis is magnitude, or gain. Generally, for the same reasons as with a Bode plot, we use a logarithmic presentation. So here we might have unit magnitude, a magnitude of 10, 100, and so forth-- 1/10, 1/100. Again, as I say, a logarithmic scale is usually most convenient because of the large dynamic range associated with the magnitude of the loop transmission.

And this axis is the angle associated with af-- 0 degrees in the middle, plus 90 degrees, plus 180 degrees, and so forth, and then negative angles to the left in this gain-phase plane.

Again, we'll see many uses of a presentation of the loop transmission or the negative of a loop transmission, the af product, in a gain-phase plane. But the important point is, of course, we can go from a point in the s-plane, substitute that value of s into the expression for af, and determine a magnitude and angle which we then plot in the af-plane.
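
A minimal numerical sketch of this mapping, assuming a particular af and test point purely for illustration (the function used is the three-pole loop transmission introduced later in the lecture):

```python
import numpy as np

def af(s):
    """Illustrative a(s)f(s): the three-pole loop transmission used later in
    the lecture, 10^3 / ((s + 1)(0.1s + 1)(0.01s + 1))."""
    return 1e3 / ((s + 1) * (0.1 * s + 1) * (0.01 * s + 1))

# Map one point in the s-plane to gain-phase coordinates in the af-plane.
s_point = -2 + 5j                         # an arbitrary complex value of s
value = af(s_point)                       # af evaluated there: a complex number
magnitude = abs(value)                    # plotted on the logarithmic gain axis
angle_deg = np.degrees(np.angle(value))   # plotted on the angle axis
print(magnitude, angle_deg)
```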

And there are certain properties of this mapping that are useful for our investigation of stability. In particular, if we evaluate the af product as s takes on values lying on a small circle in the s-plane, we find that we get a small circle in the af-plane.

We also find out that if we evaluate the af product again for values of s that lie on a closed contour in the s-plane, possibly here, when we evaluate the af product, as s takes on this sequence of values, we again get a closed contour in the af-plane, possibly this. Furthermore, all values of s that lie to one side of this contour in the s-plane lie on one side of the contour in the af-plane. There's an issue of insidedness, outsidedness here, and we have to investigate that separately. But for example, if we found that a value of s inside-- what we conventionally think of as being inside-- the contour in the s-plane, for example here, mapped to a point that appears to be outside the contour in the af-plane, then all values of s inside this contour in the s-plane would give rise to values of af outside this contour in the af-plane.

There's one additional property that we'd like to use. And that is that if we make a small, right angle turn-- in other words, we evaluate af for values of s that take on a right angle turn in the s-plane-- we again get a right angle in the af-plane, possibly this. As it happens, because of the choice of coordinates, when we make a right hand turn in the s-plane-- in other words, if we evaluated af along this path in that direction such that we took a right hand turn-- why we'd find that we actually take a left hand turn in the af-plane. And that simply has to do with the choice of axes for the angle. Had we chosen to plot positive angles to the left rather than to the right, as is conventionally done, why the right hand turns would've been maintained.

Well, how do we use all of this to investigate the stability of a system? The secret is to evaluate the af quantity along a contour that includes the region of interest. We're interested in whether there are any values of s with positive real parts that give rise to minus 1s of the af product. And so, what we do is evaluate the af product along a contour that includes the entire region of interest, in particular the entire right half of the s-plane. All values of s with positive real parts, of course, lie in the right half of the s-plane.

So what we do is choose a contour in the s-plane that includes the values of s of interest. And the one that does that is one that includes the imaginary axis and then is eventually closed with a very large semicircle that includes the entire right half of the s-plane. So what we do is evaluate af as s moves along a contour like this that includes the region of interest. And what we want to know is whether there are any minus 1s of the af product that correspond to values of s in this region.

Well, the thing that gives us some leverage is the fact that the minus 1s are very clearly discernible in the af-plane. If we look at the af-plane, let's consider how we identify a minus 1 in the af-plane. Well, a minus 1, of course, is a magnitude of 1 and an angle of 180 degrees, or minus 180 degrees, or in general, odd integer multiples of 180 degrees. So we have a minus 1 right here, magnitude of 1 and an angle of 180 degrees. We have another minus 1 in the af-plane, minus 180 degrees with a magnitude of 1. And we have more and more of these. We have another one 360 degrees displaced from this, at plus 540 degrees, another at minus 540 degrees, and so forth.

So the minus 1s are very easily observed in the af-plane. And that's the thing that gives us leverage and makes the Nyquist test a useful way of finding the stability of a system.

Let's look at how this might work. Here we show the Nyquist construction. We, again, are looking in the s-plane. Our usual axes-- the imaginary axis here, jomega, and the real axis. And we choose a contour for our Nyquist test that includes the entire region of interest. Usually, we start at s equals 0 plus just a small amount of j. In other words, at this point in the s-plane evaluate the af product at s equals 0 plus j0 plus if you will. And then we continue evaluating the af product as s takes on increasing values along this line, and imaginary component but no real component.

We get magnitudes and angles for the af product corresponding to the values of s along this line. And of course, those are simply the values that we'd get if we made frequency response measurements. Because evaluation of the af product for s equals jomega is exactly what we get when we evaluate the frequency response or measure the frequency response of our system. So we can get the values of the af product by experimental measurements. And as I mentioned earlier, that's one of the advantages of the Nyquist test. We can use experimentally determined data in order to look at the stability of a system.

We finally have reached a point in our construction where we have gone very, very, very far from the origin in the s-plane, in other words out to s equals-- still with 0 real part-- plus jR, where R is a large number. We then continue the contour into the region of interest, along really almost any path, but conventionally a large semicircle is chosen. So we let s equal R e to the j theta as theta goes from plus 90 degrees to minus 90 degrees. And then we continue our contour here, going from s equals 0 minus jR, back to s equals 0 plus j0 minus. So this is the contour that we use to evaluate the af product.
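
A minimal sketch of that contour as a sequence of complex values of s, assuming an arbitrary radius R and sample count:

```python
import numpy as np

def nyquist_contour(R=1e4, n=2000):
    """Return the contour described above as an array of complex s values:
    up the imaginary axis from s = j0+ to s = jR, around a large semicircle
    of radius R through the right half plane (theta from +90 to -90 degrees),
    then back along the negative imaginary axis from s = -jR toward s = j0-."""
    w = np.logspace(-3, np.log10(R), n)            # frequencies for the jw legs
    upward = 1j * w                                # s = 0 + j*omega, omega rising
    theta = np.linspace(np.pi / 2, -np.pi / 2, n)  # +90 degrees down to -90
    semicircle = R * np.exp(1j * theta)            # s = R * e^(j*theta)
    downward = -1j * w[::-1]                       # s = 0 - j*omega, back to 0
    return np.concatenate([upward, semicircle, downward])

contour = nyquist_contour()
```

Evaluating af at every element of this array produces the gain-phase curve described next.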

And when we do that we get some contour in the af-plane. And I've assumed this is the one that we've gotten. Here is the value of the af product for s equals 0 plus j0 plus. As we increase omega, s equals 0 plus jomega-- only an imaginary component, the real component being 0. We assume for this particular illustration that the curve follows this path. In other words, its magnitude gets smaller. Here we have the magnitude in the gain-phase af-plane. The magnitude gets smaller in this direction. The angle gets progressively more negative for increasing omega. And eventually, we get down to some very small magnitude for s equals plus jR. We then have to look-- and we'll do this in detail with a specific example-- at what happens to the af product as we continue into the right half of the s-plane along our large semicircle. We'll look at the specifics of that for a numerical example.

And we find out, of course, that since af has to be a physically realizable transfer function, or has to correspond to a physically realizable transfer function, why there has to be symmetry in this af plot. And that symmetry is shown-- in other words, once we've plotted the first half of the curve starting at s equals j0 plus, going out to s equals jR, and then continuing along a large semicircle back to a point that corresponds to the positive real axis at a radius R from the origin-- once we've drawn that much of the curve, the rest of the curve has to be a mirror image of this left hand portion, reflecting the fact that the poles and zeros associated with the af product have to occur in complex conjugate pairs.

Well, here we see the minus 1 points. There's two of them. And our only remaining problem is the determination of insidedness, outsidedness. In other words, we look at our original construction and we readily identify this as the inside of the contour-- what appears to us to be the inside of the contour in the s-plane. And if, in fact, we can find that there is one point in here that plots in here, in the af-plane, then we conclude in this particular figure that there are two values of s with positive real parts that give us minus 1s of the af product. That's clear because here's a minus 1. Here's another minus 1 inside our contour. In other words, if this were the region that corresponded to the shaded portion of the contour in the last figure, then, in fact, we've found two minus 1s of af that correspond to values of s inside that shaded region, in other words, values of s with positive real parts. So our system would then have two closed-loop poles lying in the right half of the s-plane. We'd conclude that it was unstable.

Let's apply this to a specific example where we assume that the transfer function is, in fact, expressible as a ratio of polynomials in s, and let's see how that goes. Let's choose an af product that's 10 to the third over s plus 1 times 0.1s plus 1 times 0.01s plus 1. In other words, we have an a0 f0, a DC loop transmission magnitude, of 10 to the third. Then we have three real axis poles.
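
Written out, that loop-transmission product is

$$a(s)\,f(s) = \frac{10^{3}}{(s + 1)(0.1s + 1)(0.01s + 1)},$$

with poles at s equals minus 1, minus 10, and minus 100, and an a0 f0 of 10 to the third.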

Here we show that situation. In fact, to aid our construction in the s-plane, we show the three loop transmission poles. We'd also probably show zeros to help ourselves if any existed. Here we have a pole at minus 1, a pole at minus 10, and a third pole at s equals minus 100. And here's the Nyquist contour that we're interested in. And we can see, at least quickly, what's happening to the general nature of the plot in the af-plane by considering vectors drawn from these three poles to some point on the imaginary axis.

And we notice, of course, that this function would have a maximum magnitude at s equals 0. And then, since the length of these three vectors to a test point on the imaginary axis increases for increasing values of omega, why we'd find that the magnitude of the af product would decrease. We know its DC value. We know its value for s equals 0. And the magnitude will then decrease as omega increases.

Furthermore, as we go to very large values of omega, way out up here on the top of the plane, the angle approaches minus 270 degrees. We eventually get minus 90 degrees, of course, from each of the three poles. So the angle of the transfer function eventually approaches minus 270 degrees, three times the minus 90.

Once we're out to a very large radius and continue around, why our magnitude stays fixed. If R is very large compared to 100, the magnitude of the most distant pole location, then as we close the contour, the lengths of these three vectors stay fixed as we go along a radius of capital R. And we notice that the angle associated with the af product must return to 0 when we're at this point on the contour. Because again, if we drew vectors from the three poles in question, the angle associated with each of those vectors would be 0 degrees. They'd lie along the positive real axis.

And so that sort of an argument allows us to construct the corresponding plot in the af-plane. And this is what happens when we do that. Here we start out at a value of 1,000, a magnitude of 1,000-- 1, 10, 100, 1,000-- a magnitude of 1,000 for s equals j0 plus.

As omega increases, the angle becomes negative, asymptotically approaching minus 270 degrees for s equals plus jR. The magnitude then stays fixed as we circle into the right half of the s-plane for our evaluation, and the plot is, of course, symmetric. If we are careful in this construction, we find out that the minus 1 points lie in the shaded region for an a0 f0 of 1,000.
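
A quick numerical check of this construction, a sketch rather than anything from the lecture: evaluate af along the positive imaginary axis and find where its angle passes through minus 180 degrees.

```python
import numpy as np

w = np.logspace(-1, 3, 100000)                 # omega from 0.1 to 1000 rad/s
af = 1e3 / ((1j * w + 1) * (0.1j * w + 1) * (0.01j * w + 1))
phase = np.unwrap(np.angle(af))                # falls from near 0 toward -270 deg
crossover = np.argmin(np.abs(phase + np.pi))   # sample closest to -180 degrees
print(w[crossover], abs(af[crossover]))        # roughly 33 rad/s with |af| near 8
```

The magnitude at that crossing is well above 1, so the minus 1 points fall inside the mapped contour, consistent with the conclusion drawn below that the system is unstable.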

And so our only final task is to determine whether this shaded region corresponds to this shaded region. And the way to do that, or at least one way to do it, is to take a test detour. Incidentally, we could do this by depending on the property that I mentioned earlier, that when we take a right hand turn in the s-plane, we actually take a left hand turn in the af-plane.

So here, notice we've taken a right hand turn at this point in the s-plane. And we find that we take a corresponding left hand turn in the af-plane like so. And that, in fact, tells us that the shaded regions correspond.

But another way to see that, and one that I find physically more appealing, is to take a little test detour. Let's look at a specific point inside this contour in the right half of the s-plane. We can do that here. We can assume we're at this point and then move in this direction a small amount. When we do that, the magnitude must drop since the vectors from the poles all increase in length. So the magnitude of our function must drop. The angle remains 0 degrees if we walk along the positive real axis. So if we evaluated af at a point in here, its angle must be 0 degrees. But its magnitude must be smaller than the magnitude that corresponds to our evaluation here.

And so we conclude that as we move along the test detour shown in the previous figure, we move in this direction. Consequently, the shaded regions in the two figures correspond. And that tells us, in this particular example, that there are two values of s with positive real parts that will lead to minus 1s of the af product. Here's a minus 1. Here's a minus 1. There must be two values of s in the right half plane that give rise to those values of the af product. Consequently, we conclude that this particular system has two closed-loop poles in the right half of the s-plane. It's unstable.

We of course suspected that already. We can get this same development by another technique that we know. We can look at this from a root-locus point of view. And here we draw a root-locus diagram for the same system. We have a pole at minus 1, a pole at minus 10, a pole at minus 100. And we find out that if we do the root-locus construction-- this is a familiar pattern-- these two branches come together, meet at about the arithmetic mean, somewhere around minus 5 1/2, and then branch off, eventually reach asymptotes that make angles of plus and minus 60 degrees. And a third branch goes off into the left half plane.

What that tells us is that for sufficiently large values of a0 f0, the system has two closed-loop poles with positive real parts-- here's one, and here's one-- once the a0 f0 product gets large enough.

We see exactly that behavior in this diagram. This apparently is an a0 f0 product that's large enough. For this particular location of the three real axis poles, the particular relative location, we find out that an a0 f0 product of 1,000 is enough. We can convince ourselves of that, for example, via a Routh kind of development.
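
A sketch of that sort of check, done here by numerically factoring the characteristic equation rather than by filling in the Routh array:

```python
import numpy as np

# The characteristic equation 1 + af(s) = 0 for this example expands to
# 0.001 s^3 + 0.111 s^2 + 1.11 s + (1 + a0f0) = 0.
for a0f0 in (100.0, 1000.0):
    poles = np.roots([0.001, 0.111, 1.11, 1 + a0f0])
    rhp = sum(p.real > 0 for p in poles)
    print(f"a0 f0 = {a0f0:6.0f}: poles = {np.round(poles, 1)}, "
          f"{rhp} in the right half plane")
# The Routh condition places the stability boundary near
# a0 f0 = 0.111 * 1.11 / 0.001 - 1, i.e. roughly 122.
```

With a0 f0 of 1,000 two of the closed-loop poles have positive real parts; lowering the low-frequency loop-transmission magnitude far enough moves them back into the left half plane, which is the same story the root-locus and Nyquist pictures tell.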

But as shown in the Nyquist construction, we have two minus 1s of the af product that result for values of s with positive real parts. If we lowered the a0 f0 product, that would correspond to simply sliding the shaded curve downward. If we lowered it enough-- when we lowered it about this much-- we'd finally get the curve down to the point where the minus 1s were outside the contour, and then the system would be stable. And that corresponds to the condition where we start at a high loop transmission magnitude, a high a0 f0 product. We have two poles in the right half of the s-plane. We lower a0 f0 and eventually the system becomes stable. So we see exactly that same behavior as we must in the root-locus presentation.

Well, again, as in the case of the root-locus development, we're interested-- in addition to a binary answer to the stability problem-- in sort of a relative measure of stability. Usually, we're able to fairly easily convince ourselves that the system is stable, or fairly easily get the system to be stable in an absolute sense. But the question is, how close to instability is it? What are some of the useful measures of relative stability, and what are their values? Really, how close to instability are we, and so forth?

And we saw we could get that very easily in a root-locus construction, since we had the location of all of the closed-loop poles of the system. And we could tell, for example, the damping ratio associated with the dominant pole pair and any other measure of relative stability that we'd like to have. We have the information available.

What we do, in this sort of a development-- in other words, when we evaluate stability based on frequency response-- is to really look at features of the closed-loop frequency response of the system, and in particular look at things, for example, like the amount of peaking that we get when we look at the closed-loop response of the system. We excite the system, look at its sinusoidal steady-state response, and we see, for example, the amount of resonant peak we get. We feel that a larger resonant peak indicates a relatively less stable system.

The evaluation is most easily made if we consider a system where the dynamics are all in the forward path. So I'll assume a system-- input, output-- and we assume that all of the dynamics of the system are concentrated in the forward path. And furthermore, again just to normalize things, we'll assume that we have unity feedback. And then what we're going to do is go ahead and look for, basically, resonances or peaking in the sinusoidal steady-state response that links V out to Vi.

Let me point out that this really isn't any restriction. Suppose we had a general system where we were interested in the frequency response. We'd like to determine V out over Vi for some value of omega. And we had, in general, the dynamics distributed between the forward path and the feedback path. Well, we could manipulate the block diagram as follows. This block diagram is identically equal to the lower one, where we have the af product in the forward path, unity feedback, and then follow that by a block that's 1 over f of jomega. And we can really evaluate the relative stability of our system by considering just this part of the system. So we really don't lose any generality when our interest is the evaluation of stability. We found out, in a root-locus sense, that it depends only on the af product. Again, when we base our stability analysis on frequency measurements, once again we're only really concerned about the af product. And we can determine our stability information from this portion of the block diagram corresponding to this one.
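
In equation form, the manipulation just described is

$$\frac{V_{\text{out}}}{V_i}(j\omega) = \frac{a(j\omega)}{1 + a(j\omega)f(j\omega)} = \frac{a(j\omega)f(j\omega)}{1 + a(j\omega)f(j\omega)} \cdot \frac{1}{f(j\omega)},$$

so the part that carries the stability information, G over 1 plus G with G equal to the af product, is separated from the 1 over f block that simply scales the closed-loop response.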

So what we'd like to do is look at the behavior of G over 1 plus G, where G is a function of jomega. We consider the closed-loop gain A of jomega being equal to G of jomega over 1 plus G of jomega. And we can very easily evaluate that for limiting cases. Because we recognize that this quantity is just about equal to 1-- in other words, I should have an equals 1 here-- when the G of jomega product is very, very large. Under those conditions, when G of jomega is large compared to 1, the numerator and denominator are just about equal. And so the magnitude of this term is just about 1. So we have one limit.

At the other end, when G of jomega is very small, the denominator is dominated by the 1 term. And under those conditions, the ratio G over 1 plus G is just about equal to G itself. Under those conditions, G is not important in the denominator. The difficulty in the evaluation comes when we're working at magnitudes of G of about 1. There, the angle of G has a very profound effect on the quantity G over 1 plus G.

For example, if the magnitude of G is close to 1 but its angle is close to minus 180 degrees-- in other words, G is about minus 1-- we have a very, very large magnitude for the G over 1 plus G product. Conversely, if the magnitude of G is about 1 and its angle is 0 degrees, we have a magnitude of about 1/2 for the G over 1 plus G product. So the difficulty-- or the place we have to focus on-- is what happens for magnitudes of G close to 1. At the extremes, for large magnitudes of G or for small magnitudes of G, it's fairly clear what the answer is.
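
The limiting cases just described, collected in one place:

$$\left|\frac{G}{1+G}\right| \approx \begin{cases} 1, & |G| \gg 1, \\ |G|, & |G| \ll 1, \end{cases} \qquad \left.\frac{G}{1+G}\right|_{G = 1\angle 0^\circ} = \frac{1}{2}, \qquad \left|\frac{G}{1+G}\right| \to \infty \ \text{as } G \to -1.$$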

Well, we'd be able to effectively make a dictionary that relates G over 1 plus G to G itself. G is a complex number. We could express it, for example, in gain-phase terms as a magnitude and an angle. And then there's some mapping from G as a gain-phase quantity, a magnitude and angle, to G over 1 plus G as a magnitude and angle. And we could simply make up a dictionary that listed possible combinations for the magnitude and angle of G and the corresponding magnitudes and angles for G over 1 plus G.

It happens that a way that's a little bit more convenient for our purposes is to not make a dictionary but rather to use a device called the Nichols Chart. And here's a Nichols Chart. It's a little bit confusing. But what there is is a grid in the background, which gives gain-phase coordinates. Here is the magnitude of G, the angle of G along here. Here we have a magnitude of G running from 10 to the minus third up to 10 to the third. Here we have an angle of G running from 0 degrees to minus 360 degrees. And this plane repeats every 360 degrees. So we have all the information that we need.

And what we're able to do is go in. For example, if we have a G of 10 and an angle of minus 60 degrees, that's a point in here somewhere. And we can determine corresponding values for G over 1 plus G from the coordinates, the curved lines on the figure. So if we went into our point, magnitude of G equals 10, angle of G is minus 60 degrees. That's somewhere in here. Well, let's see. We'd be about on this magnitude curve for G over 1 plus G. That's a magnitude of about 0.9. And we'd be, if we cared, on about this angle contour for G over 1 plus G. The angle of G over 1 plus G would be about minus 10 degrees.
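
A one-line version of the dictionary that the chart encodes graphically, checked at the point just read off (a sketch, using the magnitude-10, minus-60-degree point as an assumed example):

```python
import numpy as np

mag_G, angle_G_deg = 10.0, -60.0                   # the point read off above
G = mag_G * np.exp(1j * np.radians(angle_G_deg))   # G as a complex number
closed = G / (1 + G)                               # what the chart's curves encode
print(abs(closed), np.degrees(np.angle(closed)))   # magnitude and angle of G/(1+G)
```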

In the Nichols Chart, the critical point is, of course, the minus 1 point-- a magnitude of 1 and an angle of minus 180 degrees. And what happens as we approach that point-- here's a magnitude of 1, here is an angle of minus 180 degrees-- is that the magnitude contours get larger and larger and larger. Here we're starting at a magnitude just slightly above 1. And now we're getting closer to G equals 1. Here is G equals 10 and an angle of minus 180 degrees. And we have a magnitude for G over 1 plus G of about 1.1 or 1.2 at that point-- something like 1.1.

And now we're going down to a magnitude of the square root of 10. And we have a magnitude for G over 1 plus G of about 1.5. And here, as we approach the minus 1 point, the magnitude gets larger and larger. There'd be little ellipses in here with progressively larger magnitudes as we got closer and closer to the minus 1 point.

A very nice way to look at this is to look at a three-dimensional presentation of the Nichols Chart, something that we might call Mount Nichols, for example. And here we have exactly that situation. We are looking at precisely the diagram we have on the viewgraph, where we have magnitude along this axis-- here we're at a magnitude of 1/10, a magnitude of 1 in here-- and the magnitude increases in this direction for the magnitude of G. And the angle of G runs along this axis. Here we have an angle of 0 degrees and an angle of minus 360 degrees over at this edge.

And now the magnitude of G over 1 plus G is the height above the plane. We have sort of a contour map here, actually a three-dimensional contour map. And we get the value or the magnitude of G over 1 plus G as the height above this plane. And so we can envision, for example, drawing a series of values for the af product as omega changes, evaluate af of jomega or G of jomega for various values of omega. That'd be some contour in the G-plane. It would have a magnitude and an angle for various values of omega. And the height, the magnitude of G over 1 plus G, would be the closed-loop magnitude that corresponded at various frequencies. So when we had a G at this point, for example, that's about a magnitude of 1 and maybe an angle of minus 150 degrees or something like that. We'd find out that we had a magnitude that was this much. That's our relative elevation above the unity point.

And in particular, the behavior that we see is that as we approach the minus 1 point, as we approach a magnitude of 1 in here, right here, and an angle of minus 180 degrees right here, the magnitude gets progressively larger. And precisely at the minus 1 point, of course, the magnitude goes to infinity. And we can emphasize that, I think, in a somewhat graphic way by putting a flag on top of Mount Nichols right at the minus one point, where the magnitude gets to be very, very close to infinity.

Incidentally, this is just really a pitch for the equivalent of this subject that I teach at MIT. 6.302 is the number of a feedback systems subject that I teach at MIT. And we didn't change the flag for this particular presentation.

Usually, we don't need a Nichols Chart that extends over anywhere near as much area as the one that I've shown currently on the viewgraph. We argued earlier that we were able to very easily estimate the behavior for large G magnitudes and small G magnitudes. So really, we're only concerned with what goes on near a magnitude of G equals 1. And so normally, when we're working with this aid, we use an expanded Nichols Chart that focuses on the minus 1 point.

So here we have a Nichols Chart that covers only the range of angles from minus 60 degrees down just slightly beyond minus 180 degrees. And magnitudes, again, that center about a magnitude of 1. Here is 1, 1/10, 100, 10. And so we focus on the part of the gain-phase plane that's of interest, in particular the portion of the gain-phase plane that corresponds to magnitudes close to 1 for the loop transmission and angles of the af product of minus 180 degrees, corresponding to a positive loop transmission with a magnitude of 1. So we focus on this corner of the Nichols Chart, this corner of the gain-phase plane, which is of greatest interest to us when we're evaluating the stability of a system. And we use this as our measure, really, of relative stability. We can imagine plotting a gain-phase plot for a loop transmission. We can plot an af plot, really just the left hand portion of the Nyquist construction, in this plane. Omega would be a parameter running along that curve. And what we'd do is find that we'd be able to get the closed-loop transfer function by simply evaluating the magnitude and, if we cared to, the angle of G over 1 plus G as we progressed along the contour in the gain-phase plane.

Next time we'll look at an actual example of how we might do this construction and then continue with our evaluation, or our stability evaluation, based on frequency domain concepts. Thank you.