James K. Roberge: 6.302 Lecture 04


[MUSIC PLAYING]

JAMES K. ROBERGE: Hi, last time we started a discussion of the dynamics of feedback systems. Today, we're going to extend that discussion. And in particular, begin to look at the issue of stability of feedback systems.

Well, the problem really isn't stability, the problem is lack of stability, or instability. And we're going to look at how that comes about and how we can analyze the stability of a feedback system. In particular, what we discovered earlier is that we generally like to have a large loop-transmission magnitude, at least at low frequencies, in order to ensure system desensitivity.

What we find out unfortunately, is that when we take a feedback system and begin to increase the loop-transmission magnitude, the system invariably becomes unstable. And unfortunately, that very frequently happens before the system performs as well as we'd like it to. And so what we want to look at is the trade-off that generally exists between on the one hand, large loop-transmission magnitude that gives us good desensitivity, possibly fast speed of response, or equivalently wide bandwidth, and the stability of the system. So there's almost invariably a compromise that involves those three quantities.

We can get a very non-rigorous, though somewhat physically satisfying, feeling for why we might have a stability problem with a feedback system by considering what happens in a system that has large loop-transmission magnitude.

Suppose here is where we'd like the output of a system to be and here's where the output actually is. The system senses that there's a large error between input and output. And if it's a high gain system, a system with large loop-transmission magnitude, it begins to drive the output in order to get it into better correspondence with the command, the value that we'd like to have for the output.

The difficulty comes about in that with a high gain system, that drive may be very pronounced, very fast. And the system, unfortunately, has sort of imperfect knowledge of the effect it's having on the actual error because of time delays associated with the dynamics of the elements in the loop. Even though the output may be changing, we may not sense that immediately because of the low pass nature of some of the elements in the loop. And consequently, the system may not stop driving soon enough. It may overshoot. It may drive past the point of zero error.

In fact, if we have sufficiently high gain coupled with sufficiently long time delays, or the wrong type of low pass behavior, why the error that we cause, the overshoot in the opposite direction, may actually exceed our initial error. And when that happens, it's certainly possible for the system to break into oscillation. And that's what we have to look at.

The conventional definition of stability for a feedback system, or for any system for that matter, usually is made in terms of a bounded input resulting in a bounded output. That is, if we test our system with any input which is bounded-- that's to say, the magnitude of the input is bounded for all time-- then any one of those test inputs should yield a bounded output. In other words, the same measure of the output, the magnitude of the output for all time, should be bounded for any input which satisfies the top condition. And that's a mathematical definition of stability, which is adequate for either linear or nonlinear systems.
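
Stated compactly, the standard bounded-input, bounded-output condition being described is (a conventional formalization, not verbatim from the lecture):

```latex
\[
|v_i(t)| \le M < \infty \ \text{for all } t
\quad\Longrightarrow\quad
|v_o(t)| \le N < \infty \ \text{for all } t .
\]
```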

In general, the determination of stability for nonlinear systems is a very difficult one because stability for a nonlinear system can very easily depend on the actual input that we apply. In other words, certain bounded inputs may result in a bounded output. Whereas, other bounded inputs may not for a nonlinear system.

The problem is, in a sense, much simpler for a linear system. In fact, the stability of a linear system can be determined with any single bounded test input. If the system's stable for any test input that's bounded, it will demonstrate stability for any other bounded test input.

And furthermore, we can also define the stability for a linear system by looking at the transfer function of the system, the closed-loop transfer function in the case of a feedback system. And if all the poles of that transfer function are in the left-half of the s-plane-- in other words, if they all have negative real parts, then the system's stable.

The reason, of course, is that when we look at the output that results for any particular input, there are terms that correspond really to the homogeneous solution of the system differential equation. Those terms will include only decaying terms, terms that decay with time, if we have all the poles of the system in the left-half plane.

Conversely, if there are one or more poles in the right-half plane, why then we'll get exponentially growing terms from the homogeneous solution of the system equation. Consequently, the system will be unstable.

There's a special case which occurs when there are poles on the imaginary axis. In other words, an integrator or a sinusoidal oscillator, which has a pair of complex conjugate poles on the imaginary axis. And there, the stability, at least in a bounded input, bounded output sense, can depend on the test input. But that's sort of a pathological case and we really, for our purposes, will consider the system unstable if we have any poles of the system transfer function on the imaginary axis.

Let's look at a system and see how we might get into a stability problem. Let's consider a system where all of the dynamics are concentrated in the forward path. We'll see that that doesn't make any difference. The critical thing from a point of view of stability is the loop-transmission, not how the loop-transmission is distributed between the forward path and the feedback path.

Furthermore, we'll take our standard system and actually have a value of f of 1. So we have a unity feedback system. And that says that the negative of the loop-transmission is identically equal to the forward path transmission a of s.

Last time we looked at what happened to the closed-loop transfer function for certain specific cases of loop-transmission. In particular, in this particular system, if we have a forward path transfer function that's first-order, a0 over tau a s plus 1, and a0 is much, much larger than 1-- in other words, at low frequencies there's a large loop-transmission magnitude-- then in fact, the closed-loop transfer function from last time is first-order. We'd have a 1 over f0 out in front. But in our particular system, f0 is 1. So we simply have 1 over quantity tau a s divided by a0, plus 1. And the approximation involved here is simply neglecting a constant term in the denominator, which is 1 over a0. So to the extent that a0 is large, this is essentially an exact expression.
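
Written out with f0 equal to 1, the algebra being described is:

```latex
\[
A(s) \;=\; \frac{a(s)}{1 + a(s)} \;=\; \frac{a_0}{\tau_a s + 1 + a_0}
\;=\; \frac{a_0/(1+a_0)}{\dfrac{\tau_a s}{1+a_0} + 1}
\;\approx\; \frac{1}{\dfrac{\tau_a s}{a_0} + 1},
\qquad a_0 \gg 1 .
\]
```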

Well, if we examine that expression, provided both tau a and a0 are positive, there's no possibility of having a pole in the right half of the s-plane. In other words, this is a system that's always stable. We can change the location of the closed-loop pole by changing either tau a or a0. But providing they are both positive-- in other words, providing we have a system that's negative feedback. Remember, in our standard system notation, we indicate the inversion at the summing point. So providing our system is a negative feedback system, which corresponds to a positive value for a0, and providing the pole associated with the loop-transmission, in particular, that associated with tau a, is located in the left-half plane, then our system's always stable.

And in fact, it exhibits only exponential responses. There's no sinusoidal, no oscillatory response possible with a first-order system. So that's an example of a system that's really, ultimately, stable. We can change the speed of response by modifying either tau a or a0, but the stability is always, effectively, ideal.

If we look at a second-order system where a of s is a0 over two real axis poles. And again, f is equal to 1. And we put on the constraint that the low frequency loop-transmission magnitude is much, much larger than 1, now we have the possibility of oscillatory behavior. And we saw this last time.

We'd get a closed-loop response that, again, has a 1 over f0 out in front. But in our case, f0 is 1. And we then get a quadratic in the denominator of the closed-loop response. And certainly, by appropriate choice of tau a, tau b, and a0, we can end up with an expression that factors into two complex conjugate closed-loop poles. And under those conditions, we can write our closed-loop transfer function in our standard quadratic form. And it will exhibit a damping ratio less than 1.
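
Carrying out that algebra (a reconstruction consistent with the spoken description; with f0 equal to 1 here, a0 f0 is just a0):

```latex
\[
A(s) \;\approx\; \frac{1}{f_0}\cdot
\frac{1}{\dfrac{s^2}{\omega_n^2} + \dfrac{2\zeta}{\omega_n}\,s + 1},
\qquad
\omega_n = \sqrt{\frac{1 + a_0 f_0}{\tau_a \tau_b}},
\qquad
\zeta = \frac{\tau_a + \tau_b}{2\sqrt{\tau_a \tau_b\,(1 + a_0 f_0)}} .
\]
```

Note that increasing a0 f0 raises omega n and lowers zeta, which is exactly the trade-off discussed next.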

Well, again, we looked at this last time, and we saw that lower and lower values for damping ratio correspond to progressively more oscillatory step responses. Or if we look at the frequency response of the system, why they correspond to a greater and greater amount of resonant peaking in the frequency response. So when we have a second-order system, again, if we constrain ourselves to the case where tau a, tau b, and a0 are all positive-- in other words, the poles associated with the loop-transmission lie in the left-half plane, and we have a negative feedback system signified by a0 being positive, why we're able to get a system that has an arbitrarily low damping ratio by appropriate selection, if you will, of tau a, tau b, and a0. But we're never able to get poles in the right-half plane.

So again, if we start out with a system that has two energy storage elements, resulting in two left-half plane loop-transmission poles, why again we can get a system that, at least in a mathematical sense and an absolute sense, is always stable. The poles can have an arbitrarily small damping ratio, but the system, by this definition, will be stable.

The way we get into trouble is to have a third-order or a higher order negative feedback system. And we can demonstrate that if we look at a system which on a loop-transmission basis has three coincident poles. In fact, we could demonstrate it with three poles in the left-half plane independent of where they were located. But the mathematics is rather straightforward for a system that has three coincident real axis poles. So we show that by having an a of s that's a0 over tau s plus 1 quantity cubed.

And if we expand that transfer function, or look at the closed-loop response that results, again assuming that we have unity feedback-- that we have the topology we indicated earlier-- simply expanding out this expression, calculating the closed-loop A of s as this over 1 plus this, we end up with a closed-loop transfer function that's a0 over tau cubed s cubed plus 3 tau squared s squared plus 3 tau s plus 1 plus a0 as the final term.
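
In symbols, that closed-loop transfer function is:

```latex
\[
A(s) \;=\; \frac{a(s)}{1 + a(s)}
\;=\; \frac{a_0}{\tau^3 s^3 + 3\tau^2 s^2 + 3\tau s + 1 + a_0} .
\]
```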

Now, we could presumably factor that. It's not nearly as clear what the condition is to have all three poles in the left-half plane in this particular case. Certainly, the condition for having all the poles in the left-half plane for the first-order system is obvious. As long as this coefficient is positive, we're all right.

Similarly, for the second-order system, as long as these two coefficients are positive, again we're sure that we have both poles of a system in the left-half plane. But there's a far more complicated relationship among the coefficients for the third-order system. So we can't simply, by inspection, look at that expression and tell whether the system is stable or not.

However, let's consider a special case. In particular, let's consider what happens if we pick an a0 of 8. In other words, if we choose this term, 1 plus a0 to be equal to 9.

With that choice of a0, we're able to factor the denominator polynomial of A of s to find its poles. This expression incidentally-- we'll use that term many times-- is called the characteristic equation. 1 minus the loop-transmission, set equal to 0, is the characteristic equation of the system.

And so if we factor the characteristic equation, we find the closed-loop poles of the system. And for this particular value of a0, for a0 equals 8, we find that we have poles, a single pole on the negative real axis, located at s equals minus 3 over tau. And we have a complex conjugate pair of poles on the imaginary axis. They have no real part. And in fact, that complex conjugate pole pair is purely imaginary.
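
A quick numerical check of that factoring, with tau normalized to 1 so pole locations come out in units of 1 over tau (a sketch using numpy, not part of the lecture):

```python
# Factor the characteristic equation tau^3 s^3 + 3 tau^2 s^2 + 3 tau s
# + 1 + a0 = 0 numerically for a0 = 8, with tau = 1.
import numpy as np

a0 = 8
coeffs = [1, 3, 3, 1 + a0]
poles = np.roots(coeffs)
print(poles)
# -> approximately [-3.+0.j, 0.+1.732j, 0.-1.732j]
# i.e. one real pole at s = -3/tau and a purely imaginary pair at
# s = +/- j*sqrt(3)/tau, as stated in the lecture.
```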

And so the implication is that the homogeneous part of the system response would include a constant amplitude sinusoidal term. If we excited this system by initial conditions or excited its unforced response, it's natural response, why we'd find out that that response included a term that was a constant amplitude sinusoid or cosine because of the pair of closed-loop poles with no real part, the pair of poles on the imaginary axis.

Let's see if we can get some physical significance for why that might occur. Suppose we look at our loop-transmission, or at a of s, when we pick a value of a0 equal to 8. And in particular, let's evaluate a of s at the frequency j radical 3 over tau. There's something a little bit peculiar about that frequency. That's the frequency at which the closed-loop poles are located for a0 equals 8. So let's look and see if we can find out anything unusual about the loop-transmission at the frequency s equals j radical 3 over tau. In other words, for purely sinusoidal steady-state excitation at a frequency of radical 3 over tau radians per second. And with a value of a0 equal to 8.

Well, a of s at that frequency, which is identically the negative of the loop-transmission, is simply a0, which is 8, over j radical 3 plus 1 quantity cubed.

Notice that when we substitute in s equals j radical 3 over tau into our original expression for a of s, the taus cancel out. We end up with simply j radical 3 for the imaginary term associated with that factor. And so our overall value for a of s is a0 over j radical 3 plus 1 cubed.

If we look at the magnitude of any of the individual factors in the denominator, the magnitude of j radical 3 plus 1 is 2. So the magnitude of 1 over j radical 3 plus 1 is 1/2.

The denominator then has a magnitude of 8-- the j radical 3 plus 1, with a magnitude of 2, cubed. And so the magnitude of this expression is unity at s equals j radical 3 over tau.

And furthermore, the angle associated with this transfer function is minus 180 degrees. We can see that by noticing that the angle associated with each of these terms is plus 60 degrees. The imaginary part is radical 3. The real part is 1.

If we look at the trigonometry involved, the angle associated with any one of these terms is 60 degrees. Since they're in the denominator, they contribute minus 60 degrees per term to the overall transfer function. We have three of them and therefore, the complex number has a magnitude of 1 at an angle of minus 180 degrees. So for purely sinusoidal excitation at radical 3 over tau radians per second, a of s is equal to minus 1. 1 at an angle of minus 180 degrees.
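
The same arithmetic, checked numerically (again just a sketch; since the taus cancel, no value of tau is needed):

```python
# Evaluate a(s) = a0/(tau*s + 1)^3 at s = j*sqrt(3)/tau with a0 = 8.
import numpy as np

a0 = 8
factor = 1j * np.sqrt(3) + 1          # one denominator factor, (j*sqrt(3) + 1)
a_js = a0 / factor**3

print(abs(a_js))                      # -> 1.0 (to rounding): magnitude unity
print(np.degrees(np.angle(a_js)))     # -> +/-180.0, the same angle
# Each factor has magnitude 2 and angle +60 degrees; cubed and inverted,
# that gives magnitude 8/8 = 1 at -180 degrees, so a(j*sqrt(3)/tau) = -1.
```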

Let's look back at our original system and see what that implies. Suppose we performed an experiment where, first of all, we took the system and removed the input. So we eliminate Vi. And then, we break the loop like this. And now we perform an experiment where we excite the system with a test generator.

So what we'll do is put in a test generator here. Let's call it V sub t. And we'll look at the signal returned by the loop observed at this point. And we might call that signal V sub r.

Well, let's choose a V sub t as indicated earlier. In particular, let's choose V sub t to be a sinusoid at radical 3 over tau radians per second. So let's allow V sub t to be some amplitude k times sine of radical 3 over tau times t. Or cosine, doesn't matter. And let's allow the startup transient to die out. So we've had this signal applied for a very long period of time.

Eventually, we find out that V sub r is identically equal to our test signal. And the reason for that is that, remember, a of s under sinusoidal steady-state conditions with a0 equal to 8 and at radical 3 over tau radians per second, a of s is minus 1. So the minus 1 associated with a of s combines with the minus sign at the summing point to give us a positive loop-transmission with a magnitude of 1. That means that the signal returned by the loop is identical to our test signal.

Well, once we've done that, we can get the system started this way. And then we can reconnect the loop and we can eliminate the test generator. And the system doesn't realize that we've taken away the test generator. It's effectively able to supply its own input. And so we find the possibility of maintaining a constant amplitude oscillation with the test generator removed. We've found a way of forcing the system to oscillate at radical 3 over tau radians per second. So that's the physical significance associated with the three pole system, three coincident real axis poles, when we have a dc loop-transmission magnitude of 8, and when we test the system or look at the system loop-transmission at radical 3 over tau radians per second.
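
A minimal simulation sketch of that broken-loop experiment, assuming tau equals 1 (the time span and sample density here are illustrative choices, not from the lecture):

```python
# Drive a(s) = 8/(s + 1)^3 with vt = sin(sqrt(3)*t), let the start-up
# transient die out, and compare the returned signal vr = -a*vt (the
# extra inversion is the summing-point minus sign) with vt itself.
import numpy as np
from scipy import signal

a_sys = signal.TransferFunction([8], [1, 3, 3, 1])   # 8/(s + 1)^3
t = np.linspace(0, 60, 20000)
vt = np.sin(np.sqrt(3) * t)

_, y, _ = signal.lsim(a_sys, U=vt, T=t)
vr = -y                                  # summing-point inversion

tail = t > 50                            # after the transient has decayed
print(np.max(np.abs(vr[tail] - vt[tail])))   # -> small: vr matches vt
# The loop returns a copy of the test signal, so the loop can be closed
# and the generator removed: the system sustains its own oscillation.
```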

I think it's worth cautioning at this point that this is a fairly simplified view. Or, in fact, a very simplified view of stability. And the sort of test I described-- in other words, looking for frequencies where the loop-transmission is precisely plus 1-- is a little bit dangerous. We'll see how that goes a little bit further on. That's not a general test. Information concerning the loop-transmission at only one frequency does not, in fact, give us complete stability information. We have to be a little bit careful. In this case, it does give us the correct answer.

We found out by factoring the characteristic equation that there are, in fact, a closed-loop pair of poles on the imaginary axis, and that's the result we would have concluded from the experiment I described as well. But as I say, I caution you that this isn't a general test.

Well, we at least now see the possibility of having a system that behaves as a self-excited oscillator. In fact, if we examined the characteristic equation further, and we chose any a0 greater than 8, we'd find out that when we factored the characteristic equation, we'd actually get two poles with positive real parts. We'd find two poles in the right-half of the s-plane. And so if we took this system, three coincident left-half plane real axis poles, and used an a0 of anything greater than 8-- and that's fairly low desensitivity, a loop-transmission magnitude at dc of only 8--

Why, we'd find out that there would in fact be poles in the right-half plane. So here's a system where we really can't achieve very much desensitivity. If we try to make a0 any greater than 8, why we find out that the system becomes unstable.
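
Numerically, the boundary at a0 equals 8 shows up as a sign change in the largest real part of the closed-loop poles (a sketch with tau again normalized to 1):

```python
# Sweep a0: the sign of the largest real part among the roots of
# s^3 + 3 s^2 + 3 s + 1 + a0 shows when poles cross into the
# right-half plane.
import numpy as np

for a0 in [4, 8, 10, 20]:
    poles = np.roots([1, 3, 3, 1 + a0])
    print(a0, np.max(poles.real))
# a0 = 4  -> max real part negative (stable)
# a0 = 8  -> max real part ~0      (pole pair on the imaginary axis)
# a0 > 8  -> max real part positive (two right-half-plane poles)
```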

Well, how can we evaluate stability for arbitrary systems? And that's really what we're concerned about. Suppose we have systems that are third-order where we see that the possibility of instability exists. Or even higher than third-order, how do we determine the stability of those systems?

In fact, I think more to the point from a design standpoint is not so much the issue of stability in a binary sense-- is the system stable as given by our original mathematical definition? Does a bounded input result in a bounded output?-- but very rapidly, we'll get interested in sort of relative stability.

We can usually solve the absolute stability problem. We can usually guarantee that the poles are in the left-half plane by sufficient design effort. But then we get concerned about how stable the system is. In other words, a measure of relative stability might be the damping ratio of the dominant pole pair. If we find that the closed-loop transfer function has one pole pair closer to the origin than all other poles, we'll find out that that pole pair, by and large, dominates the transient response of the system. We might be interested in the damping ratio associated with that dominant pole pair. So we're looking for analog measures, if you will, of stability.

There are a number of ways we could proceed. One way is to simply use numerical techniques. We can always write a closed-loop transfer function for a linear system. And then we can use numerical methods, machine computation, to simply factor the polynomial, the characteristic equation.

Factoring higher order polynomials is a thing that people don't do very well, but computers do exceedingly well, and don't get bored by it. And we can get the exact closed-loop pole locations for our system.

The problem with that is it's very unsatisfying. What you find out is that you go to the giant mind and say, where are the poles? And it tells you there are three in the right-half plane and four in the left-half plane. And now, what do you ask for the next question? So you don't get any design insight by that process.

And so the thing we'd like to look at is some analytic method, something that we can do without really resorting to machine computation that maintains insight. I think the use of a machine to find exact pole locations is fine. Once we've got an idea of how the design is going, once we sort of know what we ought to do to make things better if you will, then we can go ahead and use the machine and get some very useful information from it. But as a design tool to start a design, it's very, very difficult to depend entirely on numerical calculations.

There's also the very real possibility of the garbage in garbage out problem where you can easily make a mistake in your simulation. You can easily get the wrong numbers in, get totally meaningless results, and not realize it if you don't have some sort of a hand check. So we're going to look at techniques that hopefully, allow us to maintain insight during the design process. And also, give us the possibility of preventing being led astray when we go to get exact numerical results.

One that I'll mention in passing is a technique known as the Routh test. And I discuss this in the text. It's very boring to lecture on and so I won't. But the method is described in the text. And in fact, the homework problems assigned in conjunction with this lecture will force you to carry out a couple of Routh tests. It's simply a mathematical method that was developed long before people were concerned about feedback systems. And what it does is take an arbitrary polynomial and determine the number of roots of that polynomial that have positive real parts.

And so what we're able to do is apply the Routh test to the characteristic equation of a feedback system. We can write the characteristic equation of the system, apply the Routh test to it. Which as I say, is a rather simple numerical manipulation that doesn't require factoring. It simply requires multiplications. And when we do that, we determine the number of roots of the characteristic equation that have positive real parts.

Well, the roots of the characteristic equation, of course, are the closed-loop poles of the system. And so we find out the number of system poles that have positive real parts. Except in limited cases, the Routh method doesn't really give us much design information. It does tell us in an absolute sense, again an absolute mathematical sense, whether a system is stable or not. But it really doesn't give us very much design information. So aside from possibly using it for very quick checks just to see if we're far away from useful behavior, or to check possibly the results of a detailed computer analysis to verify the number of poles in the right-half plane, we find that the Routh method is of limited usefulness as a design tool.
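
As a concrete illustration, here is a minimal sketch of the Routh computation for the basic case only-- it assumes no zero appears in the first column and no row vanishes entirely, special cases which the treatment in the text handles:

```python
# A minimal Routh-array sketch for counting right-half-plane roots.
import numpy as np

def rhp_root_count(coeffs):
    """coeffs: polynomial coefficients, highest power first."""
    n = len(coeffs)
    rows = [list(coeffs[0::2]), list(coeffs[1::2])]
    while len(rows[1]) < len(rows[0]):   # pad so starting rows match
        rows[1].append(0.0)
    for i in range(2, n):
        prev, prev2 = rows[i - 1], rows[i - 2]
        new = []
        for j in range(len(prev) - 1):
            # Standard Routh recursion: 2x2 determinant over the pivot.
            new.append((prev[0] * prev2[j + 1] - prev2[0] * prev[j + 1])
                       / prev[0])
        new.append(0.0)
        rows.append(new)
    first_col = [r[0] for r in rows[:n]]
    # Each sign change in the first column is one right-half-plane root.
    return sum(1 for a, b in zip(first_col, first_col[1:])
               if np.sign(a) != np.sign(b) and b != 0)

# Our third-order example, tau = 1: s^3 + 3 s^2 + 3 s + (1 + a0)
print(rhp_root_count([1, 3, 3, 1 + 4]))    # a0 = 4  -> 0 (stable)
print(rhp_root_count([1, 3, 3, 1 + 10]))   # a0 = 10 -> 2 (unstable)
```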

We're going to look at two methods in considerable detail that allow us to predict something about the relative stability of the feedback system. And also, to improve the performance of a system. We get insight that shows us how we might change either the forward path transfer function, the feedback path transfer function, something like that, in order to improve the stability of the system.

The first of these is called the root locus method. And what we do in root locus is the following. We look at an af product. Remember, this is the negative of the loop-transmission in our standard form, the loop-transmission for our standard form system being minus a of s times f of s. And we'll assume that that quantity can be represented as some dc loop-transmission magnitude a0 f0. And then we'll lump all the frequency dependent portion into a term g of s, where we assume we normalize g of s, or factor out the gain, and combine that into a0 f0, so that g evaluated at s equals 0-- the dc magnitude of the term g-- is unity. Hence the magnitude of the loop-transmission at dc is simply a0 f0.

Using that notation, we find that capital A of s, the closed-loop transfer function, is the forward path transmission a of s over 1 plus a0 f0 g of s. That denominator is 1 plus a of s times f of s, or equivalently, 1 minus the loop-transmission.

Initially with our root locus manipulations, we'll only be interested in the pole locations, the locations of the closed-loop system poles. And we won't initially be concerned about the location of the zeroes. If all we're interested in are the pole locations, why we can get that by simply looking at the denominator. Recall that for a system of this type, the zeroes are dependent on how the loop-transmission is distributed between a and f. But the poles of the system are dependent only on the af product. So we can consider only the characteristic equation, 1 plus a0 f0 g of s when we look for the system poles.

And in fact, we recognize that the system poles occur when the characteristic equation is equal to 0. In other words, if we determine the values of s for which the characteristic equation is 0, then those are the values of s for which the denominator of the closed-loop transfer function goes to 0. Consequently, they are the system poles.

Well, let's see how we might determine that. What one does in a root locus method is to determine how the poles are located as a function of, first of all, the poles and zeroes associated with a loop-transmission. In other words, the quantity g of s which includes all the frequency dependent portions of the loop-transmission. And the quantity a0 f0. Certainly the pole locations, the closed-loop pole locations, given by setting the characteristic equation equal to 0, are dependent both on the characteristics of g of s and on a0 f0.

If we would attempt to factor this equation, we'd find out that that factoring would depend both on a0 f0 and the specific details of g of s.

Let's look at how this might go. Let's start out with an a of s, f of s product that's a0 f0 over two first order poles. In other words, we have a system which has a loop-transmission that has two poles on the real axis. We've again, looked at this before.

The characteristic equation for such a system is simply 1 minus the loop-transmission, or 1 plus af. And so we get 1 plus a0 f0 over tau a s plus 1 times tau b s plus 1. And we set that equal to 0 in order to determine the poles of the system.

If we multiply through, clear our fractions, we end up with this equation: tau a tau b s squared, plus quantity tau a plus tau b s, plus 1 plus a0 f0, equals 0. And we can factor that by use of the familiar quadratic formula. Certainly, people are perfectly capable of factoring second-order equations.

And when we do that, we get the usual result from the quadratic formula. The two poles are located at minus quantity tau a plus tau b, plus or minus the square root of quantity tau a plus tau b squared minus 4 tau a tau b times 1 plus a0 f0, all divided by 2 tau a tau b.

And I'd like to spend a little bit of time looking at that particular expression. We now have a way of finding the closed-loop poles of the system, s1 and s2, as we vary a0 f0. Because a0 f0 appears, of course, in our expression for the closed-loop pole locations. So we've now gotten the basis for our root locus method, or what we're trying to accomplish in our root locus method. Because what we'd like to do in the root locus technique is find out where the closed-loop poles lie as a function of a0 f0 for a specified g of s. So at least for the second-order case, we're able to solve that exactly.
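
A short sketch of that calculation (the tau values below are illustrative choices, not from the lecture):

```python
# Closed-loop poles of the two-pole system from the quadratic formula,
# as a function of the dc loop-transmission magnitude a0*f0.
import numpy as np

def closed_loop_poles(tau_a, tau_b, a0f0):
    b = tau_a + tau_b
    radicand = b**2 - 4 * tau_a * tau_b * (1 + a0f0)
    root = np.sqrt(complex(radicand))    # complex sqrt handles both signs
    return ((-b + root) / (2 * tau_a * tau_b),
            (-b - root) / (2 * tau_a * tau_b))

# With tau_a = 1, tau_b = 0.1: poles start at -1 and -10 for a0f0 = 0,
# meet at -5.5 (the arithmetic mean) when the radicand is zero at
# a0f0 = 2.025, then split vertically with a fixed real part of -5.5.
for a0f0 in [0, 1, 2.025, 10, 100]:
    print(a0f0, closed_loop_poles(1.0, 0.1, a0f0))
```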

Let me show how this information is normally presented.

We normally present information concerning root locus analysis in terms of a root locus diagram. And this is a diagram in the s-plane. And we have the usual s-plane coordinates-- the imaginary axis vertically, the real axis horizontally. And we show several things.

We show, first of all, the location of any loop-transmission poles and zeroes. In other words, all of the frequency dependent portions of the loop-transmission are shown in our root locus diagram. So in our particular second-order case, we have two poles, one at minus 1 over tau a, the other one at s equals minus 1 over tau b.

We then draw some branches in the s-plane that show where the closed-loop poles lie as a function of a0 f0. And in fact, what we do is typically include arrows on those lines, on those branches, that indicate the direction of increasing a0 f0. Or a0 if we have a system with unity f.

We can determine how the root locus diagram has to go by looking at our quadratic equation. Let's see.

If we first start out for a0 f0 equal to 0-- what's conventionally done is to plot the locus of the roots of the characteristic equation as a0 f0 goes from 0 to some very, very large positive number. So we can start out looking at the root locus diagram, determining the roots of the characteristic equation for a0 f0 equal to 0.

Well, certainly, if we factor this characteristic equation under those conditions, we'd get the original pole locations, minus 1 over tau a and minus 1 over tau b. And that would be the result of simply taking this general expression and setting a0 f0 equal to 0.

As we then allow a0 f0 to increase, the radicand gets smaller. It started out as some positive number, of course, for a0 f0 equal to 0. In particular, quantity tau a plus tau b squared minus 4 tau a tau b, which is a positive number.

As we allow a0 f0 to get bigger, why this term increases in magnitude. Consequently, the radicand gets smaller. And what that says is that the separation between the two closed-loop poles is decreasing. They start out for 0 a0 f0 located at frequencies minus 1 over tau a and minus 1 over tau b. And then the separation decreases as we increase a0 f0.

They finally meet on the real axis when the radicand goes to 0. And therefore, they meet at minus quantity tau a plus tau b over 2 tau a tau b. That's the expression that results with the radicand 0.

And if we work with the algebra a bit, we find out that that's the arithmetic average of the locations minus 1 over tau a and minus 1 over tau b. In other words, it's the arithmetic average of the open loop pole locations, or the pole locations associated with the loop-transmission: minus 1/2 times quantity 1 over tau a plus 1 over tau b.
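
That bit of algebra is just:

```latex
\[
-\,\frac{\tau_a + \tau_b}{2\,\tau_a \tau_b}
\;=\; -\,\frac{1}{2}\left(\frac{1}{\tau_a} + \frac{1}{\tau_b}\right).
\]
```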

We can go back and look at our diagram. And what we've done is indicate this behavior in the diagram. For very small values of a0 f0, the branches start at the loop-transmission pole locations, minus 1 over tau a and minus 1 over tau b. The branches come together and actually meet at the arithmetic mean, at minus 1/2 of quantity 1 over tau a plus 1 over tau b. And then, branch off the axis.

Notice that, again, if we go back to our original expression, we had gotten up to the point where the branches had met. Beyond that, as we increase a0 f0 still further, why the radicand becomes negative. So we pick up an imaginary component to the pole location. And for further increases in a0 f0, we maintain a constant real part given by this value. But an increasing imaginary component because the radicand gets progressively larger in magnitude, although it's negative in sign.

And so we see the overall behavior of the root locus diagram. Again, we start for 0 a0 f0, or very small a0 f0 at the loop-transmission pole locations for this particular case. The two branches meet at the arithmetic mean, and then go north and south in the root locus diagram.

And in particular, the damping ratio associated with the complex conjugate pole pair once we have a sufficiently large value for a0 f0, can get arbitrarily small. Because if we were interested in the damping ratio when the closed-loop poles were located here, remember that that would be related to this angle. In particular, it would be the cosine of this angle.

And so, as we increase a0 f0, the closed-loop poles move out along these lines. The angle that they make with the real axis gets progressively larger. Consequently, the damping ratio gets smaller.
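
Continuing the illustrative example from above, that damping ratio, the cosine of the angle to the pole, can be computed directly from the pole location (a sketch; the tau values are again illustrative):

```python
# Damping ratio of the complex pole pair versus a0*f0, using
# zeta = -Re(s)/|s| for an upper-half-plane pole s -- equivalently,
# the cosine of the angle measured from the negative real axis.
import numpy as np

tau_a, tau_b = 1.0, 0.1
for a0f0 in [5, 10, 100, 1000]:
    b = tau_a + tau_b
    radicand = complex(b**2 - 4 * tau_a * tau_b * (1 + a0f0))
    s1 = (-b + np.sqrt(radicand)) / (2 * tau_a * tau_b)
    print(a0f0, -s1.real / abs(s1))   # zeta shrinks as a0*f0 grows
```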

This again verifies what we had said earlier, where we said that if we have a system that has two loop-transmission poles, and is a negative feedback system, and furthermore, both of the loop-transmission poles are located in the left-half plane, why we're then able to get an arbitrarily low damping ratio by appropriate choice of the combination of the pole locations and the a0 f0 product. And we find out that for any pole locations, a larger a0 f0, from this kind of analysis, results in a smaller damping ratio.

But we can never get the system to have the closed-loop poles lie in the right-half plane. We always maintained a fixed-- at least once we have a complex conjugate situation, we always maintain a fixed real part. And that real part is negative. So we can never get the poles into the right-half plane. Although we can get arbitrarily small damping ratios.

Well, this is the way one can determine a root locus diagram for a second-order system. Certainly, a root locus diagram for a first-order system is trivially simple. But for a second-order system, we can in fact calculate it fairly easily by hand. In fact, there is a formula that allows one to factor third-order polynomials. Presumably we could always do it for a higher order system by machine computation, but I've already bad-mouthed that. So what we're really searching for is a technique that will allow us to determine rapidly and easily a root locus diagram without the pain of actually having to factor the characteristic equation.

We'll look at that the next time, and see how we can use properties associated with the loop-transmission to actually determine the root locus diagram without actually having to factor the characteristic equation. Thank you.