James K. Roberge: 6.302 Lecture 03
JAMES K. ROBERGE: Hello, this morning we'd like to start a discussion of the dynamics of feedback systems. To this point, we've been looking at feedback systems where the quantities in the loop, in particular the forward path transmission a and the feedback path transmission f, have been pure numbers. They've been frequency independent. And we've concentrated on modeling feedback systems using these quantities and on exploring the important properties associated with the resultant systems. In particular, what we've found out is that if the loop transmission magnitude, in particular the af product is large compared to one, then in fact the properties of the system are uniquely determined by the feedback path. This is the important property of the feedback system. In particular that if we have a large enough loop transmission magnitude, we can virtually uniquely determine the behavior of the system by the feedback path. And in particular, the closed loop gain of the system is approximately equal to the reciprocal of the feedback path gain.
We also find out that as a consequence of having large loop transmission, the system becomes desensitized to changes in the forward path gain. The exact magnitude of a is relatively unimportant provided the af product is large.
Last time we also saw a demonstration which showed that even if the forward path gain is nonlinear, provided the loop transmission magnitude's large, we still get the advantage of feedback. In particular, the closed loop gain is very close to the reciprocal of the feedback path transfer function.
We'd now like to look at a somewhat more complicated and certainly far more realistic case. In particular, the case where the loop transmission itself is frequency dependent. We recognize that because of energy storage, the low-pass nature of most physical elements, in any real feedback system we have to contend with dynamics. Typically both in the feedback path and in the forward path.
We'll have a great deal to say about dynamics throughout the subject. In fact, this subject is really a study of the dynamics associated with feedback systems. And so today's lecture is really a first look at dynamics.
We assume that we have a feedback system in its standard form. In particular, a forward path transfer function a, a feedback path transfer function f. And here I've indicated specifically that these can be functions of the complex variable s. So we emphasize that by plainly labeling a of s and f of s for the transfer functions of the two boxes.
Also, I'd like to remind you of the notation that we introduced where we associate the case of the variable and it's subscript, or use that combination to indicate the kind of a variable. Here, we have a capital variable and a lowercase subscript. And we remember that that's the notation that we use when we're interested in frequency domain representations or complex amplitudes. And so once again, to emphasize the fact that we're now concerned with the dynamics of our system that the elements in the loop and, of course, correspondingly the closed loop transfer function are now frequency dependent, we use the combination of the capital variable and the lowercase subscript.
We can, of course, use exactly the same rules that we've used prior to this to write the closed loop transfer function. The development wasn't dependent on a and f being constants. So we can write that V out over V i, which is by definition capital A, the closed loop gain of the system, is equal to the forward path gain over 1 minus the loop transmission. Remember in our standard form the loop transmission is minus the af product, the minus sign coming at the summing point. So 1 minus the loop transmission is the denominator expression in our closed loop gain.
Here again, if we're interested in, for example, frequency response. We recall we simply substitute j omega for s. And when we do that in both the forward path transfer function and the loop transmission, why we end up with the closed loop gain as a function of j omega complex frequency.
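As a sketch of this substitution (not from the lecture; the gain, feedback, and time-constant values below are made-up illustration values), the closed-loop frequency response can be evaluated numerically by forming a(jω) and f(jω) and applying A = a/(1 + af):

```python
import numpy as np

def closed_loop(a, f, w):
    """Closed-loop gain A(jw) = a(jw) / (1 + a(jw) f(jw)) for the standard form."""
    s = 1j * w
    return a(s) / (1.0 + a(s) * f(s))

# Illustrative first-order forward path and constant feedback (assumed values)
a0, tau_a, f0 = 1000.0, 1e-3, 0.1
a = lambda s: a0 / (tau_a * s + 1.0)
f = lambda s: f0

# At low frequency the closed-loop gain approaches 1/f0
A_dc = closed_loop(a, f, 1.0)
print(abs(A_dc))  # close to 10 = 1/f0
```

Since a0 f0 = 100 here, the low-frequency closed-loop gain is within about one percent of 1/f0, consistent with the desensitivity argument above.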
We can assume or see one important property that might come from the desensitivity associated with feedback systems. Suppose the forward path transfer function is in fact frequency dependent. In other words, the gain of the forward path changes with frequency. That's essentially what we mean by having a frequency dependent transfer function.
Well, we recognize that provided the loop transmission magnitude is large, why the system is desensitized to changes in the forward path gain. It doesn't matter whether those changes occur in a frequency independent way or whether they occur in a frequency dependent way. Providing the loop transmission magnitude is large, we've found that the system is insensitive to the exact magnitude of the forward path gain. And consequently, even if the forward path is frequency dependent, if the loop transmission magnitude is large, we'd still assume that the closed loop transfer function of the system would be determined primarily by the feedback element.
Let's look at how that works for several fairly simple systems. Let's consider a forward path transfer function a or a of s, which is first-order. In other words, there's some dc gain that we'll call a0, a naught, and we'll use that notation throughout for the dc magnitude of the gain. And then there's a first-order pole with a time constant tau sub a. Of course, we recognize that to find the dc gain, we allow s to go to 0. We find, of course, that the dc gain is simply a0.
Similarly, let's assume that the feedback path for our first system is, in fact, frequency independent. And we'll indicate that by simply having f equal to f0 not a function of s.
We then write the closed loop gain expression for our system, or the closed loop transfer function since we're now frequency dependent. We find the capital A of s, the closed loop transfer function, is equal to simply the forward path transfer function a naught over tau a s plus 1 divided once again by 1 minus the loop transmission.
Looking at our original system, we recognize that the loop transmission is minus af. Consequently, 1 minus the loop transmission is 1 plus a0 f0 over tau a s plus 1.
We can simplify this expression for the closed loop transfer function by multiplying through by the reciprocal of the second term in the denominator. And when we do that, everything except the f0 cancels with the numerator term, so we get a 1 over f0 out in front. And the denominator then, we'll break up the reciprocal of this term into a term tau a s over a0 f0 plus 1 over a0 f0. And then of course, the constant term that results when we multiply the second term in the denominator by its reciprocal. And so we get this expansion for the closed loop transfer function of the system.
Now, in most cases of interest in feedback systems, the a0 f0 product is large. In other words, we have a system that at least at 0 frequency, has large loop transmission magnitude. Something may happen to the loop transmission magnitude at higher frequencies, but we'd hope that at least at relatively low frequencies, the loop transmission magnitude is large. Consequently, the a0 f0 product, hopefully is large.
If that's the case, we should be able to ignore that term compared to 1 in the denominator. And we can then, with that degree of approximation, write that the closed loop transfer function for our first-order system, a system that had a single pole in its forward path transfer function of frequency independent feedback-- frequency independent feedback path is simply 1 over f0 times 1 over tau s plus 1, where we can identify tau, the location or the time constant associated with the closed loop pole, as simply being equal to tau a divided by a0 f0.
So reasonably enough, when we start out with a system that has a single pole in its loop transmission, in other words, a system that has one energy storage element, why we end up with a closed loop system that has a single pole. However, the location of the pole, the time constant associated with the pole, or correspondingly the frequency at which the pole is located is changed as a consequence of feedback. It's moved from its open loop location, or it's open loop time constant of tau a. It's modified by a0 f0.
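To make the pole movement concrete, here is a small check (with made-up values for a0, f0, and tau a) comparing the exact closed-loop time constant, tau a over 1 plus a0 f0, with the approximation tau a over a0 f0 used above:

```python
# Assumed illustration values, not from the lecture
a0, f0, tau_a = 1000.0, 0.1, 1e-3

# Exact closed-loop denominator is tau_a s + 1 + a0 f0, so normalizing gives
tau_exact = tau_a / (1.0 + a0 * f0)
tau_approx = tau_a / (a0 * f0)        # approximation for a0 f0 >> 1

print(tau_exact, tau_approx)          # the two agree to about 1% when a0 f0 = 100
```

The feedback has sped the system up by a factor of roughly a0 f0: the open-loop time constant of 1 millisecond becomes a closed-loop time constant of about 10 microseconds.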
Throughout the material that we're looking at, we'll be interested in the relationship between the transient response of our feedback systems and the frequency response of such systems. In particular, a very useful indicator of the performance of a feedback system is its step response. That's easy to determine experimentally. Remember that in order to measure a frequency response, why we have to get a sinusoidal generator. We have to make measurements at a large number of frequencies. That's tiring and time consuming. Generally unproductive.
Conversely, if we get simply a square wave generator, have a low enough frequency square wave so that the system thinks that it's being excited with a sequence of steps, we're able to readily measure the step response of the system. So there's very real experimental advantages in evaluating many of our systems by means of transient response. In particular, step response. And so throughout the material we'll be looking at the relationship between the time domain step response of a system and the corresponding frequency response.
Here in our simple first-order system we of course have a first-order closed loop response. If we looked at the step response of that system, it would be the familiar exponential. It would take a period tau to get to 1 minus 1 over e of its final value. So that's the kind of response that we should all be familiar with.
We can look at the same thing in the frequency domain. Again, we have a very familiar sort of transfer function in the frequency domain. But I show this just to get our notation consistent.
We will throughout the subject use the usual Bode plot coordinates to represent the frequency response of a system. In particular, you recall in a Bode plot the two axes are frequency on a logarithmic axis, usually in terms of omega, the frequency in radians per second. And magnitude, once again, on a logarithmic scale.
Many people use dB, or decibels, as the unit of magnitude. I choose not to do that, primarily because of some difficulty with units that crops up on a decibel scale at times. Decibels really should be applied only to dimensionless quantities. And so there's a technical point that I prefer to avoid. And the way we do that is to simply indicate the magnitude on a logarithmic scale.
Here we have a magnitude of 10 to the minus 2, 10 to the minus 1, 10 to the 0, and so forth. So throughout the subject, we'll use that sort of a magnitude axis for our Bode plots. So we do plot the magnitude, and we also simultaneously show the angle associated with our transfer function for, of course, sinusoidal steady-state excitation.
And here we have the angle scale over on the right-hand side going from 0 degrees to minus 90 degrees. The magnitude of our simple first-order transfer function normalized to unity, this would assume in the expression that we had earlier that 1 over f0 was equal to 1. So we're plotting simply the magnitude of 1 over tau s plus 1.
At sufficiently low frequencies of course, the tau s term is vanishingly small. And what we find out is that the magnitude is simply 1.
Similarly, at sufficiently high frequencies, the tau s term dominates. And consequently at high frequencies, the magnitude falls off as 1 over frequency. And so on a log log presentation, that corresponds to a slope of minus 1 going out in this direction.
And the actual magnitude drops below the so-called asymptotic approximation, that is, the intersection of two lines, one flat and one with a slope of minus 1, which meet at omega equals 1 over tau, the so-called corner frequency. The actual magnitude of the first-order transfer function is below the asymptote at the corner frequency by a factor of 0.707. And again, if you're more familiar with this on a decibel scale, that's minus 3 dB at the corner frequency.
The angle, similarly, goes from originally 0 degrees at frequencies where the s-term in the denominator is insignificant, to eventually minus 90 degrees. At sufficiently high frequencies, the denominator is dominated by the j-term. The angle of j is, of course, plus 90 degrees. Since it appears in the denominator, that converts to minus 90 degrees for the angle of the transfer function. So we have an angle which goes from, effectively 0 degrees at sufficiently low frequencies, to minus 90 degrees at sufficiently high frequencies. At the corner frequency, the real and the imaginary part of the denominator are equal. Consequently, the angle associated with the denominator is plus 45 degrees. Therefore, the angle of the transfer function is minus 45 degrees.
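These corner-frequency values can be verified directly (a sketch, with an arbitrary assumed time constant): evaluating 1 over tau s plus 1 at s = j omega with omega = 1 over tau gives a magnitude of 0.707 and an angle of minus 45 degrees.

```python
import numpy as np

tau = 1e-3                          # assumed time constant
wc = 1.0 / tau                      # corner frequency in rad/s

# First-order transfer function 1/(tau s + 1) evaluated at s = j*wc
Hc = 1.0 / (1j * wc * tau + 1.0)    # = 1/(1 + j)

print(abs(Hc))                      # 0.707..., a factor of 1/sqrt(2) below the asymptote
print(np.degrees(np.angle(Hc)))     # -45 degrees
```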
Well, that's certainly the simplest kind of transfer function we can have where we have just a first-order function, a single pole in the forward path of the system. And the interesting point that we should emphasize here that ties us back to our earlier material on feedback systems or on dynamic-less feedback systems, is that assuming we have large loop transmission magnitude, then at sufficiently low frequencies-- in other words, where the tau s term in the closed loop transfer function is unimportant, the closed loop gain is effectively 1 over f0. That's what we'd anticipate. We said, look, at sufficiently low frequencies, the a0 f0 product is large. The loop transmission magnitude is large. We'd anticipate that the closed loop gain was determined simply by the feedback element. And this is how it shows up. The closed loop gain at sufficiently low frequencies is simply 1 over f0.
We then get a pole associated with the closed loop gain. At higher frequencies, the gain begins to drop off. It begins to drop off, in fact, when the magnitude of this term becomes 1. Or correspondingly, when the quantity tau a times omega-- recall we'll substitute in s equals j omega. If we look simply at the magnitude of that term, we get tau a times omega over a0 f0. When that magnitude is equal to 1, why that's the closed loop corner frequency.
We can take the inverse of that term and also recognize that the corner frequency occurs where a0 f0 over tau a omega is equal to 1. And we have a good physical interpretation of that.
The loop transmission or the negative of the loop transmission of our system is, of course, the forward path transfer function a0 over tau a s plus 1 times the feedback path transfer function, which in this case is simply a number f0.
Assuming that we have a large a0 f0 product, which is how we started this entire [INAUDIBLE]-- frequencies beyond the corner frequency of the loop transmission. In other words, at frequencies large compared to 1 over tau a, why the magnitude of this loop transmission is approximately a0 f0 over tau a times omega. In other words, this term in the denominator is large compared to 1.
If we compare that expression with the top one, we've shown that the closed loop corner frequency occurs when a0 f0 over tau a omega equals 1. In other words, when the magnitude of a loop transmission is equal to 1. And again, we see the same sort of behavior that we've associated with feedback systems before. In particular, when we're at frequencies where the loop transmission magnitude is large compared to 1. In other words, at frequencies where a0 f0 over tau a omega is large compared to 1. Then, in fact, the closed loop transfer function of the system is approximately 1/f.
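As a numerical sanity check of this crossover argument (again with assumed values), the closed-loop corner frequency a0 f0 over tau a is indeed the frequency where the loop-transmission magnitude passes through 1:

```python
a0, f0, tau_a = 1000.0, 0.1, 1e-3    # assumed illustration values

def loop_mag(w):
    """Magnitude of the negative of the loop transmission, |a0 f0 / (j w tau_a + 1)|."""
    return a0 * f0 / abs(1j * w * tau_a + 1.0)

# Closed-loop corner frequency, 1/tau = a0 f0 / tau_a
w_corner = a0 * f0 / tau_a

print(loop_mag(w_corner))            # very nearly 1 when a0 f0 >> 1
```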
We depart from that behavior at the frequency where the loop transmission magnitude is 1. Beyond that frequency, when the loop transmission magnitude is small compared to 1, the closed loop transfer function drops off. And in fact, becomes effectively identical to the forward path transfer function. So we should've been able to predict this kind of frequency behavior from our earlier discussion of dynamic-less feedback systems.
I'd now like to look at a slightly more complicated system. Let's consider a system which has two poles in its forward path transfer function. We now have a of s being equal to a0 over the quantity tau a s plus 1 times the quantity tau b s plus 1. And similarly, f is equal to f0. This should actually be a of s.
If we continue the same sort of a closed loop development as we did before, we get the closed loop gain of the system is approximately equal to 1 over f0. And we now get a quadratic in the denominator. The only approximation here is the same one that we had in the previous first-order example. In particular, we assume that a0 f0 is large compared to 1.
If we make that assumption, we then find that the closed loop transfer function is once again, 1 over f0 at sufficiently low frequencies. Except now we get a second-order denominator of course, reflecting the fact that there are two energy storage mechanisms in our system. In particular, two poles in the forward path transfer function.
For certain combinations of tau a, tau b, a0, and f0, it's clear that it may not be possible to factor the denominator of the transfer function into two real roots. And under those conditions, it's convenient to write our second-order expression in the following standard form. And we'll use this throughout the material. 1 over f0 times 1 over s squared over a quantity called the natural frequency squared plus the s-term with a coefficient twice a quantity called the damping ratio divided by the natural frequency plus 1. Again, our standard form, as in the case of the first-order standard form, has unity dc magnitude when the s-terms are unimportant. We then get the behavior that we'll see in a moment at higher frequencies.
And I say this particular representation will be used throughout. It's convenient in the case where the denominator cannot be factored into two real roots. In other words, where we get a complex conjugate pair of roots. Under those conditions, the quantity of the damping ratio is less than 1. And I have again, a Bode plot for that.
It's also interesting to look at the physical significance of the quantities damping ratio and natural frequency. And we show that in the view graph that's up. In particular, the natural frequency is the distance of the complex conjugate pole pair from the origin, and the damping ratio indicates really, how close to the imaginary axis the pole pair is located. How much they're rotated toward the imaginary axis. And in particular, this angle, the angle between a radius drawn to one of the poles and the real axis in the s-plane, is such that theta is the arc cosine of the damping ratio, or the cosine of this angle theta is in fact equal to the damping ratio. So this is the physical significance of the two quantities. In particular, omega n gives us the radial distance of the complex conjugate pole pair from the origin. And effectively, indicates the speed of response of the system.
Recall that if we simply stretch the s-plane but maintained all relative locations the same, all the angles the same, we'd simply alter the speed of the response of the system. And so the natural frequency gives us a measure of the speed of response of the system. The damping ratio, of course, as the name implies, tells us how well damped the system is. A system with a damping ratio of 1 would have the poles coincident on the real axis. We'd effectively have two first-order terms located with identical time constants.
As we go then to damping ratios less than 1, which is of course the only case where we'll use this form, we get a complex conjugate pole pair lying closer and closer to the imaginary axis in the s-plane.
We can also look at the Bode plot for those transfer functions. Or for the second-order transfer function. And the magnitude one is presented first. Let's go back to our original expression and let's evaluate the magnitude of our second-order transfer function.
Well, recall that of course the magnitude is the square root of the square of the imaginary part plus the square of the real part. And in this case, we have two terms in the real part. We have the constant term and when we substitute j omega for s, we square the j and we get a minus 1. And so we get a term from this part of the equation that's omega squared over omega n squared with a minus sign. So the real part of our expression is 1 minus omega over omega n quantity squared. 1 minus omega squared over omega n squared. We square that quantity. That's the square of the real part.
The square of the imaginary part. Again, we substitute j omega for s. The square of the imaginary part is simply 4 times the damping ratio squared over omega n squared. And we pick up an omega squared from the s-term. So here's the square of the imaginary part.
At the natural frequency-- in other words, if we excited our system at its natural frequency, the real term in the denominator would go to 0. And in fact, there's the possibility of a very large amplitude out of the term for systems with small damping ratio. When the real term is 0, we get simply 1 over twice the damping ratio for the magnitude of a second-order system. So we have the possibility for resonance, of course, with a second-order system. And we see that behavior in our Bode plot.
Here, once again, we have the asymptotic approximation, unity normalized to unity at low frequencies. Now dropping off as 1 over frequency squared, a slope of minus 2 on log log coordinates. And superimposed on that asymptotic approximation are the actual curves for a number of damping ratios. For example, a damping ratio of 1 corresponds to two coincident real axis poles. The magnitude is simply 1/2 at the corner frequency. As we go to progressively smaller values of damping ratio however, the magnitude at the corner frequency becomes 1 over twice the damping ratio. For example, for a small damping ratio, a damping ratio of 0.05 for example, the magnitude is 10 at the corner frequency.
If you do this in considerable detail, you find out that there is actually no peaking in the frequency response. In other words, we never get above a magnitude of 1, providing the damping ratio is greater than 0.707. If the damping ratio is less than 0.707, there's some resonant peak. It doesn't occur necessarily at the natural frequency, but the magnitude will get somewhat above 1 over some range of frequencies for any damping ratio smaller than 0.707.
We can also look at the angle associated with this plot. Usually for Bode plots we have the angle and the magnitude on separate plots. In the first-order one we had them together. That wasn't very confusing. But on a plot that has this much detail, we can show things a little bit better if we keep the magnitude and the angle separate.
And so here's the angle for our second-order transfer function. And again, we notice that for a damping ratio of 1, we get an angle that goes from 0 to minus 90 degrees at the corner frequency. Ultimately, to minus 180 degrees. That's simply again, the combination of two first-order terms with equal break frequencies, equal corner frequencies.
As we go to progressively smaller damping ratios, the curves become steeper and steeper. In fact, for a damping ratio of 0, why we'd get a step change in angle at the natural frequency. We'd go from 0 degrees to minus 180 degrees at the natural frequency.
We can also look at the transient response, the step response, that corresponds to this transfer function. Again, I emphasize that throughout the material, we'll be going back and forth, translating between time domain responses, which are usually what we measure, and frequency responses. So we've looked at the Bode plot. We've looked at the frequency response of the second-order system. Now let's look at its transient response.
Here's the step response for the system that we've described. Again, normalized to unity. In other words, this is simply the step response that corresponds to the 1 over the standard second-order transfer function. And here we see again, the expected behavior. The fact that small values of damping ratio-- for example, a damping ratio 0.05-- lead to highly oscillatory behavior. This thing goes for quite a period of time. A very small rate of decay.
Conversely, as we have a better damped response, why we get progressively smaller amounts of overshoot.
Here's a damping ratio of 0.707. Notice in contrast to the frequency domain representation, even with a damping ratio 0.707, there is some overshoot beyond final value. In fact, that's true for any damping ratio less than 1. A damping ratio of 1 has a step response that never overshoots. But any damping ratio smaller than 1 gives us some amount of overshoot in response to a step.
Again, this is normalized in terms of the natural frequency. The natural frequency simply adjusts the time scale of the step response of the system.
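The overshoot behavior described above follows the standard second-order step-response result, which can be checked numerically. This is a sketch, not from the lecture: the fractional peak overshoot of the standard underdamped second-order system is exp of minus pi zeta over the square root of 1 minus zeta squared.

```python
import math

def overshoot(zeta):
    """Fractional peak overshoot of the standard second-order step response,
    valid for 0 < zeta < 1 (underdamped complex-conjugate poles)."""
    return math.exp(-math.pi * zeta / math.sqrt(1.0 - zeta * zeta))

for z in (0.05, 0.5, 0.707):
    print(z, overshoot(z))
```

A damping ratio of 0.05 overshoots by roughly 85 percent of the step, consistent with the highly oscillatory traces, while 0.707 overshoots by only about 4 percent, and the overshoot goes to zero as the damping ratio approaches 1.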
I have another way of looking at the step response of a system. We actually have built up a combination of operational amplifiers in this box, which simulate a second-order system. This is a technique that's used in analog computation. I discuss that in the book in chapter 11, and we may have some examples later in the term that deal with this.
But here's a connection of operational amplifiers. It includes two energy storage elements. In particular, two operational amplifiers connected as integrators. And that allows us to form a system that provides a second-order response. And we're then able to vary, for example, a damping ratio of that system and look at the responses on the oscilloscope.
Here we see a second-order response. Again, if we compare this with the figure that we had up earlier, why this might be a damping ratio of, I don't know, 0.4 or something. We'll find out in a minute.
What we're able to do is vary the damping ratio by turning this calibrated potentiometer on our second-order system box. And, for example, we can go to progressively higher damping ratios. There's a damping ratio of 1. We can back off.
Notice as soon as-- here again, we're at a damping ratio of 1. Notice as soon as I get to a damping ratio smaller than 1, we're going toward a damping ratio of 0. As soon as I get to a damping ratio smaller than 1, why we begin to get some overshoot in the response. We find that there's some peaking. We can make that peaking very large. In fact, it gets to a maximum value equal to the magnitude of the step. For sufficiently small damping ratios like so, there's a damping ratio of 0.05 for example. And we can go even a little bit further. And here we have a damping ratio of 0.03 or something like that. The way we'll normally try to adjust or design our systems is, of course, for better damping than the highly oscillatory response. We might look for damping ratios of something like 0.5. And in order to see how that looks, why we can adjust our potentiometer to 0.5.
And when we do that, we get quite a well-behaved transient response. We notice that there's some overshoot, a few percent overshoot in the step response. But generally, a fairly well-behaved response that damps out fairly quickly.
Well, one might question why we've spent seemingly a long amount of time dealing with such simple systems. Here we've looked only at first- and second-order systems. We've started out assuming that the dynamics of the system were, first of all, all concentrated in the forward path. But in particular, there was only one energy storage element for our first-order system. There were only two energy storage elements in the system we've just discussed. One might say, well, certainly the real world is different from that. There are more modes of energy storage in a typical system. Why do we concentrate on first- and second-order systems?
Well, the answer is that many, many feedback systems-- in fact, virtually all feedback systems exhibit responses that are well approximated by either first- or second-order transfer functions. And the reason for that isn't particularly obvious. I think we'll get a better feeling for it as time goes on. But if you build a good feedback system, it's virtually a certainty that its closed loop response will be well approximated by that of either a first- or a second-order system. And I'd like to look at that for one particular higher order system.
What we have here is a demonstration that we'll actually use several times during the term. So it has more in it than is actually necessary here. But in particular, the part we're going to focus on is one operational amplifier that's located right here. And that operational amplifier has a capacitor associated with it that can be connected external to the amplifier via two terminals, two pins, on the amplifier. And what we're doing is connecting the operational amplifier simply to provide unity gain. And by way of background, this operational amplifier happens to be a commercially available integrated circuit. It's an LM301A type of amplifier.
If you look at the schematic for that amplifier, there's something like 13 transistors in the forward gain path. Consequently, if we, for example, investigated the amplifier using any of the common transistor models, why we'd conclude that there were 26 energy storage elements, two capacitors associated with each transistor. There'd be 26 energy storage elements in the amplifier in the gain path.
Furthermore, even that's optimistic. Because when one makes an integrated circuit, there are distributed capacitances associated with all the resistors, for example. So there'd be considerably more degrees of freedom, considerably more modes of energy storage than even the 26. And if we tried to write a complete transfer function, we'd rapidly conclude that the thing was totally analytically intractable.
But what I'd like to do is look at the response of that amplifier under several different conditions, and show that, in fact, the statement I made earlier holds. In particular, it's quite easily approximated by either in one case, a first-order system. And in another case, a second-order system.
What we have here is the amplifier with a relatively large, at least for this amplifier, capacitor. Something like 100 picofarads across the so-called compensation terminals. And again, we'll have a great deal more to say about this particular amplifier later on.
If we look at the step response that results under those conditions, here we have it. And we notice to a very good degree of approximation, a first-order kind of step response. Here we see something that looks very much like a simple exponential. We can calculate the effective time constant of that exponential if we cared to. And the only possible difference comes right at the very, very beginning. There's a little departure from pure exponential behavior right down at t equals 0 for the step response of our amplifier. But, by and large, this is very, very nearly a first-order response.
And as I say, this is a response that's coming from an amplifier that even in the most optimistic of analyses, has 26 capacitors. We can change the compensating capacitor. And under those conditions, we get a somewhat different-looking response. In particular, we get one that looks very much like a second-order response.
Here, now the time scale or the speed of response of the amplifier has gotten much faster. So let's change the time scale on our oscilloscope so that we can observe it. And there we have something that looks very much like a second-order response. I think I can emphasize the good approximation that we can get to this actual amplifier response by a second-order response by putting back on the same screen the response associated with our second-order system, our collection of operational amplifiers that provides a second-order kind of response.
So now I've got the response of our second-order system as the top trace, the response of our actual amplifier. Which again, optimistically, is 26th order, as the bottom trace. And let's just see how good a job we can do matching those two responses.
It looks like the amplifier is a little bit more lightly damped than the second-order system right now. So let's go toward a damping ratio somewhat lower than 0.5. And I don't know, there we're at 0.43 or something like that for a damping ratio. And now let's see how well we're matching.
We're not quite there. There are a number of degrees of freedom as far as how we line up the various traces. We can change the time scale simply by changing the time base on the oscilloscope; that really corresponds to modifying omega n, or at least the picture changes as though we were modifying omega n.
And if we change both the time scale and the damping ratio associated with our second-order system, and the amplitude final value and so forth, and the exact time location of the two traces, we're able to get quite a good match between the second-order system and the actual amplifier. And just to show that I haven't cheated, let me move this one back up. And there we see the two traces once again. So we've shown, at least in this one particular case, that we can get very good approximations to the behavior of a complicated feedback system as either a first- or a second-order system.
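The standard underdamped second-order step response being matched on the screen can be written in closed form, which also gives the peak overshoot directly. Here is a minimal Python sketch of that textbook formula; the function name and the choice of omega n are mine, and the damping ratio 0.43 is the value found above while matching the traces:

```python
import math

def step_response(t, zeta, wn):
    """Normalized step response of the standard underdamped second-order
    system 1 / (s^2/wn^2 + 2*zeta*s/wn + 1), final value 1."""
    wd = wn * math.sqrt(1.0 - zeta**2)   # damped natural frequency
    phi = math.acos(zeta)                # phase of the decaying sinusoid
    return 1.0 - math.exp(-zeta * wn * t) / math.sqrt(1.0 - zeta**2) \
                 * math.sin(wd * t + phi)

# Peak overshoot above final value for the damping ratio found above
zeta = 0.43
Mp = math.exp(-math.pi * zeta / math.sqrt(1.0 - zeta**2))
print(f"peak overshoot: {100 * Mp:.1f}%")  # roughly 22% for zeta = 0.43
```

The peak occurs at t = pi / wd, where the response reaches 1 + Mp before ringing down to the final value.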
What that allows us to do is, again, for example, translate between transient responses, which we might measure, and frequency responses, which we might like to know as an aid to the design process, by really taking advantage of properties of first- and second-order systems. There's a fair amount of this sort of material in the book. And the translation has to do with parameters measured in the time domain. Here we look at the output signal as a function of time for an input signal which is a step. So we look at the step response of the system, and we contrast that with the frequency response. In other words, the magnitude of V out over V i as a function of s, or as a function of j omega, plotted versus frequency, the usual Bode plot kind of presentation.
And as you might imagine, there are relationships that have to do, for example, with the amount of overshoot in response to a step compared to the amount of peaking that occurs in the frequency response. Those two quantities are related in some way. Larger overshoot indicates a more poorly damped system for step responses. A greater resonant peak indicates a more poorly damped system when we look at its frequency response.
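For a second-order system that correspondence can be made quantitative with the standard formulas: the fractional step overshoot is exp(-pi*zeta/sqrt(1-zeta^2)), and the resonant peak of the frequency response, relative to the low-frequency value, is 1/(2*zeta*sqrt(1-zeta^2)) for damping ratios below about 0.707. A small sketch, with function names of my own choosing:

```python
import math

def overshoot(zeta):
    """Fractional step-response overshoot of a second-order system."""
    return math.exp(-math.pi * zeta / math.sqrt(1.0 - zeta**2))

def resonant_peak(zeta):
    """Frequency-response peak relative to the DC value (zeta < 0.707)."""
    return 1.0 / (2.0 * zeta * math.sqrt(1.0 - zeta**2))

# Both measures grow together as damping decreases
for zeta in (0.2, 0.43, 0.5):
    print(f"zeta={zeta}: overshoot={overshoot(zeta):.3f}, "
          f"peak={resonant_peak(zeta):.3f}")
```

Lowering the damping ratio raises both numbers at once, which is the connection described above: more overshoot in the time domain goes with more peaking in the frequency domain.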
There's some sort of correspondence between the time that it takes for a step to reach final value, or some fraction of final value, and the frequency response of a system. Again, a wider frequency response implies a faster time response. And in particular, that relationship is such a good one that I think it's worth emphasizing even beyond the material in the book.
Again, one of the things that's very easy to measure, as I indicated before, is the step response. And it turns out that we can define a measure of the speed of the step response, the rise time: in other words, the time it takes to go from 10% of final value to 90% of final value. Here we've normalized the step response to have a final value of unity.
If we define the 10% to 90% time as the rise time of the system, tr, we can compare that with a measure of performance, a measure of speed of response, in the frequency domain. In particular, omega h, the so-called 1/2 power frequency, where the magnitude of the transfer function has dropped to 0.707 of its low-frequency value. For virtually any system that has a frequency-independent low-frequency gain, the product of the rise time times the half-power frequency is about equal to 2.2. This is really an approximation. You can't prove it in general. But if you look at virtually any physical system, you find out the approximation holds within a few percent.
Similarly, if we calculate the product of the rise time times fh, which is simply omega h divided by 2 pi, the 1/2 power frequency in Hertz, we find out that product is about 0.35. Again, what this does is allow us, for example, to make a simple step response measurement, an experimentally simple step response measurement, and, by using information on the rise time, estimate what omega h, what the frequency response of the system, will be. And that gives us a much more convenient way of estimating the frequency response than actually making the detailed measurement, which is experimentally cumbersome.
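The 2.2 and 0.35 numbers can be checked exactly for a single-pole system, where the step response is 1 - exp(-t/tau) and the half-power frequency is 1/tau: the rise time works out to tau times ln 9, so tr times omega h is ln 9, about 2.197. A quick sketch (the 1 millisecond time constant is an arbitrary choice; any value gives the same products):

```python
import math

tau = 1e-3                    # assumed time constant: 1 ms (arbitrary)

# Step response 1 - exp(-t/tau): solve for the 10% and 90% crossing times
t10 = -tau * math.log(0.9)    # time to reach 10% of final value
t90 = -tau * math.log(0.1)    # time to reach 90% of final value
tr = t90 - t10                # rise time = tau * ln(9)

wh = 1.0 / tau                # half-power frequency, rad/s
fh = wh / (2.0 * math.pi)     # half-power frequency, Hz

print(f"tr * wh = {tr * wh:.3f}")  # about 2.2
print(f"tr * fh = {tr * fh:.3f}")  # about 0.35
```

For higher-order systems with frequency-independent low-frequency gain the products drift only a few percent from these values, which is why the rule of thumb is so useful.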
I would like to, just by way of review, introduce one more technique that we'll use. We've looked at Bode plots for first- and second-order systems. Very frequently, while the closed-loop response of our system can be well approximated as either first- or second-order, we find out that the loop transmission is really much higher order. For example, again, the loop transmission in our operational amplifier is optimistically 26th order. And we'll find out that we have to do a lot of manipulations that involve a Bode plot representation of the loop transmission of an operational amplifier, or another feedback system. And so I'd like to very quickly review how we might construct a Bode plot for a higher-order system.
And here what I've assumed is that we're trying to draw a Bode plot for a transfer function that has a constant term of 10 to the seventh and a pole at the origin. In other words, we have 10 to the seventh over s at sufficiently low frequencies. So this is a system that includes an integration.
There is a first-order pole on the negative real axis with a time constant of 1/100 of a second: the 0.01 s plus 1 term.
There's a first-order zero with a time constant of 10 to the minus fourth seconds, or correspondingly, a corner frequency of 10 to the fourth radians per second.
And then there's a second-order term, a second-order pole pair with a natural frequency of 10 to the sixth radians per second. Remember, the coefficient of the s squared term is 1 over omega n squared, in this case 10 to the minus 12th. Consequently, omega n is 10 to the sixth. This particular second-order term has a damping ratio of 0.2: the coefficient of the s term is 2 times the damping ratio divided by omega n. So we have a fairly complicated transfer function.
And the way those terms are combined to determine an overall Bode plot is probably most simply handled by first plotting the individual factors. Up here, I show the magnitude of the 1 over s term. We still have a factor of 10 to the seventh to take care of, but that's fairly easy. So we have the magnitude of the 1 over s term; that simply has a slope of minus 1 at all frequencies. We have the magnitude of the pole located at 100 radians per second. We have the magnitude of the zero located at 10 to the fourth radians per second. And we have the magnitude of the second-order transfer function, located at 10 to the sixth radians per second; that's this final curve.
Similarly, we can plot the individual angles. The angle of 1 over s, or 1 over j omega, is simply minus 90 degrees at all frequencies. The angle associated with the pole is this one, going from 0 to minus 90 degrees, passing through minus 45 at the corner frequency. The angle associated with the zero goes from 0 to plus 90 degrees. The angle associated with the second-order term goes through minus 90 degrees at the natural frequency, eventually reaching minus 180 degrees at sufficiently high frequencies.
What we can then do is recognize that the magnitude of a product of complex quantities is simply equal to the product of the magnitudes of the individual terms, whereas the angle of the product is simply equal to the sum of the angles of the various terms.
Since we're doing a Bode plot which uses a logarithmic magnitude presentation, in order to get the product we simply add the individual terms. They're plotted on a logarithmic scale and so adding logs corresponds to multiplying magnitudes. And so we can somehow go in and maybe graphically just use a pair of dividers or something to combine all of the magnitudes at any particular frequency. If we wanted to know the magnitude at 10 to the second radians per second, for example, we take this magnitude plus that one. These are effectively 1. And we'd combine that with the 10 to the seventh term we've left out.
Similarly, we can do the same thing with the angle. We can again, graphically, with a pair of dividers or something, add up all the angular contributions. Recognizing the fact that the angle of our complex variable is the sum of the angles of the individual terms.
And when we do that, we get the overall Bode plot for our system. Remember that at sufficiently low frequencies, the magnitude was going as 10 to the seventh over s, or 10 to the seventh over omega. So here at omega equals 1, we have a magnitude of 10 to the seventh. The angle, in the meantime, is minus 90 degrees, reflecting the angle associated with the 1 over s term. Then we get the pole and the zero: the pole drives the angle down toward minus 180 degrees, and the zero adds another plus 90 degrees, driving it back toward the original minus 90. Then we get the angle and the magnitude associated with the second-order transfer function. So by drawing the individual terms and combining them graphically, we're able to quite rapidly sketch a Bode plot that gives us a very, very good indication over a very large frequency range, and over a very wide dynamic range. Here we have an amplitude range of 10 to the eighth to 1 plotted, and a frequency range of over 10 to the sixth to 1.
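The graphical construction can be checked numerically by evaluating the same transfer function directly at s equals j omega; the magnitude of the product is automatically the product of the magnitudes, and the angle is the sum of the angles. A sketch, with a function name of my own choosing (the 4 times 10 to the minus 7 coefficient of s is just 2 times the damping ratio, 0.2, divided by omega n, 10 to the sixth):

```python
import cmath
import math

def loop_transmission(w):
    """Evaluate the example transfer function at s = j*w:
    10^7 (1e-4 s + 1) / [ s (0.01 s + 1) (1e-12 s^2 + 4e-7 s + 1) ]
    where the second-order term has wn = 1e6 rad/s and zeta = 0.2."""
    s = 1j * w
    return (1e7 * (1e-4 * s + 1)) / (
        s * (0.01 * s + 1) * (1e-12 * s * s + 4e-7 * s + 1))

# At omega = 1 the plot should show a magnitude of about 10^7
# and an angle of about -90 degrees, as described above.
for w in (1.0, 100.0, 1e4, 1e6):
    L = loop_transmission(w)
    print(f"w = {w:8.0e}: |L| = {abs(L):10.3e}, "
          f"angle = {math.degrees(cmath.phase(L)):7.1f} deg")
```

Sampling at the corner frequencies, 100, 10^4, and 10^6 radians per second, reproduces the breakpoints of the sketched asymptotic plot.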
That concludes our introduction to dynamics. As I say, we'll have a great deal more to say about dynamics during the rest of the material. We'll see that this really is only a first look at dynamics. Thank you.