James K. Roberge: 6.302 Lecture 16


JAMES K. ROBERGE: Hi. In this lecture, I'd like to continue with the describing function analysis that we introduced last time. And for the first example, I'd like to consider a system where we can actually get an exact answer by just tracing through the operation of the system even though it is a nonlinear system. Consequently, we really wouldn't use describing function analysis for this particular system since we can get an exact answer by actually what amounts to a far simpler technique. However, I think it's a good example of how well the describing function analysis supports the exact answer, even in a situation where at least one of the assumptions used in describing function analysis is rather compromised.

There's a fairly popular circuit that's used for function generators, the sorts of equipment that one uses as signal sources in a laboratory very frequently. And that system, or that technique, is shown in the first view graph. What we do is take a Schmitt trigger-- here I've shown one that's normalized-- and combine it in a negative feedback loop, the inversion indicated here, with an integrator. The Schmitt trigger has rather interesting nonlinear characteristics. And of course, the way it works is suppose we started, for example, at this point. If we made the signal vA , the input to the Schmitt trigger, negative, we could continue going negative as far as we wished to. The output of the Schmitt trigger would remain minus 1.

Now, as we make vA change in a positive direction, when we get to a trigger point, in this case a plus 1, the output would abruptly switch to plus 1. Further increases in the voltage vA, positive changes in the voltage vA, don't change the state of the Schmitt trigger. And what we have to do is go back, go past the original switching point, and in fact change vA to minus 1 before the Schmitt trigger once again changes state. It then flips down and so forth. Again, once we've reached the state where the output is minus 1, further negative changes in vA don't change the state. We have to get back up to a level of plus 1 for the input voltage before the circuit once again changes state.

The operation, then, of the oscillator is fairly easily seen. This is a circuit that has, really, two modes of energy storage, if you will. We have the integrator. But we also have the Schmitt trigger which has memory. And so we really have two initial conditions to specify, one being the initial voltage out of the integrator at some time, and the second one being the state of the Schmitt trigger.

Let's suppose, for example, we're at this point on the Schmitt trigger characteristics-- the voltage vA, which is the voltage out of the integrator, equal to 0 and the output of the Schmitt trigger at minus 1. There are, of course, two possible values that the output of the Schmitt trigger vB can take on. Let's assume it's in the state where the output is minus 1.

If that's the case, what we would find out is that we apply a negative signal to the integrator, we have a minus 1 volt applied to the integrator. Consequently, since the integrator includes an inversion, the output of the integrator would change in a positive direction. So we walk along the characteristics starting at this point with the output of the integrator at 0, its input minus 1 volt. We'd integrate the minus 1 volt or actually integrate the negative of that, walk along the characteristics to this point.

At this point, the circuit would switch. We'd now have plus 1 volt applied to the integrator. As a consequence of that plus 1 volt and the inversion provided by the integrator, the output of the integrator would be a negative ramp. vA would start to change in a negative direction. We'd walk back along the characteristics in this direction. Once again, when we get to this point, switch. And the circuit then follows this trajectory in the vA-vB plane associated with the Schmitt trigger.

We can get an exact solution in that case, of course. Let's look at the wave forms. Remember that vA is the output from the integrator. vB is the output from the Schmitt trigger. And we've started at the initial condition that I mentioned earlier, in particular the voltage vB being minus 1 volt, the voltage vA being 0.

We integrate the minus 1 volt, or actually the negative of that. The slope associated with vA, therefore, is 1 volt per second. At time t equals 1 second, the output of the integrator reaches plus 1 volt. At that instant, the Schmitt trigger switches. Its output becomes plus 1 volt. We integrate that, or the negative of that. The output of the integrator is now a negative ramp. That phase lasts for a total of 2 seconds, from t equals 1 to t equals 3, since the magnitude of the slope at the output of the integrator is 1 volt per second and we have to undergo a total change of 2 volts before the circuit switches once again. And then the operation continues. So here, as I say, we're able to get an exact solution for this particular system.
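[As an editorial aside, not part of the lecture: the exact operation described above is easy to verify numerically. The sketch below, with hypothetical function and variable names of my own choosing, simulates the normalized loop and confirms the 4-second period.]

```python
def simulate_function_generator(dt=1e-4, t_end=8.0):
    """Simulate the normalized loop from the lecture: an inverting
    integrator driving a Schmitt trigger that switches at +/-1 volt
    and whose output is +/-1 volt."""
    vA, vB = 0.0, -1.0          # initial conditions from the lecture
    t = 0.0
    positive_switch_times = []  # instants the Schmitt output flips to +1
    while t < t_end:
        vA += -vB * dt          # integrator includes an inversion: dvA/dt = -vB
        if vB < 0 and vA >= 1.0:
            vB = 1.0
            positive_switch_times.append(t)
        elif vB > 0 and vA <= -1.0:
            vB = -1.0
        t += dt
    return positive_switch_times
```

Running this, the first positive-going switch lands near t = 1 second and successive ones are 4 seconds apart, matching the exact analysis (1 volt per second slope, 2-volt swing each way).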

The reason for the popularity of this technique as a function generator is it's quite an easy circuit to implement. The Schmitt trigger can be done with a comparator of some sort. And the integrator, of course, uses an operational amplifier with a capacitor in the feedback path and a resistor as the input element to mechanize the integration.

The nice part about it is that the frequency of this configuration can be changed by adjusting a single component. Typically, what's done is a potentiometer is used to adjust scale factor on the integrator. At least part of the input resistor to the integrator will consist of a potentiometer. And in that way, we can change the scale factor associated with the integrator in a smooth fashion. And that changes the overall operating frequency of the system. The large changes in frequency are typically accomplished by decade switching, or in some other sequence, switching the feedback capacitor associated with the operational amplifier.

So this particular oscillator has the advantage that you can tune it by changing a single element. That's in contrast to most sinusoidal oscillators, where typically at least two elements have to be changed simultaneously to tune a sinusoidal oscillator.

Furthermore, the topology gives us both a triangle wave and a square wave. The square wave, of course, coming out of the Schmitt trigger, the triangle wave coming off the integrator. And those are two of the wave forms that one frequently wants in a function generator.

We're also relatively easily able to get a fairly good approximation to a sine wave. What's typically done in this sort of a circuit is to take the triangle wave and go through a nonlinear network that approximates a sine wave as a series of straight lines. I talk about that sort of approximation in Chapter 12 and go through an example of how one might design a nonlinear network that converts a triangle wave to an approximate sine wave.

So by incorporating one of those circuits in addition to the basic function generator or oscillator loop, we're able to build a function generator that provides the three most frequently wanted wave forms for test purposes.

I have one of these circuits running. What we have here, again, is the same power supply we've been using throughout. We have a store-bought function generator that actually uses this technique. This is one built by Krohn-Hite that uses basically the kind of a technique that I've described. It's a three function, three waveform function generator. It generates a triangle wave, a square wave, and a sine wave, by shaping the triangle wave. It also has a few more frills that allow you to adjust symmetry of the square wave, and so forth, and get pulse type waveforms. And it has an ability to voltage control the oscillator.

But the basic loop is the one that I've described-- the integrator plus the Schmitt trigger. The store-bought generator is actually used only as a test signal source to display the characteristics of the Schmitt trigger in our own function generator.

Here is our home-built function generator. We actually use an integrated circuit comparator to provide the Schmitt trigger function. Here's an operational amplifier. This is the amplifier that mechanizes the integration. This is the capacitor used for the integration. And this is the resistor, or at least part of the resistor.

We also have a potentiometer that allows us to change the total input resistance to the integrator. And that's what we use to change the frequency of operation. And then, of course, various output signals are displayed on the oscilloscope.

What we are showing now are simply the characteristics associated with the Schmitt trigger. And here, in order to do this, we apply a triangular sweep to the input of the Schmitt trigger from the function generator. We use that same sweep on the horizontal axis of the oscilloscope. We also then look at the output of the Schmitt trigger on the vertical axis of the oscilloscope. And we see, on the oscilloscope screen, the Schmitt trigger characteristics. Presumably, there's a very rapid transition here in a positive direction, a very rapid negative transition here. We don't actually see those. But we do see the flat portion of the characteristics-- here is the input signal going positive, then reverse and going negative, the Schmitt trigger flips. So we're tracing out the characteristics that we expect from the Schmitt trigger.

I can convince you that it's actually going around in the correct direction. By simply slowing down the sweep-- let's see-- there we begin to see it. And let's go even a little bit slower. And then we can follow the dot. And here we very clearly see the pattern being traced out by the Schmitt trigger. The positive going input-- now it's going negative. It switches once again, passes the original transition point before it switches to the positive state. So that's simply the characteristics from the Schmitt trigger. And as I say, that's mechanized using a commercially available integrated circuit comparator.

Now let's complete the loop and look at some of the signals. First off, disable the function generator, the external one that we were driving the system with. And I'll apply that signal to a second axis on the oscilloscope. This is actually now the input signal to the Schmitt trigger or, equivalently, the output of the integrator once we close the loop. This switch simply closes the loop with the switch in the position shown. It disables the loop so that we're able to easily plot the Schmitt trigger characteristics. When we close the loop, the circuit oscillates. And we can look at it on a time base.

And here now is the square wave coming out of our function generator. Let's look at it on that time scale possibly. And let me lower the amplitude of that a little bit so that we can duplicate the picture that we had earlier in the view graph. We'll now switch the second trace onto the oscilloscope screen so that we can simultaneously see the triangle wave and the square wave that our function generator is making. And we notice basically the picture I showed before. If we change phase, we can get very nearly the picture that we showed before, something like that.

Notice the phasing. When the square wave into the integrator is negative-- this is the signal out of the Schmitt trigger-- the integrator is integrating in a positive direction, since there's an inversion associated with the integrator. We then get to the positive peak. The Schmitt trigger now switches, applies a positive voltage to the integrator. The integrator now integrates in a negative direction, and so forth.

And as I mentioned, one of the advantages of this configuration is that we're able to very easily change the frequency of oscillation by simply changing a potentiometer. And all this is doing is changing the value of an input resistor, or the input resistor associated with the integrator.

And here we see we can run from something like-- we're at 50 microseconds per division now, so that's something like-- 4 kilohertz. And here, we've increased by just about a decade. We're at 5 microseconds per division. So that's running at about 40 kilohertz. So we're able to get, very easily, that sort of change in frequency by adjusting a single component and an easily adjustable component, a potentiometer as opposed to an energy storage element that has to be tuned. And as I mentioned, then typically the capacitor will be switched in, possibly, decade steps in order to change the basic frequency of operation, something like this where there are a series of decade steps available on the commercial function generator.

Well, as I mentioned, this is a circuit where we can very easily get an exact solution to its behavior. And so it would be a little bit silly to actually use describing functions to analyze this circuit, since describing functions are, of course, an approximation. But let's go ahead anyway just by way of illustration.

Let's look at the behavior of an element that has hysteresis. Here I've assumed general characteristics with an output amplitude of E sub N or minus E sub N and input amplitude switching points of plus and minus E sub M. And, as we normally do in describing function analysis, we assume that the input signal to our nonlinearity, the voltage applied as its input, is a sinusoid of peak amplitude, E.

And what we note, of course, is that the output of Schmitt trigger or the element with hysteresis is a square wave. The peak amplitude of the square wave is E sub N or minus E sub N in the other state. And the switching point occurs when the input sinusoid gets up to a level of E sub M.

Notice that this is an example of an element whose characteristics are frequency independent. Nowhere in this development did we have to specify the frequency of operation of the Schmitt trigger. So its characteristics are frequency independent. Yet it provides a phase shift. So here we have a slightly more complicated nonlinearity than we saw in the past. The phase shift sort of reflects the fact that the element has memory. In contrast to most of our linear elements that have energy storage, where the phase shift is a function of operating frequency, in this case the phase shift is a function of driving amplitude. We can see that because the switching point occurs when the input sinusoid reaches a level of plus or minus E sub M.

That tells us that if the input signal is very, very large-- if E is a very large number so that the signal is nearly vertical on this scale, we'd go to a level, E sub M, very shortly after time t equals 0. And under those conditions, the output square wave would have zero crossings very nearly synchronized with those of the input signal.

Conversely, as we get a smaller value for E, the output signal lags behind the input signal and eventually reaches a maximum negative phase shift of 90 degrees, when E is exactly equal to E sub M. When E is equal to E sub M, switching occurs right at the peak and at the minimum of the sine wave. And in that case, the output square wave is phase shifted by 90 degrees relative to the input sinusoid.

If we keep track of the trigonometry involved, the amount of negative phase shift-- this angle, alpha-- the amount of negative phase shift is simply equal to the arc sine of E sub M over E.
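[An editorial sketch, not from the lecture, of the relation just derived; the function name is hypothetical.]

```python
import math

def hysteresis_phase_lag(E, E_M=1.0):
    """Negative phase shift, in degrees, of the square wave out of the
    element with hysteresis, for a sinusoidal drive of amplitude E.
    From the lecture: alpha = arcsin(E_M / E), valid for E >= E_M."""
    if E < E_M:
        raise ValueError("drive amplitude must reach the switching level")
    return math.degrees(math.asin(E_M / E))
```

At E = E_M the lag is exactly 90 degrees, and it falls toward 0 degrees as the drive amplitude grows, just as described above.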

Of course, the characteristics of the Schmitt trigger are cyclical only if E exceeds E sub M. If the value of E is smaller than E sub M, the Schmitt trigger will remain trapped in one of these two states. And we really don't know, without further information about its initial conditions, which state it's in.

So for normal operation, the magnitude of the test sinusoid has to be at least as large as E sub M.

The describing function associated with that element, G sub D of E, is once again the output amplitude, or the amplitude of the fundamental component of the output signal, which is of course, 4/pi times E sub N. The output is a square wave. The fundamental component is 4/pi times the peak value. So that's 4/pi times E sub N.

We then divide by the magnitude of the input signal to get the gain in the describing function sense of the element. So this is the magnitude portion of the describing function. The angle, as we mentioned in connection with the view graph, is minus the arc sine of E sub M over E. And that expression is, of course, only valid for the input test amplitude exceeding the switching magnitude.

So that we don't have to keep track of all the factors of E sub M and E sub N and so forth, let's go ahead and simply normalize and assume that E sub M and E sub N are both equal to 1. Our describing function then, of course, is simply 4 over pi E, the same one as we had with the infinite gain limiter, as a matter of fact. This is valid only for E greater than 1. But we get the same expression because, once E is larger than 1, the output signal is the same in magnitude as that of the infinite gain limiter. And the angle, under these conditions, is simply minus the arc sine of 1/E.

Well, we can then calculate minus the reciprocal of G D of E-- minus 1 over G D of E-- which is of course what we need for our construction, our describing function analysis. The reciprocal of the magnitude is simply pi E divided by 4. And then we have to get the angle. We have a minus sign, so that gives us minus 180 degrees. The inversion associated with the minus sign gives us the minus 180 degrees.

Since we're taking 1 over G D of E and the angle associated with G D of E is minus the arc sine of 1/E, when we take the reciprocal of that, we simply change the sign-- the S-I-G-N-- and we get plus the arc sine of 1/E. So the total angle associated with minus 1 over G D of E is minus 180 degrees plus the arc sine of 1/E.
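[Editorial sketch, not from the lecture: the magnitude and angle of minus 1 over G D of E for the normalized element, using a hypothetical function name.]

```python
import math

def minus_inv_gd(E):
    """Magnitude and angle (degrees) of -1/G_D(E) for the normalized
    element with hysteresis (E_M = E_N = 1). Valid for E >= 1.
    From the lecture: magnitude pi*E/4, angle -180 + arcsin(1/E) degrees."""
    mag = math.pi * E / 4.0
    ang = -180.0 + math.degrees(math.asin(1.0 / E))
    return mag, ang
```

At E = 1 this gives magnitude pi/4 at minus 90 degrees, and for large E the angle approaches minus 180 degrees, which is exactly the shape of the curve described in the construction below.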

We can use that information as part of our describing function construction. Here we have minus 1 over G D of E. We're showing this, of course, as we always do, in a gain-phase plane, the angle running from minus 45 degrees in this case to minus 180 degrees. Here's our minus 1 over G D of E.

For very large values of E, we said that the phase shift associated with element with hysteresis was very nearly 0 degrees. Consequently, minus 1 over G D of E would approach minus 180 degrees for very large values of E. As E drops down, we remember that the angle from our earlier development is minus 180 degrees plus the arc sine of 1/E, as the amplitude E gets smaller, as we reduce the magnitude of our test signal, we eventually get a net plus 90 degrees from the 1 over G D of E. That added to the minus 180 degrees gives us a net minus 90 degrees.

So the minus 1 over G D of E curve goes like so. This is the direction of increasing E.

And here we have the transfer function in gain-phase form for the linear element. This is an integrator whose value was normalized once again, or whose time constant is normalized to unity. So we simply have 1 over s or 1 over j omega for sinusoidal excitation. That's simply a vertical line here. And the phase shift associated with that is always minus 90 degrees. This is the direction of increasing omega.

We notice that there is an intersection. It's not as clean an intersection as we had in our other example of this sort of thing. When we looked at the phase shift oscillator with the infinite gain limiter, we found a very, very clean intersection. In that case, the curve that represented the linear portion of the system crossed the minus 1 over G D of E curve, went through it. They were nearly perpendicular at the crossing point, or at least made an appreciable angle with respect to each other. And we could very easily determine the frequency of oscillation and the amplitude. There seemed to be little ambiguity.

Here the curves are perpendicular. But there's a little question: the two curves intersect, but they never really cross. We might expect there was something a little bit funny about that. We'll see that there is some significant error in the analysis in this case.

But let's proceed anyway. We notice that the curves intersect for a value of E equal to 1. And we can use that information to determine the frequency at the intersection. Notice that at E equal to 1, the magnitude of 1 over G D of E is simply pi/4. Since the magnitude of the transfer function of the linear portion of the system, the integrator, is 1/omega, we conclude that at the intersection, the value of 1 over G D of E, pi/4, is equal to 1 over the frequency of oscillation. We, of course, determine the frequency of oscillation from the linear elements curve. And that tells us that omega oscillation is 4/pi.

If we convert that from radians per second to Hertz, we divide by 2 pi and using a pretty good engineering approximation that pi squared is just about equal to 10, we end up with about 1/5 of a Hertz for the frequency of oscillation as predicted by our describing function analysis.
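[The arithmetic above, worked out exactly rather than with the pi-squared-is-about-10 shortcut; an editorial check, not part of the lecture.]

```python
import math

# Describing function prediction: at the intersection, |1/G_D| = pi/4 = 1/omega,
# so omega_osc = 4/pi radians per second.
omega_osc = 4.0 / math.pi
f_predicted = omega_osc / (2.0 * math.pi)  # = 2/pi^2, about 0.2 Hz

# Exact analysis from earlier in the lecture: 4-second period, so 1/4 Hz.
f_exact = 0.25
relative_error = (f_exact - f_predicted) / f_exact
```

The predicted frequency comes out to 2/pi squared, about 0.203 Hz, versus the exact 0.25 Hz: a roughly 19% error, consistent with the discussion that follows.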

Now, we know from our exact analysis that the exact frequency of oscillation is about, or is exactly 1/4 of a Hertz. So that gives us an indication of the error associated with the analysis in this case.

We can also look at the amplitude. We recognize, of course, that the actual signal at the input to the nonlinear element, which is where our describing function analysis predicts the amplitude, is a triangle wave with a peak value of 1 for our normalized system.

Our analysis tells us that E should be equal to 1. We notice that the intersection occurs for a value of E equal to 1. However, the E in that analysis is predicting the value of the fundamental component of the signal. For a triangle wave, the fundamental component is actually about 8/10 of its peak value. So again, depending on how we interpret the magnitude part of the analysis, there's a roughly 20% error. The describing function analysis predicts an amplitude of 1 for the fundamental component of the signal into the nonlinearity. The actual fundamental component is about 0.8.

The reason-- or at least one of the difficulties with the analysis, aside from the fact that there's not a good, clean, crossing kind of intersection in the gain-phase plane-- is that there is considerably more harmonic distortion in this case than there was in our earlier example. The signal into the nonlinearity in this case is, of course, a triangle wave. The Fourier series for a triangle wave includes only odd harmonics, and they drop off as the square of the order of the harmonic. So the magnitude of the third harmonic is about 1/9, or 11%, of the magnitude of the fundamental. The fifth harmonic is about 4% of the magnitude of the fundamental, and so forth.
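[An editorial sketch of the standard triangle-wave Fourier coefficients quoted above; the function name is my own.]

```python
import math

def triangle_harmonic(n, peak=1.0):
    """Magnitude of the nth Fourier harmonic of a triangle wave with the
    given peak value. Only odd harmonics are present, with magnitude
    8*peak/(pi^2 * n^2); even harmonics vanish."""
    if n % 2 == 0:
        return 0.0
    return 8.0 * peak / (math.pi ** 2 * n ** 2)
```

The fundamental of a unit-peak triangle wave is 8/pi squared, about 0.81 (the "about 8/10 of its peak value" above), and the third and fifth harmonics are 1/9 and 1/25 of the fundamental, the 11% and 4% figures in the lecture.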

So there's significant harmonic distortion in this case. We have a triangle wave rather than an approximate sinusoid at the input of the nonlinearity, and again, that leads to some errors. But I think, in spite of the very large amount of harmonic distortion in this case-- the linear elements are low pass but only single order-- we still get a reasonable approximation. If we didn't have a better way of solving this system, I think we'd probably be very happy to settle for 20% kinds of errors.

I'd like to consider another aspect of nonlinear system behavior that, again, we can begin to interpret in terms of describing functions. Let's consider a loop of the following form. We have an input and an output, some linear elements, and then, reflecting one of the harsh realities of the real world, we have saturation somewhere in the loop. We simply have an element-- maybe the output of an amplifier, the maximum capability of a motor, or anything else that has a finite dynamic range-- where, again, we've normalized things. We get a slope of 1 in this region. The input and the output of the nonlinearity are equal. But when the input exceeds 1 or is more negative than minus 1, we assume that the element saturates and its output is bounded at plus or minus 1.

Well, we'll actually calculate the describing function for this element exactly next time. But we can estimate important features of it as follows. G sub D of E is simply 1 at an angle of 0 when E is less than 1. For E less than 1, we don't encounter saturation. And so the input and the output are identical. G sub D of E approaches 4 over pi E for very, very large values of E.

When we put in a very large value of test signal to the nonlinear element, the output of the nonlinear element is basically a square wave. It spends a very short period of time in the transition region. So when we have a very large value for our test signal, we get, essentially, a square wave out. The magnitude of the fundamental component of that square wave is 4/pi. We divide by E and we get the magnitude of our describing function as 4 over pi E.

If we plot that, it looks something as follows. We have a magnitude of 1. For amplitudes smaller than 1, for very large values of test signal amplitude, the characteristics become hyperbolic, inversely proportional to E. And the transition occurs at an amplitude of 1. As soon as we get up to an amplitude of 1, we begin to get some clipping out of the saturating element. And so the gain of our nonlinear element begins to drop off.
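[The lecture defers the exact describing function of the saturating element to next time; as an editorial check of the two limiting behaviors claimed above, the sketch below estimates |G_D(E)| numerically by extracting the fundamental Fourier component of a clipped sinusoid. The function name and the numerical approach are my own, not the lecture's.]

```python
import math

def saturation_gd(E, n=100000):
    """Numerically estimate |G_D(E)| for the normalized saturation
    (unit slope, output bounded at +/-1) by computing the fundamental
    Fourier component of a clipped sinusoid of amplitude E."""
    acc = 0.0
    for k in range(n):
        theta = 2.0 * math.pi * (k + 0.5) / n
        out = max(-1.0, min(1.0, E * math.sin(theta)))  # saturating element
        acc += out * math.sin(theta)
    b1 = 2.0 * acc / n          # fundamental component of the output
    return b1 / E               # describing function magnitude
```

For E below 1 the estimate is exactly 1 (no clipping), and for large E it approaches 4/(pi E), the square-wave limit: the two asymptotes of the hyperbolic curve just described.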

Well, a potential difficulty exists if we combine that sort of a nonlinearity-- and virtually all physical systems include saturation for appropriately large signal levels, of course, with a particular kind of dynamics associated with the linear portions of the loop.

Let's consider an a of j omega that might have the form shown here, possibly a constant value at DC, then a long region where the gain versus frequency characteristic, the Bode plot associated with a of j omega, falls off more rapidly than 1/s, possibly 1 over s squared in this region.

We then might break back in the vicinity of crossover-- if this is the unity gain point-- in the vicinity of crossover, we might break back to a 1/s slope. And then usually at higher frequencies, something else happens. We get additional-- modes of energy storage become important, and the characteristics, once again, fall off rapidly.

The angle associated with that sort of transfer function, in this case, would start at 0 degrees. If there was a very long 1 over s squared region, the angle-- somewhere in that 1 over s squared region-- would drop down very, very close to minus 180 degrees. We'd then come back up, reflecting the 0 and the 1/s slope. Again, depending on the duration of the 1/s slope, we might approach minus 90 degrees in here. And then, conceivably, the angle gets progressively more negative at higher frequencies.

We've seen these general sorts of characteristics before in at least two examples that we've looked at. One was lag compensation where we ended up with a 1 over s squared region in the loop transmission of our system. And again, you'll recall that there was a dip in the phase. It wasn't quite this pronounced, but it did dip down and then recover somewhat in the vicinity of crossover because we had taken the precaution of locating the 0 associated with the lag network back fairly far from crossover.

Another case where we didn't actually draw the Bode plot in great detail, but we spoke about it, was our operational amplifier example when we used two-pole compensation. Again, we get a curve for the open loop gain of the operational amplifier that's very much like this when we use two-pole compensation. We have a DC gain in the amplifier. We then get into a long 1 over s squared region. In the case of our operational amplifier, that 1 over s squared region might last for a factor of 100 in frequency or something like that.

And then we had a 0 so that we went through unity gain or unity loop transmission magnitude with a slope of 1/s in that example. But again, we'd anticipate that the phase curve might look something like this. Particularly if there was 100:1 ratio between these frequencies, we would get down quite close to minus 180 degrees over some range of frequencies.

Now, we feel that the system should be perfectly well-behaved because we note that, as we've arranged things-- here's the crossover frequency-- and we have plenty of phase margin. There's no problem as I've drawn things. And in fact, that's what we hope for when we use lag compensation or possibly two-pole compensation. They're really variations on the same theme.

We hope that the system crosses over in a region where the slope is minus 1/s or 1 over frequency. And in that region, we certainly hope that we have adequate phase margin.

The difficulty comes, or a potential difficulty exists, if there is a saturating element included in the loop. Consider the loop transmission in a describing function sense, or the magnitude of the loop transmission, for the normalized saturating element that we showed in this example. For small signals, the magnitude of the gain associated with the nonlinear element, the magnitude of the describing function, is 1. And that's fine for small signals. This is the sort of loop transmission plot, in a describing function sense, that we have. And we have a perfectly well-behaved system. Stability is determined by what happens out here. And we can certainly arrange things to have acceptable phase margin in that region.

But now let's put a big signal into the system such that we get the signal at this point larger than 1-- have a peak value larger than 1. The element saturates. Its gain in the describing function sense begins to drop. And now the effect of the gain dropping is to push down our describing function loop transmission-- maybe something like that-- for large signals.

And now we begin to cross over in a region that has very, very limited phase margin. And if we assume that we drove the system some way such that the signal at this point were even larger, we might be able to cross over at this very unfortunate point.

Now, this sort of an analysis is very, very difficult to make exact. And what do we really mean by a large signal? And what do we really mean by saturation or hard saturation and so forth? But we get the feeling that if we excited this sort of a system with large signals, drove the system into saturation, it certainly might be possible that the stability would deteriorate as a result of that.

Well, we have an example of that. Here, once again, we have the test vehicle that we have used so often. We have our operational amplifier. In this case, we have the ability to compensate this operational amplifier either with 1/s single pole or with two-pole compensation. We have a buffer amplifier. If you recall, we used the same configuration before when we were looking at 1/s compensation. And we were actually using an inverting configuration. There's another buffer amplifier here that allows us to look at the error signal, the input signal for the amplifier, although we won't really use that in this demonstration. And again, we have the familiar generator that allows us to apply test inputs, the power supply, and an oscilloscope to observe the step response or other responses of our system.

And what we're looking at now is the small signal step response. This is in the linear region. We have a signal level of something on the order of 50 or 100 millivolts; we're at 20 millivolts per division. So we have quite a small amplitude signal applied to our system. And right now we're running with 1/s compensation. And we notice quite a well-behaved transient response, with a 10% to 90% rise time of several hundred nanoseconds. We're at 200 nanoseconds per division. A very well-behaved kind of transient response.

If we go to 1 over s squared compensation, we get very much the same kind of picture. We notice-- we mentioned this, I think, in connection with 1 over s squared compensation before-- here we're at 1/s. Here we're at 1 over s squared. You'll recall that the zero associated with 1 over s squared compensation introduces a long tail to the transient response. If you do a root-locus plot for this sort of an amplifier, you find out that it includes a relatively low frequency pole-zero doublet. The zero being the one associated with the two-pole compensation, and the pole being one of the system poles that, in a root-locus analysis, moves close to that zero for large values of a0 f0.

That pole-zero doublet gives us a tail with quite a long settling time compared to the basic dynamics of the system. We see that prolonged settling time here. But that really isn't a stability problem. The percentage overshoot-- at least if we measure it from sort of the initial value of this tail-- is very much the same in the 1/s case and in the 1 over s squared case.
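The size of that doublet tail can be seen in a small numerical sketch. The numbers here are made up for illustration-- a zero at 1, a doublet pole just above it at 1.1, and a fast pole at 10, in normalized frequency units-- not the actual singularities of the demonstration amplifier:

```python
import numpy as np

# Hypothetical numbers, not the demo amplifier: a closed-loop transfer
# function with a fast pole at p2 and a low-frequency pole-zero doublet
# (pole p1 just above the zero z),
#   H(s) = (p1*p2/z) * (s + z) / ((s + p1) * (s + p2)),
# normalized to unity DC gain.
z, p1, p2 = 1.0, 1.1, 10.0

# Step response by partial fractions of H(s)/s:
#   y(t) = 1 + A*exp(-p1*t) + B*exp(-p2*t)
k = p1 * p2 / z
A = k * (z - p1) / ((-p1) * (p2 - p1))
B = k * (z - p2) / ((-p2) * (p1 - p2))

t = np.linspace(0.0, 8.0, 801)
y = 1.0 + A * np.exp(-p1 * t) + B * np.exp(-p2 * t)

# The fast pole settles in a few tenths of a time unit, but the doublet
# leaves a slow tail of size A -- about 11% here -- that decays with the
# doublet pole's long time constant.
print(f"doublet tail amplitude A = {A:.3f}")
print(f"y at t=1: {y[100]:.3f}   y at t=4: {y[400]:.4f}")
```

The response gets to within a few percent of final value quickly, but then creeps in with the slow exponential-- the long tail seen on the oscilloscope.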

But now let's look at what happens if we look at the large signal behavior of the amplifier. What I'm going to do now is simply up the amplitude of the input square wave that we're driving this with. Let me get to a more convenient scale.

The amplifier, at its output, saturates at voltage levels a little bit in excess of plus or minus 10 volts. Let me apply a signal which actually never exceeds plus and minus 10 volts. We now have 1/s compensation. Let me slow things down just a little bit.

We're looking at the response to a square wave applied to our inverter. Let's see if we can center things. All right. Now we have a signal that's basically full-scale at the output applied to our inverting amplifier. What we notice is that we get a trapezoidal sort of a waveform. The reason for that is really a process known as slew-rate limiting associated with the operational amplifier. We notice that these segments look like very nearly constant-slope lines on the oscilloscope face. And I talk about that, I believe, in Chapter 9 and then in Chapter 13 of the text.

That process really is the first stage's limited output current going into the capacitor that's used for minor loop compensation. And basically, the slew rate is determined by the ratio of the current that the first stage can either apply to or absorb from the compensating capacitor to the size of that capacitor. And if you go through that calculation with the compensating elements that we're using, you find a very good match between that ratio and the slope that we actually observe. So that's the basic mechanism.

It's not an output amplitude limiting, for example. We never get to the plus or minus 12 volts, roughly, that the amplifier can supply. So it's not a basic output voltage-level saturation problem, but rather some kind of an internal saturation mechanism that has to do with limited current driving, basically charging, a capacitor.

If we examine what happens as the amplifier comes out of saturation, we put in a full amplitude step. It takes quite a while to slew. We're at 5 microseconds per division, so these transitions are taking 10 microseconds. We're going through a total 20 volt excursion in 10 microseconds, so the amplifier is slewing at a couple of volts per microsecond.
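That couple-of-volts-per-microsecond figure is just the ratio of available first-stage current to compensating capacitance. Here is a back-of-the-envelope check with representative values-- the 200 microamps and 100 picofarads are assumptions chosen to match the observed slope, not the actual circuit values:

```python
# Slew-rate mechanism: the first stage can deliver at most a current
# I_max to the compensating capacitor C, so the output ramps at
# dV/dt = I_max / C.  Values below are assumed for illustration.
I_max = 200e-6   # 200 microamps of available first-stage current (assumed)
C = 100e-12      # 100 pF compensating capacitor (assumed)

slew_rate = I_max / C            # volts per second
print(slew_rate / 1e6, "V/us")   # 2.0 V/us

# Consistency check with the scope observation: a 20 V excursion at this
# slew rate should take 10 microseconds.
transition_time = 20.0 / slew_rate
print(transition_time * 1e6, "us")  # 10.0 us
```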

Right in here and right here we get out of limiting, where the amplifier enters its linear region, and the transitions, of course, are not abrupt. But the amplifier reenters its linear region in these places.

And we can get a little feeling for the nature of that recovery. I can't quite get it all on the screen, but you see just a little bit of overshoot. Of course, we're still at 1 volt per division. So we have a little bit of overshoot. The amplifier enters its linear region somewhere right in here, and there's just a little tiny bit of overshoot.

We see the same general kind of a thing on the negative waveform. Again, the amplifier enters its linear region with just a little bit of overshoot. Fast recovery, at least compared to the 5 microsecond per division time scale that we have here.

Now this is the situation with 1/s compensation. There is a slew-rate problem. But the amplifier is certainly very well-behaved in terms of stability. There's a little bit of overshoot as it comes into the linear region, but very nicely behaved.

Now let's look at what happens to the waveform if we use 1 over s squared compensation. We notice things get quite dramatically worse. Here again, we don't have quite straight line slewing or quite constant velocity slewing. Again, if you chase through the analysis, that's reasonable. These should be partially parabolic. And there's a little bit of that.

But now what happens is, here the amplifier should be entering its linear region. But it doesn't. Or it flips right through the linear region. You get a massive overshoot. The output gets up about 3 or 4 volts above final value. This point is 10 volts. We hit a peak of 13 or 14 volts here.

Similarly in the other direction, we bang very close to the negative supply. And finally we reach final value, but it's a very tortured sort of a recovery.

The same kind of thing happens even for sinusoidal behavior. Let's go to a somewhat lower frequency. And rather than use a square wave, which of course forces us into slew-rate limiting on the transitions, let's look at the sinusoidal behavior. Let me just lower the amplitude a little bit.

And here we're running at a frequency that's low enough-- let's see, we're at 50 microseconds per division, so that's a couple of kilohertz. That's low enough so that we're not slew-rate limiting. But now let's increase the operating frequency. We're now up to about 20 kilohertz. Let's continue. We have to go up one more notch on the function generator.

OK. Now we begin to get some noticeable distortion. Here we have a pretty good sine wave. Here we begin to see some distortion. Let's look at that again. Here we're going up in frequency. And notice a rather abrupt change in characteristics. I haven't increased the frequency any further. I increased it until there was this sort of a jump.

Now I'll back down. Notice a completely different behavior. The amplitude gets bigger, and bigger, and now, all of a sudden, pops back to the original linear case. Let's go through the cycle one more time. I'll increase frequency. We're at about 60 kilohertz right now. Things get distorted. And all of a sudden, pop. That occurs in this particular amplifier at about 75 kilohertz, that transition to a new state.

I'll now lower the frequency. We get a state that never existed on the way up. And now it pops. And that transition occurs at about 55 kilohertz. So there's a hysteresis in the frequency characteristics. The width of the hysteresis is about 20 kilohertz. That phenomenon is called jump resonance. I had read about it. I'd never observed it until I started playing with this particular configuration. And then I found it in other 1 over s squared kinds of systems. I originally thought it was just an academic exercise. You can make up virtually impossible problems in describing function analysis that deal with jump resonance. But here's an example of a jump resonance that actually does exist.

There's a possibility of having even more severe problems with this sort of an amplifier or this sort of a system. Suppose we had a roll-off that was even greater than 1 over s squared. Here we have a 1 over s squared roll-off. The angle gets down close to minus 180 degrees. If this roll-off included a portion that were more rapid than 1 over s squared, it's conceivable that over some portion or for some frequencies, the angle would actually drop down below minus 180 degrees, something like so.

Now again, we could hope to arrange the characteristics so that there was perfectly adequate phase margin at crossover in the absence of saturation. But if the system saturated, and the curve in the describing function sense moved down because of lowered gain, as I've shown things, we'd actually cross over in a region of negative phase margin. And that sounds as though there's certainly the potential for very bad performance in that situation. We can look at the same thing in a gain-phase plane and do our conventional describing function analysis. The same kind of curves-- this sort of a magnitude curve combined with this kind of a phase curve-- in gain-phase coordinates, of course, start out with the magnitude always decreasing. The phase goes down more negative than minus 180 degrees over a range of frequencies-- omega is a parameter along the curve-- and eventually recovers to values more positive than minus 180 degrees. Hopefully, that occurs in the vicinity of the unity gain frequency so that we have a system that has positive phase margin.
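As a sketch of that shape, here is a hypothetical loop transmission-- not the demonstration amplifier-- with two poles at the origin, an extra low-frequency pole that makes the mid-band roll-off steeper than 1 over s squared, and two zeros that recover the phase before crossover:

```python
import numpy as np

# A made-up loop transmission with a conditionally stable shape:
#   L(s) = k (s + 1)^2 / (s^2 (10 s + 1))
# The two poles at the origin plus the pole at 0.1 rad/s drive the phase
# below -180 degrees over a band; the double zero at 1 rad/s brings the
# phase back up before the unity-gain frequency.
k = 100.0

def L(w):
    s = 1j * w
    return k * (s + 1.0) ** 2 / (s ** 2 * (10.0 * s + 1.0))

for w in (0.3, 3.0, 10.0):
    Lw = L(w)
    # np.angle returns the principal value in (-180, 180]; unwrap it to
    # the loop angle, which starts at -180 and goes more negative.
    phase = np.degrees(np.angle(Lw))
    if phase > 0:
        phase -= 360.0
    print(f"w = {w:5.1f}: |L| = {abs(Lw):8.2f}, angle = {phase:7.1f} deg")
```

With these numbers the angle near omega = 0.3 comes out around minus 218 degrees, while near the unity-gain frequency at roughly omega = 10 it has recovered to about minus 101 degrees-- adequate phase margin at crossover, negative phase margin if lowered gain moved crossover down into the dip.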

If we then construct minus 1 over G D of E for this particular nonlinearity, the saturation, we get a curve. Remember G D of E has a magnitude of 1 for amplitudes less than 1 applied to the nonlinearity. The magnitude of G D of E goes down as E increases. So 1 over G D of E goes up. So we get a construction like this. This being the direction of increasing E.
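For the saturation, the first-harmonic describing function has a standard closed form, and evaluating it at a few amplitudes shows how the minus 1 over G D of E locus stretches out with increasing E:

```python
import numpy as np

# Describing function of a unit saturation (slope 1, limits at +/-1) for
# a sinusoidal input of amplitude E -- the standard first-harmonic result:
#   G_D(E) = 1                                               for E <= 1
#   G_D(E) = (2/pi)*(arcsin(1/E) + (1/E)*sqrt(1 - 1/E**2))   for E > 1
def G_D(E):
    if E <= 1.0:
        return 1.0
    r = 1.0 / E
    return (2.0 / np.pi) * (np.arcsin(r) + r * np.sqrt(1.0 - r * r))

# G_D falls as E grows past 1, so the magnitude of -1/G_D(E) grows:
# the locus moves in the direction of increasing E.
for E in (1.0, 2.0, 5.0, 10.0):
    print(f"E = {E:4.1f}: G_D = {G_D(E):.3f}, 1/G_D = {1.0 / G_D(E):.2f}")
```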

We see that there are two intersections with the curve that corresponds to the linear portion of the system. There's a question: what does this mean? Does this system oscillate? There are two intersections. If it oscillates, how do we determine the parameters of the oscillation? Which of those intersections controls the oscillation? Which dominates the characteristics of the oscillation? This sort of a system is called a conditionally stable one. And next time we're going to go through the analysis of this sort of a system and illustrate it with an example.

Thank you.