James K. Roberge: 6.302 Lecture 15


PROFESSOR JAMES K. ROBERGE: Hi. In our last session, we considered one technique for the analysis of non-linear systems, that of linearization. Linearization basically involves approximating a non-linear system by a linear one with the approximation valid over some restricted range of operating conditions. Today I'd like to look at another technique for the analysis of nonlinear systems-- that of describing functions. Describing functions are an outgrowth of an attempt to apply the familiar frequency domain techniques that work so well for linear systems to nonlinear systems, to find extensions that work for at least certain classes of nonlinear systems.

Describing functions can, in fact, be used to predict behavior under sinusoidal steady state conditions, and we can determine things like the gain, the closed loop gain, of a nonlinear system as a function of frequency, and as we'll see, as a function of the input amplitude, the driving amplitude applied to the system. However, that sort of an analysis is quite difficult by describing functions. In fact, generally it leads to rather involved numerical calculations. I guess my own feeling is that we probably ought to avoid approximate techniques that involve very difficult numerical manipulations. If we're going to go to a machine for computation anyway, we can really get an exact numerical solution of the problem, so there's no need for approximations.

However, one case of interest for which describing functions work very, very well is that of the oscillator. We can look at a feedback system, and we can find out if the system is going to oscillate, and if it is going to oscillate, we can discover the amplitude of the oscillation and the frequency of the oscillation, and generally, the place that I have used describing functions in my own work is for this sort of analysis, for looking at the behavior of oscillators. And I think the examples that we'll do later on will demonstrate this aspect of the use of describing functions.

Let's see how describing functions are developed. Remember how we defined the transfer function of a linear system? Suppose we excite a linear system with an input sinusoid VI sine omega t, and we allow the system to reach steady state-- in other words, we wait until all the start up transients have died out. And under those conditions, of course, if the system is linear, the output, again, will be a sinusoid at the same frequency as the input, and the only difference between the input and the output will be, first of all, a possible scaling-- V0 may be different than the amplitude Vi-- and a phase shift between the output and the input. In other words, the output sinusoid may be displaced by some amount theta relative to the input sinusoid.

If we characterize the linear element by its transfer function G of s and evaluate that for s equals j omega, we recall that the ratio of the output magnitude to the input magnitude, the quantity V0 over Vi in our notation, is equal to the magnitude of the complex variable G of j omega, and the angle between the output and the input sinusoids is simply equal to the angle of the complex variable G of j omega.
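As a minimal numerical sketch of this definition-- using a hypothetical single-pole element G(s) = 1 over s plus 1, not one from the lecture-- the gain and phase at a given frequency are just the magnitude and angle of the complex number G of j omega:

```python
import numpy as np

# Hypothetical single-pole linear element G(s) = 1/(s + 1), evaluated
# on the j-omega axis. The gain is |G(j omega)| and the phase is the
# angle of the complex value.
omega = 2.0
G = 1.0 / (1j * omega + 1.0)
gain = abs(G)                   # V0/Vi for a sinusoidal input at omega
phase = np.angle(G, deg=True)   # output phase relative to the input, degrees
```

Here the gain comes out as 1 over radical 5 and the phase as minus the arc tangent of 2, about minus 63 degrees.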

Now let's see what happens if we have a nonlinear system. Let's first consider systems which don't rectify, so when we put in a sinusoidal signal, the output remains 0 mean. And let's also assume that we introduce no subharmonics from our system. If that's the case, and we apply a test signal to a non-linear element where the test signal is a sinusoid at frequency omega with amplitude capital E, we obtain an output, and we're able to represent that output by a Fourier series. I've said that the element does not rectify, so the output contains no DC term, and similarly, there's no subharmonic generation. Consequently, the first term in the Fourier series for the output signal is at omega, and consequently, we can express the output signal as some constant B1 times sine omega t plus another constant A1 times cosine omega t, and then we get all the higher order terms in the Fourier series, B2 sine 2 omega t plus A2 cosine 2 omega t, and so forth.

And what we do is define a describing function which is in general a function now of e and omega, as being equal to the gain and the angle of the fundamental component of the output, relative to the input. So we define our magnitude, the magnitude of the describing function G sub D, which may, of course, be a function of both e and omega, as simply being equal to the amplitude of the fundamental component of the output-- that's A1 squared plus B1 squared to the 1/2, that's the magnitude of the fundamental component of the output-- divided by the magnitude of the input. So the physical significance of the magnitude of the describing function is that it's the gain from the input amplitude to the fundamental component of the output.

Again, let me emphasize that for a nonlinear system or a non-linear element, the, quote, gain in a describing function sense, of course, may be dependent on the amplitude of the signal that we apply to the nonlinearity. We recognize that for a linear element, that's not possible. The gain of the linear element is only a function of frequency. Similarly, we define the angle of our describing function in direct parallel to the angle of a linear transfer function, as simply being equal to the arc tangent of A1 over B1. If we look at that, that's the relative phase shift of the fundamental component of the output compared to the input. So the angle of the describing function, exactly as the angle of our linear transfer function, simply reflects the relative phase shift between the fundamental component of the output and the input signal.
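The definitions above translate directly into a short numerical procedure: drive the nonlinearity with E sine theta over one period, extract the fundamental Fourier coefficients A1 and B1, and form the magnitude and angle. A minimal sketch in Python-- the function name and sample count are illustrative, not part of the lecture:

```python
import numpy as np

def describing_function(nonlinearity, E, n=100000):
    """Estimate G_D(E) for a memoryless, non-rectifying nonlinearity by
    extracting the fundamental Fourier component of its response to
    E*sin(theta) over one period."""
    theta = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    y = nonlinearity(E * np.sin(theta))
    B1 = 2.0 * np.mean(y * np.sin(theta))   # in-phase (sine) coefficient
    A1 = 2.0 * np.mean(y * np.cos(theta))   # quadrature (cosine) coefficient
    magnitude = np.hypot(A1, B1) / E        # (A1^2 + B1^2)^(1/2) / E
    angle = np.arctan2(A1, B1)              # relative phase of the fundamental
    return magnitude, angle
```

Applied to a sign function-- the infinite gain limiter worked out next-- this returns a magnitude near 4 over pi E and an angle near 0.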

Usually we don't have to calculate describing functions. There are excellent tables. I provide an example of several describing functions in the book. But there are also much more extensive tables of describing functions for many commonly encountered non-linear elements. However, since we're starting now, let's go through the development of at least one describing function so we see how this works. And for an example, I'll use the so-called infinite gain limiter. In other words, we have an element whose input-output characteristics are such that the output is plus 1 for any positive input, the output is minus 1 for any negative input.

Let's now test our system, and recall that what we do is apply a sinusoid to the input of our non-linear element. And the amplitude of that sinusoid will be E. That's the amplitude of the sinusoidal test signal that we applied to our nonlinearity. In response to that input, the infinite gain limiter gives us an output that's plus 1 whenever the input is positive, an output that's minus 1 whenever the input is negative, so we simply get a square wave-- the figure shown in white here is a square wave for the output waveform from the infinite gain limiter. And the zero crossings of the square wave are, of course, synchronized with those of the sine wave.

You recall that the Fourier series for a square wave is 4 over pi times the fundamental, plus 1/3 of the third harmonic, plus 1/5 of the fifth harmonic and so forth-- only odd harmonics are present, and the relative magnitude of the harmonics drops off inversely with the order of the harmonic. And so we've sketched here the fundamental component of the output sinusoid. It's, of course, in phase with the input signal. This is a describing function that has no relative phase shift. And the magnitude of the fundamental component of the output is simply 4 over pi.
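That Fourier series is easy to confirm numerically; here is a minimal sketch that computes the first few sine coefficients of a unit square wave (the variable names are illustrative):

```python
import numpy as np

# Sine Fourier coefficients b_k of a unit square wave over one period:
# b_k = 4/(pi*k) for odd k, and 0 for even k.
theta = np.linspace(0.0, 2.0 * np.pi, 200000, endpoint=False)
square = np.sign(np.sin(theta))
coeffs = {k: 2.0 * np.mean(square * np.sin(k * theta)) for k in range(1, 6)}
```

coeffs[1] comes out near 4 over pi, coeffs[3] near one third of that, coeffs[5] near one fifth of it, and the even coefficients vanish.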

Consequently, the describing function, in this case, is only a function of input amplitude. It's not a function of frequency. It's the fundamental component of the output, which is 4 over pi, divided by the amplitude of the input signal, which is e, and of course the angle associated with our describing function is 0 degrees, since the fundamental component of the output is in phase with the input signal. If we plot the magnitude of G sub D of E, as a function now of e, since it's not a function of omega again for this particular case, we simply get a hyperbola.

Well, how do we use describing functions to analyze the stability of non-linear systems? What we attempt to do is arrange our non-linear system in the form shown. We hope that we can break up the system so that we lump all of the linear components in one path here-- this may be the reduction of some far more complex, multi-loop system-- but the hope is that we can break up the system as shown. We get all of our linear elements in one path here. We have a single non-linearity, or a collection of non-linearities that can be represented by a single describing function, and we specifically indicate that we're considering a negative feedback system by including a minus 1 here in the loop.

Notice that the loop, as shown, has no input and no output. I mentioned earlier that my own personal experience shows that the use of describing functions is most valuable for oscillators. It certainly is valid to determine whether a system that's intended to have some given input-output relationship will oscillate. But when we're looking for oscillations, we can ignore the input and output, and simply end up with a loop as shown. So one of the constraints, or one of the things that certainly makes describing function analysis practical, is that we are able to break up the loop as shown, get all of the linear elements in one location, combined with a single non-linearity.

If we assume that the amplitude of the assumed sinusoidal signal in the loop at this point is E, in other words, the signal here under steady state conditions has the form E sine omega t, then the describing function analysis says that oscillations may be possible-- and we'll have to explain a little bit later on exactly what the hedge is with may be possible-- but oscillations may be possible for combinations of amplitude E and frequency omega such that A of j omega times G sub D of E and omega, the describing function at that amplitude and frequency, is equal to minus 1.

This is, of course, exactly the describing function parallel of positive loop transmission with a magnitude of 1. Here we have a loop. We explicitly show the inversion that may occur at the summing point. If the A of j omega times G sub D of E and omega product is equal to minus 1, that minus 1 combined with the inversion gives us positive loop transmission with a magnitude of 1. And that's, of course, the parallel of the condition, or at least a possible condition, for oscillation in linear systems.

The assumptions in describing function analysis are now beginning to emerge. We assume that the signal at this point contains no harmonics. In other words, our test signal, our assumed test signal, is a pure sinusoid. Consequently, we'd expect that if there were a large harmonic content associated with this signal, that the analysis might be suspect and might have relatively large errors. So what we might hope is that first of all, the nonlinearity itself is of a type that introduces relatively small harmonics relative to the fundamental. So we might hope that the fundamental component of the signal at this point dominated, and that higher harmonics were relatively smaller in amplitude.

Further, we'd anticipate that the technique would work better for systems that are low pass in nature. If A of j omega is low pass in nature, then harmonics in this signal would be filtered to a greater extent than the fundamental. In other words, the linear elements will then provide a gain that's greater for the fundamental than it is for higher harmonics, and consequently, the signal here will be purer harmonically than is the signal going into the linear element. And if those two conditions are satisfied in some way-- if there's relatively small harmonic generation by the non-linearity, and if, further, those harmonics are attenuated well by the linear portion of the loop-- then we could be reasonably certain that the describing function analysis will predict the behavior of the system well.

Let's consider another special case, which is really necessary, again, to make describing function analysis practical. As I describe the limitations of the method, it seems as though there are very few cases where this technique can be used. But that's really not true. There are a number of examples of practical interest, and we'll see some of them, where describing function analysis gives us a very good indication of the behavior of a nonlinear system. While it does have limitations, we recognize that getting any insight into the operation of a nonlinear system is a very, very difficult task anyway, and I think we're willing to settle for approximations that are a little bit crude, and that have somewhat limited applicability, because they do give us at least one technique for gaining insight into certain classes of non-linear systems.

Well anyway, let's consider a non-linearity which is, in fact, frequency independent. And again, I think this is necessary to make the kinds of manipulations that we have to do practical. It's not an absolute restriction. We can still use describing function analysis for elements which, in fact, are frequency dependent, but we find out, when we do that, that again, the numerical effort involved is considerable, and we begin to wonder if maybe we wouldn't be better off actually doing a numerical simulation of the system.

However, if the non-linearity is, in fact, frequency independent, our condition then reduces to the constraint that oscillations may be possible if the product A of j omega times G sub D of E equals minus 1. G sub D is now independent of frequency. It's dependent only on the assumed amplitude of the signal at this point in the loop.

We can rearrange that, again, in a form very reminiscent of some of our linear system manipulations, and get the relationship A of j omega equals minus 1 over G sub D of E. And a convenient way to look for equalities in that relationship is to plot both of the indicated functions in a gain phase plane.

So let's consider a gain phase plane. Here we have the magnitude of both A of j omega and minus 1 over G sub D of E. And along this axis, we have the angle of A of j omega, and simultaneously, the angle of minus 1 over G sub D of E. And then in those gain phase coordinates, we'll plot the transfer function of our linear system, A of j omega. And so in this curve, we are representing the magnitude and the angle of A of j omega, and in general, omega is a parameter running along that curve. And similarly, we'll plot the magnitude and the angle of minus 1 over G sub D of E. Again, recall that in this case, we're considering describing functions that are frequency independent, so we only have to consider them as a function of amplitude E. And presumably, E is a parameter along this minus 1 over G D of E curve.

If there is an intersection, why, that intersection is, of course, the solution of the equation A of j omega equals minus 1 over G D of E. We plotted A of j omega. We plotted minus 1 over G D of E on the same coordinates. At the intersection of those two curves, we've solved this equality. So that's one way of looking for combinations of omega and E that will solve this equation, give us equality between A of j omega and minus 1 over G D of E.

We would find the combination of omega and E necessary by looking along the A of j omega curve, and determining the value of omega at this point from the A of j omega curve. Similarly, we'd find the amplitude necessary to force the intersection by looking along the minus 1 over G D of E curve. E is, of course, a parameter running along this curve, and so we can find the value of E that creates the intersection by looking at how this curve changes as a function of E.
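The same search can be sketched in code. The function below assumes, as in the example that follows, a frequency-independent describing function with zero angle, so that the minus 1 over G sub D of E locus lies along the minus 180 degree line; it scans a frequency grid for the minus 180 degree crossing of A of j omega, then bisects on E to match magnitudes. This is an illustrative sketch, not a general intersection solver:

```python
import numpy as np

def find_oscillation(A, G_D, w_lo, w_hi, n=200000):
    """Locate the intersection A(jw) = -1/G_D(E), assuming G_D is
    frequency independent with zero angle, so the intersection sits
    where the angle of A(jw) is -180 degrees. A is a callable giving
    the complex A(jw) for an array w; G_D is a callable of E."""
    w = np.linspace(w_lo, w_hi, n)
    phase = np.unwrap(np.angle(A(w)))
    w_osc = w[np.argmin(np.abs(phase + np.pi))]  # -180 degree crossing
    target = abs(A(w_osc))                       # must equal |1/G_D(E)|
    lo, hi = 1e-9, 1e9                           # bisect on E, assuming
    for _ in range(200):                         # 1/G_D grows with E
        mid = 0.5 * (lo + hi)
        if 1.0 / G_D(mid) < target:
            lo = mid
        else:
            hi = mid
    return w_osc, 0.5 * (lo + hi)
```

For the three coincident pole loop and the infinite gain limiter treated below, this returns omega near radical 3 and E near 1 over 2 pi.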

Well, let's apply this kind of analysis to one system. As I mentioned earlier, I find this technique most useful for the analysis of oscillators, and so, following along with that, we'll look at an oscillator. And let's now consider a loop where the linear elements, or the linear element, consists of three real axis poles. I'll drop the minus 1 out here and include it in the linear transfer function. And let's assume that the poles are normalized to unit frequency, so the transfer function of our linear element is then simply 1 over s plus 1 quantity cubed.

And since, to this point, we have a very limited collection of describing functions we know-- in particular we know one, the one we've worked out-- let's use that non-linearity in the loop. So here we have our infinite gain limiter; again, let's assume that it's normalized to unit amplitude. And we saw earlier that the describing function for that infinite gain limiter, G sub D of E, is simply the magnitude of the fundamental component of the output, 4 over pi, divided by the magnitude of the assumed test signal applied to the limiter, E. And the angle associated with the infinite gain limiter is 0 degrees. Well, following along, then, minus 1 over G sub D of E is simply equal to the reciprocal of this quantity, pi E divided by 4. Since the angle of G D of E itself is 0 degrees, the angle of minus 1 over G D of E is simply minus 180 degrees.

We've looked at the transfer function associated with this kind of linear element earlier. We did look at sort of the phase shift oscillator, the case of three coincident poles on the negative real axis, when we introduced the idea of stability, and showed that that configuration was at least capable of oscillation for certain loop transmission magnitudes. And if we recall, the magnitude of this transfer function starts, of course, at 1, at sufficiently low frequencies, and then drops off past 1 radian per second, basically falling off as frequency cubed past that break frequency. The angle of 1 over s plus 1 cubed, ignoring the minus sign, which really simply represents the inversion in the loop, starts at 0 degrees, at very low frequencies, and eventually reaches minus 270 degrees at very high frequencies.

So here, in gain phase coordinates running from 0 degrees through minus 180 degrees, eventually to minus 270, I've plotted the transfer function of our linear element, A of J omega. This is a function that starts at 1 for omega equals 0-- a magnitude of 1 and an angle of 0 degrees for omega equals 0-- drops off, eventually heading asymptotically toward minus 270 degrees.

If you recall from our earlier development, the angle associated with the three coincident pole transfer function reaches minus 180 degrees at a frequency that's radical 3 times the break frequency. So in our case, where we have the poles located at 1 radian per second, at omega equals radical 3 radians per second, the angle of the linear portion of our system goes through minus 180 degrees. Also recall from the development done earlier that the magnitude of the transfer function at the frequency where its angle is minus 180 degrees is 0.125. So here we have the important point for this describing function analysis-- the point where the linear curve goes through minus 180 degrees-- because that's the point where it intersects the minus 1 over G D of E curve.

Minus 1 over G D of E is pi E divided by 4 at an angle of minus 180 degrees. So it lies directly along the minus 180 degree line. And this, then, is the direction of increasing E on our minus 1 over G D of E curve. As E gets larger, minus 1 over G D of E gets larger. Consequently, this is the direction of increasing E, and that locus is simply a straight line along the minus 180 degree line in our gain phase plane. It encompasses all values along that line, because certainly it goes from 0, for very small values of E, to infinity, or approaches infinity, for very large values of E. So this line runs across the entire gain phase plane.

We can solve for the amplitude E, the amplitude of the assumed sinusoidal signal at this point in the loop, necessary to satisfy that condition. The condition, of course, is the intersection between the minus 1 over G D of E curve and the A of j omega curve. We've already found out that at that intersection, the magnitude of A of j omega is 0.125, and that must, of course, also be the magnitude of the minus 1 over G D of E curve. And consequently, pi E over 4, which is the magnitude of 1 over G D of E, must be equal to 0.125. If we solve that for the amplitude of the signal E that creates the intersection, we find out that E is simply equal to 4 over 8 pi, or 1 over 2 pi.

So our describing function analysis tells us that an oscillation may be possible with a signal amplitude at this point-- that's 1 over 2 pi. And the frequency of that signal is simply equal to radical 3. So we'd have 1 over 2 pi times sine radical 3t. And under those conditions, if we conducted a loop transmission test in a describing function sense, if we, for example, broke the loop, drove this point with a signal 1 over 2 pi sine radical 3t, we would find that the signal that came back would have a fundamental component that was exactly identical to the signal we had applied.
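Those predictions can be verified by direct substitution-- a quick numerical check that A of j radical 3 has magnitude 1/8 and angle minus 180 degrees, and that E equals 1 over 2 pi makes the describing function gain 8, so the product A times G sub D is minus 1:

```python
import numpy as np

w = np.sqrt(3.0)                 # predicted oscillation frequency, rad/s
A = 1.0 / (1j * w + 1.0) ** 3    # linear element at the intersection
E = 1.0 / (2.0 * np.pi)          # predicted amplitude at the limiter input
G_D = 4.0 / (np.pi * E)          # limiter describing function gain

# |A| = 1/8, the angle of A is -180 degrees, G_D = 8, and A * G_D = -1,
# the condition for oscillation in a describing function sense.
```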

Now we have to look at the hedge that I mentioned earlier. And the issue really revolves around whether the oscillation predicted by that intersection is stable in amplitude. In other words, we have to look at a funny kind of stability analysis now. We think that the system may be oscillating, but what we have to do is see what happens if we perturb the amplitude of the oscillation a little bit. Is the system restorative? In other words, if we somehow consider a perturbation that forces a slightly larger amplitude, does the system tend to react to squeeze the amplitude down to its original value? Conversely, if we envision a perturbation where the amplitude of the oscillation, the amplitude of the signal at this point, shrinks a little bit, again, is the system restorative? Does the system work in a direction such as to increase the amplitude back to its original value? If that's the case, then in fact, the system will oscillate at the intersection predicted by this analysis. We have a stable amplitude oscillation-- that's called a limit cycle.

The converse is also possible, and we'll see an example of that next time, where when we envision a perturbation about this operating point-- and describing function analysis is really a kind of linearization about an operating point defined by a sinusoidal amplitude. When we consider a perturbation about that operating point, why, suppose once again we envision a slight increase in the amplitude, the system might work to force the amplitude to be still larger. If that's the case, the oscillation predicted by the intersection will be divergent.

There's a parallel here, of course, in linear systems-- the stable equilibrium versus the unstable equilibrium, the pendulum hanging in this direction, where a perturbation is restorative, versus the inverted pendulum, where a slight perturbation from the equilibrium point results in the pendulum falling over. Or the example we had last time, where we had a ball in a magnetic field. We found out that there was an equilibrium point. It happened to be an unstable equilibrium if the magnetic field is constant.

Well, let's see how we might argue stability of amplitude for this particular case. There are rules that allow you to determine this in a sort of blind way-- I list them in the book-- but somehow I always forget them, and we can argue it physically, and I think that's probably a better way to determine the question of whether the amplitude predicted by the intersection is stable or not. Let's consider drawing a loop transmission, or a Bode plot, for actually the negative of the loop transmission. When we've linearized our system in a describing function sense, we're predicting an oscillation with an amplitude of 1 over 2 pi for the signal at this point. So let's draw a loop transmission, or the negative of a loop transmission, the equivalent of the AF plot for a linear system. Let's draw A of j omega times G sub D of 1 over 2 pi. That's the amplitude that we assume for the oscillation at this point, or that our analysis predicts will be the amplitude of the oscillation.

And so let's go ahead and draw the negative of the loop transmission for our system as a function of frequency, assuming that the test signal has the amplitude 1 over 2 pi and is applied at this point. Well, the gain of our describing function-- the magnitude of the describing function, the gain of our non-linear element-- for E equal to 1 over 2 pi, we can get from this expression, and that gain will be simply 8. When we substitute in E equals 1 over 2 pi, the pis cancel out, and we get simply 8 for the magnitude of our describing function.

We've already found that at sufficiently low frequencies, the magnitude of the linear portion of our transfer function is unity. Consequently, the indicated magnitude, A of j omega times G sub D of 1 over 2 pi, has a low frequency value of 8. It has a corner frequency of 1 radian per second. It rolls off to unit magnitude by radical 3 radians per second, because A of j radical 3 has magnitude 1/8, and that combines with the magnitude of G sub D of 1 over 2 pi, which is 8, to give us unit magnitude for the product at radical 3 radians per second.

In the meantime, the angle associated with that, with the same quantity, is one that starts at 0 degrees, heads toward minus 270 degrees. For our purpose, it's only necessary that we look at the angle in the vicinity of the crossover frequency out here. So I haven't drawn the complete angle curve, but we can find out what we need to by simply looking at the behavior of the angle in the vicinity of the crossover frequency.

Let's consider what happens if, for example, we assume that the amplitude at this point increases a little bit. Our describing function operating point is an amplitude of 1 over 2 pi. Let's assume that that amplitude increases just a little bit. Well, G sub D of E is inversely proportional to E. Consequently, if E increases, the gain of our describing function, the magnitude of G sub D of E, will decrease.

We now have a somewhat perturbed operating point. We've changed the test amplitude to be slightly larger than 1 over 2 pi, and consequently, the magnitude of G sub D has decreased a little bit. And that tells us that our new curve parallels the original one and lies below it by some small amount, like so. In other words, this is the resultant negative of the loop transmission plot for a somewhat increased value of E.

If we decrease E slightly, we get a higher plot, like so, because a decreased value for E, a smaller value of E, would result in a larger magnitude for G sub D of E. So this is the sort of curve that results for a smaller test signal, or if we assume that the amplitude of the oscillation that existed had decreased slightly.

Well, now let's look at the magnitude plot in conjunction with the angle plot. If we assume that the amplitude of the signal at this point has increased slightly, we notice that the system crosses over in a region with positive phase margin. We cross over where the angle associated with the negative of the loop transmission is more positive than minus 180 degrees. In other words, the system is stable under those conditions, at least in an absolute sense. It has small phase margin, but nonetheless a positive phase margin.

Well, we assumed that E increased a small amount, and our system becomes stable in an absolute sense, which means that the amplitude E would shrink, tending back toward its original value. We assumed that it increased, we found the system becomes stable, and the amplitude would shrink back down to its original value. Similarly, if we assume that we had a smaller value for E, the system crossover now occurs in a region of negative phase margin. We have an unstable system; we get exponentially growing responses, which tend once again to increase the magnitude of E and restore us to the original operating point.
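This perturbation argument can be put in numbers. For this loop the crossover has a closed form, since the magnitude condition-- the magnitude of A times G sub D equals 1-- gives 1 plus omega squared equals G sub D to the 2/3 power. The sketch below (the function name is illustrative) evaluates the phase margin as a function of the assumed amplitude E:

```python
import numpy as np

def phase_margin(E):
    """Phase margin, in degrees, of the loop A(jw)*G_D(E) for
    A(s) = 1/(s + 1)^3 and the limiter describing function
    G_D(E) = 4/(pi*E). Crossover is where |A|*G_D = 1, i.e.
    (1 + w^2)^(3/2) = G_D, which solves in closed form."""
    G_D = 4.0 / (np.pi * E)
    w_c = np.sqrt(G_D ** (2.0 / 3.0) - 1.0)          # crossover frequency
    return 180.0 - np.degrees(3.0 * np.arctan(w_c))  # 180 deg + angle of A

E0 = 1.0 / (2.0 * np.pi)             # amplitude predicted by the intersection
pm_nominal = phase_margin(E0)        # essentially zero
pm_larger = phase_margin(1.1 * E0)   # positive: amplitude shrinks back
pm_smaller = phase_margin(0.9 * E0)  # negative: amplitude grows back
```

At the predicted amplitude the margin is zero; a 10 percent larger E gives a positive margin of a few degrees, so the amplitude decays back, and a 10 percent smaller E gives a negative margin, so the amplitude grows back toward the operating point.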

So here's an example of a system where there is an intersection in the gain phase plane, and we find that the intersection represents a stable amplitude oscillation. We conclude that if we built this system, that we should find it oscillating at radical 3 radians per second, and that the amplitude of the signal at this point should be about 1 over 2 pi. And in fact, if we did that, we'd find out that those numbers were very, very nearly satisfied by the system.

We might want to go back and check. We mentioned that one of the assumptions in describing function analysis is that the signal at the input to the non-linearity is harmonically pure. And let's look at how well that assumption is satisfied for this particular example. We recognize that we'll have a square wave at this point in the loop. Our assumption is that the signal at this point is sinusoidal, and so what we want to do is look at the harmonic content of the square wave, chase it through the loop, and see how much remains at this point.

Well, let's look at that calculation. At the output of the non-linearity, the ratio of the third harmonic to the first, to the fundamental, is simply 1/3. The Fourier expansion for a square wave, of course, is 4 over pi times the sum of the odd harmonics, where the amplitude of the various harmonics is inversely proportional to the order of the harmonic. So at least the third harmonic relative to the fundamental has an amplitude of 1/3. Similarly, we can evaluate the gain of the linear portion of our system to the third harmonic, and compare that with the gain to the fundamental component. And when we do that, the gain to the fundamental component, of course, is simply equal to A evaluated at j radical 3-- that's the fundamental frequency-- and we divide that by A evaluated at j times 3 radical 3. That's the third harmonic.

If we perform that calculation, we find out that the ratio of the gain at the third harmonic to the-- I have an inverse here. The gain at the third harmonic is, of course, smaller, so this equation really should be flipped over. The ratio of the gain at the third harmonic to that at the fundamental is simply 0.057. The linear elements attenuate the third harmonic by a factor of nearly 20 compared to the fundamental.

Since we started out with the third harmonic being only 1/3 of the amplitude of the fundamental, the net result of that original lower harmonic amplitude, and the much greater attenuation applied to the third harmonic, is that the distortion, or at least the third harmonic distortion in the signal, is about 2%, the ratio 0.057 divided by 3. So we'd have about 2% third harmonic content in the signal at this point. Higher harmonics are attenuated even further. The fifth harmonic is first of all down relative to the third. Its relative amplitude is 1/5, rather than 1/3. Furthermore, the linear portion of the loop attenuates the fifth harmonic to an even greater extent than it attenuates the third harmonic. And so that fifth harmonic is attenuated even to a much larger extent. The net result is that the total harmonic distortion for this particular system, at the input to the non-linearity, is about 2%.
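The arithmetic behind this estimate can be checked directly. Evaluating the exact magnitudes gives an attenuation ratio of about 0.054; the 0.057 quoted above corresponds to approximating the third harmonic gain by the omega cubed asymptote, 1 over the quantity 3 radical 3 cubed. Either way the third harmonic content comes out near 2 percent. A minimal sketch:

```python
import numpy as np

def A_mag(w):
    """Magnitude of the linear element 1/(jw + 1)^3."""
    return (1.0 + w * w) ** -1.5

w1 = np.sqrt(3.0)                          # fundamental frequency, rad/s
exact_ratio = A_mag(3.0 * w1) / A_mag(w1)  # extra attenuation of the 3rd harmonic
asymptotic_ratio = (3.0 * w1) ** -3 / A_mag(w1)  # via the omega^3 rolloff
third_harmonic = exact_ratio / 3.0         # times its 1/3 starting amplitude
```

exact_ratio comes out near 0.054 and asymptotic_ratio near 0.057, so the third harmonic at the limiter input is roughly 2 percent of the fundamental.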

So under these conditions, the describing function analysis works very well. We find out that if we actually constructed a system like this and tested it, we'd find that the frequency of oscillation and the amplitude of the signal at this point were very nearly the values predicted by the describing function analysis.

Next time we'll look at how we apply this analysis to several other systems of interest. Thank you.