16th Annual Killian Award Lecture—Jay W. Forrester (1988), Lect. 1


FRIEDEN: I'd like to welcome everybody to the first of our two 1988 Killian Award lectures. The James R. Killian, Jr. Faculty Achievement Award recognizes extraordinary professional accomplishments-- a tribute to Dr. Killian, one of the great figures at MIT in this century. It has been awarded annually since 1972.

A special faculty committee selects the award winner each year to present the academic lectures. This year's recipient is Professor Jay W. Forrester of the Sloan School of Management. You'll be hearing from him very shortly.

My job today-- I think above all-- is to keep the introduction short. I'm mindful of a story of an event that took place at a Yale commencement, when the commencement speaker announced that he was going to take as his text something that would suit all the letters in Yale. And he proceeded to announce he was going to speak about youth, age, life, and education.

He began his speech, and droned on for a half hour. At the end of a half hour, he hadn't even gotten beyond the letter A of Yale. And at that point, one platform guest turned to another and said in a loud voice, "Thank God we're not at the Massachusetts Institute of Technology."

With that story in mind, I had fleeting thoughts of going through the letters, and saying that today's lecture will speak about modeling, accuracy, social systems, et cetera. But I gave it up as a bad idea. Let me turn quickly to Professor Forrester.

I think in life, origins are often important. And you might be interested to know that Professor Forrester grew up on a homestead cattle ranch in Nebraska. He told me the other day that at the age of 13, he put in a wind-powered electrical system, which was the first electricity available on that ranch. I commented that it seemed kind of a young age, already to be tinkering with electrical equipment. Professor Forrester's comeback was, anything was better than taking care of cattle.

He studied Electrical Engineering at the University of Nebraska, and got a graduate degree from MIT in the same subject. By age 28, he was Director of the MIT Digital Computer Laboratory, where he was responsible for the design and construction of Whirlwind I, one of the first high-speed computers. At about that time, he invented the magnetic core storage system that was the standard memory device for computers for the next 20 years.

As head of the Digital Computer Division at Lincoln Laboratory, he was responsible for the planning and design of the SAGE Air Defense System built in the 1950s. Since 1956, Professor Forrester has been Professor of Management at the Sloan School. He pioneered a new field known as System Dynamics, which you'll be hearing about in this week's lecture and next week's. He's been Germeshausen Professor of Management since 1972.

His books include Industrial Dynamics, Principles of Systems, Urban Dynamics, World Dynamics, and most recently, Collected Papers. He also has another book in the works that you'll hear something about next week. He's been the recipient of many honorary degrees and awards. I mention only two-- the Poulsen Gold Medal from the Danish Academy of Technical Sciences, and membership in the National Inventors Hall of Fame.

At this time I'd like to read the citation that's presented to Professor Forrester in connection with the award. We honor Professor Forrester and announce his appointment as Lecturer in recognition of his pioneering achievements in the development of digital computers, and their early applications to large-scale problems of national defense, and in the creative use of computer science, and the insights of engineering feedback processes, and the analysis of managerial and social systems. He's left his mark on some of the most important technological advances and critical national issues of our day. I'd like to present you with the award, Jay.


Thank you.

FORRESTER: Thank you.

This is, indeed, a great honor. And I'm very pleased to be with you. I would suggest that since we have lost Jim Killian from amongst our midst, that perhaps we stand for a moment of remembrance and silence.

I especially regret that Jim Killian is not with us, because I feel that I owe him a very personal debt for even being here today. It was at the Lincoln Laboratory in 1955, when we had pretty well completed our responsibilities for getting the SAGE Air Defense System underway that I had begun to wonder what I would do next. For some of you who have come into the field more recently, it may seem a little strange to say that I felt the pioneering days of computers were over at that time. But we had been in the work for 10 years or more, and I was looking for something else to do.

It was on the occasion of Jim Killian bringing a group of distinguished visitors to the Lincoln Laboratory, and just walking down the hallway beside him, that he called my attention to the new management school that was getting underway at MIT, and suggested that it might be something that I would find of interest. And indeed, that led to what we are talking about today.

It has been suggested to me that these lectures are more a personal statement than they are a concentrated, intense scientific lecture. I'm going to follow that advice, and try to present a mixture of philosophy, some details so you know what the field is like, applications, purpose, and the goals that I see coming up in the future. There will be two lectures. This is a brief outline of today's lecture.

I want to talk some about the MIT background that leads into the subject of social systems-- their dynamic behavior-- that we're discussing today. It's primarily a feedback loop viewpoint, or a viewpoint of the endogenous behavior of systems. What is it about their structure and their internal policies that produce their successes and failures?

Today, I'll give you an example drawn from the corporate world to illustrate the approach followed. And I want to talk about the incorporation of psychological variables, some important ideas in the perspective of building models, say a few words about people in systems, and some discussion of the general characteristics that we have found apply to most systems.

And then next week I will turn more to the applications of the field of system dynamics, with a discussion of some corporate examples, urban growth and behavior, work that we did several years ago on the world environmental system, pollution, industrialization, agriculture and population. My present work, which is largely on economic behavior-- I'll give you a brief thumbnail sketch of that. And then talk some about the future of the field and where it is going.

Through both of these lectures, I want to stress the idea of combining mental models with computer models. The mental models are largely what the world runs on. Our mental models govern our behavior and most of our institutions.

Our understanding of those systems, I think, can be augmented and improved through combining mental models with computer models. And this is a field that's beginning, on an experimental basis, to find its way into the high schools. And I think that is very exciting, because it stands some chance of changing the way people look at the world around them and the way in which they understand politics, social systems, corporations.

And beyond that, I see the field as ultimately providing a bridge between the two cultures of technology and the liberal arts or humanities. Because both of these branches of our life have dynamic behavior. And there is a foundation under both of those aspects of existence that has common ideas, common principles, which-- when they are perceived-- give people, I think, a much greater mobility between fields, much more penetrating insight into what is going on.

Now, why should we be concerned about any of this? I think we take our motivation right out of the newspaper headlines. We're not very happy with the world we see around us-- the political world and the socioeconomic world.

It is filled with various kinds of problems and difficulties. And there's been not much improvement over the last 1,000 or 2,000 years in the understanding of such systems. As someone-- I think maybe it was B.F. Skinner-- pointed out, if the ancient Greeks could come back now and walk among us, they would not understand anything about our technology and our science.

They wouldn't understand the reasons, the philosophy, the details, the techniques. It would all be totally foreign. But they would be quite at home in our social problems-- wars, various kinds of difficulties.

And I'm reminded that when we visited that great set of castles, the Alhambra, built by the Moors in Spain above Granada some 700 years ago, the guide who was taking us around pointed out one particular room. And he said, this is where they met to discuss their problems of inflation and the unfavorable balance of trade, which suggests that very little has happened in our ability to both understand and to deal with these kinds of social and economic difficulties. I think it need not be that way.

And so I hope to give you some vision of the possibility of this changing over the next several decades. It is not going to happen quickly. But I think we can come to a better understanding of our social and economic systems, to a more effective education in the whole span from high school through college, a way to unify intuition with rigorous analysis, and to provide a common foundation that underlies all fields-- business, medicine, economics, environmental change, technological development, population, standard of living. These are all tied together in some great systems that are only, at this stage, dimly perceived.

Let me back up to the relevant history growing out of work at MIT. Now, there is a prior history that goes back much further than this diagram suggests. Here, I've presented some of the background for the field of computers and the field of information feedback systems, feedback control systems, servomechanisms, cybernetics-- all terms that mean approximately the same thing.

The history of computers goes back more than 100 years. Charles Babbage and Lady Lovelace, in the middle of the 1800s, wrote some very elegant papers about the nature of digital computers and the programming of them. In the same way, the history of feedback control systems goes back more than 100 years.

Perhaps the first really scientific paper that I'm aware of was written by Clerk Maxwell, better known to most of you as the person who developed Maxwell's equations for the propagation of radio and light through space. But in 1867, before the Royal Society in London, he presented a paper called "On Governors." And it was an analysis of the stability and the behavior of the flyball governor that was being used for governing steam engines such as James Watt had developed.

Not an easy problem, because if one tries to control speed very closely, very precisely, one can produce a system with instability that will tear apart the machinery. If you reduce the sensitivity of the system, you then find that it doesn't control accurately. And there has been a perennial problem in engineering to develop control systems, feedback systems, that had the desired characteristics.

Somewhat later-- early 1900s-- there were Russian scientists who began to develop the same ideas in terms of the automatic steering of surface ships. And then I think the field really began to take off in the 1930s, with the Bell Telephone Laboratories developing electronic feedback amplifiers as an essential part of the transcontinental telephone system.

At MIT, the thread that I want to call attention to begins with Vannevar Bush, who was behind the development of the mechanical differential analyzer, a machine that was still operating up here in Building 10 when I came to MIT in 1939. It was a rather massive mechanical machine, having six integrators and some plotting tables where people could cause a pointer to follow curves to put in nonlinear functions. It was practical, and I believe was used to a considerable extent.

The Rockefeller Differential Analyzer was a machine with electrical relay switching-- much easier to set up as you changed from one problem to another. And then in the digital computer days, the TX-0 and the IBM 704 were some of the first digital computers other than Whirlwind that were at MIT. Bush was the person who guided Harold Hazen, who later became head of the Electrical Engineering Department.

And Hazen was the person behind the Network Analyzer, an analog assembly of components for studying the stability and the transient behavior of electrical power systems. And it was Harold Hazen who inspired Gordon Brown, whose doctoral thesis was a cinema integraph, I believe an idea that might have originated with Norbert Wiener. And Gordon Brown started, in 1940, the Servomechanisms Laboratory, which is where I entered this stream of activity.

As has already been said, I studied Electrical Engineering at the University of Nebraska, and came here for graduate study. At that time, electrical engineering was almost the only academic field that had a well-developed treatment of dynamics. And I think there's a reason for this.

It is in electrical engineering that simple, linear, solvable systems are still practical and important-- or were at that time. In other words, electrical engineering had that unique characteristic of importance and simplicity that was amenable to the kind of analysis that was available at that time. And so there was a development hand in hand of transient dynamics methodology for dealing with dynamic behavior and with feedback systems.

I came and worked for one year with John Trump, within the field of Van de Graaff generators-- high-voltage, direct-current, million-volt generators-- being applied to medical treatment. And then in 1940, I was commandeered by Gordon Brown to work on what became the beginning of the Servomechanisms Laboratory, which launched in a major way the understanding of feedback systems as applied to military equipment and to chemical process control. The Whirlwind computer was, I think, the primary force leading to the creation of the Lincoln Laboratory, because it was with the Whirlwind computer that the first experimental demonstrations of digital computers in combat information centers were made.

And then Division Six-- the part of Lincoln Laboratory that did the SAGE Air Defense System-- spun off to become the present MITRE Corporation. Now, through all of that, there was the development of dynamics and feedback concepts, the use of computers for simulation, and a lot of practical experience with people in systems. Through it all, however, I think very few of us-- not I, and I think very few others-- saw feedback control systems as something within which we live, and to which we are largely subservient.

Feedback control systems were engineering devices at that time. And you built control systems. You built systems to control military equipment, or chemical plants, or later spaceships. But they were seen as a part of technology and not as a part of Nature and human affairs. I want to suggest to you that feedback control systems are fundamental to anything that changes through time, including all of the world within which we are embedded and all the processes that we take part in.

My move to the Management School following Jim Killian's suggestion was largely negotiated by Eli Shapiro, who was Associate Dean of the Management School at that time, and Edward Bowles, who was earlier a Professor in Communications in the Electrical Engineering Department here. I came in to give some attention to a vision that I was told Alfred Sloan had when he gave the money to start the Management School here. As it was told to me, Sloan felt that a management school in a technological environment would develop differently from one in a liberal arts environment-- differently from the management schools at Harvard, and Columbia, and Chicago-- maybe better, but in any case different. And it was worth $10 million to run the experiment.

The school had been in existence officially for only four years-- it started in 1952. I joined it in 1956. And the question was, what could I bring to a management school from the technological side of MIT?

Others presumed-- and I think I also did-- that this probably meant the use of computers for handling management information, or else pushing forward the field of operations research, which was then dealing with many of the same kinds of problems that you see it dealing with today. I had my first year with nothing to do except to try to decide why I was there.

And in that process, laid aside both of those potential objectives-- laid aside the use of computers for data processing, because it seemed that the manufacturing companies, the banks, and the insurance companies were by that time developing such a momentum in the area of the managerial use of computers that it was unlikely we could have any further major influence on that field, at least at that time.

And in looking at the field of operations research-- very interesting, worthwhile-- but it did not seem to really deal with the big issues that related to the success and failure of corporations. And it was out of discussions with people in industry-- largely some people in General Electric at that time-- that I began to look at a different class of problem. They were, in that particular case, much disturbed by the way in which their household appliance division was experiencing overtime one year-- and half the people laid off three years later.

And what was it that was causing this? You could blame it on the business cycle, but that wasn't very persuasive. And we found that if you would take the policies that they were following for the hiring of people and the management of production, and even just sit down and do a hand simulation on a notebook page of what would happen week by week if those policies were followed-- and they hired people, and they produced, and they built up inventories, and they looked at inventories and backlogs to see what they would do next in production-- that you in fact had a highly unstable system that would exhibit major fluctuations of production, even without any reason coming in from the marketplace. That was the beginning. And then we worked in a number of other corporations, where we began to see that in general, the difficulties of an organization arise not from outside influences, as almost all of us would like to think, but rather from what we do ourselves.
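That week-by-week hand simulation is easy to reproduce. Below is a minimal Python sketch of one such policy set-- produce enough to replace sales and close the inventory gap, with a delay before production can respond. Every constant here, and the one-time step in demand, is invented for illustration; nothing is taken from the actual General Electric study.

```python
# A hypothetical week-by-week simulation of a production policy.
# The delay between ordering production and getting it is what makes
# the loop swing, even after only a single step change in demand.

def simulate(weeks=40):
    sales = 100.0                  # units sold per week
    desired_inventory = 400.0
    inventory = desired_inventory  # start in equilibrium
    production = sales             # actual output; lags the production order
    history = []
    for week in range(weeks):
        if week == 5:
            sales = 120.0          # one-time step in demand from the market
        # policy: replace sales and close the inventory gap over 4 weeks
        order = sales + (desired_inventory - inventory) / 4.0
        # production adjusts toward the order over a 6-week delay
        production += (order - production) / 6.0
        # level equation: inventory accumulates production minus sales
        inventory += production - sales
        history.append(production)
    return history
```

Plotting `history` shows production overshooting the new sales rate and oscillating before it settles-- fluctuation generated by the policies themselves, not by anything coming in from the marketplace.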

Now, the field of System Dynamics, which has been developed at MIT since the 1950s, deals with high-order nonlinear systems involving the interactions of people, nature, and technology based on a feedback structure viewpoint, and combines a high input from the mental models in the world around us, converting those into computer models so that we come to understand better what those models imply. And by now, there is a System Dynamics Society, an international society that holds annual meetings in such places as Oslo and the United States-- last summer in Shanghai, China, this coming summer in San Diego, and the year after that in Germany-- a society of some 300 members at the present time, relatively small but growing steadily, which recently has begun to publish a journal, the System Dynamics Review.

There are various ways of looking at this field. And looking at where it comes from is one way. I see three background threads converging into the field of System Dynamics-- the traditional information that exists in our managerial and political systems, principles that come in from the field of feedback systems, and computer simulation.

And we're dealing here with an effort to combine the strengths of the mental models, which we use all the time, and avoid the weaknesses in those models while we use computer models to provide the strengths available from that domain and avoid the weaknesses in the field of computer modeling. If we look at traditional management and politics, one of the greatest strengths is the tremendous database that is available-- the experiences and the knowledge in people's heads. I'll say more about that later.

But that, in turn, is one of the weaknesses. We're overwhelmed with information. And perhaps one of the most serious pollutions that we face is information pollution-- too much information.

And within that paradigm, there are generally no criteria for what information you can throw away, or what you do with the information that you keep. It is from feedback systems that we can get a number of insights-- principles of selecting information, and principles of organizing that information into structure, coming up with a model. Now, system dynamics does not impose models on you for the first time.

All your actions-- every action you take-- is a consequence of your using a model. One does not have a family, or a university, or a city, or a country in one's head. One has assumptions about those systems.

And one manipulates those assumptions. And in so doing, you're using a model. The question then is, where are the strengths and weaknesses in those models?

The strengths lie in the tremendous amount of information available. The weakness arises from our inability to manipulate high-order nonlinear dynamic systems in our heads. And so we very often take the information that's quite valid and produce with it results that are inconsistent with the assumptions that we are making.

I want to stress the feedback viewpoint. In some ways, it's very simple. And in other ways, it's extremely subtle and difficult to grasp.

Most of our conversations in cocktail parties, and Congress, and boards of directors' meetings I think have the characteristic of saying, here's our problem. We will take a certain action. And we'll get a result. And we'll cure the problem.

An open-ended system-- problem, action, result, period. You're through. But it doesn't usually work.

And it doesn't work partly because that's the wrong image of the world. The far more appropriate image is to see the world as a circular, ongoing process in which there is no beginning and no end. Information about a problem leads to action. The action produces a result, which is the information about the next state of affairs.

And one is continuously acting in a system that is continuously reacting, and continuously presenting one with a new state of affairs to which one reacts. And this ongoing system is the real environment in which we live. And it is a very dynamic environment. It is a very complex environment, and it's one that on the whole, we have not historically had the means to understand and cope with. That situation, I think, is now changing.

Let me dwell on this matter of the simplicity of such systems a bit more. If I were to ask you casually what happens when you get a drink of water, you would probably say, well, the water flows out of the faucet and fills the glass. That is half right.

It would be just as correct to say that what goes on in filling a water glass is that the rising water shuts off the faucet. That would be the other half. It is a circular control process with a goal-- reaching a certain water level-- and a control system that observes where we are, where we want to be, and provides the necessary control to achieve the desired results. If we were to stylize that, we would have a diagram of this sort, represented by a first-order differential equation, if you want to look at it that way.
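That stylized first-order loop is small enough to simulate directly. Here is a minimal Python sketch of it; the goal, gain, and time step are arbitrary illustrative values, not anything from the lecture's diagram.

```python
# First-order negative feedback loop: the rising water level "shuts off"
# the faucet.  This integrates dLevel/dt = gain * (goal - level).

def fill_glass(goal=1.0, gain=0.5, dt=0.1, steps=60):
    level = 0.0
    for _ in range(steps):
        flow = gain * (goal - level)   # policy: flow depends on the gap
        level += dt * flow             # level: accumulation of the flow
    return level

print(round(fill_glass(), 3))  # approaches, but never exceeds, the goal of 1.0
```

The two statements inside the loop are exactly the two kinds of concept discussed next: a policy (rate) equation and a system state (level) equation.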

There are some very general characteristics of systems which are illustrated even in this high degree of simplicity. Every system has in it two concepts, and only two. Now, that may strike you as surprising.

You may not believe it, even after these lectures are over. But when you come to believe it, it is a powerful concept because as you look at the world around you in all its complexity, and realize that everything you're looking at is one or the other of these two classes, it is a powerful simplifying idea. And these two classes are the system state-- the integrations, the accumulations that go on in our systems-- information about the system state or accumulations, and a policy-- or what we often call a rate equation-- a policy that states how one, or how nature, or how a technological system reacts to the state of affairs to control an action stream, which, in turn, changes the system level.

Another characteristic-- all systems have this circular loop structure-- very much more complicated in the practical systems, but always this structure in which there is a rate, a level-- they alternate with each other as you move through the system-- a goal-- sometimes implicit, sometimes a little hard to identify, but a goal that the system is trying to achieve, where the state of affairs is compared with the goal-- that goal may be dynamic itself and changing-- to govern action. Now, if you go just a little more complicated-- and only five times more complicated, which is not very much-- you will get a system that looks like this. I suggest you not try to read it.

We'll talk about this a little more next time-- the interaction of population, industrialization, resources, agriculture, and pollution in some studies that we did in the early 1970s. But I put this up to show how quickly the apparent complexity can mount as you add one, two, three, four, five levels. Because each level is potentially an input to every rate of flow. And the rates of flow, in turn, control the levels. And so one rapidly builds up a high degree of complexity.

Now, when you appoint a person to be head of a corporation, or elect someone to be president or to Congress, you are, in effect, patting that person on the back and saying, I'd like to have you go in there and by intuition, and guesswork, and compromise solve for me a 30th-order, nonlinear differential equation. If I put a simple third or fourth order linear differential equation on the board, I doubt that there are very many in this room who would venture to come up and sketch by inspection the behavior of that particular equation. We have, in other words, assigned our leaders what are technically impossible tasks. And it's no wonder that we're unhappy with the usual results.

Now, let me try to make this a little more tangible by taking up one example-- an example out of the corporate world. We could have taken it out of the technological world, or social, or economics. But the problem of corporate or market growth-- a very important question to any company, especially new startup companies.

They may exhibit symptoms of low profits, little growth in sales, large order backlog, falling market share. And they almost always blame it on their competitors, or their bankers, or their customers, whereas in actuality, it is almost always a consequence of the interactions which they are managing between their sales effort, their production and delivery delay, their own incentives to expand, and how they set prices. The structure of such a system-- simplified, but sufficient for the purpose-- looks something like this, where orders come in.

They go into an order backlog. The backlog is filled out of the production process. The sales bring in revenue, and some fraction of those sales go into the sales budget and are used to hire salesmen, which bring in orders.

Now over on the other side, there is the idea of a delivery delay. You have an order backlog of 1,000 items to be filled. And you're making 200 of them a week. It's going to take you five weeks to get through that order backlog and fill the next order.

And so the production rate and the backlog will indicate to you what the delivery delay is as the customer will see it. How long will the customer have to wait for his order? And the delivery delay is also one of the primary inputs to the incentive to expand. If we are running behind in our orders, then it is an indication that we should expand production.
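The arithmetic of the delivery delay, as described here, is just the backlog divided by the production rate:

```python
# Delivery delay as the customer will see it, using the numbers from
# the example: a 1,000-item backlog worked off at 200 items per week.
backlog = 1000.0            # orders waiting to be filled
production_rate = 200.0     # items produced per week
delivery_delay = backlog / production_rate
print(delivery_delay)       # 5.0 (weeks)
```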

And up in here, delivery delay makes the product less attractive to the customer, and makes it harder for the salesman to sell orders. And I want to give you some detail in a moment. But the trap that many companies find themselves in-- and in fact, the whole US economy with respect to Japan is significantly within this trap-- we deliver poorly, deliver slowly, deliver low quality, deliver unsuitable products.

A whole set of things come in here that make the product less attractive. We try then to sell harder and harder. We may lower price in order to sell, not realizing that price generally is not the problem. It is the character of the product. We lower price, and then pretty soon, we have no revenues to solve the real problems, which lie over here in the production capacity, or the design, or the quality.

I want to carry you to a small level of detail, so you have a glimpse of some of the things that lie underneath this field. I said there are two kinds of equations-- the level equations, or system states. These are simply accounting statements of integration or accumulation: the present order backlog-- time K being what we use to represent the present-- is the backlog at the previous time J, plus the solution interval-- how long it has been since the last time you computed this-- multiplied by the incoming orders minus the production rate, the orders that you filled. And dimensional consistency being as important here as it is in any of the engineering fields, you have to make sure that, in fact, you've set up equations where the dimensional analysis is correct.
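In the DYNAMO-style notation being described, that level equation reads BACKLOG.K = BACKLOG.J + DT*(ORDERS.JK - PRODUCTION.JK). A direct Python transcription, with the time scripts replaced by an ordinary function, might look like this:

```python
# Level (system state) equation: a pure accounting statement of
# accumulation.  K is "now", J the previous computation time, and DT the
# solution interval between them.
# Dimensional check: units = units + weeks * (units/week).

def next_backlog(backlog_j, dt, orders_jk, production_jk):
    """One integration step of the order backlog."""
    return backlog_j + dt * (orders_jk - production_jk)

print(next_backlog(1000.0, 1.0, 220.0, 200.0))  # 1020.0
```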

Now, the level equations or integrations seem pretty simple and straightforward. And yet, as I'll say later, they are the place where all the dynamics come from. And they are tremendously important.

And in the social sciences, there has been a tendency to under-rate the level equations, and to leave them out, and to look at equilibrium states and differences between equilibrium states without having put in the system levels that allow input flow rates to differ from output flow rates. And it's in this differing of the flow rates in and out of the system states or levels that the dynamic behavior of all systems is generated.

The other variables are the policy statements, or the rate control equations. And they are more subtle, and less obvious, and a great deal more complicated in a model formulation. And I'd like to distinguish two ideas-- a policy as distinguished from a decision.

A policy is a set of rules that govern how decisions are made. The rules arise, the policies arise from physical laws, from availability of information-- one cannot base a decision on information one does not have. So you look at a system and see what information is available. That often tells you a great deal about what can or cannot happen.

Psychological attitudes, organizational structure, traditions-- what have we always done here-- powerful influences on how decisions are made-- orders from superiors, power centers, who people are afraid of, social conditioning-- all these are part of the policy structure of an organization. And you can learn a lot about these if you know what to look for, simply by sitting for extended periods of time and talking to people in organizations. And see what goes on. See what has happened in various past crises. Find out why they did what they did at certain times in the past.

And you will be amazed at how much personal and confidential information will come out in that kind of discussion, which is very illuminating as to how the organization works. And then the decisions are the actions that are taken as a consequence of two things-- the governing policies and the state of affairs that exists now. In other words, the condition of the system is one input to action. And how we analyze and how we react to those conditions-- in other words, the policies-- are the other ingredient. And so one has a combination of what's coming in from the system, and how the particular decision point reacts that determines what will be done at any particular moment in time.

And a policy statement can be simple like this, or it can run to dozens of equations that are interrelated. But basically, the orders coming into this company that we just looked at depend on the number of salesmen, on the normal sales that a salesman can make under some normal or ideal set of conditions, multiplied by the sales effectiveness that I spoke of a moment ago. In other words, as the sales effectiveness declines because delivery delays are getting worse, it is harder and harder to sell the product, and the yield from salesmen falls.

And this is a policy statement about how orders are generated in real life and in the model. Real life will have a great deal more to it-- more complexity. But that's a central core of what goes on.
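That order-generation policy can be written as a one-line rule. This is an illustrative sketch with hypothetical numbers, not the model's actual equations:

```python
def order_rate(salesmen, normal_sales_per_salesman, sales_effectiveness):
    # Policy statement: incoming orders depend on the size of the sales force,
    # the normal yield per salesman, and the effectiveness multiplier
    # (which falls as delivery delay worsens).
    return salesmen * normal_sales_per_salesman * sales_effectiveness

# 50 salesmen, 4 units/month each under ideal conditions,
# effectiveness degraded to 0.8 by delivery delay:
print(order_rate(50, 4.0, 0.8))  # → 160.0
```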

Now, let's look at the idea of the sales effectiveness. I'm giving you this by way of just illustrating ideas, not to really make you familiar with that particular model. You get a lot of the information about systems by talking to people about what would happen under extreme circumstances that have never been experienced.

Because a lot of our knowledge about systems lies out in the region where, in fact, we have no experience. If I were to-- I won't take time for the experiment, but it's fun to not put this curve up first, but to ask people, what is the relationship between sales effectiveness and delivery delay? Well, first of all, a third of the audience at least will present a curve that is obviously something versus time, not sales effectiveness versus delivery delay.

But as the conversation converges-- and it takes only about 10 minutes or 15-- one will find that essentially everyone agrees that there is some delivery delay that is short enough that the customer does not care. Now, this may be five seconds for some product across the drugstore counter. It might be a year for some piece of heavy machinery that you have to do your own planning for, and design, and build a building for. But for every product, there is some delivery delay below which the delay does not affect orders, and therefore, a horizontal line.

And then there is a region where the customer becomes sensitive, and orders are going to fall off as you have to wait longer, and longer, and longer. But they don't go through the axis. They tail off, because wherever you are out here, there will be some special customer that will wait a little longer, or you can sell a little harder, or be a little nicer, or be more persuasive. And you can probably get one more order out there.

So it's a curve that has this shape. Now you can establish that very solidly out of the sheer logic of talking about it. And in a particular case, for a particular product, you can probably say approximately what's the delivery delay at this point by just thinking about your own experiences. But it does not matter in this kind of analysis that you get this 50% point exactly right.
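That shape-- flat while the delay is below the customer's threshold, then downward sloping, tailing off without ever crossing the axis-- can be sketched as a simple table function. The threshold and decay constant here are arbitrary illustrations:

```python
import math

def sales_effectiveness(delay, insensitive_delay=2.0, k=0.5):
    # Flat at 1.0 while the delivery delay is short enough that
    # customers do not care; beyond that, a downward-sloping section
    # that tails off asymptotically toward zero (there is always some
    # customer who will wait a little longer).
    if delay <= insensitive_delay:
        return 1.0
    return math.exp(-k * (delay - insensitive_delay))
```

The exact 50% point does not matter; what matters for the dynamics is that the downward-sloping section exists.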

The thing that matters is that there is a downward sloping section. Because it is that downward sloping section that forces the market demand down to the capacity of the organization. Most companies think that the customers supply orders, and they are independent of the company, and the company simply reacts to orders.

In fact, the market is reacting to the company. And this is a powerful function in all markets.

What happens to you if you call up an airline to make a reservation? First thing that happens is you're put on hold. You call up a competing airline going to the same city, and you're put on hold.

Here you have airlines sitting out there, all with empty seats, all giving you discounts to ride with them, all doing frequent-traveler things. And the market is dividing in proportion to their ability to answer the phone. In other words, they are shutting people off by the lack of capacity, the delivery delay, in taking an order.

That delivery delay can be products. It can be how long the switchboard takes to answer. It can be all kinds of shortcomings in the view of the customer.

And so what does that do? Well, in companies-- and even in economies-- it produces this kind of behavior. You start out with a nice, adequate production capacity, and have a good product, and sales grow. This is now a curve versus time.

Sales grow. And then, without people quite realizing it, they come to a capacity limitation. In fact, people like to have a capacity limitation, because a capacity limitation means a large order backlog. And a large order backlog is very comfortable. Because if I have 10 weeks worth of orders, I am immune to sudden drops in demand. And I think I would like to have large orders.

And so back in better times-- a few years ago, at least-- you could look through any 10 annual reports of corporations. And in probably seven of them, you would find the letter to the stockholders, written by the president, saying with great pride, "This year we have twice as big an order backlog as last year." Sounds wonderful, doesn't it?

What he's really saying is, we're making our customers wait twice as long this year as last year. And we are giving our market away to people that don't make them wait as long. That's what he's really saying.

And that is not perceived. I mean, perhaps it seems obvious to you. But it is just remarkable how hard it is for people immersed in the complexity of a management situation to see the kinds of trade-offs that they're faced with. They are so imbued with the idea that lower prices produce sales that they will reduce price to increase sales, and increase the order backlog, and slide out along this curve by just a corresponding amount.

That reduces profits. That reduces the ability to expand. But by reducing price and giving up longer delivery delay, you have a well-balanced system that leads to the kind of stagnation on the previous slide.

It's very important to be able to include psychological variables in models of human systems. Here's one of eroding goal structure-- very common in all of our social systems. A situation where we have some present condition-- it may be product quality, it might be that delivery delay we talked about, it might be the integrity of the organization-- but some present situation which we compare with the present effective goal to take some action, an action based on the difference between how things are now and how we would like them to be.

Now, how do we decide what we would like them to be? Because goals don't come down from heaven. They get generated somewhere.

And one of the ways that they get generated is to look back at what we've done in the past-- the traditions of the organization. And what does the tradition mean? The tradition means the average of what we have done in the past.

So the present condition gets averaged and delayed to become the traditional condition. And that's one of the inputs to the effective goal. The other input I will call the absolute goal.

Now, there is no such thing truly-- absolute goal. But the absolute goal held by certain very strong and perceptive managers is a goal that will transcend difficulty-- a goal that they will adhere to, and stay with, and come back to even after several years of failing to meet it. They know it's their goal. They want to come back to it.

But many managers don't have that. If you do not have an absolute goal here, and the traditional goal becomes the effective goal, then you see what happens. As we fail to meet our goals, the present condition is less than we would like.

The goal goes down. The effective goal goes down. We're always under pressure. That keeps us from quite meeting our effective goal.
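That eroding-goal loop can be sketched in a few lines. This is an illustrative model with made-up parameters, in which the only input to the effective goal is the tradition-- an average of past conditions-- with no absolute goal to anchor it:

```python
def eroding_goals(initial_goal=100.0, shortfall=0.95, smoothing=0.5, years=5):
    # Each year the present condition falls a little short of the effective
    # goal; the tradition (a smoothed average of past conditions) then
    # becomes the new effective goal, so the goal ratchets downward.
    goal = tradition = initial_goal
    for _ in range(years):
        condition = shortfall * goal                       # never quite met
        tradition += smoothing * (condition - tradition)   # averaging/delay
        goal = tradition                                   # no absolute anchor
    return goal

print(eroding_goals())  # the goal has eroded below its starting value
```

Holding `goal` fixed at an absolute value instead of replacing it with `tradition` breaks the downward spiral-- which is the difference the strong manager's absolute goal makes.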

And so the present condition, which is a little less than we would like, becomes the tradition, which becomes a lower goal. In some new companies, you can see this spiral to extinction go on in a matter of three or four years. A company will start with the idea, we're going to sell the best product, the highest quality, the promptest delivery, the best service, the lowest price.

I've already described an impossible circumstance, but that's what they will start with. And then they find life is tough. And next year, they don't quite meet it.

And they tell you, well, life is tough. We can't quite meet it. But people don't like to have the dissonance in their life.

And if you can't meet it, there's a tendency to adjust the goal to suit the reality. And a year later, they'll say, well, we were wrong about that. You really can't do it. That's not the nature of the market. We had to reduce our goals.

Two years after that, gone farther. Four years, they will deny they ever said what they said the first year. Because now they have come completely at ease in a downward spiral-- eroding set of goals. Airplane pilots have a term for this.

It's called the deadman spiral, where you fly into a fog bank without blind-flying ability or instruments, and you feel that you're diving. And you pull back on the stick. And that doesn't seem to get you out of the sense that you're diving. You pull back further.

Well, the whole thing is being misperceived. You weren't diving. You were in a tight turn.

You pull back on the stick, tighten up the turn. You keep doing this until the wings come off. And this same sort of thing goes on in our corporate and social systems.

A word about the nature of our information sources-- I have said already something about the mental database. Let's look at three kinds of information-- the information that collectively we carry around in our heads, the information that is written down in newspapers and journals and libraries, and the information that's available to us in measured numerical form. There is vastly more information in the mental database-- orders of magnitude more-- than there is in the written database.

There is no set of written instructions which you could turn over to a set of novices and expect them to run MIT. There are no written instructions for how to build an automobile. These things are all learned by being there, by apprenticeship. One learns about some of the underlying science, but when it comes to really doing important things, there are no written instructions.

And in turn, the written database is vastly richer than the numerical database-- again, by orders of magnitude. There's been a tendency in the social sciences to focus largely on the numerical database. It is my feeling that one is thereby excluding the vast amount of information that the real world operates on.

And if you're going to model the real world, you should use the information that the real world is using. So we place a heavy dependence on the verbal descriptive interview, being there kind of information. That's looked upon with considerable skepticism by many people. They say it is subjective, intuitive, non-scientific.

I would suggest that science itself is nonscientific. There are no 10 rules that I know of that will guarantee that you get a Nobel Prize in physics. On the frontier of science, it is highly nonscientific. It is subjective, intuitive.

And so is, I think, dealing with any of the realities of the world around us. We have no objection to using numerical data, and we do wherever it's available. But I'd simply say that's not where most of the information, in fact, resides.

Let's look at our mental images in three other dimensions. If we would divide everything we know about the systems around us into three categories, one category would be what we have observed or know about the structure and the policies-- who's connected to whom, what flows where, and why do people do what they do in the short run, in the immediate instantaneous situation. Then we have expectations about the behavior that will arise from this observed structure. And then we have observations about the actual behavior.

Congress passes a law because they think the law will lead to some expected behavior. And it doesn't happen. It leads to some other behavior.

And when that happens, there is almost a universal tendency to say, it didn't work out right because we must have been mistaken about the parts. We must have been mistaken about the policies, the connections, or the structure. Indeed, that's not generally the trouble.

If you take this observed structure-- what people know to be the policies and structure of the system-- and put it into a computer simulation model, you get the actual behavior. The dissonance, the discrepancy in this diagram is not between the observed behavior and the actual. It is between the observed behavior and what we believe that structure implies.

Or put another way, the discrepancy is in the solution to that 20th order, nonlinear differential equation. We know the pieces. We expect it to behave in a certain way. That expectation is incorrect. And actually, it behaves in a way that is quite consistent with the structure that we know, if we come to the point where we can see and understand the behavior that is implied by that structure.

I'd like to stress a little bit more the idea of the integrations or levels in a system. Integration alters the time shape of something that goes into it. If you pour a constant stream of water into a barrel, the water rises continuously. The constant stream of water leads to a rising level in the barrel. An increasing stream of water leads to a parabola, an upward-sweeping curve.
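Those time shapes fall straight out of a discrete-step integration. A toy sketch, not from the lecture: a constant rate accumulates into a linear ramp, and a steadily rising rate into an upward-sweeping, parabola-like curve.

```python
def integrate(rates, dt=1.0):
    # Accumulate a flow (e.g. a stream of water) into a level (the barrel),
    # one time step at a time.
    level, history = 0.0, []
    for r in rates:
        level += r * dt
        history.append(level)
    return history

constant = integrate([2.0] * 5)                     # → [2.0, 4.0, 6.0, 8.0, 10.0] (a ramp)
increasing = integrate([float(t) for t in range(5)])  # → [0.0, 1.0, 3.0, 6.0, 10.0] (sweeps upward)
```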


If there are questions, I'll be happy to do what I can for them.

AUDIENCE: [INAUDIBLE] seems to me that the freshness of your insights into the way some systems behave is really remarkable. And I'd be curious about how you research the decision rules that people follow-- to what extent you do it by setting up simulations, and to what extent you and your colleagues do go out and sit down with the people who are making decisions, and try to figure out how they do it.

FORRESTER: Question is, how do we determine the way people react-- the policies they are following. And both of your suggested answers are appropriate-- going out and talking with people, and also setting up counterparts in the laboratory. Initially, we went out almost exclusively, and depended on talking with people.

Now, I came into this myself with a considerable experience with research organizations and manufacturing organizations-- 15 years or more of experience. So that quite a lot of it I could draw from my own knowledge. But then we would go out and we would talk to people in companies.

And we discovered that you could go into a company that had serious, but widely known difficulties-- falling market share, high instability of employment, the thing that everybody in the company knew about. The community knew about it. You read about it in The Wall Street Journal. No mystery at all as to what the symptoms were.

You go into the company. And of course, everybody is doing the best they can to solve the problem. You sit down with Manager A, you talk about what goes on at that point.

Now, this can be a half a day. It can be many half days over several months. You learn a lot by just sitting there quietly, and saying nothing, and watching who telephones, and how the person responds. What meetings does he have? Where does he go? What happened at them?

You learn where the power centers are, who's afraid of whom, where the influence is, find out what people have done in various crises in the past. Then you can go to Manager B, and talk to Manager B about Manager A. And you will get essentially the same story.

There is a high consistency through the organization-- the way people see themselves and each other. Furthermore, it makes sense. And there's no reason-- I mean, these have to be penetrating discussions. And you have to push people beyond the simple answers.

And you talk to them in the context of your knowledge about feedback loops, and systems, and the kinds of things that have to be going on as a matter of logical necessity. So these discussions bring out things from the individual that the individual is sometimes surprised he knows. Because he had never had any real chance to think about it, but yet knew what you needed to know.

You then take this conversation, and you reshape it into a set of computer instructions, like I put two examples up a moment ago. And now you've made that conversation very explicit. The spoken language is very ambiguous.

And you say this is more important than that. Well, what does that mean? You ask a person how much more important. And is it more important than this? Under which circumstance?

And you begin to get an idea of a quantitative nature of what that person is saying. You convert it into computer instructions. And you go back, and now you back translate.

And you can back translate into a very precise English. Because you know exactly what you've said here. And you can be very careful and say exactly that in the conversation.

And ask the person, how is this? Does this capture what you feel is the essence of the situation? Well, they will say that you've overemphasized this, or, I forgot to tell you that. But there's a fairly quick convergence to where they essentially say, it's as good as we know.

Then you put that either into a group role-playing, like this team of people that I gave you the game board for. Or what is much more practical, you put it into a computer simulation model. And generally you'll find that these policies that people gave you, and which they gave you because they assumed those policies would contribute to solving the problem-- you put those policies together and you have a laboratory computer model that generates the difficulty they are in.

In other words, it is absolutely implicit in what they know they're doing that they shall have the trouble that they are experiencing. But bear in mind, you're talking about a 40th order nonlinear differential equation. You should not criticize them for not realizing that they have solved it incorrectly in their heads.

And so then you can look at what there is. And then in a computer model, with enough time and skill, you can always find out why it's doing what it's doing. And then you can begin to work toward finding different structures.

Sometimes one of the easiest things to do is to eliminate some information stream that is being used. Very often, people use information they shouldn't use. And you can't get them to stop by saying you ought to use this instead. The only way you can get them to stop is to take away the one that they're using. And then they will turn to something else. And they might turn to what you say they should use.

But you can get vast amounts of information-- very relevant, very realistic-- and information that, in fact, will produce models that behave like the organization is behaving. Now, John Sterman and others have followed that other track that you mentioned, which is to carry this more into the laboratory, and set up things like this Distribution Game and others to find out if people under those circumstances in fact react the way we say they do from having gone out and doing the interviews. And in general, they do. And so you can get both a laboratory test and the real life inputs, and begin to tie them together.

This is a very new field. It's been in existence only 30 years. And I gave you 30 years of MIT history in computers before we had the first IBM machine here. We have 30 years in this field. And I would say we are probably at about the point that computers were around 1960.

AUDIENCE: I've always been fascinated with the tendency [INAUDIBLE] queuing behavior. You're talking about, for example, [INAUDIBLE]. But there is a tendency to put people waiting on line, which creates such a tremendous inefficiency. And it seems to be a very common solution for these things. I wonder if you've dealt with that. [INAUDIBLE].

FORRESTER: Why do we always make people wait? Is that--

AUDIENCE: There's at least a [INAUDIBLE]

FORRESTER: Well, there is a comfort in having a large order backlog. There's a comfort in having people waiting at the counter of the department store. It suggests that there is a demand. It suggests that you can keep your clerks busy. But it also suggests that the customers will go somewhere else, if they have a chance where people don't keep them waiting so long.

This is a remarkable situation. I [INAUDIBLE] think of your point-- people waiting. I was talking to the vice president of one of our big machine tool companies about six years ago. The news on CBS this winter had an item that said, we've now passed the point where more than half of the machine tools sold in this country are coming from Japan.

Six years ago, I talked to the vice president of a machine tool company. And it was very brief conversation in a hallway. And he just casually said that for all practical purposes, they and other American machine tool makers were out of business in Seattle.

So I said, why? And he said, well, if you're in Seattle and order an American machine tool, it will be 18 months before it's delivered. It will be installed by a crew that does not know how to make it work quite right.

And when you need service, it will take six weeks. Says if you're in Seattle and order a Japanese machine tool, it will be delivered the next week out of a Seattle warehouse. It will be properly installed. And if you should need service, it is overnight.

No mention of price, and end of the story. There was no moral in that for him. He was just describing the state of the world.

Now, why was there no moral? I do not know. Is it that he didn't see the point? A little hard to believe, but maybe.

Or was it that in his culture, he simply couldn't do anything about it? Could he not get the resources? They had probably tried to solve this falling market share by lowering price.

By that time, you see, they do not have the resources to have the installation crews and the warehouse full of product, and the expert maintenance people. And so they gradually drive themselves into a corner from which it is very hard to escape. They have created a box for themselves that is very hard to get out of.

And a great deal of the Japanese incursion into the US market is simply that. You can find Japanese products. And they suit your purpose. And you cannot find an American-made counterpart. It doesn't exist.

And so the US is more and more causing people to wait while the people go somewhere else. It's a lack of perception, I think, of what we were talking about this afternoon. And it may seem to you almost unbelievable that that lack of perception could exist.

But I can tell you that it is bordering on the impossible with many managements to tell them or convince them that there is a trade-off between price and delivery delay. They think that price is all-important. And delivery delay does not matter, and that there is no such thing as that trade-off. And you would just be amazed at how hard it is to make that point.

AUDIENCE: I'd like to ask the other side of that question. A few years ago, I listened to a Dartmouth business professor talking about innovation. And he had been a regulating bureaucrat.

And I asked him how, in a tightly controlled regulatory situation such as telephone utility, you could have any innovation. He says, that's very simple. We simply put off our regulation by three years.

OK, now, that was, I think, somebody stating the delay. We'll put enough delay in the loop so it now works. Would you care to comment on that?

FORRESTER: Well, there is nothing that is either intrinsically good or bad about delay, or integrity, or anything else you want to name. You can have very successful systems, like the Mafia does, at a certain kind of integrity and regard for the law. It works fine, as long as the rest of the policies are consistent.

And there are delays that are essential. There are delays that are good. There are delays that are bad. It depends on where they are in the system.

And what you've just said about delaying the regulation seems like an interesting idea that I had not heard. Of course, another part of the success of the old telephone system before the break-up lay in the incentive structure within Bell Laboratories back through the years. Because the designer of a piece of equipment was responsible for its working in the field.

And that meant he was responsible for Western Electric being able to produce it. He was responsible for its being produced properly. And when it went into the field for the first several installations, he was going to be there to be sure it worked. And that gives a person an entirely different view of what they're designing than someone who thinks that he's just producing it as a set of papers to be turned over to someone else. And so, very important to successful organizations are the motivation and incentive structures that are built in. Over here.

AUDIENCE: Do you find as an outside investigator your teams going into a company, that you have an easier time solving the problems of a company because you have a more complete set of state information than the managers share among each other?

FORRESTER: The question is, do we from the outside have a more complete insight into a company-- a better body of data-- than those within the company? I would say, generally yes. Because people on the whole inside the company don't spend very much time listening to one another. They don't go throughout the organization from top to bottom.

And you would be amazed at the compulsion people feel to talk when they are listened to. We did some experiments a number of years ago, long before solid-state electronics, in which we were tape recording the conversations with managers in one of our big grocery chains. And the equipment was a big box of stuff here, and a tape recorder there, and a microphone sitting up in front of the person.

And we told him these are not confidential discussions. If we find that they are interesting, we are going to play parts of them for the top management. And that was understood. And this person will sit there right in front of the microphone and say, I wouldn't want anybody to know this, but-- I mean, you just listen to those tapes, and you'll get the feeling this is the first time the person has ever been able to answer a question and say all that he wanted to about it-- that he was not being cut off and told that what he really means was something else.

AUDIENCE: Follow-up comment on that-- doesn't that mean that some of the problems that people have making decisions [INAUDIBLE] the lack of being able to integrate the information by intuition with the lack of information [INAUDIBLE] associates?

FORRESTER: Well, there are two problems here. They may not have, by any means, the whole system structure as it actually operates. And then if they did have it, they don't have the ability to solve for that dynamic system in their heads.

And then you'll sometimes find it the other way around. You will find that the top manager understands quite clearly what the real problem is, and is absolutely baffled at how to do anything about it. I'm reminded of one company where the problem-- the whole tradition of the company had been to hold down inventories. There had been generations of management that had put pressure on holding down inventories, and having high, rapid turnover in inventories.

Then you get a finance department that does not want to write off obsolete inventory. And so you've got $50 million worth of obsolete inventory. And you're supposed to have a turnover of 10 times a year.

You've got to do it all out of that small fraction of inventory that in fact is active. The dead inventory is one part of that equation. And it becomes nearly impossible.

And so you keep using less and less inventory. You turn it over faster and faster. And you discover that everybody is out of material most of the time. And the costs go up tremendously.

I worked with one situation where we estimated-- and no one in the company cared to disagree-- that if they would raise inventory carrying cost by $50 per unit of product, they could reduce labor by $500 per unit. The whole situation was starved for inventory. And everybody got used to it.

I talked about eroding goals. Three or four years in that company, and even an outsider came to accept the situation. They were appalled and shocked when they first got hired and went into it. Four or five years later, well, that's just the way it is here.

When I walked through that plant the first time-- I'm not a manufacturing person, but I've seen factories-- and I knew something was wrong by the time I'd walked 500 feet through that factory. Machines everywhere, a third of them shut off, not running, a third of them running and nothing in them, and only a third of them doing anything. And why? Well, there wasn't any material to work with.

Now, you might think that's impossible. But it's not. In that company, I was with the production manager once when he was absolutely frantic to try to get some things out of the production line. And he was out of certain parts.

He called up the supplier, and laid him out, and told him that he had to have these. He was holding up the production. And the supplier said, well, our truck with that material was there this morning. Your financial vice president said he didn't want it showing on his inventory, and take it away. And when it gets back here, I'll turn it around and send it to you again.

I came back and told Bill Pound, who was Dean of the Management School at that time, about my experience. And he said, well, he was at Western Electric the previous week, and exactly the same thing had happened there. Now, this goes on in some of the best companies. I mean, these are both top-of-the-line companies in their industry. So you can imagine what's happening in those that aren't so successful.

AUDIENCE: How wide a dichotomy do you find between the policies people say they follow and the way they actually behave?

FORRESTER: What dichotomy between the policies they say they follow and how they actually behave? There may be a large dichotomy between the first quick answers and how they actually behave. And you have to understand how to penetrate beyond that.

And you will get answers that you know just logically are either not correct or certainly not complete. And so you ask, well, but what about this circumstance? Oh, well, we would do something different then.

And you keep unfolding this picture until you have dug to the bottom of it. You will not get the kind of answers you want from a questionnaire. I mean, this whole statistical questionnaire, send a student out to ask a few questions and follow a questionnaire-- that will not get it for you.

You have to understand the nature of systems in general. It's very good if you know something about the particular system, and sometimes you do, and occasionally you don't. But you must dig for this kind of information.

I would say after you've done that, it's very good, but it may be very different from-- well, it'll be very different done inside the organization from what it is at the top. You will find the image of how this organization works from the chairman or president's position may be very, very different from the image of how it works from down inside the organization. And so you don't get these pictures, you don't get these policies from any one place. You've got to really go top to bottom.

And this is where you come to a better understanding than anyone in the company is apt to have. Because first of all, people will talk to the outsider in a way they would not talk to an insider. I mean, this idea I wouldn't want anyone to know but-- they will do that for an outsider, particularly if they are assured that the details aren't going to be carried away to other people.

And so you can get-- if you know how to go about it, if you have experience in going about it-- and people develop this experience. It's a skill. It's a profession.

And you wouldn't expect to be a dentist or a medical doctor without training, and study, and internship. It takes those kinds of things. But it, I believe, can be done very, very effectively and for very powerful results.

AUDIENCE: The point at which you were talking about generic structures emphasized that there were two elements that were common regardless of the business. Were you talking about policies, structure, or the state of the problem? And which is more difficult to modify?

FORRESTER: Well, I said there were the two parts to any structure-- the policies and the system states or integrations. That was one theme. That's at the detailed level of the nature of the structure of a system.
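[The two parts Forrester names here-- system states accumulated by integration, and policies that turn the observed states into action-- can be sketched in a few lines of code. This is an editorial illustration, not part of the lecture; the inventory setting, variable names, and the 20% adjustment fraction are invented for the sketch.]

```python
# Minimal stock-and-flow sketch of Forrester's two-part structure:
# a system state (inventory) is an integration of flows, and a
# policy computes the action (production rate) from that state.

DESIRED_INVENTORY = 100.0
ADJUSTMENT_FRACTION = 0.2   # fraction of the inventory gap corrected per step
SALES_RATE = 10.0           # constant outflow, units per time step

def production_policy(inventory):
    """Policy: replace sales and close a fraction of the inventory gap."""
    return SALES_RATE + ADJUSTMENT_FRACTION * (DESIRED_INVENTORY - inventory)

def simulate(steps, inventory=50.0, dt=1.0):
    """System state: inventory integrates (production - sales) over time."""
    history = [inventory]
    for _ in range(steps):
        production = production_policy(inventory)
        inventory += dt * (production - SALES_RATE)  # the integration
        history.append(inventory)
    return history

path = simulate(30)  # inventory climbs from 50 toward the desired 100
```

Changing the policy function, while leaving the integration untouched, is exactly the kind of intervention the lecture describes: the structure stays, the decision rule changes.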

I would say when it comes to generic structures, I said there weren't very many-- wouldn't require very many-- to cover a high percentage of the cases of interest. We don't have them all yet. But I make the somewhat casual estimate that if we had about 20 of the most important corporate structures, this would cover perhaps 90% of the situations that managers encounter.

Those may not be the right numbers, but it's the right idea. And so it's an entirely manageable stable of generic structures that will carry one a long way. On the other hand, I'd say that if you want to go all the way, it is a profession like medicine, where an hour's first-aid course is useful. I would hope you would carry away some ideas from a discussion like this that are interesting and useful.

But at the other end of the medical spectrum, there's doing a heart transplant or a kidney transplant. And you have sort of the whole spectrum in between, of progressively more insight, progressively more ability. You can begin to apply it wherever you are, but it's a kind of an open-ended profession.

And that's why, I think, we are really now just at the point where it's going to open up. There are many universities around the world teaching in this field. I was in China last June. They claim 1,000 people studying system dynamics. It's very early, very elementary, but there's tremendous interest there.

Japan has usually been the first language into which our books have been translated. The Russian edition of my World Dynamics book has a foreword by Gvishiani, who is Kosygin's son-in-law. This is a field that is thinly spread, just taking root, but it is seen as very important by people who have gotten close enough to understand what it is. Yes.

AUDIENCE: On that note, can you discuss your projections of the role of system dynamics versus econometric models for national economic policy [INAUDIBLE]?

FORRESTER: Can we leave that till next time? Because I will be talking about our work on the national economy as one of the main things in the next lecture. The short answer is that the approach here is very different from the econometric models, which tend to be based on historical data, and much less on this looking inside the system-- at its structure and, shall we say, its realistic policies of how the system actually operates from the inside-- and at the complexity that you probably must deal with in a national economy.

If you take the System Dynamics National Model that we'll talk about next time, it's a 200th-order, highly nonlinear system. It has a huge number of potential modes of behavior. Being nonlinear, those modes can interact with each other, as they do in real life. And you find insights about the nature of the economy that generally have not previously been available. But I will leave that till next week.

AUDIENCE: Have you encountered any important aspects of human behavior that are impossible to model quantitatively?

FORRESTER: Well, I would give a rather sweeping, and perhaps dangerous, answer to that. Is there anything about people you cannot represent? I would say you can put into this kind of a model anything that you can explicitly describe.

If you can't say anything meaningful about it, if I can't push you into saying that that characteristic does something, that characteristic is more than some other characteristic, that characteristic has some effect-- if I can't push you into that kind of a conversation, then I'm not going to be able to deal with it. But if you can say meaningful things, then the answer is yes. I mean, as soon as you say "more or less"-- more important, less important, bigger, smaller, more influential-- then I'm on the road to getting my arbitrary numerical scales and things that I need to set up an actual model.
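[The move Forrester describes-- from "more or less" statements to an arbitrary numerical scale-- is, in practice, often done with a table function: a qualitative judgment about a relationship is written down as a handful of interpolated points. The sketch below is an editorial illustration; the backlog/delay relationship and all its numbers are assumed, not taken from the lecture.]

```python
# Editorial sketch: a "table function" turns a qualitative judgment
# ("when backlog gets high, delivery delay gets much worse") into an
# explicit, adjustable numeric relationship via linear interpolation.

def table_lookup(x, xs, ys):
    """Piecewise-linear interpolation over the points (xs[i], ys[i])."""
    if x <= xs[0]:
        return ys[0]
    if x >= xs[-1]:
        return ys[-1]
    for (x0, y0), (x1, y1) in zip(zip(xs, ys), zip(xs[1:], ys[1:])):
        if x0 <= x <= x1:
            return y0 + (y1 - y0) * (x - x0) / (x1 - x0)

# Assumed shape: normalized backlog (0..2) maps to a delivery-delay
# multiplier.  The exact points are a debatable, explicit statement --
# which is the point: now there is something concrete to argue about.
BACKLOG    = [0.0, 0.5, 1.0, 1.5, 2.0]
DELAY_MULT = [0.8, 0.9, 1.0, 1.4, 2.0]

table_lookup(1.25, BACKLOG, DELAY_MULT)  # halfway between 1.0 and 1.4
```

Writing the curve down does not make it correct-- but, as Forrester says, it makes the conversation explicit enough to test.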

Now, setting up the model doesn't make your conversation correct. But I can set up a model of what you're thinking and saying. And we can see what happens.

And that's part of the educational process. Because as we dig into our mental models, we find that there are great inconsistencies between what we know about the parts and what we believe to be the behavior. And so, first of all, you don't try for accuracy. You try for explicitness, to define where the inconsistencies are, and to begin to clarify the nature of the debate.

Now, a lot of people are very reluctant to go into that sort of a domain. Because you will build a model that may be wrong. But it's part of the process of learning.

You build it. And you show it to people. And they say, look, you're absurd, because of such and such. And maybe they're right, in which case you've made a step forward. And you have to be willing to take those intellectual risks, and see where the chips fall.

AUDIENCE: Getting back to your discussion of inventory-- what happens when you [INAUDIBLE] and the results run contrary to, say, a deep-seated management belief, like the example you gave where they wanted to keep their inventory exceptionally low?

FORRESTER: Question is, what happens in the conflict between, shall we say, the logic of systems and past practice? Well--

AUDIENCE: Common practice.

FORRESTER: Common practice-- very often, common practice wins. And I would say that's one of the great frontiers for learning that's now before us. How do you, in fact, get acceptance, motivation, response to what is intellectually known?

I worked for one company a number of years where we built a very complicated model-- production, marketing, some of the psychological stresses that existed between finance and production, their relationship to customers and competitors, their ability to borrow in Wall Street. Nobody in that company that I know of ever differed with any of the details of the model. Nobody differed with the fact that the model showed the troubles the company was in-- a falling market share, unstable employment. Nobody intellectually differed with the fact that certain changes in policies would correct both of those difficulties.

But the management had a great deal of difficulty acting on it. The problem was that the recommended policies were diametrically the opposite of what three generations of top management had made public speeches about as the basis for their success. The three generations of top management-- past management-- were all alive, all in town, and all on the board.

In spite of that-- I think as a gesture of goodwill to me-- they agreed to try it. What the policy required was that they keep up production in the face of a recession: in a downturn, with sales falling, keep up production, build up inventory, have that inventory for the upturn, and reduce the amount of layoffs in the meantime. Well, in one recession, they said, okay, we'll try it.

We will only cut back half as much as we were planning to. And we will allocate money to put that extra product in inventory. Well, as one person in the company said, every time they started to put product in the warehouse, some damn fool went and sold it.

And the president was quoted to me by another person as having said that this alone meant $10 million of net profit after taxes that they otherwise would not have made, and that it was the first time they had gone through a recession without losing market share. And yet in the next downturn, they did not have the courage to do it again. So that is an indication of how deeply embedded these beliefs are, and of why, I think, maybe the real answer lies at the level of fifth-grade and high-school education-- building a sense that there is something to these systems, a sense that you then carry into politics and business.
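[The counter-cyclical policy in this story can be shown in miniature. The toy simulation below is an editorial sketch-- the demand numbers, the capacity limit, and the two policy functions are all invented, not figures from the company Forrester describes. It compares cutting production with sales against holding production steady through a downturn.]

```python
# Toy comparison of two production policies through a demand cycle.
# Unmet demand is treated as lost sales (lost market share), because a
# plant at capacity cannot catch up in the upturn without inventory.

CAPACITY = 10.0  # the plant cannot produce above this rate

def run(policy, demand):
    """Simulate shipments from inventory + production under a policy."""
    inventory = shipped = lost = 0.0
    for d in demand:
        production = min(policy(d), CAPACITY)
        available = inventory + production
        ship = min(d, available)
        shipped += ship
        lost += d - ship          # demand that could not be served
        inventory = available - ship
    return shipped, lost

# A downturn followed by an upturn that exceeds plant capacity.
demand = [10] * 5 + [6] * 5 + [14] * 5

track_sales = lambda d: d         # conventional policy: cut production with sales
hold_steady = lambda d: CAPACITY  # the recommended policy: keep producing, build stock

run(track_sales, demand)  # → (130.0, 20.0): sales lost in the upturn
run(hold_steady, demand)  # → (150.0, 0.0): inventory built in the dip covers the surge
```

The steady policy "loses" by carrying inventory through the recession, and wins when demand rebounds-- the same trade-off the company's management could not bring itself to repeat.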

Now, it's not all that bad. But I'm simply agreeing, there is a very major problem there. And there's a big difference between logic and its execution. And this is something for psychologists, and social scientists, and management schools to try to dig into, because there is not much use in teaching managers or anybody else to do things that they won't do.

I think maybe we've come to the point. Let's see, was there something else that we were supposed to do at this point?

FRIEDEN: If there's another question, if you want to take them.

FORRESTER: Well, all right. I'm in no hurry myself.

AUDIENCE: I was wondering how you decide what the appropriate level of detail for a model is, where on one hand you have so much detail that you have a hard time understanding what's going on, and on the other hand you have too few elements to capture the reality?

FORRESTER: The question is, what level of detail for a model? And that's one of the most subtle, difficult professional judgments in the field. Because there are no criteria that will explicitly give you the answer to that question.

Some rules of thumb are useful. I would venture out on a limb and say that almost any system, from one living cell to a corporation, probably ends up with a model of about the same complexity-- that in fact, you go from the level of the symptoms and the problem down two or three layers. And you've gotten enough to capture what is causing that behavior. Now, you don't go down to the individual person in the company. You don't go down to the cells that make up that person. You take a certain broad brush through it.

But if you were building a system dynamics model of what goes on at the interface of a transistor, as one Electrical Engineering graduate student did a few years ago, then you deal with clouds of electrons. And interestingly enough, he'd been a doctoral student in Electrical Engineering and Solid State Physics. And he went through building this model based on the integrations-- the control that was going on.

And he said, it's the first time I ever really understood what was happening. Because he'd seen it as mathematical equations. But they had not coupled to sort of a physical structure of what was happening.

The matter of aggregation-- the complexity of a model-- is one of the research areas that's not well codified. We don't have rules. But in all of these things, science progresses by first operating at the level of art, then asking the question, what is that art, then beginning to define the rules and the logic for it, and then moving the intuition out another stage.

And in social systems, there is much less that is rigorously known to a high precision, high accuracy. But on the other hand, you don't need high accuracy to greatly improve insights in a situation that is that poorly understood. I would say that what we're doing in the System Dynamics National Model at 200 integrations may be overkill.

And Sterman and others have taken individual modes of behavior and wrapped them up in very simple models that a high-school student, or a player of a one-person game, can sit down with and see why that behavior is occurring. Now, it's nice to have the more complex system, to know that the simple one is doing the right things for the right reasons. But once you have the complicated system, you usually get the best insights and education by pulling out of it a simpler essence. And I would say those simpler-essence models tend to be 3rd to 20th order, not a lot higher.


FRIEDEN: Thank you very much, Professor Forrester. That was a very stimulating presentation. And it gave us some real insight into what you've been doing. And we look forward to the other half next week-- same time, same place.