Computation and the Transformation of Practically Everything: Physical Sciences and Engineering
DEVADAS: This is the session on physical sciences and engineering. We have two talks in this session. Unfortunately, because of the government shutdown, Dr. Gabriel wasn't able to make it. Our first talk is by Chuck Vest, President of the National Academy of Engineering and, of course, President Emeritus of MIT. It's my honor to introduce Chuck, who is going to be talking about computation, engineering, and some of his personal experiences.
VEST: Well I've learned a lot this morning. But one thing I learned from Ken Gabriel's absence-- I've often jokingly posed a question for people. If the government shut down how would you know? Now we know. The speaker doesn't show up.
I am truly, truly honored to be part of this celebration and introspection about CSAIL and computing and computer science and engineering in general. And I especially want to salute my dear friend, Ed Lazowska. I really like your left side of the country view of the history of computing. Thanks, Ed, very much. And all three of those talks were just marvelous.
Computation, as the title of this symposium indicates, really has transformed virtually everything. But I want to give you a very particular example. About two weeks ago, I suddenly woke up in the middle of the night realizing I was dreaming. Well, that's not unusual. But what startled me was I was dreaming in PowerPoint. It can't get any worse than that. And I decided right on the spot, okay. Next talk, I'm going to have written notes and I'm going to stand at the podium and there's not going to be any PowerPoint. So that will explain it.
Victor also asked me, when I said, what really should be my role in this? I'm certainly anything but a computer scientist. He said, well, I'd just like you to speak about the role of computing in all of engineering. Now the last time I got asked to do something like this was in early 2007. I was sitting peacefully in my office over in the Stata center. And I got a call from Venkatesh Narayanamurti, known to all of us universally as "Venky." And Venky explained to me that there was to be an official ceremony at Harvard celebrating the establishment of Engineering and Applied Science as a school in Harvard University.
Well, I told Venky I thought it was really terrific that after only 358 years Harvard had suddenly recognized the importance of engineering. But then he got to the real point. He asked if I would speak at this ceremony. Venky, I said, you can't ask me to do that. I'm an MIT guy. And he said, now, Chuck. You've just been elected president by the National Academy of Engineering. You have to be able to speak for all engineers in the nation. So this is kind of my second go at this.
But I still wasn't very sure what I could do, useful and productive, here this morning. But then I realized I am doing what I have, unfortunately, done for many years spent in administration and policy and so forth, and that is stand in front of a group of people who really know a whole lot about a subject and try to say something that they might find interesting. So over the years, I've developed a number of specific defense mechanisms for this. And here are five of them.
You can pose a really profound sounding question or two. And if it's good enough, people will think about the question and they won't realize that you're not saying anything intelligent about it. Another is to try to sound mysteriously wise. And my role model here, many of you will remember, was Peter Sellers in the movie "Being There." Remember this guy who was just totally isolated, knew nothing about the world. They put him on a television show talking about the future of business and economics. And he says, in the spring the flowers will bloom. And everybody goes crazy trying to understand what the profound significance of this is when really, all he meant was, in the spring the flowers are going to bloom.
Now, you can also use a timeworn quotation, something like, all politics is local. And then try to twist it around as if it had something to do with the topic. You can pose a topical question, perhaps relevant to current public and political debates. Or you can make an observation about something very macro-scale that's going on and try to sound like you know something. Well, I'm only going to use one of these on you this morning. And that's the one of posing a topical question.
And the question is this: why do we argue endlessly about rising health care costs instead of doing something that might reduce them? And here, I'm very serious for a moment. We have had in Washington, as you all know, over the last couple of years, an enormous, so-called health care debate. But it's not been about health care at all. It's simply been arguing about who's going to pay for the same old system and who's going to be left out.
And it seems to me that one of the most obvious ways to utilize systems engineering thinking and information technology to directly address a huge societal and public policy issue and to make life better for many people, would be to apply them to redesigning the structure that might actually create a healthier America. Now, I certainly don't believe that technology is the solution to all such things. But this seems pretty obvious to me that we could make some strong strides. It seems ripe for serious study and innovation, including, of course, the social science of how to get it into the public debate, how to deploy it in an effective and humane way, and so forth.
And actually, the National Academy of Engineering and the Institute of Medicine, our sister academy, intend to attempt to do exactly this over the coming years. And at a more modest level, I'd like to point out that the IOM and the NAE, the Institute of Medicine and the National Academy of Engineering, have just launched a student competition called Go Viral to Improve Health, the IOM/NAE Health Data Collegiate Challenge.
The idea is to get the kids engaged in social networking and building mobile apps, et cetera, to increase appreciation and awareness of the power of health data to identify problems and suggest solutions and inspire positive action at a community level. And in order to launch this, we're making available the whole Health and Human Services Health Indicators Warehouse and a number of other databases like this. So it's going to be a little attempt to engage young people in thinking about how to apply the new tools and technologies they're so familiar with to hit a really important social issue.
Now, fortunately while I was cowering under the suggestion that I was supposed to speak about the role of computation and all of engineering, I received another email from CSAIL. And embedded in the middle of it, I found this. What was it like before ubiquitous computing and infinite storage were available? How did your world change when they arrived? Now this is something I can deal with. So I'd like to share a few personal experiences, frankly reminiscences and observations.
Now think back a little bit, those of you who can think back this far. As a grade school kid in the late 1940s and early 1950s, I was usually found doing one of three things. Either I was standing next to our mailbox waiting for my Tom Mix whistling arrowhead to arrive, or maybe I was downtown at the shoe store zapping myself with unknown levels of x-rays looking at the bones in my feet. If I wasn't doing one of those two things I was usually daydreaming.
And by the way, I hope kids today still have time to daydream. Now, I usually dreamed about one of three things. One was being a pioneer or a colonial soldier trekking through the woods with my musket. The second was going by rocket ship to the moon. And the third was having a tiny TV or sort of a Dick Tracy wristwatch in my hand. Well, here we are, many, many years later. Men have indeed gone to the moon. I didn't. But I have met and worked with the first two men who did, and that's a pretty close second for a kid who grew up in West Virginia dreaming about these things.
And of course, the amazing developments in computing and engineering that we're talking about here today bring us something like this to carry around in my pocket that's actually far more exciting and presumably even more useful than Dick Tracy's wristwatch or the little tiny TV I kept wondering would come along someday. Well, I never did become a revolutionary soldier. But maybe in a future incarnation that might happen, as well.
Along the way, however, like all of us, I did go with the flow of the development of computing. So I thought about what really was my first computer. And I think the answer is pretty obvious. It was my K&E Log Log Duplex Deci-Trig slide rule. And some of the other guys, and they were mostly guys, had things like a Dietzgen 1733 Polymath Decimal Trig Log Log. This was, in those days the Dietzgen people and the K&E people, it was kind of like the PC people and the Mac people today. You had a great collection of folks devoted to one or the other.
And then the real oddballs would buy a Pickett. And Pickett was interesting because it was actually made out of metal instead of bamboo. Well, thinking about all these things, I also remembered one day early in my years here at MIT, I was sitting at a dinner of some sort. And I noticed that the guy sitting next to me, his name was Dietzgen. So I kind of looked over at him and I said, you're not, by any chance-- he said, yes I am.
So there again, I met the guys who went to the moon. I met one of the people who built modern slide rules. And by the way, my brother, Marvin, spent his career as an engineer at Pratt & Whitney Aircraft. He told me that they developed a panoply of incredibly specialized slide rules with all kinds of odd scales and configurations, some of them with multiple elements connected-- interconnected by springs, and so forth. They're all painstakingly created to do complex specialized calculations.
And of course, they perfected them just about the time that programmable computers came along to do the work in a much more effective way. My first digital computing experience was as an undergraduate at West Virginia University, probably 1961 or 2. I'm not exactly sure. I actually took a numerical math class, which was probably pretty new to an institution like that in those days.
And we got to play on the computer and run some simple programs. And also a couple of, in the engineering mechanics course, we did really sophisticated calculations like program the formula for a bending beam and calculate displacement under force and so forth. Probably using an IBM 790 I would guess. I'm not really sure. But I did become then, and later, a pretty proficient Fortran programmer.
So I just want to say that after I finish my term at the NAE and start thinking about what I'm really going to do next, if anybody's looking for a good Fortran programmer, let me know. I might be available.
Well, then I moved on to graduate school in 1963 at the University of Michigan in Ann Arbor. And there, in my first semester, even though I was a mechanical engineer, I went over and took a real computer science course from Bernie Galler, who I know many of you knew. He was one of so many individuals launched here by MIT who moved out to universities around the world. And later on, I got to know Bernie pretty well.
And I remember him telling me once that the first computer with a real memory he had worked on could hold, I think, eight bits in a mercury vapor tube. And he'd gone from that all the way to what we considered very modern computing in those days. And in addition to Fortran, I developed a little proficiency in MAD, the Michigan Algorithmic Decoder. And I've actually met people here at MIT who knew MAD and had done some work in it.
But moving from my dreams and daydreaming, there are also nightmares along the way, which I know many of you shared. And that was the whole deal of carrying a big box with hundreds of carefully punched cards over to the computer center, usually late in the evening, turning it in the window, and rolling around in your bed all night waiting to see what was going to come out the next morning. Because obviously, one little bug and the whole thing was ruined.
And I generated a lot of gastric acids in those days over that. And that went on for many years. And then even later, after things got real time and much simpler, it became a recurring nightmare. Any of you have recurring nightmares about taking your box of cards over to the computer center, or am I really weirder than I think?
Then I did a thesis on a problem in hydrodynamic stability that, to be honest, today might be a good homework problem at best. But in those days, with very scarce computing resources and so forth, it was really a lot of work. I used the Galerkin method and then solved those equations on the computer. And I can tell you exactly when I threw away the boxes of the decks of cards from my PhD thesis. It was in August of 1990, when we moved to Cambridge. I finally decided the time had come.
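The Galerkin method Vest mentions expands the unknown in basis functions and projects the governing equation onto each one. The boundary-value problem below is a toy illustration of the technique in Python, not his actual hydrodynamic stability equations:

```python
import numpy as np

def trapezoid(yv, xv):
    """Trapezoid-rule integral of samples yv over grid xv."""
    return float(np.sum(0.5 * (yv[1:] + yv[:-1]) * np.diff(xv)))

# Galerkin solve of a toy boundary-value problem:
#   -u'' = 1 on (0, 1), with u(0) = u(1) = 0.
# Expand u in sine modes and project the equation onto each mode.
x = np.linspace(0.0, 1.0, 2001)
u = np.zeros_like(x)
for n in range(1, 26):
    phi = np.sin(n * np.pi * x)
    stiffness = (n * np.pi) ** 2 / 2.0   # integral of phi'(x)^2 over (0, 1)
    load = trapezoid(phi, x)             # integral of f * phi, with f = 1
    u += (load / stiffness) * phi

u_exact = 0.5 * x * (1.0 - x)            # known closed-form solution
err = float(np.max(np.abs(u - u_exact)))
```

With 25 sine modes, the truncated expansion agrees with the exact parabola to well under one part in a thousand; doing this by hand, on a slide rule or an early machine, is what made such theses "really a lot of work."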
Then I went on as a young faculty member to do research work, even though I was a mechanical engineer, largely in the field of optical holography and holographic interferometry. And trying to build quantitative tools out of these techniques that had tended to be qualitative up to that point. And the reason I bring that up in a meeting about digital computation is that that field really evolved from the efforts to build big, I'll call them computers, to analyze radar imagery data.
And they went to optics in the early days of lasers because the information content and processing power that they could accomplish optically were far beyond anything you could do with the digital computers that were then available. Needless to say, times have changed a lot since then. And in this work, my graduate student, Don Sweeney, and I became the nth group to independently develop the idea of computed tomography. And n is a very large number, for those of you who know the history.
But what we did was just incredibly clunky and time consuming, gathering data and then processing when we started. It just seems unimaginable today. And by the way, I share the experience with probably just about every academic in the room that when I wrote my grant proposal to the National Science Foundation to take holographic interferometry and develop this technique to image things and to measure things in three dimensions, it, of course, got turned down because a reviewer said, can't be done.
And I remember very distinctly, one of them said, this obviously can't be done or it would have been done back in the 19th century. So I didn't get my money. Fortunately, the Navy helped me out a little later, and I eventually went back and did get an NSF grant.
And as a matter of fact, by that point I discovered that the problem actually had been solved a century or two before through what was then considered quite abstract mathematics, the Abel transform. And I actually wrote the first little paper that pointed out that beneath all of this stuff that was going on in tomography, mostly in the medical realm, was, in fact, the mathematics of the Abel transform.
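The Abel transform maps a radially symmetric profile f(r) to its line-of-sight projections F(y), which is what an interferogram of an axisymmetric field measures; inverting it recovers f from F. Here is a small numerical sketch of the forward transform (the Gaussian test profile and the grid are just illustrative choices):

```python
import numpy as np

def trapezoid(yv, xv):
    """Trapezoid-rule integral of samples yv over grid xv."""
    return float(np.sum(0.5 * (yv[1:] + yv[:-1]) * np.diff(xv)))

def abel_forward(f, r_max=6.0, n=2000):
    """Forward Abel transform of a radially symmetric profile f(r):
        F(y) = 2 * integral_{y}^{inf} f(r) r / sqrt(r^2 - y^2) dr,
    computed after the substitution r = sqrt(y^2 + s^2), which
    removes the square-root singularity at the lower endpoint.
    """
    y = np.linspace(0.0, r_max - 0.5, 200)
    F = np.empty_like(y)
    for i, yi in enumerate(y):
        s = np.linspace(0.0, np.sqrt(r_max**2 - yi**2), n)
        F[i] = 2.0 * trapezoid(f(np.sqrt(yi**2 + s**2)), s)
    return y, F

# A Gaussian has a known transform, sqrt(pi) * exp(-y^2), so the
# numerics can be checked against the closed form.
y, F = abel_forward(lambda r: np.exp(-r**2))
```

Medical CT generalizes this to non-symmetric objects via the Radon transform, but for axisymmetric flows the Abel pair is the whole story, which is the point of the paper Vest describes.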
But then, as you know, I unfortunately fell into the dark chasm of academic administration. But it ended with the absolutely remarkable experience of getting to come here to this great institution that we all love so much. I'm told I was actually the first president to have a computer on his desk. And I remember, about two years or so after I arrived, Jim Bruce, whom most of you know, plotted and printed in some newsletter a curve of the increase in e-mail usage since I had come to MIT, because I had let it be known that if people wanted to communicate with me and my colleagues, this was the way to do it.
Now, he always said that was my effect. I think it was really just happenstance. That's just what was beginning to happen in the world. But just to be safe, when I stepped down as president I did publicly apologize to everybody in the community for having started e-mailing around here in that sense.
But I want to tell you another thing, maybe another nightmare. And that has to do with administrative computing. Now, along the way we decided at some point in the 90s that we needed to move to things like electronic time sheets instead of all of this paper. And there was a guy down in the bowels of the bureaucracy who would, every week or two, send me an e-mail and say, well, we've got 90% of the people-- I mean, sorry, we got 10% of the people using electronic timekeeping, and then 20% and so forth.
We got down to the end, and he sent me an e-mail one day and said, every unit but one has now gone to electronic timekeeping. So of course I bit, and I asked him, okay, who's the one? The Laboratory for Computer Science. Where else? I assume that, even now, you have gone over here.
But I do want to tell another story that some of you know. Many of you will remember that in the 1990s the Clinton-Gore administration touted the development of the National Information Superhighway. Well, one day while this was all going on, our beloved Michael Dertouzos called me.
Chuck, my boy, he said, let me explain something to you. The National Superhighway is great. But the real power of the internet is global. There's a new thing. It's called the World Wide Web. It's been invented by a wonderful young guy named Tim Berners-Lee. He's in Europe. We need to bring him to the MIT Lab for Computer Science. Our purpose will be to create an international consortium to make Tim's vision and invention really serve the whole world. I need you to come with me to Geneva to meet with some important people of the European Commission. Here's what we're going to do.
Well, you all know what followed. In a selfless display of working for the common good, Tim Berners-Lee, Michael Dertouzos, the European Commission, the Lab for Computer Science, and many others came together largely outside the limelight to create the organizational infrastructure to really deploy the World Wide Web, as has been mentioned earlier. And the internet and the web, of course, did change everything for people like myself who are basically just users.
Changed how we work, how we buy things, how we learn, how we entertain ourselves. They've expanded and speeded up our contacts and interactions around the world in ways that only a few of you could have predicted. They ended isolation for many, as Nicholas likes to point out. And they have brought useful knowledge about things including about health care to many people around the world. Unfortunately, because they are tools in the hands of real people in the real world, they've also brought us entirely new forms of crime and fraud. What a shame.
But one thing they enabled us to do here at MIT was to launch my beloved MIT OpenCourseWare. Really one of the most satisfying experiences I've had in my career, working with you to launch this project and move it forward. Only the MIT faculty could have done this. And this is literally a historical and well-documented fact. We could spend a lot of time on that. In a few weeks, I think we will.
To be honest, it did not launch quite the revolution many of us had hoped for. But I will tell you it is having a very major impact and has definitely earned a serious place in the progression of education.
So this brings me to the present. The things I value these days are Skyping with my grandchildren. Computing is the only way I know to work, at all. I'm just observing the impact of social media. But to this morning's point, computing truly has transformed everything about the way engineering is done. And the impact, I think, is just in its early stages.
In the 20th century it seems to me that computing from a user's perspective has been dominated by information flow, by the storage, transmission, and analysis of data, using information technology, the internet, and the World Wide Web. In the 21st century I suspect computing will be dominated, again, from a user point of view by knowledge generation and knowledge sharing.
Engineers will be able to model and simulate at what are now just extreme scales of size and temperature and time frame and everything else. Engineers will be able to understand and perfect systems, hopefully perfect them, of unprecedented complexity. We've seen nothing yet. Engineers will be able to design materials, systems, and devices from first principles of physical sciences and hopefully from life sciences as well.
Engineers will be able to test systems, utilizing both real data and simulation to make much more accurate predictions of safety and performance. A major challenge for such extreme computing will be to provide engineers, architects, planners, and policymakers the capability to simulate, understand, and effectively design large scale systems and infrastructure that properly recognize, address, and integrate culture, social behavior, human need, and equity. This is a huge task for this group and for many beyond it.
If I'm anywhere close to being correct in these simple observations, then our ability to meet the grand challenges of this century will directly depend on our resolve to continually deploy world leading cyber infrastructure. Now, I'm very optimistic, as all of you know. And I'm very optimistic and excited about the future of computing and what it can do for engineering and for society. But there are lots of things that I worry about.
And in particular, as a user and observer, I worry about two things. One is security. Will real or perceived needs for security of information and its flow in the cyber infrastructure mess up the whole grand, global adventure? I think it's a real and present danger we must guard against, in the same spirit in which the World Wide Web was deployed.
Secondly, complexity. Will systems that computer scientists and engineers and others create become so complicated that we can't even think about envisioning all the possible end states? And will disaster ensue when operations and control of highly complex systems are passed to individuals who really don't comprehend them, don't know what's inside, don't understand the models, don't understand the assumptions?
In other words, will things like what happened recently with risk calculations on Wall Street recur in domains that could cause even more dramatic damage? I hope not. But it is something we all have to think about, to work on, and create a domain in which we can lead, so that, hopefully, these nightmares don't become real.
Well, to close let me finish by indicating a couple of things that I really hope computing will enable us to do in this century. I give a lot of talks. And I like to talk about the fact that we have moved out of an era that was characterized by the term "brain drain." And you all know what that is. It means the brightest people in areas like science, and engineering, and so forth, leaving their home countries and, for the most part, coming here to the United States. What a blessing for us this has been.
But it's a double-edged sword: looked at from the developing world in particular, it is sometimes seen as a drain of resources away, the brain drain. I think we've now moved into what many people call the era of brain circulation, where people and ideas are continuously moving around the globe and more and more of the next generation is spending a decade in this country, and a decade in that country, and so forth. In the end, I think this is very healthy.
My assumption is that the next stage we might call "brain integration," which really means problem solving, working together as individuals, using our minds around the world regardless of location, all tied together in some ways that are probably beginning now. I mean, after all, if you look at the way any large engineering system is designed and manufactured and integrated, it takes place all over the world.
The Boeing 787 has several thousand engineered parts. They're manufactured in 530 some locations around the world. And of course, it's only computing that allows this to happen. And we're seeing these things that I don't really understand about crowdsourcing and so forth, and especially massive multiplayer games. There are just some new things happening out there that I hope smarter people than I are going to talk about during this session. And I think that we're going to learn how to apply these to serious engineering work, serious problem solving, ways of seriously approaching the great challenges of our century.
And then the second thing I would like to note in closing is that one of my dreams is that we get close to what I think of as the holy grail of personalized learning. That we really will, over the next decade or so, bring together computation and information technology with serious learning science and cognitive science, and really enable people. And I believe that this will not all take place on the computer. It will have a human guide there. But it will really ramp up, by a substantial amount, the way in which people can learn in a way that's tailored to themselves. Again, by coupling the technology with cognitive science, learning science, perhaps even neuroscience per se.
This is something that I hope will become available to engineers and, really, to all people, that will build on early legacies like MIT's OpenCourseWare but take things to an entirely new level. It's been a great ride. I want to thank all of you who really had so much to do with conceiving computation and with building the systems that I have been a more or less happy user of, and I very much appreciate the opportunity to see so many good friends and colleagues back here again today. Thank you very much.
DEVADAS: In the interest of time we'll move to the next talk. And I'm sure you can catch up during lunch for lots of questions. It's my pleasure and honor to welcome Maria Zuber who is Professor of Geophysics, the Griswold Professor of Geophysics and the department head of EAPS, Earth, Atmospheric and Planetary Sciences. And Maria is going to tell us about how computation has affected geophysics. Maria.
ZUBER: Thank you. Thanks. Okay. Well, it's a great honor and privilege to be here today to talk to you. This is my first computing talk. But when Victor called me and said, how would you like to give a talk about how computing has changed the world, in terms of the physical world? I have to admit I just couldn't resist. And so I'm going to talk to you today. And what I thought I'd do is just go through a list of topics where we've made some achievements in the solid earth sciences and the fluid earth, and then talk about some future challenges.
And I'm going to be focusing today mostly on questions that I think are particularly interesting where computational methods have made a contribution. And I won't be really focusing on the details of the modeling in the interest of time here today. Okay.
So if one is going to talk about how computing is advancing the Earth Sciences, it makes sense to start with the formation of the Earth. So there was a paper written in 1971 by William Hartmann suggesting that the Earth-moon system formed in the late stages of the accretion of the Earth, by an impact into the Earth. This was in 1971. It was one of many theories for the origin of the Earth-moon system, including the moon coming out of the Pacific Ocean-- though there was no Pacific Ocean at the time-- and including capture of a planetesimal, the moon, as it came by, which had huge angular momentum problems.
But this idea was around. So this was 1971. And then it wasn't until 1986, at a conference that was held in Kona, Hawaii, that two separate groups showed up and showed computer simulations of how the Earth-Moon system could have formed from a physical standpoint. So this is from that talk by Kipp and Melosh.
And here we have a Mars-sized impactor hitting the Earth at a grazing angle. And that's quite important. And what happens here is that the impactor wraps itself-- the core of the impactor wraps itself around the core of the Earth and the mantle of the Earth. And the mantle of the impactor gets thrown out. And the moon accretes in orbit.
Here's a simulation, a later-stage simulation. And this is a smoothed-particle hydrodynamics model here, which includes self-gravity. And so you can see here, the Earth melts. So that's something that we've learned. When I grew up, I was told that the Earth never melted. But it's rather inevitable in an impact with this amount of kinetic energy. And then as this calculation goes on, what you'll see is that there is material in orbit around the Earth that then reaccretes to form the moon.
And if we can move, let's see, forward on this one. So the success of this model-- this model is able to account for the mass ratios between the moon and the Earth. So two things come out that are about the right relative masses. It is consistent with the angular momentum budget of the Earth-moon system. And it actually does a fairly good job of explaining the bulk compositions of the Earth and the moon, especially the moon's lack of iron.
So the core of the Earth is about half the Earth's radius. And the moon probably has a very small core, a few hundred kilometers. And that just never made any sense to anybody who was thinking about the accretion of the Earth and the moon in the same part of the solar system. So this is one, I think, magnificent example of how computing has improved our understanding of the Earth.
Now, this is a topography model of the moon that was made by our instrument that is mapping elevations on the moon as we speak. And as you can see, if you look, there are a lot of circular features here. And these are large impacts on the moon. And what's interesting about these is that these impacts all formed between about 3.8 and 4 billion years ago, in a period of time called the late heavy bombardment.
So the Earth and moon, they formed about the same time, at about 4.5 billion years. But all these big impacts that we see here all formed within a period of about a couple of hundred million years. And something that a lot of people don't think about, I think about it all the time actually, is that Earth used to look like this. Very early in Earth's history it used to look like this. And I'm going to show you, now, a computer simulation of one possible explanation for why this late heavy bombardment occurred.
Now, this is the orbits of Jupiter in red, Saturn in yellow, Uranus in blue, and Neptune in purple. And this is a mass of particles called the Kuiper belt, out here. And at about 878 million years, things are going to go crazy here. These are icy planetesimals outside-- Pluto's one of them, but there are many others. And in this model, Jupiter and Saturn go into a resonance so that Jupiter goes around the sun twice when Saturn goes around once. And what happens is, it scatters all those planetesimals all over the place, including into the inner solar system.
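The 2:1 resonance in this simulation can be checked against Kepler's third law, T² ∝ a³: a 2:1 period ratio fixes the ratio of the two semi-major axes. A quick sanity check in Python, using approximate present-day values for illustration:

```python
# Kepler's third law: T^2 is proportional to a^3, so a 2:1 orbital
# period resonance corresponds to a semi-major-axis ratio of 2**(2/3).
resonant_ratio = 2.0 ** (2.0 / 3.0)      # about 1.587

# Approximate present-day semi-major axes in AU. Jupiter and Saturn
# now sit outside the 2:1, consistent with having crossed it as they
# migrated outward in simulations like the one described above.
a_jupiter, a_saturn = 5.20, 9.58
current_ratio = a_saturn / a_jupiter     # about 1.84, beyond the 2:1
```

Crossing that ratio during migration is the trigger in the simulation: the resonance pumps the giant planets' eccentricities and scatters the icy planetesimals everywhere, including inward.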
And for a range of reasonable simulations, one can explain the timing of all of these impactors getting themselves into the inner solar system. And it can also explain the Trojan asteroids of Jupiter. And so what's interesting to think about, in terms of the history of Earth Sciences-- so here we start with the formation of the Earth. This is today. And so here's the dinosaurs, back here.
And if you go all the way back in Earth's history, here this is about 4 billion years ago. So 3.8 to 4 billion years ago, this is the late heavy bombardment. And it also says here, first life. So at this time, when all of these massive impacts were hitting the Earth, this is the time when life formed on the Earth. It's interesting to go back and think about how many times life started and stopped once it got going due to these impacts, because these impacts-- here's another computer simulation.
This was by one of my graduate students, Wes Watters, looking at what happens when one of these large impacts hits the Earth. So this is a heat anomaly associated with the big impact, and this is the Earth's mantle. What happens is that the impact completely throws the convection system of the Earth out of whack for tens of millions of years. And one of the things we try to figure out is the heat budget of the Earth, because this determines volcanism, and volcanism-- by degassing the Earth as it cools-- drives the formation of the atmosphere and the formation of the oceans.
And if you're perturbing that system, it changes the way we understand how the Earth has lost its heat. So the consequences are actually profound. And computing is the tool we can use to investigate this.
Another thing that we use computing for is inverse modeling. I was just talking about how an impact could have perturbed the convection pattern inside the Earth. The way that we have actually determined what this convection pattern looks like has been through inverse modeling: taking thousands and thousands of digital seismograms globally, and doing inversions to see whether seismic wave speeds in a particular region are faster or slower than some average Earth model.
And this tells us, basically, where things are hotter or colder than the average. So if we just look at one section here, going across South America, the Nazca Plate is subducting beneath South America. There is actually a ridge out here, where you can see it's warm. You can see the subduction of cold material-- cold ocean floor-- here, and the relative coolness of the continents.
This is some work by my colleague Rob van der Hilst. But we have essentially mapped what the interior of the Earth looks like, in terms of temperature and composition, on the basis of large inverse analyses of seismic data.
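The structure of such a travel-time inversion can be sketched in a few lines. This toy version (my own illustration, not the actual global tomography) sends straight rays through a 1-D stack of cells and solves a damped least-squares system for slowness perturbations; real tomography uses curved 3-D ray paths and vastly larger systems:

```python
import numpy as np

# Toy travel-time tomography: straight rays through a 1-D stack of cells,
# inverted for slowness perturbations with damped least squares.
rng = np.random.default_rng(0)
n_cells = 20
true_ds = np.zeros(n_cells)
true_ds[8:12] = -0.05            # a "fast" (seismically cold) anomaly, s/km

# G[i, j] = path length of ray i in cell j; each toy ray crosses a
# random contiguous span of cells at 1 km per cell.
n_rays = 200
G = np.zeros((n_rays, n_cells))
for i in range(n_rays):
    a, b = sorted(rng.choice(n_cells, size=2, replace=False))
    G[i, a:b + 1] = 1.0

# "Observed" travel-time residuals relative to the average Earth model
t_resid = G @ true_ds + rng.normal(0.0, 1e-3, n_rays)

# Damped least squares: minimize ||G m - t||^2 + lam * ||m||^2
lam = 0.1
m = np.linalg.solve(G.T @ G + lam * np.eye(n_cells), G.T @ t_resid)
# m recovers the fast anomaly in cells 8-11
```

The damping term regularizes cells the rays sample poorly, which is exactly the role it plays in real tomographic inversions.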
So to move on: now we're looking at the transition between the solid earth and the physics of the fluid earth, at a model of landscape evolution. Now, when I was going through grad school, we used to model erosion by just taking a diffusion law and solving it for some simple boundary conditions. Now, using sophisticated computation, we're getting to the point where we can take an actual landscape. This is the island of Kauai in Hawaii.
And we can develop a model based on rock types. We can measure cosmogenic nuclides to tell us what the erosion rates are, and then develop models that are consistent with those rates. So the nice thing about this, and where the computation is really necessary, is that as opposed to solving an idealized problem where we treat the Earth as a black box, we're actually using observations of physical data to get rates, and then developing models that are well tuned to reality.
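The "grad school" erosion model mentioned above-- hillslope diffusion, dh/dt = kappa * d2h/dx2-- fits in a dozen lines with an explicit finite-difference step. This is an illustrative toy with made-up parameter values, not the calibrated landscape-evolution model described in the talk:

```python
import numpy as np

def diffuse(h, kappa, dx, dt, steps):
    """Explicit finite-difference solve of dh/dt = kappa * d2h/dx2,
    with fixed-elevation (Dirichlet) boundaries.
    Stable when kappa * dt / dx**2 <= 0.5."""
    h = h.astype(float).copy()
    for _ in range(steps):
        lap = (np.roll(h, -1) - 2.0 * h + np.roll(h, 1)) / dx ** 2
        lap[0] = lap[-1] = 0.0          # hold the endpoints fixed
        h += kappa * dt * lap
    return h

# A 5 m high, 20 m wide ridge diffusing for 50,000 years
# (kappa = 0.01 m^2/yr is a plausible order of magnitude for soil creep).
x = np.linspace(0.0, 100.0, 101)                  # metres, dx = 1 m
h0 = np.where(np.abs(x - 50.0) < 10.0, 5.0, 0.0)
h = diffuse(h0, kappa=0.01, dx=1.0, dt=10.0, steps=5000)
```

Modern landscape-evolution codes replace this single diffusion term with stream-power incision, spatially variable erodibility by rock type, and rates calibrated against cosmogenic-nuclide measurements-- which is why the computation becomes substantial.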
So we're now going to move on to the fluid earth. Here is a model developed at MIT using the MIT GCM, and there are folks from the Laboratory for Computer Science involved in this modeling. This is part of MIT's climate initiative. It's a model of the circulation of the ocean that assimilates data-- data from more than 12 satellites is included in the simulation. I think this is a two-degree model run.
It wasn't long ago that we treated the ocean as a rectangular basin, where we just tried to get the general currents. But now we can resolve boundary currents such as the Gulf Stream and important equatorial currents, and we can look at the changing ice mass at the poles. And what has been very important-- this is part of a global study that MIT is leading, which is essentially using all available satellite data in general circulation models of the ocean, to try to figure out the role that the oceans play in climate.
And here what is crucial is that one needs to be able to resolve length scales of order 100 to 200 kilometers, because this resolves the eddies. And it has been shown that eddies are very important in the transport of momentum and energy in the ocean. So the previous models, which were only able to capture the major currents, could not take into account this important aspect of the energetics of the ocean system.
All right. Next slide, here. So we can take the next step. Here is a model of the circulation of the ocean based on the same physics I showed you before, but now what has been added is biology. Looking at ocean microbiology, the different colors here correspond to the distributions of different kinds of phytoplankton in the ocean. So we can now understand the role of ocean circulation and ocean temperature in the distribution of nutrients, and look at their role in the biology of the ocean.
I think this is an incredibly important emerging area, because we're going from major scales such as gyres, which are basically the scale of ocean basins, all the way down to the genomics level of the ocean biology.
All right. Greenhouse gas emissions. MIT runs a program called AGAGE, a global consortium that measures trace gases at many places on the globe. This is led by Ron Prinn at the Center for Global Change Science. And this is a model of sulfur hexafluoride, which is the most potent greenhouse gas regulated under the Kyoto Protocol. Now, countries are supposed to tell you what their emissions are, but it's self-reported.
And so what happens is that one needs to actually go in and develop a model for how the atmosphere is circulating and how these gases are distributed over the globe, and then go back and try to infer what the emissions are on the basis of that. That's what's going on here. This model traces the distribution at two kilometers above the surface, and this is for the year 2007. So this again uses actual observations across the globe for how this is taking place.
All right. Now, weather prediction. Numerical weather prediction goes back to the first computer: the first weather model was actually run on the ENIAC. The only problem was that the model took 36 hours to run a 24-hour forecast. That's the kind of model you could run now on your cell phone. One of the key individuals in this was Jule Charney, who came up with the description of the physical equations needed to run these weather prediction models.
It was sufficiently successful that MIT hired him. And so at MIT, our students are in the weather-predicting business, much more so than we are as a research endeavor. But weather prediction has actually gotten a whole lot better. This is a plot, for three-day, five-day, and seven-day forecasts from 1980 to 2002, of how well the predicted location of a high or low pressure center correlates with what was actually observed.
And the top lines are the northern hemisphere and the bottom lines are the southern hemisphere. So you can see we're doing pretty darn well on three-day forecasts, less well on five- and seven-day forecasts. But forecasting in the short term can actually be quite remarkable. Here is a comparison with a simulation run at the Goddard Space Flight Center, called GEOS-5. This simulation has 24 million nodes covering the Earth.
And I'm going to focus in on this area right here. So, which is the image and which is the simulation? This is a snapshot taken 90 hours into a model run, on February 2nd. Chuck, you may remember this, because Washington, D.C. got hit with 20 inches of snow last February. It turns out this is the simulation, and this is the satellite image, here.
So this simulation did so well that it not only predicted the distribution of the clouds, it actually predicted the right cloud types. Short term-- this is a four-day simulation-- this actually did incredibly well. So we're making great progress in this area.
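Forecast skill of the kind plotted above is conventionally measured with an anomaly correlation coefficient: the correlation between forecast and observed departures from climatology. A minimal version, with hypothetical numbers for the example fields:

```python
import numpy as np

def anomaly_correlation(forecast, observed, climatology):
    """Centered anomaly correlation coefficient (ACC), a standard skill
    score for medium-range forecasts: correlate forecast and observed
    departures from climatology, with the mean anomalies removed."""
    f = np.asarray(forecast) - climatology
    o = np.asarray(observed) - climatology
    f = f - f.mean()
    o = o - o.mean()
    return float((f * o).sum() / np.sqrt((f ** 2).sum() * (o ** 2).sum()))

# Toy example: 500 hPa height anomalies at four stations (made-up values, metres)
clim = np.array([5500.0, 5520.0, 5480.0, 5510.0])
obs = clim + np.array([30.0, -40.0, 10.0, -5.0])
fcst = clim + np.array([25.0, -35.0, 15.0, -10.0])
acc = anomaly_correlation(fcst, obs, clim)   # near 1 for a good forecast
```

An ACC near 1 means the forecast placed the highs and lows where they were observed; scores decay with lead time, which is what the three-, five-, and seven-day curves in the plot show.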
So let me finish up with a couple of future areas I think are ripe for progress. First, subsurface transport. Obviously, we want to take carbon dioxide and sequester it underground to reduce greenhouse emissions. And when we take oil out of the ground, we pump water into it-- salt water is often used-- and it mixes with the fresh water in aquifers. So this is a mixing process of multi-phase flow.
And I'm showing you a sandstone here. This is a piece of rock that you could hold in your hand; this is what it would look like under a hand lens; this is what it would look like under a microscope; and this is what it would look like under a scanning electron microscope, with a resolution of about 50 microns. And we can now take the measured properties of a sandstone and simulate its real physical properties, to look at what multi-phase flow occurs.
So the next simulation is by one of our alums, who was on the faculty at Stanford and formed a company. What it's showing is oil being injected into a water-saturated matrix, and then you're going to see water injected into it again. This is an advanced lattice-Boltzmann scheme, where the molecular properties of the medium are quite important in determining what the flow properties are.
And the physics of this was actually done by Dan Rothman in our department. So this is really the next step in things, and you can imagine future applications. If you put CO2 underground, these fluids are going to react with the matrix, and you can actually put the chemistry in. You can look at the formation and breakup of methane hydrates. You can look at the dependence of permeability on the flow, and on the viscosity contrast between fluid phases.
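The skeleton of a lattice-Boltzmann scheme is short enough to sketch. This is a generic textbook single-phase D2Q9 model with BGK collisions on a periodic grid-- not the multiphase pore-scale code described in the talk, which layers fluid-fluid and fluid-solid interaction forces on this same collide-and-stream structure:

```python
import numpy as np

# Minimal single-phase D2Q9 lattice-Boltzmann model: BGK collision plus
# streaming on a periodic grid.
c = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])        # lattice velocities
w = np.array([4/9] + [1/9] * 4 + [1/36] * 4)              # lattice weights

def equilibrium(rho, ux, uy):
    """Second-order Maxwell-Boltzmann equilibrium distributions."""
    cu = c[:, 0, None, None] * ux + c[:, 1, None, None] * uy
    usq = ux ** 2 + uy ** 2
    return w[:, None, None] * rho * (1 + 3 * cu + 4.5 * cu ** 2 - 1.5 * usq)

def lbm_step(f, tau=0.8):
    """One collide-and-stream update; tau sets the fluid viscosity."""
    rho = f.sum(axis=0)
    ux = (f * c[:, 0, None, None]).sum(axis=0) / rho
    uy = (f * c[:, 1, None, None]).sum(axis=0) / rho
    f = f + (equilibrium(rho, ux, uy) - f) / tau           # BGK collision
    for i, (cx, cy) in enumerate(c):                       # streaming
        f[i] = np.roll(np.roll(f[i], cx, axis=0), cy, axis=1)
    return f

nx = ny = 32
rho0 = np.ones((nx, ny))
rho0[12:20, 12:20] += 0.1                 # a small density (pressure) bump
f = equilibrium(rho0, np.zeros((nx, ny)), np.zeros((nx, ny)))
for _ in range(100):
    f = lbm_step(f)                       # the bump radiates acoustic waves
```

Because collisions relax toward an equilibrium with the same density and momentum, mass is conserved exactly, which is one reason the method is attractive for pore-scale flow in rock.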
In the area of atmosphere-ocean exchange-- if we look at what the cycle is-- sea spray is very important in trying to understand climate, because CO2 gets mixed into the ocean and deposited on the sea floor as organic carbon. And we essentially need to be able to model the turbulence of this process. So this is another area of emerging importance in climate.
And then finally, modeling the coupled solid and fluid earth. What we'd like to be able to do is understand the interdependences of the solid and fluid earth. If we want to try to model the entire Earth system, there are actually 20 orders of magnitude of variation in the viscosity of all the fluids involved in this problem. So it is incredibly challenging and exciting, from a computational standpoint, to see where this might lead in the future. But obviously, improvement in methods is going to be quite important.
And I know this is a computation symposium. But in Earth Sciences, we end all of our talks with a sunset. And so I'll just end right there. Thank you very much.
DEVADAS: We have time for one or two quick questions for Maria.
AUDIENCE: Thanks for the talk. So with regards to numerical weather prediction, does the community have a good understanding of how far we are from information theoretical optimal, let's say for the three day, five day, seven day windows? And where do you think the biggest potentials for interaction with computer science in that regard are?
ZUBER: So I would say, if the question is how far we are, it depends. In the particular case of that storm in Washington, there was a great deal of data, and it was assimilated effectively into the model. But right now we're limited in, say, the upper atmospheric observations that can go into the model. And ultimately, this is a chaotic, non-linear process, so you're only as good as your constraints on the problem. If the temperature changes at a level higher than you've observed, then you're out of luck.
So I would say the best way to interact with the scientific community is bigger and faster computers that are able to assimilate information as it comes along, even late stage. And you're never going to get there precisely. Right now, on three days we're pretty good, but five-- five, I think, is the next frontier of where we'd like to be more reliable.
DEVADAS: Right. So it's time for the next session. I see Nancy is going to chair the next session. I'd like to thank Chuck and Maria for a pair of fascinating talks.