Steven Chu, "The Energy Problem and the Interplay Between Basic and Applied Research," Compton Lecture
[APPLAUSE]
MODERATOR: Good afternoon, everyone. We are delighted that several of our local government partners in our energy efforts have joined us today. Cambridge City Councillor Henrietta Davis is here. Where are you, Henrietta? Great. And representing the Department of Energy and Environment of the Commonwealth of Massachusetts are Undersecretary of Energy Ann Berwick and Commissioner Phillip Giudice. Welcome. Glad to have you with us.
Now, before introducing our secretary of energy, I want to first provide a little bit of context for the Karl Taylor Compton Lecture at MIT, which is our most prestigious lectureship. Karl Taylor Compton guided MIT for almost a quarter of a century, from 1930 until his death in 1954: first as president from 1930 to 1948, and then as chairman of the corporation, our board of trustees, from 1948 to 1954.
Compton was truly a transformative figure in MIT history. He was a constructive revolutionary. With his passion for uniting physical sciences with engineering, he transformed MIT. But he also transformed engineering education and practice everywhere. He led MIT through a tumultuous chapter in US history, a period that included the Great Depression, a World War, and a period of economic and intellectual transformation.
In 1957, three years after his death, the Compton Lectures were established in his honor. Since then, the Compton Lectures have brought to our campus leading voices in science, technology, public affairs, education, and the arts, from the great physicist Niels Bohr, who gave the first Compton Lecture, to Senator Edward Kennedy, who re-inaugurated the series two years ago. President Compton himself was noted for his high intelligence, deep knowledge, his integrity, and his uncanny ability to see the future.
So it's extremely fitting that we should honor his legacy with a speaker who embodies these same exceptional qualities-- United States Secretary of Energy Steven Chu. As we all know, for the past 2.5 years, the Institute has been deeply involved in the MIT Energy Initiative or MITEI. It's partly a research program, partly an education program, and thanks to the enthusiasm of our students, it's partly a movement.
Our students have become among the most exciting forces in this movement. They bring an overwhelming passion and focus to our work. Our student-led Energy Club has grown to more than, I was going to say, 1,200 members. But at lunch today, I learned it's now more than 1,500. I have to keep updated every week. Dozens of MIT students recently entered the MIT Clean Energy Prize contest.
And I have the pleasure of inviting the just-announced winners of the Clean Energy Prize to stand. I think they're with us. And so we can congratulate you.
[APPLAUSE]
So when you listen to students like these talk about energy, it's clear that they see it as the challenge of their lifetime and the central challenge for our planet. In its daunting scope and scale, the problem of energy resembles the challenge of the Apollo program, but raised by a few orders of magnitude in significance and complexity. And if energy has become a much-magnified moonshot for this generation, then Secretary Steven Chu directs mission control.
Secretary Chu began his scientific career in a way that will sound familiar to many in this room. As he has written, "In the summer after kindergarten, a friend introduced me to the joys of building plastic model airplanes and warships. By the fourth grade, I graduated to an erector set and spent many happy hours constructing devices of unknown purpose where the main design criterion was to maximize the number of moving parts and overall size."
I've seen those things. I've built those things. A child of immigrants, he grew up on Long Island in a family of academic overachievers. Nearly all of his aunts and uncles had PhDs in science or engineering, and some of his cousins had more than one advanced degree. And his father and his brother graduated from a certain Institute of Technology on the north bank of the Charles River, the Institute we all call home. So with only a PhD from Berkeley, Secretary Chu was, in his own words, the academic black sheep of his family.
[LAUGHTER]
Nevertheless, over the next 30 years, he has had a hugely successful career across a stunning range of fields. After two years as a Berkeley postdoc using atomic physics techniques to learn about high energy physics, in 1978 he moved to Bell Laboratories, then among the most exciting and prolific hotbeds of physics and physics-related research.
There he began groundbreaking work on trapping and cooling of atoms with laser light, which has had many applications, including the ability to manipulate nanostructures using optical tweezers and to achieve Bose-Einstein condensation. By the fall of 1983, he had risen to lead Bell Labs' Quantum Electronics Research Department. Four years later, driven by a desire to work with students again, Secretary Chu moved to Stanford, where he would remain until 2004, serving twice as chair of the physics department.
In 1997, Steven Chu shared the Nobel Prize in Physics for his work on cooling and trapping atoms. As his own research interests expanded to include biological physics and polymer physics at the level of single molecules, he became a driving force behind the creation of Bio-X at Stanford, an ambitious project to bring together under one roof researchers with different backgrounds to find solutions to some of this era's most pressing problems.
In 2004, he returned to Berkeley, this time joining both the Department of Physics and the Department of Molecular and Cell Biology. Simultaneously, he became director of the Lawrence Berkeley National Laboratory. Under his leadership, the Lawrence Berkeley Lab has become a key center for biofuels and solar research. In January of 2009, Steven Chu was confirmed as United States Secretary of Energy, the first working scientist in that role and the first cabinet member in history to take office as a Nobel laureate.
Together with President Obama, Secretary Chu has begun, at an incredibly fast pace, the necessary and hugely important work to move the United States and the world away from our perilous dependence on fossil fuel and toward a sustainable energy future. That future will bring us a new cycle in our innovation-based economy and set us on the path to combating climate change. In just 112 days on the job, but who is counting, he has already helped secure landmark funding for energy R&D and technology in the American Recovery and Reinvestment Act and in next year's budget. Together, these funding commitments mark the largest and most important investment in American science and technology since the Apollo program.
Secretary Chu has said, "The key to America's prosperity in the 21st century lies in our ability to nurture and grow our nation's intellectual capital, particularly in science and technology." His themes surely resonate with us here at MIT. Time permitting, following his speech, he will take questions from the audience. But right now, I invite you please to join me in welcoming to MIT, our Karl Taylor Compton lecturer, United States Secretary of Energy Steven Chu.
[APPLAUSE]
STEVEN CHU: Thank you very much. I don't know. With that introduction, it can only go downhill. So I'm very pleased to be here today to talk to you about the energy problem, what we can do about it, and some views on how we might experiment with organizing the research we do about it.
So let me very briefly start with saying that the energy challenge has many facets. These are three of them. I deeply believe our future economic prosperity really depends intimately on how we're going to use energy efficiently and sustainably in the future. The price of oil and gas will go up. We don't know what it's going to do one year or five years from today. But certainly in 20 and 30 years, it will be higher.
I don't know when. Hopefully, in the United States it will begin this year. But sometime in the future we will be living in a carbon-constrained world. There is also growing competition over energy resources. Other countries are jockeying for position, as we did, in order to make sure that they have access to, for example, oil. And of course, there's climate change, which is the reason why I got into this some half dozen or so years ago.
So very briefly, here are a few recent observations that you may not have known about. This is the surface area of the Arctic ice cap at the North Pole. You look at the size of the cap in the month of September, when it melts back the furthest. And let's see if I can show you here: this was the IPCC's prediction, the 1.5-sigma range of where they thought the ice would be. And you see the observations were ahead of schedule. So this is one of the areas of concern.
Another area of concern is sea level rise. This is not the ice in the Arctic, because that's already floating. This is the ice in places like the Antarctic and especially Greenland. This is the rate of increase of sea level. And over the last two decades or so, you're seeing an increase in the rise in sea level. Again, it bodes ill for what might be happening. And again, if you look at this, this is the median prediction, and these were the ranges given by the IPCC. We're skirting the outer limits of this range.
This red stuff is the actual tidal gauges and the satellite data, which give more up-to-date information, suggesting that this trend is continuing to increase. Here's another prediction from about a decade ago: that by 2013, as much as 80% of the pine in British Columbia would have died. That's because the frosts are not as severe, so they don't kill all the pine bark beetle parasites.
So this is a picture of the pine forests. This is the dead stuff, with little slivers of live pine interlocking. And so how is that prediction faring? Well, according to the British Columbia Ministry of Forests and Range in 2006, they're halfway there: 40% of the pine is dead today. So there is this recent evidence that the things that were said a decade ago are coming true, but a little bit faster than we had hoped.
This is a recent report coming from MIT. These are business-as-usual predictions. This is the IPCC assessment of business-as-usual, and this is a more recent one done by MIT. There's a slight difference. MIT is saying that if we continue on our current track, then based on slightly different assumptions, improved modeling of certain warming agents like black carbon, slower ocean uptake of CO2, and a slightly higher emissions forecast, we're looking at a mean temperature rise of something like 5 degrees centigrade, with significant probability of going to 6 or higher.
So just to set the stage: the last ice age was approximately 6 degrees centigrade colder than where we are today. In the last ice age, all of Canada and the United States down to Ohio and Pennsylvania were covered year-round in ice. So only 6 degrees centigrade means a profound difference in the climate of the Northern Hemisphere. And with not too much imagination, you can imagine an equally profound difference 5 or 6 degrees on the other side.
So that's of some concern. And the biggest concern in the last five years or so is that we're becoming aware of more of what we call tipping points. One tipping point is the fact that in the frozen tundra of Canada, Russia, and in particular Alaska, there's a lot of carbon that has been slowly stored over thousands to millions of years. This carbon is stored there. It isn't recycled, because the microbes in this frozen area are generally asleep; they're pretty dormant. If a tree falls down in the Brazilian rain forest, they gobble it up, recycle it, and it comes back as carbon dioxide and methane. But in the frozen tundra, there is long-term storage of carbon.
Now think of the difference between a hunk of meat in your freezer, which can be kept for a long period of time, and a hunk of meat in your refrigerator, which spoils after a few days to a week. That's the difference of just a phase transition. And so if the tundra goes across this phase transition, the microbes wake up. They will begin to work on this carbon. And they will release it.
And the big fear is once they start releasing it, the yearly emissions of carbon could exceed human emissions. And so no matter what we do, once that goes, it takes on a life of its own. The carbon will be emitted. It will warm up. It will thaw some more. You get more carbon emitted. And it's out of our control.
So that's a tipping point you don't want to go near. And that's primarily the reason why climate scientists are now saying 550 parts per million of carbon dioxide is too risky, and they're pushing for 450. Before the Industrial Revolution started, we were at 275. We're now at roughly 380, or 420 in carbon dioxide equivalent if you include the other greenhouse gases that humans have put up. And so we are getting close to where it's a very nervous time.
And so is there reason for hope? I think that there is. The reason for hope is, first, people have to be aware of these issues. But also, the hope is going to be in science, technology, and a concerted world recognition of where we're at. So let me remind you of something published in 1968. Paul Ehrlich at Stanford wrote a bestseller called The Population Bomb. And he said, "The battle to feed all of humanity is over. In the 1970s and 1980s, hundreds of millions of people will starve to death in spite of any crash programs embarked upon now."
So what happened? Well, this is a plot of the time between 1960 and the year 2005. The population of the world more than doubled. This is the amount of land put under cereal production; cereals are grains like rice, wheat, and corn. This is the productivity worldwide. So the productivity increased four-fold, while the amount of land put under cultivation remained essentially flat, and fewer people starved to death.
So what did happen? Well, in 1970, a fellow by the name of Norman Borlaug got the Nobel Peace Prize for the development of dwarf strains of wheat that were disease-resistant but, more importantly, fertilizer-tolerant, so they could take higher doses of fertilizer. And that, plus irrigation, meant that you could increase the yield per acre not only in the United States, but in countries like India, Pakistan, and Mexico, by three to five-fold per acre.
And so that so-called Green Revolution actually transformed the choices. Let me also remind you of something that happened between the 19th and 20th centuries, when a similar innovation occurred. As we went into the 20th century, Europe was very concerned. They had a depleted-soils problem. Despite the rotation of crops, despite the use of manure, the soils were getting nitrogen-depleted. And they were facing a dilemma of lower and lower yields. This was very ominous.
So there were essentially two choices being debated. They said, well, we colonized a lot of the world. We'll just have to ship our food from the colonized part of the world. And other people thought, well, we really like fresh fruits and vegetables. So why don't we ship the topsoil over from the colonized world and grow the stuff here?
And so what happened was Fritz Haber invented a process for synthesizing ammonia. And with ammonia, you can make nitrogen-based fertilizer. It was deemed so important that it earned a Nobel Prize in Chemistry. Several years later, Carl Bosch invented a way of making it really practical at high temperatures and pressures. A second Nobel Prize for the synthesis of ammonia, because it was that important.
In 2007, a surface chemist got a Nobel Prize whose citation said that at last we now have a deeper understanding of the Haber-Bosch process. So two and a half Nobel Prizes for fertilizer. It was that important. So these things, the Green Revolution and the invention of ammonia synthesis, actually transformed our choices into something totally different.
So what do we do with our situation today? How do we move into a sustainable energy future? Without a doubt, energy efficiency and conservation is number one on the list. Let me remind you of a California experience, now a national experience. This brown curve plots the size of refrigerators. In the mid-1970s, the average size of the American refrigerator was 18 cubic feet.
By 2000, it had gone to 22 cubic feet. The blue curve indicates the energy used by each refrigerator. So even though the size went up, the amount of energy used decreased by about a factor of 4. When the first standards were put in place, the manufacturers said this is terrible, the American homeowner can't afford these refrigerators. So we have also plotted, in the green curve, the inflation-adjusted price of the refrigerators during this time period. And it went down by a factor of more than two. So they were cheaper. They used four times less energy. And they got bigger.
Now, you'll notice that refrigerators are still climbing in size because there's no satisfying the American appetite. But it's leveling off. And the reason it's leveling off is because of the size of the kitchen door. And that's why they're making them wider.
How much energy has been saved by refrigerators? Well, the first bar on the left shows that if these roughly 150 million commercial and residential refrigerators ran at 1974 efficiency, we would be using close to 300 billion kilowatt-hours per year. The next one shows the energy we saved, that light purple bar.
And you can compare that to all existing renewable energies in terms of wind, solar thermal, photovoltaic. The savings in refrigerators alone are more than all the renewables, those three categories, combined. Okay? That's a lot of energy that's saved and a lot of money saved. Nuclear is 20% of all electricity generated in the US.
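The arithmetic behind that comparison is simple enough to sketch, treating the talk's round numbers as given:

```python
# Back-of-envelope version of the refrigerator-savings estimate.
# The 150 million units, the ~300 billion kWh/yr at 1974 efficiency,
# and the factor-of-4 efficiency gain all come from the lecture.
units = 150e6              # US commercial + residential refrigerators
use_1974 = 300e9           # kWh/yr if all ran at 1974 efficiency
efficiency_gain = 4.0      # per-unit energy use fell roughly 4x

use_today = use_1974 / efficiency_gain
saved = use_1974 - use_today
print(f"saved ~{saved / 1e9:.0f} billion kWh/yr")  # ~225 billion kWh/yr
```

That 225 billion kWh/yr of avoided generation is the quantity being compared against the combined output of wind, solar thermal, and photovoltaics.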
So can we do better? Yes, we can do much better. Commercial and residential buildings in the United States use about 40% of our total energy, divided roughly half and half. This graph gives you a breakdown of what it is. For example, in commercial buildings, 26% alone goes to lights, because the lighting is relatively inefficient. That also means additional cooling load, for example.
So this is the breakdown. So the question is, how much better can we do in buildings? Well, how well do we do today? We have what are called LEED ratings, which not only include energy, they also include other sustainability issues, like water use and how many bicycle racks are associated with the building.
But let's just plot energy-use intensity, just the energy. And so what we're plotting is LEED Certified, Silver, Gold, and Platinum; Platinum is the highest LEED rating. And on the y-axis is the ratio of the actual building performance versus the design goal of the LEED building. You get a LEED certification for the design, not for the actual performance, number one. We should change that.
And what this shows is if you're not really-- if you have an okay building, but not really that good, you see that you have a ratio better than 1. That means the actual energy use is better than the design goal. That's good. But as you drive to more and more energy efficiency, it takes a turn for the worse. So if you want to design a more efficient building, what this scatter plot tells you is we don't really know what we're doing. Okay?
And so this is a problem, because if you say we're going to design a more efficient building, but there's no confidence that the more efficient building will actually have a lower full lifecycle cost, you might not be motivated. So this is where we are today. And consider the current state of how we deal with buildings. A place like MIT, which has a need for a building, builds the building, runs the maintenance, pays the utility bills, everything. You'd think, okay, it would always be designing state-of-the-art buildings.
But in fact, sometimes universities do not. They say we want a new building. Let's say it's a new laboratory building. They go to various faculty members and say, what do you want? They tell them what they want. You begin to do some design. You find out the building is over budget. And you go back and forth between the detailed design and what people wanted. Finally, you begin to construct it.
It's really over budget. So you do what's called value engineering, which means you have to cut out certain things. Usually, energy efficiency is the first thing that goes. And then you've run out of money. So instead of commissioning the building, which means you actually tune it up, tune up the heating and ventilation system, you just move people in, and you're done with it.
So 95% of new buildings skip the commissioning stage, the fine-tuning. Commissioning alone could save 10% of the energy use in a building, just tuning it. So this is how we do it today. What's the desired state? Well, the desired state is to borrow from what we already do with commercial aircraft. If you design a 777, an Airbus, or the Dreamliner, what you have is a conceptual design.
Then whatever happens in that design, if you make a change, the efficiency consequences of that change are automatically built in. So you can actually predict the performance of that airplane, long before you've tested it, to, let's say, 1% in energy efficiency. It's all there. Okay, if you want to make modifications, this massive computer program will actually tell you what you're doing to the performance, the economy, everything.
So virtual design in airplanes can be applied to buildings. Now, once you put the building into operation, you think, for example, here there's a lot of people in this room depending on the ventilation in this auditorium. But in the usual ones I give lectures in, they're designed to get stuffy in about an hour. Maybe because they don't, well, I don't know what the reason is. I think it's a compromise between having a ventilated room when a lot of time, it's not occupied and having it when it's fully occupied, as it is now.
You can imagine real-time commissioning in the same sense that a microprocessor is constantly tuning a modern engine. It notices what the temperature of the engine and the air is, and it's constantly tuning up. You no longer take your car in to get it tuned, because it's tuning itself up on a minute-by-minute basis.
So in a modern building where you have sensors, and RF mesh networks, and all these other things, you can say, okay, where are the people? Very inexpensive carbon dioxide sensors can be put in that place. You direct the heating and the cooling exactly where you want it to be. If load shifts, it automatically tunes itself up.
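A demand-controlled ventilation rule of the kind described can be sketched as follows; the CO2 thresholds, minimum fan fraction, and linear ramp here are illustrative assumptions, not values from the talk:

```python
# Hypothetical demand-controlled ventilation rule: drive fan speed from
# a zone CO2 reading instead of a fixed schedule. All thresholds are
# illustrative assumptions.
CO2_OUTDOOR = 420   # ppm, ambient baseline (assumed)
CO2_TARGET = 1000   # ppm, comfort ceiling for an occupied room (assumed)

def fan_fraction(co2_ppm, min_frac=0.15, max_frac=1.0):
    """Map a zone CO2 reading to a fan-speed fraction.

    At or below the outdoor baseline the room is effectively empty, so
    run at the minimum ventilation rate; ramp linearly to full speed as
    CO2 approaches the target ceiling, and cap at full speed above it.
    """
    if co2_ppm <= CO2_OUTDOOR:
        return min_frac
    span = CO2_TARGET - CO2_OUTDOOR
    frac = min_frac + (max_frac - min_frac) * (co2_ppm - CO2_OUTDOOR) / span
    return min(max_frac, frac)

for reading in (400, 700, 1000, 1400):
    print(reading, round(fan_fraction(reading), 2))
```

An empty auditorium idles near the minimum rate, and a full one like this ramps the fans up; that is the load-following behavior the sensors make possible.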
So this is something which, because it's in software, can be implemented. You use these tools to both design and operate the building. It's also very important, by the way, in the operation of the building, that you have a very robust computerized system. Today's modern building controls are actually very complicated. I know from personal experience: when I was at Berkeley, we went into a new building. The temperature of these very specially temperature-controlled rooms down in the second basement wasn't tuned right. And so the temperature would go like this, oscillating by about 9 degrees Fahrenheit every few minutes.
And we were doing some microscopy where we were trying to use single fluorescent molecules to help localize things to about a couple of angstroms. And so we needed a temperature stabilization system, but the room was going up and down by 9 degrees. We simply could not take data for several months. People were pointing fingers: it's not our fault, it's the fault of the design of the building.
So finally I said, well, can you give us the control of this room? Each room had its own separate programmable control. I said, is it a proportional-integral controller? And they said, yes. So I said, well, actually, they didn't immediately say yes. They didn't know what I was talking about.
Finally, I found out it does have one. And they gave me the program after a couple of weeks. And so we just reset it. You reset the thermostat, it oscillates, and you've got a Heaviside step response of what it does. And so from that, we could set it. And within a couple of days, the postdoc said it's oscillating by less than half a degree Fahrenheit, a factor of two better than specification.
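The controller in this story is the textbook proportional-integral loop. A toy sketch, where the gains, setpoint, and the one-line "room" model are all hypothetical choices made so the loop settles, not the building's actual parameters:

```python
# Toy proportional-integral (PI) temperature loop of the kind described.
# Gains and the one-line room model are hypothetical.
def make_pi(setpoint, kp, ki, dt):
    integral = 0.0
    def step(measured):
        nonlocal integral
        error = setpoint - measured
        integral += error * dt          # integral term removes steady-state offset
        return kp * error + ki * integral
    return step

temp, dt = 75.0, 1.0                    # start 7 F above the 68 F setpoint
controller = make_pi(setpoint=68.0, kp=0.5, ki=0.05, dt=dt)
for _ in range(400):
    u = controller(temp)                # heating/cooling command
    temp += dt * (0.1 * (60.0 - temp) + 0.1 * u)   # room drifts toward 60 F
print(round(temp, 2))                   # settles at the 68.0 setpoint
```

With proportional action alone, the room would settle with a constant offset from the setpoint; the integral term is what drives the steady-state error to zero, and badly chosen gains are what produce the kind of sustained oscillation described above.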
But your maintenance crews don't have a postdoc who can do that. So just like the average garage mechanic can't tune your car anymore, it's okay: you take it to the garage, the computers talk to each other, and everybody's happy. So we need that as well. So imagine taking the HVAC systems, the windows, the lighting, the building materials, passive thermal and electrical storage on site, all these components in a modern building. The most essential point is that you need to have building design software, ideally an open platform, that has energy design analysis embedded in it.
So if you make a change, and you do something, you can predict the performance of the building. So you don't get that huge scatter plot that I showed you. And then you need building operating platforms.
So in my last year and a half at Berkeley Lab, we started talking to a number of companies, in particular United Technologies, asking, is it possible to do something like that? And we became increasingly convinced that you can do this very quickly and start to test it, and perhaps reduce the energy consumption of a building by 80%. That's before you put on photovoltaics.
Now, California is always ahead of the curve. And so they're saying, why not get a 90% reduction in new buildings? And so in sort of a toy analysis, if you did this, and you had the technology available, and it started going into the market by 2012, then by 2030, with 50% savings in retrofits and 80%, not 90%, in new buildings, this green bar is the proposed energy savings that will come just from efficient buildings.
So buildings are 40% of US energy. If you can save 80% of that, it's a 0.32 reduction in energy use, roughly a third, just from building efficiency alone. And actually that ratio holds worldwide: buildings are roughly 40% of energy use there, too. So this is truly low-hanging fruit. But I think we have to develop the tools that allow architects and structural engineers to get on with it.
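That back-of-envelope number is just the product of the two shares quoted in the talk:

```python
# The talk's toy analysis: buildings are ~40% of US energy use, and the
# target is an 80% cut in that use. (The 50% retrofit figure for the
# existing stock would scale the same way.)
buildings_share = 0.40
new_building_savings = 0.80

reduction = buildings_share * new_building_savings
print(round(reduction, 2))   # 0.32, i.e. roughly a third of total US energy
```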
Okay, let's look at the supply side. This is taken from a slide that I got from Steve Koonin at BP, soon to be at DOE. And this lists the costs of energy: new gas, new coal, nuclear. Onshore wind is starting to become competitive with fossil fuel. Photovoltaics is still way up there.
So you can ask the question, when will photovoltaics actually become competitive without subsidy? I mean, it's already somewhat competitive on hot summer days if you have peak pricing. But let's ask when utility companies would actually start to paint deserts with this stuff. And so it turns out that most technology follows what is called a learning curve. It's like Moore's law, where you get exponential improvement, except unlike the Moore's law for semiconductors, the slope is not quite as dramatic.
So what you're plotting here is the cumulative production, what's been deployed in the technology, from 1976 to 2003. And you're getting better and better. Costs are coming down. And that dashed line is roughly the cost of fossil fuel generation. So it suggests that maybe 20, 25 years from today, without any subsidy whatsoever, you don't think about it, you just start sticking this stuff on rooftops. That's discounting the transmission issue and the storage issue; it's just generation capacity.
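The learning curve being described has a simple form: cost falls by a fixed fraction with each doubling of cumulative production. A sketch, where the starting cost, the 20% learning rate, and the 30%-per-year deployment growth are all illustrative assumptions rather than the slide's actual numbers:

```python
import math

# Learning-curve sketch: cost falls a fixed fraction per doubling of
# cumulative production. All numbers are illustrative assumptions.
cost = 4.00            # $/W module cost today (assumed)
target = 1.00          # $/W rough fossil-parity point (assumed)
learning_rate = 0.20   # 20% cost drop per doubling (assumed)

doublings = 0
while cost > target:
    cost *= (1 - learning_rate)   # one more doubling of deployed capacity
    doublings += 1

# At ~30%/yr deployment growth (assumed), one doubling takes
# log(2)/log(1.3) years.
years = doublings * math.log(2) / math.log(1.3)
print(doublings, round(years, 1))   # 7 doublings, ~18.5 years
```

Under these assumed numbers, parity arrives after about seven doublings, or roughly two decades, which is in the same ballpark as the 20-to-25-year extrapolation from the slide.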
So I had the pleasure of listening to a lot of briefings today on what MIT is doing in photovoltaics, new out-of-the-box thinking. In fact, it follows a lot from the recognition that you can separate the absorption of the photon, the particle of light, from the rest of the process. One of the people was talking about exciton diffusion as a means of carrying that excitation away, finally breaking that exciton into a separated electron and hole, and getting the charge out into the circuit.
And so the absorption part's easy. The exciton diffusion part is relatively easy. The charge transfer, and especially the coupling of the separated electron and hole onto electrodes, is a little bit tougher. But there are many opportunities for radically new approaches that one would like to see. It's very important. Forgive me for the provincialism, but anyway, I should've put in some MIT people.
But the idea is that you want to get away from glass substrates. Ideally, you want a continuous reel-to-reel process using wet chemistry so that it really can become inexpensive. Because of polymer electronics, you have the possibility of printing the inverters and a lot of the electronics directly onto the material. So again, this is something where there are great opportunities, I think.
So what we're looking for is the transformative technology. So let me give you a historical example of one. Back in 1918, 1920, AT&T licensed the invention of the vacuum tube. The inventor of the vacuum tube actually didn't realize how it worked; he thought that the glow coming from the residual gas was an essential component of it. But in the end, what Bell Laboratories and AT&T did was spend a lot of money developing the technology of the vacuum tube, because it was an essential component of long-distance telephone communication.
But there was trouble. Oh, for those younger people in the audience, this is what a vacuum tube looks like. When I was a kid, we had an old TV, and I used to gather the tubes up every year and take them to the hardware store to test which ones were dying. What happens in a vacuum tube is you heat up a wire until it's red hot and electrons come out, and then you modulate the electron stream that's collected on a plate. But eventually that wire burned out.
And so Bell Labs was spending a lot of time and effort making vacuum tubes that lasted a few weeks, then a year, then two years, and they got it up to six years. They could get long-lived vacuum tubes. While they were doing this, they started a little skunkworks effort to make a solid-state vacuum tube. Now, why did they think they could do this? Well, it turned out that in 1925, a new theory of the microscopic world emerged, called quantum mechanics.
By 1930, this theory, which was actually developed to explain how atoms give off light, the spectral properties of atoms, was being applied to how electrons move in metals and then in semiconductors. So what looked like something that could only explain the properties of light and atoms could then start to explain how electrons move in metals.
And by that time, there was a feeling that this was a truly good microscopic theory, and perhaps it could allow them to develop a solid-state vacuum tube. And so that's the first transistor, in 1947. It's a picture only a mother could love. But by the late 1930s, Bell Laboratories had started a concerted effort to actually make this so-called transistor.
But it was that invention that led to the whole semiconductor industry, it led to practical computers, it led to the internet, led to everything. It led to incredible wealth in the United States. It grew out of quantum mechanics only 10 years before that.
Now, another little tidbit: the people at Bell Labs ended up getting about 15 Nobel prizes. The first one was earned by Davisson, shown here with his assistant, Germer. And Davisson actually joined Bell Laboratories-- and this goes to another theme in this talk. That is, a lot of the best research started as very mission-oriented research, very applied research. But given the right sort of people and the right sort of environment, you can do some pretty fantastic basic science as well.
And so in this case, Davisson joined Bell Laboratories during World War I to work on vacuum tubes for the war effort. He liked it there. He liked the atmosphere. So he stayed. And he's holding a vacuum tube in his hand. This particular vacuum tube took electrons and scattered them off of a metal plate. And he found that these electrons actually scattered as if they were matter waves, thus confirming a conjecture by de Broglie that these particles could actually have wave-like properties, one of the tenets of quantum mechanics.
So he gets his Nobel Prize. He's a big hero. A lot of people now want to go and work for the great man. One such person was Shockley. So he comes to Bell Labs wanting to work on vacuum tubes and vacuum tube technology. He's a theoretical physicist. They said, well, why don't you work on this new solid-state replacement? And so he did. And here he's seen with the other two co-inventors of the transistor, Bardeen and Brattain.
Now, I have to say, in this picture Brattain-- I'm not sure how to pronounce it-- is the only experimentalist there. Bardeen and Shockley were theorists. So why is Shockley pretending he's doing an experiment? Well, because he's the department head.
[LAUGHTER]
So, again, this was a great invention. And if you consider all the science that came out in the 10 or 15 years following the announcement of the invention, a lot of condensed matter physics was written in those days in that development.
This mission-oriented research at Bell Labs led to a lot of things. It led to radio astronomy, the discovery that Jupiter and the sun are radio sources, the invention of the photovoltaic cell, which was an accidental invention. The transistor definitely was not accidental. And so it's a pretty good list of things.
And even when they were doing something very targeted, something very related to communications research, the way the work was done there was somewhat different. It was not hard-nosed, just what you think of as let's-make-a-better-widget work. For example, Nyquist-- they actually had a mathematics department at Bell Laboratories, believe it or not-- and so with these mathematicians, Nyquist determined the maximum signaling rate that can be transmitted through any channel for a given bandwidth.
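Nyquist's limit can be sketched in a few lines. This is an editor's illustration, not from the lecture: the function name and the example channel parameters are supplied here for concreteness. The result is that a noiseless channel of bandwidth B carries at most 2B symbols per second, so with M distinguishable signal levels the bit rate is 2B log2(M).

```python
from math import log2

def nyquist_max_bit_rate(bandwidth_hz: float, levels: int) -> float:
    # Nyquist's result: a noiseless channel of bandwidth B can carry at
    # most 2B symbols per second; with M distinguishable signal levels,
    # each symbol carries log2(M) bits.
    return 2 * bandwidth_hz * log2(levels)

# A hypothetical 3 kHz telephone-grade channel with 4 signal levels:
print(nyquist_max_bit_rate(3000, 4))  # 12000.0 bits per second
```

Doubling the number of levels adds only one bit per symbol, which is why, in practice, noise (which limits how many levels you can distinguish) matters so much.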
But mixed in with that is: okay, now you know what the maximum transmission rate is, so you would want to pack the communication bands very closely together. So you need good frequency control. And so Mason develops a crystal oscillator of great precision.
Long-distance communications-- you need stable amplifiers. If you have cascaded high-gain amplifiers, they would go unstable. So they invented negative feedback. Bell Labs has the patent for negative feedback. Now, this is a little surprising, because I thought it went back to the caveman: fire too hot, pull the wood out.
[LAUGHTER]
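The point of negative feedback can be shown numerically. As a hedged sketch (the numbers are illustrative, not from the talk): Black's formula gives a closed-loop gain of A / (1 + A*beta), which for large open-loop gain A approaches 1/beta, so the amplifier barely notices even large drifts in A.

```python
def closed_loop_gain(open_loop_gain: float, beta: float) -> float:
    # Black's negative-feedback formula: feed a fraction beta of the
    # output back, subtracted from the input; the closed-loop gain is
    # A / (1 + A*beta), which approaches 1/beta for large A.
    return open_loop_gain / (1 + open_loop_gain * beta)

beta = 0.01  # feedback network targets a gain near 1/beta = 100
# Let the open-loop gain drop by half, as an aging tube might:
print(round(closed_loop_gain(10_000, beta), 2))  # 99.01
print(round(closed_loop_gain(5_000, beta), 2))   # 98.04
```

A 50% drop in tube gain moves the overall amplifier gain by about 1%, which is what made long chains of repeaters feasible.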
Anyway, so good patent lawyer. Other things-- those engineers who-- you know these people. This is just a small sampling of the kind of work that was done there that was not only good for the communication business; it was laying the foundations for a lot of what we know in electronics and noise. Now, Claude Shannon was a different case. He joined Bell Laboratories not in World War I, but World War II. He actually got a degree at MIT.
And he got his doctorate degree here, and he went to work on fire control systems and cryptography during World War II. So he was actually working on how you control anti-aircraft guns to shoot down airplanes. But he was collaborating with other fellow Bell Labbers. And so Shannon and Blackman and Bode-- for those who are in the physics or engineering business, these are legendary names-- developed data smoothing and prediction for these anti-aircraft guns. And they decided, because they were working for Bell Labs, that it was very analogous to the problem of separating a signal from interfering noise in communications systems. And Shannon was also working on cryptography. He had met Turing during the war, because Turing was visiting Bell Labs for a couple of months.
And so he develops a mathematical theory of cryptography. And he writes that cryptography and communications are so close together you couldn't separate them-- which could have fooled me. And in a footnote to his theory of cryptography, which was later declassified, he states his intent to develop these results in a forthcoming memorandum on the transmission of information.
So this paper in 1948 formed the basis of information theory. For those of you who don't know about it, let me just say I'd love to give you several lectures on it. It's a fantastic thing. With many profound discoveries, it is possible to see that the times were ripe for a breakthrough-- not so with information theory. "His results were so breathtakingly original that even the communication specialists of the day were at a loss to understand their significance. Gradually, as Shannon's theorems were digested by the mathematical and engineering community, it became clear that he had created a brand new science."
So what did he do? He said, okay, the fundamental problem of communication is to reproduce at one point, either exactly or approximately, a message selected at another point. So he divided it up into a source-- let's say it's a person speaking, or it could be musicians playing. Then you're going to take that information, a voice or music, and you're going to encode it some way. And you're going to put it into a channel. But the channel is noisy.
And then you're going to decode it. And finally, it goes back to transmitted sound. Now, Shannon said, okay, the channel is noisy, but you get to optimize, given the channel noise, how to encode the signal. He was worried about information being lost-- for example, if you take a voice and you digitize it. He actually did his undergraduate work in Boolean algebra, so already in the '30s and early '40s he was thinking all digital.
Suppose you take your voice message and turn it into strings of zeroes and ones. Now, if the channel is noisy, you want to make sure that information gets transmitted. So instead of sending 0, 1, 0, you would say, okay, for this 0, I am going to transmit three 0s, and for this 1, I'll transmit three 1s. I've just encoded redundancy. On the other side, if there's noise that might flip a bit, I'll just say, okay, if I receive three 1s or three 0s, no contest.
If the noise should flip a bit and there's two 0s, it's most likely still a 0, so we'll call it a 0. So that's a way of putting some redundancy into the message to take out the errors. Now, think about that and say, I want to transmit this message so that it's completely error free. Is that possible? Well, you say, instead of three, maybe I use 10 0s and 10 1s, or 100 0s and 100 1s. And you see what you're left with: in order to get it error free, your rate of transmission has to go to zero.
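The repetition scheme he's describing can be simulated directly. This is a hedged editor's sketch (the function names and the 10% flip rate are illustrative): each bit is sent three times through a channel that flips bits at random, and the receiver takes a majority vote.

```python
import random

def encode(bits, n=3):
    # Repetition code: transmit each bit n times.
    return [b for bit in bits for b in [bit] * n]

def noisy_channel(bits, flip_prob, rng):
    # Binary symmetric channel: each bit flips with probability flip_prob.
    return [b ^ 1 if rng.random() < flip_prob else b for b in bits]

def decode(bits, n=3):
    # Majority vote over each block of n received bits.
    return [1 if sum(bits[i:i + n]) > n // 2 else 0
            for i in range(0, len(bits), n)]

rng = random.Random(0)
message = [rng.randint(0, 1) for _ in range(10_000)]
received = decode(noisy_channel(encode(message), 0.1, rng))
error_rate = sum(a != b for a, b in zip(message, received)) / len(message)
# A raw 10% flip rate drops to roughly 3p^2 + p^3 = 2.8% after majority
# voting, but the transmission rate has fallen to one-third.
print(error_rate)
```

Pushing the error down further this way (10 copies, 100 copies) works, but exactly as the lecture says, the useful rate goes to zero.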
Okay. Now, if you want to improve upon that, your standard engineer would say, well, let's improve the hardware so it doesn't flip as many bits. But Shannon asked a very different question: for a given error rate, how can we design an encoding and decoding system to maximize the speed and minimize the errors? And what are the fundamental limits to the speed and reliability of information transfer?
Just as for thermal noise-- Johnson and Nyquist said there is a fundamental thermal limit on all electronic noise given by kT, and Nyquist gave the fundamental bandwidth limits-- Shannon was asking, what are the fundamental limits here? And he discovered something amazing. He proved that even in a noisy system, you can get as close as you want to no error and still have a finite bandwidth, a finite data transmission rate. So you can make perfect error correction and still not take an infinitely long time. And so he proved this theorem, that straight line. There are two graphs: one is log scale, and the other is linear.
So this line is on a log scale. This is error rate-- 10% error, 8%, 2% error. And he said, no matter how smart you are, you can't do better than this line, period. And it was a mathematical theorem. It's an incredible thing. He had to define what you mean by information. He defined it in terms of entropy, and it worked out, and it said, okay, that's as good as you can get. And these are the actual codes that are commonly available today. And they are asymptotically getting up to this limit.
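That entropy-based limit is concrete enough to compute. As an editor's sketch (the bit-flipping channel here is the textbook special case, not necessarily the channel on the slide): for a channel that flips each bit with probability p, Shannon's capacity is C = 1 - H(p) bits per channel use, where H is the binary entropy. Below that rate, coding can drive errors as close to zero as you like; above it, nothing can.

```python
from math import log2

def binary_entropy(p: float) -> float:
    # Shannon's entropy of a biased coin, in bits.
    if p in (0.0, 1.0):
        return 0.0
    return -p * log2(p) - (1 - p) * log2(1 - p)

def bsc_capacity(flip_prob: float) -> float:
    # Capacity of a binary symmetric channel, C = 1 - H(p), in bits per
    # channel use. Shannon's noisy-channel coding theorem: at any rate
    # below C, clever coding can push the error probability as close to
    # zero as you like; above C, no code can.
    return 1 - binary_entropy(flip_prob)

print(round(bsc_capacity(0.1), 3))  # 0.531: over half a bit survives each use
print(bsc_capacity(0.5))            # 0.0: a coin-flip channel carries nothing
```

So even with 10% of bits flipped, more than half a bit of reliable information per transmitted bit is achievable-- far better than the zero-rate repetition scheme.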
So this is the type of thing where, when you think of applied research, the results are just dazzling-- it's like the uncertainty principle: no matter how smart you are, you can't measure the position and momentum at the same time better than a certain limit. So, again, there was something special going on at Bell Laboratories that allowed all these things. So I asked the question: how can we design a modern-day equivalent-- a Lincoln Labs here at MIT, or a Los Alamos-- focused on mission-driven research, but also connected to the industrial world? It's not wartime stuff. You're not making bombs.
And so can you create energy laboratories that are equivalent to Bell Laboratories, which remained at the pinnacle of its field for 75 years? So you begin to think about it, and you ask, well, what was the management structure at Bell Laboratories like? Well, one of the first things is the managers were the best practicing scientists. They weren't formerly some of the best who had lost some of the wind in their sails and moved on to management. They were the best.
And these managers had an intimate knowledge of the people and what they were doing in their departments. And they could deploy resources and make decisions very quickly. When I was a member of the technical staff at the labs and I had an idea, I would go into my department head or director's office and say, I've got this new idea. And he'd say, okay, tell me about it. And we'd talk about it. He'd try to shoot it down, and I would defend it.
And that's the way it proceeded. And when I became a department head, exactly the same thing. I tried to shoot ideas down, or say, oh, that's a great idea. Sometimes I was right, and it was not such a good idea. Other times I was wrong. And then you can make a funding decision within-- you can say no within an hour, and you can say yes within a day or a week.
The management also, because they were active scientists-- it was their business to find out as much as possible about what was happening in the rest of the laboratory, and they were some of the lubricant: oh, you should talk to so-and-so, because they may know something about what you're doing. And that was really great, because I never had to go to the library. I just talked, not only to my department head but to half a dozen people, and said, what about this idea?
And they said, oh, you should talk to so-and-so. Within one or two near-neighbor exchanges, you're likely to be sitting next to a world expert in something. And then they could do exactly the same: no, that's a great idea, but it was done 10 years ago. Or, that's such a stupid idea, let me tell you why. Or, in 5% or 10% of the cases, they might say, hey, you might have something there.
Now, the other great thing about Bell Laboratories was that one was not as obsessed about keeping things secret. And that was really nice. You could spout off an idea, and you weren't really worried that someone was going to run off and steal it from you, because the place was so idea-rich. There were more ideas than there was time to do them.
So this is when I got hired, in 1977. This is me before I became a national lab director. In 1977, '78, roughly a couple dozen people were hired at Bell Laboratories in the basic research area. Five of us got Nobel Prizes. Over 50%, probably by now two-thirds, are in the National Academy of Sciences. We were all hired young, straight out of graduate school or just finishing postdocs.
So it was a very special place. And so one of the questions is, can we actually invent things? I'm going to skip battery development-- just one thing about batteries. I'll tell you where we are in terms of batteries. This is where lithium-ion-- a standard, commercially available lithium-ion battery-- is in terms of energy per unit weight, or energy per unit volume. And this is diesel fuel, gasoline, and body fat.
There's quite a bit of difference between a chemical fuel and a battery. But I just have to say that if you can get something five times better, the world will be your oyster. It can really electrify transportation. This is why, for example, we're not going to have battery-powered airplanes-- because we're down there instead of up there.
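The gap he's pointing at on the slide can be put in rough numbers. These specific-energy figures are ballpark textbook values supplied by the editor, not the slide's own data:

```python
# Ballpark specific energies in watt-hours per kilogram. These are
# rough textbook figures, not numbers from the lecture slide:
wh_per_kg = {
    "lithium-ion battery": 200,   # typical commercial cell
    "body fat": 10_500,           # roughly 38 MJ/kg
    "diesel fuel": 12_700,        # roughly 46 MJ/kg
    "gasoline": 12_900,           # roughly 46 MJ/kg
}

battery = wh_per_kg["lithium-ion battery"]
for substance, energy in wh_per_kg.items():
    print(f"{substance}: about {energy / battery:.0f}x lithium-ion")
```

Even granting that electric drivetrains are several times more efficient than combustion engines, a factor of 50 or so in stored energy per kilogram is why a five-times-better battery would be transformative on the ground and still not enough for flight.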
Just for your comic relief: "What does a Boeing 777 have in common with a bar-tailed godwit?" That's a bird. Just in case you don't know, that's what a bar-tailed godwit looks like. And the answer is they both can fly nonstop for 11,000 kilometers. This bird is about this big.
When the bird takes off, 55% of its weight is in body fat. And when a 777 takes off, about 45% of its weight is in body fat-- sorry, in jet fuel. And they both have about the same range. So the plane is a little bit better, because it has a payload. The bar-tailed godwit was carrying a little sensor telling us where it flies. For some strange reason, it migrates from Alaska to New Zealand.
So it couldn't be-- I don't know, but anyway. I'm going to skip this because there's this red light blinking. And batteries-- there are lots and lots of things you can do with batteries, lots of new ideas floating around. I'm going to skip that and just say that I am optimistic, because there are a lot of very smart people working on these radical approaches. And so I'm very optimistic we will get very good batteries within five years or so.
Biomass-- also very optimistic. We tried a little experiment at Berkeley Lab. It's called the Joint BioEnergy Institute, funded by the Department of Energy-- $25 million a year for five years, and if they do well, they get another $5 million. There are three bioenergy institutes funded by DOE. I think they're working so well that we want to expand this to other things, like a battery institute, a building systems institute, things of that nature in various forms.
So what's good about this type of institute is that, although there are participating institutions, they rented a floor of a laboratory building. And in it, they wanted to put in a structure very much like the Bell Labs structure, where it's not a principal investigator surrounded by a group, but a whole bunch of people working on various things like feedstocks, deconstruction, and fuel synthesis.
And it seems to be working very well. In the first six months, there were two notable things. One is that using synthetic biology methods, they reprogrammed yeast and E. coli to make gasoline- and diesel-like fuels. These green circles are the types of fuels they can make now. They want to make these commercially viable. So then the hard part begins. And the hard part is getting this organism to spend most of its waking life just taking sugars and turning them into fuel.
In order to do this, of course, they've introduced entire metabolic pathways into the organism. And the first thing an organism does when you do something like this is try to shut it off. In the process of getting this to work, they're going to understand many of the fundamental things about systems biology: how you shed some of the metabolic pathways that actually have nothing to do with what you want the organism to do-- which is simply to make fuel, plus a little bit of reproduction-- and some of the other stuff, adaptability and things you don't care about anymore.
So it's like you have a teenager, and you give it all the video games, Coca-Cola, and pizza it ever wants. After a long time of genetic modification, you're going to have one heck of a good video game player, and you get rid of all the other essential things. So what we're trying to do is figure out how these microorganisms can do this.
And so this is going to be a great challenge. But there will be great science that comes out of it, because we will have to understand these systems in a very deep way. And finally, we want to use nature as inspiration, but go beyond nature. These are the sketchbooks of Leonardo da Vinci: how do you learn to fly? Well, you study how birds fly.
But the first airplane was a hybrid. It didn't require muscle power. It had wings like a bird so that the wing warping which you see-- this is the Wright Brothers plane-- could control the flight. But the power came from a gasoline engine. Now, today's airplanes, the turbine blades are single crystals of metal. And so you're making things that nature simply could not make.
And so with that as an inspiration, can we do artificial photosynthesis? Plants take sunlight and assemble carbohydrates-- why can't we do the same? And certainly at MIT, Dan and Sarah and others are doing this, where you take an artificial system and you ask it to split water, as in the first process of photosynthesis. The sunlight energy turns water into oxygen and hydrogen, and from that, you assemble a hydrocarbon. The reason you want to do this is that you have access to materials that nature doesn't have access to, and all the precious drops of water can be used to make a fuel.
Let me give you another point of inspiration. Carbon capture and sequestration is very high on our list of things that we must do, because people are building coal plants-- China for a while was building one a week-- and you have to capture the carbon from these plants and sequester it safely. If you were going to capture carbon from a conventional coal plant, a standard technology is to pass the flue gas over an absorbent like an amine or chilled ammonia, where there are reasonably fast kinetic rates. So the carbon dioxide is absorbed on this amine, for example.
And then, in order to bring it up again and create a stream of carbon dioxide, you have to heat up the material. And heating up the material costs a lot of energy and needs a lot of real estate. But there is an existence proof, and that is in our own bodies. When we metabolize, we produce carbon dioxide. The carbon dioxide is absorbed into your bloodstream, then it gets transported, and you breathe it out.
And so there are enzymes that actually allow you to capture the carbon dioxide, transport it, and breathe it out, in very subtle high-pressure, low-pressure regimes that are not too different-- at body temperature, a million times faster than it would have occurred without these enzymes, with no energy penalty. So this is an existence proof: biology, once again, has done it very well.
And so surely we can capture carbon dioxide as well, if we learn how. So it's something to tinker with, and in the end, hopefully we can do better. Now, you can say, well, why aren't we using these enzymes? Well, they're enzymes. They get degraded by all sorts of proteases, so they don't have the stability that we would like. But at least you know-- half the thing in doing science is knowing what the answer can be.
So the answer can be thousands of times better. So let me end the talk. I showed you a bunch of challenges. This is such a teeny, tiny sampling of the challenges we need to solve. We have to look everywhere.
There's going to be very exciting science that comes out of this. And just like in those laboratories, where you want to deliver some goods in the end-- but boy, there's going to be a lot of very fundamental stuff you have to develop along the way. So you can have a Nobel Prize and save the world at the same time. And with that, I'll just leave you with this image.
This was the famous image taken by Apollo 8. When they came around the moon, they finally thought to turn the capsule around, and one of the astronauts took this picture: a very bleak lunar landscape and a very beautiful Earth. The important point to realize is there's nowhere else to go. And so the astronaut who took this picture said, we came all this way to explore the moon, and the most important thing is that we have discovered the Earth. So it's our home. Let's take care of it. Thank you.
[APPLAUSE]
So, time for questions.
MODERATOR: Please.
AUDIENCE: Hi, Secretary Chu, Dan Hussein, Pioneer Energy. I finished my undergraduate degree in electrical engineering. And your boss inspired me to go back and get my PhD. But you are now inspiring me to aspire to the Nobel Prize. So thank you very much. I have a question. Even though I'm an engineer, I have a political policy type question.
So you just got $34 billion in taxpayers' money. And how are we going to go about with a mandate to distribute it to solve a lot of these problems? How is your department going to go about doing it, but at the same time, without wasting taxpayers' money? What is your plan there?
STEVEN CHU: Be fast, but be careful. That's something we're very concerned about. A lot of this money is spread over a wide variety of areas-- for example, weatherization of homes, things like that. So we're developing mechanisms to make sure that we have people on the ground and programs to train people so that they can do this work. But let me speak more to the point. There are a lot of grants that we also give. And university presidents, deans, and professional societies are about to get a letter from me.
And we're going to be asking for volunteers-- faculty and scientists-- to help review the grants, so that we spend the money in the wisest way possible. And we would also ask students to consider taking a leave of absence, spending a year in the Department of Energy, and helping us in these two years in which we have to dispense this $37 billion in addition to the $26 billion we have per year.
This is a huge load on the system, and we need the best help we can get in making sure that we evaluate the best proposals and put the money where it can do the most good. So that's an appeal to all of you: think about spending some time in Washington. We're planning, in the near-term future, to assemble something like the NIH study groups, only a lot bigger, to look at a lot of proposals. That's the only way we can actually do this. And the quality of the reviews has to go up. It's very important we get it right.
MODERATOR: I'm warned we can only take two more. So we're going to take the next two over here. And I apologize-- that's all we have time for.
AUDIENCE: Secretary Chu, my name is Gary Shu. I'm a technology and policy student here, also in the Department of Urban Studies and Planning. I was a student of Horst Stormer at Columbia, who went to Columbia after Bell Labs pretty much fired all their scientists. And you've presented many great accomplishments that Bell Labs has produced.
But Bell Labs was supported by the monopoly AT&T for so many years, which was able to absorb the losses of some of the experiments that didn't play out. And so my question is, how does the government plan to structure a Bell Labs in the government, or an ARPA-E, so that the funding doesn't take away from other science projects in the government and is also able to transfer the profits to private industry?
STEVEN CHU: Well, I prefer to think that the money spent on research at Bell Labs wasn't a loss. It was an investment that led to many, many great things. Now, what actually happened in terms of the business of Bell Labs was something different. So in terms of what the government wants to do with ARPA-E-- let me first put it as-- let me rephrase: not answer the question, but say something slightly different.
If you look at the amount of money the United States spends on energy-- just the primary energy market, how much you spend on oil, gas, coal, things like that-- it's over a trillion dollars. It could be a trillion and a half, but something of that scale. Energy going into the future will certainly have to be more high-tech than it was in the Industrial Revolution days of burning coal to boil water to make electricity.
So in a high-tech industry, what you typically do is invest 10% or more of sales in R&D-- at least 10%. I was on the board of Nvidia; if you didn't do better than that, your competition would eat you alive. Well, 10% of a trillion dollars is $100 billion a year. That's a lot of money. And we're really investing a couple of billion.
And so the scale is not proportional to what is needed. So instead of saying that the money is going to stay the same-- aren't you afraid that some of that money will be taken away from basic research?-- no, that's the last thing we want to do. What the president has called for is a doubling of the basic science budgets in the NSF, NIST, and the Department of Energy Office of Science over a 10-year period.
And so that's in the out years-- that's what he's proposed. But in addition to that, we need additional money to fund these more applied areas. I'm a big believer that energy science and technology will be a cornerstone, if not THE cornerstone, of how America is going to prosper in this century. And what we're investing now is nothing. In a talk given to the National Academy of Sciences a couple of weeks ago, the president said what he wanted was an investment in science and technology, across the private and public sectors, of a couple percent of GDP.
And so the doubling of the basic sciences is a start to that. But really, he says that this is going to be our future, not only our energy and climate change future, but our future in economic prosperity. So hopefully, this will come to pass. It should come to pass. This is, I think, our future.
MODERATOR: One more.
AUDIENCE: Great. Secretary Chu, I'm Phil Giudice, energy commissioner here for Massachusetts. Good to see you. Great talk. I'm concerned about the current national energy and climate legislation that's being considered. It has a lot of the right framework in it. But it feels like the process as it's underway right now is potentially going to water it down so that it really doesn't motivate significant change. And I'm wondering what we and all of us can do to help affect that so that it comes out with a different outcome.
STEVEN CHU: Talk to your congressmen and senators. Probably the ones around this area are very sympathetic to this. But I think that's it: there are regional differences in the United States. And one of the issues is that some of the hardest-hit areas in the United States-- the industrial states, for example-- have relied heavily on coal for energy.
And so we have to figure out a way to allow those states to actually make this transition to a new future, but it involves hardship. I'm still very hopeful that we'll have an energy bill with some real teeth in it, and we'll see how it plays out. But certainly Congressman Markey, who is from around here, is playing a key role with Waxman on that.
AUDIENCE: I understand. Thanks.
MODERATOR: I want to thank you and tell you how inspired we are by your leadership. And I hope you have seen and heard the commitment of this community to joining you as we invent a new future for energy in the United States. Thank you so very much.
[APPLAUSE]