24th Annual Killian Award Lecture—Daniel Kleppner


BACOW: Good afternoon. President and Mrs. Vest, Chairman and Mrs. Gray, friends and colleagues, let me welcome you today to the 1996 Killian Award Lecture. As the chairman of the MIT faculty, I have the honor of introducing the 1995, '96 recipient of the James R. Killian, Jr. Faculty Achievement Award, Professor Daniel Kleppner, the Lester Wolfe Professor of Physics.

The Killian Award is the highest award bestowed by the MIT faculty upon one of its own members. It recognizes accomplishments that go beyond the mere excellence that is expected of every member of this great institution. Recipients of this award embody the highest achievement of the values that we as the faculty hold dear-- achievement in scholarship, research, teaching, community service, and collegiality. To be recognized by one's peers and colleagues for such accomplishment is indeed a singular honor.

This award was established to honor James R. Killian, Jr., MIT's 10th president. Jim Killian's accomplishments are legion. He helped guide MIT through the post-war years. He was the nation's first full-time presidential science advisor. He played a leading role in the establishment of the Corporation for Public Broadcasting. And he articulated a vision of a university polarized around science, engineering, and the arts, which continues to guide MIT today.

In preparing these remarks, I revisited a book which I last read 10 years ago, Dr. Killian's biography entitled The Education of a College President. In it, Dr. Killian offers trenchant advice that is as relevant today as it was in the day in which it was written. I quote, "The central challenge facing American universities is the imperative to be relentlessly first-rate, to maintain such high credibility, creativity, and luminous excellence that they enlarge the national vision and enhance the national confidence. By their demonstration of a contagious excellence and high moral purpose, they may help our society avoid a slough of despond."

Clearly Jim Killian had a way with words and an ability to turn a phrase. He was a gifted writer and editor and an extraordinarily effective advocate for good science. So too is this year's recipient of the Killian award, Dan Kleppner, a physicist whose scholarly achievements are matched only by the wit and elegance of his written and spoken words. Dan began a scholarly career at Harvard where, in his doctoral dissertation, he discovered that coherent cesium atoms can bounce from properly prepared surfaces without losing their coherence. This discovery provided the foundation for his invention, with Norman Ramsey, of the hydrogen maser, in which hydrogen atoms bouncing about in a microwave cavity stabilize the frequency of a clock to a precision better than one microsecond per year.

Now, I must tell you from personal experience, that this is indeed a great discovery because our ability to measure time with great and unerring precision has allowed the creation of such wonderful technologies as global positioning systems. In fact, each summer when the Bacow family is groping off the coast of Maine in the fog, we say a little prayer of thanks to people like Dan Kleppner for their work that made possible the invention of GPS. Because it's in fact his work which gives rise to these marvelous little devices. This is a handheld GPS receiver, which will tell me wherever I am in the world within a few meters of accuracy and helps me to bring my family home when I can't see anything around me.

Dan joined the physics department in 1966. Today he's recognized as one of the world's leaders in the field of atomic physics. He pioneered the study of Rydberg atoms, which help to clarify our understanding of how chaotic behavior emerges as one goes from quantum mechanical states to classical dynamics. His work on the suppression of spontaneous decay of excited atoms, previously thought to be inexorable, has led to novel kinds of lasers and photonic devices.

Indeed, in the current political environment in which support for basic science is being questioned, Dan Kleppner's work stands out as a wonderful example of why society must continue to invest in the search for new knowledge.

Not surprisingly, the Killian Award committee is not the first group to recognize Dan's considerable achievements. Dan's been honored with the Davisson-Germer Prize of the American Physical Society and the Meggers Award of the Optical Society of America. He's a Phi Beta Kappa lecturer and Lilienfeld awardee of the American Physical Society. He's a fellow of the American Academy of Arts and Sciences, the American Association for the Advancement of Science, and a member of the National Academy of Sciences.

For years, Dan has contributed to the life of the Institute through his gifted teaching of both graduate and undergraduate students. His text on classical mechanics is widely regarded as a sophisticated introduction to the field for undergraduates and has been used both at MIT and elsewhere. Dan serves as the Associate Director of the Research Laboratory of Electronics, which has also been the intellectual home of much of his research, and also serves as the ombudsman in the Department of Physics, a position which I'm sure at times tests both his wisdom and good judgment.

No introduction of Dan Kleppner would be complete without reference to his column in Physics Today. While I cannot claim to be a regular reader of the journal, I went back through past issues in preparing these remarks. And I must say I have to recommend his columns to all of you. They are witty, insightful, and accessible even to a social scientist who last studied physics in 1970.

Let me close by quoting one of Dan's columns, written in response to an article which questioned the morality of science. And I quote, "It is upsetting to find oneself suddenly on the wrong side of the moral fence. There are, after all, certain compensations for being bad. Bad people can wallow in money, wield power recklessly, exploit their friends, and drive red sports cars. I seem to have missed most of these pleasures."

[LAUGHTER]

"My choice now is to get serious about being bad or to refute the critics. I opt for the latter." Dan, on behalf of your colleagues at MIT as well as in the Academy, we are pleased you have made this choice and in the process have become such an eloquent and articulate advocate for good science. If you'll come forward now, I would like to read the citation for the 1995, '96 James R. Killian, Jr. Faculty Achievement Award to Daniel Kleppner.

In recognition of his extraordinary achievements in atomic physics, most notably his pioneering research in the field of Rydberg atoms and his contribution to the invention of the hydrogen maser and its application in ultra-precise clocks. A teacher and writer of great felicity and wide influence, he has inspired generations of scientists and contributed widely to the development of his profession. Dan, congratulations.

[APPLAUSE]

KLEPPNER: Thank you, Larry. I feel at this point I should immediately sit down.

[LAUGHTER]

But I am very pleased and honored to receive this award from the faculty of MIT and from the institution. I'm somewhat puzzled, too, because when I look at my colleagues throughout the Institute and think about the wonderful things which are going on around here, I wonder why it is I am the person to be up here talking while you are all there listening. But I feel it's rather bad luck to worry too much about good luck, so I will stop worrying about it and go on to talk about what I would like to talk about today.

The first thing is the title of my talk. Some people have asked about it-- "Views from the Garden of Worldly Delights." They asked why I chose that title. The answer is that if I had called the talk, "Selected Topics in Rydberg Atom Physics," you probably wouldn't have come. But beyond that, the title reflects the fact that I do want to talk about science in a somewhat broader context than my own immediate research because I think this occasion calls for it.

The Killian award, as you so very nicely described, is a tribute from the faculty to honor the values at MIT that we think are very important. And I think that science is an essential part of those values. And I would like to talk about the work that I've done, somewhat in the context of the broader values of science.

So let me show what's on the agenda here. There's the title in case you forgot it, and I'd like to thank my wife for supplying the flowers. It was a midnight inspiration. But it is rather a flowery title, and it's somewhat perhaps a flowery talk. But you'll see it's an appropriate title, I hope. I also would like to acknowledge the graciousness of providing this decor that matches the title of my talk. It makes me wish that I had entitled the talk "Views from the Top of a Small Pile of a Hill of Gold."

[LAUGHTER]

Anyway, here is what I'm going to talk about, if I can overcome the forces of static electricity. I was teaching 8.02 last fall when it was humid, and nothing worked, because static electricity doesn't work when it's humid. And here we are in the middle of the winter, and now it's defeating me. Okay.

I have a prologue. I wanted to talk a little bit about some of the people that we all know about and revere-- Kepler, Galileo, and Newton. And then I'd like to talk a bit about the work that I've been doing-- some of the physics of the dynamics of the vacuum and chaos and quantum chaos and something about pushing the frontiers of high precision.

Now I realize that the audience has all sorts of different temperaments, backgrounds, and tastes, that to my expert friends this will all be very simplistic, and for those who don't know much about physics at all, this will probably be totally confusing. So if I hit the right level over here, I should leave nobody pleased.

[LAUGHTER]

But I will try to be clear.

And then a little epilogue but not a very, very long one.

Let me start by turning to The Garden of Worldly Delights. It's the name of a painting by Hieronymus Bosch. Sometimes it's called The Garden of Earthly Delights. In French, it's called [FRENCH]. These things always sound much better in French.

[LAUGHTER]

Apparently the title was awarded to it sometime in the latter part of the 19th century. There's no trace of an early title. And it's not even clear where it hung. It was meant for a church, but it's certainly not an altarpiece. It was painted in 1510, and it is quite a remarkable and provocative painting.

So if I can have the first slide here, can we-- well, I can. I guess I press this thing.

[INAUDIBLE]

Is it there? Oops, let me go back. There we go. Can you see?

[INAUDIBLE]

Yeah.

It's a triptych. It's about as high as I am, a bit higher actually: about six feet high and six feet wide. And it's made of three panels, two small ones which fold over the one large one. This is a traditional form for church paintings. They were kept closed except on special occasions. The outside of this shows the creation of the world. What one sees there is a picture of the cosmos and the Earth with forms rising out of the chaos, very peculiar, ghastly, grisly forms. It's apparently the third day of creation, before any of the beasts have been put on Earth, much less man.

Sitting way up in the left-hand corner is a very small figure that-- up there-- it's too faded to see from where you are, but it's a figure of God looking down on creation. And what's very unusual about this is that God is holding a book which is open. The painting is very unorthodox, if not downright heretical, because the thought that God was creating the Earth from a preordained plan is not one in traditional religious dogma.

[LAUGHTER]

And considering what else goes in the painting, how much it anticipated the 20th century, it wouldn't surprise me if the book held open the theory, the quantum theory, of gravity or perhaps string theory.

But let's see what the picture looks like when it's open. Here are the three panels. Over on the left is the Garden of Eden. In the center, which is generally called the garden of worldly delights, is a large picture with lots of activity taking place. And on the right is a picture of hell.

It's a very provocative and kind of spooky picture because when one looks closely, it is so unorthodox. In the Garden of Eden, there's a picture of Eve being presented to Adam. In the background, there are wonderful animals. There's a giraffe. There are lions. There are lambs. But there's a lot going on there that generally doesn't go on in the Garden of Eden. For instance, the lion is eating one of the lambs back there.

[LAUGHTER]

And in the foreground over here, there are very weird, spooky supernatural creatures, which are coming up out of the slime. So this is a very dark view of Eden. It's as if man is doomed even before the fall.

In the garden of worldly delights, it is rather a psychedelic festival. Lots of young people-- they seem to be about the age of graduate students--

[LAUGHTER]

--who are sporting themselves in varied and unusual activities. Here is a little detail from it, a fantastic mountainous form. There's a griffin flying through the sky. Over here, there are mermaids, or perhaps mermen, flocking down there. Here is someone doing aerobics.

[LAUGHTER]

There is a little mermaid there. Here are a lot of people who are sitting around eating a giant strawberry. These are all vegetarians, and they're eating a lot of fruit. So this is all politically correct.

Up here, there's a little figure you may recognize from the poster. Let me-- there it is. There's a little character flying through the sky on a fish. I think some of my friends were a bit concerned about that and were a little bit afraid I was having an out-of-body experience. But it is not me up there. I'm not sure who it is. But it's a very haunting little image, and this painting is filled with haunting images like that. It's a rather surrealist image. And in fact, the painting is a surrealist's painting. It's surrealism 400 years before its time.

Here we have people floating in a bubble over here, eating from these giant fruit. Here are some people climbing around in a shell. There is a couple reposing under an oyster shell.

[LAUGHTER]

Notice the picture's very suggestive, but in fact, no one is actually doing anything here. These are, like, perhaps flower children who are spaced out. Or maybe everyone has watched too much television. It's as if people are in a dream. This is not a licentious picture at all. It's very [INAUDIBLE]. There's a fellow floating in a bubble over there.

Here's someone sitting on a giant bird. This is infested with large beasts, huge birds. There's a crowd back there of people who are hugging each other. It's hard to know what to make of it.

Here is a picture of hell. Now, the usual depictions of hell were filled with fire and brimstone, but this is a rather different hell. Here's a war machine made up of a knife and a couple of ears, which rolls along. A very bizarre, very surrealist touch over there. Here is somebody being tortured by music. There's a bagpipe, which is blasting.

[LAUGHTER]

Again, he's ahead of the time.

Maybe the most haunting detail in this whole figure is this face which looks out from underneath this little platform where people are being tortured. It's a face just sitting there looking. This is a picture of, really, total confusion, of anarchy. Nothing makes sense. It's a rather existentialist view of the world. It's just a world of absurdity. It isn't a view of terrible condemnation. The people are being tortured, but rather whimsically tortured over here.

This was the vision of this marvelous painter in 1510 who anticipated surrealism and existentialism by four centuries.

Now, I'd like to jump ahead and show another picture. And this is a picture of Johannes Kepler who was born about 60 years after Hieronymus Bosch died. Of course, Kepler is a very well-known figure. Every freshman learns about Kepler's laws, except the freshman I teach since one year, when I was trying to talk about Kepler's laws of planetary motion, I accidentally said, "Kleppner's laws of planetary motion."

[LAUGHTER]

Since then I've tried to avoid the name.

[LAUGHTER]

Can we turn off the projector because I would like to show a little bit about where his laws came from. I won't go through his whole history, although it's a wonderful history. There's a marvelous book by Arthur Koestler on the life of Koestler-- on the life of Kepler. Yep.

[LAUGHTER]

He became a mathematician and an astronomer at a rather young age. And at the age of about 26, he published his first work, called the Mysterium Cosmographicum. It was his attempt to try to reconcile the motion of the planets to a rational scheme. And of course, he constructed this figure, which I think you've probably all seen, in which one has all of the regular solids, one embedded inside of a shell, embedded inside of the next. And his belief was that the planetary orbits would fall within those shells.

He spent his whole life really trying to test that belief and see if he couldn't straighten it out. But it was a failure. And in some ways, his life was a failure.

Let me jump on now to 1619. The Cosmographicum was about 1596, so we're 23 years later. His last major work was the Harmonices Mundi, The Music of the World, in which he at last had unearthed the secrets of the planets. It was a time of great elation for him because he had discovered that his whole life he'd been working on the wrong thing, that it wasn't in the geometrical solids at all. The secrets were in music. He managed to equate the motions of the planets, the eccentricities, the variations in speed to musical scales and felt that every planet was singing its own song as it went around.

Now, this, of course, is sheer balderdash. The planets do not make any music, and these scales don't really work. But to Kepler, this was his crowning achievement: to discover this music of the world.

But he had discovered something else while doing this. He'd discovered the laws by which he's known, Kepler's laws of planetary motion. The first is that the planets follow elliptical paths. Now, that's very easy to state, and we take it for granted. But he fought that for years and years. Planets should move in circular paths. Even under the Copernican theory, one tried to equate planetary motion to circular paths. Tycho Brahe had a cosmology in which the planets moved in circular paths, but the circle wasn't exactly around the sun. And one could still introduce epicycles, circles moving on epicycles. He gave that up very, very reluctantly. The fact that they actually moved on elliptical orbits was a terrifying truth for him and one that he took no pride in and rather suppressed.

Also, the law of equal areas: as the planets move around the ellipse, if they're far away, they move slowly; if they're close to the sun, they move rather rapidly. He discovered that it was a useful computational tool for him. But again, he seemed to take very little pleasure in that. And the third law, namely that the square of the period of a planet is proportional to the cube of the semimajor axis of its orbit, which he found in 1618, is buried in The Music of the World.
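The third law is easy to check with modern numbers. As a minimal sketch (the approximate orbital data below are modern values, not figures from the lecture): with periods in years and semimajor axes in astronomical units, T squared over a cubed comes out to 1 for every planet.

```python
# Kepler's third law: T^2 is proportional to a^3.
# Data: (semimajor axis in AU, orbital period in years), approximate modern values.
planets = {
    "Mercury": (0.387, 0.241),
    "Venus":   (0.723, 0.615),
    "Earth":   (1.000, 1.000),
    "Mars":    (1.524, 1.881),
    "Jupiter": (5.203, 11.862),
    "Saturn":  (9.537, 29.447),
}

# In these units T^2 / a^3 = 1 for every planet, to well under a percent.
ratios = {name: T**2 / a**3 for name, (a, T) in planets.items()}
```

Every ratio comes out within a fraction of a percent of 1, which is exactly the regularity Kepler found buried in his data.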

So he had made these great discoveries, but he hadn't really known just what he had discovered.

Now, I'd like to show another picture, which was taken somewhat later. Here's a picture of the planets of Jupiter. These were taken with a rather good telescope at the Yerkes Observatory. But one can see comparable pictures just looking with a rather small telescope. And these are the planets of Jupiter on three successive nights. Well, it's actually one night, a few hours apart, and then several days later. And one can see that these have moved.

Now, the discovery that Jupiter has moons, that the moons are going around rather rapidly, was made in 1610 by Galileo. Let me just-- for the record here, let me put on the-- let me just-- yes.

There is a picture of Galileo. I just want to pay him tribute. And I'd like to show his telescope over here. This is a picture of the very primitive telescope that he used. He, of course, didn't invent the telescope. The invention came from Holland; it was a spyglass. He turned it on the skies and did some miraculous astronomy with it. The level of that astronomy is fabulous.

Let me show you. Here are pictures from his notebooks, sketches of the moons of Jupiter taken on successive nights. These have been preserved, and Professor French, Tony French, very kindly told me about this and showed these to me. These are his actual drawings with the positions of the planets put in, and Tony French plotted them. And this is the plot for Callisto. I doubt that Galileo ever made a plot like this, but you can see the wonderful quality of this data, even though it was taken without any modern instrumentation at all. He was a master physicist, a master astronomer, a master measurer.

He also realized what he had over here. When you look at that, what you're looking at is a clock in the sky. And he realized the moons of Jupiter constituted a clock and that the clock would be very useful.

We have a very nice example of new technology, namely the telescope, leading to a new, very important discovery, namely that Jupiter formed a little planetary system all of its own. And that new scientific discovery immediately had useful applications because with a clock in the sky, you can tell where you are on the earth. You can tell your longitude. Larry just talked about the GPS system. This is the forerunner of the GPS system. And Galileo got right to work in trying to popularize the use of this for geodesy, for navigation. He invented a device called the [INAUDIBLE], which apparently was a helmet with a spyglass put in it so that sailors could look up and see the moons of Jupiter. No model exists, and apparently it never worked at all. But it showed that he really was trying to attack the practical problem of navigation.

Now, the reason that the slide that I showed you, the transparency of the pictures of Jupiter's moons, is so moving to me is that, well, I have seen Jupiter's moons through a small telescope, and when I've seen them, I've thought about Isaac Newton.

So let me jump ahead now to Isaac Newton and to the Principia, published in 1687. Here are a few pages from the Principia. These were photographed at the Burndy Library at the Dibner Institute, which has wonderful copies of the Principia which one can look at-- the system of the world, his rules for philosophy. And he gets right down to work-- the phenomena on which he's going to base his new dynamics, the phenomena that he uses, are the satellites of Jupiter.

He talks about the fact that they obey Kepler's third law. And he's going to derive that law from his planetary dynamics. He's going to derive the fact that they move in ellipses and the law of constant areas also from planetary dynamics. He makes very little reference to Kepler himself. Apparently, he rejected Kepler's observations about the orbits being elliptical because Kepler had only found this for Mars, really, and hadn't really generalized it. And in any case, Kepler's laws were purely empirical, and Newton wanted more than empirical laws.

Well now, with that as a background, let me jump ahead 200 years. I'm jumping ahead 200 years to the 19th century, which was a great century. It was the century of the Victorians. People were very industrious. They liked to collect things and catalog things and compile things. It was the century in which spectroscopy was invented. And that gave people who liked to collect, compile, and catalog a great deal to do. Namely, it was discovered that every element has its own set of spectral lines-- that when you look at it through a spectroscope, you see certain wavelengths which are present. You can measure those very accurately. So there was a good deal of accurate measuring of spectral lines, with rather little understanding of what they meant at all, until 1884, when Johann Balmer in Switzerland, a high school teacher, hit upon a very simple formula which described the spectrum of hydrogen. Now, at the time, there were only four lines known, and here they are. There are their wavelengths-- one red line and a couple of lines which are in the blue.

There were those four lines, which he fit with a very simple formula of this form over here: one over the wavelength is proportional to, well, that little expression I have over here.

Now, this by itself wouldn't be terribly useful because it's very easy to fit a few numbers quite accurately with an empirical formula. What made this thing very useful was the fact that it suggested there may be other lines present, because here we have this J going from 3 to 6. What about 7, 8, 9, and 10? Before he actually published the paper, those lines were reported. They were discovered by Huggins, not in laboratory discharges but in the spectra of stars. And this data, incidentally, came from Ångström, who made his measurements in the laboratory with a discharge of hydrogen. And one thing which made life work well for Balmer was that he knew that Ångström's data was much more reliable than others', because the quality of data at that time was not all that great.

But anyway, he fit it with this very simple empirical formula over here. But this formula had predictive power. Now, an empirical formula which has predictive power is a very useful one because invariably there is some physics which underlies it.

Well, taking Balmer's formula, one can predict not only the whole Balmer sequence over here but that there would be other sequences, too. He wrote his formula with 1 over 4 minus 1 over m squared, but if you think of the 1 over 4 as 1 over n squared, you can see that you can predict whole different series of spectral lines. And they were discovered rather rapidly: the Paschen series in 1908, in the infrared; the Lyman series, in the ultraviolet; and later series in the very far infrared.
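As a minimal numerical sketch (not from the lecture; the constant is the modern value of the hydrogen Rydberg constant), the generalized formula reproduces the four lines Balmer fit and predicts the other series:

```python
# Generalized Balmer (Rydberg) formula: 1/lambda = R * (1/n^2 - 1/m^2), with m > n.
# n = 2 gives the visible Balmer series; n = 1 the Lyman series (ultraviolet);
# n = 3 the Paschen series (infrared).
R_H = 1.09678e7  # Rydberg constant for hydrogen, in inverse meters

def wavelength_nm(n, m):
    """Wavelength of the m -> n transition, in nanometers."""
    inv_lam = R_H * (1.0 / n**2 - 1.0 / m**2)
    return 1e9 / inv_lam

# The four lines known to Balmer: m = 3, 4, 5, 6 down to n = 2.
balmer = [wavelength_nm(2, m) for m in range(3, 7)]
# Approximately 656, 486, 434, and 410 nm: one red line and the lines in the blue.
```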

So we have a whole series of lines, and we could fit them very, very well with a formula. But the question is, what does the formula mean? The situation was a bit like the situation that Newton faced when he had Kepler's laws to deal with, very accurate empirical laws but no physical foundation for them.

Well, of course, the foundation was laid by Niels Bohr in 1913. When he introduced rather heretical views into physics, he laid the foundations for modern quantum mechanics. He suggested, first of all, that the atoms exist in various stationary states. By classical theory they will radiate continuously. He said, no. They have various stationary states they can live in. They can jump from one state to another and give off the energy in terms of a photon. Over here, h is Planck's constant.

There was another postulate, namely that he was introducing a new type of mechanics and somehow, whatever he said, whatever he calculated over here would have to agree with the classical mechanics in the classical limit. Namely, he was introducing the quantum theory of atoms.

Now, the consequences of this were the planetary model of the hydrogen atom. The energy of the atom goes as 1 over n squared, where n is an integer. You can think of the radius of the orbit: it goes up as n squared. Most importantly, he actually computed the value of this Rydberg constant from other fundamental constants. It's a wonderful example of reductionism in physics. And at the same time it was clearly nonsense, which Bohr very well recognized. He said so in the paper: what he did over here was to lay the groundwork for a new physics, but his explanation was totally unsatisfactory. He took a bit of classical mechanics, imposed some quantum ideas on top of it, and came out with a formula which worked. And he said, this points the direction for a new type of physics. And of course, that's exactly what happened. Quantum mechanics was invented the next decade.
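Bohr's computation of the Rydberg constant from other fundamental constants is easy to reproduce today. A minimal sketch; the formula is Bohr's, but the CODATA constant values are modern, not figures from the lecture:

```python
# Bohr's result: the Rydberg constant follows from fundamental constants alone,
# R = m_e * e^4 / (8 * eps0^2 * h^3 * c), in SI units.
m_e  = 9.1093837e-31     # electron mass, kg
e    = 1.602176634e-19   # elementary charge, C
eps0 = 8.8541878128e-12  # vacuum permittivity, F/m
h    = 6.62607015e-34    # Planck's constant, J s
c    = 2.99792458e8      # speed of light, m/s

R_inf = m_e * e**4 / (8 * eps0**2 * h**3 * c)  # about 1.0974e7 m^-1
```

That one number, built out of constants measured in entirely different experiments, lands on the value spectroscopists had extracted from hydrogen lines.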

Well, he makes a very interesting comment in the paper on this. In the Balmer series, in the laboratory, you can see emission lines with values of n as high as maybe 15 or 20, but not much higher than that. He says the reason you don't see them higher than that is that these atoms are so large that they're going to collide and fall apart before they radiate. And the condition for seeing those atoms is that you must have a very low density, which means you must have a very large volume to look at. Very interesting statement. That was 50 years before radio astronomy was invented. But in fact, of course, in space you have exactly that: a very low density and a very large volume at which you can look. And in the 1960s, when radio astronomy was going strong, those lines were seen.

In fact, this is the first line. This occurs around a star when an electron moving around a proton radiates, gets captured, and then cascades down through these very high orbits. This beautiful line over here corresponds to the transition 110 goes to 109. It was discovered by Peter Mezger. I find this drawing fascinating because it's very difficult to do as well in our laboratory. Atomic physicists are rather arrogant with respect to astronomers because they can only look, while we can actually do things to our atoms. Nonetheless, the fact is it's very hard for us to compete with them. The other little lines over here correspond to carbon and helium. They're very small energy shifts due to the reduced mass, and it's all there very clearly.
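The frequency of that 110-to-109 line follows directly from the Bohr formula. A minimal sketch (the Rydberg frequency below is the modern value for hydrogen, not a figure quoted in the lecture):

```python
# Hydrogen recombination line, n = 110 -> 109, from the Bohr/Rydberg formula:
# nu = R*c * (1/109^2 - 1/110^2).
Ryc = 3.2881e15  # Rydberg frequency R*c for hydrogen, in Hz

nu = Ryc * (1.0 / 109**2 - 1.0 / 110**2)
nu_GHz = nu / 1e9  # about 5 GHz, squarely in the radio astronomers' band
```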

Well, at that point, which was in the early '60s, I started thinking about Rydberg atoms. It actually was Ed Purcell at Harvard who started me off in that direction because the atoms are so very, very peculiar. Namely, if you look at hydrogen in these very high lying states with the very large values of n, there are a series of simple scaling laws which tell you how things vary. And they vary rather spectacularly. The radius goes up as n squared.

Now, suppose n is 100. You're talking about a radius which is 10 to the fourth times larger than an atomic radius. This is practically macroscopic. The binding energy goes down as 1 over n squared. The transition frequency for delta n equals 1 drops, too: instead of the optical regime, you're in the microwave regime, like the line I just showed you. The cross section goes up as n to the fourth. It interacts very strongly with electric fields. The field you need to ionize the atom gets very, very small, and that turns out to be very useful because it's very easy to detect these atoms. You just ionize them. In the ground state, it takes a billion volts per centimeter to ionize an atom; for n equals 30, 1,000 volts per centimeter will do the trick. It interacts very strongly with magnetic fields. There's an n to the fourth scaling law there.

Also, they don't radiate. They live for a very long time compared to conventional atoms. The lifetimes go up as between n to the cube and n to the fifth. So you can have these atoms sitting around for a very, very long time. The fact that they don't radiate makes life very simple, but it also makes it very difficult. It's simple because you don't need to worry about them decaying before you've looked at them. But things which don't radiate don't absorb. And in the mid '60s, one couldn't make these atoms in the laboratory because there were no sources of radiation powerful enough to excite them.

Well, let me just point out one other feature of this. If we look at the scaling law of the radius and the transition frequency, we find Kepler's third law. Well in a sense, it's rather trivial. The reason, of course, is that we're talking about the same type of force, an inverse square law force. We'd also find the law of equal areas, if we believe in orbits, up here. That's just conservation of angular momentum.

So the persistence of the scaling law over here is just, if you like-- it's a coincidence due to the fact that the electric forces and gravitational forces both vary as 1 over the distance squared, if that's a coincidence.
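
In atomic units the Kepler connection is a one-loop check: the radius scales as n squared and the orbital period as n cubed, so period squared over radius cubed comes out the same for every n, which is Kepler's third law.

```python
# Kepler's third law recovered from the Rydberg scaling laws:
# radius r ~ n^2 and orbital period T ~ n^3, so T^2 / r^3 is constant.
for n in (30, 50, 100):
    r = n**2                  # radius, in units of the Bohr radius
    T = n**3                  # period, in atomic units
    assert T**2 == r**3       # the same for every n: Kepler's third law
```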

Nonetheless, these atoms are obviously very intriguing. And in the next decade, techniques became available for making them. We had lasers, which were powerful enough to excite the atoms. And once you can do that, you can make them quite easily.

Let me just show a typical scheme for making Rydberg atoms. Certainly a Rydberg atom is any atom with a very large value of n. No one seems to know how big it has to be to be a Rydberg atom. But I'll be talking about values of n of maybe 30, perhaps 100.

You make them in a vacuum. You have a little oven. If at all possible, you try to avoid hydrogen because hydrogen is a terrible atom to work with. Alkali atoms work very well. And when you get into Rydberg states, the differences between hydrogen and the alkali atoms are practically negligible in many cases.

So you have a beam of atoms coming along. We zap them with laser light to knock them up into the Rydberg state. We use the Bohr frequency relation but in reverse. Instead of the atoms radiating, they absorb. And they absorb resonantly. They go into a Rydberg state. And then one can detect the atom very easily by applying an electric field. The electric field ionizes the atom. Once you have a charged particle, it's simple to detect. Neutral atoms are beasts to detect. But the Rydberg atoms, since you can ionize them, are quite easily detected. So those are very powerful techniques.

Just as an example of a little bit of spectrum, here is a spectrum for, what, n equals-- here we're going from 100 to 117 over there. These are actual lines that we recorded at the laboratory quite a while ago. These little peaks on the line over there are due to the electric field. We were actually trying to measure the field. They're not very important. But you can see that the signals over there are quite large, and one can see them.

Okay. That's sort of a background. This is an intriguing new class of atoms. Everything is totally distorted-- the interactions with magnetic fields, with electric fields, with the radiation field are totally different from other atoms. We have new techniques for making the atoms and for detecting the atoms, so there should be some interesting applications.

So I'd like to talk briefly about a few of those applications. The dynamics of the vacuum. Well, that's a very interesting idea-- the fact that the vacuum has physical properties. It's commonly accepted in physics. We don't think much about it, but if you're not a physicist, the thought that empty space is really not empty is kind of an unsettling thought, and I just wonder what Kepler, Galileo, or even Newton would've made of that.

I'd like to talk a bit about order and disorder, and I'd like to talk about my old love, namely precision measurements and perhaps clocks. Now, according to the Bohr theory, when atoms are in excited states, they can jump to another state. The Bohr theory doesn't tell you how it jumps. The foundations for that were laid by Einstein, who pointed out the properties of what he called "spontaneous emission." Namely, every excited atom is going to radiate at some rate which depends on the structure of the atom, which he could not calculate at the time. But he laid out the foundations. And this process, called "spontaneous emission," is a fundamental process. It's the fundamental irreversible process. Namely, if you take any atom and excite it, it's going to radiate down to its lowest level. You don't do anything to it; it does this all by itself in free space.

Now, irreversibility is an important concept in physics, chemistry, and other areas, too. It is often due to the fact that you have large numbers of particles interacting. But this is the fundamentally irreversible system, namely, one atom sitting all by itself. Except it isn't sitting all by itself. It's sitting all by itself in space.

The fact that these atoms decay causes problems. It means that the atoms, of course, have a lifetime which is 1 over the decay rate. So you only have a certain amount of time to look at the atoms. It creates noise in quantum measurements. When you track down the ultimate source of noise in any measurement device, it comes down to the spontaneous emission. So it's the ultimate source of noise in measurements. And at the bottom, I've just said it's a nuisance because we'd rather not have it occur.

Now, when you limit the lifetime of a measurement, the uncertainty principle introduces a spread-- an uncertainty-- in the energy. And if you're looking at the energy radiated, the uncertainty in energy is reflected in an uncertainty in the frequency. If you look at a spectral line-- and here's a crude drawing-- there's the intensity of the line as a function of frequency. It has what we call a "natural line width." It may be very, very narrow, but it's always there. And that line width over here is inversely proportional to the lifetime, which is to say directly proportional to the decay rate. It's essentially the spontaneous emission rate.

Suppose we could prevent the atom from radiating. What would it be like? Well, I set out to do that a long time ago for a reason which I realized even then was self-defeating. Namely, there was one line we'd like to measure very, very accurately in hydrogen. It's the transition between the 2s state and the 2p state. Let me just say that it is a line which is hard to measure because the 2p state, the first excited state of hydrogen, decays very, very rapidly. The difference in the energy is called the Lamb shift. It's very important in physics. It's purely quantum electrodynamic in origin. It's an important test for quantum electrodynamics. And it's very difficult to measure it because of the very short lifetime of this excited state of hydrogen. The question is, could you prevent that excited state from radiating? The answer is, yes. And you can do it by cutting the atom off from the vacuum. And the way you do that, as I'll explain, is you put it in a very, very small cavity.

Now, I proposed this in passing in some lecture notes in the late '60s. I pointed out, though, at the time, that it really wouldn't help at all because by the act of putting the atom in the cavity, you change the Lamb shift. And this is one of these things which is actually quite profound, but I didn't realize just what I was saying at the time. Because what it really means is that the concept of an atom just sitting by itself in empty space is a metaphysical concept. It interacts with the space. If you try to change the nature of the vacuum-- if you try to change the nature of space-- you've changed the nature of the atom.

Nonetheless, the thought of cutting off spontaneous emission was intriguing to me. And with Rydberg atoms we could do it. Here's how it's done.

We have a source of atoms. It's an atomic beam, and it travels some distance. And it may take of the order of a millisecond or so to get from here over to there. We excite the atoms with a very short pulse of laser light over here and measure the arrival times down here. The very fast ones come soon. The slow ones come later. And we get what we call a "distribution curve."

Now, if the atoms are radiating spontaneously as they travel this way, and you choose a system where the spontaneous lifetime is about equal to the transit time, then you're going to lose most of them, and the number that you get there is very small.
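
The loss is just exponential decay over the flight time, N/N0 = exp(-t/tau). A one-line sketch; the millisecond numbers simply echo the transit time mentioned a moment ago, and the factor of 20 anticipates the inhibition reported for the actual experiment:

```python
import math

def survival(transit_time_s, lifetime_s):
    # Fraction of excited atoms still excited after the flight.
    return math.exp(-transit_time_s / lifetime_s)

free_space = survival(1e-3, 1e-3)   # transit ~ lifetime: ~37% survive
inhibited = survival(1e-3, 20e-3)   # lifetime stretched 20x: ~95% survive
```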

How do you prevent them from radiating? Well, you cut them off from the vacuum by putting them in a cavity. In this case, you could think of it as a waveguide beyond cutoff if you know about waveguides. Otherwise, let me just say that if you put metal plates nearby, they can't radiate. And so one puts metal plates nearby, and one sees that the atoms live. So in this region, the atoms are really very different from the way they are in free space. They simply are not radiating. And we did the experiment. It worked fine.

Here is one drawing to show a little bit more detail. There's a time-of-flight spectrum that we have over here-- fast atoms coming, slow atoms coming. This would be calculated if there's no radiative decay. What I put here is a radiative decay curve. The atoms are decaying at that rate. So what you actually see is under this red area here, which is much smaller than that. By measuring the ratio of this area to that area, you can tell whether or not you've turned off spontaneous emission. And in fact, here is the original data from this experiment. And this curve over here, this increase in the curve, says that, in fact, spontaneous emission is inhibited. The lifetime is at least 20 times longer than it would be in free space.

Well, that was a curiosity at the time. It seemed rather novel. At this point, I think it seems rather commonplace, but it points the way to just the opposite experiment. Namely, instead of having the atom cut off from free space, you can make it interact very, very strongly. And the way you do that is by putting the atom into a cavity.

Now, I have a little demonstration over here. I've been talking rather glibly about atoms interacting with the vacuum. We have to picture free space as a place where you can have radiation, where you can have waves going by, any direction, any frequency. But if we look at one of those, the atom can interact with that wave, and it can exchange energy with the wave. And this little demonstration over here shows very much what goes on. Here is a pair of what we call "coupled oscillators." You notice that the energy is leaving this one and going over to that one, and it's coming back, a very well-known phenomenon. We always teach it in freshman physics and beyond.

Now, what's happening in free space is that we have lots of these oscillators. If I were to have a large demonstration with three, four, five of these balls hanging, and I set this one going, it would make that one go. The energy would go there and get transferred down and move back and forth through it. Eventually it might get back here. But as I put more and more of those in there, the energy would never get back here. And this is an analog for spontaneous emission. Namely, there are so many modes with which the atom can couple, the radiation leaves, and it never comes back.
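
Numerically, the two-ball demonstration looks like this. The model below is my own toy, not anything from the lecture: two unit-frequency oscillators joined by a weak spring, with all the energy started in the first one. The energy drains entirely out of the first oscillator and then returns.

```python
def energy_slosh(n_steps=200_000, dt=1e-3, k=0.05):
    # Two identical unit-mass, unit-frequency oscillators coupled by a
    # weak spring of strength k; all the energy starts in oscillator 1.
    x1, v1, x2, v2 = 1.0, 0.0, 0.0, 0.0
    min_e1 = 0.5                      # oscillator 1's starting energy
    for _ in range(n_steps):
        a1 = -x1 - k * (x1 - x2)
        a2 = -x2 - k * (x2 - x1)
        v1 += a1 * dt; v2 += a2 * dt  # symplectic Euler step
        x1 += v1 * dt; x2 += v2 * dt
        min_e1 = min(min_e1, 0.5 * v1**2 + 0.5 * x1**2)
    return min_e1

# energy_slosh() comes out near zero: at some moment essentially all
# the energy has left oscillator 1.  With one partner it comes back;
# with many partners, as in free space, it effectively never does.
```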

Now, the experiment I just showed you was one where we just cut off from all the modes, and the energy never left. But what we can do is let it interact simply with one mode by putting the atom into a cavity which is tuned to that mode. If it's a very good cavity, the energy just moves back and forth from the atom to the cavity and from the cavity back to the atom.

And we started a series of these experiments where we have atoms flying along over here. They're going through a cavity, interacting with the cavity. And then we can detect the final state of the atoms. And what one sees is that, indeed, the atoms exchange energy with the cavity. What we're doing here is measuring the atoms in the two states. Here is the upper state, and there's the lower state. They leave this state and go into that state, and they swing back and forth. This is not one atom. This is several hundred atoms. One would very much like to do it with one atom. One would, but one didn't. At least this one didn't. But experiments with one atom in these microwave cavities have been done. The first ones were done by Herbert Walther at Munich.

In order for this to work very, very well, you need a really super cavity. And we never got the [INAUDIBLE] which was high enough. Nonetheless, you could very well see the effect of the atoms spontaneously interacting with the cavity and exchanging energy back and forth.

Well, what's it all good for? It's good for a lot. But there was a very beautiful demonstration of these ideas carried out recently in the optical regime rather than the microwave regime, by Michael Feld and Kyungwon An, that I'd just like to show you: the single-atom laser.

Here the same ideas are being applied with an atom interacting with the cavity but not in the microwave regime. This is ordinary optical radiation. One has a stream of atoms coming along. One excites them with a laser beam. They go into a cavity. In that cavity-- the cavity is basically empty. There's nothing there. An atom may spontaneously radiate into the cavity. If it does, the photon it gives up does not leave. It rattles around. If you have enough atoms, it'll stay until the next atom comes in and makes that atom radiate faster. And eventually, all the atoms, as they go through the cavity, radiate into the cavity even though at any one time, there's only one atom in the cavity or less.

Now, this is a laser, but normally lasers have millions or billions of atoms interacting all at once, all contributing to the radiation field. This is a very different type of device with only a single atom in there. And one can see the lasing action over here. Large N tells you the average number of atoms in the cavity. And this is the radiation power in the cavity as you tune the cavity. This little peak over here just corresponds normally to spontaneous emission. As you increase the number of atoms, the peak gets larger because on the average, there are more in there. Suddenly, as you go up here, this gets huge, and that's lasing action taking place.

So one has a device with single atoms interacting with these fields. And there is a new physics and a new technology growing up with these devices because the kind of field these atoms radiate is different from the normal fields that we make. What we have are what are called "quantum mechanical entangled states" of the atoms in the radiation fields. These can be used for, possibly for quantum computation, for quantum measurement theory. It's a fascinating area, but let me leave it and talk about just the reverse of this.

Order and disorder. Chaos and quantum mechanics. These experiments currently are being done by Neal Spellmeyer. Hong Jiao recently got his degree on it. The reason that I find this fascinating when I think about Kepler, Galileo, and Newton, is that, of course, the whole goal of these founding fathers was to discover the order in the universe, and the planetary system, the solar system, was used as a paradigm of orderly behavior.

Well, I was quite shocked a couple of years ago to learn from Jack Wisdom here that in fact, the solar system is not regular at all. It is chaotic. The motion is very irregular. What we're looking at over here is the obliquity of Mars. I think he chose Mars-- well, he says it was just because of a particular resonance in it. But this was the same planet that Kepler went to war on trying to figure out its orbit because it's so elliptical. But anyway, if you look at the angle between the spin of the planet and the normal to the orbit over here, which on Earth is about 23 degrees, it varies between about 60 degrees and 10 degrees. It's a huge variation. And it's varying over time, and this time is rather long. This is a billion years. But it's in this very random and chaotic fashion.

Well, what do we really mean by chaos? There's one example of it. It means that it's unpredictable. I have a little demonstration over here which a lot of people have seen but maybe not everyone. This is another pair of coupled oscillators. These are coupled oscillators. Here are two links over here. I can let it swing like that. It swings very nicely. That's one of the normal modes. There's another mode over here if I let it go like that. That's the antisymmetric mode. Those are the two normal modes. And these are the things we ask students to calculate on examinations because you can calculate that.

We rather neglect the fact that there are also modes which are not normal like that.

[LAUGHTER]

I advise you all to watch that very, very carefully, because it's never going to do that again.

[LAUGHTER]

So this is a unique opportunity. It seems to have life of its own. It's really quite intriguing. It's losing a little bit of energy to the table over here. But if we let it go-- in fact, I was going to try the exercise of hypnotizing the whole audience because it really is rather hypnotic.

[LAUGHTER]

But without defining chaos, I can tell you that's what it is. And I can give you a working definition of it, too. Namely, if you try to grab this thing while it's moving like that, you're really likely to get your knuckles rapped, whereas when the motion is regular, it's very easy to pick it up.

Well, chaos assumed its modern importance really with the work here by Ed Lorenz in the '60s when he realized the significance of this, first looking at weather systems. Then it was realized it's a much more common phenomenon than that.

The existence of this type of motion was known by Poincaré at the turn of the century. But he didn't have the right tools, namely modern computers.

So now there's a lot of interest in chaos, but the question is, what happens to quantum mechanical systems when they go chaotic? First answer is, nothing at all, because this is a purely classical motion, but let me give you an example of the sorts of things which can happen.

These are energy levels of a Rydberg atom of hydrogen in a magnetic field-- no, in an electric field. And without going into explanations, it's all really rather orderly. This is what we call an orderly spectrum. If I look at an atom which is perturbed a lot-- and this is lithium, which happens to have what we call a "large quantum defect"-- you can see that the energy levels over there look totally different, and they reflect something different about the quantum-- about the classical mechanics of the system. Of course, you can always solve the classical motion of an electron moving around a proton in any field that you want. It's just that that type of solution has not much to do with atomic behavior. Nonetheless, there is a correspondence over there.

Let me show it to you in another context, the one we've studied it in mostly. Here is a rather complicated diagram of the energies of Rydberg states of-- let's call it hydrogen-- in a magnetic field. In this region, these energy levels-- notice they cross each other beautifully. It all looks rather orderly, and in fact, the system is. When you get out over here, things look rather disorderly, and in fact, the classical motion is disorderly also.

We can see that just by-- well, first of all, looking at the spectrum way out there. Here's a little bit of a spectrum in a disorderly region. Notice there are sort of energy levels in here. Incidentally, what I'm showing you is a scan of energy over here and magnetic field over here, and these horizontal peaks are signals from the atoms. When we plot our data this way, you can actually see the energy levels growing. You can see them get large. You can see them get small. They interact. It's all kind of a mess. Here's some crossing with the other ones. There's nothing orderly about that experimental spectrum.

But let's go back and look in the orderly regime at the classical motion. And here is a nice picture, which was-- this was plotted by a freshman, Ian Schecter, who knows how to do these things. And Neal Spellmeyer gave him a lot of help in showing him what to plot. This is a picture of an electron moving around a proton in a magnetic field. It's tilted towards us. But this is really sort of a large flower. It's a very nice type of motion. If we let the thing continue to go, the shell would kind of get filled in, and we'd have something like a large football. It's kind of pleasing. It's really quite orderly.

But if we change the energy just a little bit, then what we find is a rather different type of trajectory. Here is the trajectory. The nucleus is over here. Maybe I have it upside down. It's even hard to know which way it goes.

[LAUGHTER]

But you can see these things are going all over the place. They loop very far out. They come back in here. It's a real mess. This is an example of chaotic motion. So the classical motion of this system is really chaotic.

What does that mean for quantum mechanics? It's a provocative question. Oh yes, let me characterize the whole system. This is a diagram made a few years ago, again, with the help of my flower drawer over here. Energy, magnetic field-- we have a region over here, which is an orderly region. There's one where things get kind of mixed up. And here we have a region of hard chaos, and this is labeled "no person's land" up there. Because I'm an ombudsman, I'm very sensitive to the language that we use.

[LAUGHTER]

And this is now a few years old because now we know a lot about what's going on up there. Let me show you. Kepler would have enjoyed this because here is some data, just a sweep of the spectrum. Going up, this is a positive energy and a high magnetic field. You have a lot of lines over here, and there's nothing very pleasing about this at all. One can't see any order in that. If you blow this up, there are so many more lines in there that you can't plot them all. It's rather discouraging when you just look at it.

But in fact, in this, we have a lot of information about the classical motion of the system because even when a classical system is chaotic, there are periodic orbits in it. Namely, there are orbits which repeat themselves over and over again. They're very unstable. If you just try to find one of those by launching particles in the computer and seeing if they come back, you won't find it in general. They're rather hard to compute, but they do exist. Poincaré made a very elegant statement about the importance of these periodic orbits in chaotic systems, because they're the only thing you have to get hold of, the only thing with which you can talk about the system. They provide a framework for any type of understanding you'll have.

So the classical periodic orbits are very important. And the interesting point is that this quantum spectrum can tell us what they are. Now, the way you do that is a little bit complicated. I don't want to go through it. But I want to show you some of the results.

This is actually not for an atom in the magnetic field but in an electric field. What you do is you-- if you like, you filter that spectrum properly. You do it by-- it's a type of Fourier transform using the right variables. And what you see is a much simpler looking spectrum. We have a whole series of peaks over here. This is a lithium atom in an electric field. We have peaks over here which we've labeled. All of these correspond to one of the classical periodic orbits in the system. There's another set of peaks which corresponds to another. And there are peaks in here which we haven't labeled yet, which correspond to still other classical periodic orbits in the system.
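
The flavor of that Fourier step can be caught with a toy spectrum of my own construction: let each hypothetical periodic orbit modulate the spectrum sinusoidally at its own frequency, and the transform turns the tangle of lines back into one peak per orbit.

```python
import math, cmath

def recurrence_peaks(orbit_freqs, n=512):
    # Toy "spectrum": one sinusoidal modulation per periodic orbit
    # (integer frequencies, for simplicity).
    spec = [sum(math.cos(2 * math.pi * f * k / n) for f in orbit_freqs)
            for k in range(n)]
    # Plain discrete Fourier transform magnitude, first half only.
    mags = [abs(sum(spec[k] * cmath.exp(-2j * math.pi * j * k / n)
                    for k in range(n)))
            for j in range(n // 2)]
    # The largest peaks sit exactly at the orbit frequencies we put in.
    top = sorted(range(n // 2), key=lambda j: -mags[j])[:len(orbit_freqs)]
    return sorted(top)

# recurrence_peaks([37, 80, 150]) returns [37, 80, 150]
```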

This connection between classical periodic orbits and the quantum spectrum was established by Martin Gutzwiller in the 1970s and worked out to be useful for atomic physicists by John Delos, and we know how to do it right now. We can look at the quantum spectrum and learn about the classical mechanics of the system.

Just to show an example, which I won't describe too much except to show you that we can do it-- what we have are a whole series of spectra at different energies over here. And these red dots correspond to the point where new classical orbits come into existence. We have a quantum signature for it.

Now, this game that we are just learning how to play right now is a very intriguing one because actually, you cannot find many of these orbits classically. You might think that classical mechanics is all cut and dried. If we have the equations of motion, we can integrate them. We can find any orbits that we want. But if you're looking at long period orbits, which are very sensitive to the initial conditions, and looking at a particle which may be going way around like that and eventually coming back here, you can't find it because it's so unstable that even the smallest tweak of the initial direction, for example, will send it careening off. So you can't hunt them down.

Quantum mechanics can do it for you, and here's how it works. When you excite an atom in an experiment like ours with a laser, what you're doing is creating a wave which goes out, according to the laws of quantum mechanics, over all possible paths. And this is the language of Richard Feynman who first, to my knowledge, talked this way. It explores all of space. It goes along all possible paths.

Along certain paths, the wave comes back and reinforces at the nucleus, and those are the classical paths, the allowed classical paths. In a sense, the atom's like a supercomputer for you because it's exploring all of these paths at once. And as you take the spectrum, all that information is in the spectrum. And now we can unfold the spectrum. At least we can start to unfold the spectrum and learn about the classical mechanics.

So we have a rather interesting case here where we're using quantum mechanics to learn about classical mechanics. The question is, is that a sensible thing to do? It's generally thought that quantum mechanics is really very complicated, and classical mechanics is all very simple. It's just the other way around. When it comes to disorderly motion, classical mechanics is infinitely complicated, and quantum mechanics is infinitely simple. And quantum mechanics can do the work for us. So that's one of the things that we've been learning.

Okay, I'd like to conclude by talking about what I've called "precision measurements." Two experiments-- one measuring the Rydberg constant with Rydberg atoms, same term-- "Rydberg atoms" or "atoms [INAUDIBLE]. The Rydberg constant is the constant that goes in Rydberg's formula, or Balmer's formula. Incidentally, the reason it's called Rydberg's constant rather than Balmer's constant is that Rydberg was a professor, and Balmer was a high school teacher.

[LAUGHTER]

Let me show a little history-- I thought I had it right here-- of the-- oh yes, here we go-- the precision of the measurements of the Rydberg constant. You could always measure it pretty well. Spectroscopic things generally could be measured well, to a part in a million or so. Here we're starting about 1920. The precision of the measurement is almost one part in 10 to the seventh. It gradually improved. Laser spectroscopy came in. It got much, much better. This plot was made a couple of years ago, and since then there have been even better measurements. It's now about a part in 10 to the 11th. One can measure this constant to this type of precision.

That's the precision of the measurement. That means how accurate-- well, that means how precise you think it is. Accuracy is another matter. Let me show you.

Here are plots of recent measurements. We're just plotting the value of the Rydberg constant over here. The last official compilation of the fundamental constants in 1986 had a value of the Rydberg constant over here. Here are a whole series of measurements since then. They are in very good agreement with each other. And they disagree with that measurement by about two or three standard deviations.

This type of disagreement is not at all uncommon in the history of measurements of fundamental constants. In fact, it's a whole history of finding errors and correcting them. So the fact that all of these things over here are in good agreement does not mean that much better measurements will still agree with them.

One of the problems here is that all these measurements have something in common. Namely, they all are looking at about the same types of spectral lines. These are conventional spectroscopic-- well, I shouldn't say conventional-- the laser spectroscopy of the hydrogen atom.

Now, there's a new type of spectroscopy which you can do, which we are doing in the lab. And what we're doing is looking at transitions between the Rydberg states of the atom. We're going from 29 to 30. That's a perfectly good interval. We can measure that frequency directly. We can generate it from an atomic clock. And we can determine the energy separation. From the energy separation, we can get the Rydberg constant. So that experiment is going on right now in the laboratory, and we're doing it for a couple of reasons.
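
The interval itself follows from the Rydberg formula. Here is a one-line arithmetic sketch; the value of cR is a standard modern one, not a number quoted in the talk:

```python
# Frequency of the n = 29 -> 30 interval in hydrogen:
# f = cR * (1/29^2 - 1/30^2), with cR the Rydberg frequency.
C_RYD_HZ = 3.2898419602e15     # cR, standard modern value

f_29_30 = C_RYD_HZ * (1 / 29**2 - 1 / 30**2)
# ~256 GHz: a millimeter-wave frequency that can be synthesized
# directly from an atomic clock and compared with the atoms.
```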

One is it's so totally different from all the other experiments that it's just very valuable to have it. Another one is that eventually the resolution is going to be higher than in the other experiments, and we'd like to know it better. Now, I'll explain in a couple of minutes why we want to know it better. Let me just show a little bit of data from that experiment because we've spent a couple of years rebuilding, and just in the last month or so, we've got it going again.

We have over here an interference fringe. This is the Ramsey separated oscillatory field method. Oh, did I jump by the experiment? I probably did. Let me show how it works. It's an atomic beams experiment where we take a beam of now hydrogen atoms. We excite them to the state we want, and that's a song and dance. But then they go into this region where they interact with the radiation field. We start in the [INAUDIBLE] 29 state. We end up in the 30 state.

I mentioned it's very easy to detect Rydberg atoms by electric fields. You can not only detect them. You can tell what level they're in. So you can detect these two states separately. And we radiate them in the center with radiation at just the right frequency.

Now, to measure things very well, you want to look at them for a long time, by the uncertainty principle. So you make this distance just as large as the budget will allow. It would be very difficult to apply the radiation the whole distance down there, so we use the method-- which was invented by Norman Ramsey who is sitting somewhere over here, so he'll be familiar with it-- the separated oscillatory field method where we apply it here and here in coherent beams. And the result of that is an interference fringe which looks like this.
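
The payoff of the long flight path is the usual uncertainty-principle bargain: the central Ramsey fringe has a width of roughly 1/(2T), where T is the flight time between the two field regions. The geometry numbers below are illustrative guesses of mine, not the experiment's actual dimensions:

```python
def ramsey_fringe_width_hz(separation_m, speed_m_per_s):
    # Central fringe width ~ 1 / (2 T), with T the flight time
    # between the two separated oscillatory-field regions.
    transit = separation_m / speed_m_per_s
    return 1.0 / (2.0 * transit)

width = ramsey_fringe_width_hz(0.5, 1000.0)
# 1000.0 Hz: kilohertz-scale resolution from a half-meter separation
# at a typical thermal-beam speed.
```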

Now, this is about a 1 kilohertz resolution, so just looking at that line by itself would tell you the Rydberg constant about as well as it was known at the last compilation. But now we're going to get to work and find out really where the center of the line is. So we will have a new measurement and a more precise measurement of the Rydberg constant.

I want to talk about one final measurement in hydrogen that doesn't use Rydberg states at all. It's called two-photon spectroscopy. This is done with my colleague, Tom Greytak. And it uses techniques we've developed for cooling and trapping hydrogen atoms. The great goal of this is Bose-Einstein condensation, but we'll save that for another day. I put down here-- the goal is to determine the proton radius.

Well, it turns out if you really measure the energy levels very, very well in hydrogen, your theory runs out of steam due to the fact that you don't understand the size of the proton. The effects are rather small, maybe of the order of a part in 10 to the 11th to a part in 10 to the 12th. But that's the leading uncertainty in there.

So I think the result of these measurements is going to be a better value for the proton radius. What will that do? Well, hopefully it'll push some theorists into computing it because they should be able to compute the size of the proton. What could be simpler than that?

[LAUGHTER]

It's also to push the development of optical frequency metrology, and perhaps to develop a better type of clock. So let me show you just briefly how that works. We have hydrogen atoms which are-- I've just called them cold trapped hydrogen. Now, it's a long song and dance how you do that, but we have these atoms which are sitting here, and they're sitting almost at rest. This is totally unlike any other spectroscopic target of hydrogen before. The accuracy of your experiment depends on being able to look at the atoms for a long time. If they're moving fast, you can't look at them for a long time. These are at rest. We can look at them practically forever. We send in radiation over here. We go from the n equals 1 state to the n equals 2 state in a transition which absorbs two photons. This is a forbidden transition, which means it takes a lot of power to drive it.

For the same reason that it's forbidden to absorb, it's forbidden to radiate, which means the lifetime is very long. It's about a seventh of a second. The other n equals 2 state has a lifetime of two nanoseconds. That is a line with a width of hundreds of megahertz. This is a line with a width of a few hertz, which means you can get fabulous precision on this line if you know how to do it. And in order to do it, you need a couple of things. You need atoms at rest, which we have. You need very good lasers, which we're getting. And you need some way to measure the laser frequency accurately, which we're worrying about.
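The connection between lifetime and linewidth quoted here follows from the standard natural-linewidth relation. As a rough sketch, using the round lifetimes given in the talk:

```latex
\Delta\nu = \frac{1}{2\pi\tau}
\qquad
\begin{aligned}
\text{short-lived } 2p \text{ state:}\quad & \tau \approx 2\,\text{ns}
  &&\Rightarrow\quad \Delta\nu \approx 8\times10^{7}\,\text{Hz} \ \text{(order 100 MHz)}\\
\text{metastable } 2s \text{ state:}\quad & \tau \approx \tfrac{1}{7}\,\text{s}
  &&\Rightarrow\quad \Delta\nu \approx 1\,\text{Hz} \ \text{(a few hertz)}
\end{aligned}
```

The eight-orders-of-magnitude difference in lifetime is exactly why the forbidden 1s to 2s two-photon line offers such fabulous precision compared with the ordinary allowed transition.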

Here are some of these results. These are very recent results in the past few months. What we're looking at is a signal for the excitation of the 2s atoms as we sweep the laser over here, and the central fringe over here is 2 kilohertz wide. That's 2,000 cycles in lasers which are oscillating at about 10 to the 15th hertz. So it represents a very, very high resolution. This is higher than the best which had been gotten previously, but in fact, we think that all of this width over here is just due to our lasers jittering around, and as we make the lasers narrower, that line will get narrower and narrower.
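To put that fringe in perspective, the fractional resolution is simply the fringe width divided by the optical frequency. Using the round numbers quoted here:

```latex
\frac{\Delta\nu}{\nu} \;\approx\; \frac{2\times10^{3}\ \text{Hz}}{1\times10^{15}\ \text{Hz}} \;=\; 2\times10^{-12}
```

A resolution of a couple of parts in 10 to the 12th is the scale at which the proton-size effects mentioned earlier start to matter.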

Now, why do we want to know this? A couple of reasons. If we put together this frequency measurement with the Rydberg measurement, we get two things. We get a better value for the Rydberg constant. We also get a better value for the Lamb shift in the ground state, and turning that into English, it says we get a better value for the proton radius. So there's more metrology to be done on hydrogen.

But we think the accuracy of this is [INAUDIBLE] leave theory far behind. I mean, that happens. You can make atomic clocks, which are good to a part in 10 to the 12th when your theory's only good to a part in 10 to the sixth. That's about the case with a hydrogen maser. Nonetheless, it's still worthwhile. These atomic clocks have been useful devices. As Larry explained, the global positioning system uses these clocks. You might wonder, why do you need a clock 1,000 times better? You don't need to know your position 1,000 times better than you know it right now. I think the answer to that is, we don't really know why. But invariably, when you get a new technology with much higher precision than any that's been gotten before, you find interesting new applications for it. And I'm sure that will be the case over here.

We didn't set out to make an atomic clock. In fact, it never occurred to me that this would be at all useful until we started looking at what we have here. The reason it's a good potential candidate is the line is quite narrow, and we have lots of counts. There are other ways to get narrow spectral lines, such as stored ions. But you just get a very small count rate, a few photons a second. This thing can give out millions or billions of photons per second. And that high signal to noise ratio makes it very useful as a candidate for the clock.

Well, I've mentioned the people who are working on this now. I would like to talk about the former students and postdocs. I won't read them off here; a large number of students have contributed to the work that I've shown. I would like, however, to show the students who are working right now. I gave you their names, and here are their faces.

[LAUGHTER]

After awhile--

[LAUGHTER]

Let's see. Forward. Yes, well there is Robert Lutwak, Jeff Holly, and Joel DeVries, who were working on the Rydberg constant experiment. And this slide that I showed over here-- I wanted to show it because that pink glow is the Balmer line. This was first seen as an absorption line in the spectrum of the sun by Joseph Fraunhofer in 1817. This is the red line that's the characteristic line of hydrogen. That, to me, is sort of the symbol for hydrogen, and hydrogen is the symbol for physics. And physics is the symbol for all that's good.

[LAUGHTER]

Neal Spellmeyer and Hong Jiao. Hong's a little bit worried about having his picture taken here because he actually got his degree last fall. Neal is working on-- this is the quantum chaos experiments. And here is Tom Killian and Adam Polcyn and Dale Fried who were on the trapped hydrogen experiment with my colleague, Tom Greytak. And Tom was off teaching at the time I was taking pictures, so I didn't get a picture of him. But those are the people who've been doing all the work.

Now, I promised-- or threatened-- a little epilogue, so let me tell you what's on my mind. What's on my mind is Kepler. Here's a picture of him a few years before he died. This was about 1627.

Kepler led a miserable life. It was miserable personally. He came from a family of sociopaths or maybe psychopaths. An aunt was hanged as a-- I guess, burned as a witch. And his mother was almost burned as a witch. His father was almost hanged. One of his brothers was a ne'er-do-well. It was a family of misfits. He himself had poor physical health. He had very poor sight, which made it very frustrating for his attempts to do astronomy, and his whole career was plagued by poor health. He talks of having to walk from one town to another because he couldn't sit on a horse.

Now, that type of difficulty was not that uncommon. Lots of people had problems like that. But he had a miserable life, I think, for other reasons, too. He had none of the sources of support that we expect to do science. There were no instruments around. The telescope was invented late in his career. He tried to get a hold of one. Galileo promised to send him one and never did, but he managed to borrow one for a few weeks. And in the course of those few weeks, he wrote the first book on optics and set out the theory for optics. So he created this new theory for optics. He spent his life computing things obsessively. At one point in the Astronomia Nova, he made a mistake, and to correct it-- they have the records-- 900 pages of calculations done all in his hand.

No computing help at all. Logarithms were invented around 1614. He learned about them in 1622. In 1624, he wrote the first book on computing logarithms. So he sort of invented the technology for doing computing. Well, we take for granted the modern tools that we have right now. But I think we ought to think of how much could be accomplished without those tools and how much we owe to the people who developed those tools.

But what concerns me about Kepler is more than that. What concerns me is the intellectual tools that were denied to him. He didn't have the calculus. He was before Descartes. He anticipated some of the calculus. But he was trying to solve these problems without any of the mathematics that we take for granted. So he was impoverished in those tools also.

He was also intellectually impoverished in being embedded in the garden of Hieronymus Bosch, not in the modern world. He pursued astrology his whole life. He didn't believe in it. But the reason he didn't believe it was he thought the planetary tables really were all wrong, and unless you had good planetary data, you couldn't expect to do decent astrology.

[LAUGHTER]

He was subject to all sorts of indignities that we would be familiar with. He had to spend a lot of his time doing things that he didn't want to do, like casting horoscopes. Well, we have to write committee reports, write reviews and such. He was constantly being abused by his benefactors, by his patrons. He published his books at his own expense. In some cases, he bought the paper, took it to the printer, had the books printed, and took the books to the bookshops on his own. When we worry about publication problems these days, we're not worried about problems like that.

But those problems are still minor compared to the problems which I feel most regretful about for him. Namely, he lacked an intellectual community. Knowledge was considered sacred in those days. It wasn't shared. The data of Tycho Brahe was kept under lock and key, and after Tycho died, Kepler literally stole it. He stole the data on Mars in order to work out the orbit of Mars. People wrote their results in anagrams so they could claim priority for finding a result without having to reveal their secrets.

The climate was a climate of-- really, of paranoia. Somehow in the years following his death, that climate changed. Newton, in a sense, was a rather paranoid person. Nonetheless, he published his works. He helped found the Royal Society. He got into bitter fights with people over priority, but there was communication back and forth. And that was denied to Kepler. And it's for that reason that his garden of delights, if you can call them delights, is very different from our own.

And what has made me think about these thoughts is this occasion here, because we're so fortunate to live in a community where we do exchange ideas with each other and listen to each other, where science is a joyful occupation in which you like to learn new things and tell people about new things. We take that for granted, but it wasn't always that way. And we should be very grateful for that. We should appreciate it and do everything we can to sustain it. And I particularly appreciate it on an afternoon like this, when all of you have taken the trouble to come and listen to me talk about these issues. Thank you.

[APPLAUSE]