Steven E. Koonin, “Supercomputer Visualization” - LNS46 Symposium: On the Matter of Particles
- It's again my pleasure to make the next introduction. We've talked about our students. I think the most important product of this institution, as many people have said, is our students. Of course, they couldn't do as well, and we couldn't do as well for them, if research weren't our main preoccupation. Steve Koonin was one of our students, and I'm happy to say he did his PhD with me.
Got it in 1975, if I remember. By the time I let him go, we had published four or five papers, which is not typical for your average PhD student. Since then, he's gone back to Caltech, where he came from, and is a professor of physics there. He's involved in many national issues and happens to be a member of the Townes panel, whose results many of us have spent a lot of time agonizing over. And he's also been on many other important national committees that have made decisions.
Welcome back, Steve. And have you got--
STEVEN KOONIN: I'm miked, I think. Is the microphone on? Yeah.
Well, we've heard a number of speculations today about why the LNS chose to throw a big party for its 46th anniversary. I'd like to add another speculation to the list. During the cold fusion flap a few years ago, I talked to Rich Petrasso at the Plasma Fusion Center here. Rich believes-- and I guess I agree with him-- that it's no coincidence that 46 is not only the age of the Laboratory for Nuclear Science, but also the atomic number of palladium.
A month or month and a half ago, when I was talking to Lee Grodzins about what I might tell you about in this talk, he gave the guidance that I should reminisce a little bit, and then talk about some physics that's germane to LNS. As far as memories go, I'd note that my formal connection with the LNS through the CTP-- I was Arthur's student-- really only lasted for three years, from 1972 to '75. And so I don't have much of a stock of anecdotes to tell you about.
I would remark that during those years, I learned an awful lot from a number of very good teachers. And I still enjoy interacting with them today, usually as colleagues. As far as physics goes, I thought what I would do in this talk is to pick up on a theme that I started when I was a graduate student. Show you, or remind you of, some of the things we did then, and then give you some sense of where it's gone and what some of the prospects are in this business.
The general theme is the use of the supercomputer-- or large-scale computing, generally-- to understand nuclear and hadronic many-body phenomena. It's by now almost a truism that computing has really changed the way the world works, and that's particularly true of the way in which we do research.
In nuclear physics, and many-body physics, we're often confronted with the fact that we know the theory, but we don't know the solutions. So here, it's not perhaps like the drought that currently exists in particle physics where people feel that there are no new phenomena to study. Here, there are plenty of phenomena.
It's not the Lagrangian or the Hamiltonian we're looking for, but rather the new qualitative behavior that appears in the solutions when there are many degrees of freedom. So although we might know Newton's laws for a system of many classical particles, or the Hamiltonian for a many-body system, or the Lagrangian for QCD, we have no way of directly solving the equations.
Furthermore, at least for nuclear and elementary particle physics, we have no direct visualization of what happens. And so one can therefore resort to the use of large-scale computing, supercomputing, to try to solve these equations, and get some feeling for what's going on. And of course, as always, when you have an enormous amount of data coming out of the computer, the whole issue of visualization becomes essential, and a very important part of the process.
What I want to do today is to go through a little bit of three different stories. One is a brief recollection, and perhaps, update on the TDHF model, the time dependent Hartree-Fock model for nuclear collisions-- the damped, heavy ion collisions that Herman was talking about.
Then, show you how that's evolved into something that I'm quite excited about, and that's the treatment of the nuclear shell model using Monte Carlo methods-- in some sense, generalizing the time dependent Hartree-Fock. And then we'll switch gears a little bit, and talk about proton-proton, or actually proton-antiproton, collisions, and we'll look at some simulations of the Skyrme model for the collisions of nucleons.
So let me remind you, before we get started, of what the mean-field picture is of nuclei, or more generally, of many particle systems. The idea is that each particle-- each nucleon-- moves independently throughout the system, or throughout the nucleus, under the average influence of all of the others.
Formally, that can be stated in the following way. You start with a Hamiltonian that looks like some one-body part here-- a-dagger and a being the creation and annihilation operators for the fermions-- and some two-body interaction. You then make the formal assumption that the wave function for the system-- which, in general, is a function of all of the coordinates of the system-- factors into a product of functions of the individual coordinates, appropriately antisymmetrized into a Slater determinant, and you put that into these equations.
And then you get, for example, a set of static equations describing the time independent solutions of this problem. Or a set of time dependent Hartree-Fock equations describing the time dependence of this problem. Each of these has the form of a single particle equation for each of the wave functions, one at a time.
In one case, involving the eigenvalue, in the other case, involving a time derivative. The Hamiltonian appearing in the equation involves the same one-body part, and then a second one-body part, which depends upon the strength of the interaction, and then the mean-field, which in turn, depends upon what all of the other particles are doing.
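In standard notation (my reconstruction of the formulas being described, not a verbatim copy of the slide), the Hamiltonian, the Slater-determinant ansatz, and the resulting static and time dependent Hartree-Fock equations read:

```latex
% one-body part plus two-body interaction
H = \sum_{ij} T_{ij}\, a_i^\dagger a_j
  + \tfrac{1}{4} \sum_{ijkl} \bar V_{ijkl}\, a_i^\dagger a_j^\dagger a_l a_k
% Slater-determinant ansatz for the A-body wave function
\Psi(x_1,\dots,x_A) \approx \frac{1}{\sqrt{A!}} \det \{ \phi_k(x_j) \}
% static and time dependent Hartree-Fock equations,
% with the single-particle Hamiltonian h[\rho] = T + U[\rho]
h[\rho]\, \phi_k = \varepsilon_k \phi_k ,
\qquad
i\hbar \frac{\partial \phi_k}{\partial t} = h[\rho]\, \phi_k
```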
So this reduces the original linear Schrodinger equation, in many dimensions, to a set of nonlinear partial differential equations, either eigenvalues or time dependent problems, for the static or dynamical properties of the system.
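The self-consistency loop buried in those nonlinear equations can be sketched in a few lines. This is a toy two-level model of my own invention, not the nuclear code: find the lower eigenvalue of a 2-by-2 Hamiltonian, recompute the density from its eigenvector, rebuild the mean field from that density, and repeat until the cycle converges.

```python
import math

def solve_2x2(e1, e2, v):
    """Lower eigenvalue and eigenvector of [[e1, v], [v, e2]] in closed form."""
    avg, d = 0.5 * (e1 + e2), 0.5 * (e1 - e2)
    lam = avg - math.hypot(d, v)          # lower eigenvalue
    x, y = v, lam - e1                    # unnormalized eigenvector
    n = math.hypot(x, y)
    if n == 0.0:                          # decoupled limit: lowest level itself
        return lam, (1.0, 0.0)
    return lam, (x / n, y / n)

def scf(e1=-1.0, e2=1.0, g=0.5, tol=1e-10):
    """Self-consistent mean-field iteration for a toy two-level system:
    the off-diagonal 'mean field' g*rho depends on the density rho,
    which in turn depends on the solution -- the feedback described above."""
    rho = 0.5                             # initial guess for the density
    for _ in range(200):
        v = g * rho                       # mean field from current density
        lam, (c1, c2) = solve_2x2(e1, e2, v)
        new_rho = c2 * c2                 # occupation of the upper level
        if abs(new_rho - rho) < tol:
            break
        rho = new_rho
    return lam, rho
```

For this repulsive toy coupling the loop drains the upper level and settles onto the unperturbed lower level; the point is the structure of the iteration, not the physics of the fixed point.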
In thinking about the history of how mean-field physics has evolved, it was quite interesting to go back and try to tick off the people involved, and realize the number of developments that occurred here at the Laboratory for Nuclear Science. Of course, the whole idea, physically, was started for atomic physics back in the 30s with the Hartree-Fock method.
In the late 40s, the systematics of nuclear properties-- binding energies, ground-state spins, and so on-- started to suggest that, in fact, the idea was not so outrageous for nuclei, but it was very difficult to understand how to reconcile the notion of nucleons moving freely through the nucleus with the idea that the nuclear matter making up the nucleus is a strongly correlated system. Viki Weisskopf mentioned that this morning. Dirk Walecka contributed a lot to trying to resolve that apparent contradiction in our understanding of nuclear matter.
Well, by the time the late 60s had come around, people began to take the idea rather seriously, and a number of advances in our understanding of nuclear matter, together with an advance in the power of computers, led people to try to solve nuclear Hartree-Fock equations for the static system using effective interactions.
And here, there were two schools of development. One, initiated by Skyrme and followed up by Dominique Vautherin and Hubert Flocard, who were frequent visitors here, parametrized the effective interaction between nucleons in terms of order six parameters-- the so-called Skyrme interaction-- fixed those parameters by comparing with the gross properties of nuclei, and then proceeded to calculate the properties of specific nuclei.
John Negele, originally working with Hans Bethe at Cornell, and then later, here, pursued a different approach in which he tried to establish the effective interactions from the properties of nuclear matter. I think in the end, one came to realize that these were two essentially equivalent ways of tackling the same problem.
These effective interactions in the static calculations proved to give a very good description of the gross properties of nuclei-- energies, densities, shapes, and so on. And just to show you how good things were, John Negele will recognize this figure. This is a plot of the charge density for a whole sequence of nuclei, and a comparison between the mean-field theory and the experiment, as determined, for example, from electron scattering, for nuclei ranging from oxygen to lead-208.
Well, along about the time that I showed up on the scene in the early 70s, a number of us had the idea of trying to solve the time dependent equations for the same kind of effective interactions. The motivation, as I will discuss in a moment, was centered on trying to describe the collisions between two large nuclei.
And here, of particular importance was the work of Paul Bonche, who was, at that time, a visitor from Saclay. Arthur Kerman continually provided guidance. Mike Strayer, Mort Weiss, Tom Davies, and so on, all played a role in trying to make these calculations happen, and I'll show you some of those.
Along about 1980, there was some other very important work that was done here, primarily by Shimon Levit, working with John Negele, and then by Yoram Alhassid, Roger Balian, and Marcel Veneroni, with the question of what does it all mean? Given that you can solve these non-linear equations that describe the static or time dependent Hartree-Fock system, how do I connect that with the real system?
A fundamental problem here is that the Schrodinger equation is linear, while the Hartree-Fock equations are non-linear, and trying to understand then, how to make the connection with conventional quantum mechanics, which I think, we eventually did do, was the issue in the early 80s.
Along about the mid-80s, there were some interesting applications of the numerical methods that we developed originally in the early 70s to problems in atomic physics-- in particular, charge transfer between two fast atoms, and the behavior of atoms under the influence of strong laser pulses, where one uses the TDHF equations to follow the evolution of the atomic electrons.
In about 1986, a graduate student of mine, Gale Sugiyama, and I, looked at a generalization of the TDHF equations, which I'll describe for you, called the auxiliary field Monte Carlo method, in which one used many mean-fields to try to get exact solutions to quantum mechanical many-body problems. This was picked up by Jorge Hirsch, and a host of other people who were, slightly after that time, worried about how to solve the Hubbard model, and trying to understand high temperature superconductivity.
And then in about 1990 or so, we began to think seriously about applying these ideas to the nuclear shell model, as I'll describe for you. This is work that's being done by Calvin Johnson and Erich Ormand, who are post-docs at Caltech, Gladys Lang, a very talented graduate student, and myself.
I should note that Calvin is in some sense, an LNS product, although several generations removed, Calvin was a student of Wick Haxton at Seattle. Wick, in turn, was a student of Dirk Walecka at Stanford. And of course, Dirk was a student here under Viki.
OK, so the motivation at the time, in the early 70s, was to try to understand heavy ion collisions. And of course, the collision between two large nuclei, schematically illustrated here, is very different from the collision between a single nucleon, or an alpha particle and a nucleus. One has the possibility of affecting nuclei in very dramatic ways.
We know now that one can make nuclei spin fast enough so that the angular momentum is a significant influence on their structure. We can pump more than 100 hbar into a nucleus. We can make nuclei very far from the line of beta stability, with unusual neutron or proton numbers.
We can heat nuclei to temperatures of several MeV-- four or five MeV-- and study their behavior, their electromagnetic properties, their shapes. We can also compress them, perhaps in ultra-relativistic collisions, to very high densities, and perhaps create the quark-gluon plasma that I suspect we'll hear about tomorrow.
The driving issue at the time was that one saw that at low energies, the system behaved in rather unexpected ways. As Herman mentioned, the two nuclei would come together, exchange a lot of energy, but not exchange very much in the way of particles. And could one understand this in terms of the dynamics of a very large, nearly degenerate Fermi system?
So what we did was in fact, at the time, to set up the time dependent Hartree-Fock equations, in which one could follow-- using the same effective interactions-- the development of the single particle wave function for each nucleon during the collision. Paul Bonche, and myself, and John Negele started by considering the very simplest system, namely slabs of nuclear matter, colliding in effectively, one dimension.
We set the equations up on the LNS computer, which at that time, was an IBM 360 something or other. I don't remember exactly what it was. And there was great interest, as the results started to come out day after day-- typically overnight, or every second night-- in what these equations would do. I remember heated arguments in Arthur's office in which the opinions ranged from, well, they would just go through each other and nothing would happen at all, to, they would stick together and you would discover the compound nucleus.
These days, one can do the calculations in three dimensions. One stores of order ten to the fifth variables on a very fast computer-- typically several hours of Cray-1 time per collision. One can study the collision at different energies and impact parameters, and then compare with the data. And one finds many unusual features that are directly related to the quantum mechanical behavior of the systems under study.
Let me show you a video which was done at Livermore a number of years ago by Mort Weiss and his collaborators-- not a simulation of the calculations, but a visualization of the actual output of the calculations. So if we could run the video. Oh, here we go.
So these are simulations of a collision of krypton with lanthanum-- moderately large nuclei-- at a laboratory energy of 505 MeV. They're done using the Skyrme effective interaction. What we do is to set the two nuclei of krypton and lanthanum up, boost them toward each other, and watch the density evolving during the collision. A typical length scale is about 8 fermis or so across, and a typical time for the entire collision is a few times 10 to the minus 21 seconds.
This is a head-on collision at 0 hbar, and one is watching the densities evolve. You notice how well defined the two nuclei are, even though they-- as Herman said-- have gotten into bed with each other, and are exchanging particles. I believe these guys then separate after the collision. Very slowly, you can watch the neck snap, and go over here.
You can notice the pronounced octupole deformation of the nuclei. It's, of course, the strong force, the volume energy, and the surface energy that's playing against the Coulomb force here, which is quite significant for these heavy nuclei. OK, and we've now moved up to 27 hbar or so.
You'll notice, as the collision proceeds, all kinds of funny bumps and wiggles appear along the edges. These are not numerical artifacts, but are due to the motion of the particular single particle wave functions coursing through the combined potential well of the two systems, and banging against the walls.
These guys stick as they slowly rotate, and they will stick together for as long as you're willing to pay the computer bill. OK, we're going to crank up the angular momentum now to 84 hbar. This, presumably, would form a compound nucleus if it sticks. These guys come apart-- I'm not sure. They come apart.
During the collision, of course, a great amount of the initial kinetic energy of the system is pumped into the internal excitations of these fragments, which come apart very deformed, and moving much more slowly than they came in with. And of course, spinning as well-- one loses orbital angular momentum into internal rotation of the system. 105 hbar. A rather more rapid separation.
When we were doing these one-dimensional calculations at the very beginning in the early 70s, it was a great feat to be able to even put up the one-dimensional pictures in black and white on the CRT. It took, typically, half a minute of intervention on the computer operator's part in order to draw one frame. Now, of course, you can do this, essentially, on a PC. Things have evolved significantly.
You can see how they're spinning. That spinning, of course, is eventually reflected in the many gamma rays that are emitted by the spinning fragments as they cascade down and lose their angular momentum. 250 hbar, which is a rather peripheral collision. You can see a little bit of Coulomb excitation as the surfaces don't quite kiss one another, and come apart.
And then finally, we can see several collisions at once. Here's 5 hbar, 84, and the 250. You can get a sense of the different time scales involved. We're going to freeze the lower one now, and let the other two proceed, and see how the different time scales, and different scattering angles result from the different initial conditions.
That's enough right now. We'll come back in a moment. Could we hold that one for a second? Let me just show you the kind of things that one is able to reproduce. Here is the same kind of plot that Herman was showing you. See, you might want to try to cut the projector off. OK, good.
We're looking here at a collision of krypton and bismuth. One plots the energy of the fragments observed against the angle at which they're observed. One sees a rather complicated structure of the cross section in the data, and one is rather satisfied that the TDHF calculations can reproduce the angle and energy variation of the average of the structure. Similarly here, is a much heavier system, xenon on bismuth. Essentially, all of the fragments are focused very near the grazing angle, and again, the TDHF reproduces that.
One of the issues of TDHF is how does one extend its validity up to higher energies? And I would have to state, in all honesty, that that's a subject of still serious research. One has the sense that these mean-field theories will work at lower energies, but at higher energies, the Pauli principle becomes less effective, and the whole idea of an independent particle system begins to break down.
OK, so much for TDHF. Let me then talk a little bit about the second topic, which in fact grows out of TDHF in a rather peculiar way, and that's the whole issue of trying to solve the nuclear shell model by using Monte Carlo. If you ask any nuclear physicist what is your best description of the nucleus, you would, in fact, probably get the answer of the nuclear shell model.
In that system, one takes a set of some number of nucleons-- A nucleons-- and distributes them over N levels. The Hamiltonian one writes down consists of the single particle energies and this two-body interaction, and then you proceed to simply diagonalize that Hamiltonian in a basis-- the basis of configurations of these nucleons spread among the levels.
One's enthusiasm for the model, however, is tempered when you start to make a calculation of how big a basis you need. If I don't worry about symmetries such as angular momentum or isospin, but simply count the number of ways in which I can distribute these A nucleons over the N levels, it's, of course, a binomial coefficient, which increases very rapidly as either N or A increases-- in fact, increases exponentially in general.
What that means in practical terms is that direct diagonalization of the shell model Hamiltonian is hopeless beyond the sd shell, and in fact, you cannot do full calculations in the pf shell. Nevertheless it's a widely held belief among nuclear structure physicists that if you could solve the shell model Hamiltonian with appropriate interactions, that that would, in fact, describe the entire richness of nuclear structure that we see-- spherical nuclei deformations, isomers, all of the things that one talks about.
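The binomial counting is easy to make concrete. The example nuclei and the m-state counts below (12 per nucleon species in the sd shell, 20 in the pf shell) are my own illustrative choices, and the counts ignore all symmetry reductions, exactly as in the argument above.

```python
from math import comb

def mscheme_dim(n_states, n_particles):
    """Ways to place identical fermions in single-particle states."""
    return comb(n_states, n_particles)

# sd shell: 12 m-states each for protons and neutrons;
# e.g. 6 protons + 6 neutrons (a 28Si-like case) -> under a million states
sd = mscheme_dim(12, 6) ** 2      # 924**2 = 853,776

# pf shell: 20 m-states per species; half filling for both species
pf = mscheme_dim(20, 10) ** 2     # 184,756**2, of order 3 x 10**10
```

The jump from roughly 10^6 to roughly 10^10 configurations is the "hopeless beyond the sd shell" statement in numbers.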
Well, one of the clues as to how to proceed is to ask how is this exact solution that one might be after related to the mean-field that we can, in fact, treat? Let me remind you that the mean-field, the way it was constructed, the Hamiltonian involved a one-body operator representing the single particle levels, and a second one-body operator whose strength depended upon the mean-field sigma, and of course, the interaction as well.
What Shimon Levit, in the early 80s, taught us in nuclear physics-- and what, in fact, was known to a number of people in statistical mechanics for almost 15 years before-- was that it is possible to write the exact evolution operator of the system-- so H here is the exact Hamiltonian, and its exponential is some terribly complicated many-body operator-- as, in fact, a linear superposition of exponentials of one-body operators, which we know how to handle.
This linear superposition being a sum over all possible mean-fields with some Gaussian weighting associated with the mean-field. And so this opens up the possibility-- if you can arrange to do the sum over all possible mean-fields-- of learning how to handle this exact partition function operator, and consequently, make exact statements about the many-body system.
For example, if you're interested in the thermal average at some temperature one over beta of the Hamiltonian, you can write that as the ratio of two sums over all mean-fields, one of which has a slightly different summand than the other. These should be little h's, which is, of course, the basis for the whole method. This method is, for example, the method that's used to treat the Hubbard model in investigations of high temperature superconductivity in copper oxide plates.
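The mathematical heart of that statement is the scalar Gaussian identity exp(a^2/2) = E[exp(a*sigma)] for sigma drawn from a unit normal; promoting a to a one-body operator is what turns the exponential of a two-body interaction into a Gaussian-weighted sum over mean-fields. Here is a minimal numerical check of the scalar identity-- a toy of my own, not the production code:

```python
import math
import random

def hs_estimate(a, n_samples=200_000, seed=1):
    """Monte Carlo estimate of E[exp(a * sigma)], sigma ~ N(0, 1).
    By the Gaussian (Hubbard-Stratonovich) identity this equals exp(a*a/2)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_samples):
        sigma = rng.gauss(0.0, 1.0)    # sample one auxiliary field
        total += math.exp(a * sigma)   # "one-body" factor for this field
    return total / n_samples

a = 0.7
mc, exact = hs_estimate(a), math.exp(a * a / 2)
```

In the operator version, each sampled field sigma carries a whole one-body evolution, and observables come out as the ratio of two such Gaussian-weighted sums, as in the thermal average just described.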
Well, the idea is to evaluate this average by a Monte Carlo sampling over all mean-fields. This has a tremendous advantage over the direct diagonalization method for the shell model. One has to deal here only with N by N matrices-- the size of the number of orbitals in the problem-- and not with matrices the size of the exponentially large basis. And so it has a very favorable scaling with the size of the shell model problem.
Moreover, this method is ideal for parallel computer architectures. Since you're doing a Monte Carlo, and a Monte Carlo is not a terribly large problem, what you can do is farm out the Monte Carlo to many independent processors. If you have 64 processors, you can do 64 different Monte Carlos at once, and then simply collect the statistics.
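That farming-out strategy can be sketched as follows. This toy runs the independent walkers sequentially in one process-- on the parallel machine each walker would sit on its own node-- and pools their statistics at the end; the observable, E[x^2] = 1 for a standard normal, is a stand-in of my own choosing.

```python
import math
import random

def one_walker(seed, n=5_000):
    """One independent Monte Carlo estimate; distinct seed per walker."""
    rng = random.Random(seed)
    # toy observable: estimate E[x**2] = 1 for x ~ N(0, 1)
    s = sum(rng.gauss(0.0, 1.0) ** 2 for _ in range(n))
    return s / n

def pooled_estimate(n_walkers=64):
    """Run n_walkers independent estimates and pool the statistics."""
    estimates = [one_walker(seed) for seed in range(n_walkers)]
    mean = sum(estimates) / n_walkers
    var = sum((e - mean) ** 2 for e in estimates) / (n_walkers - 1)
    stderr = math.sqrt(var / n_walkers)
    return mean, stderr
```

Because the walkers never communicate, 64 processors give essentially 64 times the statistics-- the "embarrassingly parallel" structure described above.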
And in fact, the results-- and I'll show you some of them in a moment-- were in fact done using the Intel Gamma machine at Caltech, which is a 64 node machine. There's a 512 node machine that's just started running, and of course, that will be eight times better.
As a test case, we've recently been looking at problems in the sd shell, for which we can do the exact diagonalization and test the method. The force we've taken is an isovector pairing force, plus quadrupole-quadrupole and hexadecapole-hexadecapole interactions.
I should remark, before I show you the results, that not only can one do the breakup of the Hamiltonian in terms of the density, but one can also do it in terms of the pairing operators, which is perhaps more appropriate for the nuclear force, which we know has a strong pairing component.
Here's one example which has come off the computer in the last couple of weeks. What I'm showing you is a calculation of one species of nucleons-- approximately six in the sd shell-- using the interaction that I discussed, pairing plus multipole. We're working in the grand canonical formulation, so that we have a chemical potential and a sum over all possible particle numbers, with the pairing decomposition, as I noted.
Plotted on the bottom here is delta beta, the time-slice width-- a numerical parameter of the Monte Carlo. Ideally, one would like to take delta beta to zero. What's shown are the Monte Carlo results for various delta beta-- this one corresponding to about 10 time slices. The mean-field result is way up there, and the exact result is down there, and you can see that there is, in fact, a very good extrapolation, and one has some sense that this is, in fact, going to work.
Another observable you can calculate is the mean square angular momentum in the system. It starts out at some rather high number. The mean-field is all the way up there. The exact answer is, of course, very small. This is at a temperature of 1 MeV, and you can see, again, a very good extrapolation downward to the exact answer.
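The extrapolation itself is just a fit in the time-slice width. Here is a minimal sketch, assuming the systematic Trotter error is linear in delta beta (the actual power depends on the breakup used), with made-up data standing in for the Monte Carlo points:

```python
def extrapolate_to_zero(dbetas, values):
    """Least-squares straight line through (delta_beta, value) pairs;
    return the intercept, i.e. the extrapolated delta_beta -> 0 result."""
    n = len(dbetas)
    mx = sum(dbetas) / n
    my = sum(values) / n
    sxx = sum((x - mx) ** 2 for x in dbetas)
    sxy = sum((x - mx) * (y - my) for x, y in zip(dbetas, values))
    slope = sxy / sxx
    return my - slope * mx            # value at delta_beta = 0

# hypothetical data: exact answer -10.0 plus an O(delta_beta) Trotter error
dbetas = [0.0625, 0.125, 0.25]
values = [-10.0 + 4.0 * db for db in dbetas]
```

In practice each point also carries a statistical error bar, so the fit would be weighted; the idea is unchanged.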
We've gotten more ambitious in the last few weeks, and we're now trying to do protons and neutrons together. This is a calculation in which we're trying to refine a trial state, which consists of four nucleons in the d5/2 orbital-- two protons and two neutrons-- into the exact solution in the full sd shell with this pairing plus quadrupole plus hexadecapole interaction.
The Monte Carlo points are these red symbols. We're a little bit concerned about that one. Our best guess at the moment as to what the exact solution is, is this one. I think all one can conclude from this point is that the calculations are feasible. The energy is coming down. Whether we've got this exactly right or not, we'll presumably know in a couple of weeks.
Equally encouraging, is that you can take the square of the angular momentum, and calculate it. It starts off at some rather high value because we've assumed a deformed trial state, but then drops very nicely down to a very small value, consistent with the zero that we know it has to be as the temperature gets lower, and lower.
So what I've shown you are perfectly feasible test calculations. One is looking, I think, at some very exciting future applications, assuming everything works. You can calculate strength functions for hot nuclei, or in fact, for nuclear ground states for your favorite operators, Gamow-Teller, quadrupole operators, whatever.
You can calculate the level densities, and the influence of the residual interaction on them. These, of course, very much determine the behavior of what happens in Viki's evaporation formula. One would like to know how the residual interaction affects the level densities. You can crank this system, study high spin structure, and so on. So I'm, in fact, quite excited about what might happen in the near future with those calculations.
So that was topic two. Let me then turn to topic three, which is a different kind of supercomputer application-- not to nuclei, but to nucleons, and this is a visualization of the Skyrmion model of nucleons. The general background is as follows. We have very strong evidence now that the correct theory of the strong force is QCD, quarks bound together by gluons. But in fact, we can't solve QCD in many interesting cases. Nobody can start from the QCD Lagrangian, and tell you what the mass of the proton is.
There are, of course, numerous efforts along these lines-- most prominently, lattice gauge theories-- but I think we're still a long way from having convincing results. And so for many cases, people have resorted to effective theories. One such effective theory one can build is a theory of interacting mesons. So you do away with the quarks and gluons entirely; you just talk about mesons.
And in particular, about two kinds of mesons in the Skyrmion model: the pi meson and the sigma meson. The pi comes in three flavors, from isospin-- or three charge states, maybe that's the wrong word-- three kinds of pi's, and then the sigma field. And in fact, it's rather surprising and non-intuitive that you can build a nucleon out of mesons only, with the appropriate non-linear theory.
To do so, one has to mix the charge degrees of freedom in the pion together with the space degrees of freedom, basically to make the pion have a different charge character, depending where you are around the nucleon. It turns out that such an object is a fermion, and if you adjust the parameters in the Lagrangian correctly, you can describe nucleon properties to an accuracy of about 30% or so-- energy, radius, GA, and things of that sort.
One of the interesting and fun things you can do is to ask what happens if I take two of the skyrmion solutions and I try to collide them together-- that is, look at the classical equations of motion for this non-linear Skyrme Lagrangian. We've done that by setting up meson fields on a 20 by 60 by 40 lattice, taking about 2000 time steps, and typically, about an hour of Cray time per collision. If we could run just a little bit of the tape, and then stop it for a moment-- let's go ahead, and do that.
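The time-stepping scheme behind such a lattice calculation can be sketched in one dimension. This is not the Skyrme Lagrangian-- I've substituted the sine-Gordon equation, u_tt = u_xx - sin(u), which also has topological soliton solutions-- but the explicit leapfrog update of a nonlinear field on a grid is the same idea:

```python
import math

def evolve(n=200, dx=0.1, dt=0.05, steps=400):
    """Leapfrog evolution of the 1-D sine-Gordon field u_tt = u_xx - sin(u),
    starting from a static kink, u(x) = 4*atan(exp(x - x0)), at rest.
    Endpoints are held fixed (the field is flat far from the kink)."""
    x0 = n * dx / 2
    u = [4.0 * math.atan(math.exp(i * dx - x0)) for i in range(n)]
    u_prev = u[:]                         # zero initial velocity
    for _ in range(steps):
        u_next = u[:]
        for i in range(1, n - 1):
            lap = (u[i + 1] - 2.0 * u[i] + u[i - 1]) / dx**2
            u_next[i] = 2.0 * u[i] - u_prev[i] + dt**2 * (lap - math.sin(u[i]))
        u_prev, u = u, u_next
    return u
```

A collision run would simply superpose two boosted solitons as the initial condition; the update loop-- and the dt < dx stability constraint-- stays the same, which is why thousands of time steps on a modest lattice already cost hours of 1980s supercomputer time in three dimensions.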
So this was done by Mike Sommermann, Ryoichi Seki, and myself. The visualization, which as I mentioned, is a non-trivial problem, was done by Matt Class, an undergraduate using some facilities at Caltech. And what we're going to be looking at is, first of all, skyrmion-skyrmion collisions. We plot here, the pion field in the plane, and the red shows the baryon density of the skyrmions. This is at a kinetic energy of 157 MeV, an impact parameter of 0.04 fermis.
We're seeing the same thing again. You can see that they look an awful lot like the TDHF pictures. They come together, and bounce off each other. Here we are at an impact parameter of 1.2 fermis, basically the same energy. Time and Fermi over C is running on the bottom, and we see a significantly greater interaction between the two. A lot of what looks like vorticity in the pion field here, which is kind of interesting. OK, can we hold this one for a second?
And just to show you-- of course, one can get some sense-- here are the trajectories of the baryon density as a function of y and z, at two different collision velocities and different impact parameters-- you can get some sense of what the system is doing. Another fun thing to do is to collide not skyrmion with skyrmion, but skyrmion with anti-skyrmion. If you can build a proton, you can build an anti-proton.
And just to remind you, experimentally, what happens-- this is an emulsion picture of an anti-proton coming in, coming to rest, and then blowing up a nucleus. Some large number of pions, four to five, come out, and this field has got-- this theory has got pions in it. You might ask what happens, and so these are simulations of a skyrmion and anti-skyrmion.
You can see the pion field pointing out over here, and pointing in over here. This is the anti-skyrmion, this is the skyrmion. Let's go ahead, and just run it. Gone. You're left, of course, with just pions. We'll run the same thing again so you can watch it.
We're going to do the same thing, but at higher energies-- 500 MeV now, and an impact parameter of 1 fermi. These pion excitations will, of course, propagate outward until they become weak enough that the theory becomes linear, and then they appear as pions. I think that's about all.
Let me just show you what one can extract from that. You can try to plot, as a function of time, the baryon number as a function of time during the collision. And what's rather surprising is that the annihilation happens rather rapidly, essentially at the causal limit, independent of the bombarding energy, as I think you might have been able to get a feel from those pictures.
Another thing you can do is to take the final excitation of the pion field, Fourier analyze it, and try to predict the spectrum, and number of pions that come out. And here is an example of that collision. Here is the number of pions as a function of the pion energy. You can see that it's peaked at around an energy of 500 MeV.
There are two skyrmions in the collision, so you have a total energy of 2000 MeV. One is getting about four pions out of here, which is a little bit on the low side, but not terrible compared to what one sees experimentally. All right, I think I've probably just about used up my time, so let me stop at this point, and thank you for your attention.