MIT MechE Symposium: Mechanical Engineering and the Information Age - Harry Asada, Kai-Yeung Siu, Seth Lloyd

Search transcript...

[MUSIC PLAYING]

SETH LLOYD: --Professor Harry Asada, from the MIT Department of Mechanical Engineering. Professor Asada is the Ford Professor of Engineering at MIT. He is the head of the d'Arbeloff Laboratory for Information Systems and Technology.

I've been impressed, since I came to join the d'Arbeloff Lab, with Professor Asada -- first of all, with the amazing amount of work he manages to get done. But he's also raised a lot of money in support of information technology. So in fact, if there's someone who's responsible for our mechanical engineering department having its very strong program in information, then Professor Asada certainly deserves credit for it. Please, Harry.

HARRY ASADA: Good morning. Thanks, Seth. Now, rather than describing the big picture, I would like to exemplify some of the earlier comments and statements by describing the basic research programs we have been conducting.

As a matter of fact, quite exciting things are happening in the d'Arbeloff Laboratory. And I'd like to do my best to convince you of how information technology, including instrumentation and networking, can both open new frontiers and revitalize areas of mainstream mechanical engineering that are already mature.

Now, first of all, I'd like to raise this instrumentation and networking issue. It's quite an important part of information technology. The next speaker, Sunny Siu, who is, I guess, the star of the internet, brought in this kind of networking concept. And we learned a lot from him.

We are putting lots of little gadgets in diverse areas. And we developed little computers with wireless transmission systems, scattered around almost every place that you can imagine, including the human body, getting good access to any machine, including robots and other devices.

But imagine that you are augmented by the ability to communicate with other machines. What can we do with it? Or you can get your vital signs at all times, gathering a huge amount of information. That would totally change the way we do design in these areas.

Now, this little gadget -- we call this i-Coin technology, because the microcontroller, with a small computer and transmitter, is combined with a battery, all in a small package as small as a dime or a nickel. And these actually communicate wirelessly with other computers.

So now, things are moving to mobile computing, and PDAs, combined with this little coin, are going to be one gadget sometime soon. And they are replacing-- I brought in just a brand new laptop. But this is going to be old technology. It's going to be replaced by those things.

So we made a connection of all these. And instead of using different devices here and there, we use a unified system, so that we can actually do a lot of computation and communication. This is the first device we've put together. We have RF transmitters. We can transmit these signals up to 3,000 feet, quite stably.

Now, one of the very interesting problems is that we designed all of this for low-energy computation and communication. Dr. Torossian mentioned that computer speed keeps increasing. But at the same time, what is interesting to me is that the power required for each gate switch has been decreasing as quickly as the speed of computers has increased.

So as a matter of fact, there's quite a big number of gate switchings that can be powered by a single battery that lasts for the entire lifespan of the chip. So it's going to be disposable.

But still, the bottleneck is RF transmission. And while there are quite a few research programs going on in this area in the mobile computing community, we looked at this issue at the very fundamental level of coding and modulation.

So suppose you would like to transmit some messages, say m1 through mq. You have to code them -- you have to represent the information in some form -- and you have a collection of such codewords here. In sending each codeword, you have to spend some energy, say ei. And what I would like to do is to minimize the average energy per transmitted message, E.

Now, obviously, what you can do is evaluate these codewords in terms of energy expenditure and put them in increasing order of energy consumption. And assuming that you know the source symbol probabilities, you can put the messages in descending order of probability. And you make a match between these two. The way to minimize the average energy is simply to assign the codewords with less energy to the more probable messages.
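As a rough sketch of that assignment rule -- with made-up per-codeword energies and symbol probabilities, not numbers from the talk -- the minimization can be written in a few lines:

```python
# Sketch of the assignment idea described above: pair low-energy codewords with
# high-probability messages to minimize the average energy per transmitted message.
# The energies and probabilities below are illustrative, not values from the talk.

def min_average_energy(energies, probabilities):
    """Return the minimum expected energy per message and the symbol-to-codeword map."""
    # Sort codewords by increasing energy, symbols by decreasing probability.
    codewords = sorted(range(len(energies)), key=lambda i: energies[i])
    symbols = sorted(range(len(probabilities)), key=lambda j: -probabilities[j])
    assignment = {symbols[k]: codewords[k] for k in range(len(symbols))}
    avg_energy = sum(probabilities[s] * energies[c] for s, c in assignment.items())
    return avg_energy, assignment

# Hypothetical example: four messages, four codewords.
energies = [1.0, 2.0, 3.0, 4.0]        # e_i per codeword (arbitrary units)
probabilities = [0.1, 0.5, 0.3, 0.1]   # symbol probabilities, summing to 1
E, mapping = min_average_energy(energies, probabilities)
print(E, mapping)  # 1.8, with the most probable symbol on the cheapest codeword
```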

But this is nothing special. It's just an extension of Huffman coding, which is very well-known stuff. But some very important, exciting things are happening in this codebook part, which is the collection of codewords. Now, in the communications community, things have been going to higher bit rates and longer words, and transmission rate has been the most important factor for many years.

But looking at those applications I just showed you in the first few graphs, many things can be transmitted rather slowly. Battery power is more important. I can tell you 100 examples of that. Now, given that, we can actually design a special code that is optimal in terms of energy savings.

Now, if you just use the standard coding techniques-- let me take simple on-off keying to begin with. You need to send words of a certain bit length -- so-called word lengths -- which are a kind of mixture of high bits and low bits, and quite a few things are happening.

Now, looking at this simple form of on-off keying, energy is consumed only when the RF transmitter is on -- that is, for the active, high bits. So obviously, if you reduce the number of high bits, you can minimize the energy consumption.

Now, supposing you can transmit a little more slowly, you can put some redundant bits in here and actually generate codes which do not need many high bits. It turns out that just a few high bits -- shown here -- can represent many, many codewords. This is trading redundancy for energy savings. And we explored how much energy saving we can actually gain by adding some redundancy.

Now, the set of codewords having the least number of high bits is called the optimal codebook. The way you can generate it is to start with the all-low-bit word, then the words with one high bit, then the words with two high bits, and so on. You keep counting until you arrive at the point where you have the same number of codewords as the number of symbols you want to ship out. Then you cut it off and do not use the other code patterns, which have more high bits.
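A small sketch of that construction, with an illustrative word length and symbol count (not values from the talk):

```python
# Sketch of the codebook construction just described: for on-off keying,
# enumerate binary words of a chosen length in order of increasing number of
# high ("1") bits, and stop once there are enough codewords for the symbols.
from itertools import combinations

def optimal_ook_codebook(word_length, num_symbols):
    """Return num_symbols codewords of word_length bits with the fewest high bits."""
    codebook = []
    for weight in range(word_length + 1):        # 0 high bits, then 1, then 2, ...
        for ones in combinations(range(word_length), weight):
            word = ['0'] * word_length
            for pos in ones:
                word[pos] = '1'
            codebook.append(''.join(word))
            if len(codebook) == num_symbols:
                return codebook
    raise ValueError("word length too short for the number of symbols")

# Hypothetical example: 16 symbols sent with 6-bit words.
# 1 word of weight 0 + 6 of weight 1 + 9 of weight 2 = 16 codewords,
# so no codeword needs more than two high bits.
print(optimal_ook_codebook(6, 16))
```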

So this is a simpler one. Of course, in practice, much more sophisticated modulation techniques have been used; I talk about only this one. There were many, many things happening in the '80s and '90s. But no matter what, the one interesting property that we found is that, to be able to send things with minimum energy, the necessary condition is that the codebook -- that collection of codewords -- must be an optimal codebook in this energy sense.

Now, although the probabilities of the source symbols may vary depending on the application, the codebook part can be fixed. And we can implement this. In our i-Coin, we have such an optimal codebook, which is attuned to the specific probability distributions involved in the system. And after that, we just make the optimum assignment in between.

Now, taking it one step further -- a few steps further from this point -- we, meaning Professor Seth Lloyd, Ian Hunter, and myself together, are marching towards a much more challenging device, which is called i-Grain.

This is a complete, self-contained computer, with a battery in it and a wireless communication device in it, and it is only as large as a grain of rice. We do integrated computation and communication -- a one-chip CPU and wireless transmission -- which is not an easy job on the signal side.

And more importantly, we do integrated chip design and battery design, and actually explore the opportunities to cleverly, wisely make the battery design consistent with the chip design. And hopefully, such a device can last for five years, and then it turns out to be disposable.

Now, having such a device, we can put it in unusual places, in particular on the human body. And actually, we have been working on this for the last several years -- trying to put the devices on the human in a non-intrusive and comfortable way. And as I said, the laptop computer will be replaced by PDAs. There's a robot here connected to the internet.

So if we can make a connection between this and many parts of the body, we can look at a lot of interesting things. And we call this a wireless body local area network. Imagine that you have such an ability.

Now, we have quite a few interesting applications of that. Well, human-computer interaction and, most specifically, personal robotics have recently been booming. And the key issue in personal robotics is, basically, how you let your robot recognize what you're really doing. So human measurement, or the human side of instrumentation, plays a major role.

And, as a matter of fact, in this case, our student, Steve Mascaro, is working together with this robot and building something like this. This robot is clever enough to understand what he is doing by tracking his hand motion. As a matter of fact, most of the jobs performed by humans are done by means of hand motion. So if you keep track of the hand motion, we can get most of the information needed.

And let me just talk about one specific technology we developed, which is to replace the data glove. Perhaps you know that this is a kind of existing technology. We can measure the hand motion data: we can measure the bending angles and the touching force and such.

But once you wear such a glove, you lose the haptic sense. Now, what we are doing is to measure the hand motion, including the force acting at the fingertip, without covering the hands. How can we do that?

Well, as traditional mechanical engineers, if you try to measure some force at a fingertip, you place some sensor pad here. We're clever enough to make it very thin, but it still interferes with the haptic sense. So what is the solution?

Now, if there is a table in front of you, please place your fingers on top of the table surface and then look at your fingernails. What's going to happen? Yeah, please do it, and you'll find something -- a good idea -- that your nails change color. And that's a very clear change.

So we instrumented the fingernails, with just standard micro LEDs and photodetectors, wirelessly communicating to the other part. This is the device we developed. And it's just like a teenager with fake nails. And we can actually measure all this.

Now, we have a very interesting color change, as described here. What is really interesting to me is that we can measure not only the force acting on the fingertip but even the bending angle on top of that.

This is actually a quick video clip, and I'll show you how this really works. Let me just skip the first few portions. Now-- whoops. This is brand new Windows 2000, and it may have some trouble. Well, let me just hold up this guy, and I hope this one is good enough. Well, that's the kind of technology that's vulnerable. Can you see this?

Yeah, this is the first part of the video. Basically, we measure the bending angles. And just to get a reference, we put a marker here. Shown here is basically the tip of the finger. We instrumented it, and we bend it. The lower trace shows the bending angle measured by this fingernail sensor. And there's nothing on the finger; it's not covered at all. But if you bend it, we can measure this.

The top one indicates the actual bending angle. As a matter of fact, this is to get a reference, to make a comparison with the actual measurement taken by the sensor. And this is the touching part: if you touch some surface, then the sensor output changes. So you can get information as to how much it is being touched.

And this turns out to work very quickly. We can't quite get the measurement of a pianist's motion, but it's quite quick, amazingly quick. Now, let me just move on to the next one. Yeah, this is the last part. The finger is moving and tapping quickly, and the measurement actually follows it.

OK, now, moving forward: as a matter of fact, the fingernail bed has quite interesting characteristics, although they are not well known. We have quite a rich capillary system right beneath the nail. The capillary tubes there are twice as long as in other parts, so even a little force actually changes the color.

Now, we in mechanical engineering are trying to understand why that happens. So this is a kind of fluid mechanics and tissue mechanics problem. We tried to build a model of how it really works and then check it against actual experiments.

Now, we're moving towards the wireless sensor, having the i-Coin communicate with the gadget. And one more interesting thing about this is that we can actually measure the shear force. Shear force, generally, is very difficult to measure. But with this, we can see a very distinct color pattern change, so we can measure it.

Now, what is the killer app for this? We are actually applying this to replace the computer pointer. If you use an IBM laptop, you see a little red pointing stick in the middle of the keyboard, and that's sometimes very difficult to move. But with this, you can just use any surface and point to what you really want to look at and open the file.

Well, of course, in instrumenting the human, there are many ways of getting information from the person -- traditionally, in this case, some switches embedded in the wall. But with this nail sensor, you don't have to do that. You can just paste up a picture of the switch, and once you touch it, you get the information.

Of course, we need to have some additional sensors to locate the fingers. So in our case, we're using the magnetic tracker. But that can be done. And in this case, the robot is actually controlled with many virtual switches embedded on this table.

Now, this type of virtual switch enhances human-machine communication. For instance, with traditional switches, like in this case, many people have trouble understanding what a given switch means and which action it results in; making those connections is a very difficult task. But with virtual switches, we place the switches where the task takes place, so it's very easy to understand.

Now, let me just talk about the more mechanically oriented work and how it is connected to the information side. This is actually the care project we are doing. And this is the kind of thing you see in daily life: the patient here has been struggling to transfer from the wheelchair to a bed and from the bed to a wheelchair.

And first, we developed a kind of hybrid bed-chair system. This one can be pulled out of the bed and then moved around freely; it's an omnidirectional chair. We've been working on this with VA hospitals. And what they experience is that, having so many actuators, it's a little difficult to operate.

But we place virtual switches in all these places. First, we look at where those caregivers tend to touch in order to move things -- for instance, raising this back -- and we place the virtual switch there. Of course, the virtual switches are sometimes on the fabric, like this one, where you normally can't put traditional switches. But we can place them there.

Now, along these directions, let me describe to you one very challenging project. And it's called surface wave actuation. And the goal is to move the bedridden patient in two directions, x- and y-directions, without lifting.

So the surface must have some kind of functionality, so that the patient can be moved in any direction. In this case, we embed some kind of actuators in here, and the surface generates a certain motion. In this case, it goes along this kind of circular trajectory, just like a water surface wave, and with some phase difference you create this kind of traveling terrain. A body resting on it makes contact at this point, where the tangential velocity is in this direction.

Now, a natural wave like that is not so efficient and is very difficult to realize mechanically. So instead, we made some changes to it: we put a kind of little notch here on the membrane -- we call this the extender -- and then place the object on top of it.

What is interesting here is that, with this device, when the actuators move out of phase, the extender rotates about this point, creating additional velocity at the tip of this arm.
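As a rough back-of-envelope on this transport principle -- assuming, purely for illustration, that each surface point follows a circular trajectory of radius A at drive frequency f, so the crest where the body makes contact moves at about A times 2*pi*f -- the transport speed comes out as follows. The amplitude and frequency are guesses, not parameters of the actual device:

```python
import math

# Rough back-of-envelope for the wave-transport principle described above:
# if each surface point follows a circular trajectory of radius A at frequency f,
# the tangential speed at the crest, where the body makes contact, is roughly
# v = A * 2*pi*f. Amplitude and frequency below are illustrative guesses only.
amplitude_m = 0.005     # 5 mm assumed extender tip amplitude
frequency_hz = 3.0      # 3 Hz assumed drive frequency
speed = amplitude_m * 2 * math.pi * frequency_hz
print(f"approximate transport speed: {speed * 100:.1f} cm/s")  # about 9.4 cm/s
```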

And of course, no one wants to stay lying on a bed of nails like this, so we put an additional membrane on top of it and pressurize it, creating a very soft surface while the patient is lying on it, sleeping for a long time. In case we need to move the patient, to give some stimulation or change the posture, we depressurize this chamber, and then these extenders come out.

And I have a second video, which is interesting. Let me just try it. Whoops. Have a different-- Ah, it is gone. Ah, this is Microsoft, so we can't blame--

AUDIENCE: Not designed by accident.

HARRY ASADA: Yeah, right, not yet. But this one-- excuse me, this guy-- whoops. Well, we have a bunch of little fingers in here. They are moving in a certain way. And what you see here, the Jello, is a very fragile object being transported. Actually, bedridden patients have very fragile skin. So amazingly, this can be transported very gently.

It's actually moving. And we can move it in any direction. And interestingly, beneath this particular mat-- oh, this is actually a 50 load we can transport in this way. Each element of this is just moving in an up-and-down motion, just a one-directional motion. But we have a bed of actuators beneath this. Now, let me close it.

Well, currently, the size is just right for transporting a little baby, like this. But how can we actually extend it to the size of a full-size bed? Well, first of all, we need to introduce, at this point, brand-new actuator technologies, to make so many little motions on the bed.

We know this upper part is working very well. And now, we are working on such actuators. One of the technologies we are using is basically putting the i-Coin in here. By the way, the i-Coin has PWM generators in it, and it has the ability to control this once you add a simple built-in power stage.

So these are sort of smart motors. Everything is there. And basically, we can eliminate the heavy wiring. We tried to eliminate all the wires here and actually hang everything on the power bus line. We superimpose the signals on it, so that these devices can be coordinated in a certain way.

Well, of course, more important applications may be automobiles or some other places where so many motors are being used. And, as a matter of fact, handling the heavy harness, cable after cable, is [INAUDIBLE] a very difficult job. So if we can actually put them together and put everything on one cable -- I mean, the 12-volt power line and nothing else -- that pays off.

We're actually working on this kind of thing. I find that, even in traditional mechatronic system design, putting the communications and networking considerations into the early stage of system design turns out to be very important when dealing with such complex systems having many, many actuators. But the traditional approach is too costly, so we should develop other technologies for doing so.

Now, let me just move on to wearable health monitoring systems, which have been a major part of the project. Back in '96, we proposed this kind of system, having a ring sensor monitoring the vital signs of patients and sending the data to some place where someone is watching it all the time.

Now, we got patents on these ring sensors, which basically monitor the pulse and the oxygen saturation level. Every year, we developed new ones, up to just now, and they are very reliable. Also, in the course of that, we succeeded in measuring the blood pressure without using a cuff.

This is the traditional method of measuring the systolic and diastolic pressures. But we instead use very different technology. We're using a sort of sensor fusion technique with Kalman filtering to get the blood pressure from a combination of little sensors. In this case, two ring sensors, plus perhaps an EIP, an Electrical Impedance Plethysmograph, can reconstruct the blood pressure when these three measurements are taken at the same time.

We did some benchmarking with an FDA-approved tonometer, and our Kalman filter sensor fusion technique can create as good a waveform as those FDA-approved devices. And this is not just the high pressure and the low pressure; it's the complete waveform that can be estimated, which provides a very rich variety of information, such as the cardiac contraction strength and the peripheral vascular resistance.
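The talk does not give the actual filter model, but the general idea of Kalman-filter sensor fusion can be sketched generically: several noisy sensors observing the same underlying waveform are fused into a single, lower-noise estimate. Everything below -- the synthetic waveform, the constant-velocity state model, and the noise levels -- is an illustrative assumption, not the group's cardiovascular model:

```python
import numpy as np

# Generic Kalman-filter sensor fusion sketch: three noisy sensors (standing in
# for the two ring sensors and the impedance plethysmograph) each observe a
# synthetic pressure-like waveform; a constant-velocity Kalman filter fuses them.
rng = np.random.default_rng(0)
dt = 0.01
t = np.arange(0, 2, dt)
true_wave = 100 + 20 * np.sin(2 * np.pi * 1.2 * t)       # synthetic "pressure"

sensor_noise = np.array([4.0, 6.0, 8.0])                  # per-sensor std dev (assumed)
measurements = true_wave[:, None] + rng.normal(0, sensor_noise, (len(t), 3))

# State x = [value, rate]; simple constant-velocity process model.
F = np.array([[1, dt], [0, 1]])
Q = np.diag([0.1, 10.0])                                  # process noise (assumed)
H = np.array([[1.0, 0.0]] * 3)                            # each sensor sees the value
R = np.diag(sensor_noise ** 2)

x = np.array([100.0, 0.0])
P = np.eye(2) * 100.0
estimates = []
for z in measurements:
    # Predict.
    x = F @ x
    P = F @ P @ F.T + Q
    # Update with all three sensors at once.
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(2) - K @ H) @ P
    estimates.append(x[0])

rmse = np.sqrt(np.mean((np.array(estimates) - true_wave) ** 2))
print(f"fused RMS error: {rmse:.2f} (best single-sensor noise std: {sensor_noise.min():.1f})")
```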

Now, the hardest job in putting sensors on humans is basically the motion artifact. In this ring sensor case, we have to hold the device gently, so that we do not interfere with the blood circulation here. It took some time to solve this problem.

And then we came up with this double-ring construction, having all the heavy stuff on the outer ring. The inner ring holds only the sensor unit, which is extremely lightweight. So even when you move the hand, the inertia force created on it is F = ma, and since the mass is so small, the force acting on it is pretty much negligible.

As a kind of axiomatic approach, we basically decoupled the functionalities, and we solved multiple problems. In case you actually bump the sensor against something -- because no one is supervising this kind of measurement; the patient just moves around doing daily jobs and may touch it someplace -- the force acting on it is basically shielded by the outer ring and does not affect the internal force.

We also did some finite element analyses -- again, a mechanical engineering problem -- and derived a precise model of the tissue mechanics, of how these signals are really measured and how we can improve the performance.

We did benchmarking with the EKG and the PPG, FDA-approved devices. And these ring sensors show no significant difference in terms of pulse compared with the EKG and other FDA-approved photoplethysmographs. Even when we move it or touch it, we get consistent data. And everything is designed for low power consumption. It will run continuously for 360 days.

Now, what is the missing link? We have developed quite a few things, but, still, it takes some time for this to be used in the real medical community. The missing link is basically the lack of a protocol: how it should be measured, how it should be monitored, and how the data should be used.

Well, this continuous monitoring concept is new. And actually, no really useful devices and gadgets have been available. Therefore, when we ask doctors, they don't have a clear idea how to really use it. Many of them really like it. But when it comes down to specific questions about the protocol, it is a different story.

Now, the new device, with the i-Coin, has bi-directional communication abilities. So we are planning this kind of field test, exploiting IT. Here's the device on the patient side. The data is collected primarily by this PDA, and the signal goes to a website, which is monitored by the doctor's office.

We are deploying about 100 ring sensors at various locations. And we really would like to see how the doctors use them. As a matter of fact, we provide a large selection of options of different measurements and protocols, and the doctors make decisions as to which ones to take.

And that may require some change on the ring side; perhaps a program needs to be changed. But that program can be downloaded from the website. So although we have hundreds of i-Coins in the field, we can remotely change the programs and change the way we collect the data.

And what I would like to do, at MIT, is a kind of double plot: we actually monitor -- tap -- all the information sent between the patient side and the doctor's office, to see how the doctors are using this and to explore new ways of monitoring cardiovascular activity in a continuous manner.

Well, so collecting data, everything is easy--

SETH LLOYD: You just got a few minutes.

HARRY ASADA: OK. --so now vast amounts of information are coming out. But the problem is that physical systems are highly coupled. Natural systems, basically, are highly coupled. For design, we should be able to decouple them, but, unfortunately, many physiological systems are coupled. For instance, in the cardiovascular system, blood pressure changes depending on the state of respiration, and many things may change together.

Now, no single doctor can cover all of these areas, so we tried to find some way of developing technology to put all the pieces of knowledge together. In particular, each system has a simulator, and an expert developed and coded each of these simulation programs.

And if we have some way of running all these together -- exchanging the parameters and the variables among them -- we can study the coupled behavior of such systems. And this is the co-simulation environment.

And I don't want to go into the details, but besides the computers and the software programs, the hardest part of the problem is really the dynamics part. You have two sub-simulators coded separately, but if you try to put them together to solve the coupled problem, the state variables are no longer independent; they have to conform to some boundary conditions. And you can't reduce or eliminate these variables when the models are coupled and nonlinear.

So what you need to do is basically use the existing codes and run them together. For instance, this is a building energy system. Many vendors bring in different devices -- in this case, an evaporator or a heat exchanger. They have pretty good knowledge about these, and they have simulators for them.

But when you try to put them together, to see how they work together -- for instance, the mass flow rates must be consistent, and the pressures at these three points must be the same -- that actually poses a sort of algebraic constraint.

It turns out the system becomes a set of differential algebraic equations. And if we have some way of solving this kind of problem effectively, that actually makes things much easier. We solved this from a control theory standpoint, viewing these differential algebraic equations as a feedback process: the algebraic constraints, which are to be driven to zero, are treated as an output function, and we close the feedback loop so that this output is driven to zero.

And we designed this feedback control very rigorously, using control theory such as sliding mode control. This gives precise error bounds and stability conditions, rather than just trial and error.
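A toy sketch of that feedback idea -- not the actual method or models from the talk, and using simple proportional feedback rather than sliding mode control: two hypothetical sub-simulators share an interface variable, and the algebraic constraint between them is treated as an output that feedback drives to zero:

```python
# Toy co-simulation sketch: two separately coded sub-simulators share an
# interface variable lam, and the algebraic constraint g = x1 - x2 = 0 is
# treated as an output driven to zero by a feedback law. All dynamics and
# gains are made up for illustration.

def sub1_rate(x1, u1, lam):
    return -x1 + u1 + lam           # vendor A's model (hypothetical)

def sub2_rate(x2, u2, lam):
    return -2.0 * x2 + u2 - lam     # vendor B's model (hypothetical)

dt, k = 0.001, 50.0                 # step size and constraint-feedback gain
x1, x2 = 1.0, 0.0                   # inconsistent initial interface states
u1, u2 = 0.5, 0.2
for step in range(5000):
    g = x1 - x2                     # algebraic constraint, should be driven to 0
    # Choose lam so that dg/dt = -k*g (feedback on the constraint "output").
    lam = (-k * g - (-x1 + 2.0 * x2 + u1 - u2)) / 2.0
    x1 += dt * sub1_rate(x1, u1, lam)
    x2 += dt * sub2_rate(x2, u2, lam)
    if step % 1000 == 0:
        print(f"t={step * dt:.1f}s  constraint error g={g:+.4f}")
```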

Of course, these things are very important nowadays. People are talking about e-commerce. They have big supply chain systems and have to make decisions as to which components are usable in their systems. And they do it largely by using simulators.

They are no longer using data charts and such; they use simulators. And overnight, they have to make quick decisions. So when you get a simulator from a supplier, you must be able to co-run it with the simulators for the main part of your product, such as the main body or the engine control system. You have to make quick decisions, and you can't change the original code. You have to run it all together.

So the way I see it, IT is a nice catalyst, integrating the ME disciplines. Traditionally, in our department, we've been using energy as an interdisciplinary quantity to describe all these fields in a unified way. Now, I think, from my point of view, information is playing a similar role. And extending that, information is the most fundamental common factor that actually integrates all of this. And I think that is the kind of direction we are looking at.

And just to acknowledge: most of this work has been done under the sponsorship of the Home Automation and Healthcare Consortium and three grants from the National Science Foundation. And I acknowledge many of my colleagues in the d'Arbeloff Lab, who brought very different types of expertise. We learned a lot from them. Thank you very much.

SETH LLOYD: Thank you very much.

[APPLAUSE]

Harry took the medal for grace under the pressure of Windows 2000. Sorry. Well, in the interest of time, because we're now running quite behind schedule, I should say we'll defer questions. Ask Harry questions at the break or lunch.

Our next speaker is Sunny Siu. Those of you who know Nam Suh know that he can be quite persuasive. He persuaded me to leave physics and join the mechanical engineering department. Sunny Siu was already a well-established professor of electrical engineering when Nam exerted his persuasive powers on Sunny.

Unlike Al Gore, Sunny Siu did not invent the internet. However, unlike Al Gore, he did invent a key part of what is going to become the next generation of the internet, ATM, or Asynchronous Transfer Mode. And for what's going to be coming in the internet over the next 10 years, Professor Siu has played a key role in making it possible. So are you all set there, Sunny?

KAI-YEUNG SIU: Yeah.

SETH LLOYD: Great.

KAI-YEUNG SIU: So while we're setting up, I just want to mention -- I didn't know about that. He said I didn't know anything about the internet-- sorry, about mechanical engineering when I interviewed, I guess, six years ago.

SETH LLOYD: Yeah.

KAI-YEUNG SIU: Yeah. But when I interviewed, I definitely told the search committee that I knew a lot about mechanical engineering in order to get into the mechanical engineering department. Of course, I've been learning a lot about mechanical engineering since I came to MIT more than four years ago.

So what I'm going to discuss today-- I have to apologize, because I have had a really serious flu for the past couple of days-- is the next generation internet technologies. I guess it's not an exaggeration to say that the recent information revolution has a lot to do with the internet. And, of course, I guess the bull market in the stock market has a lot to do with the internet, too.

And I'm also teaching an undergraduate course. I guess it is the first course offered by MIT at the freshman/sophomore level on the internet. And when I teach the class, I usually like to tell the students that there are really three key elements of the internet.

First is the applications: the well-known World Wide Web actually spawned the widespread use of the internet. And sooner or later, we'll all be talking over the internet. Currently, you can actually get some software which allows you to have free long distance calls, but the quality is still not very good. In the future, the quality will get better and better, and very soon we'll actually have free long distance calls or flat-rate long distance calls, nationally or even internationally.

And in the very near future, we will see a lot more other applications of the internet -- for example, networking physical objects using the internet. And Professor Sarma will be talking about that.

He will be giving a talk in the afternoon on the use of the internet to network physical objects. That is a very recent project that we have started at MIT, funded by a number of industrial companies. And it has a lot of applications and impact on manufacturing, inventory control, as well as supply chain management.

Another key component of the internet is network protocols. I guess most of you use the World Wide Web and use the Netscape browser or Internet Explorer to browse the web. And the HTTP protocol that you use actually runs on top of TCP/IP. And many other data applications use TCP/IP.

And then, at the lower layer, are the link and physical layer technologies, including your cable modem technology, ADSL for broadband access, as well as the other technology, called asynchronous transfer mode, that Seth mentioned before. Also, we are seeing the WDM optical technologies coming up and being used in the access area, too.

So let me go into a bit more detail describing all these different technologies. And before that, I just want to tell you about some key challenges that we face in developing the next generation internet.

First of all, the internet is about scalability. In the very near future, we will see the internet being used to connect maybe billions of computers and embedded processors. And further in the future, we'll see that the internet is going to be used to connect physical objects. That might mean linking even trillions of items -- every piece of physical item that you can imagine.

And the second challenge is efficiency. Currently, we use fiber optics, which can provide a lot of raw bandwidth. The challenging issue is how you can turn all this raw capacity into usable bandwidth.

Meaning that you can provide end-to-end, user-to-user performance in the range of gigabits or even terabits per second. Currently, terabit per second capacity is already available in backbone technologies, linking very big backbone routers. But most users cannot even get multi-megabit per second to their desktop.

And the third issue is about quality of service. What that means is the current internet is based on the best-effort service paradigm, in the sense that, when you try to visit a website, for example, sometimes you really have to wait for seconds if not minutes, because you have no control of the current congestion status in the network.

And if you talk about futuristic applications, like telerobotics, telesurgery, you require really very tight control of the bandwidth as well as the delay. So that's what we mean by the quality of service. And today's internet doesn't have that.

And, of course, in the very near future, we will see the convergence of cellular phones and internet devices. Currently, if you buy a cell phone like this from Sprint PCS, it offers you only about 10 kilobits per second, because the existing cellular technologies are optimized for voice communications but not for browsing the web or downloading big files. And the goal in the next generation internet project I am involved with is to increase the rate by 1,000 times, to the multi-megabit per second range.

The last but not least challenging issue is interoperability. The telecommunication carriers have invested billions of dollars in the existing infrastructure. It would be impossible to tell them to really abandon all this existing infrastructure and then go for new technologies. So whatever new technologies we come up with must be backward compatible with the existing infrastructure. And that's another challenge.

As a matter of fact, when you think about TCP and IP, they are actually legacy protocols. But they have worked very well over the past 30 years. So the idea is how you can improve those protocols to work on different platforms and heterogeneous technologies.

So let me just talk about one of the very exciting new technologies, an optical network technology called wavelength division multiplexing. This is a research project that I have been involved with, along with Lincoln Laboratory at MIT and a number of other companies, including AT&T, Nortel, and JDS Uniphase.

This is an optical technology that enables very cost-effective terabit per second bandwidth in backbone optical networks. And the ongoing research that we have in this next generation internet project, funded by DARPA-- it's about a $16 million project that we have at MIT, which goes on for another year or so-- is to develop very efficient WDM architectures and protocols to support terabit per second even in the access area.

Access area means the last mile that connects your corporate router to the central office in a local carrier. Again, you have lots of bandwidth in the backbone but very limited bandwidth in the access area. And our goal is trying to really increase the bandwidth even in the access area.

So let me explain a little bit about how this technology works without going into much detail. To just oversimplify a little bit, the technology allows you to divide the spectrum, in the optical fiber, into multiple channels. In current technology, you can actually divide the optical spectrum into maybe 100 channels now, with each channel running at about two gigabit per second.

So immediately, you already have 200 gigabit per second in the fiber, easily. In the future, it may go up to a few hundred channels, with each channel going at 10 gigabit per second. So then you can have multi terabit per second.
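A quick back-of-envelope check of those numbers (taking "a few hundred channels" as 300 purely for illustration):

```python
# Quick check of the WDM capacity figures quoted above.
channels_now, rate_now_gbps = 100, 2
channels_future, rate_future_gbps = 300, 10   # "a few hundred" taken as 300 for illustration
print(channels_now * rate_now_gbps, "Gb/s per fiber today")               # 200 Gb/s
print(channels_future * rate_future_gbps / 1000, "Tb/s per fiber later")  # 3 Tb/s
```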

So the reason why this is such an attractive technology is that it offers you very low cost, because many of the components you use are passive components. Again, to oversimplify, you can think of them as just mirrors and prisms. So it can bypass the electronic bottleneck.

Currently, of course, optical fiber is used to connect electronic switches. But every time, your signal has to be converted from an optical signal to an electrical signal and then gets processed by these routers. So we're already hitting the electronic performance bottleneck.

But by replacing those electronic switches by optical components, such as mirrors and prisms, you don't have the electronic performance bottleneck. So you switch at a much faster rate, at terabit per second.

The challenge is how we can actually scale these kinds of technologies, so that millions or even hundreds of millions of users can access the system at a very low cost, how we interoperate with the billions of dollars of existing electronic infrastructure that the carriers have invested in, and how all the existing TCP/IP protocols work over these optical technologies.

And this is the WDM-based architecture that we have designed in this next generation internet project funded by DARPA. In the upper left-hand corner is the backbone network, for example, this. And this part is usually referred to as the access area. The access area could be, for example, the Boston metropolitan area. Each of these nodes is called an access node, and they are usually connected in a ring fashion; that's usually referred to as a feeder network. This ring could be about 50 kilometers in diameter.

And connected to all these access nodes are what we call the distribution networks. They could be arranged in a tree-like fashion or a ring-like fashion. And each of these little dots could be, for example, the MIT campus router or one at Harvard. And the idea is we want to provide terabit per second even in the access area.

Currently, I guess most of you use the local area network, which is about 10 megabits per second. And what we want to do is provide very high bandwidth to the desktop, so that you can really receive all kinds of signals -- data, video -- on the [INAUDIBLE].

And there's also a trend in the convergence of the television with the PC, so that you can actually get all sorts of information -- video, video-on-demand, everything -- at a very low cost, using a single infrastructure, rather than receiving your voice from the telephone network and your broadcast TV signals from a separate cable infrastructure. Again, this is about lowering cost.

So I already mentioned that, in the distribution area, the reason why optical technology is so attractive is it's passive. It doesn't require power. Power is really needed only at the end station as well as to the central office.

One of the challenges is that internet traffic, unlike voice, is very bursty. When you browse the web, most of the time you're idle after you download a page. So it's very bursty in nature, and it's very hard to predict the traffic. And that causes a lot of challenging issues in using optical technologies, because optical technologies are circuit switching in general. That means you establish a light path between the end users.

And my research focuses on how we can share those wavelengths or channels very effectively, in the sense that, if you have 64 wavelengths or channels to be shared by, let's say, 2,000 buildings, how are you going to share them in a very efficient manner, under the assumption that the internet traffic is very bursty? So this is a very challenging issue that we have been looking at.
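As a back-of-envelope illustration of why bursty sources can share a small number of channels -- assuming, purely for illustration, that each building independently demands a channel 2% of the time:

```python
# Rough look at the wavelength-sharing question: if each of 2,000 buildings is
# actively demanding a channel independently with probability p at a given
# instant, how often do more than 64 of them want a channel at once?
# The activity probability p = 0.02 is an illustrative assumption.
n, channels, p = 2000, 64, 0.02

pmf = (1 - p) ** n            # P(0 active); then walk up the binomial pmf
cdf = pmf
for k in range(channels):
    pmf *= (n - k) / (k + 1) * p / (1 - p)
    cdf += pmf
print(f"mean active = {n * p:.0f}, P(more than {channels} active) = {1 - cdf:.2e}")
```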

The idea is -- currently, your cable modem at home, if you use a cable modem, offers you about a megabit per second. What we tried to build in this optical project is another modem, but we call it an optical modem, that can offer you a gigabit per second -- a modem 1,000 times faster that can be connected to your PC.

Now, let me switch gears to another major area of my research, which is in the wireless area. So far, we've talked about wireline technology, using optical fiber and WDM technology; another major area of my research is next generation wireless technologies.

And this is a joint project with NTT DoCoMo. For those of you who don't know NTT DoCoMo, it is the major mobile operator in Japan. It's a spin-off from the parent company, NTT, but it is now a much larger company than the parent company. It is the fourth largest company in the world in terms of market cap. And they are really very advanced in wireless technology, a major player behind the standardization of the third generation system.

I would like to give you, maybe, a historical overview of how the wireless system evolved. In the mid to late '80s, we had the analog system, which we refer to as the first generation system. Actually, if you are a subscriber to Cellular One, some of the phones are still based on analog, again because there's already a lot of money invested in that analog infrastructure. And right now, you can get very low cost service if you use analog technology.

And in the early '90s, we saw the second generation system, which is based on digital technology. That's, for example, GSM and the CDMA technology used in the USA. They offer you about 10 kilobits per second. The reason we only need such low bandwidth in these systems is that most of the communications, or the traffic, involve only voice. And voice is very predictable; it is a kind of very predictable-bandwidth traffic. And compressed voice only requires about 10 kilobits per second.

And in the emerging third generation system, it is based on what we call the wideband CDMA technology. The standard is called IMT-2000. And the third generation technology will be rolled out in Japan next year. In the US, it probably will take another two years before we see this technology.

It promises to offer about 300 to 400 kilobits per second for a mobile user. If you're driving in a car, this is probably what you'll get, about 300 kilobits per second. And that will allow you to have pretty good bandwidth in order to browse the web.

The reason why they came up with this number, 300 kilobits per second, is that a typical web page, with some graphics in it, is actually about 300 kilobits. And they calculated that a human's response tolerance is about one second: if downloading a web page takes more than one second, you'll feel impatient. So that's how they designed the technology at the beginning, with 300 kilobits per second. And that will combine voice and a limited amount of video that you can see using your cell phone or handheld device.

And what I've been working on with NTT DoCoMo is the even more futuristic, future generation -- the fourth generation. In that technology, we're trying to think of a totally different kind of technology, which is not based on the current cellular system. It will be purely IP-based -- Internet Protocol based technology, packet switching.

Because the bandwidth in the wireless system or the radio spectrum, unlike the optical spectrum you have in the fiber, is very, very precious because of the noise in the atmosphere. Whereas, in the optical fiber, we can have easily terabit per second. In the wireless spectrum, it will be very difficult to even get megabit per second. So a big difference, a million times difference in terms of the efficiency.

And there, we want to get up to 2 megabits per second or even 10 megabits per second. In that case, you can actually even view high-definition TV on your handheld device. That's really the goal in the future. And they are already planning, in 2006 or '07, to roll out this service. And again, the third generation is based on wideband CDMA, whereas this goes beyond the third generation technology.

Another issue has to do with convergence with the backbone networks. If you really want very high end-to-end performance, the wireless spectrum, or the wireless network, really only represents the last mile technology.

Because fiber will be built. Fiber networks will be built. Optical networks will be built. We should make use of the fiber, the enormous bandwidth available in the wireline network and use the wireless part as, really, the last mile connection.

So now let me get back to what can be done currently. I'm going from the future generation, now, back to the present. Currently, you can actually buy what we call the WAP phone-- WAP stands for the Wireless Application Protocol-- from a carrier, which allows you to actually browse the web over the existing second generation technology.

This is the current World Wide Web paradigm in the wireline network. This represents the desktop PC. It's a very simple client and server technology. You send a request URL. You visit a website. The website may run some CGI script and return a response to your desktop PC-- a very simple client-server paradigm.

The WAP, or Wireless Application Protocol, paradigm is a little bit different. Now, you have a handheld cellular phone, and this is over the wireless network. What is different is that now you have a WAP gateway, owned by the wireless carrier, in the middle. This part is like the existing HTTP; this is the current internet protocol.

But what is different is that, over the wireless part, they have had to change the current TCP/IP standard in order to work over the unreliable and low-bandwidth communication network in the wireless area. So now you have to have a proxy in the middle to forward your HTTP request to the web server.

And there's also one distinct function that doesn't exist in the current World Wide Web. It is the push function. Currently, your web browser works in the pull paradigm. Whenever you want to get information, your PC actually pulls the information from a web server.

But in the WAP network architecture, we actually have a push paradigm. That means the server can actually push information to your cell phone. That creates a lot of issues, even involving economics, because, if you have a cell phone, if some of the advertisers want to push information to your cell phone, who will pay for the airtime cost? So all this has not really been sorted out yet.

But there is a big market now in the wireless internet, and people are trying to rush into the area very quickly. And I just want to mention, quickly, that there are a lot of business opportunities. You can see a lot of recent IPOs of wireless internet startups, because there are a lot of opportunities.

The existing World Wide Web is built on HTML. But if you want to browse the web using your cell phone, you have to reformat all the web pages into a different language called the Wireless Markup Language. So a lot of these companies are doing this kind of conversion to help companies migrate to the wireless platform.

I don't want to go into too much detail now, but I just want to conclude with two more slides. This is the current PC application paradigm of the internet. You have a PC connected to the relatively slow internet, up to maybe a few hundred kilobits per second. And every so often, every two years, Microsoft will ask you to upgrade your operating system. So you're paying a lot, because you have to keep on upgrading software to deal with all the new applications.

And in the future, we'll have the thin client, broadband internet model. This is what we call the application service provider model, in the sense that you no longer need to really upgrade your software maybe every one or two years. The upgrading of the software will be done at an ASP server. What you really need is just a Netscape browser or an Internet Explorer browser -- what we call a thin client. Most of the processing is done at the server level.

Of course, besides a thin client PC, you may have a PDA, a laptop, and a cell phone all connected to this broadband internet. In a sense, this broadband technology allows us to see the whole internet as a local area network. Even though the server may be across the nation, it will actually seem almost as if it were in your local office, because of the broadband internet.

So that's a shift in the paradigm. And even Microsoft is seeing this model as competing with their really heavyweight PC model. So we are seeing a lot of changes in the internet industry.

Now, let me conclude by saying something about work that you'll hear a lot more about in the afternoon talk given by Professor Sarma: how to network physical objects. This is the ultimate vision of what we're calling the thin client. So instead of talking about cell phones, we're now talking about a bar code -- but, of course, not the existing bar code.

It's a chip that costs only a penny, to be embedded in any piece of consumer goods that you can imagine -- for example, a Tide box manufactured by Procter & Gamble, actually a sponsor of this work. So in the future, a lot of these existing bar codes -- the Universal Product Code, the UPC bar code -- will be replaced by a tag, an electromagnetic tag, that you can sense or read without actually scanning it.

With the current bar code, if you go to the supermarket, you have to wait in the long queue at the checkout counter, and every piece has to be scanned with line of sight. With these new technologies, we don't have to scan them directly anymore.

It's as if you have a bunch of objects tagged by this low-cost tagging technology, and a detector -- you will have a PDA or a cell phone -- and you can sense everything around you without actually scanning it. And this is happening. And let me show you -- maybe I can show this.

This is a chip -- you can even see it in the middle. This is a business card given to me by Motorola, our sponsor. In the middle is something like a two-millimeter chip. That is going to replace the bar code. And right now, it costs about $0.02. It has 1,000 bits in it. Do you know how many objects you can tag with 1,000 bits? With 350 bits, you can actually uniquely identify every single atom in the universe. Am I right?

SETH LLOYD: Yeah.

KAI-YEUNG SIU: So with 1,000 bits, you can really last a long time, I guess.

AUDIENCE: If you can find a place to stick them.

KAI-YEUNG SIU: Yeah. And this is very low cost. Every day, there are 3 billion objects being scanned. And now, with this tagging, you can imagine there will be trillions of objects sensed maybe every hour. So this is going to create the next revolution in the internet.
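A quick sanity check on the earlier remark about tagging atoms, taking the commonly quoted figure of roughly 10^80 atoms in the observable universe as the assumption:

```python
import math

# How many bits does it take to give each of ~10^80 atoms a unique label,
# and how many objects can a 1,000-bit tag distinguish?
atoms = 10 ** 80
bits_needed = math.log2(atoms)
print(f"about {bits_needed:.0f} bits suffice for ~10^80 atoms")            # ~266 bits
print(f"a 1,000-bit tag can label ~10^{1000 * math.log10(2):.0f} objects")  # ~10^301
```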

The internet will be used not just to retrieve information but to actually connect physical objects and obtain physical information. And I guess you can already see a lot of applications in manufacturing, inventory control, and supply chain management -- a really key impact on next generation e-commerce. And I don't want to really talk too much, because Professor Sarma will be talking a lot more about this in the afternoon. And I would like to stop here and maybe take one question.

SETH LLOYD: Sure.

KAI-YEUNG SIU: All right.

[APPLAUSE]

SETH LLOYD: We are quite behind time, and my talk is next, so I'm going to suffer for it. But let's take a question, while we're setting up for the next talk.

AUDIENCE: In the '60s, Thomas Watson, who was, at the time, the head of IBM, predicted that, in the future, there would be only two or three major computers in the world, right? Then, in the '80s, the personal computer became popular, so this statement became the subject of much ridicule, as an example of how famous people can falter when they predict the future.

But now, when you look at the internet, it begins to appear that the computers that we have on our desks are actually an appliance. They're not so much computers anymore. The actual computing is distributed. So do you think that actually, after all, Watson was right?

KAI-YEUNG SIU: Well, as you say, we'll see that now, as my last slide shows, right? Right now, a lot of the computing power-- if the ASP model is right, we'll be pushing all the computing power back to a centralized computer. Because the server is getting more and more powerful, whereas, all the devices that you want to wear or maybe distribute around this room will get less and less computational power. And maybe they could be passive.

By the way, I didn't mention that the tag I showed you, manufactured by Motorola for only $0.02, is passive -- no battery. A lot of the power it actually gets from the air.

So to answer your question, I don't have a crystal ball in front of me. So I cannot tell you the future. But I will have to say that, in the future, everything will be distributed. And there will be a shift in paradigm, a continuous shift in paradigm back and forth. It depends on the advances in technology.

SETH LLOYD: Well, I think in the interests of actually getting to lunch on time, since you probably don't want to eat cold food, why don't we turn to the next speaker. And you should ask Professor Siu more questions at that time.

[APPLAUSE]

The next speaker is me. I'm Seth Lloyd. I'm the Finmeccanica Career Development Associate Professor of Mechanical Engineering. I don't know why they gave me the longest title in the whole department.

AUDIENCE: Is that the longest?

SETH LLOYD: I don't know. Finmeccanica is a long word. I almost never can say it. So I would like to talk to you about quantum computers. Now, who here has heard of Moore's law? Who here has not heard of Moore's law? OK, a couple of people.

Moore's law is this law that was proposed by Moore, the chairman of Intel in the '60s. He pointed out that the size of the components of computers and, hence, the power of computers-- the size of components was going down by a factor of two every two years. And as a result, the power of computers was going up by a factor of two every two years.

If you extrapolate this, this means that, every 20 years, you go down by a factor of 1,000 in size. As Papken der Torossian pointed out, the current scale on computer chips is 0.1 micron. That means that, if we go on for another 20 or 25 years, we'll actually have computers that are at the scale of atoms.
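A quick check of that extrapolation, taking the 0.1 micron (100 nanometer) figure as the starting point and roughly 0.1 nanometer as the atomic scale:

```python
# Feature size halving every two years, from the 100 nm scale mentioned above
# down to roughly the size of an atom (~0.1 nm).
feature_nm, atom_nm = 100.0, 0.1
years = 0
while feature_nm > atom_nm:
    feature_nm /= 2
    years += 2
print(f"about {years} years to reach atomic scale")   # roughly 20 years
```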

Now, it's not clear if we're going to get there, because Moore's law is a phenomenological law about human ingenuity. It's not a law of nature. But if we are going to get there, we're going to have to use quantum mechanics. And that's what I'm going to tell you about.

So who here has heard of the Heisenberg uncertainty principle? Who here has not heard of the Heisenberg uncertainty principle? See, it's even more famous than Moore's law. The Heisenberg uncertainty principle is how quantum mechanics tells you the limits on how well you can measure things and on how fast you can do things at the quantum scale.

And the devices that I'm going to describe to you actually operate at the limits that are given by the Heisenberg uncertainty principle. So quantum computers are where Moore's law meets the Heisenberg uncertainty principle.

[APPLAUSE]

Well, I don't think it deserved that. Luckily, I'm using actually archaic technology to do this, so I will actually be able to cut my talk in half. Now, what are quantum computers? By the way, I was thinking, when Nam pointed out that, during my interview, I said I knew a lot about mechanical engineering, and I was trying to think what he was talking about. Then I realized that I actually did say that I did know a lot about quantum mechanical engineering.

In fact, you can think I am, in fact, a quantum mechanical engineer, because what I really know about is how to make things happen at small scales. And that's what I'm going to tell you now, how to make things happen when the systems that you are trying to make, that you're trying to engineer are quantum mechanical.

What are quantum computers? They're devices. Well, they're computers. They process information. They store information on individual atoms, nuclear spins, or photons. That is to say at the quantum scale. They operate at the physical limits of speed, dissipation, and memory. They operate at the limits that are determined by the Heisenberg uncertainty principle. And finally, they preserve information-- sorry, they process information in a way that preserves quantum coherence. What does this mean? Well, I'll tell you in just a second.

So one of the main things you've got to remember about quantum-- who here, by the way, has ever taken a course in quantum mechanics? See. I mean this just shows you that the physics-based paradigm for engineering really is a physics-based paradigm.

So now those of you who took courses in quantum mechanics know that quantum mechanics is weird, right? Quantum mechanics is weird. Strange stuff happens. And down at that microscopic scale, things do not behave in the same way they do at the macroscopic scale.

So if I tell you some stuff and it sounds weird and you don't quite understand it, that's good. Niels Bohr once said, anybody who can contemplate quantum mechanics without getting dizzy hasn't properly understood it. So if you're feeling dizzy, that's good. It doesn't mean you're actually understanding it, right? It just means that it's good. If you're not feeling dizzy, that's bad.

However, I can tell you about all the quantum weirdness you really need to know to understand quantum computation, in a nutshell, on just one transparency. And it goes under the name of wave-particle duality.

So what quantum mechanics says is that stuff that we normally think of as waves, like sound or light, is actually made up of little tiny particles. Light is made up of photons. Sound is made up of phonons. So if you look at a very, very, very, very, very faint light, and you have a detector, a photodetector for detecting light, rather than just giving you a continuous signal, at some point, it will start just giving you little clicks. And each click is a photon, a light particle hitting that photodetector.

Similarly, stuff that we think of as particles, like atoms or, for that matter, basketballs, actually corresponds to a wave. So if I pick this thing up in the air and throw it up there, not a very good particle, but it's still-- this clip has a quantum mechanical wave that's associated with it.

And that means that particle-like properties, like position, for instance, are sort of smeared out when you get down to the quantum scale. Because there's a wave-like nature that's associated, say, with the position and momentum of this clip as it goes flying through the air.

Now, one thing you've got to remember. So that's fine. That doesn't sound so weird, except that it is kind of weird. Because the important thing here is that the waves in quantum mechanics, just like ordinary waves of light or waves of water or waves of sound, they can add up.

So if this is an OK wave for a quantum system, for this particle, and this is an OK wave for a quantum system, well, then so is this. So it's OK to have a quantum particle that has a wave that has support in two places at once. And a particle that's described by this is, in some weird quantum sense, in two places at once.

Now, is this OK? Is this weird? Yes. OK. Are people willing, provisionally, to accept this, even if it's weird? Who doesn't want to accept this, provisionally? Nobody. Good, otherwise I would have to tell you to leave the room. Because everything-- if you accept this, provisionally, then actually everything I'm going to tell you about quantum computation, you're going to have to accept.

So that's why I don't want, like two minutes down the road, to have people backtracking and saying, hold it. Good. So I'm going to take five more minutes, and then I'll tell you about it.

Now, in computers, we store information on electrons. We store them on capacitors, really. A capacitor is like a bucket for electrons. A whole bunch of electrons over here is a 0. A whole bunch of electrons over here is a 1. That's how computers work. Now, let's go down to the level where we're storing information on a single electron.

And an atom is actually a very nice example of a capacitor, right? You can put energy in it, excite it to an excited state. The shape of the electron cloud around the atom changes. So let's think of an atom as electron over here, ground state, registers a 0. Electron over here, excited state, registers a 1. The atom's storing energy, when you go from 0 to 1. That's fine. We're just going to call the ground state 0. And we're going to call the first excited state 1. We can do that. That's fine.

Now, the thing is it's OK for an electron-- indeed, electrons like doing this-- to be in the ground and excited state at the same time. That is, a quantum bit or a qubit can register, in some weird quantum sense, 0 and 1 at the same time.

Are there any objections to this so far? Good. People were convinced by my saying that you weren't allowed to object at this point. Great.

So what can you do? A quantum computer is a system that stores information on quantum bits or qubits. So take a quantum bit: 0 is electron over here. 1 is electron over here. 0 plus 1 is the sum of the waves. It's a physical state of the system. You add up the waves that correspond to 0 and 1, and you get some funny state that registers 0 and 1 at the same time.
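As a minimal numerical sketch of that bookkeeping, with NumPy arrays standing in for the waves (the names ket0, ket1, and plus are just labels chosen for this illustration):

    import numpy as np

    # Ground state ("electron over here") registers 0; excited state registers 1.
    ket0 = np.array([1.0, 0.0])
    ket1 = np.array([0.0, 1.0])

    # The "0 and 1 at the same time" state: add the two waves and renormalize.
    plus = (ket0 + ket1) / np.sqrt(2)

    print(plus)                 # [0.70710678 0.70710678]
    print(np.sum(plus ** 2))    # 1.0, so the probabilities still add up to one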

Now, if that's not bad enough, if I take a whole bunch of these guys, let's say I take 10 of them, and I put each one in this funny state, 0 and 1 at once, 0 and 1 at once, 0 and 1 at once, et cetera, then I ask, what's the state of all of them together?

Well, it's a wave with-- remember, every component is represented. So I've got a wave with 000000, 000001, 000010, all the way up to 111111. I have 1,024, which is 2 to the 10th, components. And my 10 quantum bits are, in some funny quantum sense, registering all the numbers between 0 and 2 to the 10 minus 1-- between 0 and 1,023, written in binary form-- at once.
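Carrying the same sketch to ten qubits, the combined wave is the tensor product of the ten single-qubit waves, with 2 to the 10th, or 1,024, components:

    import numpy as np

    plus = np.array([1.0, 1.0]) / np.sqrt(2)    # one qubit registering 0 and 1 at once

    state = np.array([1.0])
    for _ in range(10):
        state = np.kron(state, plus)            # combine the qubits one by one

    print(state.size)    # 1024, which is 2**10
    print(state[0])      # every ten-bit string gets the same amplitude, 1/32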

So what we're actually doing is, on a small number of quantum bits, a small number of electrons, we're actually registering a very large number of numbers. As Sunny pointed out, if I have 350 bits, 2 to the 350, which is around 10 to the 105, is way more than the number of particles in the universe. There are only something like 10 to the 90 particles in the universe.
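A quick check of those numbers, since Python handles integers of this size exactly:

    print(len(str(2 ** 350)))    # 106 digits, so 2**350 is roughly 10**105
    print(2 ** 350 > 10 ** 90)   # True: far more than the quoted particle count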

And this, of course, is why information is so powerful. With a small number of things, you can tag a lot of stuff, a lot more stuff than you'd ever want to tag. And in this case, in our quantum bits, we're actually able to do it all at once.

So this is kind of funky. And what can you do with this? Well, here's how you do quantum computation. You have a little gate, a quantum logic gate, which is something that, for instance, moves electrons around. I've actually done this in terms of photons, because photons are very nice things for storing information on.

A photon can wiggle this way, which I'll call 0, or it can wiggle this way, which I call 1. It's pretty easy to do this. You just take out your Ray-Bans, and you polarize the light, so it's wiggling this way or it's wiggling this way. But it can also wiggle this way. And that's a state that's 0 and 1 at the same time.

And so I can make a little gizmo. And I'll describe very briefly how you make gizmos like this. And you can do logic. You can make the photons interact with each other. Or you can make the electrons interact with each other. Indeed, nature does this for you, because nature gives you physical interactions, quantum mechanical interactions, between these systems. And you can make little logic gates.

So here is, for example, a gate, where the first guy that goes through stays the same. And the next guy that goes through gets flipped if the first guy was 1. That's called a controlled NOT gate. Because you do a NOT on the second bit, controlled on whether the first one's a 1 or not. And now you have this funky situation, where if you put 0 plus 1 in the first bit, then, well, when you've got 0 there, the second guy does nothing. And when you have 1, the second guy flips.
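Written out as a small matrix sketch (with the basis ordered 00, 01, 10, 11, a convention picked just for this illustration), the controlled NOT does exactly that, and feeding it 0-plus-1 on the first bit and 0 on the second gives a wave with support only on 00 and 11:

    import numpy as np

    ket0 = np.array([1.0, 0.0])
    ket1 = np.array([0.0, 1.0])
    plus = (ket0 + ket1) / np.sqrt(2)

    # Controlled NOT: flip the second bit exactly when the first bit is 1.
    CNOT = np.array([[1, 0, 0, 0],
                     [0, 1, 0, 0],
                     [0, 0, 0, 1],
                     [0, 0, 1, 0]], dtype=float)

    input_state = np.kron(plus, ket0)    # first bit "0 and 1 at once", second bit 0
    output_state = CNOT @ input_state

    print(output_state)    # support on 00 and 11 only, each with amplitude about 0.707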

So you can do logic. And you know what doing logic means: if you look inside a regular computer, it's just got all these little logic gates in there that take bits, one by one and two by two, and make them interact with each other. And all of a sudden, you're playing Doom.

So the mathematical way of saying this is that you can put in some input, x, and you can calculate arbitrary functions, f of x, arbitrary digital functions. Well, in quantum computation, what you can do is you can take all these inputs at once, and you can calculate f of x on all these inputs at once. That is something you cannot do on a classical computer. And that is where the power of quantum computation comes from.
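As a rough sketch of that, using the standard reversible-computing convention that the input x stays put and f(x) is XORed into a separate answer bit (the particular function f below is made up purely for illustration):

    import numpy as np

    n = 3                          # 3 input qubits, so 8 inputs at once
    def f(x):                      # an arbitrary one-bit function, just for illustration
        return (x * x + 1) % 2

    # Build the unitary that sends |x, y> to |x, y XOR f(x)> as a permutation matrix.
    dim = 2 ** (n + 1)
    U = np.zeros((dim, dim))
    for x in range(2 ** n):
        for y in range(2):
            U[(x << 1) | (y ^ f(x)), (x << 1) | y] = 1.0

    # Uniform superposition over every input x, with the answer bit set to 0.
    inputs = np.ones(2 ** n) / np.sqrt(2 ** n)
    state = np.kron(inputs, np.array([1.0, 0.0]))

    state = U @ state              # one application evaluates f on all 8 inputs
    for x in range(2 ** n):
        amp = state[(x << 1) | f(x)]
        print(x, f(x), round(amp, 4))    # each x is paired with f(x), amplitude about 0.354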

Now, is this reasonably clear? Remember, it started out, the world is quantum mechanical. Quantum mechanical things can be in two states at once. Quantum bits can be in two states at once. That means a quantum computer can do two things at once. More than that, it can do lots and lots and lots of things at once. And that's why you can do more on a quantum computer than you can do on a conventional computer.

Does this make sense?

AUDIENCE: Crazy.

SETH LLOYD: I'm sorry?

AUDIENCE: Crazy.

SETH LLOYD: So how do we build these things? And then I'll stop. So really, actually, now, remember that Papken der Torossian said the difference between an engineer and a physicist is that engineers do things. So when I came here and became a quantum mechanical engineer, I felt obliged to start doing stuff. Actually, I was already doing stuff before I came here.

But it's been extremely helpful for me to be in an environment where people are doing things. Because engineers know how to do things. So what I've been doing is I've been building quantum computers. I've also been working on large systems and complex systems, as well. But largely, what I've been doing is building quantum computers. Because this is what the government will give one money to do these days.

So how can you do it? You can do it by a whole variety of means. You can use nuclear magnetic resonance. It's essentially exactly the same technology that, when you blow your knee out skiing, you go and sit in some big magnet, and it makes a lot of clicking sounds. And it takes an image of your knee. You can modify that technology, very slightly, to put a bunch of molecules inside the magnet, zap them with radio-frequency pulses, and make them perform these kinds of simple logic operations that I described.

With David Cory, here at MIT, we now have the world's record quantum computers. They're about 10 bits. They can perform a few thousand operations. They're small. And they're slow. But they are performing these operations at the atomic scale. There's one angstrom between the bits.

Quantum optics is a beautiful way of doing quantum computation. I'll describe in a second how you do that. Quantum dots are nice ways of doing this. A very interesting way of performing quantum computation is to use superconducting circuits. I don't know if you read Science, but you may have noticed that they just announced the result of a group of which I'm a member. In collaboration with Delft, they made superconducting circuits, where you had a whole bunch of electrons going around this way-- we'll call that a 0-- and a whole bunch of electrons going around this way. We'll call that a 1. And you have this funny quantum state, where they're doing both at the same time.

Now, people have been trying to do this for more than 30 years, create a whole bunch of electrons going around this way and a whole bunch of electrons going around that way, at the same time, without success, until now, until just this last year. How did that manage to happen now? And actually, why am I able to work on all these kinds of projects without just falling apart?

There is a reason. My goal, my role in these projects, is not to do the experiments, because you need world-class experimentalists, who really know what they're doing in that particular field, in order to make this happen. My role is really that of a designer, to design these quantum systems.

And the tool that I've learned from Nam to do this is axiomatic design. Axiomatic design is what actually allows you to do this. Why? Because at this very small scale, you're very constrained by nature. You know, the atoms are already there. Nature says they have to behave in a certain way. You cannot design an electron. It just comes that way. Luckily, they're all identical. Nature fabricates them to exact precision.

However, in order to make these things operate in this very constrained environment, you have to be very careful about having uncoupled designs. And you could say that my role, in these collaborations, is to ensure that the designs are uncoupled. That's why we're actually able to do this. And that's why I think we have a hope of actually making this happen on a much larger scale.

I think I'm running out of space here, running out of space and time. Oh, it's time for lunch. So let me just wind this down here.

What else can you do? I won't tell you any more about how you do this. Let me just talk about a few more applications that we have here. We've just received a multimillion-dollar MURI project, together with the RLE here, to do quantum communication.

That's where you take quantum information, stored, say, on photons-- remember, this way is 0, this way is 1, polarization-- move it down an optical fiber, bring it over here, and pick it up on the other side. There are lots of problems with this. It's very difficult to do these experiments, in general, because quantum systems are small. They're at the angstrom scale. They're hard to engineer. And they're hard to manipulate. And they're very sensitive. You sneeze, your atoms get a cold at the quantum level.

So how are we going to do this? Well, luckily, my collaborators and I came up with a design for what we call the quantum internet. This is not like the internet that Sunny was talking about, except that it consists of computers that are connected together by communication lines. In this case, these are quantum computers that are connected by quantum information lines.

Now, in our case they're atoms. They sit in little optical cavities. We zap them with light. We communicate by photons. It's actually very hard to make this happen. But by thinking carefully about what parts of this design can be decoupled from the other parts, we actually have a very nice design. And we ought to have little networks, with a few links in them, in a couple of years.

Where's this all going? And here's the end, so we can all go have lunch in a second. Currently, we have 5 to 10 quantum bits. We can do about 1,000 operations. That doesn't sound like a lot, but you have to remember that, four years ago, we had no qubits and no operations. And then, two years ago, we had 4 qubits, and we could do about 10 operations. So we've got a kind of inverse Moore's law going on here. Every couple of years, we manage to double the number of qubits that we have.
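For what it's worth, the doubling arithmetic behind that projection is easy to check from the counts quoted here (no qubits four years ago, 4 two years ago, 10 now, taking the present year as 2000):

    # "Inverse Moore's law": the qubit counts quoted above roughly double every two years.
    qubits, year = 10, 2000
    for _ in range(2):          # project two more doublings forward
        qubits *= 2
        year += 2
        print(year, qubits)     # prints 2002 20, then 2004 40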

In the next few years, we're going to have 20 to 30 quantum bits and about 10,000 operations. We're going to have a two-, maybe three-link quantum internet, in which we can move quantum information around from one place to another.

So let's put this in the perspective of what Papken der Torossian was talking about this morning. Here's a picture of Moore's law. Moore's law essentially says, every 25 years, the scale size of components in computers goes down by a factor of 1,000. In 1950, we had vacuum tubes, 10 centimeters in size. In 1975, we had integrated circuits, with components on the order of 100 microns in size.

Now, we're at the scale where we have feature sizes of a tenth of a micron. The National Semiconductor Roadmap actually takes us down to around 2010, using the techniques that we heard about this morning, which is quite remarkable.

Where are we? We're right here at the moment. This is Y2K. We're right here at the scale of atoms, 10 to the minus 10th meters, one angstrom. We're building small devices right here. We're making them larger and larger and larger. There's a huge, huge industry out there that is building larger devices, on the order of 0.1 microns, and making them smaller and smaller and smaller.

What we're hoping is that sometime, in the next 10 years or so, we're going to be tunneling up from the very, very small scale to the large scale, constructing larger and larger devices. And halfway in between, we're going to meet people like Papken, who are coming down from the top and building larger devices. Then we break through. We hope to be able to shake hands with each other and say, hey, how's it going?

So let's just conclude. If we are going to be able to make Moore's law continue to 2025, which is around when things are supposed to be at the atomic scale, this is how we're going to have to do it. We're going to have to store things at the atomic level. We're going to have to manipulate things using basic, ordinary physics. And we're going to have to use careful design principles, such as axiomatic design, in order to get us there. So let me stop there. Thanks.

[APPLAUSE]