MIT MechE Symposium: Mechanical Engineering and the Information Age - Ian Hunter, Joseph M. Jacobson and George Barbastathis
[MUSIC PLAYING]
MODERATOR: Our first speaker today is Professor Ian Hunter. I'm trying to read out everybody's titles. Are you the Hatsopoulos Professor?
HUNTER: Yes.
MODERATOR: Yes. He is the Hatsopoulos Professor of mechanical engineering and bioengineering here at MIT. I have to say that one of my most exciting experiences in first coming to MIT five years ago was actually going into Ian's lab. If you want to see information being used in all of its aspects, you should go check it out.
But today he's going to talk to us about work that's going on at the bio instrumentation lab. I don't know if you know, but we had a big effort to start up a large bio instrumentation program here. And many of the techniques that people were talking about yesterday, in instrumentation and getting information, are where mechanical engineering will probably have its greatest application over the next half century or so. So [INAUDIBLE].
HUNTER: Thanks a lot, [INAUDIBLE]. So if we can have the lights down a little bit in front. So what I'm going to do is talk about really two areas that are going on in our laboratory. The development of biomimetic materials, largely centered around conducting polymers. And the other is the development of a small, walking, autonomous robot called the NanoWalker. And you'll see how these two themes we hope to merge them in the next couple of years so that we'll be fabricating a NanoWalker out of the biomimetic materials. So my co-authors are Sylvain Martel, John Madden, Peter Madden. And we have John Madden here today. Next, please.
So one of the objectives of our laboratory is to try and automate the process by which new materials are discovered, be they new drugs or new interesting superconducting materials, new actuator materials. And what we are working towards is the notion of having in the one instrument technologies for synthesizing materials as well as analyzing their anisotropic material properties, be they electrical, thermal, mechanical, optical, acoustic, chemical, and so on. All orchestrated by molecular and continuum modeling techniques and an objective, rational approach to instrumentation design and modification of those designs and modeling algorithms as a function of what is measured. Next.
Now, what are the typical sort of operations you might be using when working with new materials? Typically, you'll have a material. You'll take it to, perhaps, a scanning tunneling microscope or an atomic force microscope. You might do some laser optical microscopy, measure mechanical impedance, electrical impedance. You might do some 3D micro fabrication, some nano fabrication, perhaps some Raman spectroscopy, and so on and so on. At the moment, these are all conceived of as separate instruments. And you would take a specimen from one instrument to the next.
What we're working towards is the notion of having basically the one platform for implementing this range of functionalities. And we're calling that platform the NanoWalker. And the NanoWalker will eventually be an autonomous device carrying its own computer. It has a minimum of 12 degrees of freedom. And as you'll see shortly, it's capable of nano stepping. In fact, it can move with sub nanometer increments. But the idea is to use that as a platform to implement these different tasks. And we're not conceiving of the one NanoWalker implementing all of these, but rather you would have a version of a NanoWalker which would be acting as an atomic force microscope, another version for a micro injection, and so on. Next.
So this is the current size of the NanoWalker. And I have one right here. We are in the process of making them autonomous. In the videos you're about to see, you'll see a tether where we bring the energy into the NanoWalker as well as the information to control it. But we're working very hard to try and make the device autonomous, without tethers. Next. There you just see it rotating.
And you'll see here that we're able to get the NanoWalker to move quite quickly. And bear in mind that here we have steps that are down to sub nanometer level, if we wish. Here the steps are many tens of nanometers up to hundreds of nanometers per step. And we've been able to make the steps up to 4,000 per second. So we've been able to get these devices to move very rapidly. Next.
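The stepping figures quoted here imply the maximum travel speed directly. A back-of-envelope sketch, taking 100 nanometers as an illustrative step size within the quoted range:

```python
# Back-of-envelope NanoWalker travel speed from the quoted figures.
step_size_m = 100e-9      # illustrative step size (tens to hundreds of nm quoted)
steps_per_second = 4_000  # maximum stepping rate quoted in the talk

speed_m_per_s = step_size_m * steps_per_second
print(f"travel speed: {speed_m_per_s * 1e3:.1f} mm/s")  # 0.4 mm/s
```

At sub-nanometer steps, the same rate gives micrometers per second; the same mechanism spans slow, atomically precise positioning and fast traverse.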
Now, some of the functionalities that you can build in are, for example, you can use a micro laser to trap particles and conceivably use the NanoWalker to move those particles around. And if we can place the cursor on the one that's now trapped. So there's an infrared laser trapping that particle, and we're able to move it around in three dimensions. So the NanoWalker is conceived of as not only being usable for making traditional physical contact with the material. But here we're exerting forces on materials using photon momentum. Next.
Here's an example of a version of the technology that we're using in the NanoWalker prior to the NanoWalker's ability to move with four legs. And here we're scanning in three dimensions over the surface of a graphite surface. And on your right, we have an image that's one nanometer by one nanometer and the step size there is 10 picometers. So you can clearly see the individual atoms there. So the NanoWalker is capable of moving down with minimum motions of about 10 picometers. And it can move up to about a meter workspace. Next.
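Ten-picometer minimum motion over a roughly one-meter workspace corresponds to an enormous positioning dynamic range, which is easy to write out:

```python
# Positioning dynamic range: workspace divided by minimum motion.
resolution_m = 10e-12  # minimum motion: 10 picometers
workspace_m = 1.0      # roughly a one-meter workspace

dynamic_range = workspace_m / resolution_m
print(f"dynamic range: {dynamic_range:.0e}")  # about 1e+11
```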
Now, when we look at the NanoWalker, we see really a very large difference between the way in which that device is fabricated versus the way in which nature fabricates its autonomous devices. And there you see a water flea, roughly a millimeter in size, and a single bacterium, roughly one or two micrometers in size. So what we have to ask ourselves is, what is the main difference between the way in which we would fabricate the NanoWalker as an engineering system versus the way in which nature fabricates its systems? Next.
And so on your left there, I have some of the salient features of current engineering practice where we normally use low molecular weight materials such as copper, iron, aluminum, silicon, and so on. Existing engineering systems are characterized by separate manufacture and assembly. Think of the notebook computers that I see around the room here. The CPU, the various disks and actuators, the case and display are all separately manufactured and then assembled. Whereas in nature, things are grown. They're co-fabricated.
And of course, nature's materials are high molecular weight. In most engineering materials, the material properties are isotropic. Thermal conductivity is much the same in all directions. Electrical conductivity pretty much the same in all directions. Optical transmission, normally pretty much the same in all directions. Whereas biological materials are normally highly anisotropic. And it's very clear that nature uses those anisotropic material properties in fabricating biological systems.
And then the other thing that characterizes existing engineering systems is that if, for example, you create a micro actuator, such as a piezoelectric device, [INAUDIBLE] alloy actuator, magnetostrictive device, that's really a bulk material phenomenon. Whereas if you look, for example, at muscle, that is a mechanism down to the molecular level. It's literally a molecular linear stepping motor, taking nanometer-scale steps. And it's a mechanism. It's no more a bulk material than, for example, an internal combustion engine might be considered a bulk material. So biological systems are really very different. And we have to ask ourselves, is it conceivable to fabricate biological engineering systems using similar principles? Next.
Now, this particular taxonomy here is due to Seth Lloyd. And Seth was thinking, well, in living systems, we have information on the one hand and energy on the other. And if we look at the various things that biological systems do, well, they acquire information in the form of vision, acoustics, touch, and so on. And we acquire energy by eating and drinking and so on. We store information. We store energy.
Energy is largely stored in the form of ATP, a very high energy density chemical storage mechanism. We transfer information via neurons, for example. And we transfer energy via blood vessels. We transform and generate: with information, we compute; with energy, muscles convert stored chemical energy to mechanical output with surprisingly high efficiency. And of course, we dispose of information and we dispose of energy. Next.
So when we look at biomimetic systems, we can say, is it conceivable to create a class of materials from which we could create intelligent systems with the ability to co-fabricate and grow and to have anisotropic material properties? But where the different subsystems could mimic those in a biological system, that is to say, we want to be able to acquire information, acquire energy, store information, store energy, transfer, transform and generate, and dispose.
And so what I want to talk about in the next few minutes are our and others' attempts to create a class of materials, biomimetic materials, biological like materials, having a good percentage of these characteristics here. I regard that until we have most of these characteristics here, we're not in a position to create a biomimetic autonomous, intelligent system. A biological robot, if you like. Next.
So here is some of the activity going on in our lab in this area. We have work going on to create powerful actuators out of these biomimetic materials we work with. And they're called conducting polymers. Low bandwidth transistors. I'll talk about some very exciting work where we've managed to transmit not only information through this material but also significant amounts of energy. Energy storage and sensors of various sorts and so on and so forth.
So you can see a list in black there of what we're working on. In blue are, obviously, the counterparts that you'll find in nature. And in red, we have some of the caveats here, in that the transistors at the moment are very slow and some of the acoustic devices have got low bandwidth and so on and so forth. So you have to think of this as early days in the development of this technology. Next.
So basically, this is just one example of one conducting polymer. But it can be coaxed into various forms. And when, for example, it's in one of its forms, a 0% oxidation state, its electrical conductivity is extraordinarily low. But it can be switched into another state, a 50% oxidation state, where the electrical conductivity can go up by as much as 10 or 11 orders of magnitude, theoretically.
So these are materials where their salient material properties, in this case, electrical conductivity, can be switched over an enormous range. And furthermore, that material property often can be very, very different in different directions. So for example, you can make a wire of this material where the electrical conductivity in the polymer chain direction can be enormous. And in the other two orthogonal directions, it can be extremely small. Next.
So we grow these materials in the lab electrochemically. Next. And here's an example of one of the applications of the technology. It turns out that if you strain the material, you can set it up to act as a strain gauge. A useful sensor for many applications. Next.
And here's just some results. And we found that we've got moderately high gauge factors, on the order of about five. So there's an example of a sensitive force or strain transducer made out of these materials. What about computation? Can we create a transistor out of these materials? Well, indeed, there's a long history at MIT of creating conducting polymer transistors. In the past, they were largely fabricated on silicon or glass substrates. We've been building them entirely, pretty much, out of polymers. And they share characteristics with [INAUDIBLE]. And that will have a source, drain, and gate. Usually these are four-terminal devices. Next.
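A gauge factor relates fractional resistance change to applied strain. A minimal sketch using the quoted factor of about five; the 0.1% test strain is an arbitrary example:

```python
def resistance_change(strain, gauge_factor=5.0):
    """Fractional resistance change dR/R for a strain gauge: GF * strain."""
    return gauge_factor * strain

# A 0.1% strain produces a 0.5% fractional resistance change at gauge factor 5.
print(resistance_change(0.001))
```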
And here's an example where we're switching this transistor from a state in which it's conducting to a non-conducting state. And there you see we have not achieved the theoretical 10 to the 10 or 10 to the 11 fold change in resistivity as yet. Next.
The switching times of these devices are governed by different equations from traditional silicon based transistors. Next. And at the moment, they're relatively slow. So if, for example, you fabricate these materials in bulk, the transistors only have a bandwidth of about one hertz. If you go down to about 50 nanometers, 10 kilohertz has been achieved. And certainly if you go down to thin molecular layers, speeds of one megahertz are achievable. So this is not a technology that would compete with silicon for raw speed. But when you think of it, the neurons in your own brain have a relatively low bandwidth, and yet they confer an advantage by virtue of the fact that you can fabricate them in 3D. Next.
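Those bandwidth figures are consistent with diffusion-limited charging, where the switching time scales roughly as thickness squared over an ionic diffusion coefficient. In the sketch below, the diffusion coefficient and the micron-scale bulk thickness are assumed illustrative values, so only the relative speedup between thicknesses is meaningful:

```python
def bandwidth_hz(thickness_m, diffusion_m2_per_s=1e-12):
    """Diffusion-limited switching bandwidth: f ~ D / L^2 (order of magnitude)."""
    return diffusion_m2_per_s / thickness_m**2

bulk = bandwidth_hz(1e-6)   # roughly 1 Hz for a micron-thick film
thin = bandwidth_hz(50e-9)  # a 20x thinner film switches 400x faster
print(bulk, thin / bulk)
```

The quadratic scaling is why thin molecular layers reach megahertz speeds while bulk devices sit near one hertz.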
What about creating wires? It turns out that these materials obey Ohm's law remarkably well over a wide range. So you can transmit information through them. Next. But the question that we asked ourselves was, is it possible to transmit significant amounts of energy through these conducting polymer wires? Now, typically when you design an electric motor and you're pushing it to the limit, you'll figure you can pump about 10 to the seven amps per square meter through a non water cooled electric motor. Here we're achieving basically a third of that. So we're within a third, basically, or within an order of magnitude of what you can push through copper.
An interesting difference is that the density of copper is about eight times greater than these polymers. So in point of fact, we are equivalent via some measures, volumetric measures, to copper and silver at room temperature. The very interesting possibility is that, theoretically, it's clear that we should be able to go way past this point. But we're very excited that we have now achieved very significant energy densities through these materials. Next.
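The mass-normalized comparison works out as follows. The current densities and the eight-fold density ratio are from the talk; the absolute density of copper is a standard literature value:

```python
# Current-carrying capacity per unit conductor mass: J / density.
J_copper = 1e7                # A/m^2, limit quoted for a non-water-cooled motor
J_polymer = J_copper / 3      # "basically a third of that"
rho_copper = 8960.0           # kg/m^3, standard literature value
rho_polymer = rho_copper / 8  # copper quoted as about eight times denser

per_mass_cu = J_copper / rho_copper      # current per (kg of conductor per meter)
per_mass_poly = J_polymer / rho_polymer
print(per_mass_poly / per_mass_cu)  # polymer ahead by about 8/3 on this measure
```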
And also we've created super capacitors out of these materials. They have a low bandwidth, very low bandwidth. But we can store something in the order-- we've achieved something on the order of 10 to the five farads per kilogram. That's orders and orders of magnitude greater than, for example, a tantalum capacitor that is normally regarded as a high energy density capacitor. Next.
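Taking the quoted capacitance and assuming roughly a one-volt operating window (an assumption, but consistent with the sub-one-volt drive quoted for these polymers), the stored energy per kilogram is:

```python
# Stored energy per kilogram of supercapacitor: E = (1/2) C V^2.
C_per_kg = 1e5  # farads per kilogram, as quoted
V = 1.0         # assumed operating voltage

energy_J_per_kg = 0.5 * C_per_kg * V**2
print(f"{energy_J_per_kg / 1e3:.0f} kJ/kg")  # 50 kJ/kg
```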
Actuators. Here we show the multi-layer configuration for the actuators. Next. And we're working with a variety of conducting polymer materials. And here's a very interesting one. We've started to work with calixarene, which holds the promise of achieving large strains, much larger than we're achieving at the moment. Next.
And so far, we've been working to push the active strains, the contractions achieved by these materials, up, as well as their bandwidth. Next. And also creating versions of it that can operate at room temperature and in air. You'll notice that the voltages here are very small. So these are not like piezoelectric materials, where you'd need to use voltages in the hundreds-of-volts range. We can get these things to contract with less than one volt. Next.
And bandwidths are also somewhat equivalent to what you would find in muscle. So bandwidth between one and 20 hertz. Next. So we'll show you one of the actuators oscillating at one hertz. And there's a square wave oscillation. Now if we can go down to the 10 hertz.
So that is now equivalent to fast skeletal muscle. That's a bilayer. And if we can now go to the lower one here, you'll see a linear actuator contracting and causing a small arm to move. So we've got a cantilever there. So what you see there is this material here fabricated in our laboratory. We put a voltage across it. It contracts, and we're driving that cantilever there. Next slide, please.
So the active stresses we're achieving here are remarkably high. We've achieved on the order of 30 megapascals. Next. The strains we achieve are typically 4% or 5%, but we've achieved up to 18%. Next. And power to mass is also moderately high and expected to be much greater. Next. And the efficiency also is surprisingly high. It's about equivalent to muscle. Next.
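Multiplying the quoted stress and strain gives a rough per-cycle work density. This is an order-of-magnitude sketch only, since it assumes the full stress is sustained over the full strain:

```python
# Per-cycle work density ~ active stress x active strain (upper bound).
stress_Pa = 30e6  # 30 MPa active stress, as quoted
strain = 0.04     # typical 4% strain, as quoted

work_density_J_per_m3 = stress_Pa * strain
print(f"{work_density_J_per_m3 / 1e3:.0f} kJ/m^3")  # 1200 kJ/m^3
```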
And we're working to produce over the summer a complete reflex loop containing actuator, sensor, and simple computation, to create a closed loop servo system, servo-controlling the actuator with respect to strain. That, if you like, is the first building block. The reflex loop is the first building block in nature. Next.
So it's our objective to produce, at some point, a form of the NanoWalker incorporating this technology. Next. So when we look to the future, we are expecting probably in about two or three years to start replacing some of the more traditional technologies we have in the NanoWalker with some of these biomimetic materials. Thank you very much.
[APPLAUSE]
MODERATOR: We have time for questions.
HUNTER: Yes [INAUDIBLE].
AUDIENCE: [INAUDIBLE]
HUNTER: Yes. We fabricate these ourselves, by the way. And we're working very hard on creating a three dimensional fabrication system to co-fabricate the actuators, the transistors, the sensors, the energy storage delivery, and so on to co-fabricate them together. One of the ironies here is that it may indeed turn out that we use the NanoWalker, the existing NanoWalker, as the technology for creating the future NanoWalkers out of these new materials. Yes.
AUDIENCE: [INAUDIBLE]
HUNTER: Well, if you're talking about how many cycles we can go through for these devices, or do you mean if we look out in the future?
AUDIENCE: [INAUDIBLE]
HUNTER: Well, we're seeing these as direct competitors with silicon based MEMS technology. Silicon is a great technology for manipulating information but terrible for manipulating energy. The capacitors you implement in silicon have very low energy density. The electrostatic actuators are very weak. So here we have a technology with an extraordinarily powerful actuator technology, very high energy density super capacitors. So we can not only manipulate information, albeit at low bandwidth, but we can manipulate energy. And that really puts us way ahead of silicon, if you want to manipulate information as well as energy, which of course is what biological systems do.
MODERATOR: Other questions? I'd just like to make a comment, which is I think that this work shows very clearly how, if you're operating at very small scales, the microscale and the nanoscale, the trade-offs change: you cannot move information without moving energy around, and you cannot make energetic processes happen without moving information around as well. And these two things are combined with each other at the microscale in a way that they're not at the macroscale, where you can essentially completely decouple information processing from energetics.
It actually reminds me of almost when you get down to the small scale, it's almost as if you're returning to having a car with a, well, an engine with a mechanical [INAUDIBLE]. The information processing parts of the system are themselves mechanical. So you actually have to look very carefully at the trade off between the information and energy when you're at the small scale.
HUNTER: In fact, I'd make a very fast point here. It's by no means clear to us that we necessarily want to use the transistor for manipulating information. There's no reason here why we couldn't conceive of building a mechanical computer. Because if you look at the energy involved in moving the polymers, in some situations we compute at lower energies than are currently used at the gate in silicon circuit technology. And I'll remind you that when the actin-myosin cross-bridge in your muscle moves, it's consuming about 10 to the minus 20 joules in making that step. So if you think of that as potentially a mechanical computer, it's about 10 to the seven fold less energy in moving that molecule than current silicon technology consumes [INAUDIBLE] in a fast CPU.
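The comparison can be written out directly. Both numbers below come from the talk; the silicon-gate figure is just the cross-bridge energy scaled by the quoted ten-million-fold ratio, not an independent measurement:

```python
# Energy per elementary operation, both figures as quoted in the talk.
cross_bridge_J = 1e-20  # one actin-myosin step
ratio = 1e7             # quoted advantage over a fast silicon CPU

silicon_gate_J = cross_bridge_J * ratio  # ~1e-13 J per switching event
print(silicon_gate_J)
```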
AUDIENCE: One other comment. Your early work was involved in nano surgery. [INAUDIBLE] And this is obviously something that [INAUDIBLE]. Is that one of your [INAUDIBLE]?
HUNTER: Yes. And I know we're getting over time. Very rapidly, one of our challenges over the next year is to get these things to be able to move upside down. And when a fly moves upside down, it's actually creating chemical bonds with the surface. It's got a controllable stickiness. And we would like something similar here so that we could move around and upside down.
And we also want to be able to crawl around on tissue. So it's not clear whether we would do that with a sticky surface or, like some insects, we just impinge a little bit on the [INAUDIBLE] don't feel it, to sort of move around. But yes, medical applications are one of the areas [INAUDIBLE] technology.
MODERATOR: Great. Let's thank our speaker again.
[APPLAUSE]
Our next speaker is Joe Jacobson. Joe is an assistant professor at the Media Lab. And we're very lucky-- in mechanical engineering-- that he's considered also to be [INAUDIBLE]. I was very happy about this, because in fact, one of the things that Joe works on in addition to what he's going to tell you today is quantum computing. But today he's going to tell you about the fabrication of more conventional computing devices, though by very unconventional means.
JACOBSON: OK. Actually, am I audible here?
AUDIENCE: [INAUDIBLE]
JACOBSON: If you want me to have one. OK. It's a great pleasure and honor to be here. And I want to just tell you a little bit about some of the technologies we're developing to build computers and other devices that we associate with the information age in new ways. And pictured up here is a kind of cartoonish figure of what we envision the future to be. And basically, if you can imagine a little desktop printer, a little desktop device, and you download from the web, say, a design for a Pentium III and press a button, and out comes a working Pentium III. And I'll tell you a little bit about how we're doing this in a minute.
More broadly, what we're interested in doing is enabling the manufacture of information devices in a way that hasn't really fully been brought to the fore, which is continuous manufacturing as opposed to really making things one at a time. And this is kind of a list of some of the things that we've demonstrated to date in terms of being able to make on roll to roll processes and continuous processes, what we would call printable technology. And for each of these, we've actually had to develop completely new chemistries or novel ways of doing things in order to reach the point that we're interested in.
Obviously, as Ian eloquently laid out, we need new ways to make logic and new ways to make computation. And this is true not just for cost reasons, not just to make things on flexible plastic, as I'll show you in a minute. But we really want to make new types of architectures, new topologies that are not enabled by current fab techniques. Specifically, the ability to go to very high device counts and to very high interconnect counts between devices in a way that is not possible today.
Secondly, in order to have universal logic, obviously we need memory in addition to logic. Then there's some other things that we as humans like to have. For instance, display. We like to be able to visualize some of the information buried in the computation that we're carrying out. And then finally, and appropriate for mechanical engineering, of course, is the ability to do micro electromechanical systems. In this case, we've called this PEMS, which are Printed Electromechanical Systems. I'll show you some very early examples of that.
OK, so very briefly, this is some older work that we've done, but I'll just put up one or two slides. We were interested in how to build a display in a completely new way, basically, by printing. And the standard route to making displays, liquid crystal displays, as you may know, is very similar to the way we make chips. The cost of a fab, a gen 3 fab for making LCD displays is in excess of a billion and a half dollars. And displays are made roughly a couple at a time. These days about four or six displays at a time on large pieces of glass.
And our question was how we could go from that to actually being able to print a little machine onto a surface that would act as a display. And not only any display, but actually get us to the best properties that we thought theoretically could be achieved. Meaning the lowest current draw, a purely field effect material, if you will. A liquid crystal display, as you may know, needs to be driven with an alternating, or AC, current in order to prevent ion migration. We wanted something that was purely a field effect, something that would draw a few nanoamps per square centimeter. And so what we came up with is a very simple system, an extraordinarily simple system.
We have a little micro capsule that we create using a bulk chemistry and interfacial polymerization that captures inside of it some nanoparticles. And these nanoparticles have surface groups that have been appended to them. And these surface groups covalently bond to those nanoparticles. And they impart to each particle roughly about 10 electrons worth of charge. And then if we put an external field here, we can simply flip the position of these little particles vis a vis your eye and create a display. And if you keep distances small enough and so forth, you can actually get things to go pretty quickly.
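The electrostatic force moving each particle is just F = qE. A sketch using the quoted charge of about 10 electrons per particle; the drive voltage and capsule gap are hypothetical illustrative values, not figures from the talk:

```python
# Electrostatic force on one pigment particle: F = q * E.
q = 10 * 1.602e-19  # quoted charge: about 10 electron charges, in coulombs
V = 15.0            # hypothetical drive voltage
gap = 50e-6         # hypothetical electrode gap, 50 micrometers

E_field = V / gap   # volts per meter
force_N = q * E_field
print(force_N)  # on the order of 5e-13 newtons
```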
This is a very early video of this. And I hope it's not too dark. So this is going to be a lens system that's going to go below the surface and in the next frame. So this is some of this ink that we've coated onto a piece of plastic, in this case. And now we're applying a one hertz pulse or one hertz waveform to this. And in the next couple of frames, you can see what I mean by printing a little machine. The motion of this is actually like a little cat's iris.
And it turns out that if you stop the electric field, you can stop that cat's iris just about anywhere that you want along that cycle, so of any opening. And so you can make different images appear. As I said, this is a very kind of early system. And you can see some imperfections in that. We've gotten a lot better in the chemistry that we've been able to create. And here's just a little example.
AUDIENCE: [INAUDIBLE]
JACOBSON: Well, the external capsule, typically the smallest capsule that we create here is about 10 microns or so. And the internal nanoparticles are about 100 nanometers or so, depending on which color we're interested in. An important aspect is shown here. And you can see that it's really the electric field that defines the image as opposed to these capsules.
Unfortunately, I don't have the most recent picture. But in most recent chemistries, these capsules when they go down onto a surface actually close pack and self assemble into close packed arrays. The chemistry is conformable enough that it can actually tile the surface. And then it's really the electric field, as you can see here, that defines the image, as opposed to the capsule placement, for instance. So this is an extraordinarily inexpensive way to make displays of very large size.
The other property that I should mention, as I alluded to, is if you remove the electric field completely, then the image will just stay there. And so you have this aspect of bi-stability. This is just a test device showing resolutions of about 100 DPI or so. And the best resolutions we've achieved are in excess of 250 DPI. That just shows some of the thickness and so forth. So this is the front end, the visual end of devices that we might want to make. And then, of course, we need to be able to make logic and computation in the background.
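For reference, dots-per-inch converts to pixel pitch as 25,400 micrometers per inch divided by the DPI; a quick sketch:

```python
def dpi_to_pitch_um(dpi):
    """Pixel pitch in micrometers for a given dots-per-inch resolution."""
    return 25_400 / dpi  # 25,400 micrometers per inch

print(dpi_to_pitch_um(100))  # 254.0 um pitch at the test-device resolution
print(dpi_to_pitch_um(250))  # 101.6 um pitch at the best quoted resolution
```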
And so what we were interested in is taking as a starting point the ability to make an information appliance like a laptop, for instance. And there's actually several different hierarchies of logic that we require there. So if you think about the display, again, a liquid crystal display in a laptop, the back plane, the so called TFT or Thin Film Transistor back plane, doesn't need to be extraordinarily good. It needs to have mobilities of about one centimeter squared per volt second, which is achievable by using amorphous silicon. So all of the transistors, the million transistors on the back of your glass, can be so-called amorphous silicon, or of amorphous-silicon quality.
Then when you think about what's driving all of those pixels, those are the drivers. And those need to be a little bit better. Typically, we'd like those to be polysilicon. Those are about 100 times as fast, or need to be about 100 times as fast. Then, of course, we have a CPU, which we want to be the best that it can be. And these days, that's about a gigahertz. And to get there, we need to use single crystal silicon. And in the future, we need to go beyond that to materials like silicon germanium. And then, of course, if you have an RF modem, for instance, at 5.6 gigahertz, then you're already using a heterogeneous semiconductor like silicon germanium.
So we have this entire hierarchy. And what we were interested in doing is enabling the ability to make all of those logic devices by printing, but to enable that entire hierarchy up to very fast speed. So we wanted a single technology that could get us all the way from low speeds up to very high speeds. In fact, speeds in principle faster than you could do with a single semiconductor like silicon.
So in order to do that, the best materials that we know of, obviously, are inorganic materials. These are materials that are processed in a normal fab. Now, I should mention that when you process a Pentium, the time to process that Pentium is long. Typically you start with, say, a 12 inch wafer that goes into the fab. And as a very minimum, you're talking about two weeks of 24 hour day, seven day a week processing before Pentiums emerge from it.
You have on the order of 49 or 50 masks. For each mask, the wafer goes into vacuum, comes out of vacuum, gets a CMP polish step, and so forth. And there are just an extraordinary number of steps involved in that. And so our question was, can we get there without doing any lithography at all? But being careful that the final product that we deliver is indistinguishable to you from what comes out of the fab.
So what we developed was a chemistry that we published last year in Science where, for the first time, we were able to create nano clusters. These are clusters that have about 80 atoms of an inorganic semiconductor in them. And then to put on the apices of that nano cluster what we call leaving groups or organic groups that can self assemble these clusters. Once they're self assembled onto a surface, those organic groups pop off and create a single crystalline film. And this is really the key to being able to liquid process transistors or print transistors which are indistinguishable, at the end of the day, from what comes out of a fab.
So actually, it's not really possible to view these clusters with conventional TEMs. Their size is on the order of a nanometer or so. These are some nano clusters which are a little bit bigger than this.
One key that I really haven't mentioned is the fact that each of those clusters has to be, really for this to work, has to be of an exact size. You need a monodisperse distribution. But at any rate, when we take this material and put it down onto a surface, as mentioned, so here's a little crystal model of this II-VI semiconductor.
As mentioned, when they come down onto a surface, hundreds of these clusters come together. Their organic group pops off. And they form a crystalline sheet here, out of which we can build transistors. And really the remarkable thing, at least that we thought when we first saw this, was that the clusters don't come together in an arbitrary orientation but actually all of the crystal planes line up, which, of course, is the key to making high quality devices.
So here are some of the first devices that we made. And these very first devices are already better, by more than an order of magnitude, than any kind of transistor that people have ever printed. And they are, as far as we know, the first printed inorganic transistors. The other thing that I should mention is that once that crystalline plane comes together, it's the same material as comes out of a fab. We've done all types of backscattered electron surface probes and volumetric probes, and as far as we can tell, there's no organic content left. So these materials can withstand very high temperatures after the first printing.
So how do we take some of that material and start to print it? Or how well can we do when we start to print this? This turns out to be pretty straightforward. We can take some of these materials, and really the magic is in the material in terms of its ability to come together. And we can take some printing blocks. Not these particular ones, but ones that have patterns on them that are appropriate for transistors.
And we've made, at this point, every component inside of a full-up chip, including capacitors and resistors. In fact, the aspect ratio of resistors that we can make is huge; I'll show you in a second. We can print down to 200 nanometers, and we can print resistors that just go for several inches unbroken at that resolution.
AUDIENCE: When you [INAUDIBLE] self-assembly process of the single crystal, presumably there's some shrinkage at the edges. So I was wondering at what scale, what accuracy do you get at the edges of these [INAUDIBLE]?
JACOBSON: Yeah, the edge-- I'll show you some data in a second. OK, so actually here are some single-layered structures. And here are some structures that are a few microns. We can print captured structures. One very important aspect, which I'll show you in a second, is the ability to print [INAUDIBLE], because you want to be able to go from layer to layer to layer without any lithographic step. Here are some 200 nanometer structures. We've recently done some 100 nanometer structures. And so, as for the integrity of the line: we can build working devices at 200 nanometers with full insulation between lines, and the integrity of the line is probably good to at least 50 nanometers.
OK, so one very important aspect that I want to point out, and that we've pointed out at a recent conference, is our ability to print multi-layer structures: the self-planarization property of this chemistry. And that's extraordinarily important. We've actually printed, using some other technology, a very large number of layers, as many as 400 layers or so. And the real difference between this and, say, vacuum deposition is that vacuum deposition of a material like silicon forms drapes over every structure you have. For that reason, you need to remove it from vacuum and do a CMP polish step.
This technology, this chemistry, is self-planarizing to a very good degree. And so we can simply print multi-stack structures without having to do any etch step between layers. That's a very important property for being able to build three-dimensional structures and logic. Here are just some more printed transistors. And recently, with our II-VI materials, we've also printed photoconductor arrays, or I should say phototransistor arrays.
OK, so here's just the last thing. I haven't really been specific about what substrate we're printing on. But many people are interested in being able to have transistors that are on flexible substrates, like pieces of plastic and so forth. And so lots of the work that we do is on thin plastic films. I mean, you could imagine having little displays that you can roll up, put in your pocket, and so forth.
And our hope is to be able to have a technology that is cheap enough that these transistors are actually disposable. They can perform some function for some brief period of time, and then you can dispose of them. This is a set of ring oscillators. Actually, each one of these is several hundred transistors. OK, so I'm not sure what my time is like.
AUDIENCE: You have four minutes.
JACOBSON: I have four minutes. OK. Let me just briefly-- this is some work we've done in RFID tags. Let me briefly just talk about our work in printing little machines. And this is at a very early stage. These are some results that we just presented at IEEE MEMS in Tokyo in January. The basic idea is to use the same chemistry. In this case, we're just showing that we can put this into an inkjet head. And so this is a nearly conventional inkjet head, and we can print out the same kind of self-assembling chemistry onto a surface. And again, we're going to print out multiple layers to create some function. In this case, we've created-- this is a six-layer printing. And my apologies here. Let's see if any of these.
So this is a little linear drive motor. We're actually interested in making some little microfluidic chips by printing. And in this case, we have some little nanoparticles that we're binning into different bins. And you can imagine that in each one of these bins, they carry out some chemical reaction. And this is a way to create the same kinds of steps that you might have on the bench top, where you take a beaker and pour it into this beaker, and this beaker goes into this beaker, and so forth. But again, what we're interested in here is something that is printed, flexible, and disposable. So if this will let me out.
OK, the other thing I want to say is that, as you probably are aware, the same limitation that I mentioned in terms of creating three-dimensional structures in silicon in a standard fab for logic also applies to MEMS. The very best in the world is work done at Sandia National Labs: five layers of silicon, which are really extraordinary machines. But because of that, when we're building MEMS we really think mostly in terms of two-dimensional structures.
About the simplest structure we can build is a heatuator, where we have a higher current density in this arm than in this arm. This was built by a student of mine using conventional silicon techniques. So typically, we're thinking about building horizontal structures, structures in the plane. The functioning of this is very straightforward. You just put a current through it. The current density is higher here than here, and you can get this thing to move.
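The physics behind that motion can be put in one line. As a rough sketch (my own first-order model with hypothetical numbers, not the actual device parameters): the hotter arm expands thermally, and the two-arm geometry levers that tiny expansion into a much larger lateral motion of the tip.

```python
# First-order estimate of heatuator tip deflection. This is an
# illustrative model, not the speaker's design: assume the hot arm runs
# d_temp hotter than the cold arm, both arms have length `length` and are
# separated by `gap`, and use the small-angle lever approximation
#   deflection ~ d_length * length / (2 * gap),
# where d_length = alpha * d_temp * length is the thermal expansion.

def heatuator_deflection(alpha, d_temp, length, gap):
    """Estimate the lateral tip deflection of a two-arm thermal actuator."""
    d_length = alpha * d_temp * length      # expansion of the hot arm
    return d_length * length / (2.0 * gap)  # geometric lever to the tip

# Hypothetical numbers: polysilicon-like alpha = 2.6e-6 per kelvin,
# 100 K arm-to-arm temperature difference, 200 um arms, 2 um gap.
tip = heatuator_deflection(2.6e-6, 100.0, 200e-6, 2e-6)  # a few microns
```

With these numbers a roughly 50-nanometer expansion of the hot arm turns into microns of tip motion, which is why a simple current imbalance is enough to make the structure visibly move.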
OK, so one of the prospects of using this kind of chemistry is the ability to go up into the third dimension and build vertical structures. And so here we've built, by inkjet printing-- now, the reason this looks like it's floating in midair is that we first printed a release layer, and then we printed a heatuator. And this is a different heatuator than people have made before. It's a vertical heatuator, where you have 40 layers here and 140 layers here. So this has been built up out of plane. This is an example where we've printed 140 different layers. That's another view of it.
One reason you might be interested in inkjet manufacturing of MEMS is to cover large surfaces like airplane wings and so forth. This is a very crude version from before we really had our printing technology down. But you can see that if we put a little current into this, we can make those things go. And now my student is making large arrays of these devices. So with those couple of comments, I'll end. Thanks very much.
[APPLAUSE]
MODERATOR: Questions?
AUDIENCE: What frequency can you get on something like that heatuator?
JACOBSON: That heatuator, the best frequencies we've gotten are not particularly fast, on the order of 10 hertz or so. Well, that's built by inkjet, so our resolution is really limited there. As for the Q of the materials that we can put down, actually those are made from metal nanoclusters, so the Q is not as high. We're now making them with semiconductor nanoclusters, and we think we should be able to get up into typical MEMS performance; they should be indistinguishable from MEMS. So 10 kilohertz, those kinds of numbers, over 25 to 50 micron displacements.
AUDIENCE: At one point you were talking about one nanometer particles [INAUDIBLE] dispersed.
JACOBSON: That's right.
AUDIENCE: What was the source of those?
JACOBSON: Our laboratory is the source of those. We developed a chemistry for synthesizing particles at that size.
AUDIENCE: [INAUDIBLE]
JACOBSON: They are inorganic particles that have organic capping groups. That's right.
MODERATOR: I strongly recommend you go over to the Media Lab and see their lab. It's great. Not [INAUDIBLE] entirely smart LEGO sets. OK, are there other questions? OK. Thank you. Let's thank our speaker again.
[APPLAUSE]
MODERATOR: Our next speaker is George Barbastathis. George is one of the newest members of our department. And he exemplifies what [INAUDIBLE] said yesterday about the strength of our department in optics. Optics at MIT, at any rate, has traditionally not been localized in any one department. There have been great people all over the place, though perhaps the research labs [INAUDIBLE] the strongest places. However, now I think it's very fair to say, given the recent [INAUDIBLE], that we now have the strongest optics group at MIT by a considerable margin.
And I think one of the things we've seen over the past couple of days is that optical methods are really key for doing lots of the instrumentation and manipulation at small scales. You really can't beat a photon for getting information from here to there. Unless, of course, you wanted to walk along a muscle fiber, as Ian was suggesting. [INAUDIBLE] Anyway, George is going to tell us today about optical imaging in three and in four dimensions. Four dimensions. Wow. [INAUDIBLE]
BARBASTATHIS: [INAUDIBLE] in space, actually. All right. So I guess it's great to follow speakers like Ian and Joe, because a lot of the things they said sort of set the pace for my talk as well. So let me first outline what I'm going to talk about.
First of all, I'll talk a little bit about optics, vision, and information, and what the relevance is between information and optics. And then I'll talk a little bit about what I do. Since I'll talk about that in detail, let me not go through it now. At the end, I have one slide, made last night, on educational initiatives in the optics area, because yesterday there were quite a few questions about how we will deal with information education in mechanical engineering. And then I'll conclude.
All right. So optics has been a very exciting field, and in fact, it's also a very old field. If you look here, you would calculate that it's about 3,000 years old. People started realizing that light is very important because that's how we see, and that's how we can also do other things, like set things on fire and so on and so forth. So they figured it out and started working on how to use light and how to understand light. And yet even though it's such an old field, it seems to always be exciting, because every once in a while a new discovery happens and sets off a new revolution. This seems to keep happening. So let me go through very briefly some historical facts.
So apparently, the first recorded use of light for information was by the ancient Egyptians, who knew, apparently, that light rays travel in straight lines. They used this fact to measure the radius of the earth by looking at sunlight through a well of some sort. It's like a high school exercise, so I will not go through it here. This was quite some time ago.
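The high school exercise the speaker skips over is the classic well-and-shadow measurement (usually attributed to Eratosthenes, working in Alexandria). A sketch of the arithmetic, with the traditional numbers: at noon the sun lights the bottom of the well at Syene, so it is directly overhead there, while a vertical stick at Alexandria, a distance $s \approx 5{,}000$ stadia away, casts a shadow at an angle $\theta \approx 7.2^\circ$.

```latex
% For a round earth of radius R, the shadow angle equals the angle
% subtended at the center by the arc s between the two cities:
\theta = \frac{s}{R}
\quad\Longrightarrow\quad
C = 2\pi R = \frac{360^\circ}{7.2^\circ}\, s = 50\, s \approx 250{,}000\ \text{stadia}.
```

The only optical assumption is the one the speaker names: light rays travel in straight lines, so the sun's rays arriving at the two cities are effectively parallel.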
Then the ancient Greeks knew geometry very well, and they also knew about the properties of light, that it refracts through glass. So they developed things like lenses and so on. Very crude things. And I think at the same time, or perhaps earlier, the Chinese developed similar things. The only thing is that the Greeks had better recording methods and were also more open, so we can actually tell what was going on.
So in any case, out of all these, I picked an example that was the precursor to the Strategic Defense Initiative that President Reagan put forth in the '80s. My compatriot Archimedes figured out how to use giant parabolic mirrors to take sunlight and focus it on enemy sails, setting the enemy ships on fire. The enemy in that case was the Romans. So that was, I guess, the first military application of optics.
Anyways, so then the Middle Ages came, and like all the other sciences, optics disappeared. It recovered shortly after the Renaissance. Newton, I guess, was working on everything, so he developed a theory of light, based on the idea that light is a particle. At the same time, another fellow, Huygens, developed a wave theory of light. And the two were fighting.
Well, Newton won that fight because he happened to be the good boy of the establishment; he was actually a chaired professor at Cambridge. So that had a lot of significance for the outcome of that fight. In any case, a few hundred years later, people realized that the wave theory is also correct, even though the particle theory also seems to be correct. Yeah?
AUDIENCE: Newton also probably waited until Huygens died before he published [INAUDIBLE].
BARBASTATHIS: Oh, is that true? I didn't know that.
AUDIENCE: [INAUDIBLE]
BARBASTATHIS: OK. Well, I'll make a note of that. Anyway. So in any case, [INAUDIBLE] observed the [INAUDIBLE] of light, so it was clear that light is a wave. And then Maxwell actually showed that light is essentially an electromagnetic wave. So the wave nature was not in doubt. But of course, the duality remained unexplained until people developed quantum mechanics. And then, as Seth was saying yesterday, wave-particle duality actually did get explained in a very nice way. But quantum mechanics did not only achieve that. The fact that people started thinking about light as particles and discrete energy levels and so on led to the development of several optical devices.
OK, and before I get to those, let me show an example where optics was enabling for a fundamental advance in physics. You may know that people before Einstein didn't know how fields are transmitted. They didn't know, for example, how the gravity from the sun can affect the earth so that the earth is stuck rotating around the sun. So they were hypothesizing [INAUDIBLE] and all kinds of strange things.
Then optics allowed [INAUDIBLE] to invent the interferometer. And he used optics to show that this [INAUDIBLE] theory was actually incorrect. I'm not sure if what I'm saying is totally true, but probably it was a big stimulus for Einstein to develop special relativity. So far, optics had been kind of keeping to itself, developing optical devices; yet here we see optics enabling a fundamental advance in physics. Actually, one of the major advances of this century.
Now in addition, Einstein was a very smart fellow, so he worked on other things as well. He developed the theory of stimulated emission, which was used later, let's jump one bullet here, for the invention of the laser. So believe it or not, Einstein was instrumental in that. Yet it was a physicist, [INAUDIBLE], who actually came up with the idea for a microwave oscillator. Oops. I hope we didn't die here. There we go.
OK. So [INAUDIBLE]. And a few years later, he actually figured out that the same idea can work for light. And then the laser was developed, was invented. Actually, I should probably go through these figures. This guy, this is Archimedes. This is Newton. This guy here is Maxwell. Einstein, everybody knows him. Then [INAUDIBLE] is this fellow here. These two are instrumental figures in quantum mechanics: this is Planck on the left and Schrödinger in the middle. Yeah?
AUDIENCE: [INAUDIBLE]
[LAUGHTER]
BARBASTATHIS: Him or him? Oh, left. Would you like to elaborate on that? All right. So in fact, [INAUDIBLE], after he invented the laser, became a member of the MIT faculty. And he was the provost for a few years. I do know that, actually. Did you know that? I found it when I was reading his Nobel biography. Anyways.
AUDIENCE: [INAUDIBLE]
BARBASTATHIS: So, now amazingly, before the laser was invented, there was a Hungarian fellow in England, working for, I believe it was Oxford, but correct me if I'm wrong, who invented a technique which he called holography, which actually allowed the simultaneous recording of the amplitude and phase of light. Of course, at the time lasers did not exist, so the phase of the light was not a very well defined thing.
So Gabor originally [INAUDIBLE] microwaves as a domain for application of his technique. But of course, once the laser was invented, holography all of a sudden became a big deal. So that's my favorite bullet: first of all, I work in holography, and also, Gabor was awarded the Nobel Prize in 1971, which is the year that I was born.
All right. And the advances have actually been continuing. Optics has been used for a tremendous range of applications, which I will go through on the next slide, but it has also been enabling very fundamental advances in science. For example, you may have heard of Steve Chu, who invented something called the optical tweezer. And Ian showed an optical tweezer in the laboratory today. Anyway, Chu did this at [INAUDIBLE] in 1983, and a few years later he also won the Nobel Prize.
Another fellow, whom I shamefully omitted, is [INAUDIBLE], who was a professor at Harvard, right up the street. He essentially developed nonlinear optics, which allows us to do fantastic things like very short pulses, with durations on the order of femtoseconds, and to study very unusual light-matter interactions that otherwise are not possible. He also won the Nobel Prize, in the '80s, I believe.
All right. So the only reason I put up this slide and spent a big chunk of my talk on it is to emphasize how optics and information interplay even at the level of fundamental science. Now, science is a lot of fun, but what about applications, where optics can change the lives of people? Well, nowadays the field is called optical engineering, right? It's actually making things using light instead of only studying things. So I will not repeat yesterday's line that physicists only talk about things. I'll just say that physicists only study things, because I'm a little bit of a physicist myself. All right. So OK.
So [INAUDIBLE] let's see what optics can do to change society. I sort of classify the applications of optics into four major categories, and I put some examples for each one of those. Now, those of you who invest in the stock market know that if a company has optics in its name or description, it's a definite buy. The reason is communications, which is the major application of engineering nowadays; that's not an overstatement. And all these optical devices enable it. Of course, there's also wireless communications, which is a different field and is also very important, a very high growth area. But it's also electromagnetic. Anyways.
So in the optics area: semiconductor lasers, fiber optics, optical routers, optical devices to distribute optical signals, right now between major nodes but very soon all the way down to your house, to your sidewalk. They're a big deal. And there are also even more advanced things, like the quantum communications that Seth was describing yesterday, which are also based on the use of optics. Of course, this is not commercial yet, but it could be in 10 or 20 years. Who knows?
Another big application of optics is data storage. You have all used CDs and DVDs. You know how fantastic the DVD is: you can watch movies and listen to commentary and so on. There are magneto-optical disks, which actually failed in the market, but they remain. And holographic memories, which are trying to compete for their space in the market. All these technologies enable high-capacity storage.
Now, the most traditional application of optics is, of course, sensors, where by sensors I mean everything that has to do with imaging: the classical telescopes, binoculars, microscopes, digital cameras, analog cameras, hybrid cameras, whatever. There are other, more exotic things like displacement sensors, which can be used for a number of measurements: pressure, deflection, and so on and so forth.
And some of these are very classical. Interferometers have been known since [INAUDIBLE] in the 17th century. More recently developed is the atomic force microscope, which is based on optical measurement of displacement, and so on and so forth. So these are sort of general areas of application, but there's also the biomedical area, which is nowadays, again, one of the major application areas for any kind of engineering. And we have all kinds of fiber-optic endoscopes, optical coherence tomography, confocal microscopy, and so on for diagnostics and discovery.
Finally, last but not least, is displays, yet another area where optics has traditionally been very important. Probably the first display was a device called the camera obscura, which is actually a very nasty thing; it's probably the first spying device. It allows you to look into a remote area if you place the lenses appropriately: the remote area just gets imaged somewhere in your room, so you can spy on people across the street or something.
In any case, nowadays we have liquid crystal projectors. Is it liquid crystal or MEMS? [INAUDIBLE] Anyways. So people use liquid crystals and MEMS to make projectors like this one. We have virtual reality displays that are based, again, on optics. Laser [? shows. ?] And probably the most important is the use of lasers to process materials. You can do very traditional things like welding materials, combining materials and so on, but you can also do lithography, as [INAUDIBLE] was telling us yesterday. The reason we have computers and transistors and so on and so forth is optical lithography; there's no doubt about it. Well, [INAUDIBLE] is trying to change all that, but so far this is what we have. And, of course, laser surgery is another big deal. All right.
Let me tell you a little bit about the things that I do in the optical engineering lab at MIT. First of all, I should tell you that my background is in optical memories. But after I came here, I figured I should do something else to keep my interest alive. So I spread out into all these areas. Right now I'm working very heavily in communications; my most recent project is how to fabricate something called photonic bandgap materials.
Now, you probably have not heard of what a photonic bandgap is, but let me tell you that it's a class of materials that allows you to do very strange things with light. One of the things you can do is have light make a 90-degree turn without losing any of it: the light turns 90 degrees and 100% of the light energy actually goes down the turn. Unfortunately, all of these fantastic things so far happen only in computer simulations. So we're working in my lab on how to make it happen for real. We are very excited about that.
We also jumped onto the optical communications, I don't know what to call it, the bandwagon. So we have some activity on optical MEMS; we're actually making a wavelength router using optical MEMS, in collaboration with the Microsystems Technology Lab and Marty Schmidt. And [INAUDIBLE] of my activities are in sensors and imaging. I have a project called collective vision, which is about equipping a very large number of robots with an equal number of very cheap, individually very stupid sensors.
Now, you may ask how a camera can be stupid. Well, if you look at your camcorder, it has 640 by 480 pixels or something like that. Now try to look through a camera with 10 by 10 pixels. You will see essentially nothing. So that's what I call a stupid camera. Yet if we put all these little stupid cameras together, it turns out that you can nevertheless get a very high resolution image of the environment. And even more: if you distribute the sensors appropriately, say in this room, you can get a three-dimensional picture of the environment, where your noses actually stick out in the image. This kind of thing.
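The idea that many coarse, offset samplers can jointly beat any single one of them can be sketched in a few lines. This is my own toy 1-D illustration of the principle, not the lab's actual reconstruction algorithm: each "stupid camera" samples the scene coarsely, but each with a known sub-pixel offset, so interleaving k of them gives a k-times-denser sampling of the same scene.

```python
import math

# Toy 1-D sketch of collective sensing (illustrative only). The `scene`
# function stands in for the environment; each coarse camera takes n
# samples spaced `pitch` apart, but each camera is shifted by a known
# sub-pixel offset. Merging and sorting all samples by position yields a
# sampling k times denser than any single camera provides.

def scene(x):
    return math.sin(2 * math.pi * x)  # stand-in for the real environment

def coarse_camera(offset, n, pitch):
    """One low-resolution camera: n samples of the scene, spaced `pitch`."""
    return [(offset + i * pitch, scene(offset + i * pitch)) for i in range(n)]

def combine(cameras):
    """Interleave all cameras' samples into one finer, position-sorted set."""
    return sorted(s for cam in cameras for s in cam)

k, n, pitch = 4, 8, 0.125                # 4 cameras, 8 samples each
cams = [coarse_camera(j * pitch / k, n, pitch) for j in range(k)]
fine = combine(cams)                     # 32 samples at 4x the density
```

In practice the offsets are not known exactly and the cameras see the scene from different viewpoints, which is what makes the real adaptation and 3D-reconstruction problem hard; the sketch only shows why pooling the sensors gains resolution at all.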
And so we're working on how to adapt the sensors and make them get the best images given the circumstances. We're also working on visual learning: how to use information to train artificial systems to perform something. Now, the application that the systems will [INAUDIBLE] perform is pretty horrible, actually. We're working for the Air Force, so what they want us to do is locate targets. And, well, I will let you imagine what they do with the targets.
But in any case, there are a great many commercial applications for this technology. For example, in entertainment: you could imagine mounting these sensors on the legs of soccer players or basketball players, and as the players move through the field, you can use the information returned by the sensors to reconstruct the game in 3D. So then you're sitting in your virtual reality room, and you actually feel as if you were on the basketball court. These are the kinds of commercial applications that could be enabled by this technology.
All right, let me not take too long. The thing that I will actually talk about today is another kind of imaging, which I call four-dimensional imaging. And I call it so because we're trying to image volumes, really x, y, z, the same kind of images that you get from MRI, for example, and, in addition, spectrum. We're trying to image the color of light.
Now, you say, well, how is this different from a camera, or perhaps two cameras? Here's the difference. If you have one camera, first of all, you get a planar projection of space. So that's 2D. And also the camera does not get real color information; it just gets red, green, and blue, RGB. Real color information would give you the relative intensity as a function of wavelength. So it's really a new dimension.
Now, like every instrument, it needs an application. Being at MIT, we found a fantastic one: some colleagues in cognitive science. They're interested in the [INAUDIBLE] of neurons. So what they do is they cut up the brain of a mouse and take a slice of neurons. They put it on a chip and then activate it electrically.
So there's a strong need to monitor the dynamics of these neurons. We would like to be able to monitor that activity, and this is what the sensor will be doing. So we're very excited about that. We're also working on interferometric sensors and metrology, with applications such as corneal topography, measuring the shape of your eye, and so on and so forth. But again, I will not have time to talk about it.
All right, let me motivate a little bit why we want 3D and 4D vision. First of all, there's no doubt that we're very strongly visual animals. We rely on vision very much; the argument for it is that blind people are very severely handicapped, right? And if you look at the brain, which is probably the most sophisticated instrument that nature gave us, a very big part of it is devoted to vision.
Yet the ability of our brain to acquire the environment is still kind of handicapped. Why? Because the sensors [INAUDIBLE], there are only two of them, and they are pretty much like cameras. Very sophisticated cameras, with huge dynamic range and very good resolution, and so on and so forth. Yet if you look at this picture, you become very severely perplexed, because this is actually an impossible picture.
The reason you get perplexed is that there's nothing wrong with it. It's a planar drawing, and it can look however it wants. Yet your brain tries to interpret it in 3D, and you realize that it's impossible. And there are many, many other examples of illusions that show the deficiency of our brains in processing 3D information. It's not inherent in our brain, in fact; it is just because of the limitations of the sensors that we have, the limitations of our eyes.
So biomimetic systems are good, but, well, when people were trying to make airplanes, for many, many centuries they were trying to imitate birds, and they kept falling, so they would fail. At some point, they invented the jet engine and propellers, and all of a sudden planes became successful. Yet planes still have wings. So my point is that it's good to imitate nature, but beyond some point our ingenuity can actually overcome nature and perform better. Not because we're smarter than nature, but because nature, perhaps, didn't have the needs that we have developed for ourselves. Anyway.
So when we develop an imaging system, we actually try to develop sensors that can overcome the limitations of our eyes. At least this is what I do: develop sensors that can capture the environment in 3D and give us more information. Now, information is the key here. Because if you think about it, what does an imaging system do? Well, it forms an image. But what is an image worth? An image is worth as many questions as you can answer about the object.
So the way I posed it, it refers exactly to the [INAUDIBLE] that my system transfers between the object and the observer. And the space where this transfer occurs can have several dimensions. Your typical camera is two-dimensional, plus this RGB color. Then you can think of various volumetric imaging techniques like tomography, PET, and MRI, and all these things.
You can also think of 2D imaging spectroscopy, which is a very new field; you can actually buy these devices now, but you couldn't buy them three or four years ago. So that's n equals 3. And when you talk about imaging volume and spectrum at the same time, the generalization of imaging spectroscopy to volumes, then we're going to n equals 4, a four-dimensional space.
Now, people have been working for a long time on imaging techniques that allow you to resolve that. How long do I have? Four minutes.
MODERATOR: You actually only have two minutes.
BARBASTATHIS: Two minutes. Oh.
MODERATOR: [INAUDIBLE] you do it.
BARBASTATHIS: All right. So I'll take. I figured four minutes is the standard.
[LAUGHTER]
I should have asked two minutes later.
MODERATOR: Everybody else just happened to ask four minutes [INAUDIBLE].
BARBASTATHIS: Thank you. All right, so I will accelerate a bit. So triangulation is, essentially, the way we figure depth. Now, triangulation is very nice, but it limits you to imaging surface objects, opaque objects. If you can actually look through objects, then triangulation won't work. I mean, it will work, but it will give you a very poor result.
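The relation behind triangulation is the standard stereo one (a textbook sketch with hypothetical numbers, not the specifics of any system in the talk): two cameras a baseline apart see the same surface point at slightly different image positions, and similar triangles turn that disparity into depth.

```python
# Textbook stereo triangulation: two cameras with focal length f,
# separated by baseline b, observe the same point with disparity d
# between the two images. Similar triangles give z = f * b / d. Note
# the method needs an identifiable *surface* point in both views, which
# is why it degrades badly for transparent or volumetric objects.

def depth_from_disparity(focal_mm, baseline_mm, disparity_mm):
    if disparity_mm <= 0:
        raise ValueError("zero disparity: point at infinity")
    return focal_mm * baseline_mm / disparity_mm

# Hypothetical numbers: f = 8 mm, b = 100 mm, d = 0.2 mm gives a depth
# of roughly 4 meters.
z = depth_from_disparity(8.0, 100.0, 0.2)
```

Because depth goes as 1/d, distant points (small disparity) are resolved much more poorly than near ones, another practical limit of the technique.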
So there are other techniques, like coherence [INAUDIBLE], which I will not go into. But if you look at the system here, you realize that there's no lens. So it's a very strange imaging technique, but believe me, it still works. Many people are doing it, and yet it does not use any lenses at all. Then there's confocal microscopy, which fortunately my colleague Peter [INAUDIBLE] talked about yesterday, so I don't have to explain it.
Now, the point is that if you look at traditional optical systems, where you have a train of lenses and at the back you have either the retina of the viewer or a piece of film, then we're actually helpless; there's no way to do depth imaging. What we can do is augment optical systems with, well, guess what? Information technology, with computers. And so all the systems that I showed before, triangulation, interferometry, and confocal microscopy, are based on the fact that the image information is collected by a computer and then post-processed in order to acquire the volume.
All right. Let me skip that one. I was just going to say that the reason I did all this is because as a kid I loved Star Wars. So I wanted to develop things that can be like R2D2. I'm not doing exactly that. I'm just doing the vision part. But anyways.
OK, so let me tell you a little bit about the tools that I'm using. I'm using something called volume holograms. Now, most of you know what a hologram is. You see them in displays at airports, museums, and so on. In fact, the best display in the world is just one block down the street, down Mass Avenue. If you go to the MIT Museum, which I believe is open on Sunday, you can see the best collection of holograms in the world, most of them made by my colleague in the Media Lab, Stephen Benton.
OK, so the person here is the inventor of holography, Dennis Gabor, sitting next to his hologram. Now, the holograms that Gabor invented are known as thin holograms. At least now we call them thin holograms. And we call them so, well, because they're thin, only a few optical wavelengths thick. And their property is that if you rotate your point of view, the image also rotates. And as a result, you get a different perspective. So the hologram appears to be 3D. And in fact, because you have two eyes, each eye looks at a different angle. So therefore, you have the illusion that the image you reconstruct is three dimensional.
Well, the volume holograms do not have this property. When you illuminate one at the correct angle, then you get the reconstruction to come out. When you tilt it a little bit, you get nothing to come out. So this sounds like a serious handicap. But it does enable you to do something: it enables you to use the hologram as a matched filter. Let me explain what I mean by matched filter.
Let's say that the hologram is recorded by a very simple optical wave, a point source. So you take the point source. You pass it through a lens. You collimate it. And then you use a reference wave on top so the two interfere inside the volume, like a little cube. And they form a hologram. Now, as I said before, if the hologram is reconstructed at the appropriate angle, which in this case is horizontal, then it will actually diffract a very significant amount of light. And we call this Bragg matching. So that's fine.
However, if we defocus the point source that reconstructs the hologram, then all kinds of new angular components get into the picture. Now, the central one still reconstructs light. But the off-axis components do not. And therefore, what the hologram does is discard a point source that has been defocused. So that's the basic property that we use in order to utilize holograms as imaging elements.
So if you compare a focused point source and a defocused point source, the focused point source makes it through; the out-of-focus point source does not make it through. So this becomes a little bit reminiscent of the confocal microscope that Peter was describing yesterday, where you put a pinhole in order to pass a focused point source through the system. But when the point source is out of focus, then the pinhole discards it. So the hologram actually does the operation of a pinhole. And we call it the Bragg pinhole.
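The Bragg-selectivity behavior described above can be illustrated numerically. A common first-order model (a Kogelnik-style coupled-wave result in the weak-grating limit) gives the diffraction efficiency of a plane-wave component as a sinc² of its wavevector mismatch from the Bragg condition; the hologram thickness below is an assumption for illustration:

```python
import numpy as np

# Illustrative Bragg-selectivity model: a plane-wave component with
# longitudinal wavevector mismatch dk (rad/m), passing through a
# hologram of thickness L, diffracts with relative efficiency
# sinc^2(dk * L / 2). Note np.sinc(x) = sin(pi*x)/(pi*x), hence the
# division by pi below.

def relative_efficiency(dk, L):
    return np.sinc(dk * L / (2.0 * np.pi)) ** 2

L = 1e-3  # assumed hologram thickness: 1 mm
print(relative_efficiency(0.0, L))            # Bragg-matched: 1.0
print(relative_efficiency(2 * np.pi / L, L))  # first null: ~0
```

A defocused point source contains many such mismatched angular components, most of which fall outside the main lobe and diffract almost nothing, which is why the hologram behaves like the "Bragg pinhole" of the analogy.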
All right. Let me point out a couple of things. First of all, matched filtering is better. I will only take 30 seconds. So matched filtering is better because it actually suits the propagation of light better. The problem is that 3D scanning is still required. So in that sense, this device still operates as a confocal microscope. And one of the potential problems is that the hologram does not diffract all of the light that comes through. You lose a little bit of light.
So then you realize that this could be a problem. Yet nowadays we can make holograms that diffract close to 100% of the light, perhaps 95% or 90%. So that's not a problem anymore. Let me not go through the details. This is the kind of image that you get using this imaging device. This is a piece of wafer where there was a small trench. And the trench appears black here. Why? Because the trench was out of focus, therefore it did not give signal to the detector. The surface gives a high signal because it is in focus, and so on and so forth. Let me not go through those. It's very interesting, but I don't have time.
Now, what we're working on right now is to generalize this technique in order to acquire spectral and volumetric information. The reason we want to do this is to be able to image fluorescent tags, which allow us to monitor various biological effects. The effects could be applied fields that change the fluorescence spectrum, or they could be deformations, or chemical reactions, or whatever. But this way we can monitor neurons, we can monitor proteins, and so on. And we can do it simultaneously in volume and spectrum.
So again, I don't have time to get into the details. I'll just tell you that this happens through a volume hologram recorded and set up in an appropriate way, with a little bit of auxiliary optics down here. What it allows us to do is image resolution volumes. But now the volume is in 4D. So it has delta x, delta y, delta z, and delta lambda.
So we needed to come up with a name for these. The 2D name is pixel. The 3D name is voxel. We didn't have a 4D name, so we invented texel. So it allows us to image texels that are as small as one micron by one micron by one micron in space and one nanometer in wavelength. So that's pretty good.
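As a sketch of what such a texel measurement looks like as data, one can store it as a 4D array indexed by (x, y, z, λ), with the sampling pitches quoted above (1 micron along each spatial axis, 1 nanometer in wavelength); the array dimensions here are made up for illustration:

```python
import numpy as np

# Hypothetical 4D "texel" datacube indexed by (x, y, z, lambda).
# Sampling pitch: 1 micron in x, y, z and 1 nanometer in wavelength,
# matching the resolution figures quoted in the talk.
nx, ny, nz, nl = 64, 64, 16, 32          # made-up array sizes
cube = np.zeros((nx, ny, nz, nl))

dx = dy = dz = 1e-6   # meters per spatial sample (1 micron)
dl = 1e-9             # meters per spectral sample (1 nanometer)

# One texel: the fluorescence signal at one point and one wavelength.
cube[10, 20, 5, 12] = 1.0

# Physical coordinates of that texel (three lengths plus a
# wavelength offset, all in meters):
coords = (10 * dx, 20 * dy, 5 * dz, 12 * dl)
print(coords)
```

Each element of `cube` is one texel; a full acquisition fills the whole array, which is what the one-shot scheme described next maps onto the detectors simultaneously.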
And we can also do this in one shot. So different slices of the object get mapped onto detectors simultaneously. And I will not say anything else because my friend Seth here will have a heart attack. So I'll spare you. OK, so I've finished my research talk. If I may, let me go very briefly through the curriculum changes that were made for optics.
MODERATOR: I guess we need to know that. Absolutely.
BARBASTATHIS: In fact, [INAUDIBLE] I did this last night [INAUDIBLE] because it's a very important question. All these techniques are very new. And in fact, they are new not only in mechanical engineering, but also in other fields. Even physicists sometimes falter in these things. So it's very important to provide the students with a set of classes that introduce them in the proper way.
And again, I'm going to refer to [INAUDIBLE] yesterday, who said that, well, within three weeks in his freshman year, he learned enough geometric optics to do lithography. But the unfortunate thing, or fortunate, actually, because it keeps us in business, is that geometric optics is not enough. So the students need to learn a little bit more.
OK, so optics has been slowly creeping into the undergraduate curriculum. We have this introductory measurement and instrumentation class where we have been developing at least one new optics experiment. And we plan to make it a permanent one, because in this class the experiments change every time it's offered. But in any case, we plan to make this permanent. There's a proposal circulating now among the faculty to re-engineer our entire instrumentation sequence.
So 671 is the first one. And then we have 672. There's a proposal to add a new one, 673. And if this happens, there will be enough room to put in more optics experiments. And not only mundane things like interferometry, but also more spectacular things like atomic force microscopy and so on.
Now, there's a very extensive list of graduate courses. 2717 will be offered as a regular course starting next year. It has been offered as a special course for a while. There's an optical imaging class that Peter [INAUDIBLE] is offering this spring. Ian Hunter [INAUDIBLE] offers advanced instrumentation classes, which are not optics oriented, but they contain optics experiments.
So that's the difference. The first two are actually optics centered, whereas the others have optics [INAUDIBLE]. And then this summer, we also initiated a summer professional class, about which I will not say anything more, otherwise I'll sound like a salesman. And at last, I'm finished. Thank you.
[APPLAUSE]