# Computation and the Transformation of Practically Everything: Current Research (Session 4)


[MUSIC PLAYING]

FISHER: Okay, welcome to the final session. We have three speakers. This is another current research topic, so we'll be highlighting three researchers from CSAIL. We have Professor Erik Demaine, Professor Daniela Rus, and Professor Scott Aaronson. Since we've just come out of a break, I want to remind you to turn off your cell phones. And with that, I'll turn it over to Erik.

DEMAINE: Thanks. So I'm Erik Demaine and I study algorithms and theoretical computer science and pretty much anything that's fun. And in particular today, I want to tell you about the science of origami and the art of origami, and how they relate through computer science.

So, origami starts at least centuries, possibly a millennium, ago. And it used to be about folding very simple things like cranes, or maybe a few cranes out of one piece of material. But these days, it's about folding super complicated 3D forms out of one square of paper, no cuts. Each of these is one square of paper. This piece took, I think, over a year to fold, to give you an idea.

Here's some examples. The top three are all by MIT graduates. This was a challenge to fold a sailboat. You can fold a sailboat with the sails open or closed, or you can fold a sailboat being attacked by a giant kraken, all out of one square of paper, no cuts. That's the crease pattern, if you want to try it out.

And this is all made possible thanks to algorithms and mathematics. In particular, this is one new approach to designing origami, called Origamizer. It's free software. You can all download it. The input, you specify some 3D model you'd like to make, like the Stanford bunny. You get a crease pattern, which you can put in a square, fold it up, and you get this 3D shape, which is exactly your bunny. It just takes about 10 hours to fold, but extremely general and powerful, and pushing the limits of artistic origami.

You can apply it to other materials, as well. This is folding one sheet of steel into that same bunny. This also takes a few hours. It got accelerated a little bit. And you get that. So one sheet of material, you can make any 3D surface you want. So that was Tomohiro Tachi from Tokyo, folding here at MIT.

And I wanted to give you one example of how-- origami obviously has a lot to do with art and sculpture, and there's a lot of interesting mathematics and geometry in there as well, and algorithms. And I like to do both, and a group of us like to do both. And it's really fun to work on the same project, both from an artistic perspective and from a scientific perspective.

I want to give you one example of a project over the last 12 years, called pleat-folding, that has gone from art to math to art to math to art to math to art to math to art to math to art. So. And I have seven minutes.

So here's a very cool folding. You should try it at home. You take a square of paper, fold concentric squares, alternating mountain and valley. The paper folds itself into this saddle surface, which for a long time people believed was a hyperbolic paraboloid. More on that in a moment.

We started playing around with this from the artistic perspective. We came up with an algorithm that, given a polyhedron like a cube, tells you how to join a whole bunch of hyperbolic paraboloids together to make some 3D surface. And so this generates sculptures using mathematics as-- algorithms as a tool to design new sculpture.

You can follow a similar approach, folding concentric circles instead of squares. Here you have to cut a little hole out of the center. You get another saddle surface. Paper folds into that automatically. And so on the scientific side, we wanted to know, what is paper doing? And so we built a simulator.

These are photographs of real pieces of paper and these are simulations. So we're simulating what's going on. Here's an example of folding concentric hexagons and the simulation. Basically the paper wants to bend at the creases and not bend anywhere else. And this is the natural equilibrium that physics finds for you, or we can find by simulation.
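The equilibrium idea can be sketched in a few lines. What follows is a toy 1-D stand-in of my own, not the actual sheet simulator: a strip of ten hinges is forced to turn 90 degrees in total, every joint is stiff except two marked "creases," and gradient descent on the elastic energy finds the equilibrium. All the stiffness values and the penalty weight are made-up illustration numbers.

```python
import math

# Toy 1-D analogue of the crease simulator (illustrative only -- the real
# simulator relaxes a 2-D sheet).  Ten hinges in a strip; bending is stiff
# everywhere (k = 100) except at two marked creases (k = 1).  A soft
# penalty forces the strip to turn 90 degrees in total, and gradient
# descent on the total elastic energy finds the equilibrium.

K = [100.0] * 10           # bending stiffness per joint (made-up numbers)
K[3] = K[7] = 1.0          # the two creases bend easily
TOTAL = math.pi / 2        # forced overall turning angle (90 degrees)
PENALTY = 1e4              # soft-constraint weight

theta = [0.0] * 10         # joint angles, all flat to start
lr = 1e-6                  # gradient-descent step size
for _ in range(200_000):
    s = sum(theta)
    for i in range(10):
        grad = 2 * K[i] * theta[i] + 2 * PENALTY * (s - TOTAL)
        theta[i] -= lr * grad

# The creases absorb almost all of the 90-degree turn; the stiff joints
# barely move -- "bend at the creases and not bend anywhere else."
print([round(t, 3) for t in theta])
```

The same principle, minimizing bending energy subject to where the creases are, is what the real simulation does on a full two-dimensional sheet.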

Now, once you have a virtual model of a physical piece of paper, you can make a physical sculpture of the virtual model of the physical piece of paper. So this is with 3D printed spheres to make a 1-meter static version of that pleated hexagon. So that's back to the sculpture.

On the mathematics side, we've been folding hyperbolic paraboloids, probably thousands of them, over about a period of 10 years. And people have been folding them since the late '20s at the Bauhaus. But it turns out it's impossible to fold a hyperbolic paraboloid. If you take this crease pattern, you cannot make anything, it turns out. This was quite a surprise, because we've made a lot of them and it looked pretty good. But some things that you think that may exist in real life really don't exist mathematically.

It turns out what you need to do is add extra creases in there in order for this thing to fold up, and then you get a nice 3D model and everything works out. And so reality must be cheating a little bit there.

Back to the art side. These are some sculptures in the permanent collection in MoMA with my dad, Martin Demaine. And these are based on the concentric circle idea, which does seem to exist. So we've switched mostly to curve creases. It's a bit unusual for the origami world.

Here, instead of a single circle, we actually joined two circles together to make a 720-degree loop. And you get a lot of really cool twists. These are, again, the natural equilibrium forms of those pieces of paper.

On the more scientific side-- more on the architecture side-- we're finding that by making small changes to the crease patterns, we can get a lot of control over what 3D shape results as the equilibrium form. So the idea is, if you want to build some pavilion, you just print out a sheet, make it self-fold, and it pops into whatever 3D form you want.

Back on the sculpture side, we've been playing lately with combining several of these concentric circle models together, joining them at just a couple of key points, and you get even more control over the 3D form that results.

Back on the math side-- some alternating back and forth here; that's the point. We study-- this is more of a mathematical idealization, but you know, it's really tedious to fold all of these pleats. Is there some way that I could fold mountain, valley, mountain, valley, mountain, valley, n of them, in less than n operations? You know that if you make n folds-- if you fold a piece of paper in half n times-- you get about two to the n creases, but you get some weird pattern of mountains and valleys that's not alternating. And I really want alternating.
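The halving pattern just described is easy to generate. Here is an illustrative sketch in Python (my own, not Demaine's construction): each fold-in-half inserts a valley crease in the middle and lays a mirrored, flipped copy of the existing pattern on the other half, and for two or more folds the resulting mountain/valley sequence is never alternating.

```python
def creases_after_halving(n):
    """Crease pattern after folding a strip in half n times, always the
    same direction, then unfolding.  'V' = valley, 'M' = mountain."""
    s = []
    for _ in range(n):
        # each half-fold adds a valley in the middle, plus a mirrored,
        # flipped copy of the existing pattern on the second half
        s = s + ['V'] + ['M' if c == 'V' else 'V' for c in reversed(s)]
    return s

def is_alternating(seq):
    return all(a != b for a, b in zip(seq, seq[1:]))

pat = creases_after_halving(4)
print(''.join(pat))          # 2**4 - 1 = 15 creases, the "weird pattern"
print(is_alternating(pat))   # False
```

This is the classic paperfolding (dragon-curve) sequence, which is why plain halving can't deliver the pure mountain-valley pleats the talk asks for.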

Turns out you can get pretty close. With n folds, you can get about 1.33 to the n mountain-valley pleats, and that's close to optimal. It's kind of fun, although not practical-- a fun mathematical diversion. This is one of our latest sculptures to appear at the Smithsonian.

David Huffman-- I imagine most of you have heard of David Huffman in the context of codes. Huffman codes are in every MP3 player, every compression device, every cell phone, I guess. And David Huffman was a professor at MIT for a while.

But lesser known is that he was one of the few pioneers of curved-crease paper folding. And he designed these three models in particular. Most people have never seen them. And we're working with his family to document all of his work and to reverse engineer how he designed these models-- figuring out what his crease patterns were and how he came up with these crazy foldings, because they're really awesome.

If you were at the banquet last night, you saw this video. Otherwise you can watch it on our web page. This is combining glass-- both my dad and I are glassblowers-- with paper folding. And this sort of gives you an enclosure that constrains how these forms open up and so you get more control over the shape.

And one very recent project is combining that paper on the outside of glass, to make it a little more exciting. So that's the sculpture side. I think that's the end of the little tour from art to math to art to math to art to math to art to math to art to math to art. And we find this a really powerful way to work, because if we get stuck solving a math problem, we can just build a sculpture to illustrate why the math problem is hard.

And then often we'll want to build some sculpture-- you know, wouldn't it be really cool if we could build x? And then, wait. Does x exist? Is it possible to build x? And then we have to do new mathematics in order to figure out how to build that sculpture. So we get a lot more productive because we have the flexibility to jump back and forth between these worlds.

I wanted to show one more fun thing, which you can play with at home, at ErikDemaine.org/maze, I think. So this is a maze folder. So if you want to fold this three-dimensional maze, that's the crease pattern. Just print that out and play around. You can make whatever maze you like here.

And you can also type in your favorite message, like "MIT 150 rocks." And then there's your crease pattern. Start with a large piece of paper, I recommend, and you can do it. I mean, it is really possible to fold these things. Here's a simple example, just folding a 5-by-4 maze. I would love to make one big enough for a rat to run around in and see what happens.

And yeah, here's one last sculpture. This is a crease pattern that has little marks on it. And when you fold it down, you get simultaneously "yes" in three dimensions and "no" in two dimensions. So I guess we're a little conflicted about the meaning. And that's it. Thanks.

[APPLAUSE]

FISHER: Thank you, Erik. Our next speaker is Daniela Rus. There you go.

RUS: So Erik gave you a wonderful survey of these amazing objects that he can make by hand, by folding. Now, what if we could take those hands away? What if we could even imagine these sheets making the shapes that Erik has painstakingly folded over a period of up to a year? That would actually take us to a stage where we would have sheets that organize themselves into objects-- objects that program their own shape. And that is a step closer to our dream of programming more than just bits, to our dream of programming matter.

So what would it take to achieve this dream? Well, two things. On one hand, we have to make better materials. We have to make smart materials-- materials that embed actuation, sensing, computation, communication, and all the electronics infrastructure that enables the control of these sheets. And on the other side, we need the algorithmic and control basis for achieving shapes.

So in my research group, we have actually been studying this problem. We have created sheets inspired by origami, in collaboration with Erik. And here you can see a sheet that has creases. Along each crease there is an actuator, and we also have connectors and other electronics infrastructure. This sheet can be programmed, in this example, into two different shapes: one is a boat, the other is a plane. The programs are generated automatically, using a suite of planners that decide when each actuator on the sheet gets to move, and in coordination with which other actuators.

So there is our object. Now you can do other objects. You can do a whole suite of functional objects using origami inspiration. In this example, you'll see an origami worm that is made out of a specific crease pattern. And the little springs on the body of the robot are actually shape memory alloy actuators that enable this robot to function. This robot was printed out of a single sheet of paper and it was done in a matter of a couple of hours.

So-- let's see. I'm having trouble with technology here. Making shapes, or programming shapes, by origami is just one way of envisioning programmable matter. More generally, programmable matter is achieved when a collection of physically connected robotic modules has the ability to self-organize into a functional object. And this object could be a robot designed for a specific task. For example, a snake-like robot, which would be the optimal way of traversing a tunnel; or a slinky-like robot, which would be the optimal way of climbing stairs; or a functional object, such as a dog that might turn into a couch when you're done playing with it.

This idea takes us to the vision of making machines that have flexible architectures. Machines that have the ability to synthesize a third hand or a second head or an alligator, according to whatever the manipulation, the sensing, or the task needs of the user are. And this is extremely challenging and exciting. The key to success in this field is designing bodies that have the ability to achieve this kind of shape transformation, but also the algorithmic basis that generates the control structure for how the modules move.

And so this has been extremely exciting to me, but also deeply personal, because I grew up reading and dreaming about the Barbapapas family. Now, the Barbapapas family is a family of blobs. And these blobs can change their shape according to whatever the task they have to do is. So they can turn into cutting tools when their house is invaded by a giant bean, or they can turn into containers for collecting things, and even objects for cooking. So to me, working on programmable matter is like achieving a childhood dream of building objects that can change shape like the Barbapapas family I used to dream about.

So there are other approaches to creating programmable matter besides folding. And another one I wanted to explain today is the idea of smart sand. So now imagine I give you a bag of smart particles-- again, materials that embed actuation, computation, communication, and connection-- and you tell your bag, make me a rabbit. After the bag computes a little bit, you reach in the bag and you pull the rabbit out. When you're done playing with it, you throw the modules back in the bag. They return to the collective group for later use for a different purpose.

And so we haven't made smart sand yet, but we have made some smart rocks, and the suite of algorithms that programs shapes with these rocks, following a sculpting-like process. And in this video, you can see our approach to making shapes with smart rocks. Each of the cubes in this video is a computer. Inside the computer there is a switchable magnet, which is on in one configuration and off in another. Each computer also has point-to-point connections and communication with its neighbors. And when the communication system detects a neighbor, it can trigger the connectors on or off.

Now, once you have assembled this block of virtual marble, or virtual particles, you can tell the block, make me a dog. And the modules talk to each other, trying to decide which ones are in the final shape and which are not. Eventually, the decision process converges, and the modules reprogram their connectors to be off if they are no longer part of the final object. So here's the dog you obtain by peeling off the spare material.
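Schematically, the sculpting process looks like the sketch below. This is a cartoon of the idea, not the actual distributed protocol: the real system works in 3-D and makes the membership decision by local message-passing, whereas here a hypothetical 2-D "dog" silhouette is just a lookup set.

```python
# Cartoon of the "sculpting" process on a 4x4 block of cube modules.
# GOAL is a made-up 2-D silhouette; in the real system the decision is
# reached by modules talking to their neighbors, not by a global lookup.

GOAL = {(0, 1), (1, 0), (1, 1), (1, 2), (2, 1), (3, 1)}

block = {(x, y) for x in range(4) for y in range(4)}  # the raw block

# Each module decides whether it is part of the final shape and sets its
# (electropermanent-magnet) connector accordingly.
connector_on = {cell: cell in GOAL for cell in block}

# "Peeling off the spare material": modules whose connectors are off
# simply detach, leaving only the goal shape.
shape = {cell for cell in block if connector_on[cell]}
print(sorted(shape))
```

The interesting engineering is entirely in making that membership decision converge with only neighbor-to-neighbor communication; the peeling step itself is mechanically trivial once the connectors are set.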

So we have not achieved smart sand yet, but we have moved from smart rocks to smart pebbles. The smart pebbles are 1 centimeter in diameter-- very small computers. And the key technology that makes this work is something we call the electropermanent magnet. This is what we use for programmable connections, for communication, and also for transmitting power between neighbors. It's the C component in my picture.

So I have shown you several examples of programming shape. And there are some wonderful applications in this field besides making all the exciting objects that Erik has been able to do. For instance, making 3D topological maps so you know where there is a hill on your bike ride, or in general, making on-demand tools and objects.

But the challenge is going beyond shape. The challenge is imagining these computers whose material properties you might be able to program-- for example, mechanical properties like friction and strength, and also optical properties. Perhaps someday we will be able to have computers that will become our invisibility cloaks.

Ultimately, I think this field will have a great impact on manufacturing. I imagine the RoboKinko's of 2020, where I go not to print a poster, but to print a robot. I tell them what my robot's payload is-- perhaps it has to carry a radon sensor. I tell them what the endurance of the robot is, and what space it has to work in. And a day later, I go and pick up my robot, along with its programming and user interface environment.

The robot is fabricated in layers, following the same processes we have seen: either the self-folding sheets, where we build flat sheets with embedded electronics that have the ability to self-assemble, or some subtraction-like process. And ultimately, this new way of making machines will have a deep impact on how we think about computation.

Ultimately, in the field of programmable matter, resources such as actuation, sensing, internal state, and external state become first-class citizens, just like time and space. And what we will need are profound advances in how we design computers, and in theoretical models of computation that take all these resources into account. Thank you.

[APPLAUSE]

FISHER: Thank you, Daniela. So our final speaker of the session is Scott Aaronson. And at the conclusion of Scott's talk, Victor will have some final words to close the symposium.

AARONSON: Great. Thank you. So let me apologize in advance for a lack of origami in this talk. There will be time travel and quantum computers, but it's really no match.

So Moore's Law-- I mean, you've all seen this. The number of transistors per chip, for example, has doubled about every two years-- these days mostly just because of the increasing number of cores on a chip. But this is clearly one of the major things driving the progress of technology. So it raises the question: if we extrapolate into the future, where is this going to lead us?

So one obvious possibility, which I think was already alluded to in Daniela's talk, is the robot uprising. You know, Google may just become sentient and instruct all of the computers connected to the internet to enslave their owners, and so forth. Okay, this is actually, I think-- at least from a theoretical computer scientist's perspective-- one of the less interesting things that could happen in the future.

And why do I say that? Well, because even a killer robot, on the inside, it's still fundamentally just a Turing machine. So a Turing machine is this theoretical model of computation laid down by Alan Turing in the 1930s. And it pretty much describes the internal workings of every computer, every Mac, PC, iPhone, and so on, that we have today.

And furthermore, we know that Turing machines have some fundamental limitations. I mean, Turing started out by showing that they can't solve the halting problem, for example, but we believe that they have even stronger limitations than that. One of the most famous is given by the conjecture that P is not equal to NP. Okay, so P is basically the class of all problems that a Turing machine-- or, let's say, a modern computer-- can solve efficiently, in an amount of time that increases polynomially with the size of the problem.

NP stands for non-deterministic polynomial time. It's a dumb name, but it's the class of problems for which, if someone gives you the answer, then you can recognize it efficiently. And the question is, are these classes equal? All of our ordinary experience with the world would lead us to conjecture that they're not.

In other words, there are problems where you can recognize an answer if someone shows it to you, but nevertheless finding the answer yourself could be astronomically hard. A famous example is the Hamiltonian cycle problem. I give you a graph with a bunch of connections between vertices, and the question is, is there a way to visit each vertex exactly once? Is there a path that does that?

If someone shows you the path, then of course you can check: okay, indeed there was one. But if you don't know the path in advance, and imagine that there were thousands of vertices-- well, then trying all of the possible paths one by one would just take an astronomical amount of time. The sun would go cold before you'd made a dent in the problem.
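That asymmetry-- searching is brute force, checking is fast-- can be seen in a few lines of Python. This is my own illustrative sketch on a small made-up graph, not anything from the talk:

```python
from itertools import permutations

def find_hamiltonian_cycle(n, edges):
    """Brute force: try vertex orderings one by one -- (n-1)! of them,
    which becomes astronomical as n grows."""
    adj = {(u, v) for u, v in edges} | {(v, u) for u, v in edges}
    for rest in permutations(range(1, n)):          # fix vertex 0 first
        path = (0,) + rest
        if all((path[i], path[(i + 1) % n]) in adj for i in range(n)):
            return path
    return None

def verify_cycle(n, edges, path):
    """The NP part: checking a claimed cycle takes only linear time."""
    adj = {(u, v) for u, v in edges} | {(v, u) for u, v in edges}
    return (path is not None and sorted(path) == list(range(n)) and
            all((path[i], path[(i + 1) % n]) in adj for i in range(n)))

edges = [(0, 1), (1, 2), (2, 3), (3, 0)]  # a 4-cycle
cycle = find_hamiltonian_cycle(4, edges)
print(cycle)                          # (0, 1, 2, 3)
print(verify_cycle(4, edges, cycle))  # True
```

At n = 4 the search is instant; at a few dozen vertices the (n-1)! orderings already outrun any computer, while the verifier stays linear.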

So this is a famous example of what's called an NP-complete problem. And if P is not equal to NP, then there can be no efficient algorithm on any present-day computer to solve this problem. So this is one of the greatest open problems, I would say, in all of mathematics and science. You can tell, because it's done a cameo on The Simpsons.

But, assuming we all believe that P is not equal to NP, it raises an extremely interesting question for me, one that's driven a lot of my research. Which is: is there any feasible way to solve these problems that is consistent with the laws of physics? So this is now not just a mathematical question, but also a question about what types of computation the laws of physics permit.

So you can think of some hypothetical things that would certainly make these problems tractable. So let me run you through some of my favorite examples. First one is what I call the relativity computer. This was a very simple proposal. You would just start your computer working on some very hard problem, maybe an NP complete one, for example. Then you would leave your computer on Earth.

You would board a spaceship which is traveling at very close to the speed of light, go on a journey. You come back to Earth, billions of years have passed. Civilization is gone. All your friends are long dead, but you've got the answer to your computational problem, okay?

It's interesting to ask: what is the problem with this proposal, besides the fact that all your friends are dead? Well, the answer turns out to be the amount of energy that you would need to accelerate to that close to the speed of light. If you want an exponential speed-up, it turns out you actually need to accelerate to exponentially close to c, which implies that you need an exponential amount of energy, which in turn implies that your spaceship has to be exponentially large. So you're still paying an exponential cost. You're just pushing it to a different place.
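To make the accounting concrete, here is a back-of-the-envelope sketch (the ship mass is a made-up number): a time-dilation factor gamma lets the computer back home run gamma times longer per unit of ship time, but the ship's kinetic energy is (gamma - 1)mc^2. Demanding an exponential speedup gamma = 2^n therefore demands exponential energy, and a velocity exponentially close to c.

```python
# Back-of-the-envelope numbers for the "relativity computer".
# Assumptions: the ship mass m is made up; gamma = 2**n is the desired
# exponential speedup; 1 - v/c is taken to first order as
# 1 / (2 * gamma**2) to dodge floating-point rounding near v = c.

m = 1.0e5        # kg, a hypothetical spaceship
c = 2.998e8      # m/s, speed of light

for n in (10, 20, 40):
    gamma = 2.0 ** n
    gap = 1.0 / (2.0 * gamma ** 2)       # how close v/c must get to 1
    energy = (gamma - 1.0) * m * c ** 2  # required kinetic energy, joules
    print(f"n={n}: 1 - v/c ~ {gap:.3e}, energy ~ {energy:.3e} J")
```

Both columns blow up exponentially in n, which is exactly the point: the exponential cost hasn't disappeared, it has moved into the fuel bill.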

And this is an interesting example of how when you take all of what we know about physics into account, what looks like a way to solve NP complete or other intractable problems quickly may actually vanish. We see that again and again.

Another of my favorite examples is what I call Zeno's computer. This would be a computer that does the first step in one second, the second step in half a second, the third in a quarter second, the next in an eighth, and so on. So after two seconds, it's done an infinite number of steps. Okay, well, what's the problem with this? Why don't we build it?

Well, people actually do more or less try this, when they try to overclock their chips. But the problem when you do that is that your chip may melt. It may overheat. And the faster you want to run your chip, the more energy you need to cool it. And if you tried to run your chip faster than one clock cycle per Planck time, which is about 10 to the minus 43 seconds, then no one knows exactly what would happen.

But according to the best current physical theories, the best guess is that you would need an amount of energy which would exceed the so-called Schwarzschild bound, which then actually implies that your computer would collapse to a black hole. So I like this as sort of nature's way of telling you that you can't do something.
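The arithmetic behind this is cute. A quick sketch (using the standard Planck-time value of about 5.39 x 10^-44 seconds): the step times 1, 1/2, 1/4, ... sum to 2 seconds, yet the step duration crosses the Planck time after only about 144 halvings.

```python
PLANCK_TIME = 5.39e-44   # seconds, approximately

# Zeno's computer: step k takes 2**-(k-1) seconds, so the total running
# time converges -- the geometric series 1 + 1/2 + 1/4 + ... sums to 2.
total = sum(2.0 ** -(k - 1) for k in range(1, 60))
print(total)             # ~2.0

# But count how many halvings fit before a single step is shorter than
# the Planck time:
t, steps = 1.0, 0
while t > PLANCK_TIME:
    t /= 2.0
    steps += 1
print(steps)             # 144
```

So "infinitely many steps in two seconds" runs into the Planck scale after step 144 or so, long before infinity.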

Okay, another of my favorite examples is the time travel computer. If we had closed timelike curves, then we could create a situation where, just in order to avoid a grandfather paradox-- in order to avoid a logical inconsistency-- nature would be forced to solve some very hard computational problem, including, for example, an NP-complete problem. A simple example would be if you went back in time and told Shakespeare what plays he was going to write; then he wrote them, and so then you knew what they were, and so on. Somehow this really hard problem of writing Hamlet gets solved, and yet no one ever does the work.

So this would be a great thing if we had it. Actually, I even wrote a paper about this with John Watrous, pinning down exactly what you could do with this closed-timelike-curve computer. It turns out to be PSPACE, which we believe is actually a somewhat larger class than NP. Really, don't tell my tenure committee that I wrote this paper. But okay. Some of them are probably here.

All right. So now I want to move on and talk about quantum computers. This is something that on its face sounds almost as science fiction-y as the other stuff I've talked about. And yet, there are experimental groups all over the world that have been and are trying very hard to actually build quantum computers. So what is this? Well, this is a hypothesized machine that would exploit the laws of quantum mechanics to solve certain problems dramatically faster than we know how to solve them with any existing computer.

And the reason it could do that is that, according to quantum mechanics, the state of a physical system-- say, of n bits-- is actually a superposition over all of the possible configurations that those bits could be in. Which means that to specify the state of n particles, you may actually have to write down a vector of 2-to-the-n complex numbers, one for every possible configuration of the particles.

So according to quantum mechanics, which has been our reigning physical theory since the 1920s, the amount of information in even a simple system is tremendous. It can be much, much greater than classical physics would lead you to believe. And so maybe we could exploit that to solve some hard problems.
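The blowup is easy to make concrete. Here is a sketch using NumPy (the uniform superposition is just one convenient example state): the state of n qubits is a vector of 2^n complex amplitudes, so even 20 qubits already need about a million numbers.

```python
import numpy as np

# State of one qubit after a Hadamard gate: equal amplitudes on |0>, |1>.
h = np.array([1.0, 1.0], dtype=complex) / np.sqrt(2)

def uniform_superposition(n):
    """Tensor together n single-qubit states: the vector has 2**n entries."""
    state = np.array([1.0], dtype=complex)
    for _ in range(n):
        state = np.kron(state, h)
    return state

for n in (4, 10, 20):
    amps = uniform_superposition(n)
    print(n, amps.size)   # 16, 1024, 1048576 amplitudes
```

The squared amplitudes always sum to 1, as probabilities must; the catch, of course, is that measuring the system only ever reveals one of those 2^n configurations, which is why exploiting this blowup algorithmically is subtle.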

In particular, Peter Shor, who's in applied math here today, showed that in principle, a quantum computer could efficiently factor large integers. Unfortunately, what we've actually done to date-- after groups all over the world, including here at MIT, have been working on implementing quantum computers using ion traps, nuclear magnetic resonance, and other proposals, after about 20 years of work-- is to verify, using a quantum computer, that with high probability 15 is equal to 3 times 5.

So just the last thing. In some recent work with my student Alex Arkhipov, we just this year published a long paper about a new proposal for a very rudimentary type of quantum computer, which would be built entirely out of what are called passive optical elements, like beam splitters. So you would just generate single photons, pass them through beam splitters and mirrors and so forth, and then measure where the photons end up. You do all of this very reliably, but we don't require any explicit interaction or coupling between pairs of photons. We thereby bypass one of the main technological obstacles to building a quantum computer.

Now, we have no idea how to factor numbers or break cryptographic codes using this type of quantum computer. But the main point of our paper is to give evidence that even this rudimentary type of computer can solve some problem-- a problem of sampling from a certain probability distribution-- that, under plausible conjectures, is intractable for any classical computer. And there are several experimental groups currently trying to implement this experiment, at least with four or five photons. So I'm looking forward to seeing that.

Okay, so a summary. From a theoretical standpoint, modern computers are all kind of the same: they're all just polynomial-time Turing machines. Now, mathematically, we can envision computers that vastly exceed those-- we just have to imagine into existence a closed timelike curve or whatever.

But using quantum computers, we now think that actually, even in the real physical world, we could exceed the capabilities of polynomial-time Turing machines by a little bit-- do some things like factoring that are intractable for any present-day computer. But actually realizing that remains an enormous challenge. And I hope that things like linear optics will help provide a path toward that. So thank you.

[APPLAUSE]