Computation and the Transformation of Practically Everything: Current Research (Session 1)
LYNCH: Okay, I'm very pleased to welcome everybody to our first session on current computer science research. Our speakers in this first session are Russ Tedrake, Anant Agarwal, and Shafi Goldwasser. They're all faculty in our Computer Science and Artificial Intelligence Laboratory, in the EECS department.
Russ is going to begin. He's the X Consortium Associate Professor of EECS. And he'll speak about new research in robotics.
TEDRAKE: Thank you very much. Okay, so thanks for coming. I'm actually enamored of the types of models that Maria talked about in the last talk -- the complexity of the models, yet the structure and basic properties that emerge. And I want to talk with you for a few minutes about the role computer science might have to play in decision making inside models of that type.
Imagine we could make a few control decisions to affect the long-term dynamics of a model like that. I'm going to talk about relatively much smaller-scale events. Let me start with a motivating example: wind farms.
So imagine we want to make wind turbines very efficient. It's extremely difficult, because wind turbines are constantly faced with an incredibly difficult incoming flow field, with a lot of variability, which causes a lot of transients and post-stall dynamics on the blades. They get that partly from variability in the wind, but also from the wakes of upstream turbines.
This is a particularly dramatic picture showing the effect that an upstream turbine can have on these downstream turbines, which you can just barely see in here. Right? That makes the operating regime of these blades incredibly complicated. And it dramatically affects the efficiency of our turbines and the sparsity or density of the wind farms we can create.
One potential solution is to make intelligent wind turbines, which actively sense the flow, choose the pitch angle of their blades, for instance, and make conscious decisions in order to improve their efficiency.
Now, I think the problem, the challenge, of doing good control design inside models of this complexity is one of the richer, technically deep problems that I'll probably ever work on. I've found a really fun way to work on it in our group. And it might not be what you think. We actually-- I'll show you some wind turbines at the end, but let me start with some robotic birds. Okay?
So we've been building these two-meter-wingspan autonomous flapping birds. Okay? They fly pretty well around campus. And before you stop and say this looks like a pretty different system, let's just remember that birds are actually incredibly talented, and they get exceptional performance in the dynamic regimes that we care about -- the ones that the wind turbines have a hard time with. Okay?
So imagine a yard bird here, a cardinal, just landing on a perch. Happens any day of the week. These guys very quickly go into a very high-angle-of-attack regime, where their wings go to a very high angle relative to the oncoming flow, putting themselves into post-stall, very nonlinear, very transient dynamics -- exactly what the wind turbines experience.
And they do it with such prowess. It's amazing. Right? So they stop incredibly efficiently compared to a modern aircraft. And they do it with enough accuracy to tag a branch, to land on a perch. Right?
Modern aircraft, our best engineered systems, by contrast, come in-- if they want to stop on a short runway, like on a carrier-- an F-14 comes in at a very, very conservative angle of attack, not getting anywhere near the effective drag it's capable of, just because we're so far unable, with our engineered systems, to go into that very complicated flow regime and make good control decisions.
Okay. It's a computer science problem. The models that we potentially have, if they exist, of these very complicated flow regimes -- the types we were seeing in the last talk -- can take a long time to compute: days, weeks. Sometimes they don't exist at all for very flexible interactions between the wing and the air.
So what we're doing in my group is using machine learning and nonlinear system identification techniques to build efficient but approximate models of the dynamics -- models we can do inference on, from data.
And then we've got a set of nonlinear feedback design tools -- very computational, nonlinear feedback design. The complexity of these models is beyond what we can attack analytically, but we now have ways to compile provably good feedback controllers for these complicated regimes through a combination of randomized motion planning and convex optimization. Okay?
And then, because we used an approximate model to begin with, and because of the constantly changing conditions in the world, we overcome those difficulties by learning and continuing to adapt the controller online.
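To make that pattern concrete, here is a minimal sketch -- purely illustrative, not the actual pipeline described above -- of the loop of fitting an approximate model from logged data, deriving a feedback gain from it, and continuing to re-fit as new data arrives. The linear least-squares model, the LQR gain, and the toy "true" dynamics are all stand-in assumptions for the much richer machinery in the talk.

```python
# Minimal sketch (not the actual system described in the talk): fit an
# approximate linear model of unknown dynamics from data, derive a feedback
# gain from it, and keep re-fitting the model online as new data arrives.
# The toy "true" dynamics below are an illustrative assumption.
import numpy as np

def fit_linear_model(X, U, Xnext):
    """Least-squares fit of x_{t+1} ~ A x_t + B u_t from logged data."""
    Z = np.hstack([X, U])                          # regressors [x, u]
    Theta, *_ = np.linalg.lstsq(Z, Xnext, rcond=None)
    n = X.shape[1]
    return Theta[:n].T, Theta[n:].T                # A, B

def lqr_gain(A, B, Q, R, iters=200):
    """Discrete-time LQR gain by iterating the Riccati recursion."""
    P = Q.copy()
    for _ in range(iters):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
    return K

def true_step(x, u, rng):
    """Stand-in for the hard-to-model 'real' dynamics, plus process noise."""
    A_true = np.array([[1.0, 0.1], [-0.2, 0.95]])
    B_true = np.array([[0.0], [0.1]])
    return A_true @ x + B_true @ u + 0.01 * rng.standard_normal(2)

rng = np.random.default_rng(0)
Q, R = np.eye(2), 0.1 * np.eye(1)
X, U, Xn = [], [], []
x = np.array([1.0, 0.0])
K = np.zeros((1, 2))                               # start with no feedback
for t in range(300):
    u = -K @ x + 0.05 * rng.standard_normal(1)     # act, plus a little exploration
    x_next = true_step(x, u, rng)
    X.append(x)
    U.append(u)
    Xn.append(x_next)
    if t > 20 and t % 50 == 0:                     # periodically re-fit and re-plan
        A, B = fit_linear_model(np.array(X[-200:]), np.array(U[-200:]),
                                np.array(Xn[-200:]))
        K = lqr_gain(A, B, Q, R)
    x = x_next
print("final state (should be near zero):", x)
```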
So let me show you a few examples. This is an airplane that we've now been able to make land on a perch like a bird. Okay? It's just a simple, fixed-wing airplane, but it executes the type of maneuver that we see out of a bird, and is able to land very reliably on a piece of string that we run across the lab. That's slowed down about 11 times.
It's in a very complicated flow regime, like the bird. And because it goes into this very complicated flow regime, it achieves about four times more drag. It stops dramatically faster than, for instance, the best short-runway landing of the X-31 research vehicle. Okay. That's just one example of trying to be like a bird and building a control system that's capable of operating in those regimes.
We've been building things that look more like birds -- flapping plates -- and we've actually built these ornithopters where we're trying to actively sense the flow and operate inside it. And we're able to learn and adapt very efficient controllers in just a few minutes of running on the real robot, as opposed to simulating for a long time on a computer.
We're very serious about learning from the birds. This is a picture from Harvard of a pigeon with a camera on its head, some motion capture markers, and a tracker on its back, flying at very high speeds through complicated obstacle fields. And we're collecting all this information about its dynamics, about its kinematics, about where it's looking, and trying to understand how the heck birds are doing all the things they're doing.
Now we're trying to make UAVs, unmanned aerial vehicles, that can fly through those same forests at comparable speeds. And we're actually applying this to the wind turbines. Okay? So we built a small wind turbine in a small wind tunnel in the lab. And using the exact same principles that we started developing for our robotic birds, we can show that for a wind turbine with no model of its aerodynamics -- we can superimpose the result on a model that we built -- in just a few moments, less than two minutes of optimization, the power output of the wind turbine goes up dramatically.
This is about a minute and a half here. Compared to what we would have gotten if we had just taken the model of the wind turbine from the literature and tried to put it at its peak operating condition according to that model, we're able to get dramatically more power output.
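As a purely illustrative sketch of what "optimizing the power output with no model of the aerodynamics" could look like, here is a toy model-free tuner that perturbs a single operating parameter, measures power, and climbs the measured gradient. The quadratic "power curve," the parameter, and the rates are all made-up assumptions, not the method used on the actual turbine.

```python
# Illustrative sketch only: model-free online tuning of a single operating
# parameter (e.g., blade pitch or tip-speed ratio) to maximize measured power.
# The "power curve" below is a made-up stand-in for the real, unknown plant.
import numpy as np

rng = np.random.default_rng(1)

def measured_power(theta):
    """Unknown-to-the-controller power vs. operating parameter, plus sensor noise."""
    return -(theta - 7.2) ** 2 + 50.0 + 0.5 * rng.standard_normal()

theta = 5.0            # initial guess, e.g. the textbook operating point
step = 0.3             # perturbation size
rate = 0.02            # gradient-ascent rate
for k in range(200):   # roughly a couple of minutes of measurements
    # Finite-difference estimate of the power gradient from two noisy measurements.
    p_plus = measured_power(theta + step)
    p_minus = measured_power(theta - step)
    grad = (p_plus - p_minus) / (2 * step)
    theta += rate * grad
print(f"tuned operating point ~ {theta:.2f}, power ~ {measured_power(theta):.1f}")
```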
Okay. So these are just a few scattered results so far. But I want you to see that this spans a range of different geometries, different Reynolds numbers, fairly different flow conditions, and the solutions -- the feedback solutions -- are very different. It's only the computational principles underlying the decision making that are the same. Okay?
And I think feedback for fluid dynamics is a great challenge going forward in the next few years, many years potentially, because like I said, the complexity of the models we're dealing with is just beyond anything we can do with analytical control design.
And I want to thank the students that helped make that happen, and take any questions.
LYNCH: Okay. Our next speaker is Anant Agarwal, who's head of the Angstrom parallel computing project. And he'll talk about new research on parallel computing.
AGARWAL: Thank you, Nancy. Good morning. These are really exciting times. About a year ago, I took my daughter to Best Buy. And she wanted a computer. So I saw this computer on sale. And I said, hey Anisha-- she's 10-- how about this computer?
She looks at it and she says, Dad, it's got only two cores. My brother has four. I want this other one. So that's when I realized that parallel computing is finally here.
So if you look at-- and this happened pretty recently-- until about 2003, 2004, all the computers had a single processor. But around 2005, the world really had a sea change in computing, where we went from single processor to two processors in a chip to four. Startups today have come out with chips with 64 processors in them.
And our vision is that within a few years -- maybe by 2015 to 2018 -- we should have single chips with 1,000 processors in them. Okay? This will signify a sea change in how we do things.
In terms of how this will impact us, we all like video conferencing. I like to talk to my parents. And if you look at your iPhone or a little laptop computer, to have high quality video conferencing, we would need 10 to 100 times more computing performance than we have today, at the same energy, at the same price.
It's just a simple example of how lots of computing in a single chip can really, really change our lives: to get video conferencing at the quality of watching an HD movie, for example, on your flat-screen LCD TV, you need 10 to 100 times more computing performance in a single chip.
So our vision for the future of computing is: how do we build these chips containing thousands of cores? Okay? Not the two or four that you can buy at Best Buy today, but thousands. Okay? 100 to 1,000 times more than we can do today.
Well, it's nice to have a vision, but there are a lot of challenges that you have to solve. Okay? It's, what do you say, a mere matter of engineering. So here are the big challenges.
We have to-- energy's a big issue. Okay, how do I build a chip that doesn't burn up as it gets hotter and hotter? How do I scale it? How do I get the performance out of 1,000 cores on a chip? I can get two people to work together. I can get four people to work together. But how do I get 1,000 people to work together coherently solving the same problem?
Okay, the same issue in parallel computing. How do I get 1,000 processors, or cores, on a single chip to work together to give me a thousandfold improvement in performance? Lots of other issues.
Resilience to faults. When I have thousands of processors, how do I make sure that computing can progress even when certain things are failing as we go along? The fundamental point is that in order to make this really work, we have to rethink all of computing. Rethink it from the ground up. Okay?
We've got to rethink compilers, operating systems, architectures, how we program them. And this is the Angstrom project at MIT. At CSAIL, we're looking to really think about how to build these chips with thousands of cores and program them effectively.
So project Angstrom is a collaborative project involving about 15 faculty, a couple of companies -- Freescale and Mercury Computer -- plus Lockheed Martin and the University of Maryland. There are two key ideas, two big ideas, in Angstrom in terms of how we do this.
One is what we call a fully factored architecture, for both the hardware and the software. It's completely distributed. We'll talk about that in more detail in a second.
And the second is something we call self-aware computation. So a fully factored architecture is one where all the components are completely distributed. If you want to buy a jug of milk, you don't drive to a Walmart 20 miles away. If you want a gallon of milk, you go to the neighborhood 7-11 store. Okay?
So in the same manner, we want to build processors where everything is local to where it's needed, as opposed to building things that are big and far away from each other. As one example, we want to take these processors and build them as tiles that are placed in a two-dimensional array, so that if one processor wants to talk to another processor, there's a short connection -- short wires. Short wires mean low energy.
Each of these tiles is identical and we're going to be using a broadcast WDM optical interconnect to be able to broadcast values very quickly to all the cores. Not from the viewpoint of performance, but to see if we can make it easier to program these chips using this kind of interconnect.
Another example of how we distribute everything fully: in conventional architectures, you have a processor and you have DRAM memory sitting somewhere far away. One idea we have, with full distribution and full factoring, is to stack the DRAM on top of each one of these processors.
So each processor here gets a direct connection up into the DRAM, much like having a neighborhood 7-11 in every neighborhood. Okay? By doing so, I get what is called locality. When things are close by, I don't have to spend the gas and the energy to drive to a gigantic Walmart 20 miles away.
So the first concept I talked about is a fully factored architecture, in order to be able to make scalable computers. The second idea is called self-aware computation. And the idea here is: why can't our computers become more like humans? Okay. If I could run the marathon -- and if I ran the marathon -- it actually turns out that if you're a good runner, your body temperature actually goes down the longer you run.
Okay, computers are exactly the opposite. Okay? The more they work, the hotter and hotter they get. Just a simple example. So here, we want to build a system which is self-aware. Computers become more like humans. And here's how it works.
So this is my operating system. I want applications to be able to tell the computer what my goal is. Today's applications are not able to tell the computer what my goal is -- how fast do I want to run? People write an application and they just go run it.
They don't tell the computer, look, I want to run this at 30 frames per second because I'm doing real time video. You cannot burn more than, let's say, 10,000 picojoules of energy to do this computation. Or you have to run on a 1.5 volt AAA battery cell.
Applications don't say that today. So the first thing we want to do is for applications to be able to communicate their goals to the computer, so the computer can manage these things.
Second, I want to be able to find out: how is the application doing? Today the computer has no idea how the application is doing. But for humans, my heart rate tells a lot. So I'm carrying this measurement device wherever I go. And any doctor can look at me and say, oh, Anant, you're stressed. Okay? You've got to get your heart rate down.
So that's a cool thing about our bodies. We have a measuring device built in there. Applications and our software do not. So we've defined a new open source interface, called the Heartbeats API, that can be embedded into software. The software can tell the system, look, here's how well I'm doing.
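The sketch below is a hypothetical illustration of a heartbeat-style interface -- it is not the actual Heartbeats API -- just the idea that an application declares a goal and emits a heartbeat per unit of work, so the system can compare its measured rate against that goal. All class and method names are invented for illustration.

```python
# Hypothetical heartbeat-style progress interface (not the real Heartbeats API):
# the application states a target rate and reports a heartbeat per unit of work.
import time
from collections import deque

class HeartbeatMonitor:
    def __init__(self, target_rate_hz, window=30):
        self.target_rate_hz = target_rate_hz      # the application's stated goal
        self.stamps = deque(maxlen=window)        # recent heartbeat timestamps

    def heartbeat(self):
        """Call once per unit of work (e.g., per video frame)."""
        self.stamps.append(time.monotonic())

    def measured_rate_hz(self):
        """Average heartbeat rate over the recent window."""
        if len(self.stamps) < 2:
            return 0.0
        span = self.stamps[-1] - self.stamps[0]
        return (len(self.stamps) - 1) / span if span > 0 else 0.0

# Example: a video encoder declaring a 30 frames-per-second goal.
monitor = HeartbeatMonitor(target_rate_hz=30.0)
for frame in range(60):
    time.sleep(1 / 25)          # pretend encoding takes ~40 ms per frame
    monitor.heartbeat()
print(f"goal {monitor.target_rate_hz:.0f} Hz, measured {monitor.measured_rate_hz():.1f} Hz")
```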
The hardware, in the same manner, will communicate things like temperature, and the power being consumed, and so on to the operating system at the same time. The operating system does the analysis. Our systems will have a learning engine built into them. It's observing these things from the applications, and it's making decisions based on, hey, look, the user wants to run HD video conferencing. Okay?
And the heart rate is lower than the goal. The user wants real-time video, 30 frames per second, but the heart rate is only 25 frames per second. I need to bump it up a little bit. So it says, hm, interesting. The power is manageable. Okay? The system's not heating up. The performance needs to be better.
So what the system will do is say, okay, I can go in and increase the core frequency. I can run it a little faster. We can't do any of these things today. Not our software, not our hardware. Okay?
So the decision is made to say, let me go increase the frequency. And then the enact part of the routine -- the observe, decide, enact routine -- will say, okay: hey core, hey processor, go increase the frequency.
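Here is a toy version of that observe-decide-enact loop -- a simulation with made-up numbers, not the Angstrom runtime -- which watches a measured heartbeat rate and a temperature reading, and nudges a simulated core frequency up when the application is missing its goal and down when the chip runs hot or the goal is comfortably exceeded.

```python
# Toy observe-decide-enact loop in the spirit of the description above.
# All limits, rates, and the simulated "plant" below are made-up assumptions.
FREQ_MIN_GHZ, FREQ_MAX_GHZ = 0.8, 3.2
TEMP_LIMIT_C = 85.0

def decide(measured_hz, goal_hz, temp_c, freq_ghz):
    """Return a new core frequency given the observations (very simple policy)."""
    if temp_c > TEMP_LIMIT_C:                 # too hot: back off first
        return max(FREQ_MIN_GHZ, freq_ghz - 0.2)
    if measured_hz < goal_hz:                 # missing the goal: speed up
        return min(FREQ_MAX_GHZ, freq_ghz + 0.2)
    if measured_hz > 1.2 * goal_hz:           # well past the goal: save energy
        return max(FREQ_MIN_GHZ, freq_ghz - 0.1)
    return freq_ghz

# Simulated plant: heartbeat rate scales with frequency; temperature rises with it.
freq = 1.0
for step in range(20):
    measured = 12.0 * freq                    # e.g., 12 frames/s per GHz
    temp = 40.0 + 12.0 * freq
    freq = decide(measured, goal_hz=30.0, temp_c=temp, freq_ghz=freq)
print(f"settled at about {freq:.1f} GHz -> {12.0 * freq:.0f} frames/s")
```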
So a lot of these ideas are not rocket science. But the key point is: can we put them into a system and build these new interfaces to make these computers able to behave more like humans? Then we have to optimize for performance, optimize for energy, and so on.
And this requires pretty dramatic changes in how we do things. We need to redefine interfaces. Applications have to be able to communicate their goals, and their current performance, to the system.
And the system needs to be able to communicate new things to the software: how much energy am I consuming? Today I don't have any measuring devices to do that. And so these are some examples of the new interfaces we are building in order to be able to build computers like this -- computers that can be more like humans.
Thank you.