Future of Exploration: Exploration Visions (Panel 2)


CHRISTOPHER CARR: Good morning. Welcome back. I'm thrilled that you're all here to join us for our second Exploration Visions panel. We have a great group assembled here today. I'm Christopher Carr. I'm a research scientist here at MIT and we're going to start our panel today by hearing from some remarkable individuals about the work they're doing in exploration. But before we hear about humans exploring Mars, finding extra-solar planets, robotic hoppers on the moon, and discovering the deep ocean, I want to take a few moments and set the stage for our discussion by introducing my own work and perspective.

So there are a lot of aspects to exploration and many facets that shape why we explore, where we want to go, how we're going to get there, what we're going to do there, and what the impact of those efforts will be. I want to focus on a couple of those facets today: namely, biodiversity on Earth and beyond-- more on that in a moment-- and also take an opportunity to describe one vision for how we could achieve accessible, participatory, and engaging exploration.

So I said biodiversity on Earth and beyond, and so I'm going to tell you about a project I've been working on since I stayed on at MIT as a post-doc in 2005. This is a project called the Search for Extraterrestrial Genomes. And the big idea here is that life on Mars, if it exists, could be related to life on Earth. So we are building an instrument to test this hypothesis; it's at an early stage of development, and we're not slated for a mission at the moment. But to understand why this mission makes sense, we have to go back more than 4 billion years.

And the idea here is that Jupiter was migrating inward, and this caused a disruption of asteroids and planetesimals that resulted in a period called the Late Heavy Bombardment. In fact, these impacts were so intense that they could eject material from one planet and have it end up on another.

So to think about how this might work, let's imagine an impact here on Earth. One possibility is that material could simply be ejected into space. Another possibility is that ejected material could come back to the Earth. And in fact this is a great way for life to survive: we may disappear if we have another major impact event on Earth, but the microbes will stick around.

So another possibility is that ejecta could be transferred from one planet to another. Here, Earth to Mars. And in this way, probably about a billion tons of rock has gone between these two planets, but it turns out most of it has gone in the other direction. There are some reasons for this, but one very simple one is that Mars is smaller, with lower gravity, so it's easier to eject material; Earth is larger, so it's easier to attract material. Mars is also closer to some of the impactors.

Let's think about the history of the Earth. So back here at about 4.5 to 4.6 billion years ago, we have the formation of the Earth, and very soon after, what we think was a large impact that helped form the moon. Then there's this period called the Late Heavy Bombardment that I just spoke about, about four billion years ago.

And very soon after that there's evidence for life on Earth. And this evidence goes back to basically the oldest rocks on this planet, 3.8 to 4 billion years. It's possible this reflects rapid evolution of life on Earth. But it may also reflect arrival of life from Mars. We don't know.

So we have several dozen Mars meteorites on Earth. And some of those have actually been transferred from Mars through a low temperature trajectory, below 100 degrees C, so microbes may be able to survive the trip. And recent experimental studies have also suggested microbes can survive the shock pressures required. So only in the last third of the history of Earth do we see multicellular life. And only in the last little sliver do we see humans. So the idea here is that if life on Mars is related to life on Earth, it's likely to have been isolated from life on Earth for several billion years, and it's almost assuredly microbial.

So here is a tree of life; you may have seen an image like this many times before. We can look at the genomes of living organisms and identify a small set of fairly universal genes that are shared by all known life. And some of these genes have regions within them that are nearly identical in all known life.

And so the most conserved sequences fall within genes that encode the ribosome, an RNA machine that translates RNA into protein. It's a key part of implementing the genetic code. And this is one piece of evidence, along with others, that suggests before the DNA world there was actually an RNA world.

So there was an RNA world evolving into a DNA world, a common ancestor, and then the different branches of life that we now know about, with bacteria and archaea found all over, not just in extreme environments. And then us, just on the very tip. Everything we're used to-- animals, fungi, and plants-- at the very end.

So the astrobiology community has traditionally tried to find a way to search for life, for example on Mars, in a non-Earth-centric way. However, what this suggests is, at least for Mars, we should really consider an Earth-centric approach. So if we look for the DNA or RNA of potential Martian organisms, we can use this to find out who's there, what they're doing, and how they're related to us.

Now, one problem that Steve Squyres talked about earlier was the complexity of a Mars sample return mission and its high cost. So unless we want to wait several decades to do this, and potentially work with partially degraded samples, the best approach is to send an instrument to Mars and do this there.

So I talked about the ribosome. And finally, because life on Earth and life on Mars, if it exists, may share a common tree of life despite billions of years of isolation-- where would we expect to find Mars sequences? Back here, deeply branching when compared to other Earth sequences.

So here's our current instrument. I'm an engineer, so I need to have a block diagram. We go from a raw sample; we need to extract DNA or RNA; we want to amplify and detect it; and then sequence it. Our current instrument prototype only performs the middle of those functions: amplification and detection.

Here's what it looks like on the bench. The longest dimension is about a foot. A microfluidic chip forms the core of it: basically, we open this top right here and place a small microfluidic card with this one-inch-by-one-inch chip, and that is where we can actually amplify and detect DNA.

So this chip actually has 3,072 little chambers. Each chamber can independently amplify and detect DNA. And each chamber is about the width of a human hair. And it's blue because we use blue light to excite a fluorescent dye, and then look for green light.

So how are we going to make this smaller? Well, we've got the sample prep part in progress, but one of the major challenges is to do sequencing in a very small system. So we're basically relying on advances that have come about from Moore's Law-- not a law per se, but the advances engendered by microfabrication, electronics, et cetera.

So this really helps us, and helps many facets of exploration. But as some people have pointed out, sheer processing power is not magically going to solve all of our problems. There are lots of other hard problems we have to address; energy storage was one that was mentioned for ocean exploration in particular. But it really does help, and for sequencing it's helped dramatically, particularly in the last few years.

So here's a chart showing the cost per million bases of DNA sequence. And you can see that since 2001 this cost has declined from nearly $10,000 down to a fraction of a dollar. This doesn't by itself mean that we can sequence in a small instrument, but we're actually making great strides in that area as well.
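As a back-of-the-envelope check on that claim, using the talk's approximate figures (the end point is assumed here to be about $0.10 per megabase), the implied halving time of sequencing cost comes out far shorter than Moore's Law's roughly two-year period:

```python
import math

# Illustrative figures from the talk: cost per million bases fell from
# roughly $10,000 in 2001 to a fraction of a dollar about a decade later.
cost_2001 = 10_000.0   # dollars per megabase (approximate)
cost_later = 0.10      # dollars per megabase (assumed "fraction of a dollar")
years = 10

fold_drop = cost_2001 / cost_later
halvings = math.log2(fold_drop)          # number of cost halvings
halving_time = years / halvings          # years per halving

print(f"{fold_drop:,.0f}x cheaper, {halvings:.1f} halvings")
print(f"cost halves every {halving_time:.2f} years (Moore's Law: ~2 years)")
```

With these numbers the cost halves roughly every seven months, which is why sequencing is often described as outpacing Moore's Law.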

So some of the older sequencers were quite large, but now we've gotten down to things that are fridge or freezer size, and now to a variety of platforms that are about the size of a laser printer. And good news for us: in one of these in particular, the actual business end is a one-inch-by-one-inch chip. It has no optics, simple chemistry-- as simple as you can get, really-- and a little over a million wells, but by the end of the year they project it will have 32 million wells.

And so what we need to do is focus on miniaturizing the rest of this and combining it into an instrument. Then we will have the capability to build a portable sequencer.

So let me talk now about how we can make exploration accessible. First of all-- let me go back to sequencing for just one moment.

I wanted to mention just a few applications of this. One is that we can do sequencing in situ in extreme environments. We can use this for environmental monitoring. We can use it for tracking an outbreak-- pathogen identification for clinical, water, and food safety applications. And potentially for sequencing full pathogen genomes, or eventually human DNA, if the throughput is high enough.

And then finally, also monitoring of genetically modified or future synthetic organisms. NASA is actually looking at synthetic organisms and their potential space applications, although I would say that's quite a ways off.

So I think the idea of a portable sequencer-- and the reality of a portable sequencer-- represents a democratization of sequencing. It will enable unprecedented discovery and understanding of the biodiversity on Earth, and potentially on Mars as well. And it will allow us to assess the level of contamination we're bringing to other worlds.

This all requires data processing. Multi-core smartphones are now becoming available, and they will have the power to handle the data processing requirements of sequencing. And so, to make exploration more accessible, I propose we should leverage the dramatic expansion in cellular telephony and connectivity on this planet.

So if you look at the percentage of people who have access to mobile phones and PCs, it's just been growing exponentially. We're in the exponential growth phase of a standard transition model here. And basically within a decade or two it will be the case, almost assuredly, that almost all humans will have significant computing power and network connectivity.
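The "standard transition model" here is the logistic S-curve: adoption looks exponential early on, then flattens as it approaches saturation. A minimal sketch, with made-up parameter values chosen purely for illustration:

```python
import math

def logistic(t, capacity=100.0, growth_rate=0.25, midpoint_year=15):
    """Percent adoption at year t under a logistic (S-curve) model.
    All parameters are illustrative, not fitted to real adoption data."""
    return capacity / (1 + math.exp(-growth_rate * (t - midpoint_year)))

# Early on, each year multiplies adoption by roughly e^growth_rate
# (the exponential phase); near saturation, growth flattens out.
early_ratio = logistic(2) / logistic(1)    # well before the midpoint
late_ratio = logistic(29) / logistic(28)   # well after the midpoint

print(f"year-over-year growth early: {early_ratio:.3f}, late: {late_ratio:.3f}")
```

The point of the curve's shape: from inside the exponential phase, the early data alone can't tell you where the ceiling is, only that nearly everyone ends up connected once the transition completes.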

So whatever domain we are exploring, we have the potential to engage a huge group of people. Whether through gaming, such as satellite play that we heard earlier; crowdsourcing; or other ways. And we should not overlook the use of smartphones themselves as tools of discovery.

So I'll leave you with this image. As part of my doctoral work, I participated in a NASA test of an advanced space suit on the flank of Meteor Crater. This long exposure image captures some of the beauty of exploration. It reminds us that exploration requires a place, physical or virtual; it involves movement; considerations about where to go, how to get there, and what to do there; and information and communication at multiple scales-- the movement of this person, a bit hard to see, and the rotation of the Earth.

So I leave you with a few visions for the future of exploration. Rapid portable sequencing, ultimately single molecule. Sequencing a dominant portion of the biodiversity of the Earth. Discovery of life beyond Earth, possibly on Mars. And we'll hear about some other visions in just a moment, including discovery of Earth-like planets, human exploration of Mars, and robotic-- and I hope eventually human-- missions to another star system. So thank you very much.

Now we're going to hear from professor Dava Newman about her vision for the future of exploration, including a human exploration of Mars. Dava has many titles here at MIT, but for me she was my doctoral advisor and a mentor.

A couple of years into my PhD, Dava took a sabbatical to sail around the world. Well, while we missed her in the lab it was wonderful for a couple of reasons. One, I kinda got to run the show. And two, we got to live vicariously through photos and stories. And there's just one I wanted to share with you.

It was an email that arrived May 1, 2002, from the middle of the Pacific Ocean. She wrote logbook entry 29 April, 2002, 0120 hours. Hydraulic system on steering failed.

There are many repairs listed, but to make a long story short after a day of repairs, a bountiful supply of olive oil proved to be the missing ingredient to complete the remaining 850 nautical miles to the Marquesas. So Dava, it's my pleasure to have you talk to us today about the future of human exploration of Mars.

DAVA NEWMAN: Good morning, everyone. [INAUDIBLE]

Testing. There, we got me. OK, how about if I [INAUDIBLE] hear-- I'm going to speak into this mic to--

OK, let's start over. So, exploration in a personal, societal, and international sense-- my students will know I use [INAUDIBLE] famous masterpiece of really reflecting on the past, spending a lot of time in the present, and most importantly for this symposium, asking where we are going in the future.

So the past for me-- some of my work that I just wanted to highlight. This was working with the Russians for two years, from 1996 to 1998, studying astronaut performance inside the vehicle. Intervehicular activity. It's really pretty magnificent, elegant, [INAUDIBLE]

We fly force-torque sensors-- small, smart systems that can measure the forces and the torques as the astronauts are going about their daily business. We answered the question: do the astronauts disturb the microgravity environment? No-- after thousands and thousands of actions, they don't disturb it at all.

They can, they can be the 800-pound gorilla, but the thing is they don't. They have very elegant motions.

For instance, another thing we study-- you can't do this at home-- here's a Japanese astronaut. He's not defying physics, he's just controlling his angular momentum. So we model this in great detail-- a 37-degree-of-freedom model from a former student-- and you can do it about all your axes of rotation.

There's Dave Wolf, and here's Rhea Seddon, a surgeon. So again, it's fun, but we do the dynamic analysis to really understand what that adaptation to these extreme environments is-- in this case, the microgravity environment-- flying small, smart sensors to enlighten that.

Moving on just one slide, Chris asked me to talk about the circumnavigation with my partner Gui Trotti. It was really our personal exploration. Two people on a boat, over 36,000 miles. You're either together or you're no longer together, we're together.

So we had the joy to visit 33 different island nations, and most importantly the children of those nations. And we taught them seminars about sea and space. Because these kids on the island nations-- I said, no, we're here to recruit the next Martian astronaut. What? We never thought of space. Absolutely-- you know a lot about living in isolation, confinement, and extreme environments.

So those kids had everything to teach us about their lives. And we also learned a lot from nature, being in nature, crossing all the major oceans of the world. It's a big, blue desert out there.

And just wondering every day about survival-- our survival-- because you can't take it for granted out there. And synchronicity, in the Jungian sense of the whole person-- emotional, physical, and spiritual.

Chris mentioned our Apollo 13 moment. We were stuck without steering and hydraulics, 2,000 miles off the Galapagos and still 1,000 miles from the Marquesas. It worked out-- that's when creativity and engineering ingenuity really come into play.

And Larry had a great question for the astronauts. Was it a transformative experience of exploration? Our trip was definitely transformative in many, many senses, personally.

I'd like to refer to Buckminster Fuller's coined phrase, Spaceship Earth. This is the only one we have. There's another Bucky idea: what if everyone in the United Nations building could look out and see the whole Earth-- if you could see the climate, if you could see the wars, if you could see the hope and the destruction-- if you could see that visually, maybe you'd have a different perspective on our small, fragile planet. So I like to keep that in mind, and passion, for the work.

Where are we presently? And just going to pick one example, some of our spacesuit design work. This is actually the past, there's a little bit of bloopers from Apollo.

That was not an ideal spacesuit. The great thing is-- and it was a long time ago-- it was the best spacesuit ever designed at the time. But it's rough when you can't bend down to get your scientific experiment. So we have to do better, we have to do a lot better.

In the middle, that's our current system. It's called the Extravehicular Mobility Unit. It's 140 kilos, it's pushing 300 pounds. It's not very flexible, we have to change that.

But we do study it, we quantify it, we have to say what is the next system? Where do we have to go?

That's a former student, Rudolf Opperman. So we tested-- basically, our robot is a surrogate astronaut. We want to make sure that someone has maximum mobility-- human-like performance-- not being so constrained in the spacesuit.

If I plot a little info-diagram here for you. It's hard to see, it's just a picture of all the human space flights accomplished. It adds up to just over 500. That's NASA, that's Russia, that's everyone in the world.

I had to change scales. The little red dots on the side, that's just our first mission to Mars. Four to six people, we need over 1,000 spacewalks. Well how are we going to do that?

Again, just looking to the past real briefly here. These are EVA workspaces that you've probably never seen before. Well, I don't know what the future spacesuit is going to look like. I tell my students maybe it looks like this. Who am I to say-- it doesn't have to look like the white inflatable Michelin man.

This is from the Johns Hopkins Applied Physics Laboratory in the late '60s. Again, looking at all the possible solutions for human performance, especially when we're sending humans to other planets. These are Apollo arms and legs strung on this-- I call it the hamster cage model. It didn't work so well, and it didn't become a flight system of course, but it was still a great creative approach, I think.

Now I want to highlight the work of Dr. Paul Webb. Paul Webb was the first person to say there's a way to do this totally differently. A real radical idea: why don't we shrinkwrap the astronauts? This is his Space Activity Suit.

So he said, what if we apply the pressure directly? We have to apply the pressure-- about a third of an atmosphere-- to pressurize the astronauts and keep them alive. I think it was actually a great idea, just way before its time. So we've picked up on that with our Bio-Suit design.

And before I get to that, just back to Chris's image from when he was doing his PhD. Hopefully you'll recognize Duchamp's Nude Descending a Staircase-- it sure reminded us of that when he was out there exploring in the Arizona desert.

So these are the inspiration for our Bio-Suit system, which is a mechanical counterpressure suit-- again, shrinkwrapping the astronaut. We apply the pressure directly to the skin. And right now we're actually concentrating a little bit more on Earth applications: we're looking at a system for children with cerebral palsy, to see if we can help locomotion here on Earth.

I'm waiting to get to Mars. We could be ready for 2014 or 2018-- those are great launch dates. But unfortunately, I don't think we'll be there for maybe another decade or two. I'll work on it for another decade or two.

For our calculations, we basically modeled the human skin in great detail to really provide full, dynamic motion, if you will, and the suit materials and properties need to accommodate this. This is just a little animation of what we call the lines of non-extension-- that Spider-Man-looking pattern that we have. It's actually based on eigenvector analysis.

So how do we get there? Look at a maximum-mobility design: it's an incredible international collaboration from the US, Italy, and now with some contributions and partners from Portugal. You just jump in the laser scanner, we get your CAD models, we go from the 3D scans, and we do this patterning.

The mock-up that's just my size has 340 meters-- three football fields-- of lines on it, just in the patterning. It definitely maximizes your mobility while minimizing your energy expenditure. You shouldn't be working against the suit; you want the suit to work with you. You want to go about your business of exploration.

Some nice, recent work with current folks who are with us. This is [INAUDIBLE] design, if you saw that in the exposition.

So again, tying Earth to space, trying to solve some locomotion, enhancement, performance issues here on Earth. And then we're looking to the future.

A new project that we're just starting on addresses astronaut injury protection and comfort. How can we really enable performance? We want 100% performance for our folks in the future.

We've been fortunate to get a lot of press. Not so much NASA funding these days anymore, but the press is great.

And I just pause here because of the outreach-- at the show at the Metropolitan, 3 million people came through, and most of them were kids. And that's my job as an engineer: to make sure that kids know that science and engineering, math, and art-- I like the comment yesterday, we'll call it STEAM from now on-- are hopefully the future of our young, of our next generation.

Just to get to my end here.

Thinking about the future, what are the breakthrough technologies that we're looking at? There's a recent report out that we contributed to on technology frontiers and breakthrough technologies. And I put this up here-- sorry, it's a bit of an eye chart-- but I circled the interesting breakthrough technologies we're working on.

Well, there are Hoppers-- we're going to hear about Hoppers. There's Superhumans-- that's the one I have highlighted here, because that's more in line with my expertise. There are reconfigurable rovers-- there are great ideas among our community on that as well. And then moving out to even sailing the seas, or the oceans, of the planets further out.

Just a little bit on that. You can just map that from basically what can we do now to what we could do in 2025. We're always hesitant to predict the future, but what do we really need to work on?

Now we have a lot of capability, there's great capability now in terms of manufacturing. I think about this as mostly in the structures and the materials and the processes. We have a lot going on, but where we really need to work in the future is self-assembly.

There are some other interfaces and things that we're working on, and how can we do this? Because we're really stuck in some of our technologies and some of our methodology.

So we look to biology, and we really love to do work that is bio-inspired. On the left you have biology, and on the right we have engineering, or technology. The charts here show six categories-- the parameters that, in my opinion, cover the entire space of design: information, energy, time, space, structure, and substance.

Now the reason I put this up there-- and this is my final thought, in terms of where we're going in the future-- I'm on the right-hand graph as the engineer thinking about what technology we have to invest in. And you can see the scales here are, again, from the very small-- the nano and the micro scale-- to the millimeters to meters, and meters to kilometers. In designing for humans and empowering our Mars missions for humans, we're in the meters to kilometers.

And you can see the fringes of these small, very energy-intensive systems that we have now as engineers and designers, compared to how darn optimized they are in biology. So we have a bit of a ways to go, but I think of these as technology roadmaps for what we're doing in research, hopefully, for the next decade, to really enable exploration.

Thank you very much. We don't have questions yet.

CHRISTOPHER CARR: Thank you, Dava. Now I'd like to introduce Sara Seager. She's the Ellen Swallow Richards Professor of Planetary Science and a professor of physics at MIT. After undergraduate work at the University of Toronto, she went on to earn her PhD in astronomy at Harvard, where she became interested in the then-brand-new field of exoplanets.

Before joining the MIT faculty in 2007, she was a long-term member at the Institute for Advanced Study in Princeton, and a senior research staff member at the Carnegie Institution. She's introduced many new ideas to the field of exoplanet characterization, including work that led to the first detection of an exoplanet atmosphere. She won an award for this from the American Astronomical Society-- the Helen B. Warner Prize, in 2007-- and she's currently a member of NASA's Kepler science team, which-- as we have already heard at this symposium-- has already produced many new exoplanet candidates.

It's my pleasure to have you join us today to talk about exoplanets and the search for habitable worlds.

SARA SEAGER: Thanks. Good morning, everyone.

So it's really exciting in exoplanets, because now space exploration is moving not just beyond Earth, but beyond the solar system to uncharted worlds. Does anyone recognize this planet? Especially the astronauts.

You're actually the first group that hasn't recognized it; most people assume they know what it is. This is an artist's conception-- an illustration of an Earth-like world.

Now in the future when we find planets like Earth-- because we have not found any yet-- we won't be able to see them in this spatially-resolved detail any time in the next 50 years. They will actually appear like pale blue dots, just point sources as the stars appear to us right now.

But not to worry, because myself and others, we work on remote sensing for exoplanets. And using space and ground-based telescopes, we have studied over three dozen exoplanet atmospheres, limited mostly to big hot Jupiter-like planets. And we study the atmospheres and we try to determine what's in them.

For Earth-like worlds, we want to find bio-signature gases. We want to see the atmosphere and look for gases that shouldn't be there that indicate the presence of life.

So how are we going to do this? Well, it's really challenging to find Earth-like worlds. There are two ways to do it, where we have a chance of following them up to look at their atmospheres.

And one of them is just imaging, imaging that pale blue dot. But it's very, very difficult, and the analogy is that it's like looking for a firefly next to a searchlight, when that firefly and searchlight are about 2,400 miles away. That's like the distance from here to LA. It's a very challenging problem, and I'm going to come back to that in a few minutes.

That's because, at visible wavelengths, Earth is 10 billion times fainter than the sun. It's an extremely huge challenge. Earth itself is no fainter than the faintest galaxies ever observed by the Hubble Space Telescope; it's just next to a very, very bright host star. So that's a really big problem for us-- anything on the order of billions is very challenging.
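To put that factor of 10 billion in astronomers' units: a flux ratio converts to a magnitude difference as 2.5 log10(ratio). A quick sketch (the Sun's absolute visual magnitude of about 4.8 is a standard published value; everything else follows from the talk's figure):

```python
import math

# The talk's figure: at visible wavelengths, Earth is ~10 billion times
# fainter than the Sun.
flux_ratio = 1e10

# In the astronomical magnitude system, each factor of 100 in flux is
# exactly 5 magnitudes, so delta_m = 2.5 * log10(flux ratio).
delta_mag = 2.5 * math.log10(flux_ratio)
print(f"Earth is {delta_mag:.0f} magnitudes fainter than the Sun")

# Seen from 10 parsecs, the Sun would shine at about V = 4.8 (its absolute
# magnitude, a standard value), putting an exo-Earth beside it near V ~ 30,
# comparable to the faintest galaxies Hubble has imaged.
print(f"exo-Earth apparent magnitude at 10 pc: ~{4.8 + delta_mag:.0f}")
```

That magnitude-30 figure is why the "firefly next to a searchlight" analogy is apt: the planet is detectable in principle, but only if the star's glare can be suppressed.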

Now fortunately, there's another way to find planets where we can characterize them and their atmospheres, and that's transiting planets. The planet goes in front of the star, and for an Earth-sized planet in front of a sun-like star, the dimming is only one part in 10,000. Arguably, for any engineering or science problem, that's still a very challenging measurement-- one part in 10,000. Nonetheless, it's actually doable with current technology.

And here's a graphic showing you the transit technique. See the little planet going in front of the star? Well, we will not spatially resolve stars like that, so it's the bottom graph to pay attention to: if you're looking at a star, the brightness is constant with time except when the planet goes in front of it. And this idealized illustration leaves out noise in the star or in the instrument.
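That one-in-10,000 figure is just the ratio of the planet's and star's disk areas. A quick check (the radii below are standard published values):

```python
# Transit depth ~ fraction of the stellar disk the planet blocks:
# depth = (R_planet / R_star)^2
R_EARTH_KM = 6_371      # mean radius of Earth
R_SUN_KM = 696_000      # radius of the Sun
R_JUPITER_KM = 69_911   # radius of Jupiter, for comparison

earth_depth = (R_EARTH_KM / R_SUN_KM) ** 2
jupiter_depth = (R_JUPITER_KM / R_SUN_KM) ** 2

print(f"Earth-Sun transit depth:   {earth_depth:.1e}")     # ~1 part in 10,000
print(f"Jupiter-Sun transit depth: {jupiter_depth:.1e}")   # ~1 percent
```

The comparison shows why hot Jupiters were found first: a Jupiter-sized transit is roughly a hundred times deeper than an Earth-sized one.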

This is actually what we look for when we look for a transiting planet, and the Kepler Space Telescope is doing this. They have 1,200 planet candidates. There are many, many ground-based efforts, trying to find planets like that.

So now I'm going to turn to my two favorite space engineering project concepts having to do with finding Earth-like worlds. And the first one is a project where I have this demonstration model-- you're all welcome to come up and take a look at it later.

This is actually a triple CubeSat designed to detect Earth-sized planets orbiting nearby bright sun-like stars. This is a demonstration model, it's not an engineering flight model. And I'll come back and talk about this in a minute.

I want to say this was an MIT Draper Labs collaboration, and we also have a contribution from a few other organizations. But very significantly, we have an undergraduate, three-semester design and build class-- AeroAstro-- here at MIT. And for the first time in this class, we had physics and planetary science students.

And these students have done a phenomenal job with the project. They took several of the subsystems-- power, structure, communication, and parts of the payload-- from basically no work having been done at all up to the level of prototyping and some hardware testing. So the class has done a really great job, and some of them are actually here in the audience. We also have a series of graduate students-- including two who I just want to mention briefly by name, Matthew Smith and Chris Pong-- who have literally led the critical components of this project.

So what we're trying to do is look for planets like Earth with the transit technique in front of very bright stars. And the reason we can get away with such a small telescope-- such a small aperture-- is that the brightest stars in the sky are really, really bright. For example, think of Alpha Centauri-- it's very bright. And these bright stars are spread out all across the sky; if you go out and look at the dark sky, you'll see them spread all across it. So we want one telescope per star.

So that's our goal here, and why we have the telescope. And I want to actually-- I have a very detailed design here. I'm not going to go into technical details. If you have questions, come up and talk after the session is over.

But basically, the main problem is getting the photometric precision-- measuring that star from point to point and from orbit to orbit-- required to find that Earth-sized planet in front of the sun-sized star. And actually, this is a problem of pointing precision, because the real problem boils down to this: you need to keep the star on the same fraction of a pixel at all times, because the main noise source is actually the pixel response.

If you could open your cell phone or your camera, take out the detector, shine a light at it like a laser, and measure the response across a pixel, the variation is quite big-- tens of percent. And so the jitter from the spacecraft moving around actually creates the biggest noise source. Figuring out how to point this thing is something that the students have demonstrated in the lab with Draper's help, so we have our proof of concept. And now we actually have a potential launch date through the NASA CubeSat Launch Initiative.
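To see why sub-pixel pointing matters so much, here is a toy simulation-- my own illustrative model, not the team's analysis. Assume the pixel's sensitivity varies sinusoidally by roughly ten percent across its width (in the spirit of the "tens of percent" figure above), and let the star's centroid jitter around the steepest part of that response:

```python
import math
import random

def pixel_response(x):
    """Toy intra-pixel sensitivity: a sinusoidal variation across one pixel
    (x in pixel units). Purely illustrative, not a real detector model."""
    return 1.0 + 0.10 * math.sin(2 * math.pi * x)

def photometric_scatter(jitter_pixels, n=10_000, seed=0):
    """Relative RMS flux scatter for a star centroid with Gaussian jitter
    around x = 0, where the toy response curve is steepest."""
    rng = random.Random(seed)
    fluxes = [pixel_response(rng.gauss(0.0, jitter_pixels)) for _ in range(n)]
    mean = sum(fluxes) / n
    var = sum((f - mean) ** 2 for f in fluxes) / n
    return math.sqrt(var) / mean

coarse = photometric_scatter(0.1)   # pointing good to 1/10 of a pixel
fine = photometric_scatter(1e-4)    # pointing good to 1/10,000 of a pixel

print(f"flux scatter at 0.1-pixel jitter:  {coarse:.1e}")
print(f"flux scatter at 1e-4-pixel jitter: {fine:.1e}  (transit depth ~1e-4)")
```

In this toy model, tenth-of-a-pixel pointing produces scatter hundreds of times larger than an Earth-sized transit, while holding the star to a ten-thousandth of a pixel brings the jitter noise down to the level of the signal.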

And just to finish talking briefly about the model, this here would be the attitude control system unit. We have the lens, we have this piezo stage-- which does fine pointing control down to basically several arc seconds-- and in here we have the avionics and the batteries and everything else.

So the general idea here is that we eventually want to launch a constellation of them. And the goal is to demonstrate a new paradigm for space science missions. Where the Kepler Space Telescope took over 25 years from concept to launch, the prototype here would take less than five years from concept to launch, followed by graduated growth of a constellation. So we change the risk and cost paradigm, and send these up to space.

And I could talk a lot about CubeSats. If you're interested, we can come back to that in the discussion.

OK, so now I'm going to switch to direct imaging techniques. Transits are great-- we can find Earth-sized worlds right now with the technology in hand, and Kepler is going to do so. But transiting planets are rare, because the planet has to pass in front of the star as seen from us; the orbit has to be specially aligned. So eventually we want to solve the direct imaging problem from space.
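The rarity of transits follows from simple geometry: for a randomly oriented circular orbit, the chance of alignment is roughly the stellar radius divided by the orbital distance. A minimal sketch of this standard textbook approximation (the numbers are the usual Sun/Earth values, not figures from the talk):

```python
def transit_probability(r_star_km, a_km):
    """Geometric probability that a randomly oriented circular orbit
    transits its star, approximately R_star / a."""
    return r_star_km / a_km

# Earth analog: Sun-sized star (radius ~696,000 km) with a 1 AU orbit.
p = transit_probability(696_000.0, 149_600_000.0)
print(f"transit probability ≈ {p:.4f}, about 1 in {round(1 / p)} systems")
```

So only about one in two hundred randomly oriented Earth-Sun systems would ever show a transit, which is why transit surveys must watch many stars-- or why direct imaging is so attractive.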

And we were asked here to comment a bit about what we can do now. And within 10 years, that's the constellation of CubeSats. And what we could do within 50 years-- and well within 50 years, I hope and believe we will be doing this concept.

So the problem with direct imaging is that even if you put a perfect telescope in space, you have diffraction. Imagine for simplicity that this is your telescope mirror: if you look at a star, you're going to get the Airy ring pattern. I'm going to assume here everybody knows what that is.

And the problem is that you don't image a point source-- you get the Airy ring pattern, because of light diffracting off the edges of the mirror. So here you see the Airy rings, and the first Airy ring is actually 100,000 times brighter than the planet you're looking for. So you have a problem.

But you know what? If you haven't been paying attention to the exoplanet world: in the last 10 years, a whole new field of optics has been developing, and it offers a way to use diffracted light as part of the solution. So watch carefully now.

If your mirror instead is this shape, look what your image will look like. So now you throw all the diffracted light off to the edges, and you get a very, very dark image.

Now I have to make a long story very short, but the ideas have evolved. And now one of the favorite concepts is, instead of shaping your telescope mirror-- or actually it would be something along the optical path like this-- you would put a giant external occulter in space.

And by the way, I'm so pleased to talk to you today, because here people have been talking about going to the nearest stars. And this idea will not seem as difficult or far-fetched as some of the things brought up already.

So here would be the external occulter, a large deployable put into space. This deployable would be about 50 meters in diameter and would sit about 50,000 kilometers from the telescope. OK, so there is no easy way to find Earths.
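A quick back-of-the-envelope check on those numbers-- the 50-meter size and 50,000-kilometer separation are from the talk; the 10-parsec comparison star is my own illustrative choice. The occulter's angular radius roughly sets the inner working angle, and it comes out comparable to the angular separation of an Earth analog around a nearby star:

```python
import math

ARCSEC_PER_RAD = 180.0 / math.pi * 3600.0

# Starshade numbers quoted in the talk: ~50 m diameter occulter,
# flying ~50,000 km in front of the telescope.
occulter_radius_m = 25.0
separation_m = 50_000.0e3
occulter_half_angle = (occulter_radius_m / separation_m) * ARCSEC_PER_RAD

# For comparison: the angular separation of an Earth analog (1 AU)
# orbiting a star 10 parsecs away.
AU_M = 1.496e11
PARSEC_M = 3.086e16
planet_separation = (AU_M / (10.0 * PARSEC_M)) * ARCSEC_PER_RAD

print(f"occulter angular radius : {occulter_half_angle:.3f} arcsec")
print(f"Earth analog at 10 pc   : {planet_separation:.3f} arcsec")
```

Both come out at roughly a tenth of an arc second, which is why the occulter has to fly so very far from the telescope: any closer and its shadow would swallow the planet along with the star.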

And this, actually, specific shape is really important-- the number of petals and their shape is something that really matters. There are a lot of tall poles here, including fuel to move this thing around the sky to different targets, station-keeping, sending the large deployable to space, and the list actually goes on. I don't have enough time to tell you all of the work going on in this, but I want you to know there is work happening to make this happen. There is technology development ongoing, although there's no money for the huge mission itself.

So I'm going to show you some of the technology development that's ongoing. By the way, those are two of the students in the ExoplanetSat class. They were out visiting JPL, representing the ExoplanetSat team, and they did a fantastic job on behalf of the team and on behalf of MIT.

Now do you know what that is? Let's back up and look at that. This is like the interactive part of the lecture. Someone just throw out a guess what it is.

It's one of those petals. See the petal? Remember I said it would have to be 15 meters across. This one actually is not life size-- it's about 2/3 scale.

And what was going on at JPL was they were actually trying to show how it would unfurl-- could it unfurl? Other places like Northrop Grumman are investing in the shape, seeing if they can actually manufacture it.

The shape is actually called a hyper-Gaussian. It has to be very specific to get that specialized pattern.

By the way, whoever took this picture forgot one of the most important parts, which is the tip. It comes to a point like this, and that tip has to be accurate to tens of microns. And then in the valley between the petals, it also has to be accurate to tens of microns-- so it has to be very precise. And you have to have a lightweight material that is opaque to one part in 10 billion.

So a lot is happening in this field. We're hopeful that this will go forward sometime in the future.

OK, so I have a summary now. And at this point, I'm going to ask everybody who's not paying attention-- the people on their cell phones and computers, I just have a message I want to give you.

So if you only remember one thing from my entire talk, I want you to know that with ExoplanetSat and with the future external occulter-- we call it the Terrestrial Planet Finder mission-- we're going to find planets around the very nearest stars. We're talking about Alpha Centauri, we're talking about the nearest Sun-like stars within 10 to 30 light years. That's what we're going to do in exoplanets.

And we're doing this not just to leave a legacy for the people 150 years and beyond from now, but so that all the ideas that we've heard floating around here can become reality. So we will know which star system will be the first one that we will send a probe to. Thank you.

CHRISTOPHER CARR: Thank you, Sara. Now I'd like to introduce Jim Shields. He's the President and CEO of the Charles Stark Draper Lab, he's served in this role for the last four years, and previously as the Vice President for Programs. In addition to his operational role at Draper, he supports a number of senior advisory boards and study panels, including the Defense Science Board.

Prior to joining Draper, Mr. Shields had a 28-year career at the Analytical Sciences Corporation. As Vice President for Strategic Development there, he was responsible for the planning process and creation of strategic plans for the Analytical Sciences Corporation. His experience includes integrated multi-sensor navigation analysis, modeling and simulation, weapon system performance analysis, information management, systems development, and logistics management. And he also has a Bachelor's and Master's in electrical engineering from MIT.

Jim, it's our pleasure to have you here to join us to talk about the future of exploration, from the Draper perspective.

JIM SHIELDS: Thanks, Chris. I've been asked to speak this morning a little bit about the future of exploration from the perspective of Doc Draper and his laboratory.

So in order to put that into context for those of you that aren't familiar with that history, the history of the laboratory is really very closely intertwined with that of MIT. We were founded in 1932 by Doc Draper as part of the aeronautical program in the mechanical engineering department. So the laboratory actually predates the AeroAstro department on campus.

In the period of time we were together-- up until the early '70s-- we were part of the Institute. The comments that Ian made this morning about the contributions the laboratory made to the Apollo program, to [INAUDIBLE] navigation during World War II, to ballistic missile guidance-- all of those things were done as part of the Institute. For the last half of our history, we've been an independent, not-for-profit entity.

Doc Draper-- our founder and namesake-- was an iconic, slightly eccentric professor here on campus. He and his laboratory created significant capabilities to enable exploration, and I'll comment on that. But he himself was also very explorer-minded.

He, during the Apollo program, lobbied very vigorously to replace Buzz on the first mission. He felt that he was the best person to manage his guidance and control system, because if anything went wrong no one would know it better than he. Now Doc was in his mid '60s at the time, but he was serious about that particular offer.

Since the laboratory's been independent, we've continued to have a long relationship with the human spaceflight program. We wrote the on-orbit control software for the space shuttle, and we've continued to bring other technologies from campus into practical, real-world applications: we created many of the first instruments based on micro-electromechanical systems, we designed some of the most accurate solid-state gyroscopes, we've worked on fiber optic technology, and we're looking at vanishingly-small systems for close-in ISR applications.

So there's a whole range of activities, but the key element of the laboratory is that Doc established the lab with the primary purpose to create an opportunity for his students to work on real problems while they were taking classes. And when we were spun out of the Institute, we continued that mission, so we continue to have a very active student development and student training program. We have a graduate education program called the Draper Fellows Program, where we have in the neighborhood of 50 graduate students each year that take classes at nearby campuses-- most of them are here at MIT-- and then they do their thesis work on a project at the laboratory, supervised by one of our staff members and an academic advisor.

So thinking about issues that are enablers for exploration, there are three points I just wanted to comment on in terms of key elements associated with exploration. One of the most important aspects from the beginning of exploration is knowing where you are. In the early days of navigating at sea, we were able to find latitude reasonably well by looking at the stars.

But for many years we couldn't determine longitude, which had explorers running into land masses in the fog that they didn't know were there. That problem led to the development of precision timekeeping. So knowing where you are continues to be an important aspect of exploration.

The second issue is sensing and remote presence. There's been a lot of discussion over the last couple of days about robotic exploration, and the dimensions of sensing associated with that are important.

In the early days, the first sensor was the human eyeball, and the first documentation was diaries and journals. The first mechanical sensor was basically the camera, which primarily documented the event.

But as sensors got to be more capable, we're increasingly using them as an extension of the capability of the individual human explorer. I don't think we'll ever completely replace the human explorer, but it certainly augments and provides reach beyond what human explorers can do.

And then the third aspect I want to comment on-- which really, Sara alluded to-- is tapping into the energy and creativity of students. Exploration historically has been for the young. In early days that's because of the physical pressures of trekking across Antarctica or flying in a spacecraft, but it's also an issue associated with imagination not knowing what can't be done and the wonder of being young.

It's interesting to note that the average age of the team that developed the Apollo program was 27. We don't let folks at that age have significant roles in our programs today, and I think that's a shame. And so there's an opportunity for us to think going forward from that perspective.

So it was interesting-- if you think about where we are from a navigation point of view today on Earth, GPS as a radio navigation system has taken that problem away for most environments. But it was interesting to see that of the student projects highlighted yesterday, undersea and deep in the jungle canopy are places where the navigation problem really isn't solved yet. Similarly, when you get into outer space, outside the GPS constellation, navigation is going to be a problem.

So where do we think things are going from a navigation point of view over the next 25 or 50 years? We're doing research, and I do think there are capabilities to create significant-- probably two or three-- orders of magnitude improvement in gyroscope and accelerometer technology, largely by using atoms as sensors.

We currently use atoms for precision time in atomic clocks. In the laboratory environment, we've demonstrated the ability to use atoms as gyros and accelerometers-- by cooling them and measuring the acceleration on them, or by rotating them and doing interferometric processing to detect angular rate.

In any solution going forward, I think-- even if you get better gyros-- in environments where you don't have access to radio navigation, you're going to have to use multiple sensors and fuse those capabilities together. And so under sea, you may have to use navigation buoys. In the Amazon, you may have to grab GPS when you can get a shot through the canopy, and figure out a way to merge all of those together. So advanced signal processing techniques are going to come along there.
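A minimal sketch of the simplest form of that fusion-- not Draper's actual algorithms. It assumes two independent estimates of the same position, weighted by the inverse of their variances; the specific numbers are invented for illustration:

```python
def fuse(est_a, var_a, est_b, var_b):
    """Inverse-variance weighted fusion of two independent estimates
    of the same quantity. The fused estimate always has a smaller
    variance than either input."""
    w_a = 1.0 / var_a
    w_b = 1.0 / var_b
    fused = (w_a * est_a + w_b * est_b) / (w_a + w_b)
    fused_var = 1.0 / (w_a + w_b)
    return fused, fused_var

# Hypothetical numbers: a drifted inertial estimate vs. one noisy GPS
# shot through the jungle canopy (positions in meters along one axis).
inertial, inertial_var = 1012.0, 400.0   # dead reckoning, variance 400 m^2
gps, gps_var = 1000.0, 25.0              # single canopy fix, variance 25 m^2
pos, var = fuse(inertial, inertial_var, gps, gps_var)
print(f"fused position: {pos:.1f} m, variance: {var:.1f} m^2")
```

Full navigation filters (Kalman filters and their descendants) generalize exactly this weighting across many sensors and over time, which is what "merging all of those together" amounts to in practice.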

And then when you get outside the GPS constellation, you're going to have to figure out how to get more than just attitude out of stars. You're going to have to figure out how to get position and velocity by measuring from the stars as well. And I think those capabilities will develop over time.

Then there's the whole issue of sensing and remote presence. We've talked a lot about robots from an exploration point of view. The key challenge there-- the real limitation-- is perception and situational awareness. And that's a limitation of sensors, of knowing what's really going on around the robot.

And one of the reasons why our rovers on Mars move so slowly is that they really don't have a good enough understanding of their current environment for us to be confident enough to let them make decisions. So as we get better sensor technology, that ability is going to improve. We're also going to get to the stage, I think, where the way the human and the machine interact is going to change. And then we'll be able to get real telepresence, and you're going to be able to--

No autonomous system is really fully autonomous. It's always supervised at some level. And today, our interaction between computers and people is largely related to data filtering and displays. And I think over the next decade or decades, we will see that interaction change so that the computer does what it's good at-- which is repetitive, well-defined tasks-- and we will use the human mind for perception and drawing inferences, which they're good at.

And ultimately, I think we'll see an environment with much more capable, immersive telepresence-- where the speed of light between here and a planet turns into just a delay, as opposed to sitting inside the detailed control loop, where it would limit what you can do.

From a student project point of view, the laboratory spends a lot of time interacting with students, through our formal training program and through collaborations with the campus, because that's really an important part of our mission as a laboratory. Those of you who went to the reception yesterday afternoon saw a couple of these projects.

I'm just going to highlight two activities that we're actively involved with now. One is hopping for planetary mobility. Most of the deployed capabilities today have been either landers or rovers-- which have tended to give very up-close-and-personal science but not much geographic diversity-- or orbiters, where you can map an entire planetary region but can't get very close.

In collaboration with students in the Aero department, we've designed a concept-- originally motivated by the Google Lunar XPRIZE-- to traverse a planet using hopping technology. You would basically have your vehicle lift off the surface with gas generators, traverse horizontally, and then come back down. So we hope to be able to move hundreds or thousands of meters in a single bound, which would give you a compromise between geographic diversity and up-close sensing.

The project we're working on now is really to do that in a demonstration environment. Hopping probably works best in low-gravity situations, where you don't have a big gravity well to fight against from a propulsion point of view. And so that's one of the projects we're actively involved with.

And the other one is the ExoplanetSat that Sara highlighted with the prototype she's carrying around. Sara has focused primarily on the science, and we've helped work with the students on the engineering side of this.

And as she highlighted very well, the two primary engineering problems are the photometric precision of the sensor and the pointing. We address the pointing problem from two directions: we use mechanical reaction-wheel pointing for coarse control, and then we do electrical pointing in the finer, inner control loop.
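That coarse/fine split can be sketched as a simple cascade: the reaction wheels take a large error down to roughly their own noise floor, and the fine stage handles the residual if it falls within its capture range. All the numbers below are hypothetical-- loosely inspired by the "several arc seconds" figure quoted earlier in the session, not actual ExoplanetSat specifications:

```python
def two_stage_residual(initial_error_arcsec,
                       coarse_floor=30.0,  # reaction-wheel noise floor (illustrative)
                       fine_range=60.0,    # fine stage capture range (illustrative)
                       fine_floor=2.0):    # fine-pointing floor, "several arc seconds"
    """Residual pointing error after a coarse stage followed by a fine
    stage. The coarse stage reduces the error to its noise floor; the
    fine stage then takes over, provided the residual lies in its range."""
    coarse_residual = min(initial_error_arcsec, coarse_floor)
    if coarse_residual <= fine_range:
        return min(fine_floor, coarse_residual)
    return coarse_residual

print(two_stage_residual(3600.0))                   # 1 degree initial error -> 2.0
print(two_stage_residual(3600.0, fine_range=10.0))  # fine stage can't capture -> 30.0
```

The design point of such a cascade is that each stage only needs to cover the error band the previous stage leaves behind, which is why a tiny electrical actuator can finish what the reaction wheels start.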

Again, in both of these cases, these are projects where we've got students at both the undergraduate and the graduate level. We've facilitated many graduate student theses in both of these areas, and that's a key element of where I think the energy and the imagination for future exploration is going to come from.

That's the end of my comments for this morning.

CHRISTOPHER CARR: Now I have the pleasure of introducing Susan Humphris. She's a Senior Scientist at Woods Hole Oceanographic Institution, in the area of geology and geophysics. And she earned her doctorate in chemical oceanography from the MIT/Woods Hole Oceanographic Institution Joint Program in 1977.

And during her career, she has spent more than three years at sea on various oceanographic research ships. She's completed many dives in the submersible Alvin, in both the Pacific and the Atlantic, and operated autonomous underwater vehicles in the Atlantic, Indian, and Arctic Oceans. She's the PI on a major upgrade project for Alvin that will ultimately allow it to dive to 6,500 meters.

It's my pleasure to have you join us to talk about the future of ocean exploration.

SUSAN HUMPHRIS: Well, thank you, it's a pleasure to be here. The rest of the panelists have been taking us on some wild adventures into outer space; I would like to concentrate on the future of exploration in inner space, which of course is our deep ocean.

Now if I'd been asked to present this talk 50 years ago, I probably would not have predicted where we are today. 50 years ago, the paradigm of plate tectonics was only just becoming accepted. The global compilation you see on the right-hand side-- of echo soundings from around the world-- had not even been published yet. And so the complexity of the sea floor-- the mid-ocean ridges, the seamounts, the deep trenches-- really hadn't been recognized.

Hydrothermal vents, or submarine hot springs-- which is the area where I've spent most of my time-- had not yet been discovered. And the ecosystems they support, based on chemical energy from the Earth, were completely unknown. We had no routine way to have a human presence in the deep ocean, even though Jacques Piccard and Don Walsh had just gone down to the bottom of the Mariana Trench in 1960. So it's with some trepidation-- knowing that I would probably have been completely wrong if I had tried this 50 years ago-- that I'm going to try to do it today.

Now since then we've had rapid development of equipment to either access directly or remotely the marine oceanic environment. Oceanographers routinely go to sea, moorings are used to monitor specific parts of the ocean at specific locations, drifters are used to record characteristics such as temperature as they drift with the currents. And we also use satellites to get a global perspective on characteristics of the oceans, such as sea surface temperature, ocean color, currents, ice extent, and thickness.

Now these are all technologies that we're going to see used in the future, and we're going to see a very close relationship between technical advances in these activities and the progress we make in oceanography. But what I want to do is focus on the deep ocean. And perhaps one of the most remarkable advances over the last 50 years has been the development of deep submergence vehicles, to the point where we do have routine access to the deep ocean.

Dana talked in the last panel about the vehicles. We now have a US National Deep Submergence Facility, operated by Woods Hole Oceanographic Institution, and it includes the human-occupied submersible Alvin, which is now undergoing a major upgrade after 4,664 dives to the bottom of the ocean.

We have remotely-operated vehicles such as this one, Jason II. And most recently, autonomous underwater vehicles like Sentry, which is on the cruise that Dana is currently out on. All of these are now available to the oceanographic community for work in the water column and on the sea floor.

Now in addition, there are a variety of other vehicles that have been developed, or are under development, for use in extreme environments. We have vehicles developed for use under the ice at high latitudes, we have the hybrid ROV that went to the bottom of the Mariana Trench that Dana also talked about, and then we have Neptune Canada's deep sea crawler that allows sampling and exploration on the sea floor.

Now the observations and sampling with these vehicles have really led us to appreciate that there are many previously-unrecognized environments and ecosystems in the deep ocean. Yet right now, we're constrained to get a glimpse of one place at one time-- essentially a snapshot. What scientists really need is to understand the processes going on at the bottom of the ocean. We need more spatial coverage and, even more importantly, we need to add the fourth dimension, which is time.

What we really need is a continuous human telepresence in the ocean. And I predict that the goal in the next 50 years will be the ability to interact in real-time with the ocean from anywhere on Earth. Now our ability to place high levels of power and bandwidth in the oceans and couple them with creative robotic systems is increasing, and I think it's going to lead to a transformation in the interplay between humans and the oceans.

This transformation is being enabled by emerging technologies from other, disparate fields that are now being incorporated into oceanography. Robotic systems are becoming more capable and are being used in the ocean for a variety of functions. We have fiber optic cables that can provide high levels of power and bandwidth to specific places of interest in the ocean. They also allow us to use a number of instruments that we would typically use on land-- such as mass spectrometers, in situ DNA sensors, and flow cytometers-- for genomic analyses.

Nanotechnology is going to have many uses in the ocean, and it may provide the solution to how we investigate the flow of fluids in the crust that makes up much of the sea floor-- and that, in fact, makes up the biggest aquifer we have on the planet.

Digital imaging is going to advance, and in no time we're going to have 3D and stereo imaging with significantly higher definition than we currently have. And then ecogenomics is going to allow us to do things such as have instruments in the ocean that can analyze microbial material-- perhaps expelled during some sort of seafloor event-- and have its genomic character transmitted back to land.

Now I want to use just two examples to show you the sorts of things that are going to be developed over the next few years.

In this example, I'm going to use the ocean cabled observatory, which is in its infancy-- it's a project funded by the National Science Foundation, right now at the design and build stage. This is a project in which fiber optic cables will take power and high bandwidth out to various parts of a tectonic plate: out to the mid-ocean ridge, to the mid-plate, and to the converging plate margin where we have a subduction zone.

Neptune Canada is a similar effort by Canada-- one that is actually active right now. It has links to boreholes in the ocean floor that are carrying out in situ experiments to look at the vast biomass that exists, or is thought to exist, within the ocean crust. And there are many who believe that biomass may be greater than the biomass we have on the surface of the Earth.

I'm going to take a closer look right now at Axial Seamount, which is the most magmatically-robust volcano on the spreading center, and it has three active hydrothermal sites. We have cables going out to primary nodes, and then additional connections for a variety of sensors.

And you can see here that in this animation, there is a mooring that has a profiler that moves up and down, recording characteristics of the water column. This will be able to be controlled from on land. And you see a variety of other instruments-- including robotics-- that are monitoring the state of the volcano, the hydrothermal activity, and the communities of organisms.

As we fly across the caldera floor, you can see a variety of instruments. And the cable leads us to-- excuse me, I'm sorry-- it leads us to a specific site where we have a confluence of instruments. And you can see here fluid samplers, various chemical sensors, pressure and tilt meters, and seismometers to measure earthquakes.

Now these are all, in actual fact, around a very specific site, which doesn't show up well at this resolution. But when you actually go down and look at it, what you find is a site of very, very active hydrothermal discharge from the sea floor-- chimneys with fluids at temperatures of about 350 degrees centigrade emanating from them. We will be able, through the use of robotics, to stage lights and cameras to best image the vents and their associated ecosystems. We'll have 5 to 10 stereo high-definition systems running continuously, and we'll be able to control both the lights and the robots from land.

Now when this site is set up a few years from now, it is going to be the largest single experiment in the global ocean focused on long-term measurements of underwater volcanoes. And I'm just using it to point out to you what I view as an example of a future networked ocean.

The second example I'm going to give you relates to earthquakes. This shows a map of earthquakes of magnitude greater than six around the Pacific Ring of Fire, and you can see the recent earthquakes in Sumatra, Chile, and Japan-- the really big mega-earthquakes of the last six years. Now, you should also note that off the west coast of the US and Canada, we have a similar plate boundary, with faults similar to those at the three other sites. The last time there was a major event at this site was in 1700.

Now despite the destructive power of these earthquakes and their associated tsunamis, we are currently unable to predict their occurrence. With new sea floor observations in this area-- continuous 24/7 data from seismometers-- there is hope that scientists will be able to analyze activity and signals before, during, and after earthquakes. So it's not inconceivable that in the next 50 or maybe 100 years, based on data from continuous observations in key areas of the sea floor, we might be able to develop a predictive capability. And you can imagine a world where sea floor sensors send signals back to robots, which communicate with robotic systems on land that automatically generate alarms and shut off major municipal infrastructure-- such as gas lines and power stations-- prior to a large earthquake and tsunami.

So as we move forward in deep ocean exploration, we're moving to a networked ocean. The ability for humans to interact with the oceans in real-time is going to really revolutionize our perceptions about the ocean. But this is just a vision from scientists who've been in the field for the last few decades. The people who are really going to determine the direction are these-- the future generation of scientists, who are going to interact with data brought into their classrooms.

So in another 50 years, after they've had their turn at predicting the future of ocean exploration, we'll see if this vision I presented today is correct. Thank you very much.

CHRISTOPHER CARR: Thank you, Susan. I'd like to invite the audience to write down some questions on the cards that are being passed out. And in the meantime, I'd love to get started by--

PANELIST: Are we allowed to ask questions to each other?


PANELIST: Do you want to start with--

CHRISTOPHER CARR: Okay. I'd love to start by asking about what does it mean to be an explorer? Now we've heard about some personal exploration as well as technological exploration-- both human and robotic-- and I'd love to hear your thoughts on what it means to be an explorer and how that might change in the next 50 years. Any takers?

DAVA NEWMAN: People are looking at me. Should I start? My mic works now.

Again, building on some of the wonderful talks we had yesterday: we're moving from the cultural, societal, and maybe more economic imperatives of past ages of exploration to the image that Susan just left us with-- the networked, virtual experience. What kind of experience can we get? Maybe humans are always in the loop, but where exactly is the human's place?

So I'm here in Kresge, but I might be on the space station, I might be on Mars-- am I having that rich experience? With what Sara is beaming back to us, can we really have the experience of seeing these other Earth-like planets? Is there a way for me to feel that, to almost touch that? So I think that really does change the paradigm for exploration.

SUSAN HUMPHRIS: Yeah, I also think that-- I know that when I actually am down exploring the bottom of the ocean, I feel a tremendous responsibility to be as observant as I can, to describe as much as I'm seeing, and really use the human ability that perhaps we don't yet have with cameras-- particularly in the ocean-- to try to remember and visualize as much as I can. Because I think the power of having a human somewhere in the system-- as was just pointed out-- is really what drives progress in exploration. And drives not just exploring, but also the advancements that you make in understanding whatever it is you're looking at.


SARA SEAGER: I'll add a couple of comments. I think exploring is the journey, it's about going to the North Pole or going to the Arctic or going to the deep sea or going to space. But I think every research problem is an exploration, and that's why all of us are here, really.

And this conference wasn't about theory and computation, but I can speak a bit to that. It's like when you start with a blank screen or write a huge computer program to make a prediction or an understanding, that itself is a journey. And I think that's what exploration is all about.

CHRISTOPHER CARR: Any comments? On a personal note, it seems to me that my two-year-old likes to go look at zoos on an iPhone or an iPad or something. And I look forward to the day when I can do that under the ocean or other environments as well.

So one other question is when we go to these environments, we're obviously learning things. How might that change us as a society or at a personal level as well? How has it changed you?

DAVA NEWMAN: Well, I'll start. I put Mr. Fuller's vision up there because I think it can profoundly change us, personally, to again consider the whole. Consider the earth, consider our environment, because it's really important that-- we're on the treadmill, fast and furious here-- but the best thing in the world is to think.

I always tell people I have the best job in the world, I love it here, and I really mean that sincerely. So the luxury, the ability-- we have to remind ourselves to think, because some of my biggest ideas happen when I'm jogging around the Charles River. They might not happen when I'm in my office with eight hours of meetings. So just that reflection and that appreciation, but also the holistic view.

JIM SHIELDS: Discovery, or exploration-- as was commented yesterday-- extends the bounds of human experience, and that inherently changes people. Look at the excitement generated by a significant new event-- the rover landings on Mars, or the way the entire world rallied around what Buzz and Neil accomplished when they became the first human beings to walk on another planetary body. The world isn't the same after things like that, and it's not just the individual who did the exploring.

The early explorations of Antarctica were all done and financed by patrons with the idea of creating movies and journals so that the rest of the world could participate as well. So I think there's an inherent ability for the explorer to engage the rest of civilization with their journey. It's not a solitary journey, but a journey of all humanity.

CHRISTOPHER CARR: So the idea of movies raises exploration as entertainment. Do you see a future there? Gaming, entertainment, is there going to be a movie about space exploration? We've seen some about ocean exploration.

SARA SEAGER: Well, I'll start. I'll say most of the movies related to exoplanets and astronomy and asteroids, they're all disaster films. Because people love that element of disaster and what can go horribly, horribly wrong.

And I actually just went through-- I watched almost every movie I could find recently on disasters. They all blend together because they have the same elements. And how they portray the scientists and engineers is just hilarious. That's the funniest part for us.

But I think people like the element of danger and excitement, and we've had this ongoing conversation about risk and risk management. And the big problem about sending humans into space or putting people in Alvin to go to the bottom of the ocean is you don't want to kill those people. You don't want that to happen. Even if the people themselves don't mind some level of risk, which all of the astronauts have accepted.

So I think in terms of the movie issue, it has to have the element of danger that has to be amplified from reality. But I really liked the presentation yesterday about the video games and the entertainment there, that was really interesting.

CHRISTOPHER CARR: So I'm going to bring it back to MIT for a moment. This is a question from the audience. MIT has a very strong set of programs, classes, centers, and contests in the domain of entrepreneurship.

Given that MIT's history in exploration is just as strong, should we start similar programs? Is exploration something that can be taught? Should it be promoted at MIT in a way that it's not currently?

DAVA NEWMAN: Yes. Yes, that's why we're here. I'm doing what I can. Absolutely.

We have critical mass. We need to reach across the disciplines, that's the whole purpose of going from ocean to space to the galaxy. Yes, we need the students to tell us their desires-- great to see your visions-- and we just need to keep working on it.

And this is yes to everyone in the audience, and even those that are video streaming. We need to do this all together.

SARA SEAGER: How many undergrads are here in the audience so far? There's not too many here. I was going to say we need to partially free them from that constant stream of problem sets.

And in particular in this design and build class, we have seen some students flourish like I never would have imagined. And one of the students in our design and build class-- Mary Knapp-- she actually came first in the contest sponsored by this symposium. And so I think they really do need more time to have a project that's either in the class format or that they do off on their own time, and to be able to come up with ideas. And have some resources and time, especially, to make it happen.

JIM SHIELDS: I think one of the challenges of the educational paradigm is there are so many facts that the faculty and each department want to make sure people get exposed to. And of course, they do need to be exposed to them. But it pushes out the reason why we're all in engineering in the first place, which is to do things.

And you see this, I think, with some of the more successful STEM initiatives. They are all hands-on related activities, where you give the students an opportunity to do something. And once you can get that excitement and that gratification associated with it, you can then present the education or the learning side of that as training for-- to use the sports metaphor-- training for the competition.

And so I think there's a paradigm there that will be more successful than some of what we've been doing.

CHRISTOPHER CARR: I can personally attest to the importance of design and build. As an undergraduate here, I never worked harder and with more motivation than trying to build our solar car. And spending 80 hours a week or more on occasion toward extracurricular activities, I think that's one of the real strengths of MIT.

SUSAN HUMPHRIS: Can I just add something?


SUSAN HUMPHRIS: I actually think that we have an advantage, though, because we're working with students who are studying their world or the worlds that they look at out there. And to get them engaged in building vehicles or actually going out and doing research in those environments, I think we have a great opportunity to get those students involved. Much more so than some other fields where it's much more difficult to engage the students in actual design, build, do research capabilities.

CHRISTOPHER CARR: Thank you. Sara, I have a question for you. You talked a lot about the transiting method of detecting extra-solar planets. How many planets are we missing, and will we find them in other ways?

SARA SEAGER: Okay, the question is how many planets are we missing? Right now, we're not worried about how many planets we're missing. The whole field is just trying to find planets that are as low in mass, as small in radius, and as near to Earth as possible. Eventually we'll get around to doing the whole completeness study, but for now we're not too worried about the ones we're missing.

There are, however, at least five different techniques for finding exoplanets, and most of them have been successful. So we have 500 known exoplanets, another 1,200 exoplanet candidates, and each of the methods brings something to the table. I only focus on transiting planets because that's the only way, with current technology, that we can find planets down to Earth size that we can follow up on in other ways to study in more detail.

CHRISTOPHER CARR: Thank you. And while we're talking about maybe limitations of approach-- but maybe not limitations at this phase-- I have a question for Jim. What happens when or if Moore's law runs out? Will this impact our perception about future advances in exploration?

JIM SHIELDS: A couple of perspectives on that. One is, people have been forecasting the plateauing of Moore's law for probably 20 years. And every time we think we've run out of a way to continue to increase the compute power, we've found another way to deal with it. Either with smaller feature size or parallel processing, whatever.

But if we ever get to the stage where we find limits there, I don't think we're going to be constrained by the limits of human creativity. In many cases, the algorithm side of things right now is not very efficient because it doesn't need to be. If you think about what our predecessors at the laboratory did for Apollo guidance and control, we took man to the moon on 37k of memory.

CHRISTOPHER CARR: That's amazing. And a follow-up question. You mentioned the average age of people during the Apollo program. What's the average age of Draper teams now, and why has it changed since Apollo, if it has changed?

JIM SHIELDS: The average age of the Draper staff-- we're actually very proud of our staff demographics by comparison. Most of the aerospace industry talks about an aging workforce, and something like 60% or 70% of the NASA workforce is retirement eligible. We've got a demographic that's nearly flat across all experience levels.

So our average age is probably 40-ish, which is well above the 27 of the Apollo era, but it's a reasonable mix. And one of my particular challenges, in trying to lead the organization, is to get our leadership team to realize that trips around the sun are not a measure of capability. You don't have to figure out how many times the Earth has gone around the sun to figure out how capable somebody is.

CHRISTOPHER CARR: Thank you. And now a more broad question from the audience. What is the most important question in your field for the next 50 years?

SUSAN HUMPHRIS: Well, that's a tough one in the ocean because there are so many different facets of the ocean that various scientists study. But I would say that probably the one thing that cuts across all of those studies is understanding the time domain of many of the processes in the ocean. By that, I mean the temporal variability that we see not only in the water column and its properties, but also in some of the processes going on at the bottom of the ocean.

And I would include in that this whole new area of study regarding the evidence we have for a whole new biome living within the sediments and crust of the ocean. That is potentially a huge question; we know really nothing about it-- its diversity, its extent, what the limiting factors for life are in those environments, and how those change. So the whole issue of the temporal variability of the ocean and its ecosystems, I think, is going to be a question that we address over the next 50 years.

DAVA NEWMAN: I think it'll be great to hear from each person.

JIM SHIELDS: I think the most interesting challenge is to continue to extend the perception and situational awareness of robots, and to get better at understanding how the human and the computer can collaborate. Too much of the debate about exploration is an either/or, and it really needs to be a both. I think we're on the verge of being able to change the way people and computers interact, and that's going to bring a significant increase in capability-- if we think about it that way, rather than setting them up in contrast.


SARA SEAGER: Well, in exoplanets, the questions have been around for thousands of years, and we're so excited because we're going to answer them in the next 50 years. These questions go back to the great philosophers: do other Earths exist, are they common, and do any of them have signs of life?

DAVA NEWMAN: So I'm going to take it from the perspective of education. I think the biggest challenge is-- again, I'm going to go back to changing STEM to STEAM-- it's tough enough to be great at engineering, so how can we understand multiple disciplines? How can we do this together-- the DNA, the search for life, human spaceflight? How do we weave in the engineering and the science and the arts for the betterment of knowledge and humanity?

So I want to paint it as a whole circle. That multi-disciplinary education, I think, is the big challenge in enabling human exploration from the ocean all the way to space.

CHRISTOPHER CARR: And I guess I would just add, where do we come from and are we alone? So very close to Sara's.

I want to follow-up. I've got another question here from the audience, this is about maybe algorithms. How can limited AI help solve the time delay issue of telepresence? In other words, how do we have robots and humans interacting there to give a level of autonomy that will permit us to tolerate time delays? Any ideas?

JIM SHIELDS: Well, some of the challenge is going to be better-- I'm sounding like a broken record-- but better perception and sensing. But the issue, I think, is also thinking about more deliberately where you allocate the decisions to be made.

Again, we tend to think about AI as being the machine making all the decisions on its own. And it really is an issue of thinking about how do you allocate the decisions and the supervision, and engineer that from a systems point of view? Too often, we take an autonomous system or a tele-operated system from one activity and try to just drop it into a different context, and context is important. And so in engineering the overall contextual situation, I think is the way in which we're actually going to make progress in the algorithm area.

DAVA NEWMAN: And I'll just add quickly, our algorithms are pretty darn primitive. When I look at the human brain, we don't even scratch the surface with our feedforward and feedback loops-- again, it must be a network.

How are we doing the supercomputing? And I think it has to do with plasticity and adaptability. And boy, we have a lot to study.

CHRISTOPHER CARR: A quick question for Dava. How comfortable is it to wear the pressure suit?

DAVA NEWMAN: So if you're pressurized to one-third of an atmosphere, or about 30 kilopascals, it's not so comfortable here because we're at one atmosphere-- you're overpressurized to about 1.3 atmospheres. The mock-up's not bad because that's just about a 10% increment. You can wear it for four hours, six hours, no problem.

But the thing is, when you jump into the vacuum chamber-- we just have a [INAUDIBLE] vacuum chamber-- well, it's great. One-third of an atmosphere is great when you're really in the vacuum chamber, the real testing system. So there's still more research and development to do. But as far as comfort goes, if there's constant pressure on your body, it's actually not bad at all.

You get some shrinkles-- we're working on those shrinkles. Because of course, with all the patterning and the lines that you saw, you have to be really careful with seam construction. So we're always looking at new advances. Welded seam technology is really important, because I don't want my buttons to push into me-- they should come out a little bit straight.

CHRISTOPHER CARR: Thank you all very much. We're out of time, but please join us again after a wonderful lunch, at 1:30 PM, for Peter Diamandis's keynote address.