Brains, Minds and Machines: Keynote Panel - The Golden Age - A Look at the Roots of AI, Cognitive Science & Neuroscience


MINDELL: Good afternoon. It's my pleasure to introduce and to welcome you to the MIT 150 symposium on brains, minds, and machines. My name is David Mindell, and I'm chair of the MIT 150 Steering Committee, and I just wanted to say a few words about MIT's 150th that provide a little bit of context for today's symposium.

In 1853, William Barton Rogers came north from Virginia to pursue his dream of a new kind of technical education, an education that would mix the world of science and the useful arts, theory and practice, what we have come to know as mens et manus, or mind and hand. Nine years later-- eight years later, in 1861, the governor of Massachusetts fulfilled Rogers's dream by signing MIT's charter on April 10 of that year, creating this unique and innovative educational institution.

Now, 150 years and two weeks later, we're celebrating MIT's accomplishments in a whole variety of fields of ideas and inventions that changed our world and helped define it today. And of the courageous professors, students, graduates, and alumni who have gone forth from this place to make their contributions to the world.

The MIT 150 celebrations include 150 days of events, concerts, other festivals. Today, we're on the 117th day, and we've already begun to see, over the past 117 days, the emergence of tomorrow's MIT, one that's united by ambitious intellectual agendas and focused on both its core research domains and on the Institute's relationship to the wider world.

Within the 150th anniversary, the intellectual core is a series of symposia that comprises the core of MIT: thinkers, students, researchers, and professors talking about great ideas, contemplating the world, and perhaps even making a little progress on some of the interesting problems. This is the essence of the MIT 150 symposia series, the sixth and last of which we're opening today.

The other symposia have focused on economics and finance back in January, conquering cancer through the convergence of science and engineering in March, women's leadership in science and engineering, also at the end of March, computation and the transformation of practically everything, two weeks ago, the future of exploration in earth, air, ocean, and space last week, and today's brains, minds, and machines.

Each of these symposia was chosen for the leading faculty, the exciting ideas, the new ideas, and the symposium's focus on more than one department and more than one school. Of course they in no way cover the full range of the interesting research that goes on at the Institute, but they all represent cutting-edge work that epitomizes what is best about MIT. My thanks particularly today to professors Irene Heim, Tommy Poggio, and Josh Tenenbaum for their willingness to step forward and organize this historic gathering. And to all of our participants and all of you for taking the time to be here to share your insights as well.

On a personal note, I find this symposium particularly exciting, not least because I've spent a good deal of my career studying the prehistory of cybernetics and currently work on topics of human and machine relationships in complex systems, but also because from the very first pre-proposal that we reviewed on this symposium, we saw how it coincides with the new intelligence initiative at MIT and represents a genuinely new synthesis.

I could not think of a better way to culminate the MIT 150 symposium series than with this event. Nothing could better illustrate MIT's seminal history in these areas, its synthetic intellectual culture, and the great promise for the future than what you're about to see in the following two days. I'll now introduce the Dean of Science at MIT, Marc Kastner. Thank you.

KASTNER: Thank you, David. David said it well. This is an extremely exciting topic. I love it in particular because, in addition to the deep intellectual challenges, the new intelligence initiative has already involved faculty members from all five schools at MIT. The School of Science, with its Department of Brain and Cognitive Sciences. The School of Engineering, obviously. But also the School of Humanities, Arts and Social Sciences, where linguistics, and philosophy, and economics sit. The Sloan School of Management, and the School of Architecture and Planning, which houses the Media Lab.

So this is an intellectual challenge which draws together faculty members from all across MIT, and that makes it very exciting. So this problem has a great history at MIT, but also has a great future at MIT. I want to add my thanks to professors Heim, Poggio, and Tenenbaum. And a special thanks to Lore and Pat McGovern and the McGovern Institute for hosting a reception tomorrow. So on with the show. It should be fun.

POGGIO: So on behalf of my co-organizers, Irene Heim and Josh Tenenbaum, and all the MIT 150 organization, and the people from my group who will be helping you around over the next couple of days, I would like to welcome you to this symposium and to this first panel of today.

We are in a typical gritty MIT classroom. We kept you outside in the heat just to emphasize the point. We are very close to where the old barracks were, Building 20. Back in the '50s and '60s there was a remarkable intellectual activity in this country that started around the new physics and engineering of electronics and information processing. And if MIT was one of the main centers of this whirlwind of activity, Building 20 was really the center of it, serving as a magical incubator for a number of fields as diverse as information theory, cybernetics, neural networks, linguistics, artificial intelligence, and neuroscience.

Researchers in the building included Claude Shannon, Norbert Wiener, Walter Pitts, McCulloch, Noam Chomsky, Morris Halle, Jerry Lettvin, Marvin Minsky. And the intellectual ferment eventually converged in the ambitious attempt to understand intelligence and replicate it in computers and machines. Marvin Minsky and Seymour Papert were key in starting the new field of artificial intelligence. Noam Chomsky and David Marr pioneered linguistics, and cognitive science, and computational neuroscience.

And I would like to remember in particular Jerry Lettvin, who died about a week ago. He was part of that group. He was one of the giants in this MIT history of research on the brain and the mind. He had a unique culture. He was unique. He was loved by everybody. We all miss him a lot.

Artificial intelligence started about 50 years ago. Machine learning started growing under the name of neural networks about 25 years ago. And academic research in these two areas is one of the main reasons for the emergence in the last five years, I would say, of remarkably successful applications, commercial and not commercial, like Deep Blue, Google search, Kinect, Watson, Mobileye.

Each of these systems-- computers that play chess, search the web, recognize human gestures, answer Jeopardy questions, allow cars to see-- is at human-level performance in a narrow domain. But none of these systems can be said to be intelligent, to have intelligence.

And the problem of intelligence, and of how the brain produces intelligence, and how to make really intelligent machines, is still wide open. So the main thesis to be discussed in this symposium, the way Josh and I thought about it, is that a new effort in curiosity-driven research, basic research, is needed in order to understand intelligence and understand the brain. This new basic research should, this time, not only rely on computer science, but should also be integrated with neuroscience and cognitive science. And it should also integrate tightly different aspects, facets, of intelligence, such as vision, planning, language, social intelligence. So I believe that 50 years later, it is time to try again. This symposium is a way to find out whether we all agree that it is indeed time to try again, and if so, how.

So now, I will start the real symposium. And I have the pleasure to introduce the moderator of this extraordinary panel, Steve Pinker. You all know Steve, so he does not need an introduction. You all know him from his books, starting with The Language Instinct, a great book. Steve was a colleague in my department here at MIT, and now we are borrowing him back from Harvard for a few days. Steve.

PINKER: Thank you, Tony. Welcome to "The Golden Age," a look at the original roots of artificial intelligence, cognitive science, and neuroscience.

Moderating this panel is a daunting challenge that puts me in mind of the famous quotation from John F. Kennedy when he hosted a dinner at the White House for all of the living Nobel Laureates of the Western hemisphere. And he said "this is the most extraordinary collection of talent and human knowledge that has ever been gathered together at the White House, with the possible exception of when Thomas Jefferson dined alone." It's not clear who the Thomas Jefferson figure equivalent would be in this case, although I imagine there might be some people in the audience who would nominate our late colleague, the inimitable Jerry Lettvin.

The second reason this is a daunting challenge is that our distinguished panelists represent a remarkable diversity of interests, from the biology of soil roundworms to the nature of logic, truth, and meaning. Nonetheless, I think there is a common thread: all of them are contributors to what I think of as one of the great revolutions in human thought, which is the scientific understanding of life and mind in terms of information, computation, and feedback and control.

Well into the 20th century, life and mind remained scientific mysteries. Life was thought of as this mysterious substance called protoplasm, some kind of quivering gel, animated by an élan vital. The mind was thought of as a portion of some realm of the soul or spirit. Or, according to the dogma of behaviorism, something that didn't exist at all, just one big category error.

But then, in the middle decades of the 20th century, ideas of thinkers like Turing, Church, von Neumann, Wiener, Shannon, Weaver, McCulloch, and Pitts gave us a rigorous language in which to understand the concepts of information and computation and apply them to domesticate these formerly mysterious realms, in the process revolutionizing biology and psychology. They gave us what became the insight that the stuff of life is not some magical protoplasm, but rather matter that's organized by information.

And today, when we discuss heredity, we use the language of linguistics. We talk about the genetic code. We talk about DNA sequences being synonymous, or meaningless, or palindromic, or stored in libraries. Even the relation between hereditary information and the actual meat and juices of the organism we explain with concepts from information, namely transcription and translation. The metaphor is profound.

Similarly, the stuff of thought is no longer thought to be some kind of ghostly spirit, nor a mirage or category error, but also can be understood in terms of information: beliefs are a kind of representation, thinking a kind of computation or transformation, and action a problem of control, in the engineer's sense. These ideas we take for granted now, but I am always struck, going back to earlier great thinkers in biology and psychology, by how much they floundered without them. When one reads great philosophers of mind, like Hume, or great biologists like Darwin, I often wish that I could reach back over the centuries and tell them a few things about the modern science of information, because one can see that they were flailing around with hydraulic and mechanical analogies that could be so clearly explicated by what we know now about information and computation.

I think it was the 1950s and early 1960s that was a turning point in both fields, and the six people in today's panel were all, in different ways, instrumental in making it happen. I don't think I'm going to offend anyone's vanity by introducing them and inviting them to speak in order of age. Sydney Brenner is a biologist. All of us know that 58 years--

BRENNER: [INAUDIBLE]

PINKER: Everyone knows that 58 years ago Watson and Crick explicated the structure of DNA. But the DNA would be useless if there wasn't some way for the information that it contained to actually affect the development and functioning of the organism. The discovery of the genetic code is something that we owe to our first speaker, Sydney Brenner, working in collaboration with Francis Crick and others.

As if that weren't enough, beyond the discovery of the operation of the genetic code and the mechanisms of transcription into RNA and translation into protein, Sydney Brenner was also instrumental in the modern science of development, the study of how the information coded in the genes actually builds a three-dimensional functioning organism, and, for that matter, in neuroscience, through his choice of the lowly roundworm Caenorhabditis elegans. C. elegans, often known as Sydney's worm, has exactly 959 cells, 302 of them neurons, and therefore offers a perfect opportunity to reverse engineer the process of development and the wiring of the nervous system.

In recognition of this accomplishment, Sydney was awarded the 2002 Nobel Prize in Physiology or Medicine. But perhaps an even greater recognition is that he has been immortalized in the Linnaean taxonomy, a sister species of C. elegans, Caenorhabditis brenneri, has been named after Sydney Brenner. Sydney is senior distinguished fellow at the Crick-Jacobs center at the Salk Institute for Biological Studies.

Marvin Minsky is a computer scientist. And he is widely recognized as one of the founders, perhaps the founder, of artificial intelligence, cognitive science, and robotics. He is responsible, among other things, for the first neural network simulator, the first computer simulation of semantic memory, one of the first music synthesizers, first mechanical arm, and the first programmable Logo Turtle.

He has, since then, been a major theoretician on how to apply computational analysis to problems such as vision, reasoning, learning, common sense, consciousness, emotion, and so on. He has been recognized with the Association for Computing Machinery's Turing Award, as well as the Japan Prize. But perhaps most significantly, he inspired Arthur C. Clarke when he was writing 2001: A Space Odyssey and served as consultant for that movie, in particular for the computer HAL. Marvin Minsky is Professor of Media Arts and Sciences Emeritus at MIT.

Noam Chomsky is a linguist who revolutionized the study of language by changing the very nature of the questions that the field attempts to answer. Noam pointed out that the most profound mystery of language is first of all that any competent speaker can produce or comprehend an infinite number of novel sentences. Therefore, our knowledge of language has to be captured by a recursive, generative system. Noam explored the mathematics of such systems, has developed a number of theories of what kind of recursive generative system is implemented in the human mind, and inspired the modern science of psycholinguistics that explores how language is processed in real time.

Noam also set, as a problem for linguistics, the project of figuring out how children acquire a first language without formal instruction in an astonishingly short period of time, suggesting that children are innately equipped with a universal grammar of the abstract principles behind language, an idea that led to the modern science of language acquisition. Noam is also largely responsible for the overthrow of behaviorism as the main dogma in the study of psychology and philosophy, for rehabilitating the philosophical approach of rationalism, and for making the concepts of innateness and modularity respectable in the study of mind.

Chomsky has been recognized with many prizes, among them the Kyoto Prize, the Helmholtz Medal, and the Ben Franklin Medal. He is also the most cited living scholar, according to citation counts, and has become such a fixture of popular culture that Woody Allen featured him prominently in his New Yorker story "The Whore of Mensa". And I believe that Noam is the only MIT professor whose lecture has ever been featured on the B-side of an album by the rock group Chumbawamba. Noam is Institute Professor Emeritus in the Department of Linguistics and Philosophy here at MIT.

Emilio Bizzi is a neuroscientist who made early contributions to our understanding of sleep, but has spent most of his career studying the process of movement. Emilio is responsible for setting the agenda for the neuroscience of movement, pointing out that the movement of animals is not simply a set of knee jerks, or a lever moving when a button is pressed, but rather has goal-directedness: a simple muscle motion would be useless if it were executed the same way regardless of the starting position of the body, so movements have to be organized toward positions in space. Movement is therefore an intelligent process at every level of the nervous system.

Emilio is also largely responsible for the maturation of neuroscience as a field, which did not even exist as a name when he entered it. He was the founding director of the Whitaker College at MIT, and founding head of the Department of Brain and Cognitive Sciences, in which I was privileged to teach for many years.

In addition to his scientific brilliance, Emilio is widely respected as a man of judgment and erudition, and his wisdom has been tapped in many scholarly organizations. Together with prizes such as the [? Empedocles ?] Prize and the President of Italy Gold Medal for Achievements in Science Prize, Emilio has served as the president of the American Academy of Arts and Sciences. Emilio is currently Institute Professor of Neuroscience here at MIT.

Barbara Partee is a linguist. She was a student in the first class of the fabled MIT graduate program in linguistics. And as the first recipient of a National Science Foundation Fellowship in linguistics, her graduate career symbolically inaugurates the appearance of linguistics as an official science. Barbara transformed the field of linguistics, which hitherto had concentrated on syntax and phonology, by putting semantics, the study of meaning, on a formal, rigorous foundation, tying linguistics to the world of logic and the concepts of meaning and truth.

She remains the world's foremost semanticist, and moreover, trained all of the world's other foremost semanticists. When she retired recently, her students presented her with a genealogical tree of all of the people that she has trained. They noted that it has a depth of four, which means that she has trained several great-great-grand-students, and it has 176 nodes. Barbara has been recognized with the Max Planck Research Award from the Max Planck Society, and is currently Distinguished University Professor Emerita, that's feminine for Emeritus, at the University of Massachusetts Amherst.

Patrick Winston is a computer scientist. He was one of the first researchers to give a rigorous analysis of the concept of learning in the framework of modern symbolic artificial intelligence, and his work on the concept of learning continues to be influential today. He was also instrumental in transforming artificial intelligence from a bunch of hackers in plaid flannel shirts to a respectable academic field through his textbook, through directing the Artificial Intelligence Laboratory at MIT for most of its existence, and his famous course in artificial intelligence, as well as starting a number of companies.

Patrick has been recognized with a number of teaching awards, including the MacVicar, the Baker, and the Graduate Student Council Award here at MIT. And he is widely known and beloved, aside from his contributions to artificial intelligence, for his training of teachers through his famous IAP talk, "How To Speak," which has influenced many of us.

Among the many suggestions for how to keep a class engaged, he suggested that every lecturer should cultivate some eccentricity, whether it be wearing a rope belt, or tugging a lock of your hair, or erasing the blackboard with both hands. I was influenced by this as a postdoc, and ever since, I have always lectured wearing a gaudy necktie. Patrick is Ford Professor of Artificial Intelligence and Computer Science at MIT.

I have asked the six panelists to spend 10 minutes each sharing any personal thoughts that they think the audience would enjoy on the birth of the modern sciences of life and mind, and their reflections on the origins, key questions, key discoveries, and open challenges of their fields. So we'll start with Sydney. I'd like to ask each of you to speak for about 10 minutes, and then I will ask you to amplify, reflect, ask each other questions, and so on. I'll put a little timer here to just remind you how much time has elapsed.

BRENNER: I've shrunk in recent years. Well, I thought I'd give-- to explain why I'm here, I thought I'd give two classes of reasons. The first will be sentimental, and I'd like to say that my association with artificial intelligence has always been as an interested spectator. I don't think I ever played in the game.

My interest in it came from a very long friendship with Seymour Papert. We grew up together in South Africa. We shared a room in the Department of Physiology. I taught Seymour neurophysiology. He taught me mathematics. And I came to the conclusion he was both the better student and the better teacher.

It was the connection with Seymour that, when I finally grew up, brought me into association with the Department of Artificial Intelligence. And I sent one of my best students here. I wouldn't say he was a student. One of my best colleagues here, David Marr. And so those are the reasons why I've always been associated with it.

And in later years, which I ask you to ponder on, there was a great adventure here producing something called a thinking machine. In fact, there was a corporation named after that. And I suppose that's what we were after. We have to resuscitate thinking machines.

Now I'd like to give you the mental reasons why I'm here. I was very much influenced, in about 1950 or '51 it was, by reading an article by John von Neumann called "The Theory of Self-Reproducing Machines". It had been published in a book called The Hixon Symposium on Cerebral Mechanisms. And that's a very interesting book to read through, because you can see all the false leads.

So that paper of von Neumann's went unattended to, although it laid out the actual basic logical construct for the way the genes work. And in fact, the whole of this was done without any reference to biology. And indeed, when the biologists discovered that this was the machinery, nobody mentioned von Neumann.

In fact, they all paid very great respect to Schrödinger. Schrödinger wrote a book called What Is Life? Everybody claimed to read it. I read it. I didn't understand it. So it had no influence on me. But in thinking back on it some years ago, I came to the conclusion that Schrödinger had made a fundamental mistake. He said the genes contained the program for development, as we would put it these days, and the means to execute it.

What von Neumann said is that they don't contain the means of execution. They contain the description of the means of execution. In other words, the program is not self-reading. You have to build a reader for it. And that's, of course, what von Neumann's [INAUDIBLE], and without this, you can't make a self-reproducing machine, because it has to transmit to the next machine a description of the means to do it.
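To make that distinction concrete, here is a minimal toy sketch, purely an illustration and not anything from the talk: the "genome" is a passive description, a separate reader turns it into machinery, and reproduction works only because the description itself, not the machinery, is copied and handed on. All names here are invented for clarity.

    # Toy illustration (not from the talk) of von Neumann's point: the description
    # does not execute itself; separate machinery reads it, and self-reproduction
    # requires transmitting the description to the offspring.

    DESCRIPTION = ["make reader", "make copier", "make body"]  # the passive "text"

    def reader(description):
        """Machinery that interprets the description and builds the named parts."""
        return [line.replace("make ", "") for line in description]

    def reproduce(description):
        """Build a machine from the description AND copy the description itself,
        since the offspring needs the text to build its own reader in turn."""
        machinery = reader(description)      # execution: text -> working parts
        inherited_text = list(description)   # replication: copy the text, not the parts
        return machinery, inherited_text

    parts, child_text = reproduce(DESCRIPTION)
    print(parts)       # ['reader', 'copier', 'body']
    print(child_text)  # ['make reader', 'make copier', 'make body']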

And I think that this is the fundamental thing that lies behind all of this. And so, if you like, if you want to say I've got this text in DNA, it's a long sequence: can we read it? Can I look in there and say yes, that's a zebra, and it's going to be able to do these things? That is what we believe we should be able to do.

So you take this, and you try to analyze the difference, the relationship, between what we inherit and how we can perform. And the big argument at the time was that there is a connection between genes and behavior, which gave rise to a whole lot of other problems, such as the claim that intelligence is something that is inherited with the genome. Those are not really the questions to ask, because I think you must divide the problem into two, as indeed it is divided into two in von Neumann's view of it.

The first thing is, how do the genes specify and build a machine that performs the behavior? And how does that machine perform the behavior? That is a separate question. Of course the two are connected, as indeed they are, but they must be distinguished, because what we are asking is, if we're looking at the behavior, the behavior is represented in the genome as a description of how to build a machine that behaves. And you see it is very important to get that through, because the deepest problem is how did all of this evolve? Because you can only change the description.

So there are very interesting questions that are attached to this. And in following this line of thought, I thought that the only way to give a scientific theory of a nervous system is to ask, how does the wiring diagram, if I can call it that, compute behavior? Because if we know how this is done, we can look at the deeper computation later, which is how is the script translated into the machinery that builds this?

And in fact, I think a lot of science will now go to what I call the forward question, which is how do we connect the output of a system with its wiring diagram? Which is the thing I think we have to solve. But of course, this is a grand thing, and one of the things you learn is that there is a difference between vision and eyesight. I learned this by describing a friend of mine as a man of great vision but poor eyesight. So we have to have good eyesight to implement anything like this.

And that is the story of the worm. The worm has 302 neurons. Seemed to me it was a finite problem. We could come to the end of description, and then after that, we could deal with the questions that everybody raises when you do modeling, which is the skeptic who stands there and says how do you know there's not another wire that starts in the big toe, runs up at the back, and connects this with this? You have to be able to say we know all the wires. Then you can proceed.

So the aim was, then, to get a total analysis of the structure of the nervous system, the structure of the brain of the worm, 302 neurons, which we finally accomplished. It took 20 years, because we had many, many other romances with computers in between. We tried to mechanize this, but to do this in the 1960s was impossible. Nowadays, it can be done, and there's huge activity now in getting these wiring diagrams specified.

In my opinion, it is sometimes overzealously applied, because it's not going to answer the question that we wanted to answer: if I take a worm of exactly the same genetic structure, will I find exactly the same wiring diagram? Today, you could say, sure, I've just cut a section of this mouse's brain, and I've found these synapses. And you say, well, maybe if you had delayed your knife half an hour, would you have found the same synapses? And indeed, if you go to another mouse and look at the same cell, is it going to be the same there?

So I think that in approaching this question, I think that we are on the threshold of really asking very serious questions, both in biology and in relationship to the behavior of complex systems. And that's why I think this initiative is very, very important at the time-- important to do now, because I think we all have to take a completely different approach to this in the future.

As you well know, my colleague Francis Crick got interested in consciousness at one stage. And I was asked, some time ago, what did I think of consciousness as a problem? The great physicist Wigner thought that consciousness would be the explanation of why you can't predict anything with quantum mechanics, that there would be this added ingredient. My view is that I think consciousness will never be solved, but will disappear.

Now, 50 years from now, people will look back and say, what did they think they were talking about here? Right? I'll tell you one other thing that had the same character and has disappeared. It is a thing called embryological determination. We had many discussions in the '50s and '60s. Is determination different from differentiation? Are they different processes? Nobody talks that way anymore, because we have a completely different view of this.

And this, I would like to say, is what is going to change those views, and change many of the other views which I'm not equipped to talk about. But I'll tell you the one view that I would like to see very much changed. And that is the view of what a gene stands for. And in doing this, I will just deliver a little parable in my sermon.

It was said that when we sequence the human genome, and we sequence the chimpanzee genome, we will find one extra gene in the human. That'll be the gene for language, and we'll call it the Chomsky gene. But there's an alternative explanation, which is that during evolution, chimpanzees discovered that talking gets you into trouble. And so they evolved a language suppressor gene. Then we'll call it the Chimpsky gene. Thank you very much.

PINKER: Marvin.

MINSKY: I have too many notes. When Brenner started working on his worm, C. elegans, that was a big project. 1960, was it? And to map out the nervous system of this animal as well as the rest of it, you had to make very small slices with a microtome and trace the nerves and other organs in every layer. And that's a tremendous amount of work to do by hand. And he had employees, graduate students, I don't know what, and this process was taking a long time.

At the same time, McCarthy and I had started the Artificial Intelligence Group here and had done a lot of work on computer vision, even by 1960. So I visited Brenner and suggested that he should import a couple of our graduate students who would automate the process of vision. And Brenner said no.

And I said why? And he said well, both of our fields are in their very early stages, and I have these budding genetic biologists or whatever, and you have these budding computer experts. And if you let any of them into my lab, all of my students will realize that your subject is much more exciting and easier. Do you remember that? And it took me a while to realize he was absolutely right.

I don't know where to start. I love MIT, and I've been here in a sense since 1957, because I started at Harvard as an undergraduate, and I could talk about that for two days. But in the course of-- and then I went to graduate school at Princeton. And I'm not sure where I'm going with this.

But every time I've got interested in doing something, I was incredibly lucky. I never had to do any real work. If I had a mathematics problem, well, when I was a sophomore at Harvard, I sat down at a table with a young man named Andrew Gleason who had won the Putnam-- first prize in the Putnam Mathematical Competition three or maybe four years in a row, which sort of excluded everyone else in the world.

And I asked what he was doing. And he said he was trying to solve Hilbert's fifth problem, which is that every continuous-- every locally continuous group-- oh, I forget what it is. Anyway, oh, every continuous group is differentiable. It's a very nice problem. And I said, how long do you think it will take to solve that? And he said nine years.

I had never met anyone with a plan of that magnitude only a few years older than me. And I asked what that was, and he said well, first I'll have to do Dedekind cuts to show that the thing has to have real numbers and blah, blah, blah. Anyway, it took him eight years. And I think, somehow I was very early in my research career, and it seemed to me that eight years was plausible, let's do things.

Anyway, I was just very lucky. I ran into von Neumann at some point when I was doing my thesis at Princeton, which was on neural networks. And I got the idea of doing that from earlier work by McCulloch and lots of people in the 1940s. It turned out that in 1895, Sigmund Freud had written a little paper on neural networks, but no one would publish it. And it wasn't published till around 1950.

Well, I guess the same question came up, because I was in the math department at Princeton, and they asked von Neumann, is this PhD thesis mathematics? And I'm happy to say that he replied, if it isn't now, it soon will be. And anyway, there was something great about the era, and it's hard to talk about how things started, because it was so different from how it is today.

See, 1950 is shortly after World War II, and the country was very prosperous, and there were lots of research institutions, because there were a lot of monopolies. I mean, a company like General Electric, or Westinghouse, or even CBS were pretty huge. And they-- and Bell Laboratories, of course, was a monopoly. The others only acted like it. And so they would start out different projects. And one, I guess John McCarthy and I in the summer of 1951 or two-- anyway, we got summer jobs at Bell Labs and someone told us not to work on any problem that would take less than 30 years.

They didn't have the tenure deadline either. Now to get tenure you have to be quick, because legally, I think tenure is seven years, but they like it to be six years, because you don't want to keep people hanging. And what's more, if you fire somebody before that, you have to pay about a year termination fee and oh well, blah, blah. So it's very hard to get a research grant for five years, which was sort of the standard that NIH and many of these laboratories had. So any questions?

Anyway, we started this artificial Intelligence laboratory. And the first few-- I'll just stop in the middle of the sentence when time is up, because I never make plans. I could talk--

PINKER: What?

MINSKY: Oh, it's an iPhone. I have one. Mine is [? pink. ?] Anyway, how did all this happen? When I was a child, there were a lot of books in the house, and I got interested in reading them. And it seemed to me that they fell into two classes. One was fiction, and that was novels. This is a joke I've made many times.

And I read a few novels, and it seemed to me that they were all the same, even by great writers from Shakespeare on down. And there are about six or seven sins, or the major ways that people screw up their lives. And so a typical piece of fiction describes a situation where something goes wrong, and you try to fix it, and then something else goes wrong. And then this goes on until finally you cure them all or you die. It doesn't make much difference.

So that's what general literature is about. And science fiction is about everything else. So somehow, Isaac Asimov lived nearby when he had just finished-- and just as I ran out of the Jules Verne, and H.G. Wells, and Aldous Huxley, and John Campbell, and the early science fiction writers, exactly the right people started to turn up, which was Robert Heinlein in 1942, I guess, and Isaac Asimov a little later, and Frederik Pohl.

I met Isaac because he lived in Newton, which is nearby, and at this time, or somewhat later anyway, we were building the first robots. Pretty much the first-- although a guy named Grey Walter, a neurologist in the west of England, had made some interesting ones. And I invited Asimov to come and see our robots, and he refused. And this happened a few times.

And finally I said why not? And he said I'm imagining really intelligent robots of the future. And if I came and-- I'm sure that your robots are really stupid. Which they were. They could play a very slow game of ping-pong and a few things like that.

You might wonder why aren't there any robots that you can send in to fix the Japanese reactor that's broken? Reactors, I should say. The answer is that for the last-- there was a lot of progress in robotics in the 1960s and '70s, then something went wrong.

If you go to Carnegie Mellon, I don't want to mention names, or anywhere else, you'll find students are slaving over robots that play basketball, or soccer, or dance, or make funny faces at you, and they're not making them smarter. They don't use or don't realize that they need something like artificial intelligence. So I'd like to see a return back to the old days.

Brenner mentioned consciousness, I think, and I agree entirely that it's a very bad word. Because if you look at my recent book, which is badly named The Emotion Machine, it says notice that people use the word consciousness for 35 very different things, like remembering what you've recently done, or making certain kinds of choices, or reacting to certain stimuli, and so forth. And so virtually everything we don't understand about the mind when people do things is attributed to this word.

So anyway, that's one of the problems. And yet I see that professional people in various sciences still use the distinction between conscious and not. And as far as I can see, you can't get rid of that word. What's its main use? Its main use is deciding when someone is a criminal. If they run over you by accident, well, that's very bad and you might lose your license. If you run over someone on purpose, that's intentional, conscious, blah, blah, blah.

And so we have this strange situation in psychology, that I don't see in other sciences, which is to use words that have been around for a long time without-- I'm down to the last 10% of my notes, so.

PINKER: Thank you. Noam Chomsky.

CHOMSKY: Well, I kind of like that idea of the language suppressor gene. That actually has a venerable history, which maybe you know. In the 17th century, when Westerners were discovering all kinds of exotic creatures like apes, and [INAUDIBLE], and others, and they weren't sure which was which, there was a lot of discussion about whether apes, orangutans, you know, can speak. They didn't seem to, and there was a lot of debate about why they didn't.

And there was a proposal that there was-- they didn't call it genes in those days, but there was the analog of a language suppressor gene. Louis Racine, who was the son of the famous dramatist, suggested that apes are really more intelligent than humans, and the proof of it is that they don't speak. Because they knew that if they did speak, we would enslave them. So they keep quiet.

This, incidentally, led to the conclusion that a brilliant father can't have a brilliant son. I was in the old Building 20 in the golden age. And it actually was a golden age. But I got here in 1955. It was quite an exciting place. A lot of enthusiasm, innovation. A very lively interaction among people of widely different fields. It's a kind of community which I think will be very hard, if even possible, to reconstruct.

I also have many very warm memories of it. In fact, some of them very hot memories. Now, those of you who were around in those days may remember that over the summer, Building 20 was unbearably hot. You could barely survive. Morris Halle and I then shared a small office, and we decided we would try to get the Institute to put in an air conditioner, a window air conditioner.

So we sent a message up through the bureaucracy, and it finally reached whatever high office it got to, and something came back down to us finally with a message saying you can't put in an air conditioner because it wouldn't be consistent with the decor of Building 20. Those of you who have ever seen a picture of Building 20 would know what this meant. Fortunately, we could find a friendly janitor who for $10 was willing to break the rules and we were able to survive the summer.

Back a few years earlier, in fact, going back to the time of the famous paper, Alan Turing's paper on machine intelligence in 1950. In the early '50s, there was a small group of graduate students down the road who were dissatisfied with the prevailing conceptions of how to understand thought, behavior, language, and so on. And we were trying to see if we could construct another way of looking at these topics, which might be more promising and integrate better into the general sciences.

And Turing's comments had a certain resonance. You may recall that in this paper, which is about machine intelligence, he begins by saying that the question whether machines can think is too meaningless to deserve discussion. He didn't explain why, but he presumably meant that it's a question of what kind of metaphor you are willing to accept. So it's like asking do airplanes really fly? Or do submarines really swim? If you want to extend the metaphor, yeah. If not, no. But it's not a factual question.

He nevertheless went on to say that it would be a very good idea to construct hard problems to see if you can design machines, meaning hardware and software, to solve them. And the famous proposal of his was what he called his Imitation Game, later came to be called the Turing Test. He thought that might be an incentive to develop better machines, better software. And for that reason it's good to do it, but we're not asking do machines think?

He also suggested that maybe in the course of time, he said 50 years, people will come-- because of this work, people will come to think about thinking in a different way. You can ask whether that's happened. The machines-- it certainly was an incentive to develop better machines. And that had a certain analogy to the kinds of things we were concerned with.

There is, for one thing-- because when you look at language seriously, which wasn't really done at that time, unfortunately, you can see right off that what each of us has internalized is some kind of computational system. A system which, in fact, determines an unbounded array of structured expressions, each of which has a dual interpretation: it's interpreted by the sensorimotor system as some kind of externalization, sound or, we now know, other modalities, and it's interpreted at an internal, roughly speaking, thought system, sometimes called the conceptual-intentional system, as having a specific meaning, which then enters into planning of action and so on.

So that's a computational system. And the understanding of computational systems had advanced quite considerably by the mid-20th century, in large part because of Turing's work. And that sort of fit naturally. Also, his provisos made some sense for us. There is a standard question, what counts as a language? And I think Turing's response was accurate. The question's too meaningless to deserve discussion, for the same reasons as machines thinking.

If you want to call the communication systems of hundreds of species of bees language, okay, you can call it that. That means you're accepting a certain metaphor. If you don't want to call it that, don't. It's not a factual question.

The facts of the matter are that these systems differ radically. Every animal communication system known differs radically from human language in its basic principles, structural principles, and principles of use. And human language, in fact, and here it's different from Turing's case, is a particular biological system, one that each of us has internalized, that is, at its core, a computational system with the kind of properties that I mentioned.

And there's no problem here about constructing hard problems to deal with, because it was quickly discovered, as soon as the effort to construct such computational systems was undertaken, that almost nothing was understood. Everything you looked at was a puzzle. And the more you learn, the more puzzles you discover. That was kind of surprising at the time, because it was assumed, the prevailing conception was that everything was more or less known. There aren't any real problems about language.

It's just a matter of-- well, a famous version was [INAUDIBLE], a highly influential philosopher, most influential, whose picture was that language is just a collection of sentences associated with one another and with stimuli by the mechanism of conditioned response. So all there is to do is just trace the individual cases of histories of conditioned response and you get the answers. Among linguists, it was very widely assumed that languages can differ in virtually any possible way with virtually no constraints. And the only problem in linguistics, I remember this as a student, was to collect more data from a wider variety of languages and apply to them the techniques of analysis which were assumed to be more or less understood. And that's the field. But nothing could be puzzling.

On the other hand, as soon as the effort was undertaken to construct an actual computational system with the properties that I mentioned, it was discovered that everything is a puzzle. Nothing is understood. And some of the puzzles are quite simple, and some of them are still outstanding, in fact.

So just to be concrete, take the simple sentence can eagles that fly swim? Okay. We understand that it means we're asking a question about whether they can swim. We're not asking a question about whether they can fly. But that's a puzzle. Why doesn't the "can" relate to the closest verb? Why does it relate to the more remote verb? One which, in fact, is closest in a different sense. It's structurally closest.

So similarly, we can say are eagles that fly swimming? But we can't say are eagles that flying swim? Now that's a reasonable thought. It's asking is it the case that eagles that are flying swim? But somehow we can't say it. The design of language prevents us from articulating a perfectly fine thought except by paraphrases. And there is a question why that should be true.

From a computational point of view, computing the closest verb, computing linear order and linear closeness, is far simpler than computing structural closeness, which requires complex assumptions about the structure of the object. So there's a puzzle. And that puzzle had been around for thousands of years, except nobody thought it was a puzzle. It's one of really many such cases.

I should mention that in the last couple of years, sort of a substantial industry has developed to try to deal with it on computational grounds. And I won't talk about that. I don't think it gets anywhere. But it's a real puzzle. And furthermore, that principle of using structural distance but not linear distance is all over the place. You find it in structure after structure in all languages, and it seems to just be some universal property of language, and a very puzzling one.
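As a purely illustrative aside, here is a minimal sketch of the contrast being described, using an invented miniature parse of "Can eagles that fly swim?". The tree, the labels, and the two strategies are simplifications made up for this example, not anything proposed in the talk: the linear strategy picks the first verb in the word string, while the hierarchical strategy picks the verb at the smallest structural depth, which is what speakers actually do.

    # Toy contrast (a simplification, not from the talk): linear order versus
    # hierarchical structure for "Can eagles that fly swim?"

    sentence = ["can", "eagles", "that", "fly", "swim"]
    VERBS = {"fly", "swim"}

    # Simplified, hypothetical constituent structure: the relative clause "that fly"
    # is buried inside the subject, so "fly" is linearly early but structurally deep.
    tree = ("S",
            ("NP", ("N", "eagles"),
                   ("RelClause", "that", ("V", "fly"))),
            ("VP", ("Aux", "can"), ("V", "swim")))

    def linearly_closest_verb(words):
        """Linear strategy: relate 'can' to the first verb met in the string."""
        return next(w for w in words if w in VERBS)

    def verbs_by_depth(node, depth=0):
        """Collect (depth, verb) pairs from the tree."""
        if not isinstance(node, tuple):
            return []
        label, *children = node
        found = [(depth, children[0])] if label == "V" else []
        for child in children:
            found += verbs_by_depth(child, depth + 1)
        return found

    def structurally_closest_verb(node):
        """Hierarchical strategy: the verb at minimal structural depth."""
        return min(verbs_by_depth(node))[1]

    print(linearly_closest_verb(sentence))   # 'fly'  -> what linear order would predict
    print(structurally_closest_verb(tree))   # 'swim' -> what speakers actually compute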

Well, discoveries of that kind are kind of reminiscent of the earliest moments of modern science. Go back to say 1600. For thousands of years, scientists had been quite satisfied with an explanation for such things as why stones fall down and steam goes up. They're seeking their natural place. End of that story.

When Galileo and a few others allowed themselves to be puzzled about that, you start getting modern science. And I think that continues. The capacity to be puzzled about what looks sort of obvious is a good property to cultivate. And it's an awakening, and that began at that time. And it turns out there's enormous linguistic evidence that this is the way things work.

There's also, in recent years, some evidence from the neurosciences. There are some interesting investigations being carried out by a very good group in Milan. The linguist in the group, Andrea Moro, is well-known here. He's been here many times.

They've been investigating the reaction of people to two types of stimuli, all new to them: invented systems that meet the basic conditions of what's called universal grammar, of the general principles of language, and others that violate these conditions and use things like linear order. So for example, an invented language in which, to negate a sentence, you take the negative word and you put it in the third position in the sentence, let's say. And it turns out that brain activity is quite different in these cases.

In the case of a normal language, which people have never been exposed to, Broca's area, language areas, are activated in the normal way. But in the case of linear order, it's not. The language areas don't have the normal activation. Now, people may figure it out, but they're figuring it out as some kind of puzzle to solve, not using their linguistic capacities.

Well, all of this ought to be puzzling, and it is. And in fact, as far as I know, the only serious proposal as to how to explain it has pretty far-reaching conclusions. The natural assumption is that at the point at which the computations are taking place in the brain, there just is no order. There's no ordering at all. All there is is hierarchy. So the only thing the brain can compute is minimal structural distance, because there's no linear order.

Well, that looks pretty plausible, in fact, from many points of view. One reason is that if you look at the syntax and semantics of language, for a very broad core class of cases, order doesn't matter. What matters is hierarchy. Furthermore, if you look at the-- so that suggests that one of those two interfaces, the semantic interface with the thought system, just doesn't care about order.

On the other hand, the sensory motor system must have some kind of arrangement depending on what the form of externalization is. If it's speech, it'll be linear order. If it's signed, it'll be various things in parallel. But it has to have some kind of ordering. So it's reasonable to suppose that the ordering is simply a reflex of the externalization, but doesn't enter into the core properties of language, which simply give you the thought, what's used in the thought system.

Now that effectively means, if it's correct, and it seems to be, that in the informal sense of the term design, you know, no designer, but in the informal sense of the term, language is designed for thought and not for externalization. Externalization is an ancillary process. And that would mean, incidentally, that communication, which is just a special case of externalization, is an even more ancillary process, contrary to what's widely believed.

That has significant implications for the study of the evolution of language. I won't go into them, but you can think them through. And for the nature of the design of language, there are other kinds of evidence that lead to similar conclusions. It says a lot about the architecture of mind, if this is correct.

For example, one class of cases is a kind of ubiquitous phenomenon in language, sometimes called displacement. You hear something in one position, but you interpret it somewhere else. So if you say, what did John see? You have to interpret the word "what" in two positions. In the position where it is, it's kind of an operator asking some question about what person or what thing, something like that.

And also in another position as the object of see. Just as in John saw the book, or something, where it's getting its semantic role as the object of the verb. It's got to be interpreted in those positions. It's as if the mind is hearing something like for which thing X, John saw X, where the "for which" thing is an operator ranging over the variable.

That shows up in other ways. That unpronounced element turns out to really be there for the mental processes that are taking place. You can see that in sentences like say, they expect to see each other, where the phrase "each other" is referring back to they. Say the men expect to see each other.

On the other hand, if you had another noun in between, in the right position, it wouldn't work. So if you say, the men expect Mary to see each other, then that somehow breaks it up. Each other can't go back to the men. Well, what about, who do the men expect to see each other? Well, that's like the men expect John to see each other. Each other doesn't go back to the men.

But there's nothing there in the form that you hear. It's just like they expect-- the men expect to see each other. Well, that suggests strongly that the mind is actually hearing, interpreting for which person X, the men expect X to see each other. And that X is like John when you pronounce it. And things like that can get pretty complicated.

So if you take a sentence like say, they expect-- they think that every artist likes one of his pictures best. Like maybe the first one he painted. If you take one of his pictures and you ask a question about it, so which one of his pictures do they expect every artist to like best? The answer would be, well, his first one, even though which of his pictures is not in the right position to be bound by the quantifier every.

And that becomes clearer if you make a different question. Suppose you say, which of his pictures convinced the museum that every artist paints flowers? Well, there's no relation between every and his there, though structurally, if you think it through, it's about the same as the one that works. There's really only one plausible interpretation for that, and that is that the phrase which of his pictures is actually appearing twice. It's appearing where it's getting its semantic role as the object of like, and it's appearing where it is being interpreted as an operator ranging over the variable. And these examples proliferate into pretty complicated cases when you look at them closely.

It turns out, interestingly, that all of this follows if you develop an optimal computational system, one that meets the condition of computational efficiency by using the least possible computational operations. Then it turns out that what goes to the mind ought to have all these copies. They don't get erased.

On the other hand, what goes to the mouth has them all erased, except for one, to tell you that it's there. That turns out to be optimal computation. But that has a consequence. It means that optimal computation, the core of language design, is oriented towards the meaning. The mapping, the transformation to what comes out of the mouth, is what causes communicative difficulties. Anyone who's ever worked on a parsing program, trying to figure out how to mechanically interpret the structure and meaning of a sentence, knows that one of the hardest problems is what's called the filler-gap problem.

You hear the word what at the beginning of the sentence, and you've got to find where the gap is, the unpronounced element that this what is related to. If you pronounced them all, that wouldn't be a problem. Most of the cases would be solved. But efficient computation leads to the consequence that it undermines communicative efficiency. That's pretty much like the case of are eagles that flying swim? Nice thought, but you can't say it.
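Again purely as an illustrative aside, here is a minimal sketch of that filler-gap point; the token lists, the function names, and the "__" marker for the silent copy are invented simplifications, not anything from the talk. The internal representation keeps both copies of "what", externalization silences all but the first, and a parser is left to recover where the silent copy sits.

    # Toy sketch (an illustration, not from the talk): copies are kept for
    # interpretation but deleted in externalization, creating the filler-gap problem.

    internal = ["what", "did", "John", "see", "what"]   # both copies available to the thought system

    def externalize(tokens, filler="what"):
        """Pronounce only the first copy of the filler; later copies become silent gaps ('__')."""
        seen = False
        spoken = []
        for tok in tokens:
            if tok == filler and seen:
                spoken.append("__")      # unpronounced copy
            else:
                spoken.append(tok)
                seen = seen or tok == filler
        return spoken

    def find_gap(tokens):
        """The parser's burden: locate the silent position the fronted filler relates to."""
        return tokens.index("__")

    spoken = externalize(internal)
    print(spoken)            # ['what', 'did', 'John', 'see', '__']
    print(find_gap(spoken))  # 4 -> the object position of 'see'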

And again, optimal operation of the computational operations is leading to communicative difficulties. And quite generally, there's an interesting class of cases known now where you have conflicts between computational efficiency and communicative efficiency. And computational efficiency wins hands down in every case that's understood, which again suggests that the design of language is really for meaning. It's for semantic and intentional interpretation, organization of speech acts, and so on, and the externalization of it, the fact that it sort of appears in the outside world somehow, is just a secondary process. Communication, a tertiary process.

Well, that reorients our conception of how the mind works, but it looks pretty plausible. Well, another issue-- I kind of go long sometimes. Another minute? Yeah. Going back to the '50s, it was of course understood right away that none of this can-- it's impossible for any of these things to be learned by association, conditioning, induction, any simple mechanism. So it must be the case that these examples of the kind that I mentioned, which are known very early by children, we have a little evidence, must result from the other two possible factors in development.

One of them is some genetic endowment, which is a species property apparently. All humans share essentially the same endowment. No group difference is known. So that's one. And the other is just laws of nature. The kinds of laws that determine, say, that a cell divides into spheres, not tubes. So genetics is involved some, but not much.

The interaction of these two factors ought to be able to yield results of this kind. By now there's a fair amount of evidence supporting the idea that the kinds of things I mentioned illustrate it. Of course, languages differ all over the place. That was known too. So you have to show that these principles that you're postulating are going to apply universally.

That does not mean, contrary to a widely held misunderstanding that you can read in journals like Nature and Science and others, that there are going to be descriptive generalizations about language that hold universally. In fact, it might turn out that there are no descriptive generalizations at all, and you have a very rich genetic structure. That's not the case, but it could turn out, because the things that you see are reflections of internal processes that meet other conditions, like efficient computation. But somehow you've got to be able to apply it to everything.

Well, in the '50s, in the golden age, a few languages were studied, the ones that people knew in [INAUDIBLE] at that time. Actually, one of the first extensive studies was of an American Indian language. The first dissertation in the department (actually, it was in the EE department, because we didn't have a formal department then) was on Turkish. There was of course work on English. [? Marcel ?] was working on Russian.

By the 1960s, as the department developed and people came in from all over, the range of languages investigated expanded enormously. In the '80s and since, it has just exploded. There is now intensive investigation of languages of a very wide typological variety. All kinds of new problems are being discovered, sometimes new answers. Sometimes they lead to further problems, and so on.

At the same time, what Steve mentioned before, the study of language acquisition really took off. Actually, Steve's work was part of the early work on this in the '80s, based on trying to discover how the options of variation among languages, which are all that has to be picked up by a child (the rest is fixed), are set on the basis of data. And by now there's quite a lot of work on that.

I'll just finish by saying that plenty of problems remain. I don't want to overstate, but there are some fairly striking and reasonably plausible conclusions which have far-reaching implications. All of this, however, is restricted to what's going on inside the internal computational system, what's going on inside the mind.

Now somehow, language has to be related to the outside world. And because there are two interfaces, it's going to have to be related in two different ways. At the sensory-motor interface, the question is hard but sort of understood. That's the topic of acoustic and articulatory phonetics, which has been studied pretty intensively, [? orally ?] too, for 60 years. A lot of results, a lot of problems, but at least the problem's kind of in hand.

What about the other end? How does the meaning side relate to the outside world? Well, there is a widely held doctrine that's supposed to give an answer to this. It's sometimes called the referentialist doctrine. It holds that the elementary semantic units, let's say words for simplicity, relate to the outside world by a relation of reference, picking something out, or of denotation for predicates, picking out a class of things.

So, like cow: the word cow picks out cows, where cows are something that a physicist could identify in principle. The only problem-- and that view is very widely held-- is that it was known in classical Greece that it's not correct. They didn't have much of a deep analysis, but they had examples showing it doesn't work.

In the 17th century, in what we ought to call the first cognitive revolution, many of the significant questions were raised that are still puzzling today. It was widely understood that that doesn't work. As David Hume put it, summarizing a century of inquiry, the things we categorize are fictitious, created by the mind using Gestalt properties, notions of cause and effect, notions of psychic continuity for sentient objects. John Locke studied this.

And other properties that a physicist cannot identify. They are imposed by the mind on reflexive experience. And those are the entities, if you like, sort of mental entities that we construct. And they're somehow related to the world. Now, this poses a huge problem. There's no animal communication system known that has anything remotely like this. There, the referentialist doctrine apparently holds.

So if you take, say, the cries of a vervet monkey, each cry is keyed to some identifiable physical event. Like, leaves are moving; we interpret that as meaning an eagle's coming, and all the vervets run and hide. I don't know; how they interpret it is another question. Or else it's an internal, sort of hormonal phenomenon, like I'm hungry or something like that.

Every communication system that's known works like that, but human language doesn't work like that at all. You take a look at the words of the language, none of them work like that. And that's a real problem. For one thing, it's a tremendous evolutionary problem. Like where did all this come from? Totally unknown, and maybe unknowable, but it's also a problem for the nature of language and for the acquisition of language.

Like how do children find these properties which they get to understand very quickly? In fact, things like children's fairy tales exploit them effectively. Well, these are major mysteries that remain on the horizon right at the core of what language and thought are about. And there are plenty of other ones. So I'd just suggest again that the capacity to be puzzled is a good one to cultivate.

PINKER: Thank you very much, Noam. Emilio Bizzi.

BIZZI: Well, one of the-- can you hear me? Yeah? One of the goals of artificial intelligence, as I understand it, was to build intelligent machines. And there's a class, of course, of intelligent machines that move, like robots. And so what I'm going to discuss is movement, not in machines, but movement from a biological perspective. And then toward the end, I will discuss the interaction between artificial systems like robots and biological systems that move. But for the initial part of my discussion here, I'd like to point out something that I can illustrate with an example.

Let's say that I want to reach this pair of glasses. Well, this is something extremely simple. We do it hundreds of times per day. We have done it thousands and thousands of times. So what's the big deal about it? Well, this simple gesture points out an important fact. And that is that for what seems to be a very simple action, in reality the neural processes that subserve it, that make it possible for this action to be expressed, are of great complexity.

And this complexity is, to a certain extent, mysterious. We know a little bit about it. We know that some of the problems, some of the computations, are beginning to be seen. But there is an enormous number of things that have to do with actions that at this point remain fairly mysterious.

So let me start with what we have glimpsed recently, in the last few years. And that has to do with-- I'll go back to this example. If I want to touch this pair of glasses, then I have to do a number of things simultaneously. I have to move my eyes, my eye muscles, my neck muscles, my trunk muscles, and of course the arm muscles.

Now, from an anatomical point of view, all these muscles are made of elements, which are the muscle fibers. And each group of muscle fibers receives a fiber from the central nervous system, in this case particularly from the spinal cord. So what we have here is that to implement this simple gesture, the central nervous system has to control an enormous number of elements, that is, the muscle fibers that make up the muscles.

So this is in a sense a gigantic problem, because we have an extremely tough computational problem: how to arrange this distribution of signals to a very vast space. Well, in the last few years, a number of investigators have identified elements of the architecture of the central nervous system dealing with movement that indicate a modularity. And this modularity has been identified predominantly in the spinal cord, maybe also in the upper part of the brain stem. But in any case, most of it is in the spinal cord.

And what that means is that there are groups of interneurons that manage to put together a group of muscles into a unit. What does that mean? It means that the number of degrees of freedom, this vast space, has been reduced dramatically, in such a way that the descending fibers from the cortex, which convey the information for movement from the brain down to the spinal cord, all they have to do is combine. They activate these various modules in the spinal cord, combine them, and provide a coefficient of activation for each one. And this view of how the brain manages to be so effective in producing movement is derived from experimental work that has been performed in my laboratory and in other laboratories over the last 10 years.
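The combination scheme described here can be pictured, very roughly, as a small set of fixed modules scaled by descending activation coefficients. A minimal numerical sketch follows; the numbers, the three modules, and the five "muscles" are invented for illustration, and the linear combination is only a cartoon of the idea, not the biology.

```python
import numpy as np

# Three spinal "modules" (synergies) over five muscles.
# Each row is a fixed pattern of muscle weights stored in the spinal cord.
synergies = np.array([
    [0.9, 0.5, 0.0, 0.1, 0.0],   # module 1
    [0.0, 0.3, 0.8, 0.6, 0.0],   # module 2
    [0.1, 0.0, 0.2, 0.5, 0.9],   # module 3
])

# Descending commands only need to supply one coefficient per module,
# not one signal per muscle fiber: three numbers instead of thousands.
coefficients = np.array([0.7, 0.2, 0.5])

muscle_activation = coefficients @ synergies
print(muscle_activation)   # the pattern actually sent to the five muscles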

Now there are other things that are more mysterious. When I do this simple movement, the movement has been learned somehow during the course of life. The site of motor learning is certainly the motor cortex of the brain, the areas of the frontal lobe. Now, these areas of the frontal lobe are of course connected with the subcortical nuclei, with the cerebellum, and so on. And certainly all these areas, in conjunction with the connectivity of the spinal cord, represent the circuitry that encodes the signals, the memory signals, that are necessary to do this simple action.

But here there is a catch. When I do these simple actions, I can do them as I am now, in front of you, but I can also do them with my body in a different posture. I can be reclining, and nevertheless I can accomplish the same goal. So it's very tough to understand how the signals that have been memorized, which specify the particular muscles that have to be activated in one particular context, can in another context still be just as effective. This is a question of generalization. And at least I don't understand how the central nervous system manages to do this process of generalization so effectively.

There are also other computational problems where it is quite difficult to see how they are implemented in the central nervous system. I can reach this pair of glasses, but if there is an obstacle, I can go up this way, or I can go down this way. So this is a question of planning. Vertebrates are extremely good at planning their actions depending on the environment. And again, how can this memory that guides the movements be translated, how can the signal be modified, in order to accomplish pathways that are potentially different each time, depending on the environment?

Now there are other things that are somewhat less mysterious. We know that among the vertebrates, a certain amount of learning goes on by imitation. And recently, some light has been shed on this process of learning by imitation. Neurons have been found in the frontal lobe that discharge when the subject sees an action, and also when the subject repeats that action, a similar action. These are called mirror neurons, and they are an important feature of the neural system that seems to provide the basis for learning by imitation.

Now, although there are all these tough computational problems that need to be understood, I am optimistic that in the next few years we will make a lot of progress. And the reason is that there are many laboratories scattered in various parts of the world that are pursuing the issues of robotics, humanoid robotics. That is, what they are trying to do is to put these properties that I've described to you, generalization, planning, learning by imitation, and so on, into machines.

And I don't know how far this effort has gone, how much these people have accomplished. But nevertheless, the fact that there is this intellectual attitude toward implementing these solutions in machines, toward trying to find the computational solutions that provide these properties to machines, is a way to really understand. When you start to make things, you begin to face the computational problems, and you understand.

There is also-- and I'm finishing up-- there is also a new field which is somewhat promising. And that is to place sets of microelectrodes into the brain-- it has been tried with humans, and is usually tried in animals-- and connect the signals that come out of these microelectrodes with machines. So this is brain-machine interaction, which obviously has tremendous importance, both for understanding the workings of the central nervous system and for its practical significance in the rehabilitation of amputees and so on.

Now, for the time being, there are technical problems that prevent the quick implementation of brain-machine interactions. Those have to do with how the brain reacts to permanent contact with these electrodes; they are essentially rejected after a while. But it's conceivable that different types of technologies could be used in order to get around this problem. So these are the two reasons why I have a certain amount of optimism that in the future this motor intelligence will be understood. Thank you.

PINKER: Thank you, Emilio. Barbara Partee.

PARTEE: For me, the adventure began just 50 years ago, here at MIT in 1961. The Chomskyan revolution had just begun, and Noam Chomsky and Morris Halle had just opened up a PhD program in linguistics. And I came in the first class. I want to start by thanking Chomsky and Halle for building that program. And I thank MIT and the Research Laboratory of Electronics for supporting it. I'm indebted to Chomsky for revolutionizing the field of linguistics and making it into a field whose excitement has never waned. Chomsky redefined linguistics as the study of human linguistic competence, making linguistics one of the early pillars of cognitive science.

At the center of the Chomskyan revolution was syntax: generative grammar, a finite specification of the infinite set of sentences of a language. That launched an extremely productive research program, but it didn't include semantics. Chomsky considered meaning scientifically intractable, and he wasn't alone. When linguists soon did try to add semantics to generative grammar, there were two big obstacles: a lack of good formal tools, and the absence of any non-subjective notion of meaning.

It was the UCLA logician Richard Montague who revolutionized the study of semantics in the late 1960s. He built on successes in formalizing the syntax and semantics of logical languages, the work of Frege, Tarski, and others. Montague himself developed a higher-order typed intensional logic with a model theory, and with its help, he was able to formalize a significant part of the syntax and semantics of English within a clearly articulated theoretical framework. The result was what we now call formal semantics.

Emmon Bach has summed up Chomsky's and Montague's cumulative innovations thus. Chomsky showed that English can be described as a formal system. Montague showed that English can be described as an interpreted formal system. For me, as a young syntactician at UCLA in the late '60s, Montague's work was eye-opening.

Linguists had been building tree-like semantic representations, okay for capturing certain things like scope ambiguity, but otherwise leading to endless arguments about competing representations. We linguists never dreamed of truth conditions being relevant for linguistics. But Montague's work, and David Lewis's, showed that truth conditions are crucial for giving semantics a non-subjective foundation. A linguistically exciting part was that with such a rich logic, we could get a close match between syntactic constituents and semantic constituents. And meanings of sentences could be built up recursively and compositionally from the meanings of their syntactic parts.

One brief illustration. Bertrand Russell had argued that natural language syntax is logically very misleading, since it puts John and every man into the same syntactic category of noun phrases, whereas in first-order logic, John walks and every man walks must have radically different syntactic structures. Montague showed that natural language syntax does not have to be logically misleading at all. All noun phrases can be treated uniformly in higher-order logic as so-called generalized quantifiers. And English subject-predicate sentences, like John walks and every man walks, can then be interpreted straightforwardly in a uniform manner. Generalized quantifier theory is now one of many rich topics in formal semantics, and it can explain many things that we couldn't explain with syntax alone.
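One way to see the uniform treatment concretely is to model every noun phrase as a function from predicates to truth values. The following is a minimal, purely extensional sketch in Python over a tiny invented domain; it is only an illustration of the generalized-quantifier idea, not Montague's full intensional logic.

```python
# Toy domain and predicates, invented for illustration.
men   = {"john", "bill"}
walks = {"john", "bill"}        # the set of walkers

def walk(x):
    return x in walks

# Every NP denotes a generalized quantifier:
# a function from predicates to truth values.
john      = lambda P: P("john")
every_man = lambda P: all(P(x) for x in men)
some_man  = lambda P: any(P(x) for x in men)

# Subject-predicate sentences are then interpreted uniformly as NP(VP).
print(john(walk))        # True  -- "John walks"
print(every_man(walk))   # True  -- "Every man walks"
print(some_man(walk))    # True  -- "Some man walks"
```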

I started working on Montague grammar in about 1970 to try to integrate it into linguistics by finding ways to put Chomsky's and Montague's work together. From the early '70s, collaborative work among philosophers, logicians, and linguists in the US and Europe built in many directions. One important result is that with a serious semantics to share the load, syntax doesn't have to do all of the work that we once imagined. Syntactic and semantic advances now often develop in tandem, each informing the other.

By the late 1980s, formal semantics was well established within linguistics departments in the US. In Europe, it may be in linguistics or in philosophy. Among middle generation leaders of formal semantics, I'll just mention two who are involved in this symposium and who were my PhD students at UMass in the 1980s. Irene Heim, head of the Department of Linguistics and Philosophy here, became MIT's first formal semantics faculty member in 1989. And Gennaro Chierchia is now Head of Linguistics at Harvard. Both have made seminal contributions that have helped shape the field.

I have to mention one seeming paradox which relates to open challenges as well as to past progress. I stressed how the Chomskyan revolution made linguistics an early pillar of cognitive science. Yet Frege's anti-psychologistic stance, shared by Montague, was crucial in the foundations of formal semantics. Frege argued that truth conditions, and not mental ideas, have to be at the core of the meaning of a sentence. And the work of generations of linguists and philosophers has shown the fruitfulness of that approach.

First, we have to formalize what the semantics of our language is, what our sentences say about how the world is, then figure out how our knowledge of it, such as it is, is acquired and exploited. This stance might seem to exclude formal semantics from cognitive science, but I believe on the contrary, it makes the contributions of semantics to cognitive science all the more interesting.

Human language is a remarkable achievement. Part of what's remarkable is how we implicitly recognize and navigate the social construction of meaning. When we converse, we simultaneously exchange information and negotiate meaning. David Lewis's classic work on convention was an early milestone in exploring the relation between individual competence, what's in the head of one language user, and social intelligence, an important dimension that is still probably under-explored.

As for newer directions: the best understood parts of formal semantics concern the semantic composition of wholes from parts-- sorry, I proofread this several times-- what we might call the semantics of syntax.

There has also been great progress on formalizing parts of pragmatics involving the interaction of meaning, language use, and context, including the study of how context both affects and is affected by interpretation. And studies in language processing now include formal semantics and pragmatics. And game theoretic approaches are having a growing influence.

Computational formal semantics is now a subfield of computational linguistics, contributing to both theoretical and applied goals. And there's promising early work on universals and typology in semantics with innovative fieldwork uncovering semantic and pragmatic subtleties of various indigenous languages. And just as in syntax, such languages prove to be as semantically rich and complex as more familiar European languages.

There are many challenges facing formal semantics and its relatives. These are still young fields. I'll mention just two that I think are important, both for the field and for the goals of this symposium. First, the semantics of sentential structures is increasingly well understood, but lexical semantics is really hard, and formal methods are weakest there. It's in lexical semantics that we most clearly cannot equate the language with our knowledge of it. As Hilary Putnam observed years ago, the fact that he doesn't himself know anything that would distinguish a beech tree from an elm tree does not mean that beech and elm are synonyms in his language.

The lexicon has lots of linguistically important substructure, but it's also the part of language that interfaces most strongly with non-linguistic knowledge, with encyclopedic knowledge, with our commonsense ontology, with visual imagery, with cultural associations, and on and on. And connected with that, another challenge is how to build formal semantics into real-time processing models, whether we think of psychological models of how we do it or computational models that might do it in an applied way, models that involve the integration of linguistic and not specifically linguistic knowledge.

This concerns not only lexical semantics, but also all kinds of context dependence. Implicit content, meaning shifts, and nonliteral uses of language. I know there is progress. I'm not an expert in those areas. But I believe this is a domain where major interdisciplinary innovation and cooperation may be crucial.

Just an illustration of the challenges. I'm pretty sure that the most successful search engines do not do any formal semantic analysis of questions that we ask or of texts that they are searching. I've heard of some beginnings of some nice work in that direction, but large scale, all-purpose, fast search engines, or quote "intelligent" machines that really know and use semantics as we study it are probably still far in the future.

And in conclusion, I would suppose that really knowing semantics is a prerequisite for any use of language to be called intelligent. So if there is to be a new basic research effort to understand intelligence and the brain, semantics is ready to contribute to it and to gain from it. Thank you.

PINKER: Thank you Barbara. Patrick Winston, I think you're going to be a motile organism, as I recall. Would you like to use the podium?

WINSTON: I think I'll not use that. I've never been very good at introductory statements, so I haven't prepared one. I'll ask myself a few questions instead. My first question is, well, what do you think so far? And that's a difficult question for me to answer, because several of the speakers, you may have noticed, ran out of time.

I was particularly disappointed that Marvin was unable to give us a detailed account of his assessment of the progress that's been made in artificial intelligence in the past 20 years, so I decided to dedicate a little of my time to doing a simulation of Marvin, and here it is. And that concludes my simulation of Marvin's account of the past 20 years.

Now, it's certainly the case that many people would contest the view that there's been no progress, but I don't think anyone would contest the view that there could have been more progress in the past 20 years than there has been. And I think it's informative to think about why that might be, because we have no lamp to guide us into the future except the lamp of what has passed before.

And I think that in my view, what went wrong went wrong in the '80s. And the first thing that went wrong is that we all discovered that artificial intelligence technology was actually useful, and that led to a kind of commercial distraction, from which the field has never recovered. So that's one problem.

Another problem, of course, is that happened to be the decade in which the Cold War ended. And as a consequence of that, there were shifts in funding patterns that contributed to the problem, because sponsors were no longer as interested in being princes of science, but rather became much more interested in short-term goals, demonstrations, and the like.

But I think the most insidious problem was what I'd like to label the mechanistic Balkanization of the field. What happened in that period was that artificial intelligence, as a field, matured and began to break up into communities that were coagulating around mechanisms rather than problems. So it was in the '80s that we began to see conferences dedicated to neural nets, conferences dedicated to genetic algorithms, conferences dedicated to probabilistic inference. And when you dedicate your conferences to mechanisms of that kind, then there's a tendency not to work on fundamental problems but rather on those problems that the mechanisms can actually deal with.

This tended to affect not only the conferences that were being held, but also the kind of jobs that were offered by universities. Those job descriptions tended to have phrases in them like neural nets, probabilistic inference, genetic algorithms, and the like. I think it was in about 1995 that I had a horrible nightmare as a consequence of this. I was on the search committee for the Electrical Engineering and Computer Science Department at the time.

And in my nightmare, we had put out a call for people interested in making an application to work in MIT as a professor in the Electrical Engineering and Computer Science Department. And Jesus applied. And after due consideration of the application, we decided to send the standard letter. Thank you for your interest in our department. We regret that with over 400 applicants, we cannot interview anyone. And besides, this year we're not looking for a theologian. It was because of that kind of Balkanization, which focused everyone on particular rather narrow elements of the field.

So what to do? Some people say, well, Winston, the trouble with you is you're just a hopeless romantic about the past. But that's not entirely true, because I'm also romantic about the future. It's just that right now may not be so hot. And I think the future does offer some hope. And you might say why? And I think the reason is that we are now gradually becoming less stupid about what to do.

So you might say, well, what is it that you think we should do? And then my response would be, well, I think we should take the right approach. And then you would say, what is the right approach? What do you call your approach? And I would reply, well, I call my approach the right way. And then you might say, well, what does your approach have to offer? And then I would say that what my approach has to offer is a focus on the fundamental differences between our human species and other primates, and in particular others of the genus Homo.

And you might say, well, what are those properties that separate us from, for instance, the Neanderthals? Who knows? But if we want a clue, the best thing to do is to ask the people who study the question. So I've been much influenced by, for example, the paleoanthropologist Ian Tattersall, who tells us that we, as a species, coexisted quite happily with the Neanderthals for about 100,000 years. And during that time, we made approximately the same kind of stone tools, and there was approximately no progress in our making of stone tools.

But then, about 60,000 years ago or so, something magical happened to the human species. It probably happened in southern Africa. And it probably happened in a necked-down population of maybe many hundreds, but certainly not more than a few thousand, individuals. And once we got that new property, it spread throughout the human population. And within 10,000 years we had either wiped out or outcompeted the Neanderthals, wiped out a lot of species along our path throughout the world, and never looked back.

And so what is it then that was that magic property? Once again, who knows? But we can speculate. And I've heard Noam say things like it was the capacity, perhaps, to take two concepts and put them together to make a third concept, and to do that without limit and without disturbing the concepts that got merged together. I think that's probably right.

And I would hazard to layer a piece on top of that, because I think that that capability, in turn, makes it possible for us to describe events. And once we can describe events, it's natural to develop a capacity for stringing them together into stories. And once we have stories, then we can take two stories and blend them together to make another story. And once we have that, we have at least a kind of creativity. And once we have that, we have a species that's much different from any other.

So I've come to believe that story understanding is one of the great things that's enabled by the inner language that's underneath our communication language. And I think that the past 50,000 years have been a period of time in which we have become increasingly awash in stories.

We are soaked in stories from our birth to our death. We start with fairy tales. We continue on through religious parables. We go on to literature and history. We end up perhaps with law school or medical school. And throughout that whole period of education, we are essentially accumulating and learning to manipulate the stories that come to us in our culture and in our environment.

So that's what I think ought to be done. And now you might ask an actual question, what kind of property do you need in order to study these particular questions that derive from this approach? And the answer is that there's only one property that you really need, and that is, I think, naive optimism in the face of overwhelming contraindication. But that's okay. I don't mind that, because at the very least, I think that through the study of stories that we will develop at least sharper questions to ask in the next generation.

I also think that it's possible to do the kinds of things that I do better today than I was able to do them some years ago, because I think I've gradually become less stupid about the steps that need to be taken in order to make progress. So here are the steps that I think you need to take in order to make progress. First of all, you need to characterize the behavior. Next, after you've characterized the behavior, you need to formulate a computational problem. And once you've formulated a computational problem, then you could take step three, which is to propose a computational solution. And all of these things are the natural objectives of classical psychology.

But then there's a fourth step. Once you have the proposed solution to the computational problem, then the kind of thing that I do I could refer to as exploratory development, building systems that you believe will have the behavior that you have characterized in virtue of the computational solution that you have proposed. And when you do that, you discover, to your horror and amazement, that you left things out, that your solution is incomplete, and you have to go back through that cycle again, and again, and again.

So I think that's four steps: the behavior, the computational problem with all of the constraints necessary to solve it, a proposed solution, and exploratory development. And as one of my students, [? Leonard ?] [? Federov, ?] mentioned to me this morning, that isn't enough either, because at step four, you might be in the same situation the Romans were in when they had catapults. You could improve them, but you didn't understand them until you had a notion of Newtonian mechanics. So the fifth step, then, is that you have to look back over the first four steps and crystallize out the principles that made everything possible.

So that's what I think ought to be done. And that's the sort of thing that I do. But as I say that, I think it would be easy to suppose that I think everything happens in symbol land, merely because we are a symbolic species. But I don't think everything happens in symbol land, because I think we make use of the stuff that was there before. In particular, we have to solve a lot of problems with our perceptual apparatus.

So I could say to you, for example, Sebastian kissed Erin. And then I could ask, did Sebastian touch Erin? And you would immediately reply yes, of course, because you can imagine it. And you don't feel like you're doing syllogistic reasoning. You feel like you're imagining the situation and reading the answer off of the imagined situation with your perceptual apparatus.

And I think that's right. I think we know a lot of things latently and we don't have to be told them. We don't have to be told that if you kiss somebody you touch them, because we can figure that out on the fly. And then, of course, we can cache it in our symbolic systems if we want, but the point is that we can compute that stuff when we need it. So we know vastly more than we actually know.
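A minimal sketch of the compute-then-cache idea, with an invented toy "scene" model: the touching fact is never stored as a rule; it is read off an imagined scene on demand and only then remembered symbolically. The scene representation and the actions are assumptions made purely for illustration.

```python
from functools import lru_cache

def imagine(action):
    """Toy 'perceptual' model: each action is imagined as a minimal scene."""
    scenes = {
        "kiss":  {"distance_between_agents": 0.0},   # lips in contact
        "wave":  {"distance_between_agents": 2.0},
        "shove": {"distance_between_agents": 0.0},
    }
    return scenes[action]

@lru_cache(maxsize=None)          # once computed, the answer is cached symbolically
def entails_touching(action):
    return imagine(action)["distance_between_agents"] == 0.0

print(entails_touching("kiss"))   # True  -- computed on the fly, then remembered
print(entails_touching("wave"))   # False
```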

Here's another example that I am very fond of. I borrowed it from [? Shimon ?] [? Oman. ?] It's the example of, well, what am I doing? It's not a trick question. I'm drinking. Now what am I doing? I'm toasting. Now what am I doing? I'm admiring this fine [INAUDIBLE] glass.

But then for the next example I have to ask you to exercise your own power of imagination, because I don't have a slide that does it. What I want you to imagine is that I'm a cat, and my mouth is pointed to the sky. And up there above my mouth is a leaking faucet. What am I doing? I'm drinking. Even though that kind of drinking doesn't look, visually, anything like the kind of drinking I did with my first example, yet somehow we label those both as drinking.

So my view is that it must be the case that this kind of drinking and that kind of drinking are telling the same story. That there's a story about thirst, a story about motion, a story about liquid moving through space into your mouth. And that's the story of drinking, and you can recognize it from a single frame as a drinking story, even though it's very different from other frames, from other videos you've seen that tell the drinking story.

So that's what I think. And the last question I'll ask myself is what I've tried to contribute here today. And the answer is I tried to talk a little bit about why I think we haven't made progress in the field to the extent that we should have, what I think we ought to do now in order to make progress, and the steps that I think need to be taken by people who are trying to make that progress. And an emphasis, too, on the idea that we're not just a symbolic species of a purely symbolic nature, and that our inner-language capability makes possible an interaction with our perceptual apparatus that I think would not be possible in a dog or a chimpanzee.

And finally, I think as part of the contribution package, I'd like to say that I think that if we do all this, then we'll have a much better understanding of how we work. And I think that will lead to what I would like to call, with perhaps a touch of irony, unthinkable value to the species.

PINKER: Thank you, Patrick. Tommy Poggio tells me that we have another 15 minutes, and I would hate to let this opportunity pass with this extraordinary collection of speakers without some exploration of perhaps some themes that cross-cut some of the presentations. I'm going to take the liberty of asking a couple of questions myself to try to draw out some connections among the fascinating presentations we just heard.

And I'll start with Sydney Brenner. You have inaugurated the study of the nervous system of a creature with 302 neurons. Many of us are interested in the neuroscience of a creature with 100 billion neurons, give or take a few hundred thousand. And we have heard, this very evening, suggestions that the genetic code contains information that is exploited in the development of the capacity, say, to use human language, together with the thoughts that are expressed in language.

You referred jocularly to the Chomsky gene, the single hypothetical gene that turns us into a human, but more seriously, do you have any thoughts on how we might eventually narrow the gap between systems like C. elegans, where we clearly have an enormous amount of understanding, and the organism that many of the rest of us are interested in and the possible information in the genome that results in the innate unfolding of cognitive abilities?

BRENNER: Well, I think it has to go beyond these hardwired systems. And I think there is a biological invention that occurred which allowed us to expand the possibilities of the system. If I can give one example of this: I think it would be extremely interesting to ask, is it just going to be more of the same stuff that somehow culminates in intelligence and all the other associated properties? Or is there some revolutionary change in principle?

So I think the important thing is that once we get away from that and have a way of building bigger nervous systems, we have the potential, then, for them not to be fixed. There are many such systems in biology. The immune system is one. You can learn things from the past, which is done by animals being selected for things, but you must have a way of meeting unforeseen circumstances.

And I think we have to have something like that. It needn't be coded like the immune system. We have to have a flexible system in which we can achieve the result, but we need the additional input of experience, if you like. So perhaps our genomes only decide how to make a baby, and all the rest of it has to follow. But they give you the capacity to do these things. And that, I think, is the very interesting thing.

On the other hand, I am still very interested in the accumulation that comes from the processes of evolution. And evolution, of course, naturally selects for organisms and, so to speak, guarantees their survival. In fact, it could be argued that the trouble with us, as human beings, is that we stopped biological evolution, while sitting inside of us is a quarter-of-a-million-year-old genome, totally unsuited, maladapted to the environment that we've created as part of history.

And I think we should be clear about that, because there is still, if you like, deep down inside, an animal. And the animal is still very much like a rat or a mouse. Of course it doesn't do the same things, but it is a system designed for function, for survival. I think what we've been able to do, by the expansion of other parts of the brain, is in many cases to suppress it.

I think there's a very good example; you can see this very clearly. And that's why I think the basic part of the brain, the part that we can still explain from the genome and all these other things, resides in the hypothalamus. The hypothalamus is Freud's id, all right? Now what happens is the frontal cortex sends inhibitory signals down to the hypothalamus and says, stop behaving like a beast.

So all those things which we regret in our constitution, like greed, lust, desire, avarice, those are encoded, really, in the genes that affect the hypothalamus. What we have is this inhibitory thing. So that means that we can suppress this, and sometimes we don't want to, or can't, and so on. So I think it's the layering of the system that distinguishes this. And therefore you will have to work in this field in a way that looks at that layering.

Everybody knows that if you take a rat and let it smell a cat-- it has to smell it; that's encoded in the genome-- it undergoes a complete panic. Of course you can condition that to the vision of a cat. Then you show the animal a cat: complete panic. You take it out and you destroy just 3,000 cells in the hypothalamus and you put it back in the cage. You can show it the cat. It'll saunter out and walk in front of the cat. The amusing thing is, now the cat is perplexed.

However, I'll just give you that. I think a lot of this is there, and I think this is where biology can tell you what the ground process is, what's encoded in our genes. Many things are like that. But I think the rest of it is all the other stuff that we've seen. So we have, if you like, a machine within a machine. We have added on top of this a machine which is capable of flexibility and is capable of changing it.

So that's why I don't believe that tracing out all the connections will let us discover exactly this. I think that is a form of insanity. I mean, today, nobody can describe to you in detail all the physical events that occur in a computer chip when you take a picture with the camera in your telephone. And nobody wants to describe it anymore, because the basic principles are there. We've got a [? buffet. ?]

So I think we have to look at this, still, with the whole idea of layers of organization. And we must learn to know what we can do and what others can do together with us. That's why it's got to be multidisciplinary.

PINKER: Thank you. I have a question for both Noam and Marvin that I think is on the minds of many people in this room, and I know it's been expressed in some of the questions that I've received by email prior to this event. There is a narrative in which the new direction of both artificial intelligence and cognitive science is one that makes a great deal more use of probabilistic information that is gleaned from enormous amounts of experience during learning.

This is manifested in branches of cognitive science such as neural networks and connectionism, Bayesian inference models, the application of machine learning to intelligence, and many of the models of both Tommy Poggio and Josh Tenenbaum. In the classic work from the golden age, and indeed in many of the models since then, including the models of generative grammar and models of semantic memory, probabilities don't play a big role. Is the narrative that says that the direction of the field is in making use of massive amounts of statistical information via learning-- well, maybe I'll ask you to complete that sentence. What is the-- well, I'll let you complete the sentence. Noam?

CHOMSKY: It's true. There's been a lot of work on trying to apply statistical models to various linguistic problems. I think there have been some successes, but a lot of failures. The successes, to my knowledge at least-- there's no time to go into details-- are those that integrate statistical analysis with some universal grammar property, some fundamental properties of language. When they're integrated, you sometimes do get results.

The simplest case, maybe, is the problem of identifying words in a running discourse, something apparently children do, you know? They hear a running discourse, they pick out units. And the obvious units are words, which are really phonological units.

And there's a natural property-- I wrote about it in the 1950s. I mean, it was just taken for granted that if you have a series of sounds and you look at the transitional probabilities at each point, what's likely to come next, then when you get to a word boundary, the probabilities go way down. You don't know what's coming next. If it's internal to a word, you can predict pretty well. So if you trace transitional probabilities, you ought to get something like word boundaries.

Actually, I wrote about it in 1955, and I assumed that it was correct. It turns out to be false. It was shown, actually, by Charles Yang, a former student here who got his PhD in the Computer Science Department, that if you apply that method, it basically gives you syllables in a language like English.

On the other hand, if you apply that method under a constraint, namely the constraint that a word has what's called a prosodic peak, you know, like a pitch or stress peak, which is true, then you get much better results. Now there's more recent work, which is still in press, by [? Shukla, ?] [? Aslund, ?] and others, which shows that you get still better results if you apply that together with what are called prosodic phrases.

And it turns out that a sentence has units of pitch, and stress, and so on, which are related to the syntactic structure, actually in ways which were studied, maybe first seriously, by another former student, Lisa Selkirk, a colleague of Barbara's. But when you interact that prosodic information with transitional probabilities, then you get a pretty good identification of word boundaries. Well, that's the kind of work that I think makes sense. And there are more complex examples, but it's a simple example of the kind of thing that can work.
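The bare transitional-probability idea can be stated in a few lines. Below is a minimal sketch under simple assumptions: an invented toy "corpus" of syllables, a bigram estimate of P(next unit | current unit), and a boundary posited wherever that probability dips. The toy works only because the made-up data are cooperative; as noted above, applied to real English this statistic alone tends to find syllables rather than words unless it is combined with prosodic constraints.

```python
from collections import Counter

# Toy corpus, invented for illustration: the "words" ba-by, do-ggy, ki-tty
# concatenated without boundaries.
corpus = "ba by do ggy ba by ki tty do ggy ba by".split()

# Estimate transitional probabilities P(next | current) from bigram counts.
bigrams = Counter(zip(corpus, corpus[1:]))
unigrams = Counter(corpus[:-1])

def tp(a, b):
    return bigrams[(a, b)] / unigrams[a]

# Posit a word boundary wherever the transitional probability dips.
THRESHOLD = 0.6
stream = "ba by do ggy ki tty".split()
for a, b in zip(stream, stream[1:]):
    mark = "| " if tp(a, b) < THRESHOLD else "  "
    print(f"{a} {mark}{b}  P = {tp(a, b):.2f}")
# Boundaries appear between 'by do' and 'ggy ki', i.e. between the toy words.
```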

On the other hand, there's a lot of work which tries to do sophisticated statistical analysis, you know, Bayesian, and so on and so forth, without any concern for the actual structure of language. As far as I'm aware, that only achieves success in a very odd sense of success. There is this notion of success, which has developed in computational cognitive science in recent years, which I think is novel in the history of science. It interprets success as approximating unanalyzed data.

So for example, if you were to study bee communication this way, instead of doing the complex experiments that bee scientists do, you know, like having bees fly to an island to see if they leave an odor trail and this sort of thing. If you simply did extensive videotaping of bees swarming, okay, and you did a lot of statistical analysis of it, you would get a pretty good prediction for what bees are likely to do next time they swarm. Actually get a better prediction than bee scientists do, and they wouldn't care, because they're not trying to do that. And you can make it a better and better approximation by more videotapes and more statistics and so on.

I mean, actually, you could do physics this way. Instead of studying things like balls rolling down frictionless planes, which can't happen in nature, if you took a ton of videotapes of what's happening outside my office window, let's say, bees flying and various things, and you did an extensive analysis of them, you would get some kind of prediction of what's likely to happen next, certainly way better than anybody in the physics department could give.

Well, that's a notion of success, which is, I think, novel. I don't know of anything like it in the history of science. And in those terms, you get some kind of successes. And if you look at the literature in the field, a lot of these papers are listed as successes. And when you look at them carefully, they're successes in this particular sense, not the sense that science has ever been interested in.

But it does give you ways of approximating unanalyzed data, analyzing corpora, and so on and so forth. I don't know of any other cases, frankly. So there are successes where statistical methods are integrated with some of the properties of language, but I know of none in which they're not.

PINKER: Thank you. To my tremendous regret, and I imagine that of many people in the audience, I'm going to have to bring the proceedings to a close, and we'll have to be left with Patrick's interpretation of what Marvin's response would have been to the question. I'd like to thank Sydney Brenner, Marvin Minsky, Noam Chomsky, Emilio Bizzi, Barbara Partee, and Patrick Winston for an enormously stimulating session. And thanks to all of you.