20th Annual Killian Award Lecture - Noam Chomsky


[MUSIC PLAYING]

Good afternoon, and welcome. I'm Kim Vandiver, the chairman of the MIT faculty. We gather here today to honor two people-- James R. Killian, Jr. and Professor Noam Chomsky. The Killian Award is the highest award that the MIT faculty bestows upon one of its own, named after the late Jim Killian, 10th president of MIT.

He came to MIT as a student in the mid-1920s. He joined the staff of Technology Review in 1926, and he retired as chairman of the MIT Corporation 45 years later, in 1971. He brought great honor and distinction to MIT through his many years of national service. I'll mention two examples, which are just the tip of the iceberg.

In the late 1950s, he was the first full-time science adviser to a president of the United States, and provided important leadership in the period that set the nation's course to putting a man on the moon. He also served as chairman of the Carnegie Commission on Educational Television, which recommended the formation of the Corporation for Public Broadcasting.

He was also a very genuine and likeable person, as I discovered quite by accident one day about 20 years ago, when I was a new graduate student at MIT. I sat down at a table in the Lobdell dining room with an elderly gentleman I didn't know, and we struck up a conversation, one that was to begin a long series of enlightening conversations I had with him until the time of his death in 1988. His easy way with students and with other members of the MIT community is one of the things that defined Jim Killian to us.

This award is named in his honor. We bestow it upon a faculty member each year who has also brought honor and distinction to MIT. Professor Chomsky has been doing so for 37 years as a member of our faculty. Professor Chomsky, would you come to the stage, please?

In preparing for today, I did one small piece of research on Professor Chomsky, which I'd like to share with you. With the help of the MIT library staff, I was able to determine how often he is cited in the Arts and Humanities Citation Index.

[LAUGHTER]

As of the last time the statistics were compiled, we discovered that he is in the top 10. It happens that none of the others are still living.

[LAUGHTER]

He is the most cited living author. I thought you might like to know the names of the company he keeps. They are, in order-- Marx, Lenin--

[LAUGHTER]

[APPLAUSE]

Shakespeare--

[LAUGHTER]

Aristotle, the Bible--

[LAUGHTER]

Plato, Freud, Chomsky, followed by Hegel and Cicero.

[LAUGHTER]

[APPLAUSE]

I would like to read from the award. It says, "The president and faculty of the Massachusetts Institute of Technology have the honor to present the James R. Killian, Jr. Faculty Achievement Award to Noam Chomsky, and to announce his appointment as Killian Award Lecturer for the academic year '91-'92, in recognition of his extraordinary contributions to linguistics, philosophy, and cognitive psychology. One of the foremost thinkers of this century, he revolutionized the study of language and of the human mind. His books, monographs, and articles constitute major milestones in the history of linguistics. An inspiring teacher, he has trained scholars who are among the most important contributors to the discipline. His presence here has greatly enriched MIT." Professor Chomsky, the microphone is yours.

[APPLAUSE]

He's lucky he didn't read what those citations say, but--

[LAUGHTER]

The announced title of this talk has three parts: it refers to language, to the cognitive revolutions, and then there are the quotes around cognitive revolutions. The relation between language and cognitive science is natural; that's a reasonable combination. Modern linguistics developed as part of what's sometimes called the cognitive revolution of the 1950s-- actually, to a large extent, here in Building 20, thanks initially to Jerry Wiesner's initiatives.

It was also a major factor in the development of the cognitive sciences ever since their modern origins. So those two words are sort of obvious. The quotes are intended to express a lingering skepticism. And when I say cognitive revolution from here on in, you're supposed to hear "so-called cognitive revolution." I'll explain why as I proceed.

The skepticism that I feel does not have anything to do with the actual work that's done in the various areas of cognitive science-- so vision, language, cognitive development, a few others. In these areas, there have been quite significant advances over the past 40 years. The questions in my mind arise at a second order level, that is, thinking about the nature of the disciplines that are concerned with what were traditionally called mental acts or mental faculties. And I want to come back to a few of those questions. Some of them are substantive. Some of them are historical.

First, a few words about the contemporary cognitive sciences and the cognitive revolution, as they look to me. And let me stress that others don't see it this way at all. This is not intended to be a survey, but rather a personal point of view.

George Miller, one of the leading figures and founders of the modern field, traced the modern cognitive sciences, in a recent article, back to a meeting held in Cambridge in 1956. I can't remember whether it was here or at Harvard, but one or the other. It was a meeting of the Institute of Radio Engineers, which had a series of papers on experimental human psychology that made use of information theory and signal detection, and other then pretty new ideas.

It had a paper of mine on language, which outlined some of the basic ideas of generative grammar. It had another paper, by Herb Simon and Al Newell, on problem-solving and reasoning-- a simulation of theorem proving in elementary logic. And Miller argues that that confluence of research papers essentially initiated the modern field.

Well, there were some shared ideas among the participants. One shared idea was a kind of shift in perception or perspective toward the disciplines, a shift which is not terribly surprising now, but was pretty striking at that moment. The shift was away from the study of behavior and the product of behavior, such as words or sounds or their arrangement in text, a shift from that focus toward the study of the states and properties of the mind and the brain that enter into human thought and action.

From the first point of view, which overwhelmingly prevailed at the time, behavior and its products were regarded as the object of study. From the second point of view, what later came to be called the cognitive sciences, behavior and its products were of no particular interest in themselves. They simply provided data, data which would be interesting insofar as it served as evidence for what really concerns us now, the inner workings of the mind, where the phrase mind here simply is a way of talking about relevant properties and states and processes of the brain.

The data of behavior and the text, and so on, from this point of view, are to be used alongside other data-- whatever might turn out to be useful, say, electrical activity of the brain. But behavior has no inherently privileged status, and it's not the focus of interest. That's a shift away from what was called behavioral science, or structuralism, which stressed statistical analysis of texts, and toward something that might properly be called cognitive science. In my personal view, it was a shift from a kind of natural history to at least potential natural science.

Now, that shift was highly controversial at the time, and it still is. And in fact, if you were to use such measures as amount of government funding, it would still be very much in the minority. But I think it was the right move. Well, that was one kind of shared interest, arrived at from independent points of view. There was no communication among the participants prior to the meeting.

A second shared interest was an interest in what we might call computational representational systems. That means a way of looking at mental activities and mental faculties-- at what the brain is doing-- as a kind of software problem, that is, a study of the mechanisms, but viewed from a particular abstract perspective. Analogies can always be misleading, but one might compare this approach to, say, chemistry about a century ago, where there were notions like valence or the periodic table or organic molecules, and so on, but they were not grounded in the basic physics of the time.

The results and the ideas did, however, provide guidelines for what turned out to be quite radical changes in physics that were necessary to accommodate them. Chemistry was never actually reduced to physics. It would be more accurate to say that physics was expanded, in a certain sense, to chemistry.

Similarly, computational representational theories of the brain attempt to establish states and properties that have to be accommodated somehow in the brain sciences, and insofar as they're correct, they might well provide guidelines for inquiry into mechanisms. It's possible, again, that they may provide guidelines for what will have to be radical changes in the way that the brain and its processes and states are conceived. How two approaches to the natural world will be unified, if ever, can never really be guessed in advance. It could be reduction, as in, say, the biology-chemistry case, more or less. It could be what you might call expansion, as in the physics-chemistry case, or it could be something else.

Crucially, there's nothing mysterious about any of this. It seems, to me at least, to be just normal science-- normal science at a typical stage, before several approaches to some topic in the natural world have been unified. Well, again, that was and remains highly controversial.

Assuming it, anyway, how can we proceed? You can look at the brain or the person from a number of points of view. Let's begin by taking what you might call a receptive point of view. So we think of the brain, say, as being in a certain state. A signal reaches the brain. It produces some kind of symbolic representation of the signal internally, presumably, and it moves to some other state.

If we focus on the interstate transitions, we're looking at problems of learning or maybe, more accurately, growth. If we focus on the input signal and the symbolic representation of it, we're looking at perception. If we focus on the initial state of the system prior to any relevant signals, we're looking at the biological endowment, the innate structure. We could trace the processes back further. At that point, we'd be moving into embryology. And one should stress that these distinctions are pretty artificial, actually.

Typically, the systems that we're looking at are like physical organs-- or, we might say more accurately, like other physical organs, because we are studying a physical organ from a particular point of view. They are like the digestive system or the circulatory system or the visual system, which, again, we often look at abstractly, as one would expect in the study of any system of any degree of complexity.

We can call these particular systems mental organs, implying, again, nothing more than that they are involved in what were traditionally called mental states, mental faculties, or mental processes. Well, like other physical organs, these mental organs quite typically reach a kind of stable state at a certain point. That is, the input signal that's causing an interstate transition moves the system around within some kind of equivalence class of states that are alike for the purposes at hand. That's obviously an idealization, but a reasonable one.

When you look at the stable state, after the point at which there isn't significant change, and you study the relationship between signal and resulting representation, you're very clearly studying perception. So for example, take the visual system. If you look at the initial state, the state prior to relevant signals, you're looking at the basic structure of the visual system. If you look at the interstate transitions, you're looking at the growth of the visual system-- the changes in the distribution of vertical and horizontal receptors in the striate cortex, or the onset of binocular vision at a couple of months of age in humans, and so on.

If you look at a stable state, you're studying adult perception. Adult perception, of course, has some kind of access to systems of belief about the world. Not much is known about this, but it seems reasonable to assume that that involves some kind of computational procedure that specifies an infinite range of beliefs, which are called upon when needed, although here, we move into a domain of almost total ignorance.

Turning to the language system, you can look at it the same way. The study of the initial state of the language faculty is sometimes called universal grammar these days; it is a study of biological endowment. It's an empirical fact, which seems pretty well established, that this element of the brain is pretty uniform across the species. The variations appear to be quite slight, apart from extreme pathology.

And it also appears to be unique in essentials. There's no other known organism that is anything like it. Also, it's rather different from other biological systems in many respects. Systems of discrete infinity, which this plainly is, are pretty rare in the biological world, so it appears.

If you're looking at the interstate transitions as some relevant information comes in, you're studying language acquisition. The system apparently reaches a stable state at about puberty. This stable, steady state incorporates a computational procedure, which characterizes an infinite set of expressions, each of them having a certain array of properties-- properties of sound, properties of meaning, properties of structural organization. That computational procedure constitutes a person's language.

To have such a procedure in the appropriate part of the brain is to have a language or, as we sometimes say, to know a language. When I speak of a language, I simply mean the computational or generative procedure that specifies the properties of an infinite range of linguistic expressions, such as, say, the one I just produced. Since your language is similar enough to mine, when the signal that I just produced reaches your brain, your generative procedure constructs an expression with these properties, one similar enough to the one that I formed, so we understand one another, more or less.
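
What a generative procedure with these properties amounts to can be sketched in a few lines of code. The following toy is an editorial illustration, not anything from the lecture: the rule set and vocabulary are invented, and the point is only that a finite device characterizes a discretely infinite set of expressions.

# A toy generative procedure: a finite set of hypothetical rules that
# characterizes an unbounded ("discretely infinite") set of expressions.

from itertools import product

RULES = {
    "S":  [["NP", "VP"]],
    "NP": [["John"], ["Mary"], ["the", "N"]],
    "N":  [["house"], ["claim", "that", "S"]],  # "claim that S" re-introduces S: recursion
    "VP": [["ate"], ["heard", "NP"]],
}

def generate(symbol="S", depth=0, max_depth=4):
    """Yield word sequences derivable from `symbol` (depth-bounded for the demo)."""
    if symbol not in RULES:          # a terminal word: yield it as-is
        yield [symbol]
        return
    if depth >= max_depth:           # the bound is for the demo only
        return
    for expansion in RULES[symbol]:
        pieces = [list(generate(s, depth + 1, max_depth)) for s in expansion]
        for combo in product(*pieces):
            yield [word for piece in combo for word in piece]

for words in list(generate())[:5]:
    print(" ".join(words))

The depth bound exists only so the demonstration halts; the grammar itself, like the generative procedure it stands in for, imposes no bound on the expressions it characterizes.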

Unlike the system of belief that's accessed in visual perception, in this case, we happen to know quite a lot about the generative procedures that are involved in the use of language, and even about their innate, invariant basis. And these, in fact, are the areas of major progress in the past generation. Well, what about perception of language? There are several common assumptions about this.

One major assumption is that there is something called a parser. A parser is something that maps a signal into a symbolic representation, paying no attention to any other aspects of the environment. Of course, it is assumed, if people are rational, that this parser accesses the generative procedure. That is, it's assumed that you use your knowledge of English when you interpret signals.

Note, however, several assumptions. The first assumption is that such a thing exists-- that there is some faculty of the mind that interprets signals independently of any other features of the environment. That's not at all obvious. The existence of an internal, generative procedure is far more secure and far more obvious than the existence of a parser. It's also a far better grounded assumption, far better grounded empirically, and with much richer theoretical structure. This is contrary to common belief, which usually considers the two the other way around.

Nevertheless, let's assume that a parser exists, though it isn't obvious. It's further assumed that the parser doesn't grow the way a language grows, from some initial state to a stable state. Now, that assumption, again, is not very well founded. It's based mainly on ignorance: it's the simplest assumption, and we have no reason to believe anything else, so we might as well believe it. There is some kind of evidence for it, indirect perhaps, but interesting.

There is a book forthcoming from MIT Press, by Robert Berwick and Sandiway Fong, a former student of his, on what you might call a universal parser-- a fixed, invariant parser which has the property that if you flip its switches one way, it interprets English, and if you flip its switches another way, it interprets Japanese. And if that's the right idea, then there is a universal parser which is unaffected by environmental factors, apart from the properties that serve to identify one or another language-- that is, the array of switch settings. I'll come back to that.
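
The switch idea can be made concrete with a deliberately crude sketch. This is not Berwick and Fong's system, which is principle-based and far richer; it is a hypothetical one-switch engine, in which a head-direction switch alone decides whether the same fixed routine handles English-like or Japanese-like order.

# A cartoon of a "universal parser": one fixed routine whose behavior
# is determined entirely by switch settings. Structures are toy ones.

def parse_vp(words, head_initial):
    """Build a verb-phrase structure; the switch fixes which word is the verb."""
    if head_initial:       # English-like setting: verb precedes its object
        verb, obj = words
    else:                  # Japanese-like setting: object precedes the verb
        obj, verb = words
    return ("VP", ("V", verb), ("NP", obj))

# The same invariant engine, with the switch flipped two ways:
print(parse_vp(["ate", "apples"], head_initial=True))        # English order
print(parse_vp(["ringo-o", "tabeta"], head_initial=False))   # Japanese order ("apple-ACC ate")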

There's a further comment. That assumption-- the assumption that the parser doesn't grow-- while strange, could be true, and there's only this kind of indirect evidence for it. A further common assumption is that parsing is easy, something that you do very quickly and easily. In fact, that's often held to be a criterion for the theory of language.

The criterion is supposed to be that the theory of language must provide generative procedures that satisfy this requirement-- that parsing is easy and quick. Well, that one is flatly false, uncontroversially. Parsing often fails completely and quite systematically, and it's often extremely difficult. Many categories of what are called parsing failures are known, even with short, simple expressions. A parsing failure just means a case where the parser works fine, but it gives some kind of interpretation which is not the one that's assigned to the signal by the language. So it's wrong in that sense.

Very often, you can easily find categories of expressions, even pretty simple ones, where the parsing system just breaks down. People don't know what they're hearing. It sounds like gibberish, even though it's perfectly well formed and has a fixed, strict meaning, and so on.

Now, it's perfectly true that one uses expressions that are readily parsed, but that verges on tautology. What it means is that we use what is usable; other resources of the language we just don't use. There's a related assumption: it's often alleged that language is well adapted to the function of communication. It's not clear that that statement is even meaningful, or any similar statement about biological organs. But to the extent that one can give some meaning to it, this one, again, looks just false.

The design of language appears to make it largely unusable, which is fine. We just use those fragments that are usable for us. It doesn't mean that the design made it usable somehow.

There are similar assumptions in formal learnability theory. It's often claimed, as a starting point, that natural languages must be learnable. In fact, natural languages are often defined as the set of languages that are learnable under standard conditions. Now, notice that that could be true. We don't know that it's false, as we know the comparable statement about parsing is false. It could be true that languages are learnable, but it's not a conceptual necessity.

Language design might be such-- that is, our language faculty could be such-- that it permits all kinds of inaccessible languages. These would be realizable in the brain, just as, say, English is, but not accessible to humans. Again, that wouldn't stop us from doing what we do. We would just pick up the accessible ones.

In fact, there is some quite recent work which suggests that natural languages are learnable. But if true, that's an empirical discovery and, in fact, a rather surprising one. Well, I've said nothing so far about production of language. There is a reason for that. The reason is that, apart from very peripheral aspects, it's almost a complete mystery. We can learn a good deal about the mechanisms that are put to use when we speak, and we can study at least certain aspects of perception-- the mapping of signals to percepts, using the internalized mechanisms, the generative procedure of the language.

But no one has anything sensible to say about what I'm doing now or what two people are doing in a normal conversation. It was observed centuries ago that the normal use of language has quite curious properties. It is unbounded. It's not random. It is not determined by external stimuli, and there's no reason to believe that it's determined by internal states.

It's uncaused, but it's somehow appropriate to situations. It's coherent. It evokes in the hearer thoughts that he or she might have expressed the same way. That collection of properties-- we can call them the creative aspect of language use-- that set of properties was regarded by the Cartesians as the best evidence that another organism has a mind like ours, not just a body, like a dog or a monkey or a robot.

Actually, if you look back at their argument, it was quite reasonable. The science happened to be wrong, but it was very reasonable. And within the framework of scientific reasoning, we might put matters in different terms. But we haven't made any progress in understanding these critically important phenomena-- the phenomena that concerned them and that provide much of the basis for the traditional mind-body distinction.

In my opinion, in fact, contemporary thought about these topics, usually phrased in terms of notions like the Turing test, involves a considerable regression from the much more scientific approach of Cartesian psychology. I'm not going to have time to elaborate properly. I merely mention this to provoke some possible interest or maybe outrage. We'll see.

Well, I've mentioned several kinds of shared interest in the early modern cognitive sciences, going back to the 1956 meeting that George Miller pointed to. One common interest, again, is in computational representational systems-- looking at the brain in terms of software problems. For linguistics, at least, this turned into the study of the generative procedures that are put to use in perception and, presumably, in the expression of thought in speaking, although the latter again remains a complete mystery.

There's another shared interest, what we might call the unification problem. Now, that's a problem that has two different aspects. One aspect is the relation between the hardware and the software-- the relationship between computational representational theories of the brain, with the entities that they postulate, and what you might call implementation. And as I suggested, I think you can regard that as similar to the problem of the relation between, say, chemistry and physics 100 years ago, or genetics and chemistry 50 years ago, and so on. That's one aspect of the unification problem, the deepest aspect.

A less deep aspect has to do with the relation among the various domains of cognitive science-- say, language and vision, and whatever may be involved in problem solving. The question is, what do they have in common? Notice that that's a much less profound problem; if it turns out the answer is nothing, that's just the way it is. On the other hand, we'd be upset if the first question had no answer.

Well, at this point, some of my skepticism begins to surface. So let's take the first one, the deep problem, the serious scientific problem-- the relation between, roughly, the hardware and the software, chemistry and physics. Here, there are two different approaches you can identify. One approach-- let's call it naturalistic-- goes like this. The mental organs are viewed as computational representational systems, and we want to show that they're real, that the statements in these theories are true-- that is, that there are certain states and properties of the brain which these theories exactly capture. That's what you try to show.

The mental organs can also be viewed in other ways. For example, you can look at them in terms of cells or atoms. Or more interestingly, for the moment, you can look at them in terms of electrical activity of the brain, so-called event related potentials, ERPs.

Now, rather surprisingly, there's some recent work which shows quite dramatic and surprising correlations between certain properties of the computational representational systems of language and of ERPs, evoked potentials. For the moment, these ERP measures have no status apart from their correlation with categories of expressions that come out of computational representational theories. That is, they're just numbers picked at random, because there's no relevant theory about them that says, just look at these numbers and not at some other numbers. In themselves, in other words, they're curiosities.

Still, it's interesting, because there are correlations with rather subtle properties that have emerged in the attempt to develop computational representational theories which explain the form and meaning of language. So that suggests an interesting direction for research, namely to try to unify these quite different approaches to the brain, and to place each of them in an appropriate theoretical context. For the moment, that's primarily a problem for the ERPs, but if one can pursue it, it could be quite an interesting direction. And one can think of lots of consequences.

Well, the same is true of the relation between CR systems-- computational representational systems-- and physiology, at least in principle. Here, it's just in principle, because in practice, at present, we don't really have much to look at there apart from the theories of the computational representational systems. But here, whatever the difficulties may be, we seem to face ordinary problems of science-- the typical situation before alternative approaches have been unified, when there's no understood way to connect them.

Well, there's a different approach to this unification problem, which actually dominates contemporary thinking about cognitive science-- not necessarily the actual work, but the thinking about what it amounts to. This approach is to divorce the study of mental organs from the biological setting altogether and to ask whether some simulation can, essentially, fool an outside observer. So we ask whether the simulation passes some rather arbitrary test, called the Turing test. But the Turing test is just a name for a huge battery of possible tests.

And so we pick out one of those at random, and we ask whether some, say, extraterrestrial organism or some actual programmed computer passes the test-- has fooled somebody. And people write articles in which they ask, how can we determine empirically whether an extraterrestrial or a programmed computer or machine can, say, play chess or understand Chinese? And the answer is supposed to be: if it can fool an observer under some arbitrary conditions, called the Turing test, then we say we've empirically established that.

Actually, you may have noticed, last November there was a big splash about a running of the Turing test over at the Boston Computer Museum, together with the Cambridge Center for Behavioral Studies-- a quite distinguished panel, and a bunch of programs which were supposed to try to fool people. There was, I think, a $100,000 prize offered, a big story in the Boston Globe, a front-page story in the New York Times. Science had a big splash on it.

I wasn't there, but I'm told that the program that did best in fooling people into thinking that it was a human was one that produced clichés at random.

[LAUGHTER]

Apparently, it did pretty well. I don't know. It may tell you something about humans, but I'm not sure what. As far as I can see, all of this is entirely pointless. It's like asking how we could determine empirically whether an airplane can fly, the answer being: if it can fool someone into thinking that it's an eagle, say, under some conditions. But the question whether an airplane is really flying-- or, for that matter, the question whether high jumpers were really flying at the last Olympics-- is not an empirical question. It's a matter of decision, and different languages make the decision differently.

So if you're speaking Japanese, I'm told, airplanes, eagles, and high jumpers are all really flying. If you're talking English, airplanes and eagles are really flying, but humans aren't, except metaphorically. If you're talking Hebrew, eagles are flying, and neither airplanes nor humans are. But there's no substantive issue involved-- nothing to determine, no question to answer.

Simulation of an eagle could have some purposes, of course, say, learning about eagles or maybe solving some problem in aerodynamics. But fooling an observer under some variety of the Turing test is not a sensible purpose, as far as I can see. As far as I'm aware, there's nothing like this in the sciences.

Incidentally, if you go back to the 17th and 18th centuries, scientists-- whom we call philosophers, but the distinction wasn't made then-- were greatly intrigued by automata, by machines, for example, automata that simulated, say, the digestion of a duck. But their goal was to learn about ducks, not to answer meaningless questions about whether machines can really digest, as proven by the fact that they can fool an observer who's, say, watching from behind a screen, however they may be doing it.

Well, this is one of the respects in which, in my opinion at least, there's been regression since the 17th and 18th centuries. Chess-playing programs are a perfect example of this, in my opinion. Well, anyway, those are two approaches to the first question of unification, the deep one.

What about the shallower question of unification, that is, the question of unification among the various branches of the cognitive sciences? Is there any real commonality among them? Well, that doesn't seem entirely obvious. So take what you might call physical organs below the neck, metaphorically speaking, meaning not the ones we call mental-- say, the circulatory system and the digestive system, and so on. These are organs of the body in some reasonable sense, viewed abstractly, and that's a normal way to study any complex system.

The study of these abstract organs does fall together at some level, we assume-- presumably at the level of cellular biology. But it's not at all clear that there's any other level-- some kind of organ theory-- at which these systems are unified. Maybe, but it's not obvious. Well, the brain is scarcely understood, but to the extent that anything's understood about it, it appears to be an extremely intricate system, heterogeneous, with all kinds of subparts that have special design, and so on.

And there's no special reason to suppose that the mental organs are unified, in some kind of mental organ theory, above the level of cellular biology, that is, above the level at which they may fall together with the circulatory system, and so on. If there is no intermediate level, no such unification, then there is no cognitive science. Hence, there was no cognitive revolution except for similarities of perspective, which could be quite interesting, in fact.

Well, there are various speculations about this question. And in fact, they go right back to that 1956 meeting. One general line of speculation is that the mind is what's called modular-- that is, that mental organs have special design, like everything else we know in the physical world. The other approach assumes uniformity-- that there are what are called general mechanisms of learning or thinking, which are just applied to different domains: to language or to chess, and so on.

And as I say, these differences didn't exactly surface, but they were more or less implicit at the 1956 meeting. The paper on generative grammar took modularity for granted-- that is, a rich, specific genetic endowment, the language faculty as a special system, and no general mental organ theory. The Newell-Simon approach, the study of the logic machine, took for granted the opposite position-- that there are general mechanisms of problem solving which are quite indifferent to subject matter: in their case, theorem proving, but equivalently chess or language, or whatever.

And you can trace these different intuitions back quite a way-- back hundreds of years, in fact. They're very lively today in the discussions about connectionist work, for example. My own view is that the evidence is overwhelmingly in favor of the highly modular approaches. There are only a few areas where we have any real insight, and these invariably seem to involve quite special design. Possibly there are some unifying principles, but about these, if they exist, nothing much is understood. Again, that's a minority view, but I can only tell you the way it looks to me.

Even internal to the language faculty, there seems to be quite highly modular design. So let's distinguish roughly between what we might call grammatical ability, meaning the ability to produce properly formed expressions; conceptual ability, that is, the ability to produce expressions that somehow make some sense; and pragmatic ability, that is, the ability to use expressions in some way appropriate to the occasion. Well, these capacities have been discovered to be dissociated developmentally-- one can develop while another does not-- and selectively impaired under injury.

And their properties are quite different. And they seem to be radically different from what we find in other cognitive systems. So even internally, just at the most gross look, we seem to find modularity. And when you look more closely into subsystems of language, they just seem to behave rather differently.

I don't see why this should surprise anyone. It's exactly what we find about every other biological organism, so why shouldn't it be true of the brain, which is maybe the most complex one around? But it's considered a very controversial position. Again, I stress, as far as I know, all the evidence supports it, and it shouldn't be surprising.

Well, just how modular is the language faculty? That's an interesting question. It doesn't seem so specialized that it's linked to particular processing systems. Here, the study of sign language in the last decade or so-- maybe 20 years-- has been quite interesting. Rather surprisingly, it seems that sign language, the normal language of the deaf, is localized in the speech areas of the brain, in the left hemisphere, and not in the visual processing areas-- which is what you might have expected, since you see it; you don't hear it. That's shown pretty well by aphasia studies and, again, ERP studies.

Also, it turns out that children who are acquiring sign language in a natural setting go through steps-- at least in the early stages, which have been studied-- that are remarkably similar to the acquisition of spoken language, in timing and everything else. They even override obvious iconic properties of visual symbols: pointing is overridden and given a symbolic interpretation.

Furthermore, acquisition of sign language turns out to be quite different from acquisition of communicative gestures. There's even one case on record of spontaneous invention of sign language by three deaf children who had no environmental model and no stimulus at all, as far as can be determined. And that's the perfect experiment. The system that these three kids constructed has very standard properties of natural spoken language, and the developmental stages also appear to be similar. That looks like a pure expression of the language faculty, unaffected by experience, and it suggests that however modular the system may be, it's not so modular that it's keyed to a particular mode of articulation or input.

What's now known suggests that the language faculty simply grows a language, much as other organs of the body grow. And it does so in a way largely determined by its inner nature, by properties of the initial state. External events doubtless have some kind of an effect, like I'm talking English, not Japanese. But a rational Martian scientist who is looking at us would probably not find these variations very impressive. And the language that grows in the mind appears to be, at least partially, independent of sensory modality.

Well, at this point, we're moving toward serious inquiry into modular architecture of mind and brain, in some of the few areas where it's possible to undertake it, and with some surprising results. Well, one last element of skepticism about the cognitive revolution has to do with just how much of a revolution it was.

In fact, to a far greater extent than was realized at that time, or is even generally recognized today, the shift in perspective in the 1950s recapitulated and rediscovered some long-forgotten ideas and insights from what you might call the first cognitive revolution, which pretty much coincided with the scientific revolution of the 17th and 18th centuries.

Now, the Galilean-Newtonian revolution in physics is well known. The Cartesian revolution and its aftermath in psychology and physiology is not very well known, but it was quite dramatic, and it had important consequences. It also developed some of the major themes that were taken up again since the 1950s, primarily in the areas of language and vision, which are perhaps the most successful branches of contemporary cognitive science as well.

Now, this first cognitive revolution, though it had a great deal of interest and there's much you can learn from it, did face barriers-- in fact, insuperable barriers. So for example, in the study of language, it came to be realized soon enough that human language is somehow a process, not just a dead product. It's not a bunch of texts. It's some process that goes on.

Furthermore, it's a process that makes infinite use of finite means, as Wilhelm von Humboldt put it in the early 19th century. But no sense could be made of these ideas, and the inquiry was abandoned for well over a century. By this century, the formal sciences had provided enough understanding of all of this that many traditional themes could be productively engaged, particularly against the background of other advances in science and engineering and anthropological linguistics. And one could understand quite well what it meant to make infinite use of finite means.

Well, let's turn to a quick look at some of the kinds of questions that arise and some of the kinds of answers that can be offered with respect to language specifically. So take something that everybody knows. Start with something really trivial, very simple. So take a simple phrase, like, say, a brown house. You and I have generative procedures which are more or less similar, and these generative procedures determine what we know about this expression. For example, we know that it consists of two words. We know that those two words have the same vowel for most speakers.

We know also that if I point to something and say, that's a brown house, what I mean is that its exterior is brown. If somebody paints a house brown, we know that they painted the exterior brown. If they painted the inside brown, they didn't paint the house brown. If you see a house, you see its exterior. So we can't see this building, as a matter of conceptual necessity; if we were standing outside, we might.

And the same is true of a whole mass of what are sometimes called container words, like box or igloo or airplane. You can see an airplane, for example, if you're inside it, but only if you can look out the window and see the wing, or if there's a mirror outside which reflects the exterior of the airplane, or something like that. Then you could see the airplane, otherwise not.

The same is true of invented concepts, even impossible concepts. So take, say, a spherical cube. If somebody paints this spherical cube brown, that means they painted the exterior brown, not the interior. So all of these-- a house that's [INAUDIBLE] and all these things-- are exterior surfaces, which is kind of curious to start with.

However, they're not just exteriors. So suppose two people, say John and Mary, are equidistant from the exterior, regarded as a mathematical object-- but John is outside it, and Mary's inside it. We can ask whether John, the guy outside, is near the house, or whatever it is, and there's an answer, depending on current criteria for nearness. But we can't ask it about Mary. We're not near this building, no matter what the criteria are.

So it's not just an exterior. It's an exterior plus something about a distinguished interior. However, the nature of that interior doesn't seem to matter very much. So it's the same house if you, say, take it and fill it with cheese, or something like that. You change the interior, and it's the same house; it hasn't changed at all. And similarly, you can move the walls around, and it stays the same house.

On the other hand, you can clean a house and not touch the exterior at all; you can do things only to the interior. So somehow, it's a very special combination of an abstract, though somehow concrete, interior with an abstract exterior. And of course, the house itself is perfectly concrete. The same is true of my home-- that's also perfectly concrete, but in a quite different way. If a house is a brown wooden house, it has a brown exterior surface, but it doesn't just have a wooden exterior. It's both a concrete object of some kind and an abstract surface, as well as having a distinguished interior with weird properties.

Well, proceeding further, we discover that container words, like, say, house, have extremely weird properties. Certainly, there can't be any object in the world that has this combination of properties, nor do we believe that there is. Rather, a word like house provides a certain quite complex perspective for looking at what we take to be things in the world. Furthermore, these properties are completely unlearned. Hence, they must be universal, and as far as we know, they are. They're just part of our nature. We didn't learn them. We couldn't have learned them.

They're also largely unknown and unsuspected. Take the most extensive dictionary you like-- say, the big Oxford English Dictionary, the one you read with a magnifying glass-- and you'll find that it doesn't dream of such semantic properties. Rather, what the dictionary does is offer some hints that allow a person who already knows almost everything to pick out the intended concept.

And the same is true of words generally. And in fact, the same is true of the sound system of language. They are largely known in advance in all their quite remarkable intricacy. Hence, they're universal, a species property.

The external environment may fix some details, the way the Oxford English Dictionary can fix a few details. But language acquisition, and in fact, probably a good deal of what's misleadingly called learning, is really a kind of biological growth, where you fill in details in an extremely rich, predetermined structure, that just goes back to your nature-- very much like physical growth or, as I think we should say, other aspects of physical growth.

Well, if you look at the structure of more complex expressions, then these conclusions are just reinforced. Again, let me take pretty simple cases to illustrate. So take the sentence, say, John ate an apple, and drop out the word apple, and you get John ate. And what that means, everybody knows, is John ate something or other. So if you eliminate a phrase from the sentence, you interpret it as something or other. John ate means John ate something or other, more or less.

Take a sentence a little bit longer, but not much: John is too stubborn to talk to Bill. Well, that means John is so stubborn that he won't talk to Bill. And just as in the case of John ate an apple, drop the last phrase, Bill, and we get John is too stubborn to talk to. By analogy, or by induction, if such a thing existed, people ought to interpret it as meaning John is so stubborn that he won't talk to someone or other.

However, it doesn't mean that. John is too stubborn to talk to means John is so stubborn, that nobody is going to talk to him, John. Somehow it all inverts if I drop the last word.

Suppose you make it a little more complex: John is too stubborn to expect anyone to talk to Bill. You may have to think a little bit here already, because parsing is not easy and quick. It means John is too stubborn for him, John, to expect that anyone will talk to Bill. Now drop Bill again, and we get John is too stubborn to expect anyone to talk to. Reflect for a moment, and you'll see that the meaning shifts radically. It means John is so stubborn that somebody or other doesn't expect anyone to talk to him, John. Everything shifts.

Now, all of this is inside you. Nobody could ever learn it. Nobody could possibly learn it. None of it is mentioned, or even hinted at, in even the most comprehensive grammar books. It's known without experience. It's universal, as far as we know. In fact, it had better be universal, or it would mean there are crucial genetic differences among people. It just grows out of our nature.

A huge variety of material of that sort has come to light in the last 30 years or so, as a direct consequence of something new-- namely, the attempt to actually discover and formulate the generative procedures that grow in the language faculty of the brain, virtually without experience. That had never been done before. It was always assumed that it's all obvious: how could there be anything complicated here?

If you go back, say, to the early '50s, the problem about language acquisition was supposed to be: why does it take so long? Why does it take so long for such a trivial thing to happen? Why do children need so much exposure to material to carry out this simple process of habit formation? As soon as you try to describe the facts, you discover that it's quite the opposite. Extremely intricate things are known even in the simplest cases. There's no possible way of picking them up from experience, and consequently, they're universal. They're part of our nature-- just part of the generative procedures that the brain determines.

Well, what do these generative procedures look like? Apparently, something like this. A language can be looked at as involving a computational procedure and what you can call a lexicon. A lexicon is a selection of concepts that are associated with particular sounds. The concepts appear to be pretty much invariant-- things like house, let's say. And they have very weird properties, which means they've got to be unlearned, and hence invariant, just part of our nature.

The sound variety also seems quite limited. On the other hand, the association between a concept and a sound looks free. So you can say tree in English and [INAUDIBLE] in German, and so on. That tells us what one aspect of language acquisition is: determining which of the given set of concepts, with all their richness and complexity, is associated with which of the possible sounds, with their limited variety. Notice that that's a pretty trivial task. You can imagine how a little bit of experience could fix those associations, if the concepts were given and the structure of the sound was given.
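
A minimal sketch of that picture of acquisition, with invented entries (Baum and Haus are the German words for tree and house): the concept inventory and the possible sounds are given in advance; experience fixes only the pairing.

# A toy of the lexicon on this view: concepts, with their unlearned
# properties, are given; only the concept-sound links are acquired.

CONCEPTS = {"TREE", "HOUSE", "WATER"}     # the invariant, given inventory

def learn_lexicon(observations):
    """Link each (given) concept to the sound heard for it."""
    lexicon = {}
    for concept, sound in observations:
        assert concept in CONCEPTS        # concepts are never learned, only linked
        lexicon[concept] = sound
    return lexicon

# The trivial task: the same concepts, freely paired with different sounds.
english = learn_lexicon([("TREE", "tree"), ("HOUSE", "house")])
german  = learn_lexicon([("TREE", "Baum"), ("HOUSE", "Haus")])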

Well, the lexicon also contains what are sometimes called formal elements, things like inflections, like tense or plural or case markers of the kind that you find, say, in German or Latin, and so on. And languages do appear to differ quite substantially in these respects. However, that seems to be mostly illusion. At least, the more we learn, the more it looks like illusion.

So for example, take, say, cases. English doesn't seem to have them; you've got to memorize them when you study German and Latin. But that looks illusory. It appears that English has them, in fact exactly the way Latin has them. It's just that they don't happen to come out of the mouth. They're not articulated. They're in the internal mental computation, and their effects, which are quite ramified, are detectable. But that part of the computation just doesn't happen to link up to the vocal tract. The computation is running along, but where it gets spelled out as sounds, it just doesn't have to look at this stuff.
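
The point about unpronounced case can be drawn as a toy. The Latin-style second-declension endings below are textbook forms, but the setup is invented: the internal representation carries case features either way, and a single switch decides whether they are articulated.

# Same internal computation, different spell-out: case features are
# present in both "languages"; the switch controls articulation only.

ENDINGS = {"NOM": "us", "ACC": "um"}      # Latin-like endings, for illustration

def spell_out(stem, case, articulate_case):
    """Externalize a word; case is pronounced only if the switch says so."""
    return stem + ENDINGS[case] if articulate_case else stem

# One internal representation, with case features on every noun:
phrase = [("domin", "NOM"), ("serv", "ACC")]   # "master-SUBJ ... slave-OBJ"
print([spell_out(s, c, articulate_case=True)  for s, c in phrase])   # ['dominus', 'servum']
print([spell_out(s, c, articulate_case=False) for s, c in phrase])   # ['domin', 'serv']: case unpronounced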

Well, that's another aspect of language acquisition: to determine which aspects of the computational system receive an external articulation. There are also some differences among these formal elements that have to be fixed, but they appear pretty narrow. Notice, incidentally, that if you have a very intricate system with just a few possible differences in it, which could be right at the core somewhere, making those changes may produce something that looks phenomenally quite different. Looked at from the outside, two languages may seem radically different, even though they're basically the same thing with really only trivial differences. And that's pretty much what languages seem to look like.

Well, that's the lexicon. What about the computational system? It's possible that it's unique-- that is, invariant. There's just one of them, and there's nothing at all to be learned. That's not certain, but it's at least possible, in fact plausible.

If this picture is on the right track, notice that we're very close to concluding that there's only one human language, at least in its deeper aspects. And incidentally, going back to that rational Martian scientist, that's what he would have guessed in the first place. Otherwise, acquisition of these highly intricate systems on the basis of fragmentary experience, and the uniformity of the process, would just be a miracle. So the natural assumption is: there's basically only one of them, and it's fixed in advance.

If, in fact, acquisition of language is something like a chicken embryo becoming a chicken or humans undergoing puberty at a certain age, then it's at least within the realm of potential scientific understanding, not a miracle. And that's the way it seems to be. The more we learn, the more it looks like that.

More precisely, there does seem to be a certain variety of languages-- apparently a finite variety. That is, it's a good guess now that each possible language that the language faculty makes available is fixed by answering a finite number of fairly simple questions. You can look at the system as a complex, wired-up device which is just given, with a switch box associated with it and a finite number of switches. You can flip them up or flip them down. You do that on the basis of the answer to one or another of these questions, and they've got to be pretty simple questions.

If we could understand the way that works-- bits and pieces of it are understood-- then we ought to be able to, say, deduce Hungarian by setting the switches one way and deduce Swahili by setting the switches a different way. The questions must be easily answered, for empirical reasons: children do it highly efficiently and very fast and with very little evidence-- pretty much the way they grow.
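
Here is one way to render switch-setting as a toy procedure. The two parameters (head direction and the null-subject option) are stand-ins of the kind proposed in the literature, but the pre-tagged input is a large simplification: the sketch assumes the child can already identify subjects, verbs, and objects.

# A cartoon of acquisition as answering "simple questions": each
# binary switch is fixed by a surface-detectable property of the input.

def set_switches(samples):
    """Fix each parameter from simple properties of tagged sample sentences."""
    switches = {"head_initial": None, "null_subject": False}
    for sentence in samples:                      # sentence: list of (word, tag) pairs
        tags = [tag for _, tag in sentence]
        if "V" in tags and "OBJ" in tags:         # Q1: does the verb precede its object?
            switches["head_initial"] = tags.index("V") < tags.index("OBJ")
        if "SUBJ" not in tags:                    # Q2: can the subject be dropped?
            switches["null_subject"] = True
    return switches

print(set_switches([[("John", "SUBJ"), ("ate", "V"), ("apples", "OBJ")]]))
print(set_switches([[("comimos", "V"), ("manzanas", "OBJ")]]))  # Spanish-like, subject dropped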

It follows that languages are learnable. That is, if this is correct, it follows that, as I mentioned before, languages are in fact learnable, which is an empirical discovery, and a pretty surprising one. There's no biological reason why it would have had to be true. But it looks as though it may be true, largely because the variety is so restricted.

Well, assuming that to be true, we now have the following situation. Languages appear to be learnable, which is surprising. But languages appear to be unusable, which I don't think is surprising at all. What that means is that only scattered parts of them are usable. And of course, those are the parts we use, so we don't notice it.

Now, actually, this unusability property may be somewhat deeper than what I suggested. Since the origins of modern generative linguistics, there have been attempts to show that the computational system is constrained by certain very general principles of economy, which have a kind of global character to them. And there's some recent work that carries these ideas a long step forward. It's still pretty tentative-- in fact, it's unpublished. But let me sketch some leading ideas that indicate where things might go.

Suppose we think of a linguistic expression in something like the following way. The language faculty selects a certain array of items from the lexicon-- that's what it's going to talk about-- and it begins computing away, using its computational procedure, which I'm now assuming to be invariant across languages. It computes in parallel: it just picks out these items and goes computing along with them. It occasionally merges pieces of computation, bringing them together; then maybe it picks something out and starts going in parallel again.

At some point in the computation, they've all been merged. It just keeps computing. It proceeds merrily on its way, in other words. At some point after they've merged, the system, the language faculty, decides to spell it out, meaning to provide instructions to the articulatory and perceptual system. Then the computation just keeps going on, and it ultimately provides something like a representation of meaning, which is probably to be understood as instructions for other performance systems.

Well, proceeding in this way, the computation ultimately arrives at paired symbolic expressions, paired outputs: instructions for the articulatory-perceptual apparatus, on the one hand, and instructions for language use, sometimes called semantic, on the other. So these are the two-- let's call them interface levels-- of the language faculty. The language faculty seems to interface with other systems of the mind, performance systems, at two points: one having to do with articulation and perception, the other having to do with the things you do with language-- referring to things, asking questions, and so on-- roughly, the semantic.

So let's say that's what happens; it looks like it does. Now, of these various computations, only certain ones converge, in a certain sense. The sense is that the two kinds of outputs-- the phonetic, the instructions to the articulatory-perceptual system, and the semantic-- actually yield interpretable instructions. They might compute along and end up with an output that doesn't yield interpretable instructions. If either one of them fails, we say that the derivation doesn't converge.

Now, looking just at the convergent derivations-- the convergent computations, those that yield interpretable paired outputs-- some of them are blocked by the fact that others are more economical; that is, they involve less computation. Now, here you've got to define amount of computation in a very special and precise sense, but it's not an unnatural sense.

Well, if you can do this, it's going to turn out that many instructions that would be perfectly interpretable just can't be constructed by the mind, at least by the language faculty, because the outputs are blocked by more economical derivations. What is the linguistic expression, then? Well, the linguistic expression is nothing other than the optimal realization of certain external interface conditions. A language is the setting of certain switches, and an expression of the language is the optimal realization of universal interface conditions, period.
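
Schematically, the selection just described looks like this, with placeholder derivations and a placeholder cost measure: filter out the derivations that fail to converge at either interface, then keep only the least costly of the survivors, which block the rest.

# A schematic rendering of economy of derivation; every value below
# is a stand-in, not a claim about the actual cost measure.

def surviving(derivations):
    """Keep convergent derivations of minimal cost; the rest are blocked."""
    convergent = [d for d in derivations if d["phonetic_ok"] and d["semantic_ok"]]
    if not convergent:
        return []
    least = min(d["steps"] for d in convergent)   # the global "least effort" comparison
    return [d for d in convergent if d["steps"] == least]

candidates = [
    {"name": "D1", "steps": 7, "phonetic_ok": True, "semantic_ok": True},
    {"name": "D2", "steps": 9, "phonetic_ok": True, "semantic_ok": True},   # interpretable, yet blocked
    {"name": "D3", "steps": 5, "phonetic_ok": True, "semantic_ok": False},  # fails to converge
]
print([d["name"] for d in surviving(candidates)])   # ['D1']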

All the work would now be done by special properties of the nature of computation and very special notions of economy, which do have a kind of global character-- a kind of least-effort character, but of a global sort. Well, it turns out that quite a variety of strange things can be explained in these terms, in a picture of language that is really rather elegant, guided by conditions of economy of derivation, with very simple and straightforward operations.

These conditions are not only elegant but pretty surprising for a biological system. In fact, these properties are more like the kind one expects to find, for quite unexplained reasons, in the inorganic world. Well, these economy conditions, as I mentioned, have a sort of global character to them, though not entirely. They have a tendency to yield high degrees of computational complexity that would render large parts of language unusable. Well, that's not necessarily an empirical objection. We then turn to an empirical question.

Are the parts that are rendered unusable the parts that can't be used? We know there are plenty of parts that can't be used. So if it turns out that parts of language involve irresolvable computational complexity-- irresolvable by a reasonable device-- and those are the parts you can't use, fine. That's a positive result, not a negative one. Well, what we might discover, if this continues to look as promising as I think it does now, is that languages are learnable, because there isn't much to learn; that they're unusable in large measure; but that they're surprisingly beautiful-- which is just another mystery, if it turns out to be true. Thanks.

[APPLAUSE]