Brains, Minds, and Machines: Nature and Nurture

KANWISHER: Welcome to this morning's panel on nature and nurture. I'll illustrate the issues under discussion today. To do that I'll begin with the case of Oscar and Jack, two identical twins described by the Minnesota Study of Twins Reared Apart. Oscar was raised a Catholic Nazi in Germany, and his identical twin brother, Jack, was raised a Jew in Trinidad. At their first meeting, when they were in their 40s, both wore two-pocket shirts with epaulettes, wire-rim glasses, and mustaches.

Upon getting to know each other, it turned out that both stored rubber bands around their wrists, both thought it funny to sneeze in a crowd of strangers, both flushed the toilet before using it, and both liked to dip buttered toast in coffee.

Cases like this intrigue us because they make contact with a question that laypeople and philosophers alike have pondered for centuries. To what extent are people's personalities and their minds already determined at birth, and to what extent are they shaped by experience?

Today this age-old philosophical question has been transformed into a rich scientific enterprise which has already turned up fascinating and surprising discoveries. And these discoveries, in turn, have recast the classic philosophical questions into sharper focus as questions at the level of psychology, biology, and computational theory.

So computational theorists ask, before any data are even collected, what kind of structure or knowledge and what kind of learning mechanisms must be built into a system to enable it to learn from experience? So clearly you need some kind of sensory detection devices and some kind of storage mechanism to be able to learn. But do you need specific mechanisms, for example, just for learning language, as Chomsky proposed, or might more general mechanisms suffice? We'll hear perspectives on this question from Josh Tenenbaum and Terry Sejnowski.

Psychologists collect data to ask empirically what kind of structure and knowledge is, in fact, built in innately and how much comes from experience. Most obviously, we can ask what very young infants perceive and understand.

Our panelist Liz Spelke has done some of the most important work in this area, raising simple behavioral methods to a high art to discover and characterize the perceptual and cognitive world of infants. We can also find adults who, because of one or another circumstance, never got the relevant experience.

For example, our colleagues Dick Held and Pawan Sinha have just published a paper this month in which they tested people who were born blind but had their sight restored by cataract surgery in adulthood. And they tested them in order to answer the question posed by the British empiricists long ago. That is, would a blind subject, on regaining sight, be able to immediately recognize by sight an object previously known only by touch? The answer is no.

So we can also test people who live in cultures with no formal language and no instruction about number or geometry to try to find out what these people understand in the absence of formal training. Liz Spelke has been testing members of a remote Amazonian tribe to answer these questions, which go back at least as far as Plato.

Finally, neuroscientists do experiments to find out how much the brain's machinery for perception and cognition is genetically determined, how much of it is instructed by experience, and what exact roles genes and experience play in wiring up brains.

So in my field we find a stunning degree of specialization in adult human brains, with discrete regions that carry out remarkably specific functions like face recognition, like aspects of language, and like understanding what other people are thinking. And when you look at that organization, which has the same spatial structure in any normal adult, it just makes you think that there must be some role of genes in setting up that very regular structure.

However, at least in one case, we know that experience plays a critical role. And that is a region on the bottom surface of the left hemisphere, approximately in there, about half an inch across. And that region responds very strongly to letters and words in an orthography you know and not in an orthography you don't. And that functional property must be wired up by that person's individual experience.

So the same questions about the roles of genes and experience in wiring up brains can be investigated in animals in stunning mechanistic detail, because in animals you can control experience, you can surgically redirect developing nerve fibers, and you can switch specific genes on and off at particular stages of development in particular brain regions. And we'll hear about some spectacular results from this kind of work from Mriganka Sur.

OK. So to briefly introduce the first speaker, Liz Spelke is a cognitive psychologist at Harvard. Before moving to Harvard Liz was a professor in my department here at MIT, the Department of Brain and Cognitive Sciences, where her colleagues still miss her very much. And before that she taught at Cornell.

Among her many honors, Liz was named among Time Magazine's America's Best in science and medicine. She received the William James Award from the American Psychological Society. She's also a member of the National Academy of Sciences and the American Academy of Arts and Sciences.

Liz is in my personal pantheon of the most important cognitive scientists of all time because of her ability to reach for the very biggest and deepest questions in our field and, by conducting stunningly clever and watertight experiments, to answer them.

OK. So we'll start with Liz.

[APPLAUSE]

SPELKE: Thank you, Nancy, for that way-too-generous introduction. I also miss my colleagues at MIT, though I have to say my strategy for dealing with this is to mentally create a single department that spans the two and interact with all of you as much as possible.

Well I want to introduce my part of this symposium by going back to a moment in history about 55 years ago when, I think, the question of how we tease apart the roles of innate endowment and experience in the development of our abilities left its 2,500-year-old home in philosophy and moved to science. And in particular I want to point to the work of two investigators who I think played a key role in this development, Eleanor Gibson at Cornell and Dick Held here at MIT.

Now, the area that they focused on concerns a question that was fundamental to philosophy from the beginning. How do we experience a world at a distance from ourselves when all of the information that we get from that world comes only as patterns of stimulation on our proximal sensory receptors? That is, what are the origins and development of our capacity to perceive space?

Now, Eleanor Gibson focused primarily on visual space perception, and she investigated space perception in human infants in a variety of ways, of which I'll mention just one: an ingenious series of experiments that she developed on a device that she called the "visual cliff." So this is a center board-- an opaque center board flanked by two perfectly transparent surfaces, each of them a fully good surface of support, but presented so that only one of them looks like it would support us. The other looks as if there's a gap over which one would fall if one were to crawl forward, because underneath the Plexiglas a textured surface was placed much further away.

So she placed babies on the center board of the apparatus, encouraged a parent to go to one or the other side, enticing the baby with toys, and saw which way the baby would crawl. And when the parent went to the shallow side, babies readily crawled across. On the other hand, when the parent went to the deep side, babies looked at the parent with deep skepticism, despite the fact that they could feel that the surface would clearly support them, as this baby is doing right here. This suggests that, by the time locomotion begins, babies are already perceiving surfaces at a distance from themselves by means of vision and, in fact, this visual information seems to take priority.

Now, one limitation of this work is that you can only test babies once they start crawling at about seven months of age. So to get around that limitation Gibson developed a systematic program of comparative research. She devised cliffs on which she could test animals of a variety of different species from goats to rats to cats to turtles, I believe, and a whole bunch more, and discovered, I think, two interesting things.

The first thing was, there were detailed similarities in the patterns of performance of all these species, suggesting that these abilities didn't evolve independently in different phylogenetic lines but rather are largely homologous across animals. If that's the case, then studies of other animals can probe the role of experience in their development.

So Gibson did one series of heroic studies on goats, where she participated in the birth of a goat and, as soon as it dropped out of its mother, she ran over and plunked it on the cliff to see which way it would go. And indeed the goats, like the seven-month-old infants, crossed on the shallow side, not the deep side.

Now, other animals couldn't be tested in that way because they don't use visual information to guide their locomotion at birth. That's a behavior pattern that matures later.

So for those animals, in particular rats, Gibson devised experiments using controlled rearing methods. She took two groups of rats, reared one with the normal alternation of dark and light, and reared the other group entirely in the dark until they reached the age where they would use vision, and then she tested both groups on the cliff. They both crossed on the shallow side, they both avoided the deep side, no difference between them, supporting the conclusion that this aspect of visual space perception emerges without any prior relevant visual experience of a three-dimensional world.

But that's not to say that space perception isn't modulated by experience. A beautiful program of research by Dick Held-- which, I see from the clock, I do not have time to describe-- has shown that experience importantly modulates these innate perceptual capacities: experience coordinating one's perception with one's actions, and the growth of mechanisms for perceiving particular cues to depth, which emerge early in infancy, each on its own unique developmental timetable.

Held also was able to relate that timetable to events in the development of the nervous system, particularly primary visual cortex. And he was able to relate findings of plasticity in visual cortex in other animals to findings of plasticity in humans.

So in this way, I think, one question about the origins of our knowledge of the world, the origins of the knowledge that guides our perception of space, was decisively resolved by these investigators and their collaborators. So let me turn to the origins of the capacities that are more central to this symposium, those that give rise to human intelligence.

Now here are at least three questions about those capacities that research on visual space perception doesn't resolve. First of all, where do abstract ideas come from? Ideas that capture entities that we can't perceive with any perceptual systems, like the beliefs that Rebecca Saxe was talking about, or number that Susan Carey was talking about, or the points and lines of geometry-- points that are so small they have no thickness and are, in principle, not detectable by any perceptual system.

Second question, what makes humans smart? We seem to have roughly the same perceptual systems as other animals. There don't seem to be huge differences in our basic capacities for action relative to other animals, but when it comes to central cognitive abilities we leave other animals in the dust. How do we do that?

Finally, how do we learn new concepts and theories? Particularly, concepts like the concepts of classical mechanics that Chomsky referred to on the first night, or of the integers-- the rational and the real numbers that Carey talked about-- that require in some way that we overturn our earlier ways of reasoning about the world.

Well, what I want to suggest in my rapidly-diminishing remaining time is that we can make progress on all of these questions by pursuing the research agenda that led to progress on space perception. For me this involves focusing on the cognitive capacities of young infants in relation to four comparative enterprises.

One, comparisons of cognitive abilities across human development; another, across phylogenetic development as we can infer that from comparisons across species; a third, across people and animals with systematically-varying experience of the world including animals in controlled-rearing experiments, people reared without formal education, people reared without access to a full natural language; and finally, comparisons across levels of analysis from computation to brain systems to neurons.

Now to cut to the chase, I think that this research provides evidence for at least five early-developing systems of knowledge, each of them a cognitive system that goes beyond perception and allows us to represent objects that are out of sight, agents that not only act but do so by virtue of hidden intentions, and abstract entities of number and geometry.

Now each of these systems is distinct from the others. And the evidence for that comes from studies probing the inferences that infants make in each of these domains. Consider the systems for representing objects and agents. These are both entities that move, but infants make inferences about their movements in accord with different principles: contact mechanics in the case of objects, goal-directed teleological principles in the case of agents.

Each system is also innate. And the evidence here, like the evidence for visual space perception, is that it's exhibited by animals and, in some cases, human infants on first encounters with the entities to which it applies. So in some cases, like number, it's possible to do experiments with newborn infants, presenting them with a sequence of sounds and an array of visible objects that either agree in number or disagree in number.

Infants respond consistently differently to those arrays depending on whether or not they correspond in their numerical magnitude, despite the differences in the modalities in which they're presented and in their spatial versus temporal formats. That is evidence for quite an abstract representation of numerical information at birth.

In the case of geometry, a case that Patrick Winston talked about yesterday that we've also studied, we have the problem that Gibson did with the visual cliff. There's evidence that when infants begin locomoting, they do so by representing the shape of the surrounding surface layout. But that's a system that only kicks in to guide locomotion, so it can't be studied in very young infants. But, like Eleanor Gibson, one can conduct experiments on other animals and Giorgio Vallortigara has been doing this, among other people.

My favorite studies are on chicks, who navigate at birth and use the shape of an environment the first time they encounter it. But the prettiest studies that Vallortigara did were controlled-rearing studies, where he takes chicks and rears two different groups either in a geometrically-structured environment or in a homogeneous-- well, a cylindrical environment with no informative geometric structure, and then compares what these animals do when they're first presented with a rectangular room. As in the case of the dark- and light-reared rats, performance is high and identical in the two cases.

Finally, these, I believe, are true systems of knowledge in the sense that each system guides children's learning about entities within its domain and adults' reasoning about those entities.

So one ability you can find in infants is an ability to take arrays of dots-- of elements-- combine them together into a sum, and compare that sum to another array. Kindergarten children can also do this, and their ability to do it is correlated with their learning of symbolic mathematics, suggesting a relation between these systems. And studies using transcranial magnetic stimulation, of the sort that Rebecca Saxe described yesterday, provide further evidence that adults draw on these systems when we perform symbolic arithmetic problems.

So I think these findings tell us that core knowledge systems are sources of human intelligence. But they can't fully explain it, for three reasons. First, each of these systems exists in non-human animals, yet animals don't use them in the ways that we humans come to.

Second, each of these systems has limits that intelligent humans transcend. So Patrick talked yesterday about a limit on the geometry system. It represents left-right relations between walls that are long or short, but not between walls that are blue or white.

And finally, even our simplest, uniquely-human systems of language seem to develop slowly in children.

And I'm over time, so I won't give you an example of this, but I refer you to the example that Susan Carey talked about yesterday, development of natural number and counting, skipping over the case of geometry which, I think, illustrates the same point.

So what is happening over development? I think what's happening is that children come to combine the representations from their core systems productively so that they can create the new knowledge systems that are unique to us. At least some of these developments, I think, are independent of formal education because we find them not only among educated adults, but among adults in cultures where they've never gone to school or used many of the symbolic devices that are highly familiar to us.

There's at least some evidence that some of these abilities depend in some way on natural language. They seem to be systematically not-fully-there in people who, for reasons of deafness, are deprived of the input that we need to build a full natural language. That leads me to hypothesize that these systems may in some way draw on the combinatorial power of natural language to produce the combinations of core knowledge that they require.

Now, that's just a hypothesis. It may turn out to be false, but let me conclude with just three bets. First of all, I think that the effort to answer ancient questions about the nature and origins of our knowledge is under way. I think that it's productively pursuing these questions by using the approaches that were developed, in part here at MIT, some 50 years ago for studying the development of perception. And I think that, by leveraging new findings on the cognitive capacities of infants together with new findings and approaches to the study of learning, we have a good chance this time around to answer them.

[APPLAUSE]

KANWISHER: OK. Our next speaker is Terry Sejnowski, who pioneered the field of computational neuroscience, working in this area way back before most of us had any idea what it was. His lab uses both experimental and modeling techniques to study the biophysical properties of synapses and neurons, and the population dynamics of large networks of neurons.

Terry now holds the Francis Crick chair at the Salk Institute for Biological Studies. He's also a professor of biology at UC San Diego, where he's co-director of the Institute for Neural Computation. He's the president of the Neural Information Processing Systems Foundation and the founding Editor in Chief of the journal Neural Computation. Terry's also an investigator with the Howard Hughes Medical Institute, a member of the Institute of Medicine, the National Academy of Sciences, and the National Academy of Engineering.

[APPLAUSE]

SEJNOWSKI: OK. Well, thank you very much. I really appreciate being back here. I did a post-doc at Harvard Medical School in the department of neurobiology and I really enjoyed coming over to MIT and talking to colleagues here.

So what I want to do in this talk is to give you a couple really brief examples of cases where we can combine information from many different sources-- biological, psychological, and computational-- to try to get some insight into what is happening as the brain develops.

And Sydney Brenner has already set the stage. What he told us was that if you wanted to understand how behavior emerges from DNA you have to really look at this from the perspective of development, of how the genes, by virtue of their transcription, translation, and ultimately the creation of the structures within the cell, guide the development and the specialization of the neurons in the brain. And then how the environment, in turn, influences the actual expression of the gene.

So it's a two way street between genes and environment. And the biologists now who are studying that in great detail have given us the genome, have given us the tools that we need in order to work out those interactions.

Now a really good place to start, as we've heard from Liz, is the early development of language. And what we know is that, even before infants begin to produce speech sounds, they already are listening and hearing, gathering information about the world. And specifically, from Pat Kuhl's work, we know that they figure out very early on what the sounds of their language are going to be. The brain comes equipped to learn whatever language group they happen to be born into.

So there's a sensory learning phase, but following that there is a babbling phase where the sensory motor learning takes place, where the baby learns how to produce the very sounds that it has heard. And in a review by Allison Doupe and Pat Kuhl, this particular sequence of events-- a sensory learning phase and then a sensory motor learning phase-- is recapitulated in a model system, which is birdsong learning.

Now there are 7,000 species of birds, of which about 4,000 are songbirds. That is to say that they produce a species-specific song. And as you can see here in this spectrogram, the specific frequencies and the time course are very well-specified. They're different for different species.

Now, the zebra finch has been very well-studied. It's kind of the model system for birdsong learning because it's easy to rear in the lab, but it also has a very stereotyped song. In a very intriguing experiment, which is very relevant for this symposium-- published in Nature in 2009 by Fehér et al.-- they took a young zebra finch. Now, zebra finches are born in the spring and they listen to a tutor-- usually their father or another conspecific-- and then in isolation they can reproduce the song after going through the equivalent of a babbling phase. So there's a sensory learning phase and a sensory motor learning phase.

But what would happen, for example, if you took a bird very early, before there was any tutor, and just isolated it? Well, it produces a scratchy sound which doesn't sound at all like a fully formed birdsong. And let's just call that the first generation.

But now what you do is you rear these birds together, and their offspring then pick up the scratchy sound. But as you can see, through a sequence of generations, it eventually converges to something that sounds very similar to the original zebra finch-specific song.

So that suggests that there has to be some innate knowledge of what the species-specific sound should be like, but it doesn't emerge immediately. It's not like, right out of the egg. It really does require some experience and some development.

Now here is an example of a zebra finch song and I'm going to play for you what it sounds like just so you can see what the notes are and the syllables. So a few introductory notes and then a very stereotyped set of motifs, each of which has a syllable. And you can see that's repeated very accurately. That emerges over months.

Well, how are we going to understand how that emerges? Where is the information coming from? Where is it stored in the bird brain? The song bird experts talk about a template that's formed somewhere in the bird brain and that the bird has to match that.

If you look at the areas of the bird brain that correspond to places where we know that the motor system is forming the sounds and where the auditory system is picking up the sounds, there is a corresponding set of areas in the human and the song bird. And this suggests that, if we could understand these mechanisms in the song bird, that may help us understand something about what's emerging in the human brain, which is much more difficult to study.

So let's see what we know. Here is a model now that was developed in my lab by Kenji Doya back in 1995 in which we took each of these areas and what was known from recording from individual single neurons, and then tried to construct a computational model which would explain, first of all, how the actual motor system produces the sounds starting with single syllables generated in an area of the brain called HVc studied by Michael Fee here at MIT, and then finally producing the sounds. And we know that this pathway is the key pathway in the adult.

However, there's another pathway in the forebrain which is very important for the acquisition of birdsong. And if you lesion any of these areas you don't produce normal birdsong. So it's clear that you need these areas in order to develop the song, but they're not needed for the final production.

And the basic model that we developed was a very simple one. It's a reinforcement learning model and it's based on gradual refinement of the song. This is what you hear: the song first starts out with a very inaccurate version of the syllables and gradually over time it refines and crystallizes. Well, here's the basic idea. The basic idea is that one of these pathways, from LMAN in the forebrain to the motor nucleus here, produces random perturbations. It just changes the song a little bit.

Well, the bird produces the song with these new synaptic connections, listens to it, compares it to the template that's stored somewhere in the auditory system, and if it's better it keeps the changes. If it doesn't improve, it erases them and does the process over again. This is a very weak learning system-- it's called reinforcement learning-- because all you're getting back is whether the song is better or worse.

But surprisingly, what we found was that this refinement, within just a few thousand trials, can actually get you from something that sounds very scratchy to something that is actually a very good approximation to what the zebra finch produces.
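To make the perturb-and-evaluate idea concrete, here is a minimal sketch of the scheme just described-- not the actual Doya-Sejnowski model, which compared song spectrograms. The "template" and "motor program" here are just invented parameter vectors, and the only learning signal is whether a random perturbation made the match better or worse:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for the memorized tutor song: a target vector
# of syllable parameters (purely illustrative).
template = rng.uniform(-1, 1, size=20)

# The juvenile starts with a scratchy, random motor program.
motor_program = rng.uniform(-1, 1, size=20)

def song_mismatch(program):
    """How far the produced song is from the stored template."""
    return np.sum((program - template) ** 2)

best = song_mismatch(motor_program)
for trial in range(5000):
    # LMAN-like pathway: inject a small random perturbation.
    perturbed = motor_program + rng.normal(0.0, 0.05, size=20)
    mismatch = song_mismatch(perturbed)
    # Weak reinforcement signal: keep the change only if the song got
    # closer to the template; otherwise erase it and try again.
    if mismatch < best:
        motor_program, best = perturbed, mismatch

print(f"mismatch after 5000 trials: {best:.4f}")  # crystallized song
```

Even with nothing but a better-or-worse signal, a few thousand trials drive the mismatch toward zero, which is the point of the example.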

Now I want to give one more example, a very brief one, just to show that reinforcement learning is actually quite capable of producing some very powerful learning processes, even though it's a very weak learner. And this is honeybee learning. Honeybees are very, very good at going out in the spring and foraging. And they are able to learn from the shape of the flower, from the color and the smell, which flowers have the best nectar. And it's very important, because this changes very rapidly in the spring.

Now, back in 1997 Randolf Menzel's lab discovered a particular neuron in the bee brain called VUMmx1. It has a neurotransmitter called octopamine, and they proved that this neuron was absolutely essential for classical conditioning, for pairing the smell and the visual stimulus with the reward. And this is an essential part of the process of learning how to control the motor system in order to be able to land on flowers that have good nectar.

Well, back in 1994 Peter Dayan, Read Montague, and I developed a very, very simple model which used that neuron-- a model of the actual output of the neuron-- in order to guide both the learning and the motor system. And the key idea here is that this neuron is trying to predict the reward that you're going to get from a particular sensory stimulus. If I land on a yellow flower, will I get a high reward or a poor reward? And as the bee goes and chooses, it can use that information from previous experience to guide its behavior.

Now, this model reproduced many aspects of bee behavior, including risk-aversion behavior-- a lot of things that bee psychologists have been studying for many years-- with, as you can see, just a few model neurons. The key here is that somewhere in the brain there is a way to predict future reward.
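Computationally, the heart of that reward prediction is a simple error-correcting update-- effectively the one-step case of the temporal difference rule that comes up below. Here is a minimal sketch; the flower colors and reward probabilities are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# Invented reward statistics: yellow flowers usually pay off,
# blue flowers rarely do.
reward_prob = {"yellow": 0.8, "blue": 0.2}

prediction = {"yellow": 0.0, "blue": 0.0}  # learned reward predictions
alpha = 0.1                                # learning rate

for visit in range(2000):
    flower = rng.choice(["yellow", "blue"])
    reward = float(rng.random() < reward_prob[flower])
    # Prediction error, the quantity a VUMmx1-like neuron is thought
    # to signal: actual reward minus predicted reward.
    delta = reward - prediction[flower]
    prediction[flower] += alpha * delta  # nudge toward what happened

print(prediction)  # converges near {'yellow': 0.8, 'blue': 0.2}
```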

Now, what if we replace this very simple visual sensory input, this one neuron here which represents a particular feature of the visual stimulus? Suppose that we replace it with a powerful memory system of the kind that is seen in mammals: a cerebral cortex, a multi-layered hierarchical system of the kind that Tommy Poggio has been studying, for example, in the visual system. Rather than having a single neuron, we could represent all the details of the world in full and then use that to make very subtle predictions of future reward. Well, that's a hypothesis.

Could it be that the power of our abilities to go and make cognitive choices in the world is being driven by a very simple reinforcement system, like the temporal difference learning algorithm that we used in the bee?

Well, this has been tested-- not in a model of the cerebral cortex, but in a very simple model of playing the game of backgammon. So Gerry Tesauro, who was at IBM, decided to see whether or not he could train a multi-layered network which was computing the value function, predicting the reward based on the state of the board for backgammon-- whether or not he could train a network by self-play, playing against itself, to play the game and to discover some of the strategies that humans have adopted in order to play at a very high level.

Well, his network played itself hundreds of thousands of times. And it learned very slowly. At the beginning it had to learn how to bear off the pieces so that it could win-- in order to win you've got to get all your men off. It then gradually learned how to develop blocking strategies in the middle game to prevent your opponent from getting through and winning the game. And ultimately it also learned some very sophisticated strategies, specifically the details of when you should change from a defensive position to an offensive position. That's something very subtle that takes even humans a long time to figure out.

So when experts came and started playing against the network, TD-Gammon, they were astonished at how well it was playing. It was by far the best backgammon-playing program of its time. And in fact, it reached the point where it played the experts-- Bill Robertie, in particular-- to a standstill. And he was quickly taking notes, because he went back and actually analyzed some of the moves that it was making that, it turned out, were better than human moves. In other words, he was learning from the program.

And in a sense this very weak learner, reinforcement learning, coupled with a very sophisticated knowledge system, was able to discover new strategies and was, in a way, being creative about the game of backgammon. Backgammon is a very limited game, but let me remind you that there are about 10 to the 40th different game positions. And even with a million games played against itself, the network was only visiting a very, very small fraction of all the possible game positions, so you have to be able to generalize in order to play well.

You have to be able to figure out from previous experience how to deal with new experience. And ultimately that's really what brains are really good at, how to deal with an uncertain world where you have to be able to make judgments.

So the last thing I want to tell you is that we have, in our own brains, a system which is called the dopamine system. And Wolfram Schultz, who's here on the right, by recording from these dopamine neurons, discovered that they are also predicting future rewards.

And these dopamine neurons project very widely throughout the prefrontal cortex and the striatum. And it may well be that the very same system, the very same ancient reinforcement system that was present in almost all species and that helped guide those species to survival, is also guiding you right now in helping you to make the right decision. Thank you.

[APPLAUSE]

KANWISHER: Mriganka, do you want to set up while I'm-- OK. Thanks, Terry.

So our next speaker, Mriganka Sur, is the Paul E. Newton Professor of Neuroscience, and he's chair of the Department of Brain and Cognitive Sciences at MIT and director of the Simons Initiative on Autism and the Brain at MIT.

Mriganka studies the organization, development, and plasticity of the cerebral cortex using experimental and theoretical approaches. He's discovered fundamental principles by which networks of the cerebral cortex are wired during development and change dynamically during learning. Mriganka is a member of the American Academy of Arts and Sciences, the National Academy of Sciences of India, and he's a fellow of the Royal Society in the UK.

Mriganka's the closest thing I have to a boss, and having watched him in action over many years, one thing I find most remarkable about him is his uncanny ability to push aside the cobwebs in any murky discussion and get to the crux of the matter in any topic, whether it be neuroscientific, cognitive, or administrative. Mriganka.

[APPLAUSE]

SUR: Thank you, Nancy. I'm going to talk to you as a neuroscientist and I'll give you my punchline right away, that nature and nurture, or genes and environments, at heart and at least at certain points in development, are inseparable. Each needs the other for affecting brain structure, function, and behavior.

So we have known for 100 years, since the German anatomist Korbinian Brodmann, that the cerebral cortex of the brain-- which makes up 80 percent of the brain in us, and which is what I study in my laboratory-- has different regions that look different.

And the research program, at least in one large chunk of neuroscience for the 100 years since, can be summarized in one sentence: individual areas of the cerebral cortex have specific functions. And we know that to be so from a variety of kinds of evidence demonstrating that individual areas have specific pathways that bring information in and take information out, and that they have specific processing networks.

And I remind you that the brain is made of neurons, which convey information via electrical impulses, and they make synapses with other neurons in order to form networks of neurons that together compute information in ways that are much more than 100 or 1,000 individual neurons acting alone. Each neuron makes thousands upon thousands of synapses with hundreds of neurons. And networks are the engine of the brain.

So how does the cortex get to be so? How did the specificity arise? And in developmental neuroscience this question then is, how is the brain wired? And then related to that and arising from brain wiring, how does brain wiring create function?

So to understand how brain wiring, and how nature and nurture, interact in order to create the cortex, we need to place development in context along its timeline. In the human embryo, within a few weeks after conception, the central nervous system begins to form. And right when the central nervous system begins to form, genes begin to be expressed in the primordial central nervous system that lay out the demarcation of different regions of the brain, including the cortex.

And I'll begin with a very simple statement. The brain wires itself. You might think, it is so simple, why does it bear repeating? I believe it is the single most important reason why brains compute the way they do and why brains differ from computers. And when builders of computing machines understand this, we will have made some progress.

So the brain wires itself, and it does so using mechanisms that depend on developmental time. The earliest events in brain wiring-- the demarcation of brain regions and cortical areas, and subsequently the long-distance pathways that bring information in or take information away-- happen very early. And we take it for granted that this is mainly guided by genes.

I hasten to add that there is no such thing as a gene that acts independent of an environment. There is always an environment. The only issue is, what is it? And early on, the environment is the environment of other genes and the molecules they make, and the milieu of the developing nervous system.

But then as development proceeds, the outside world can begin to have an impact via the pathways that have been laid down. For example vision, which is transduced by the eye and converted into electrical impulses traveling from eye to brain, can begin to have an impact on the networks that arise in the visual cortex and begin to process vision. So networks form late, often even into adulthood-- the kind of thing that Terry talked about. And we take it as an intuition, and it is demonstrably true, that networks are influenced by experience and plasticity.

So here are some propositions on plasticity. This is what I'm going to emphasize for the next few minutes before I close.

Plasticity is changes in synapses, cells, and networks as a result of experience or learning. Plasticity encodes the past and influences the future. Plasticity is prominent during sensitive periods of brain development. There is no one critical period or sensitive period for all of brain development-- different modules, different brain systems, have different sensitive periods. We need to understand that and keep it in mind as we talk about genes and environments. Plasticity underlies learning and memory, as you've heard from Terry.

But the two key propositions for my next five minutes are these: plasticity shapes neuronal responses and creates brain representations, and plasticity involves rules which are implemented by specific mechanisms, including genes. You cannot have plasticity without genes, gene expression, and gene function acting on specific synapses and in specific ways in brain networks.
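For concreteness, here is a minimal, purely illustrative sketch of one such plasticity rule-- a Hebbian-style update with decay. It is not a model of any specific cortical synapse; the firing threshold, activity statistics, and decay constant are all invented:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy network: one postsynaptic neuron with 10 input synapses.
# Inputs 0-4 tend to fire together (as if driven by a common stimulus);
# inputs 5-9 fire sparsely and independently.
w = rng.uniform(0.0, 0.1, size=10)

for step in range(2000):
    group = float(rng.random() < 0.5)            # correlated group fires?
    pre = np.concatenate([np.full(5, group),
                          (rng.random(5) < 0.1).astype(float)])
    post = float(w @ pre > 0.2)                  # toy firing threshold
    # Hebbian term: synapses active when the neuron fires get stronger.
    # Decay term: a crude stand-in for activity-dependent weakening.
    w += 0.01 * post * pre - 0.005 * w
    w = np.clip(w, 0.0, 1.0)

print(w.round(2))  # the correlated group ends up much stronger
```

Synapses repeatedly co-active with the neuron's firing end up strong while the rest stay weak-- a toy version of how a rule, rather than a blueprint, can shape a network's responses.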

So the visual system illustrates how cortical networks create new properties from simpler inputs. Vision starts with the eyes, but is really created in the brain. A key property created in the primary visual cortex is selectivity not to a blip of light or to light all over, but to an oriented line segment.

Orientation selectivity is a key property of vision. A very simple question that my laboratory asked many years ago was, how does orientation selectivity come about? What are the networks that underlie orientation selectivity?

Orientation selectivity, shown here for two neurons, is the property that a neuron such as this one responds to light of this orientation, at about 10 o'clock to 2 o'clock, moving up and to the right. This neuron likes an orientation that is 90 degrees away, moving up and to the left.

Not only are individual neurons sensitive to orientation, there is a systematic organization of orientation, shown here by the new technique of two-photon imaging, in which we can label every cell-- every cell in a volume of tissue that's about two human hairs wide. Each human hair is about 80 to 100 microns.

And you can see here that, as we show this animal-- in this case a ferret-- a grating of a certain orientation, neurons at certain locations like certain orientations. For example, neurons here like horizontal orientations. Neurons here like vertical orientations. It tells you at a glance that a neuron in the visual cortex is selective. And the selectivity is of the essence for vision and for visual processing.

The selectivity is expressed furthermore in orientation maps, such as you see here, in which each of these color-coded regions represents neurons with one orientation selectivity, or another, or another. And this region here is a pinwheel center. Such maps come about from networks that wire up to create orientation selectivity in individual neurons, and a self-organized map across thousands upon thousands of neurons that represent orientation in this manner.

How does this come about? 20 years ago, we did a simple experiment where we asked, is the nature of vision important for creating the networks that process vision? So vision starts with the eyes and progresses into visual areas. Audition, or hearing, starts with the ears and progresses through brain pathways into auditory areas.

We rewired the brain. We made visual inputs go to the hearing structures of the brain, thereby providing the auditory cortex with the electrical activity and the pattern of activity that is characteristic of vision.

What does that do? It rewires auditory cortex networks such that they become visual-like, within limits. So I just told you that in the normal visual cortex we have orientation-selective cells and orientation maps.

The rewired auditory cortex, shown here, was never made to see. It was never made to see. Yet it ends up developing orientation-selective cells and a crude map of orientation with pinwheels.

And so this provides powerful evidence about the role of electrical activity-- because that's all that the cortex knows about the outside world. It doesn't know about light or shade or sound. What it knows is the electrical activity that comes down these pathways and drives the neurons in the cortex.

That electrical activity ends up rewiring the connectivity between neurons such that, while the normal auditory cortex has a certain pattern of connections which differs greatly from the pattern of connections in the normal visual cortex, the rewired auditory cortex ends up looking much more like the visual cortex than like the auditory cortex, yet with certain constraints.

How can this be? How can it be that electrical activity alters synapses and connections? And the answer speaks to some of the deepest issues in our field. And it speaks of the deepest relationship between nature and nurture.

A synapse has 1,500 molecules-- there are 1,500 genes that make a synapse develop, change, and function. And these genes act on different time courses under the influence of activity. And we can image each synapse, we can make mice in which certain genes are missing, we can use RNAi against specific genes to ask what happens when they are missing.

And when we image the effect of such manipulations at the level of single synapses-- now, this here is a two-photon image of a living mouse brain in which these are living synapses that are twitching. And each of these is less than a micron, and we can image them at high resolution. And I've speeded up the video about 50 times.

You can see that synapses twitch. Large synapses twitch a little bit less than these teeny tiny synapses. I guarantee you, there are synapses in your brain that are actually twitching at this very moment.

Why do they twitch? They twitch because activity drives them, and the more they are driven, the less they twitch. The less they are driven, the more they feel, "I'm not getting it right, so I need to change."

In fact, during development there are hundreds upon hundreds of genes that are going up and down in the brain. And during critical periods of the development of vision, using other manipulations, we can do massive genetic screens and ask, what are the genes that go up? What are the genes that go down? In this case, in the mouse brain, during a critical period during which visual connections from the two eyes are forming in the primary visual cortex.

Here are a few hundred genes. The details are not important. Some are going down, some are going up. And what is happening during this time is that vision is happening. Vision is driving the development of the cortical networks and cortical synapses that process vision. So electrical activity regulates the expression of hundreds-- I would say even thousands-- of genes during the critical period.

And in fact, the converse is also true: many genes mediate the effect of electrical activity and experience. You need genes, and the proteins they make, in order to encapsulate the effect of, say, the pattern of vision onto the circuits that process vision. And one of these genes that we have recently studied is a gene called Arc.

What does this have to do with understanding brain function and, importantly, dysfunction? Because one of the goals of our field is to explain not only normal brains, but also abnormal brain function. And we believe that these insights into the inseparable relationship between genes and environments, or genetics and plasticity, speak hugely to the vast majority of neuropsychiatric conditions. One example of that is autism.

So autism is a developmental disorder. It's marked by a triad of symptoms: sustained impairment in social interaction, impairment of communication, and restricted patterns of behavior. It's diagnosed between one and three years of age. And that already tells you that it's likely a disorder of synapses, because that's when synapses are developing and being shaped hugely. It's a strongly genetic disorder. It's also highly heritable. The two are not the same thing, but hundreds of genes contribute to autism risk, and there are rare single-gene variants that are extremely useful to study.

And in the last minute I will give you a little bit of insight as to how understanding how genes shape plasticity is fundamental to understanding disorders such as autism or schizophrenia or bipolar disorder, disorders-- neuropsychiatric conditions that are fundamentally disorders of synapses and circuits.

We study a disorder called Rett syndrome. It strikes girls 20 to 1 over boys. It's a devastating disorder. It's a disorder that is manifest after the first one or two or three years of life. The gene was discovered ten years ago by Huda Zoghbi. It's a gene called MECP2. It's an amazing gene. It makes a protein that influences hundreds of other genes. It's an epigenetic regulator.

To make a long story very short, working with Rudy Jaenisch of the Whitehead Institute, we have demonstrated that the mouse in which the gene is missing recapitulates many elements of Rett syndrome. And the fundamental pathophysiology, what has gone wrong, involves a large number of molecules that meet at the synapse. When this gene is missing, synapses are excessively plastic, so that brain systems don't mature enough.

We can show this in animals. We can show this in neurons derived from human beings who have Rett syndrome, using the amazing technologies of stem cells and neurons derived from skin cells. The skin has the critical property, as we all know, that when we cut our skin it heals. Not so in the brain. The large majority of brain cells, once you lose them, you have lost them. But this property of skin cells, being able to heal, can be used to take skin cells from an individual with a disorder and derive neurons that share the same genetics.

And so here are neurons in a dish derived from an individual. And by electrical stimulation of these neurons, you can see that one can measure the connectivity and the strength of synapses using certain molecular genetic chicanery.

Out of all this comes the idea that understanding plasticity mediated by genes leads to the development of therapeutics. And one of them is in clinical trials at Children's Hospital Boston. So the deepest truth about nature and nurture is that you need nurture to influence nature via the genes that rely on activity and experience. And you need nature, you need genes, to implement nurture. Thank you.

[APPLAUSE]

KANWISHER: Thanks, Mriganka. Our next speaker is Josh Tenenbaum. Josh studies learning, reasoning, and perception in humans and machines with the twin goals of understanding human intelligence in computational terms and bringing computers closer to human capacities.

Josh is professor of Computational Cognitive Science in the Department of Brain and Cognitive Sciences at MIT and a member of the Computer Science and AI Lab, also at MIT. He's the recipient of the New Investigator Award from the Society for Mathematical Psychology, the Early Investigator Award from the Society of Experimental Psychologists, the Distinguished Scientific Award for Early Career Contributions to Psychology from the APA, and, just a few weeks ago, the Troland Research Award from the National Academy of Sciences.

Josh is not only a star cognitive scientist, but a true Renaissance thinker in an era when most of the rest of us mere mortals just can't pull this off anymore. There's just too much to know and our fields move too fast, so most of us just try desperately to hang on by our fingernails in our one little zone of expertise. Not Josh. Over the years I've discussed a very wide range of topics with Josh, extending far beyond cognitive science, and never once have I come upon a topic that Josh has not already read extensively on and already thought deeply about. It is a true privilege to have him as a colleague. Josh.

[APPLAUSE]

TENENBAUM: Thank you, Nancy, for that generous introduction and for listing all of those awards, most of which I wouldn't have, I think, if Nancy hadn't written an even-more-generous letter to all those august bodies.

So I'm going to talk about the perspective we get on these nature-nurture questions from trying to both, as Nancy said, reverse engineer the origins of human cognition, particularly distinctively-human cognition, and engineer more human-like artificial intelligence.

And what we see is the tension that Liz and Terry both illustrated very well. On the one hand, most of cognition is, in some sense, clearly learned. The adult state is not present at birth; it develops over the first few months and years of life in key ways that are a systematic function of the experience of the organism. And Mriganka showed that at the neural level, too.

On the other hand, it's not learning in anything like the simple sense that the British empiricists imagined. There is a huge gap between the information coming in through your senses and the rich abstractions-- the abstract concepts-- that your mind somehow comes to on top of those. And some other source of information has to fill in the gap. You could call that, perhaps, nurture-- or, sorry, nature.

And what we've learned is that, in some sense, to tackle these nature-nurture questions we've needed to develop new tools. Over the last few years, new tools in both human learning and machine learning have been leading us in exciting directions, and I want to try to give a sense for those.

But first I want to start with a classic picture that's been around, arguably, well, maybe even since the beginning of the 20th century, but really at the core of modern cognitive science, neuroscience and AI, starting from the 1980s. We could call this the standard model of learning. It's a view, computationally, that says learning is statistics on a grand scale.

We have masses of data in high-dimensional spaces-- the space of sensory inputs, say, the million retinal ganglion cells and cochlear nerve fibers and so on. And in that high-dimensional space are patterns: clusters, correlations, principal components. And the math for finding those patterns can be written in our language of symbolic math.

But it can be implemented in systems of networks-- networks of neurons, like artificial neural networks and real ones-- using the kinds of plasticity mechanisms at the cellular level that Mriganka has been talking about, and looking at larger anatomical structures like the hippocampus where, if you saw Matt Wilson's talk yesterday, these mechanisms underlie actual learning and memory phenomena in rats, humans, and many other animals.

The same kinds of learning out of statistics underlie many of the intelligent technologies that have transformed our lives and that we now take for granted: face detection, the next generation of pedestrian detection in cars that you heard about yesterday, various systems online for recommending new friends or new products. Most recently and famously, there are real triumphs like IBM's Watson, which beat the world champions at Jeopardy, or, maybe most famously around the world, Google's ability to search information in really human-like ways.
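As a toy illustration of that standard model-- statistics on a grand scale-- here is a minimal sketch of finding principal components in high-dimensional data. The "sensory data" are synthetic, generated from two hidden directions plus noise:

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic "sensory inputs": 1,000 samples of a 50-dimensional signal
# that secretly varies along only 2 underlying directions, plus noise.
latent = rng.normal(size=(1000, 2))
mixing = rng.normal(size=(2, 50))
data = latent @ mixing + 0.1 * rng.normal(size=(1000, 50))

# Principal components: the directions of greatest variance.
centered = data - data.mean(axis=0)
_, singular_values, _ = np.linalg.svd(centered, full_matrices=False)
variance = singular_values ** 2 / len(data)

print(variance[:5].round(2))  # the first two components dominate
```

The pattern-finding succeeds here precisely because the structure is simple correlational structure; the question that follows is what this picture leaves out.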

So what's missing? Well, consider how a child learns a new concept. Here's an example from an experiment we do in our lab, trying to replicate the situations of concept learning with adults and children of various ages. I can give you just one or a few examples of a new concept-- here, these tufas. And now you can go through and tell me which other ones are tufas. You didn't have to do what seems like compiling a massive data set and training for long epochs.

But just a few examples. You can say this one here-- let's just try to make sure everyone's awake. Is this a tufa, yes or no?

AUDIENCE: Yes.

TENENBAUM: This one?

AUDIENCE: No.

No.

Yes?

TENENBAUM: OK, you get the idea. You're not 100 percent sure, but you can get it almost perfectly from just a few data points. What do you have that mere statistics doesn't have? It seems like some common-sense way of organizing the world of objects and their relations. And if you look at, say, the occasional but notable and very un-human-like failures of, say, Google or Watson, they seem to be missing this kind of common sense.

Another angle into this, which, I think, is maybe the deepest one-- well, we saw some of this in the work that Liz talked about and in Rebecca Saxe's work. So Rebecca already showed you this famous Heider and Simmel video, where we don't just see this in terms of shapes-- two triangles and a circle. We see objects. We see kind of physical forces being exerted, and there's even a social story: that big triangle was bullying the little triangle, the circle's hiding. Now he's ominously going in-- you know, cue the scary music.

On top of even just this very simple perceptual input-- which is, if you want to think of it mathematically, something like 10 numbers to characterize the position, the xy position, and the orientation of those shapes, 10 numbers just changing in time-- you see objects and also intentional agents with mental states: beliefs, desires, plans, even feelings that your mind just inescapably comes to.

Now here, this is-- that's what adults see. This is a similar kind of stimulus from developmental psychologists Southgate and Csibra, who studied this in 13-month-olds. And again, we see something similar, not just two spheres moving around on a plane, but-- I'll repeat this again-- but something that looks like chasing. The big guy is chasing the little guy and not particularly well, either. Right? It seems like he thinks he can fit through some holes, which he can't, right?

So what's going on? This is something which 13-month-olds can perceive. I'm not sure if 3-month-olds can perceive this, but there's probably a reason why they studied it in 13-month-olds instead of 3-month-olds. And the development of this kind of intuitive psychology has been a rich field. And this is something which adults perceive.

So what's changing? Where's this come from? Now if you notice, the title up here is intuitive theories. And one of the things that we've learned, again taking our cue from developmental psychologists, is that we can think of these systems of common sense as intuitive analogues of scientific theories.

So again ask yourself, in the classic moments of scientific discovery, how did, say, Darwin come to his theory of natural selection by observing finches in the Galapagos, or Mendel come to his theory of genetics from pea plants, or Newton come to, say, the universal law of gravitation from observing the orbits of planets? Well, it wasn't by doing massive statistics on high-dimensional data, but by positing something about the causal structure of the world, exploring various hypotheses about the underlying causal mechanisms, and using the data to choose between them-- navigating through a search space of causal theories.

And the new theories of learning that cognitive scientists and AI researchers are developing do, essentially, this. They write it down-- there are many different languages for this, but here I'm writing these down as a kind of computer program with probabilities in them. They're called probabilistic programs. They are ways of formally describing causal processes. And then we have tools that we draw on from Bayesian statistics-- this is just Bayes' Rule here-- which say we can evaluate different causal models in light of the data to see, in some sense, which provides the best explanation of the data.
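Here is a minimal sketch of that scoring step. The "probabilistic program" is reduced to a toy coin-flip mechanism, and the two candidate causal models and their parameters are invented for illustration:

```python
import numpy as np

# Toy observations of a binary effect (1 = effect occurred).
data = [1, 1, 1, 0, 1, 1, 1, 1, 0, 1]

# Two candidate causal models, each a tiny generative "program"
# characterized by how reliably its mechanism produces the effect.
models = {"weak cause (theta=0.5)": 0.5,
          "strong cause (theta=0.8)": 0.8}
prior = {name: 0.5 for name in models}  # equal prior belief

def likelihood(theta, data):
    """Probability the model assigns to the observed sequence."""
    return np.prod([theta if x else 1.0 - theta for x in data])

# Bayes' Rule: posterior is proportional to likelihood times prior.
unnormalized = {name: likelihood(theta, data) * prior[name]
                for name, theta in models.items()}
total = sum(unnormalized.values())
posterior = {name: p / total for name, p in unnormalized.items()}
print(posterior)  # the strong-cause model explains the data better
```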

And that's a new take on learning which we can use, for example, to tackle not only problems of scientific discovery but-- say, go back to this thing. So what is your mind doing here when you look at this new world of objects? Well, in a sense we would posit that you're organizing this into something like a kind of intuitive evolutionary tree, putting some kind of taxonomy on there. And now with that hypothesis space in your mind, looking at these few examples over here of tufas, you can right away lock in and narrow in on the concept that's being learned and figure out how to solve this key problem that Terry said is at the heart of learning, which is generalization: going beyond these few data points and figuring out which other ones are tufas and which ones aren't.
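
The tufa demo itself can't be reproduced here, but a minimal sketch of the underlying move-- Bayesian concept learning over a nested hypothesis space, with the size principle P(examples | h) = (1/|h|)^n favoring the smallest consistent hypothesis-- might look like this. The little taxonomy is invented for illustration:

    # Hypotheses are nested sets (nodes in a toy taxonomy).
    hypotheses = {
        "dalmatians": {"dalmatian1", "dalmatian2", "dalmatian3"},
        "dogs":       {"dalmatian1", "dalmatian2", "dalmatian3",
                       "poodle", "terrier"},
        "animals":    {"dalmatian1", "dalmatian2", "dalmatian3",
                       "poodle", "terrier", "cat", "horse"},
    }
    examples = ["dalmatian1", "dalmatian2", "dalmatian3"]

    post = {}
    for name, h in hypotheses.items():
        if all(x in h for x in examples):              # h must contain the data
            post[name] = (1 / len(h)) ** len(examples)  # size principle
        else:
            post[name] = 0.0
    z = sum(post.values())
    post = {k: v / z for k, v in post.items()}
    print(post)  # 'dalmatians' dominates after just three examples

    # Generalization: P(new item falls under the concept), summed over h.
    def p_in_concept(item):
        return sum(p for name, p in post.items() if item in hypotheses[name])

    print(p_in_concept("poodle"))  # low: three dalmatians imply 'dalmatians'

Three examples are enough for the smallest consistent hypothesis to dominate, which is one way to formalize "locking in" on a concept from sparse data.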

So now we have to say, well, what seemed to enable this is this tree-structured hypothesis space. Where does that come from? Is that nature? Well, it's possible. And it's been proposed, for example, by the cognitive anthropologist Scott Atran, that this tendency-- which you see not only in everyone in this room but across cultures-- to organize the world of natural kinds, plants, animals, and so on into trees is some kind of cognitive universal, something innately determined.

Now the trick there is that not all domains of thought are tree-structured. Different domains have different kinds of structures. And so, if you think that the abstract structure of a domain has to be innately specified, then it might lead to a proliferation of different cognitive modules, and it's hard to see how we would have innate specification of structures that never even existed in any cognitive system through most of our evolutionary history, like quantum mechanics.

So here's, I think, a very insightful quote from Chomsky, which says it's quite natural to think that if different domains of cognition are structured in fundamentally different ways, then that structure would be innate, because it would be hard to see what kind of common mechanism of growth could lead to such fundamentally different cognitive structures. And I think he's basically right here.

Although, one of the things that we've discovered recently-- this is work that Charles Kemp did as part of his PhD in this department a couple of years ago-- is how to take different mechanisms of growth for growing fundamentally different kinds of cognitive structures and put them together at a higher level of abstraction, into what you might call a universal grammar for cognitive structures, that can then trigger the appropriate mechanism of growth-- for example, to learn that natural kinds are best organized into something like a tree.

But from analogous data in a different domain-- say, how Supreme Court justices voted on different decisions, or other political data-- learn that the domain of politics is best organized into something like a one-dimensional spectrum, reconstructing, in a sense, the left-right or liberal-conservative spectrum. Or that a domain of images of faces is best organized into something like a low-dimensional space varying on abstract but important perceptual properties, like the ethnicity of the face or the gender, the masculinity.

In case you can see it here, you can probably clearly see the white-black dimension, and these are sort of the more masculine faces over here-- you might think of these guys as the tennis team and these guys as the football team. The very same kind of system, by combining more primitive structures like, for example, the chain and the ring, can infer the cylindrical, latitude-longitude map of cities of the world just from their distances.
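
A heavily hedged toy version of that idea: define a few structural forms (here just chains and rings), score every structure of each form against some pairwise similarity data, and let the data pick the form. The similarity numbers and the likelihood are made up for illustration and are far simpler than the real graph-based model:

    import itertools

    # Toy similarities among four items (symmetric; higher = more similar).
    items = ["A", "B", "C", "D"]
    sim = {frozenset(k): v for k, v in {
        ("A", "B"): 0.9, ("B", "C"): 0.8, ("C", "D"): 0.9,
        ("A", "C"): 0.2, ("B", "D"): 0.2, ("A", "D"): 0.7}.items()}

    def structure_score(edges):
        """Toy likelihood: neighbors should be similar, non-neighbors not."""
        edge_set = {frozenset(e) for e in edges}
        score = 1.0
        for pair in itertools.combinations(items, 2):
            p = frozenset(pair)
            score *= sim[p] if p in edge_set else (1 - sim[p])
        return score

    def best_of_form(form):
        """Approximate P(data | form) by the best structure of that form."""
        best = 0.0
        for order in itertools.permutations(items):
            edges = list(zip(order, order[1:]))
            if form == "ring":
                edges.append((order[-1], order[0]))
            best = max(best, structure_score(edges))
        return best

    for form in ("chain", "ring"):
        print(form, best_of_form(form))  # the ring-like data favor the ring

Because A and D are similar here, no chain explains the data as well as a ring does; swap the numbers and the chain wins. That is the sense in which the form itself, not just the particular structure, is learned.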

Now this is very exciting to us in terms of how it casts a new light on nature-nurture questions. But to come back to the most important things that we started with-- and let me just check the time-- OK. Yeah. Since I'm almost out of time, that's a perfect place to be.

This is the frontier of where we're at and, I think, the really hard, interesting challenge. We call this the 3-6-12 project because what we really want to do is to see if these kinds of ideas can help us reverse-engineer the earliest and most fundamental stages of knowledge, the core knowledge systems that Liz and her colleagues have elucidated.

It's an incredibly exciting project to be working on because Liz and her colleagues-- the whole field-- have mapped out so much about what is known at different stages, and yet so much is not known about how to really describe what this knowledge actually consists of and how to understand the transitions over even just the first few months of life. It's so hard that all we can do so far is to make a start at trying to describe what those systems of knowledge are like-- the core systems of intuitive physics, for example, or intuitive psychology.

Just to give you a sense of what we mean by this and the kind of work that we're doing, here's another demo. So I've got here a table with red and yellow blocks. And imagine that this table is bumped, and the blocks move around in some way, and one of them falls on the floor. The question is, which is going to hit the floor first, do you think? A red or a yellow block? So again, what do you say? Red or yellow?

[INTERPOSING VOICES]

TENENBAUM: Good.

AUDIENCE: Yellow.

Red.

Yellow?

Yellow.

[INTERPOSING VOICES]

[AUDIENCE LAUGHTER]

TENENBAUM: One more.

OK. Good. Pretty much everybody says the same thing, as you can hear for yourselves. That's the kind of experiment that we do. So what's going on in your head there? Right? Is it some kind of gigantic statistical inference?

Well, intuitively what it feels like-- to me, at least-- is that you're running very quickly some kind of mental analog simulation, like a little model of physics in your head. And we've tried to formalize that using one of these probabilistic programs, where the program now describes something like the rules of Newtonian mechanics, but with probabilities, because you're really not sure about many things: exactly where the blocks are and exactly how they might move.

And as an example of how you can test this quantitatively, we can show people stacks of blocks like these. The task is to judge stability. So this is a stack which you'd say is probably stable. This one, probably stable. This one looks quite unstable. Here are some intermediate cases.

And we can map out a gradient of stability. This is just a sketch of how the model works, more or less what I was saying: you run a few quick simulations of a kind of probabilistic Newtonian mechanics, and then we can map out a gradient of stability from one end down here to the other end. This is what our model predicts, these are the judgments of human subjects across 60 of these towers, and there's a very high correlation, suggesting that, at least at some competence level, this is the kind of thing your mind is doing.
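
As a rough sketch of that kind of model-- assuming a cartoon 2-D tower and a crude balance test, rather than whatever full physics engine the real model runs-- you perceive the block positions with noise, simulate many times, and read the stability judgment off the fraction of simulations that topple:

    import random

    def tower_falls(xs, width=1.0):
        """Crude 2-D stability check for a stack of blocks (x-centers,
        bottom to top): it topples if the center of mass of the blocks
        above any block lies beyond that block's edge. A stand-in for a
        real rigid-body simulation."""
        for i in range(len(xs) - 1):
            above = xs[i + 1:]
            com = sum(above) / len(above)
            if abs(com - xs[i]) > width / 2:
                return True
        return False

    def p_fall(xs, noise=0.2, n=2000):
        """Probabilistic Newton: add perceptual noise to the positions,
        simulate repeatedly, and count how often the tower falls."""
        falls = sum(tower_falls([x + random.gauss(0, noise) for x in xs])
                    for _ in range(n))
        return falls / n

    print(p_fall([0.0, 0.1, 0.2]))  # nearly aligned: rarely falls
    print(p_fall([0.0, 0.4, 0.8]))  # badly staggered: usually falls

The graded output of a few noisy simulations, rather than a single yes-or-no physics computation, is what produces the smooth gradient of stability judgments described above.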

We've even, most recently-- and this is something I'm quite excited about-- been able to take this kind of computational method and bring it into contact with the empirical methods that Liz pioneered and developed: the ability to learn what infants know about the world from their looking times. Liz and others have shown that, basically, the longer infants look at some display, the more surprised they are-- looking time is a measure of surprise and thus an implicit reflection of what they know or expect about the world.

And we've been able to take very simple versions of those kinds of physical-object scenarios, map out quantitatively how much longer or less long infants look at different situations as a function of different stimulus variables that make them more or less surprising under this kind of probabilistic intuitive physics, and show that a simple probabilistic model is able to predict that with pretty high quantitative correlation.
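
The linking assumption, in sketch form: if the model assigns probability p to the outcome the infant actually sees, predicted looking time grows with the surprisal -log p. The outcomes and probabilities below are invented, not from the actual experiments:

    import math

    # Model-assigned probabilities for two possible outcomes of a scene.
    model_probs = {"expected exit": 0.7, "unexpected exit": 0.05}

    for outcome, p in model_probs.items():
        surprisal = -math.log(p)
        print(f"{outcome}: surprisal {surprisal:.2f}")
    # Higher surprisal predicts longer looking; correlating these values
    # with measured looking times is the quantitative test.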

I should say the hard empirical work here was done by Téglás, who is a student in Luca Bonatti's lab. And it's really the first time that we've been able to take a principled probabilistic computational model and relate it to the most influential and salient kinds of empirical evidence for infants' abstract knowledge.

The same kinds of ideas we're trying to develop, in large part in collaboration with Rebecca Saxe, as models of intuitive psychology. So here the probabilistic program, instead of describing something like Newtonian mechanics, describes something like rational action.

The classic philosophy-of-mind model says that a rational agent takes actions as a function of beliefs and desires. Here we turn that into a probabilistic program where there's some kind of rational planning mechanism. AI researchers have figured out-- and this could be a whole talk on its own-- how a rational agent-- it could be a robot, it could be an online agent, it could be an economic agent-- takes some notion of utility, that's desires, and some notion of expectations, that's beliefs, and puts those together to plan a sequence of actions that, in some sense, maximizes expected utility.

And what we do is, just like in the physics case, take that program and, in a sense, run it in reverse: observe the actions and infer the beliefs and desires. And I should stop.
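
Here's a minimal sketch of "running the program in reverse," with everything toy-sized and invented for illustration: a 1-D corridor, two candidate goals, a softmax-rational agent, and Bayes' rule over the observed moves:

    import math

    # Toy inverse planning: a noisily rational agent mostly moves toward
    # its goal, so Bayes' rule can infer the goal from the moves.
    goals = [0, 9]           # candidate goal positions
    path = [4, 5, 6, 7]      # observed positions over time
    beta = 2.0               # rationality: higher = more reliably rational

    def action_prob(pos, move, goal):
        """Softmax over moves -1/+1, preferring the one approaching goal."""
        def util(m):
            return -abs((pos + m) - goal)
        num = math.exp(beta * util(move))
        den = sum(math.exp(beta * util(m)) for m in (-1, +1))
        return num / den

    posterior = {}
    for g in goals:
        lik = 1.0
        for here, there in zip(path, path[1:]):
            lik *= action_prob(here, there - here, g)
        posterior[g] = lik                    # uniform prior over goals
    z = sum(posterior.values())
    posterior = {g: p / z for g, p in posterior.items()}
    print(posterior)  # the rightward moves make goal 9 far more probable

The forward direction is planning: utility plus expectation gives actions. The inverse direction, conditioning on observed actions, recovers the hidden desires, which is the shape of the intuitive-psychology models described above.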

But again, it's given us the ability to take this classic problem out of philosophy and make it into, essentially, the subject of quantitative psychophysics. So just look at this and ooh and ah at the nice data and the model fits. That's about all I can say here. But the challenge, now that we can describe these systems in adults, is to ask how they might actually be learned-- and this is really just an incredibly hard problem which we're only at the beginning of.

But a key lever which we hope to be exploring over the next few years is based on the ability to, say, use what are essentially universal programming languages for these rich probabilistic models, for example, the Church Language-- developed by Noah Goodman, Vikash Mansinghka, Dan Roy, and others in our group-- that allow us to express all these models in some kind of unified form, and then to search over this space of causal theories in a principled way. It's a very hard problem, but it's the kind of thing which we think we can make progress on.
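
In the spirit of Church-- though written here as plain Python rather than the actual Church language, and with a stock textbook example rather than anything from the group's models-- a causal model can be an ordinary program with random choices, and conditioning can be as crude as rejection sampling:

    import random

    def model():
        """A tiny causal story: rain and sprinklers both make grass wet."""
        rainy = random.random() < 0.3
        sprinkler = random.random() < 0.2
        grass_wet = rainy or sprinkler or random.random() < 0.05
        return rainy, grass_wet

    # Condition on observing wet grass, then query the probable cause.
    samples = [rainy for rainy, wet in (model() for _ in range(100_000)) if wet]
    print(sum(samples) / len(samples))  # P(rain | wet grass), about 0.64

The point of a universal language is that the same conditioning machinery works for any program you can write down, which is what makes a principled search over a space of causal theories conceivable.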

So I'll just conclude by saying, the hard problem is to explain, how does our mind get so much from so little? We think that to do this we're going to need new theories of learning that go beyond just the idea of statistics on a grand scale, to something like probabilistic inference over programs that describe the underlying causal processes of the world.

It gives us a new take on these classic dichotomies of nature versus nurture. We don't, I think, have to fight battles anymore, or be constrained to narratives that put statistics and probability-- the math of data-- on one side, and the math of knowledge-- logic, symbols-- on the other side. We can really understand how these can interact in rich ways to build the kind of abstract knowledge that humans have.

And the most exciting part of this is to be able now, empirically and computationally, to bring these ideas together in mapping out the first few months and years of infants' knowledge. And it could be the basis for a new, more human-like AI. Thanks.

[APPLAUSE]

KANWISHER: Great. Thanks, Josh. Thanks everyone. Those were really riveting. I didn't quite do my job to cut you guys off because I couldn't bear to. Every one of them was so interesting. So we've seen a whole-- quite a space of different issues here.

And just to get the discussion going, I would love to hear you guys interacting with each other across some of this great range-- with Liz saying very clearly that many of the core domains of knowledge, from objects to space to number to a concept of agents, are really built in innately, and Terry emphasizing the power of reinforcement learning to get us off the ground. And of course these are not mutually exclusive, but there's a very different ethos here.

So, I don't know. Maybe, Liz, you could jump in on some of this.

SPELKE: Sure. Actually I want to start out with a terminological issue, because I think there are two senses of nature and nurture running through the discussion. One, which was emphasized by both Mriganka and Terry, is the interaction of genes with their products. A different one is the interaction of a whole developed but inexperienced organism with a world outside itself that it perceives and acts in and learns about.

And one can agree that genes and their products interact inseparably in the development of anything, and still also believe that one can illuminate systems of knowledge by teasing things apart-- by asking questions like: What is available at the beginning of one's encounters with an external world? How flexible are our encounters with the world? What range of different paths can development subsequently take as we alter the external environment?

So I think we want to keep those issues separate. I mean, they're all important, but one can give different answers to those different questions. As for what Josh was saying-- you talk about the big question, how can we learn some things really quickly? I think that's one of the big questions. And the other big question is, why is human learning so flexible?

And we need a theory of learning that can both account for how it could be that children can learn so rapidly-- look at the explosive growth of vocabulary starting at the end of the first year of life or taxonomic knowledge of different kinds of objects and artifact functions and so forth. Kids are learning so rapidly, but also so flexibly, so that they can come to be competent actors in these widely different human cultures that we observe across space and also across historical time. How can you do both of those things?

And I think an older version of the nativist-empiricist controversy said we were doomed to impale ourselves on one or the other horn of this dilemma. Either we were going to think there's a lot of innate structure, in which case you can account for how learning could be rapid but not how it could be flexible, or we're going to think we have flexible learning systems, which accounts for flexibility but has a much harder time accounting-- as Josh was saying-- for how learning gets off the ground.

I think one thing that's interesting that's happening here is that we may not have to choose between having rich initial structures and having rich and smart learning mechanisms.

TENENBAUM: Yeah. I think that's a great way to put the challenge. Because there are actually several different bodies of mathematical theory which suggest that this should be impossible, right? There are various ways that people have tried to analyze, theoretically, the possibilities for learning and generalization, and they say the ability to generalize from very sparse data could only exist if you have strong initial constraints.

SPELKE: Mm-hm.

TENENBAUM: Right? And in some sense that has to be true, right? If you have a very flexible hypothesis space there's too many possibilities. And a small amount of data can't really rule many things out or really guide where you should go.

And what's actually really quite an open question is how to understand what we and others seem to be finding with things like, for example, hierarchical Bayesian models-- where you're not just learning at one level with a fixed hypothesis space but, as in the structural forms model, you have these higher levels: hypothesis spaces about hypothesis spaces, priors on priors. They seem to somehow get around that dilemma. They seem to be able to learn, surprisingly early, the right kind of inductive constraint. Somewhere at the higher levels of the model it's in there, in the hypothesis space of hypothesis spaces, and you seem to be able to trigger the inductive constraint, and to trigger it very early. This is something that Noah Goodman calls the blessing of abstraction.
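
A minimal sketch of priors on priors, using the stock bags-of-marbles example rather than anything from the structural forms work itself: the abstract hypothesis is whether bags tend to be uniform in color, and learning that from three bags lets a single marble from a new bag drive strong generalization:

    import math

    # Two abstract hypotheses; each fixes the menu of possible color
    # biases (theta = probability that a marble is black) within a bag.
    thetas = {"uniform bags": [0.02, 0.98], "mixed bags": [0.5]}
    prior = {"uniform bags": 0.5, "mixed bags": 0.5}

    def bag_likelihood(black, white, level):
        """P(bag contents | abstract hypothesis), averaging over theta."""
        return sum(t ** black * (1 - t) ** white
                   for t in thetas[level]) / len(thetas[level])

    # Observed bags: all-black, all-white, all-black.
    bags = [(5, 0), (0, 5), (5, 0)]
    post = {lvl: prior[lvl] * math.prod(bag_likelihood(b, w, lvl)
                                        for b, w in bags)
            for lvl in thetas}
    z = sum(post.values())
    post = {lvl: p / z for lvl, p in post.items()}
    print(post)  # 'uniform bags' dominates: the abstraction is learned

    def predict_next_black(level):
        """New bag, ONE black marble seen: predict the next marble."""
        weights = thetas[level]  # likelihood of one black marble is theta
        return sum(t * w for t, w in zip(thetas[level], weights)) / sum(weights)

    p = sum(post[lvl] * predict_next_black(lvl) for lvl in thetas)
    print(round(p, 3))  # ~0.96: strong generalization from one example

The point of the sketch is the ordering: the top level settles quickly, and that is exactly what makes one-shot generalization at the bottom level possible-- the blessing of abstraction in miniature.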

In some sense, the lesson that I take-- to turn things around-- from your work and from much of the developmental psychology of the last couple of decades is how early and how abstract children's knowledge is. And this is the most remarkable thing. It's not just that children know things. It's that they know the most abstract things.

They know about physical objects. Most of the people in this room may not know this, but we have a term of art in our field called a "Spelke object." Sorry to embarrass you. A Spelke object isn't a bottle or a cup or a computer or a phone. It's just a thing-- a thing that doesn't wink in and out of existence, and when it's out of sight it's not out of your mind, right? It moves smoothly in space and time.

And the reason why we call it a Spelke object is because Liz and her colleagues showed that infants have this, say, at three months at least-- that's the least controversial claim. Liz would probably say, in some important sense, from birth.

SPELKE: Controlled-reared animals.

TENENBAUM: Yeah.

SPELKE: [INAUDIBLE]

TENENBAUM: So not just humans. Well, right. But many species have that kind of notion of an abstract physical thing from very early on, before they understand about cups or bottles or computers or phones or anything to do with those. Similarly, there's this idea of an abstract concept of an intentional agent, right? An agent that seems to have goals and that moves around in rational or efficient ways to achieve its goals. Work by Csibra and Gergely and many others has suggested that.

So how could it be-- I mean, it's incredibly useful to have these abstract concepts of physical object and intentional agent. That's what you need to provide the constraints to learn about specific kinds of objects and specific agents and their goals.

But how could that abstract knowledge get there so early? That's what motivated a lot of the classic nature-nurture debates to say, well, if there's any way to get abstraction, it's got to be built up slowly, layer by layer. That's the classic empiricist view. And you can see that in some of the most successful current machine learning models-- for example, the deep networks that Geoff Hinton and his students built, which really do build up increasingly abstract features, layer by layer. And it seems like, in some sense, the cortex does that.

But yet, from the cognitive level and the computational level we're getting a different story, this more sort of top-down kind of view. And it's a great tension to try to figure out how to reconcile those.

SEJNOWSKI: Could I jump in? You know, I see this going back and forth over my head here.

TENENBAUM: Sorry.

[LAUGHTER]

SEJNOWSKI: So I think the answer is evolution. In other words, evolution is a form of very slow learning, in which basic facts about the causal structure of the world, the spatial structure of the world, numbers-- these are not going to change. And so it makes sense to have those already integrated into the prior structure of the networks as they develop.

Richard Feynman-- someone once asked him, how many empirical facts do you need before you come up with a theory? And he thought about it for a while and he said, it would be nice to have one.

[LAUGHTER]

Which is a really amazing statement-- it's saying that it's possible to come up with ideas de novo, in a sense, without having any facts, just from what could be. Now, in terms of Liz's comment: maybe there are different types of nature-nurture that we need to be thinking about, in the same way that there are different types of intelligence and different types of consciousness.

I actually see this as a hierarchy, in the sense that if you look at, for example, the genetic story, you see that playing out. Signals at the surface of the cell are regulating which genes are being expressed. But you also see it at the level of the circuit. You see, as Mriganka was showing, that the actual spatiotemporal patterns at the synapses are regulating the nature of the network. So the network is being regulated by activity. And you see that at the level of behavior. So, I mean, it's really nature-nurture all the way down.

SUR: I have a small comment. I think, in thinking about the development of cognitive systems, it's useful to ask, to what extent is a cognitive system like an organ of the mind? And to what extent is it like a computing machine?

So you don't have to teach a limb to grow. A limb needs food and water, a tree needs food and water, but it grows. So to that extent, that is the trivial example of an environmental influence. The environment provides certain basic nutrients, but doesn't teach anybody to do anything. That is built in. That is, I think, what Terry or, I'd say, even Chomsky might say is what evolution has provided the genetic programs to do.

The real issue, I think, in brain development-- or in how neural development leads to cognitive development, where neuroscience interacts with cognitive science-- is to ask, to what extent is information actually obtained from the world, or from other neural systems, during the act of growing? That would be instructive rather than just permissive.

And I think that the evidence is not as clean as Liz would have us think. Liz believes that neural development is just permissive-- that when a chick is born it can do certain things, and it doesn't need any instruction from the environment, therefore this is hardwired. I think the jury is out on this. This is a much more complicated question.

For a chick to avoid a pitfall you need many other systems-- vision, navigation, the vestibular system-- that in turn require not only instruction from the outside world but also the spontaneous activity and the drive within developing brains, which are learning, evolutionarily as well as instructively, to then produce an instinct.

So whether we are only instinct, or whether that instinct is shaped by environments, needs more precise experimentation.

SPELKE: This is exactly the reason why I think we need to distinguish between a nature-nurture question that focuses on what capacities do we have by virtue of who we are inside versus what capacities do we have by virtue of the external worlds that we've interacted with. Because I would not deny anything that you said about the development of the chick.

I would not argue that because a chick, on first exposure to a geometrically structured environment, is sensitive to the geometry and responds to it every bit as well as a chick who's been running around in it for a while-- I would not say that that's a hard-wired ability in your sense. I would say that it's an ability that chicks are going to have independent of the environments that they live in, and that's ready to go and allow them to get information from the environment on first encounter with such an environment. So that I think is the distinction.

But I do think your distinction between the growth of the hand and the computer programs is interesting because I would say that these core systems have elements of both.

TENENBAUM: Yeah.

SPELKE: On the one hand, they have their own intrinsic development. They are the result of an intrinsic developmental process that's highly robust over variations in the external world at the moment when they start to function, but on the other hand, I think there are, at the foundation-- and this is where I ran out of time in my talk, but-- I think they're at the foundation of the development of new systems of knowledge that are unique to humans and indisputably learned in a way that the hand is not. They're providing us with the information that we need in order to construct Euclidean geometry or natural number or classical mechanics.

TENENBAUM: Or stories, to take a theme that Patrick talked about.

SPELKE: Or stories, yeah. So in that sense I think they have interesting qualities of both the informational systems and bodily organs.

TENENBAUM: I mean, it's not an accident-- I take the same metaphor, the same set of questions, as background. The reason we titled that talk "How to Grow a Mind" is because that's the striking thing. What the brain is, is a computational organ, right? I mean, if there's anything we've learned from cognitive science and neuroscience that most of us can agree on and think is productive, it's that. So, how to understand that.

To go back to this point you've been making about the two different kinds of nature-nurture, there's a really interesting, if rather speculative, example from the literature that came up in this class that you and I taught a couple of years ago, right? Where could this innate concept of a Spelke object come from? Or, put differently, how could evolution have wired it in? What circuitry of the brain would be genetically specified that would represent objects, and how could something so abstract get from the gene to the circuit level?

Well, one idea that's been proposed by some is to take a phenomenon of what's going on prenatally in the retina-- these retinal waves that people like Carla Shatz and others have found, which are basically waves of spontaneous activity sweeping across the retina while the fetus is inside the womb, so it's not getting any external input to its eyes. These waves have, basically, the properties of those abstract physical objects. They don't wink in and out of existence. They're not like random white noise. They move coherently in space and time. They're like proto-objects.

So you could imagine mechanisms of neuroplasticity-- real nurture mechanisms at the level that Mriganka was talking about-- which, when combined with this other interesting fact that evolution somehow gave us, are able to learn a concept of an abstract physical object that has exactly the properties that the real world does. Right? It's almost like evolution gave us a little simulation of the real world that we can run without getting any direct sensory input. And we don't know that that's right, but ideas like that are very intriguing possibilities for how, again, we could go beyond the way that philosophers understood nature-nurture, to a more mechanistic and richer understanding.

KANWISHER: I want to bring this closer to neurons and genes because you guys have pointed out the detailed neural mechanisms for reinforcement learning and for how brains get wired up. And to me one of the kind of gaping mysteries when I try to confront these issues is, how do you get from a gene to a circuit? Right? And I guess part of the problem is, we don't even really understand how circuits implement behavior in the first place. So that's one challenge.

But just to kind of heighten this mystery, let's go back to the case of autism that Mriganka mentioned. Right? So in this context one of the central things about autism is, it's not an across-the-board cognitive impairment, right?

So it's this very specific set of preservation and loss, cognitively, where many people with autism have normal IQs and are very good at many things, and yet have these striking disruptions of basic abilities to understand aspects of other people. And also autism is one of the most heritable developmental disorders. And further, it's now becoming clear from research in just the last few years that the key causal steps that lead to autism happen in the first year of life. So how can a specific set of genes lead to those very specific cognitive outcomes? Do we have even the foggiest idea of how to paint that picture?

SUR: Well, we are beginning to put together a picture. So one of the things that you and cognitive scientists and cognitive neuroscientists have emphasized is that the brain is modular, that there are modules for specific functions of the brain and mind. Not only is there a face module but there are probably modules for theory of mind or there are modules for the different elements, say, for social cognition, which is one of the key targets of autism. And certainly there are modules for communication, such as for language.

Yet autism is highly heritable and has a strong genetic component. So the obvious implication is that there are specific genes that make specific proteins that somehow get expressed in certain key brain regions, which are then at risk when the gene is mutated or in some way is not making its protein.

Unfortunately, we don't know of a single autism gene yet that is only expressed in one brain region. So one of the things that I pointed out very briefly was that the most progress we have made so far has been in the most devastating forms of autism, in which a single gene is mutated and this single gene leads to an autism-like phenotype. But in every instance that has been looked at-- say, Fragile X, in which the gene FMR1 is mutated; Rett syndrome, in which the gene MECP2 is mutated; tuberous sclerosis, in which the gene TSC is mutated; and so forth-- these genes are expressed everywhere.

So how can it be that a gene that's expressed everywhere nonetheless leads to the triad of autism phenotypes? There are two possibilities. One possibility is that in the human brain these genes are selectively expressed-- it's just that in the mouse brain, which is our model of autism, they are non-specifically expressed. And that is a possibility. Or some combination thereof: this gene is expressed in some way, and there are other genes that together end up combinatorially being highly specific to specific brain regions. That is a possibility that I think is worthy of consideration.

But another possibility is that, in fact, autism has very widespread symptoms, but only the ones that rise above threshold, only the tip of the iceberg, end up being the triad of autism. If you look in the right way there are deficits in vision, there are deficits in audition, there are deficits in all kinds of perception, that are nonetheless not classified in the autism diagnostic categories. And I think this is also true.

If you look at the animals-- OK, you can look in the visual system and you can ask, is there a deficit? And there is, if you look in the right way. It's very subtle. There is a deficit in plasticity. There is a deficit in that the synapses are not maturing as rapidly as they should. That's what we see in the mouse model of Rett syndrome, yet humans with Rett syndrome do not have visual deficits, or at least none that have been found yet.

KANWISHER: Right.

SPELKE: Very interesting--

KANWISHER: Hang on one second, Liz, we'll take your comment in a second, but since we're already over time and we're supposed to take questions from the audience, let's have-- if there are a couple questions, raise your hand. There's one here. You can take the mic to this person, and while the mic is being delivered to this person halfway up the aisle, let's get to Terry.

SEJNOWSKI: So just to follow up, I think Mriganka has really outlined some of the possibilities. I really enjoyed Phil Sharp's presentation last night, and I remember him saying that he went to a Society for Neuroscience meeting. For those of you who haven't gone, it's 35,000 people, a hundred parallel sessions. And he went to a schizophrenia session, spent four hours, and didn't come away with a single experimental insight into how he was going to go about trying to understand it.

And it illustrates, first of all, the state of the field, but also the fact that we still don't know what the key insights are going to be. But one thing we do know is that if you look at the prefrontal cortex, although it has the same structure in terms of the layers, in fact there are key differences in terms of both the inputs and the functions, and there's a gradient.

For example, the dopamine projections that I was telling you about are very, very rich and dense in the prefrontal cortex, but there's a gradient. And by the time you get to the visual cortex it's very weak. And it's clear that the prefrontal cortex is important for planning and for organizing your thoughts, and maybe that's why the dopamine system is so dense there.

And similarly it's very interesting with schizophrenia-- although it's a thought disorder, there are specific perceptual deficits. I was at a schizophrenia meeting recently and there was a whole session on the visual system. It turns out schizophrenics have distorted visual processing. They can't group things together. The context isn't integrated with the object. And this causes them to really have difficulty in just organizing a scene.

And this is, of course, what people in computer vision have been trying to work out: how do you do that global organization? And it's clear that the circuits in the visual system are impaired in their own way, but in a different way from the prefrontal circuits.

KANWISHER: Great. OK, let's take the question down here. Did somebody bring you a microphone? Oh, sorry. Did the microphone-- oh. Go ahead. Sorry.

[LAUGHTER]

I can't see you very well.

AUDIENCE: Thank you. This question is for Mriganka Sur. You said that the brain wires itself and that computer builders need to understand this. Could you talk about this some more, and do you have any specific ideas for architecture?

SUR: Well, the brain is built on both specificity and plasticity. There are specific mechanisms that are at play throughout development. Particularly early on, we have to build the brain from instructions that are in the genome, hence genes play a very important role. And as development proceeds, mechanisms come into more and more prominence in which the richer environment within the brain, as well as information from the outside world, can play into mechanisms of plasticity-- which nonetheless involve molecules and genes.

So intelligence, to my mind, is how to make sense of the world. If there is one phrase in which I would like to capture all of the last two days of the symposium it is that intelligence, or how we define biological intelligence or human intelligence, is how do we make sense of the world as it exists and as we encounter it?

So therefore, it makes sense to me as a developmental neuroscientist that the mechanisms for making sense of the world should be derived from that world that we find ourselves in. And those mechanisms are either evolutionary or developmental. And so in the deepest sense, the mechanisms by which the world is reflected in and is integrated in our brains are of the essence for building intelligence.

KANWISHER: Is there a question over here?

AUDIENCE: There's one right here.

KANWISHER: OK, great. Go ahead.

AUDIENCE: Hi. I'm really interested in the application of these ideas in the healthy adult brain. I work with healthy adults to help them be more effective in their lives, and one of the things that I notice is there's a lot of stuff we've learned throughout our lifetimes, some of which helps us be more effective in our lives, and some of which kind of fights against that.

And I think it goes partly to the modularity of the brain that you were talking about before, there's all these different little centers in our brains which are sometimes working against each other. And so how do we apply these ideas to become more effective in using the parts of our brains that have learned things that are useful and how do we unlearn some of these things that have made us less useful?

SUR: Are you looking at me?

[LAUGHTER]

AUDIENCE: Anybody.

KANWISHER: Go ahead, Mriganka.

AUDIENCE: I'm interested in what any of you have to say about that.

SUR: Well, there are some aphorisms in our field. One of them is, use it or lose it. And so that means that practice makes perfect: using brain systems, brain circuits, synapses will strengthen the ones that matter and might weaken the ones that don't-- at least for the task. [INAUDIBLE]

SEJNOWSKI: So I have, actually, a specific suggestion. There's a whole new field in memory research called reconsolidation-- just Google it. And the way it works is that every time you remember something, you actually alter the memory, because you re-energize the same circuits that actually hold the memory, but now it's not the same memory anymore.

And so the idea is to actually subtly alter it. If you have a bad memory you bring it to consciousness, which may sound like a bad idea, but if you can subtly alter it then it won't be as bad in the future. And that might be a good place to start.

TENENBAUM: I'll jump in here also-- I guess that's the right verb. Think causally. That was the theme I was talking about. Whether we're talking about the simplest kinds of learning mechanisms, associative learning mechanisms, where we understand how they work in terms of synaptic plasticity, or whether we're talking about these richer, more abstract ideas that the mind is fitting causal models to the world, trying to elucidate the causal structure out there, I think the idea of learning as a kind of causal discovery has pervasive clinical and educational applications.

So you mentioned schizophrenia, and many times when I've talked about this view of the mind and learning, people have come up to me pointing to different aspects of schizophrenia. You talked about basic perception of structure in vision, but schizophrenics also differ on classic causal judgment tasks. There's the whole paranoid aspect to at least some schizophrenia, where it seems like schizophrenics are more likely to see certain causal structure that we would say isn't there.

Or take depression, for example, where it's often been said that in some sense being depressed might mean having a more realistic view of the world. But what cognitive behavioral therapy has shown is that, in many ways, one component of depression is focusing on the wrong parts of causality, or having the wrong causal theory about your life and the way your own mind works. And actually giving people, in some sense, a better causal theory is part of the best cures for depression.

And just to come back to education-- one of the most exciting developments in education builds on this idea of intuitive theories, but looks at the causal role of children's intuitive theories about how their own minds work. I'm thinking of Carol Dweck's work-- she was at Columbia and is now at Stanford-- which shows that you can look at kids' intuitive theory of how the mind works, which is often, unfortunately, more of a nature theory than a nurture theory.

We're talking about kids at the critical ages where, say, math starts to get hard and they think, well, you know, I guess I'm not a math person. I just wasn't born for it. And she's shown that if you give them a different causal theory that says the mind is kind of like a muscle, right? It's just like use it or lose it, right?

You can exercise and make yourself stronger, cognitively. Just teaching them that idea about the mind makes them learn more effectively, makes them get better at math and stay with math longer. I think that's an incredibly powerful idea.

SPELKE: Let me jump in also, with a word, actually, of caution. I mean, I think we do have some local success stories here. Carol Dweck's motivational work is one example. Other examples-- staying in education-- are places where people have been able to harness the systems that children already have to extend them into new territory. And you can see this benefiting education.

But I also think we have to be really careful, faced with immediate needs to help people, and also humble in realizing that some of the most exciting questions-- ones that I think we are going to be able to answer in our lifetimes-- are still wide open. And until we get those answers, I think we don't want to jump too far or too fast in saying, here's how I would cure schizophrenia, or here's what I would do about depression.

I think the field's at a very exciting point right now and I hope it will have more to say in answer to your question in the coming years.

AUDIENCE: [INAUDIBLE]

KANWISHER: Thank you. Well, since we are now way over time I think we better wrap up this session. So I wanted to thank the panel for a really fabulous discussion.

[APPLAUSE]