Brains, Minds, and Machines: Social Cognition and Collective Intelligence


THOMAS MALONE: I'm Thomas Malone. I'm a professor in the Sloan School of Management and Director of the MIT Center for Collective Intelligence. And it's my pleasure to welcome you to our panel on social cognition and collective intelligence.

Now, this morning, we saw various kinds of intelligence in vision, action, language, and so forth. Part of our goal in the panel this afternoon is to expand our view of intelligence. First, we'll turn to another important kind of intelligence-- what we might call social cognition or reasoning about what other people are thinking and feeling.

In just a moment, we'll hear from Rebecca Saxe about recent research in this area. And in fact, it's the logo from her research center that I'm using to illustrate here the topic of social cognition. Rebecca is the Fred and Carole Middleton Career Development Professor of Brain and Cognitive Sciences here. She also received her PhD in cognitive science from MIT. And she studies how people think about other people's states of mind and emotion and the specific brain regions that underlie this kind of thinking. Rebecca has received numerous awards, including several MIT teaching awards and the American Psychological Association Robert Fantz Award for Young Psychologists.

Now, this panel includes not only social cognition, but also collective intelligence. And that, in various forms, will be the main theme for the other panelists today. Now, to understand what we mean by collective intelligence, it's important to realize that intelligence is not just something that happens inside individual brains. It also arises in groups of individuals. In fact, I would define collective intelligence as groups of individuals acting collectively in ways that seem intelligent.

Now, by this definition, collective intelligence has existed for a very long time. Armies, companies, countries, families-- these are all examples of groups of people working together in ways that at least sometimes seem intelligent. So after Rebecca's talk, we'll hear several examples of these other kinds of collective intelligence from Andy Lo, Martin Nowak, and Sandy Pentland.

Andy is the Harris & Harris Group Professor of Finance at the MIT Sloan School of Management and the director of the MIT Laboratory for Financial Engineering. He received his PhD in economics from Harvard, and he studies, among other things, financial engineering, risk management, and the evolutionary and neurobiological models of individual risk preferences and financial markets. He's also the author of numerous articles and an author of the books The Econometrics of Financial Markets and A Non-Random Walk Down Wall Street.

Martin Nowak is Professor of Mathematics and Biology at Harvard and director of the Program for Evolutionary Dynamics there. He got his PhD from the University of Vienna with studies in biochemistry and mathematics. And his latest book, SuperCooperators, co-authored with Roger Highfield, shows the power of cooperation in the evolution of animal and human societies.

Sandy Pentland, the Toshiba Professor of Media Arts and Sciences at the MIT Media Lab, is also the director of the MIT Human Dynamics Laboratory and the director of the Media Lab Entrepreneurship Program. Sandy's PhD from MIT was in psychology, and he is a pioneer in, among other fields, computational social science, organizational engineering, and mobile information systems. His most recent book, Honest Signals, is about the human behaviors that evolved from our most ancient primate signaling mechanisms and which are major factors in human decision making in everything from job interviews to first dates.

Then at the end of this list, I'll return and say a little more about how to measure collective intelligence and how to increase collective intelligence in new ways made possible by the internet.

Rebecca.

[APPLAUSE]

REBECCA SAXE: Thanks, Tom. OK. So I'm going to start off on social cognition with the claim that human intelligence is intelligence about humans. So what I want to talk about is what we know, what we can do, and how we can reason when we're thinking about people. And in particular, I'm just going to give you a tiny hint of how good we as humans are at thinking about other people.

In particular, when we're thinking about other people, we can figure out a huge amount from just a tiny bit of information. So our thinking about people is fast and sensitive, and maybe even innate. Human babies, when they're first born, already prefer to look at things that look like human faces, already prefer to listen to things that sound like human voices.

And for example, if I just show you two scenes of flowers, you immediately know that in one of them a person was present. Here, similarly, two scenes of icicles-- in one of them, there's the work of a human being. These are both pieces of art by Andy Goldsworthy. So from almost nothing, from just a scene in nature, you already know there was a person there.

And in fact, from just tiny glimpses of patterns of movement, you can tell what characters know, what they want, what they're feeling, what they're trying to achieve. This is a clip from a famous film made by the psychologists Heider and Simmel in the '40s before Pixar. And just from the movements of these little shapes and characters, you already get a sense of who's good, who's bad, a sense of maybe foreboding at this point as the big triangle closes in.

So thinking about people is first fast-- based on almost no information in the stimulus, you get a lot of information in your mind. It's sensitive to tiny bits of information. It might be innate.

But what I'm even more interested in about how we think about people is the ways that it is rich and sophisticated and probably unique to human beings. So try to imagine how you would explain to a dog or a chimpanzee what this means-- I know you think you understand what you thought I said, but I'm not sure you realize that what you heard was not what I meant. That's apocryphally attributed to Alan Greenspan.

So all of us are capable of constructing incredibly rich and sophisticated representations of what other people are thinking, what they meant. And to give you a simpler example, but I think a very powerful one, of what we can do with this information, I'm going to give you an example of the kind of experiment we do in our lab. I'm going to tell you a story about Grace.

So I'm going to ask you at the end to use your hand to make a judgment. So get ready to make a judgment. And the question I'm going to ask you is about how much we should blame Grace. So here's what happens to Grace.

Grace is on a tour of a chemical factory with her friend. And on this tour, they stop for coffee, and Grace is making the coffee. Grace's friend asks for sugar in her coffee. And next to the coffee machine is a jar of white powder labeled dangerous toxic poison. So Grace thinks that that is a dangerous poison left behind by the scientists.

She takes a spoonful of it and puts it in her friend's coffee. Actually, that was just sugar left behind by the scientists. And so Grace's friend drinks the coffee and is fine.

So use your hand to say, from no blame to very much blame, how much should we blame Grace for putting the white powder in the coffee? A lot of blame, from most of you.

OK. Here's the opposite case. Now Grace is on the tour. Again she's making coffee. Again there's a jar of white powder next to the coffee machine. This time, the jar of white powder is labeled sugar, so Grace thinks that the powder is sugar.

Actually, the jar contains a dangerous toxic poison left behind by some scientists, and so drinking this powder would be lethal. Grace puts a spoonful of it into her friend's coffee, and her friend drinks the coffee and dies. How much should we blame Grace?

OK. Interesting. So you guys do what most people do, which is, when you're figuring out how much to blame Grace for these outcomes, if you noticed, you blamed her a lot more when Grace's friend drank a cup of coffee with sugar in it and nothing happened at all-- nothing morally significant even happened. And that's in the case that we call a failed attempt, when Grace appeared to be trying to harm her friend but didn't succeed, compared to the case of the accident, where Grace had no way of knowing that she would harm her friend, even though what happened there was much more morally serious-- namely, somebody died.

So these moral judgments give us a measure of how readily all of us use other people's mental states in making judgments that matter. And I'll come back to this example in a minute.

But the claim I want to tell you about is what I actually work on in my lab, which is not just that we think about people in a way that is fast, sensitive, and innate, but also rich and sophisticated-- and that we do so via a special group of brain regions that humans have. So in addition to the brain regions that are very similar in human beings and all other animals-- brain regions that we use for things like seeing and hearing and moving and feeling-- the human brain also has groups of brain regions specifically representing information about other people-- for recognizing people's faces and voices, seeing their actions, and understanding their emotions.

This is an illustration, but this is real data, showing you the brain region that, to me, is most surprising and intriguing-- a brain region we call the right temporoparietal junction. It's above and behind your right ear. And this is a brain region that's involved specifically in helping you figure out other people's thoughts, beliefs, desires, intentions, hopes, and suspicions. I should say that these are data from a subject population we call normal human adults. They're MIT undergraduates.

So to give you a sense of what these experiments are like, I'm just going to give you two examples. Obviously, 10 minutes is not long enough to get a real sense of these experiments. But here are two quick examples of how we know that this brain region really is involved in helping you think about other people's thoughts.

So in one experiment, we have people reading a whole bunch of different stories while they're in the scanner, and then we ask other people to judge for us the properties of these stories. So I'm going to give you some examples of these stories, and I'm going to ask you to make one kind of judgment. Again with your hand, I'm going to ask you to indicate how much are you thinking about the person's beliefs, desires, thoughts, suspicions, and hopes, from not very much to very much. All of these are stories about things that happen to people.

So here's an example of such a story. Get ready to use your hand. So Larry was going to his first day of a new job. The job starts very early, so Larry was extremely tired. Larry made himself some fresh juice and took a big drink. The juice was cool, and Larry felt the tang of it on the soft tissue inside his mouth.

How much are you thinking about people's mental states? A little.

Here's another one. Oscar was doing the dishes after dinner. He was talking with his friends while his hands were in the soapy water. Then Oscar's hand hit a sharp knife. The knife cut deep into the skin between his fingers, and the cut burned in the dirty water.

How much thoughts, beliefs, desires? More? OK.

Here's another one. Maggie is in love with a man, and she wishes that he would reciprocate her feelings. One day, this man sends a text to Maggie that he meant to send someone else, saying he does not want to see her again. Maggie gets the text and starts crying.

OK. So then we can ask, across these different stories, how much brain activity was going on in this brain region in your brains while you were reading these stories, and compare it to the answers to a bunch of questions. So we can compare it, for example, to the question: how much did the character suffer in the story? And as you see, there's no relationship. So this is showing how much the character suffered on the x-axis and how much activity there was in that brain region on the y-axis. Or how much physical pain were they in-- again, no relationship. Or how vivid and movie-like was the story-- and again, there's no relationship.

But if we ask the question we asked you, how much did you consider the thoughts, beliefs, desires, suspicions, doubts of the characters, then the amount that people said they were thinking about that predicts the amount of activity in other people's brains while they're reading those stories. So that's one way that we study the function of this brain region.
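To make the shape of that across-stories analysis concrete, here is a minimal sketch of the computation. The numbers are invented purely for illustration; they are not data from the lab.

```python
import numpy as np

# A minimal sketch of the across-stories analysis described above: correlate
# each story's mean rating on a dimension with mean activity in the region of
# interest. The numbers here are invented purely to show the computation.
mental_state_rating = np.array([1.2, 4.5, 2.0, 4.8, 3.1])  # per-story ratings
rtpj_activity = np.array([0.1, 0.9, 0.3, 1.0, 0.5])        # per-story signal

r = np.corrcoef(mental_state_rating, rtpj_activity)[0, 1]
print(f"r = {r:.2f}")
# A dimension like suffering or vividness would show r near zero;
# the mental-state dimension is the one that predicts activity.
```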

But perhaps the more convincing way to study the function of a brain region is to ask, well, what would happen if we interfered with the function of that brain region? And fortunately, we can do that with a tool called transcranial magnetic stimulation. This is a tool that lets us pass a brief, targeted magnetic pulse through somebody's skull and into a specific brain region.

And to give you a sense of what this looks like, I'm going to first show you what happens to a quarter when I put it on the coil, and then what happens when I put it on my head. So here's a quarter going on the magnetic coil. And you can hear the clicking. That's the machine going off.

So now we're going to put it on the part of my brain that controls my right hand. So we're targeting it to that specific part of my brain. And when we do that, a little magnetic pulse, specifically in the part of the brain that controls my right hand, makes a small but noticeable contraction of my right hand. So now we can do the same thing, but focus that magnetic pulse on the part of your brain that lets you think about somebody else's thoughts.

And so now we can compare moral judgments of Grace in these two scenarios. When we have this magnetic pulse going to another part of the brain, moral judgments look normal-- you get much more blame for the attempted harm that failed than for the accident. Once we apply the TMS to the right TPJ, though, we change those moral judgments. We make people say that failed attempts were less bad and accidents were more bad, as if they were paying less attention to the other person's beliefs.

So that's just a little taste of the research that we do. Again, the reason this is interesting is that in trying to understand human intelligence, what we're trying to understand, in part, is intelligence about humans. I want you to take away from this talk that our intelligence about humans is fast, sensitive, and innate, but also rich, sophisticated, and unique, and depends in particular on a group of brain regions that let you specifically think about other people's thoughts. Thank you.

[APPLAUSE]

THOMAS MALONE: OK. Our next speaker is Andy Lo from the Sloan School.

[APPLAUSE]

ANDY LO: So I'd like to start by thanking Irene Heim, Tommy, Paul, Joe, and Josh Tenenbaum for organizing this wonderful symposium and for inviting me to participate in this session.

Now, I believe I have the dubious distinction of being the only economist in the symposium, so I should probably say a few words about what I'm doing here. I was once a perfectly well-adjusted financial economist-- not particularly interested in interdisciplinary research and secure and content in the knowledge that I understood exactly how the world worked, particularly financial markets. And I was steeped in the theory of efficient markets, expected utility optimization, rational expectations, and the random walk hypothesis, which incidentally, Norbert Wiener, who was mentioned yesterday, had more than a little to do with.

But this happy period of perfect certainty in my career didn't last too long. Because as soon as I confronted these theories with the data, I learned what cognitive dissonance meant. In 1988, my co-author Craig MacKinlay and I rejected the random walk hypothesis using historical stock market data. And over an 11-year period during our careers, we tried to explain away these rejections within the standard paradigm of economic theory. And ultimately, we couldn't.

So we wrote a book. The book that Tom Malone mentioned was published in 1999. It is called A Non-Random Walk Down Wall Street. And I would not recommend any of you read this. My publisher, when we wrote this, warned us that for every equation you put in a textbook, you reduce the readership by half. So I think that that's just-- we have 1/10 of one reader for this book.

But it does show that we really tried our best to reconcile the inconsistencies between the data and the theory within the traditional economic paradigm, and really couldn't do that. Now, during that time, I started looking for other explanations for these anomalies and came across the psychology and cognitive neurosciences literature. And as many of you know, in that literature, a number of behavioral biases in human behavior were documented-- things like probability matching, loss aversion, overconfidence, overreaction, and so on.

And by the time I read through the literature, I was convinced that the human animal was the stupidest creature on earth, which can't be right either. Because we know that financial markets are highly competitive, highly adaptive, and it's not trivial to make a living in that industry.

So the challenge that we were faced with, and one that we've been focusing on, is how to reconcile the two. And this reconciliation is not just of academic interest, because the answer really lies at the core of the current financial crisis, regulatory reform, the legal system, and the political and social structures that we set up for our society.

So the challenge that I'm referring to is really whether or not we can come up with a satisfactory and complete theory of human behavior. That's the challenge, I think, we are focusing on. It's one that was touched upon by the panel last night.

Now, I have to say that this is by no means a new challenge. No less an intellect than Norbert Wiener, in 1948, published a book that was really, I think, the first attempt at trying to do this. In '48, he published this book called Cybernetics-- a term that I believe he coined, or that his research certainly motivated-- which was the first attempt to come up with a mathematical theory of animal behavior, in particular, human behavior.

But a number of others have tried since then. And in particular, economists actually made some progress in the 1960s and '70s-- an economist by the name of Herbert Simon, who ultimately won the Nobel Prize for this work, work that has been basically ignored, even to this day, by most economists. Simon pointed out that, in his view, humans did not optimize the way that economists would suggest. But instead, they engaged in a behavior that he called satisficing.

Now, this is a word that he made up to indicate a kind of behavior where you don't optimize, but you come up with heuristics that are good enough, that are satisfactory-- hence the term satisficing.

And so in doing so, he proposed this mental-models view of behavior that ultimately became, actually, quite important to the artificial intelligence community, both through Simon's work and that of his student Allen Newell. But it had virtually no impact on economics, because economics had the rational expectations revolution, where the mathematical power of optimization was really brought to bear on economic decisions.

Now, the question that economists asked of Simon's work that ultimately caused them to reject his theories was, if you aren't going to optimize, but you're simply going to satisfice, how do you know what is good enough? How do you know what's satisfactory?

And Simon and his proponents couldn't really address that challenge very successfully, which is one of the reasons why his work didn't have as much impact in economics. But it turns out that my collaborators and I have been working on an approach that uses evolutionary theory to try to answer that challenge. And it turns out that when you use some basic principles of evolution applied to both heuristics and other kinds of simple behaviors, you can get some extraordinarily powerful results.

And since I don't have too much more time, I thought I would just give you a very simple example of how that might occur-- a simple example of why it is that we don't optimize in practice, but we engage in certain satisfactory heuristics. So the illustration that I have in mind has to do with a challenge that all of us face every day, which is getting dressed in the morning. And in order for me to explain to you just how challenging that task is for me, I need to tell you about some of the parameters of that challenge-- in particular, my wardrobe.

So let me give you an inventory of my wardrobe. I've got five jackets, 10 pairs of pants, 20 ties, 10 shirts, 10 pairs of socks, four pairs of shoes, and five belts. Now, you might think that's a rather limited wardrobe. But if you do the combinatorics, you will see that that gives me two million unique outfits. Now, it's true that not all of these outfits are equally compelling from a fashion perspective, so I've got an optimization problem. I've got to pick just the right outfit for the occasion.

So if it takes me one second to evaluate the fashion content of an outfit, how long do you think it would take for me to get dressed each day? Well, the answer is 23.1 days. Now, I promise you I get dressed much more quickly than that. And the question is how? How do we do this? What an amazing feat of intelligence.
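The arithmetic is easy to verify. A minimal sketch, using the wardrobe counts just given:

```python
# The combinatorics of the wardrobe example, using the counts just given.
jackets, pants, ties, shirts, socks, shoes, belts = 5, 10, 20, 10, 10, 4, 5

outfits = jackets * pants * ties * shirts * socks * shoes * belts
print(f"{outfits:,} unique outfits")  # 2,000,000

# At one second per outfit, exhaustive evaluation takes:
days = outfits / (60 * 60 * 24)
print(f"{days:.1f} days")  # 23.1
```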

The answer, of course-- and the answer that Simon would have given-- is that we don't optimize. We come up with a satisfactory solution. But how? The economists would say, well, what do you mean by satisfactory? How do you know, when you come up with a solution, that if you spent a few more cycles optimizing, you wouldn't get to a better one? And how do you know, if you haven't done the optimization, whether the optimal solution is so much better than a satisfactory one that you should go ahead and do the optimization?

And the answer is you don't. The answer about how we find what a satisfactory solution is comes from evolution. It comes from the process of reinforcement learning. And I'll give you a personal anecdote that describes it.

When I was six years old and I was in first grade, at that time, a marketing genius figured out that if you put a Superman label on a jacket, you can get a lot of six-year-old kids to buy it. And so I wanted that jacket desperately. And being from a single parent household, we didn't have a lot of extra money to spend. So my mother initially said no. And so I nagged her for weeks on end until finally she relented and agreed to get me the jacket.

I still remember to this day the Friday evening we went to Alexander's in Queens, New York, and got the jacket. And I spent the entire weekend in that jacket. And come Monday morning, when it was time to go to school, I put on my jacket, got in front of the mirror, and struck all sorts of action poses to see how good I looked. And by the time I was done posing, I was late for school. I was late enough that I needed a note from my mother.

And I remember distinctly walking into the classroom half an hour late. Everybody was already seated. I had to walk to the front of the class to give the note to my teacher. And all the students were snickering that I was late. And I walked to the back of the room, and I was completely and totally traumatized by that event. And you know that I was traumatized, because 45 years later, I still remember that day.

But I'll tell you what-- from that point on, it never, ever took me more than five minutes to get dressed in the morning. My heuristic was indelibly altered through the process of natural selection, through trial and error. That's how we can find out what satisfactory means.

Now, since that time, a number of new models have been developed. And in particular, with my collaborator, Tom Brennan, we've actually used very simple binary choice models to formalize that intuition that I just described. And using this simple model, we're actually able to generate behaviors that economists find anomalous but that cognitive scientists have documented for decades now-- things like probability matching, randomization, risk aversion, Bayesian as well as non-Bayesian decision making, bounded rationality of the kind that Simon talked about, and finally, what we think of as intelligence.

Now, since I'm just about out of time, I won't go into the details of it, but I'll give you a hint as to what we're referring to. And if you go to the later session and you hear Jeff Hawkins talk about the memory prediction model, you'll see what I mean. With a binary choice model, if you take that simple binary choice and you string it together in a sequence of binary choices, you get a binomial tree. And that means that the end nodes of that binomial tree are binary bit strings that can actually encode enormously complicated types of outcomes from very simple sequences of binary trials.

Now, imagine that these binary trials are not individual trials, but neurons. And now think about the possibilities of 2 to the 100 billion neurons and the kinds of patterns that you can actually get through the process of evolution-- evolution not through species selection, but evolution through binary choice selection. This is evolution at the very speed of thought.
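As a toy illustration of that combinatorial point (my own sketch, not the Brennan-Lo model itself):

```python
from itertools import product

# A toy illustration of the binomial-tree point: n sequential binary choices
# yield 2**n end nodes, each labeled by a distinct bit string. The encoding
# here is illustrative only, not the Brennan-Lo model.
n = 4
end_nodes = [''.join(bits) for bits in product('01', repeat=n)]
print(len(end_nodes))  # 16 == 2**4
print(end_nodes[:4])   # ['0000', '0001', '0010', '0011']

# With 100 billion neurons in place of trials, the number of encodable
# patterns is 2**(10**11) -- the point of the argument, not something
# to enumerate.
```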

So in conclusion, I think that by using these simple models but in a rather more complex and selective manner, we can actually achieve what Marvin Minsky set out to achieve decades ago. Minsky said that he wanted to build a machine not that he could be proud of, but that ultimately could be proud of him. And I think that we'll get there if we take on this challenge. Thank you.

[APPLAUSE]

THOMAS MALONE: Our next speaker is Martin Nowak.

[APPLAUSE]

MARTIN NOWAK: Thank you very much. It's great to be here. As Tom Malone said, I'm a mathematical biologist, and I would like to tell you a little story about what that means. There's a man, and a shepherd, and a flock of sheep. And the man says, if I guess the correct number of sheep in your flock, can I have one? And the shepherd says, OK, try. So the man looks around and says 83. And the shepherd is amazed, because it's the right number.

So the man picks up his sheep and wants to walk away. The shepherd says, hang on. If I guess your profession, can I have my sheep back? You must be a mathematical biologist. How did you know? Because you picked up my dog.

So you realize that in my profession, it's important to get the numbers right. And today, I hope that I get the numbers right for the evolution of cooperation. So what is cooperation? In a very simple form, you can define cooperation as an interaction between two individuals. They can be [INAUDIBLE] or people. And one is a donor, and the other one is a recipient. And the donor can pay a cost, and the recipient has a benefit.

And normally, for cooperation to make any sense at all, benefit is greater than cost is greater than zero. And we can think of this as being modeled in terms of cultural evolution, a group of people who learn from each other, or in terms of genetic evolution where, say, cells or animals evolve certain strategies.

So this interaction defines a game, a very famous game in the context of game theory, called the prisoner's dilemma. And in the prisoner's dilemma, you have a choice between these two options-- cooperate or defect. So in a moment, I want you to play the prisoner's dilemma with me. Therefore, I've written down the payoff matrix here.

And in the payoff matrix, I've written down what you get and what I get. So when you cooperate with me, it costs you c, but I get the benefit b. And if I cooperate with you, it costs me c, and you get the benefit b. And you can think of the benefit as, say, $3 and the cost as $1. So you have to look at this matrix and make a decision. Who wants to cooperate with me? Raise your arm. Who wants to defect?

So I should say, this is not an optional game. We also have optional games-- there would be a third column here, which means do nothing. It's called the loner strategy. So you have to make a choice. Who wants to cooperate? Just one shot. Who wants to defect? So there are mixed opinions.

So this is how the rational analysis of the game works. You don't know what I will do. Let's assume I cooperate. If I cooperate, you get b minus c if you cooperate, but you get b if you defect. So if I cooperate, you get more for defecting.

If I defect, you get minus c if you cooperate and you get 0 if you defect. So you also get more for defecting. No matter what I might do in this game, you are always better off by defecting.

And if I analyze the game in the same way, I will also defect. And we both come to the conclusion to defect. And then we realize this is a pity, because we both have a 0 payoff now, and it would have been so much better had we both cooperated, because b minus c is positive, and we would have been better off. So this is the dilemma. Therefore, it's called the prisoner's dilemma. And why it's called the prisoner's dilemma, you can actually read in many books. I'm not going into this because I don't have so much time.
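The dominance argument can be checked mechanically. A minimal sketch, using the payoffs on the board (b = $3, c = $1):

```python
# A worked check of the dominance argument, using the payoffs from the talk
# (benefit b = $3, cost c = $1).
b, c = 3, 1

# payoff[(my_move, your_move)] -> my payoff; 'C' = cooperate, 'D' = defect
payoff = {
    ('C', 'C'): b - c,  #  2: we both cooperate
    ('C', 'D'): -c,     # -1: I cooperate, you defect
    ('D', 'C'): b,      #  3: I defect, you cooperate
    ('D', 'D'): 0,      #  0: we both defect
}

# Whatever the other player does, defecting pays strictly more...
for yours in ('C', 'D'):
    assert payoff[('D', yours)] > payoff[('C', yours)]

# ...yet mutual defection (0 each) is worse than mutual cooperation (b - c each).
assert payoff[('C', 'C')] > payoff[('D', 'D')]
```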

So what we realize from this simple example is that natural selection needs help to favor cooperators over defectors. You can imagine now populations of players that are either cooperators or defectors. And with the same analysis, you come to the conclusion that defectors always out-compete cooperators.

So I will give you five mechanisms for the evolution of cooperation, five mechanisms that help to solve the prisoner's dilemma. And I will briefly discuss them. The first one is direct reciprocity-- these are repeated encounters. Therefore, the question from the first row-- are we playing a one-shot game or a repeated game-- will make a huge difference. The second one is indirect reciprocity. And very briefly, I will mention spatial selection, group selection, and kin selection.

And why are we so interested in this? Because as evolutionary biologists, we observe that cooperation is needed for construction. In some sense, we need cooperation whenever evolution goes to a higher level of organization-- the formation of genomes, cells, multicellular organisms, animal societies, and human society.

So direct reciprocity is the idea that I help you and you help me. So we have repeated encounters, and maybe by helping you now, you will help me later. In this direct reciprocity situation, in the repeated prisoner's dilemma, you can no longer make the simple argument it's always best to defect.

So we have a repeated prisoner's dilemma, and the question is how do we play this game. And this question was asked by a political scientist, Robert Axelrod, in Ann Arbor, Michigan. And he said, let's have computer tournaments. And people sent him strategies, and they played against each other. And he actually held two such tournaments.

And the surprise was that the simplest of all strategies won. And this was a three-line computer program. It's called tit-for-tat. And tit-for-tat is I start this cooperation, and then if you cooperate, I will cooperate. And if you defect, I will defect.

So without noticing it, you are playing a mirror. You are playing yourself once removed. And the surprise was that the simple strategy won in two consecutive computer tournaments.

But there was an Achilles' heel to the world champion of the repeated prisoner's dilemma. And this is that if you have errors-- in the sense of a trembling hand or fuzzy minds, the economists' terms for these types of errors-- then errors can destroy the cooperation between two tit-for-tat players. Tit-for-tat is actually a weak strategy when you have to deal with errors.

So something that I did in my PhD thesis some time ago-- I said, let natural selection design a strategy, or let natural selection play out the prisoner's dilemma in the context of errors. So we started with a random ensemble of strategies, and out of this random ensemble of strategies, the first thing that comes out is always defect. But fortunately for my thesis, it wasn't over. What we saw next is that a small group of tit-for-tat players could invade that population and take over.

But amazingly, tit-for-tat was only around for a very brief period, because it was quickly replaced by another strategy, and this other strategy we called generous tit-for-tat. And generous tit-for-tat is: if you cooperate, I will cooperate, but if you defect, I will still cooperate with a certain probability. And this is a mathematical version of the evolution of forgiveness. This strategy replaces tit-for-tat very quickly, because it can deal with errors. And two generous tit-for-tat players have a much higher payoff with each other.
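A minimal simulation sketch of the effect of errors, following the strategy definitions just given. This is an illustration, not Nowak's original code; the error rate and the forgiveness probability q are arbitrary choices.

```python
import random

# Tit-for-tat (TFT) vs. generous tit-for-tat (GTFT) in a noisy repeated
# donation game. 'noise' is the trembling-hand error rate; 'q' is GTFT's
# forgiveness probability (a free parameter here).

def play(strategy_a, strategy_b, rounds=10_000, noise=0.05, b=3, c=1, q=0.3):
    score_a = score_b = 0
    last_a, last_b = 'C', 'C'  # both strategies open with cooperation
    for _ in range(rounds):
        move_a = strategy_a(last_b, q)
        move_b = strategy_b(last_a, q)
        # trembling hand: each intended move flips with probability 'noise'
        if random.random() < noise:
            move_a = 'D' if move_a == 'C' else 'C'
        if random.random() < noise:
            move_b = 'D' if move_b == 'C' else 'C'
        score_a += (b if move_b == 'C' else 0) - (c if move_a == 'C' else 0)
        score_b += (b if move_a == 'C' else 0) - (c if move_b == 'C' else 0)
        last_a, last_b = move_a, move_b
    return score_a / rounds, score_b / rounds

def tft(opp_last, q):
    return opp_last  # copy the opponent's last move

def gtft(opp_last, q):
    if opp_last == 'C':
        return 'C'
    return 'C' if random.random() < q else 'D'  # forgive with probability q

print('TFT  vs TFT :', play(tft, tft))    # errors drag two TFT players into vendettas
print('GTFT vs GTFT:', play(gtft, gtft))  # forgiveness restores cooperation
```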

So what happened next was kind of a surprise. If everybody here plays generous tit-for-tat and I play unconditional cooperation, I'm a neutral mutant. I have no advantage, no disadvantage. Random drift can make the neutral mutant spread in the population. The population drifts from generous tit-for-tat to always cooperate, and now you can guess what happens next. You invite the invasion of always defect.

And you have a mathematical model of human history with oscillations of war and peace. But what I have seen in all my models for the evolution of cooperation-- it's always about cycles. It's never really about equilibria. Cooperation is never forever-- it can stay around for some time. Then it gets destroyed, and then it has to be rebuilt again. And this is also what might mirror the ups and downs of financial systems to a certain extent.

One thing that I find very interesting in this analysis is that winning strategies have to be hopeful, generous, and forgiving. If a strategy doesn't have those properties, it's not a successful strategy in this game. Hopeful means if I meet a stranger, my first move will be cooperation. I hope I can establish cooperation with this person.

Generous means that I am willing to accept slightly less than 50% of the points. So the generous tit-for-tat never gets more from any other player than the other player gets. And forgiving is this property that I described that after defection, I [INAUDIBLE] cooperation.

Very quickly-- later, we developed another mechanism for the evolution of cooperation, indirect reciprocity. And here's a painting by Vincent van Gogh of "The Good Samaritan." And whatever the Samaritan's motive was in this noble action, he certainly did not think, this is the first round of a repeated prisoner's dilemma, and I should cooperate. So the idea of indirect reciprocity says, I help you and somebody will help me.

Indirect reciprocity works in the following way: the interactions between people are observed by others, and then gossip spreads the reputation of people in the population. So indirect reciprocity creates a selection pressure for a kind of social intelligence and for human language. Because we need to understand who does what to whom and why, and we need to have the ability to talk to each other about others.

As my colleague David Haig put it, for direct reciprocity, you need a face. But for indirect reciprocity, you need a name. And as much as we understand or don't understand about animal communication, we certainly don't think that animals can refer to each other on a first-name basis-- he did this to me yesterday. And this is the kind of gossip that gets the reputation systems of indirect reciprocity going-- also what we see, for example, on eBay in internet trading.

Just the last two slides are very quick. Spatial selection would be the idea that neighbors help each other. So then cooperators can form clusters, and these clusters can prevail in a world of defectors. Then group selection-- there's a beautiful quote from Charles Darwin. There can be no doubt that a tribe including many members who are always ready to give aid to each other and to sacrifice themselves for the common good would be victorious over other tribes, and this would be natural selection.

And the last mechanism-- kin selection-- here, the idea is the interaction occurs between genetic relatives. And J. B. S. Haldane said I will jump into the river to save two brothers or eight cousins. And the idea here is that the coefficient of relatedness between brothers is 1/2, between cousins is 1/8. J. B. S. Haldane was not only a founding father of population genetics, but also politically very active. And here, he's addressing a group of British workers, presumably not talking about kin selection theory.
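The arithmetic behind Haldane's quip is Hamilton's rule, written out here as a worked note:

```latex
% Hamilton's rule: an altruistic act is favored by kin selection when
% r b > c, with r the coefficient of relatedness, b the benefit to the
% recipients, and c the cost to the actor. Haldane's quip is the
% break-even case, normalizing the cost to one life (c = 1):
\[
  r\,b > c, \qquad
  \text{two brothers: } \tfrac{1}{2}\cdot 2 = 1 = c, \qquad
  \text{eight cousins: } \tfrac{1}{8}\cdot 8 = 1 = c .
\]
```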

And all of this is summarized in my new book, which came out a few weeks ago, called SuperCooperators. Thank you very much.

[APPLAUSE]

THOMAS MALONE: And our next speaker is Sandy Pentland.

[APPLAUSE]

SANDY PENTLAND: So I'll see if we can get this going. So some years ago, I asked the question-- what did humans do before we had significant language capabilities? We were a social species. We cooperated with each other. We survived. And eventually, we developed language.

And if you think about evolution, what normally happens when a new capability like language comes along is it subsumes earlier capabilities. The earlier capabilities don't go away. They sometimes get co-opted. Sometimes, they get built on top of.

And so it seems that, like other apes, we had a framework for cooperating that's based on signals-- not affect, not cognition, nothing in the way of higher-order intentionality, most likely. But it served to bind us together into groups and to survive. And we would expect that when language came along, it would take this structure and use it in some way.

And a particular set of signaling mechanisms that is very interesting is what are called honest signals. It's not that they're always true. They are information-carrying, but they are also a releasing stimulus for other animals, so that when one animal signals something, the other animal responds in a predetermined way.

So this is something that you can see in apes, and you can also see it in people. So let me give you some examples of things. So let's take our autonomic nervous system. When we have fight or flight, we get nervous energy. We become more active. You can see people when their autonomic nervous system becomes more activated. Think of a three-year-old bouncing around, or think of a dog that's excited. And that's an interesting thing, because that means we can read interest levels of another species that's not even terribly closely related to us.

The other thing about being excited like that is that it's contagious-- mood contagion, an extremely well-established phenomenon. In other words, the signal you give off by having your autonomic nervous system fired up incites the autonomic nervous systems of other people. That's an honest signal.

You see the same thing around attentional effects. So these are tight timing between action and effect between two people. It's the scaffolding on which language is learned. In fact, it's been used for more than 30 years as a diagnostic for language development problems. If the timing between the mother and child is not the tight, precise dance that you would expect, then something is wrong, and there will almost certainly be language development problems.

Another signal that has gotten a lot of attention recently is mimicry. When we sit close together and I nod my head, you will unconsciously begin nodding your head. But it also does some other things. It changes the way you feel about me. If I ask questions about trust, you'll rate trust higher. Jeremy Bailenson did a great experiment where he had little avatars that would mimic you and try and sell you things at the same time. And if they were mimicking you, they were more than 20% more effective than if they weren't mimicking you. So it causes changes in the way you respond to things.

And a final one is fluidity of production. When you practice something for a long time and get really good at it, you become very fluid in how that production happens. If you're very cognitively loaded, or you're not very expert at it, there's a high entropy in how you do it. And people respond to that as expertise.

So what can you do with these sorts of signals? Well, you can build computer systems that read them. We build little name badges like this. People wear them, and the badges measure their signaling behavior. You can do it with cameras. You can do it with microphones. And you can look at the signaling between people and predict things, like in a negotiation, who will come out ahead.

We've turned this into a commercial system now. We do commercial-scale screening for depression. We can listen to people talking to their health care coaches-- these are people with, for instance, congestive heart failure-- and as they're talking to the coach, you can tell whether or not they're depressed with an accuracy that's similar to the test-retest accuracy of some of the most widely used screening tests, but without asking any questions. It has to do with the pattern of interaction, these signals.

You can also listen to the same situation, and when the coach says, you should sign up for this program-- and the client always says yes, I will-- you can tell which clients will sign up and which ones won't with about 80% accuracy, and then you can focus on the ones that don't really mean it. So you can read people. It's a sort of social sense.

And a logical question, which I think is something to bring to this group, is what does this signaling behavior have to do with language? And the thing I would pose to you is that at some point, we had signaling behavior, but no language. And then for some reason, we developed language. For instance, Daniel Kahneman will opine that language developed to focus attention on features that are not currently present in the environment. And so a logical first thing that might have happened-- many people believe this; you can't prove it-- is that you would take the signaling behavior and add information to it: deictic signs, things like that.

And the evidence for that is that you see things like this in current-day apes. And so a question arises-- if that's what happened in us, do you still see traces of this in our language behavior? For instance, this sort of simple, quote unquote, language-- because it's certainly not generative the way we normally think of language-- is it something that is present today, and does it have a functional role in our day-to-day behavior?

Because if it turns out that it's very common, and this very simple type of communication predicts functional outcomes, then that changes the way you might think about language. It's not competency. It's how it's practiced.

So what is some evidence of this? Well, we did an experiment. This is both in Italian and English. And we looked at people in four-person groups who were negotiating things. It's a thing called a mission survival task.

And we had psychologists go in and annotate who was being the protagonist, who was being the supporter, who was being the attacker on 15-second by 15-second bases throughout all the different interactions. We also had them annotate who was providing information, who was orienting the group, and so on and so forth.

And what we found was that by listening to the signaling behavior-- not knowing anything about what was being discussed or the words-- you could identify the social roles and the informational roles of the players with about the same test-retest repeatability as the humans who were listening to the words. In other words, without listening to the words, you can tell if somebody's a protagonist. You can tell if someone's providing information.

How can that happen? Well, a good hypothesis is that we had signaling behavior, and as language evolved, we added it on in a parallel fashion. So signaling that was characteristic of being a protagonist in some situations before language now was retained when protagonist language was added to the mix. So there's a parallelism between those.

Interestingly, you can do it from audio, from video, or from both, and you get about the same results. The signaling seems to be multi-modal.

OK. So that's pretty interesting. Here's a re-analysis of some data from the experiment that Tom Malone ran, which I think he's going to talk about next. I collected some of the data with these little badges. And this is looking at 204 groups doing things like brainstorming and judgment and shopping.

And what we found is you could account for 60% of the variance in their performance without listening to the words. You just looked at the pattern of signaling. And the pattern of signaling is characteristic of something that you would call information clearing. Are people making short, little contributions, typically less than three seconds? Are those contributions answered by back channel-- "Really?", that sort of thing-- less than one second long? And is the distribution of contributions even? And those three principles allowed performance to be predicted with about 60% of the variance accounted for.
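As a sketch of what such "information clearing" features might look like computationally (a reconstruction from the description above, not the lab's published pipeline):

```python
import statistics

# Hypothetical features in the spirit of "information clearing", computed
# from a made-up turn log of (speaker_id, duration_seconds) tuples. The
# thresholds and feature definitions are reconstructed from the talk.

def clearing_features(turns):
    durations = [d for _, d in turns]
    speakers = [s for s, _ in turns]
    short = sum(d < 3.0 for d in durations) / len(turns)        # short contributions
    backchannel = sum(d < 1.0 for d in durations) / len(turns)  # "Really?"-style replies
    # evenness: low spread of per-speaker talk time means even turn taking
    totals = [sum(d for s, d in turns if s == sp) for sp in set(speakers)]
    evenness = 1.0 / (1.0 + statistics.pstdev(totals))
    return short, backchannel, evenness

log = [('A', 2.1), ('B', 0.7), ('C', 2.8), ('A', 0.5), ('B', 2.2), ('C', 0.9)]
print(clearing_features(log))
# These three features would then enter a regression against group performance.
```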

We've seen the same thing in larger arenas also. For instance, we take these badges, and we put them on people in corporations as they go about their normal life. We've done some 20 corporations now. Typically, we pick about 100 people in some division, and we watch them for a month. So for this-- this is a German bank. It has five divisions. The blue stuff is all the email. The red stuff is the face-to-face signaling picked up by the badges.

And what's interesting is that when you analyze this sort of data, this pattern of communications in a real operating firm, you find that you can account for between 40% to 60% of the variation in performance between high-performing groups and low-performing groups without listening to the words or knowing anything about the content. You just need to look at how fast and efficiently the information is cleared within the group.

Now, in modern day groups, it's not just verbal signaling. You also have to look at email and other media which have slightly different properties. But by looking at the patterns of communication, you can predict the outcomes.

And more recently, we've done a number of experiments looking at creative groups and predicting when they were having a creative day and when they were having an off day. And using state of the art metrics for creative output, we find that we can do 80% to 90% accuracy in identifying creative days from uncreative days without looking at the content of the task or looking at the words or the character of the people. It's just are they harvesting information from around their environment, and are they clearing it very efficiently within their group?

So the bottom line here is that we once were apes as a species. We efficiently coordinated ourselves using signaling. We still have that signaling. And I think to really understand how language is actually used in a functional sense-- not a semantic sense or syntactic sense, but in terms of outcomes-- you need to understand how it mixes with the signaling behavior. Thank you.

[APPLAUSE]

THOMAS MALONE: OK. So the presentations we've heard so far, you could think of, in a sense, as being about the mechanisms of social and collective intelligence. What I'd like to try to do now is to raise our attention to the level of the intelligence that emerges from those mechanisms. So I'd like to talk about how to measure collective intelligence and how to increase it.

Now, it turns out that psychologists have been measuring individual intelligence for decades. And in fact, one of the most widely replicated results in all of psychology is that people who are good at one mental task are, in general, good at essentially all other mental tasks. In other words, if they're better than average at one, then they're better than average at others.

Now, statistically speaking, this general intelligence factor is the first factor that emerges when you do a factor analysis of the performance of lots of people doing lots of different tasks. But as far as we know, no one had ever even asked the question of whether this same statistical phenomenon occurs with groups, not just with individuals.

So in a recent study, my colleagues and I used the same statistical techniques that are used in individual intelligence tests to create group intelligence tests. As Sandy mentioned briefly a few minutes ago, we assembled groups of two to five people and gave each group a number of different tasks to do.

What we found was that yes, there is a collective intelligence factor, just as there is an individual intelligence factor. In other words, just as with individuals, there's a single statistical factor that can predict the group's performance on a wide range of tasks.
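The statistical move here can be sketched in a few lines. This is an illustrative PCA-style extraction on synthetic data; the published study used conventional factor analysis.

```python
import numpy as np

# Extract the first factor from a groups-by-tasks score matrix.
# The data are synthetic, with a shared "c factor" injected by hand.
rng = np.random.default_rng(0)
n_groups, n_tasks = 40, 6
scores = rng.normal(size=(n_groups, n_tasks))    # stand-in task scores
scores += rng.normal(size=(n_groups, 1)) * 1.5   # inject a shared factor

z = (scores - scores.mean(0)) / scores.std(0)    # standardize each task
u, s, vt = np.linalg.svd(z, full_matrices=False)
first_factor = u[:, 0] * s[0]                    # each group's "c" score
explained = s[0]**2 / (s**2).sum()
print(f"first factor explains {explained:.0%} of variance")
```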

We also found that this group collective intelligence is not strongly correlated with the average individual intelligence of the people in the group, or even with the maximum individual intelligence of people in the group.

It turns out, however, that the group's collective intelligence is significantly correlated with three things. The first is the average social sensitivity of the people in the group. We measure this with a test called Reading the Mind in the Eyes, and it's essentially a measure of how well people are able to judge the emotions or feelings of other people from looking only at their eyes. So that was significantly correlated with the group's collective intelligence.

The second factor that was significantly correlated with the group's collective intelligence was the evenness of turn taking in the conversation. In other words, if one person dominated the conversation, the group was, on average, less intelligent than if people took more even turns in talking. Finally, and most surprisingly to us, we found that the group's collective intelligence was also significantly correlated with the proportion of females in the group-- more women, more intelligent.

Now, it turns out that this factor is statistically mediated, largely explained, by the first factor-- that is, it was already known before our work that women in general score higher on this measure of social sensitivity that we used. And so what it may be is that groups are more collectively intelligent if they have more people in them that are more socially sensitive, whether those people are male or female.

Now, soon, we hope to use tests like the ones we're developing here to test real world groups, natural groups, as opposed to the artificial groups we've studied so far. For instance, what if you could give a short test to a product development team measuring their collective intelligence, and then use that to predict how well they would do on developing a bunch of different products?

Well, what if you could do that for a sales team and predict how well they would sell in the coming year, say? Or what if you could give a test like this to a top management team and predict how well they would respond to a very wide range of other challenges?

Now, I think those things would be interesting. But what's even more interesting to me is the possibility that we could not only measure collective intelligence, but also increase it. With individual intelligence, of course, we can measure it, and we can predict with it. But there's not much we can do to change it.

With groups, it seems eminently possible to change and to increase the collective intelligence of a group-- for instance, by changing the motivational factors, or the norms of interaction, or, of special interest to me, by giving groups better electronic collaboration tools. In the last few years, we've seen some very new kinds of collective intelligence, with far larger groups than were ever possible before, enabled by the internet.

Consider Google, for instance, where millions of people create web pages, link those web pages to each other, and then the Google algorithms harvest all that knowledge so that when you type a question in the Google search bar, the answers you get often seem amazingly intelligent. Or think of Wikipedia, where millions of people all over the world have created a very large and very high quality intellectual product with almost no centralized control, and without even being paid.

I think these examples, like Google and Wikipedia, are not the end of the story, but just barely the beginning. And if we want to predict what's going to happen, especially if we want to take advantage of what's going to happen, I think we need to understand these possibilities for collective intelligence at a much deeper level than we do so far.

That's essentially our goal in the Center for Collective Intelligence. One of the key questions we ask ourselves is how can people and computers be connected so that collectively, they act more intelligently than any person, group, or computer has ever done before?

Now, to really answer this question well, we need to do more than just think about a big collection of cool things. We need to identify the different specific building blocks or design patterns that can be combined in different ways to produce different kinds of collective intelligence.

We call these different design patterns genes, and we've begun to map the genomes of collective intelligence embodied in some of today's most innovative organizations. For instance, we've identified a starting set of about 16 of these genes, the main categories of which are shown here.

In essence, each activity needs to have genes or design patterns that specify what is being done, who's doing it, why they're doing it, and how they do it. For example, the community of people that developed the Linux operating system includes what we call the crowd gene, because anyone who wants to can contribute a new module to the system. But it also includes the hierarchy gene, because only Linus Torvalds and a few of his friends get to decide whether a new module is actually included in the next release of the system.

Now, in addition to classifying examples of collective intelligence and studying them, we're also creating new ones. For instance, in one project on climate change, we're trying to harness the collective intelligence of thousands of people all over the world to develop proposals for what we humans can do about global climate change. These proposals include suggestions for the political and other changes that might be desirable, as well as detailed plans for reductions in carbon emissions in different regions of the world, and computer models that automatically predict the results of those emission reductions for things like temperature change, sea level rise, and so forth.

So far, we've had over 14,000 people from all over the world visit the site. Last year, through a combination of expert judging and community voting, we identified four winning proposals, and they were presented at the United Nations and in a US congressional staff briefing.

And now for my grandiose finale. In the long run, as our world becomes more and more closely connected with many kinds of electronic communication, it will become increasingly useful, I believe, to think of all the humans and computers on our planet as part of a single global brain. And perhaps our future as a species will depend on how well we're able to use our global collective intelligence to make choices that are not just smart, but also wise. Thank you very much.

[APPLAUSE]

THOMAS MALONE: OK. So now I think the first question is, do any of the panelists have questions for each other or reactions to things others have said? If nobody has any burning questions, I'll use one of the questions that was submitted by email for this panel.

The question says: cognitive scientists and AI researchers both use the word "intelligence" to describe a capacity of humans and computers, respectively. Even though intelligence, as it is featured in individual humans, may be something very different from intelligence in computers, both fields have affected and helped each other, perhaps by borrowing metaphors and so forth.

The question is, in what ways can the study of collective intelligence be informed by research in cognitive science and artificial intelligence? What can we learn from what's already known about how computer intelligence works or human brains work that might affect collective intelligence? Sandy?

SANDY PENTLAND: I'll mirror a discussion this morning that was between, I think, [INAUDIBLE] and some of the other people about engineering versus biological approaches. And recently, I've been giving talks where I start with a diagram from Kahneman's Nobel Prize lecture where he points out that people have two ways of thinking. It's a little bit of a caricature. But one is perhaps fundamentally linguistic, and it's rule-based, and the other is this much more ancient way of thinking that you could call associational.

And the thing that's interesting to me is that you look at books like Tetlock's, where he looks at the world's experts in their domains and at how good they are at making judgments about what's going to happen, and finds out that they're precious little better than chance. So that's a very sobering thing, if the best of us at our best can only perform at about chance. Whereas there's a line of research in psychology labs and other places showing that, properly conditioned and properly bounded, this old associational mechanism is actually a much better way to go. It's often 20% better than people who actually think about it.

So the way that comes down is perhaps Google isn't so far off the way intelligence really works. Perhaps it really is just massive amounts of data plus simple mechanisms and the right boundary conditions.

THOMAS MALONE: So what do you mean by the old associational mechanisms? Say a little more about that.

SANDY PENTLAND: Well, I will appeal to both Simon and Kahneman, who categorized our ways of thinking-- rather black and white-- and one of the categories is the types of learning that you would expect to see in animals, which are, in fact, the types of learning that we do when we're not conscious of things-- picking up habits and things like that. Associational is a broad brush for those sorts of things.

THOMAS MALONE: So you're saying that Google is something like autonomic learning of the global brain or something like that?

SANDY PENTLAND: Well, that's pushing it a little far, but OK.

THOMAS MALONE: My job as chairman is to get you provoked a little bit.

SANDY PENTLAND: But there are Google-like things that people have out there and are proposing-- these massively parallel associational mechanisms that differ in their details. But perhaps that really is a great way to build intelligence, and perhaps that's also a secret of biological intelligence that we haven't focused on as much.
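
One way to make "massive amounts of data plus simple mechanisms" concrete is a nearest-neighbor predictor: it stores examples and answers queries by similarity, with no rules at all, and it improves as the memory grows. A toy sketch in Python (the data and the hidden pattern are fabricated for illustration):

    # Toy "associational" predictor: memorize examples, answer queries
    # by majority vote of the k most similar memories. No rules, no
    # explicit reasoning -- just data plus a simple mechanism.
    import numpy as np

    rng = np.random.default_rng(0)

    def make_data(n):
        x = rng.uniform(-1, 1, size=(n, 2))
        y = (x[:, 0] * x[:, 1] > 0).astype(int)  # hidden pattern
        return x, y

    def knn_predict(memory_x, memory_y, query, k=5):
        dists = np.linalg.norm(memory_x - query, axis=1)
        nearest = np.argsort(dists)[:k]
        return int(memory_y[nearest].mean() > 0.5)  # majority vote

    test_x, test_y = make_data(500)
    for n in (10, 100, 10_000):            # more data, same mechanism
        mx, my = make_data(n)
        acc = np.mean([knn_predict(mx, my, q) == t
                       for q, t in zip(test_x, test_y)])
        print(f"memory size {n:>6}: accuracy {acc:.2f}")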

REBECCA SACHS: I'm curious, actually, to ask Andy and you, Tom, about the analogies between individual intelligence and what we know from cognitive science, and group intelligence and what we know from financial markets. And I wonder whether you think that the failures of individual intelligence and the failures of group intelligence, as exemplified by the failures you've studied in the financial markets-- whether those are analogous kinds of failures or different kinds?

THOMAS MALONE: You want to go, Andy?

ANDY LO: Well, I think that's a great point. I would go even further, though. Not only do I think it's analogous, I actually think it's one and the same. In other words, I don't see a difference between evolution as it operates at the cellular level and as it operates at the societal level. Obviously, EO Wilson's work on sociobiology was among the first to apply these principles to behavior.

But when you study the evolution of very, very minute changes in multicellular organisms, you find that the kinds of changes you observe at that level are actually identical to the kinds of changes we observe at social levels. Ultimately, what ties it all together is the environment. So politicians have a rather impolite phrase which goes, it's the economy, stupid-- the idea being that economic issues drive political decisions.

I actually think that biologists should come up with a version which says it's the environment, stupid. In other words, the environment is what dictates the path of evolutionary dynamics, and that can occur at any level, including the collective level.

THOMAS MALONE: So I think I'd agree that in some sense, both kinds of intelligence are similar in that they're both influenced by the environment and by their reactions and responses to it. But I think it's also useful to think about ways in which intelligence, or errors in intelligence, are different at the individual and the group level.

For instance, James Surowiecki has popularized these notions of wisdom of the crowds where, in many tasks-- like, say, guessing how many beans are in a jelly bean jar or how much a cow weighs-- you can have a number of different errors by individuals that, in some sense, cancel each other out at the group level. If you average all those people's guesses, the average is often remarkably accurate, often more accurate than all but a few of the individuals.

But there are different kinds of errors that can arise at the group level. For instance, if the guesses of the individuals are not independent of each other-- for instance, if people call out what they think it is, and you hear an influential person say one thing-- that influences a lot of other people, and then the system doesn't work. The average may be very wrong in that case.
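
A minimal simulation of both halves of this point-- independent errors averaging out, and an influential voice breaking the effect. All the parameters here are arbitrary illustrations:

    # Wisdom-of-crowds sketch: independent guesses average out to a
    # good estimate; correlated guesses (everyone pulled toward one
    # loud voice) do not. All numbers are illustrative.
    import numpy as np

    rng = np.random.default_rng(42)
    true_count = 1000                      # beans in the jar
    n_guessers = 200

    # Independent case: each person errs on their own.
    independent = true_count + rng.normal(0, 300, n_guessers)

    # Influenced case: everyone shades toward one influential guess.
    anchor = 2000                          # the loud early guess
    w = 0.6                                # how strongly people follow it
    influenced = (1 - w) * independent + w * anchor

    print("truth:           ", true_count)
    print("independent mean:", round(independent.mean()))  # close to 1000
    print("influenced mean: ", round(influenced.mean()))   # badly biased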

ANDY LO: So let me follow up with that. That's a great example. If you replace the independence with a situation where there's great dependence-- for example, we all start thinking exactly the same way-- you don't get the wisdom of crowds. You get the madness of mobs.

And I think that explains, to some degree, what happened over the last few years in the financial crisis. We all started thinking alike, and that independence started breaking down.

It's not always a bad thing. Because if we see a fire happening on the stage, I suspect that all of us are going to be thinking the same thing, which is get out that way. And that's a good thing. But I think that the danger is that when you lose that independence, you are subject to some very, very idiosyncratic shocks in the environment.

REBECCA SACHS: Do you think that there's a madness of mobs inside individual heads?

THOMAS MALONE: That's what I was just about to ask you.

ANDY LO: Absolutely. And I think that certain psychopathologies can ultimately be explained this way. Along the lines of Marvin Minsky's notion of the society of mind, you can have a society of minds but a madness of mobs within those societies.

THOMAS MALONE: You're the neuroscientist here. What do you think?

REBECCA SACHS: This doesn't seem like a neuroscience question. You mean are neurons going mad?

THOMAS MALONE: Can you think of neurophysiological phenomena that are like the madness of mobs?

REBECCA SACHS: Well, what were you thinking of when you said psychopathology?

ANDY LO: Well, in certain kinds of decision making contexts, typically, we try to balance a variety of different heterogeneous kinds of signals. If that balance is disturbed-- if, for some reason, we have, for example, no pain inputs-- imagine a situation where your arm is numb and you can't feel pain. I had a friend who experienced that. After a week of this temporary numbness, his arm looked like a cat had used it for a scratching post.

And it's because if you don't have any negative reinforcement, if you don't have any negative feedback, you don't know to pull your arm away from sharp objects. So the madness of mobs within an individual can be to have an imbalance of inputs. And that kind of imbalance can lead to what we think of as pathologies.

THOMAS MALONE: Where there's an interdependence between different regions or different parts of the brain-- can you think of any things like that, Rebecca?

REBECCA SACHS: Let me think about it. Not off the top of my head.

THOMAS MALONE: OCD.

SANDY PENTLAND: OCD, epilepsy, yeah.

THOMAS MALONE: OK.

SANDY PENTLAND: But the typical madness of mobs has to do with poor feature selection. People tell you something, you focus on certain features, and as a consequence, you ignore other salient features. And if you had paid attention to those other ones, you would have made good decisions. That's perhaps the most common way people make seriously bad mistakes.

THOMAS MALONE: OK. Did anybody else on the panel have other questions?

REBECCA SACHS: I have one other one, which is Martin, I thought your summary of tit-for-tat as being effective when it's hopeful, generous, and forgiving was hopeful. But I wondered, given what Andy said-- that it's the environment, stupid-- do you have thoughts on what environments lead people in competitive contexts to become hopeful, generous, and forgiving?

MARTIN NOWAK: Yeah. So I would actually like to answer this question, but I would also like to make a remark about the earlier discussion. In the context of evolution, Ernst Mayr, a Harvard professor who died a few years ago, wrote a beautiful book, What Evolution Is. And if you read this book, then the first thing you will learn is that nobody really understood evolution until Ernst Mayr himself sorted it out. But he will also tell you, if you ask the question what is it that evolves: loosely speaking, we talk about the evolution of genes or the evolution of the brain. We even talk about the evolution of species. But the only thing that actually evolves is populations-- populations of reproducing individuals. And that's what I find very interesting.

So if you want to have an evolutionary metaphor, it always has to be in the context of a population-- of a population of reproducing individuals. But reproduction can be genetic or cultural. So it can be like a learning type of reproduction.

And now, specifically to answer your question-- what is the environment that would lead to cooperation? I would say this: there is always competition, and sometimes there is cooperation. And cooperation can win, can be selected, if the environment instantiates one of the mechanisms that I discussed. So the environment must make it possible that there are repeated interactions, that reputation matters.

And now we are actually researching how these mechanisms can be combined synergistically. Because what we can show is that repetition alone only gets you a certain level of cooperation. And you get much, much more out of the system if you have repetition and some reputation or repetition and some space. That, I think, is the role of the environment.
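
As a sketch of the "generous and forgiving" point in a repeated-interaction environment, here is a noisy iterated Prisoner's Dilemma in Python. Strict tit-for-tat gets dragged into retaliation spirals after an accidental defection; a generous variant that sometimes forgives recovers. The payoff values are the standard ones; the noise and generosity rates are assumptions:

    # Noisy iterated Prisoner's Dilemma: generous tit-for-tat (GTFT)
    # forgives a defection with some probability, which lets two GTFT
    # players recover from accidental defections; strict TFT does not.
    import random

    random.seed(1)
    PAYOFF = {("C", "C"): 3, ("C", "D"): 0,   # R, S
              ("D", "C"): 5, ("D", "D"): 1}   # T, P

    def tft(opp_last, generosity=0.0):
        if opp_last == "D" and random.random() > generosity:
            return "D"
        return "C"

    def play(gen_a, gen_b, rounds=10_000, noise=0.05):
        a_last, b_last, total_a = "C", "C", 0
        for _ in range(rounds):
            a = tft(b_last, gen_a)
            b = tft(a_last, gen_b)
            # Noise: intended moves occasionally flip to defection.
            if random.random() < noise: a = "D"
            if random.random() < noise: b = "D"
            total_a += PAYOFF[(a, b)]
            a_last, b_last = a, b
        return total_a / rounds   # average payoff per round

    print("strict TFT vs strict TFT:   ", play(0.0, 0.0))  # feud-prone
    print("generous TFT vs generous TFT:", play(0.3, 0.3)) # mostly mutual C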

THOMAS MALONE: OK. Any other questions on the panel?

ANDY LO: I just wanted to add to that environmental aspect. And I think we can see this even more starkly when we take a look at behaviors that we consider to be socially acceptable or unacceptable. So for example, if you think about the situation in Afghanistan where a troop of soldiers is going down the road and somebody tosses a grenade in front of them, it's not that unusual for one of the soldiers to jump on top of the grenade, sacrificing his life for his compatriots.

And yet if that same situation played itself out in Grand Central Station-- I don't know about you, but I grew up in New York, and nobody is going to throw their body on a grenade in New York. What is it that leads us to do this kind of altruistic behavior in one setting but not in another? I would argue that it's the environment-- the fact that in Afghanistan, the threat is a broad threat to the entire troop of soldiers. And so that kind of sacrifice is absolutely necessary for the survival of that troop. Whereas in Grand Central Station, the threat is not nearly so broad-based. So I think that's an example of how the environment actually induces behaviors in ways that we might not expect.

THOMAS MALONE: OK. If no more questions or comments from the panel, what about from the audience? Questions from the audience? OK. I think we have a microphone going around. Is that right? OK. We have a question right here.

AUDIENCE: Here are two tools that deserve some research-- Twitter and Facebook. The news has made it very clear that they affect collective behavior. Do you have any comments about whether that's intelligent behavior?

THOMAS MALONE: So I'll respond to that, and perhaps others will, too. We've thought a lot about Twitter and Facebook and things like that. And I think the question is not one of fact. It's one of perspective.

So if you view the goal of Facebook, for instance, as being to create a massive directory of a very significant fraction of the humans on the planet and their relationships with each other, along with a lot of demographic and other information about them-- if you view that as the goal, then it's a very intelligent system. It's achieved that goal extremely well.

If you view the goal of Twitter as being to predict stock market movements, there are at least a few studies that have found that certain ways of analyzing Twitter traffic actually do anticipate, and therefore predict, stock market movements. But the answer isn't a simple yes or no, intelligent or not. It's just that from certain points of view, you can view their behavior as being intelligent.

SANDY PENTLAND: So another interesting thing-- Twitter was actually copied from a piece of software that was developed here at MIT called Text Mob. And Text Mob was built to support political protests at conventions and World Economic Forum meetings, et cetera, by allowing protesters to create flash mobs and to be much more aware of what was happening in different parts of a city.

And I believe that the founders of Twitter thought that was just generally a good idea-- that you have increased awareness about all the different types of things that you want to pay attention to. And that's the basic theory. And I think that's actually a very key element of being intelligent. You have to be aware of your environment.

And of course, the question is, are they aware of the right features in the environment, or are they aware of stupid features in the environment? And that's what all the layers of filters that people put on it are about: paying attention to the right things at the right time.

THOMAS MALONE: Another question next door.

AUDIENCE: This question is aimed at Dr. Pentland. As I understood your presentation, you were able to observe certain signaling among people involved in conversations with some objective, and you were able to predict with 60% or 70% accuracy the outcomes just based on the signaling and not on the substance of the discussions or conversations or debates.

I was wondering whether, now that you are conscious of this signaling activity, you could manipulate the outcome if you were a participant, by conscious signaling.

SANDY PENTLAND: So the accuracy numbers are 80% to 90%. If you square those, you get the variance accounted for, which is 40%, 50%, 60%. So we've done experiments in changing people. And roughly, you can't pay attention to your signaling and talk at the same time-- that's the short version.

But there is a way to change things, and that is that people have these social models or social roles that they have learned by observing other people or by doing it themselves. And if you pick a social role-- like you're going to be the hard ass boss, you're going to be the compliant employee-- what happens is the signaling that you exhibit goes along with the role that you pick.

And what we can do, of course, now is we can actually look at the interaction and identify the social role that you should adopt that will produce the most dollars at the end of the day. Or we can look at an organization and talk about the pattern of communication in the organization that will produce the highest productivity. And then what we do is we re-engineer organizations so that the flow of communication follows a higher-productivity pattern rather than a lower-productivity pattern. There are a lot of amusing stories I could tell you about when we've done this-- in the former Soviet Union, for instance, you get very, very sick patterns of communication. And you can do things to actually substantially fix those places.

THOMAS MALONE: Another question right here.

AUDIENCE: I think he was actually pointing up there. I do have a question.

THOMAS MALONE: Why don't you go ahead, and then we can get to him.

AUDIENCE: This is actually for you, Professor Malone. There's some scale issues, I think, associated with collective intelligence. And so I think you would probably say that a team of experts that are used to working with each other would have a higher degree of collective intelligence than a team of experts that had not worked with each other. So an expert team is different than a team of experts.

So the question I would have, if that's likely to be true, is: is there an optimal number that would make the best team, if you will? And does that really scale to a global collective?

THOMAS MALONE: OK, great. So the first part of the question-- you're right that a team that works well together is better than just a team, even if the team consists of experts. It's not guaranteed, however, that experience together will necessarily produce well-functioning group behavior. Some people work together so long, they get sick of each other and become a bad team rather than a good team.

But I think the more interesting part of your question is the one about scale. And that's where, I think, there are some very interesting possibilities. Depending on what task you're doing and so forth, the optimal size for a face-to-face group is somewhere around 5 or 10 people, something like that. When you get a whole lot bigger than that, the group's functioning at best levels off, and often it actually becomes worse as you get too many people in the group. So one of the most intriguing possibilities, I think, is that new technologies, and the appropriate kinds of tools built on those technologies, may allow us to scale groups to much larger sizes with continuing improvements in the functioning of the group.

Here's a thought experiment to suggest what might be possible. Imagine that you had 5,000 people in a football stadium trying to write an encyclopedia with nothing other than paper and pencil and the loudspeaker system. 5,000 people in a stadium could write things and pass them around, and there would be long lines in front of the editors waiting for approval, and stuff like that. So they could do something in a given amount of time-- let's say an hour or two.

But imagine what those same 5,000 people could do if they were all using the Wikipedia software. I think it's fairly clear that they could be a much more intelligent group. They could produce much better encyclopedia material in a fixed amount of time with that collaboration tool than just with paper and pencil and face to face.

So that's only one thought example, but I think there's a fascinating and really important line of research about how to create the kinds of tools and the kinds of organizational designs that will allow much larger groups than were ever possible before to operate in ways far more intelligent than we could ever achieve before.

SANDY PENTLAND: Can I add to that? So we've been able to look at these patterns that correlate very highly with group productivity. And one of the things that most people are very concerned about are distance groups-- so where people are geographically distributed. And we've done a whole family of experiments where we put back visual displays of the signaling so that people are aware of the behavior of the people at a distance. And in many cases, you can bring the performance of the distance groups up to near the face to face level.

And the other thing that we find is when we look at large, dispersed organizations, it's useful to think about different types of communication. So face to face is very rich. Twitter is very thin and very abstract. And what you need to do is make sure that you get enough of each type of communication, often enough, across the whole organization.

And it seems that errors of that sort are a primary defect, for instance, in a lot of military operations-- an over-reliance on Twitter or email means you lose a lot of the rich media, and therefore a lot of the context. And by being careful about what sort of media you use when, and by putting those social signals back in, you seem to be able to get much more consistently high performance.

THOMAS MALONE: OK. Let's go to the question here in the front.

AUDIENCE: What do you mean by intelligence? Is it adaptation? Is it survival, or is it problem solving and innovation? Is the concept well-defined?

THOMAS MALONE: Intelligence is a notoriously difficult word to define, so I'm going to ask my other panelists to each give their opinions.

ANDY LO: Well, I'll take a turn at it, and I guess cite the book and the ideas of Jeff Hawkins, who's going to be speaking later this afternoon-- the memory-prediction model. It seems to me that a very simple definition of intelligence is a mechanism for recognizing patterns and making useful predictions based on those patterns. And by useful, I mean increasing the chances for survival. So--

REBECCA SACHS: I'm going to push that even further and say specifically for human intelligence. And here, I'm influenced by Josh Tenenbaum, who's the organizer of this event. I would say it's not just going from patterns to predictions, but going from very little data about the patterns to very big, powerful predictions about the future.

So it's one thing to be able to take 10,000 data points and predict the next one. More distinctively human intelligence is to be able to take two data points and predict the next 10,000.
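
A toy version of that contrast: with a strong prior about the form of the pattern-- here, the assumption that the rule is linear-- two data points pin down predictions for thousands of future points. The rule and the data are, of course, fabricated for illustration:

    # Two data points plus a strong inductive bias ("the rule is
    # linear") yield predictions for arbitrarily many future points.
    # Without such a prior, two points say almost nothing.
    import numpy as np

    x_seen = np.array([1.0, 2.0])
    y_seen = np.array([3.0, 5.0])          # secretly y = 2x + 1

    slope, intercept = np.polyfit(x_seen, y_seen, deg=1)

    x_future = np.arange(3, 10_003)        # predict the next 10,000
    y_pred = slope * x_future + intercept

    print(y_pred[:3])   # [ 7.  9. 11.] -- correct only if the prior holds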

ANDY LO: I would argue that that's a matter of magnitude, not necessarily a qualitative difference.

REBECCA SACHS: And I disagree.

ANDY LO: OK.

THOMAS MALONE: Anybody else want to add?

So I'll give one other, slightly different perspective. You were both focusing on the importance of prediction in intelligence. There is a standard definition of intelligence among psychometricians-- that is, psychologists who measure intelligence with things like IQ tests-- and that's basically what I referred to earlier. It's the common factor that predicts your performance in a very wide range of different tasks.

So from that point of view, you could say that intelligence is not just a measure of your ability to do some specific task very well, but your adaptability, your ability to do a wide range of tasks, or to rapidly learn how to do new tasks. So none of these are definitive answers, but I think these are all in the direction of what we mean intuitively by the word "intelligence."
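
Operationally, that common factor is something like the first factor of the matrix of scores across many tasks. A synthetic sketch of the computation in Python-- the scores below are generated, not real data:

    # Psychometric-style common factor: simulate individuals' scores
    # on several tasks, each loaded on one shared latent ability plus
    # task-specific noise, then recover the factor as the first
    # principal component. All numbers are synthetic.
    import numpy as np

    rng = np.random.default_rng(7)
    n_people, n_tasks = 300, 8
    ability = rng.normal(size=n_people)              # latent factor
    loadings = rng.uniform(0.5, 1.0, size=n_tasks)   # task loadings
    scores = np.outer(ability, loadings) + rng.normal(0, 0.5,
                                                      (n_people, n_tasks))

    # First principal component of the standardized scores.
    z = (scores - scores.mean(0)) / scores.std(0)
    _, _, vt = np.linalg.svd(z, full_matrices=False)
    factor = z @ vt[0]

    # The recovered factor tracks the latent ability (sign arbitrary).
    r = np.corrcoef(factor, ability)[0, 1]
    print(f"correlation with latent ability: {abs(r):.2f}")  # typically ~0.9+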

And I think we're out of time. So thank you all very much.

[APPLAUSE]