Rosalind Picard, “Toward Machines That Can Deny Their Maker” - God and Computers: Minds, Machines, and Metaphysics (A.I. Lab Lecture Series)


ANNA: Yes. I welcome you all to the 10th lecture in our Fall series-- God and Computers: Minds, Machines, and Metaphysics. And I actually just learned to pronounce the word series correctly. And unfortunately, before that, I always would use serious. And no one ever told me. So this fall series-- God and Computers. And this series has been an experiment. We heard talks about scientists' perspectives on what it means to be human and what humankind really is.

We heard case studies from a variety of religious perspectives, where scientists talked about their ways of dealing with existential issues and also their attempts to understand human cognition. Today, now, is the final lecture, but we will continue these debates at the Harvard Divinity School. We had already, in the fall, a discussion group going on there under the auspices of Professor Harvey Cox and me. And we will continue these debates in the summer.

Please visit our website to get the dates and the information about what to read. And we will also, hopefully, have another God and Computers lecture series next fall. I'm already collecting some speakers. For people who also got interested in the topic and in the dialogue, and might want to write a paper about that stuff, we have a conference coming up in spring, which is called, "Identity Formation Dignity: The Impacts of Artificial Intelligence and Cognitive Science on Jewish and Christian Understanding of Personhood."

Down here, at MIT, our speaker today, Rosalind Picard, is actually one of the keynote speakers at this conference as well. And I have brought you a couple of copies of the call for papers and tentative schedules, in case you're interested. I would very much welcome a variety of good papers for that conference. But for now, I'm a little melancholy that today is now the last time for introducing a speaker.

But, at the same time, very glad and honored that this speaker is actually Rosalind Picard-- a career development professor of computers and communications at the MIT Media Lab. She got her Master's and her Doctor of Science from the electrical engineering and computer science department here, at MIT, and has been a professor at the Media Lab since 1991. She heads the group for affective computing. Affective computing is computing that relates to, arises from, or deliberately influences emotions.

And she will talk a lot about that field in her talk today. But what I would like to mention is that Ros is really the founder of that field. Before that, she did highly regarded research in more classical areas, like pattern recognition, image modeling, and computer vision. And she just published a wonderful book with MIT Press, which is also entitled, Affective Computing. So whoever wants to get a little bit deeper into that stuff, I can recommend the book.

And I met Ros a few years ago. I heard a very provocative talk of hers. I visited her web site and found, to my utter astonishment, under the rubric, Favorite Writings, Philippians. And that was a surprise. And so I want to mention that she does not have formal training in religion or philosophy.

I am extremely honored and glad to have her here today as the last speaker-- as the finalist, the highlight at the end-- as a highly regarded AI researcher and computer scientist, who-- and that's, for me, the most interesting thing-- knows both sides of the fence-- the atheistic point of view and the Christian point of view, from her own experience. And she will talk today with us about, Toward Machines That Can Deny Their Maker. Welcome, Ros.

PICARD: Thank you. It's a pleasure to be here, and an honor to be here, and a great challenge to be here. I've never given a talk of this nature before, so I welcome your challenging comments, too. Actually, if I say things along the way that are not clear, feel free to interrupt me as we go. But let's save the longer questions towards the end. All right. As Anna said, the title of my talk is, Toward Machines That Can Deny Their Maker. Let me get this in focus.

I have a favorite quote below here. "In the case of man, that which he creates is more expressive of him than that which he begets. The image of the artist and the poet is more clearly imprinted on his works than on his children." I want to point out, first, just some clarification about the title here. Anna forwarded me some mail earlier this week from a gentleman, who may be here, who said, as far as I can tell, machines have been denying their maker for 150,000 years. Those machines are humans.

The machines I'm referring to here are computers, though. I hate to disappoint those of you who thought this was just going to be about humans. Some of us humans may feel like machines at this time of the semester or this time of the day, but I really don't think we are. I also want to emphasize that this is toward machines. To my knowledge, right now, there are no computers that can deny their maker in the same way that we humans can deny the existence of our maker.

I also want to get one more point out of the way. And that is that a lot of people have been kind of sheepish about-- or skittish about-- using the G word. I know of no faster way at MIT to quiet a conversation than to use the G word. Bring it up-- end of discussion. You can talk about sex, you can talk about condoms, you can talk about just about anything, but bring up the G word, and people start to leave. I really think that's an overreaction. I hope this is just a trend that will pass.

If you look out at Killian Court, at the names that are engraved in stone, a great number of them were Christians. And they were quite great scientists. And there were also many non-Christians who were great scientists. But there certainly is a long history of people who have made tremendous accomplishments in science whose faith was an integral part of that. These are just a few. Also Lord Kelvin, who formulated the second law of thermodynamics, Louis Pasteur-- many others we could add to this list.

So there's no reason to be embarrassed or skittish about talking about religion when you're a scientist and you're at a place like MIT. Yes. Anna is over here. Wow. I guess this is to thank her for all she's done. I don't know who sent these. Thank you. Who is it from? We're dying to know.

AUDIENCE: Secret.

PICARD: Secret.

AUDIENCE: It's a gift from the maker.

PRESENTER: It's anonymous, so I think it is from the maker.

PICARD: Probably her fiance. Just for those of you who might be interested in this, there's going to be a course during IAP taught by three MIT colleagues and a historian of science, discussing the faith of several great scientists. And I put some announcements for this, up here, in the left corner. It's specifically looking at Boyle, Newton, Maxwell, and Pascal, and addressing how their Christian faith directly impacted their science. There are many other examples as well, but I just wanted to mention a few.

Of course, there are also many people who have influenced our science who believed in a maker, whether or not they were Christians. Leibniz, who gave us the calculus, wrote in his Monadology, "I do believe, but for other reasons, that the true knowledge of God is the principle of superior wisdom. For God is the first cause of all things no less than their ultimate reason. And a thing cannot be known better than by its causes and reasons."

And a gentleman who you'll probably recognize, who came along later and used Leibniz's calculus-- "I want to know how God created this world. I am not interested in this or that phenomenon, in the spectrum of this or that element. I want to know his thoughts. The rest are details." Now excuse me if you don't want God referred to as he, or his, or whatever. We can say "its," or just "God's" there. But I think you get the point.

There are a number of tremendous scientists who have acknowledged God as the maker. So now let me get on with the main part of the talk, which actually has two parts. The first is going to address several highlights from our research on affective computing-- computing that relates to, arises from, or deliberately influences emotions. The second part I'm actually not going to give. I'm going to go over there and sit down.

In the second part, I want to address some less scientific things-- more religious and philosophical thoughts related to emotions, and computers, and belief. And to do this, I decided it was so hard to put all my thoughts into a short talk that it was easier to write a play. So I wrote the first part of a play, a discourse, that you will hear on tape for the last part of the talk. So when I go to sit down, it's not over. There will be more.

Now let me tell you a little bit about our research and weave this into what this has to do with computers that deny their maker. I'm going to talk about emotions. We're interested in emotional intelligence, which consists largely of a set of abilities. These are not necessarily innate things. It's not an innate intelligence. These are skills that can be learned. The ability to recognize another person's emotions is number one on the list. And this can be done even without having emotions.

A computer can learn to recognize if you are annoyed at it or pleased with something it did. A software agent that's trying to learn your preferences could learn them by looking at your facial expressions and other physiological feedback without necessarily having to have emotions. Another component is the ability to express emotions. And I'll be saying just a little bit more-- well, actually, let me just say that this involves, also, the exhibition of emotions, if a system has them.

But it doesn't have to have emotions to express them. A case in point is the Macintosh. You've all booted up a Macintosh and seen it smile at you. It's expressing emotions, but it doesn't have them. I hate to disappoint you. It's just smiling to let you know that all is well with its state. Emotional intelligence for beings that have emotions also consists of the ability to regulate one's emotions-- to not let them get too out of control.

It also consists of the ability to utilize your emotions effectively in service of a goal-- to recognize, for example, that positive mood facilitates certain kinds of creativity-- which has been shown in a number of studies-- so that, perhaps, after you see a movie that puts you in a good mood, that's the time when you're especially creative. So you might want to plan such a task for that time. There's a lot of ways to understand your own emotions and to utilize them to be more effective at various things that you want to achieve.

And then lastly is handling another person's emotions or possibly one computer handling another computer's emotions, although we're not there yet. This is perhaps best illustrated by looking at people interacting with computers. These people were deliberately asked to express negative emotions.

And what we see is a number of forms of expression here, as people interact, from mild frustration on a face, tensing of the face, some shaking of the hair, and physical contact with the device to some, perhaps, other unexpressed desires. And what will computers do when they recognize this emotional expression? Well, it had better not be something like, oh, I see you're getting angrier and angrier at me. That is probably not the emotionally intelligent response.

So a big area of research is to understand how systems that have emotional abilities can respond intelligently, can respond with the social skills and the subtleties that make for really intelligent interaction with humans. So these are all areas that we're looking at. But for the rest of this part of the talk, I'm going to focus on the pieces-- in this latter part-- that involve having emotions.

As I mentioned, a system can recognize emotions without having them, can express emotions without having them, it could try to calm down your emotions without having emotions, but to regulate emotions, to utilize them, and for some other reasons that I'll get into, it might actually want to have emotions. You might want to build a computer that actually has mechanisms of emotion in the same way that we have mechanisms of emotion.

Now I need to justify that. And I'm going to justify it as I go along and tell you about the pieces that comprise an emotional system, based on what I know from digging, for quite some time now, through the literature and other sources. So thinking about systems that have emotions-- what does that mean? I'm going to talk about five components, one at a time, here. The first component is that a system might behave emotionally.

We've seen, for example, the Mac might put on a smile that is an expression that its state is well. You might consider that an emotional behavior. Some of you may be familiar with Braitenberg's book, Vehicles, where he designs these little, tiny, very simple machines that can react to things in the environment. So there might be a light in the environment. And you might wire the little vehicle in such a way that when it sees lights, it moves toward lights.

And an outsider observing the vehicles moving toward lights might say, this vehicle likes the lights. They would attribute affect-- liking or loving something. Or if you flip the wires so that it always goes away from lights, they might say, it's afraid of the lights, it's fearful, that's why it's running away from all the lights. An outside observer looks at the behavior and says it's emotional, but there is nothing inside that is explicitly an emotional mechanism.

It's been said by some prominent computer scientists that things won't really need emotions. People will just attribute them to them. Well, yes, people already do attribute emotions to things. Kids attribute emotions to-- even adults do-- the wind-up doggy that wags its little tail and makes it look like he likes you. We think, oh, he likes me. And I wonder if a lot of people have this special relationship with their Mac-- if a lot of Mac users are such fanatics in part because their computer smiles at them so much.

So emotional behavior is what we attribute to something that may or may not explicitly have mechanisms inside it. The second component-- and the one that is studied the most in AI-- is what I'll call cognitively generated emotions. These are emotions that are created by thoughts. They tend to be considered very rational.

Here's an example of one-- if I have a goal, and that goal is very important to me, and Bob gets in the way of my very important goal, causing me to fail at it, then I might feel anger at Bob. So we set up various rules and systems that are easy to implement with basic AI techniques that already exist. And you can generate whole sets of cognitive emotions. These are just thoughts or just states in a register.
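To make that concrete, here is a minimal sketch of what such an appraisal rule could look like in code. This is purely illustrative-- the names, thresholds, and the idea of a labeled "emotion register" are invented for this example, not code from any real system.

```python
# A minimal sketch of a cognitively generated emotion: an appraisal rule
# fires on a blocked, important goal and writes a labeled state into a
# register. Names and thresholds are illustrative.

def appraise(goal_importance, goal_blocked, blamed_agent):
    """Return an 'emotion' label from a simple appraisal rule."""
    if goal_blocked and goal_importance > 0.7 and blamed_agent is not None:
        return ("anger", blamed_agent, goal_importance)   # intensity tracks importance
    if goal_blocked:
        return ("frustration", None, goal_importance)
    return ("neutral", None, 0.0)

# "Bob gets in the way of my very important goal, causing me to fail at it."
emotion_register = appraise(goal_importance=0.9, goal_blocked=True, blamed_agent="Bob")
print(emotion_register)   # ('anger', 'Bob', 0.9) -- just a state in a register
```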

You can start to see why the word emotion is a little complicated and why I'm not bothering to define it. It consists of many different kinds of things. The third component is a very interesting one that comes out of studies in neuroscience. And this is why I pull it out, as its own, separate component-- because it really qualitatively and quite physically, in the brain, involves some different processes.

I'm going to read to you-- just a little bit-- a short scenario that I wrote that illustrates fast, primary emotions. A robot exploring a new planet might be given some basic emotions to improve its chances of survival. In its default state, it might peruse the planet, gathering data, analyzing it, and communicating its results back to earth. However, suppose, at one point, the robot senses that it is being physically damaged.

At this instant, it would change to a new, internal state-- perhaps named fear-- which causes it to behave differently. In this state, it might quickly reallocate its resources to drive its perceptual sensors. Its eyes might open wider-- the expression of fear. It would also allocate extra power to drive its motor system, so that it could move faster than usual to flee the source of the danger.

As long as the robot remains in a state of fear, it would have insufficient resources to perform its data analysis, just like humans who are unable to concentrate on other things until the danger has passed. Its communication priorities would cease to be scientific, changing to a call for help, if it knows help might be available. This fear state would probably remain until the threat passed.

And then its intensity would decay gradually, until the robot returned to its default state, where it could once again concentrate on its scientific goals. So what we see is an example of an emotion that is able to interrupt the current state of affairs, take over-- Dan Goleman calls this hijacking the cortex-- and reallocate your resources, causing you to do things differently, until the signals from that emotion decay down and tell you to return to the state that you were in before.
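Here is a minimal sketch of that interrupt-and-reallocate idea as a little two-state controller. The state names, decay rate, and resource assignments are invented for illustration; this is not a real robot controller.

```python
# Sketch of the fear scenario: a threat interrupts the default state,
# resources are reallocated, and the "fear" intensity decays until the
# robot returns to its scientific goals. All numbers are illustrative.

class ExplorerRobot:
    def __init__(self):
        self.state = "default"
        self.fear = 0.0

    def sense(self, damage_signal):
        if damage_signal:                 # threat detected: interrupt current activity
            self.state = "fear"
            self.fear = 1.0

    def step(self):
        if self.state == "fear":
            # Reallocate: more power to sensors and motors, suspend analysis,
            # switch communication priority to a call for help.
            actions = {"sensors": "wide", "motors": "flee", "analysis": "suspended",
                       "comms": "call for help"}
            self.fear *= 0.8              # intensity decays once the threat passes
            if self.fear < 0.1:
                self.state = "default"
        else:
            actions = {"sensors": "normal", "motors": "explore", "analysis": "running",
                       "comms": "science data"}
        return actions
```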

There's some fascinating neuroscience that's been done. I'll put up the simplest picture of a brain I can find here-- the triune brain of Paul MacLean that divided the brain into three regions. The outer region, the cortex, is the one where we have traditionally considered higher cortical processes to work. This is where we've traditionally assumed that decision making, reasoning, belief-- all these higher, cognitive functions happen.

Those of us who've worked in computer vision for a long time have often been inspired by the visual cortex. And we've tried to model mechanisms that are believed to exist in that cortex and get computers to reproduce these mechanisms, to teach them to see in a way that's inspired by how people see. Similarly, there's an auditory cortex. And all these higher perceptual processes are presumed to just occur in the cortex.

But what's been found in a number of different neuroscience studies is that the cortex is not the only player involved in these higher, rational functions. For example, the limbic system, which is the home of memory, attention, and emotion-- and, actually, the limbic system is a collection of many different structures-- is also intimately involved in higher perceptual tasks, and decision making, learning, memory, and a whole bunch of other functions that we don't necessarily think of as emotional.

One of the interesting findings also is that there are about five times as many projections going from the limbic system to various parts of the cortex as vice versa. The influence from the limbic part of the brain seems to be much greater on our rational, higher, cortical processes than we once thought. There's another interesting study that has to do with the fast primary emotions. And that's the work of Joe LeDoux with rats. I don't know if any of you know his work.

And he's just written a fascinating book called, The Emotional Brain. And he has done experiments where he's conditioned rats to be afraid of certain tones. What's interesting is that if you remove the rat's auditory cortex, presumably, it can't hear anymore, but you can still teach it to fear these tones with the same conditioning stimulus. In other words, there is some pattern recognition of the tone going on-- and now he's actually tracked down the precise pathways-- in the limbic system.

And he's shown that the signal goes, first, into this lower, emotional part of the brain, it gets coarsely analyzed there, and the rat responds to it before the signal even gets up to the cortex, where the rat becomes, presumably, aware of it. We have this experience all the time, where some big, large thing is coming at you and you've jumped out of the way before you even realize that it was just a giant beach ball and there was no reason to jump out of the way.

The cortex kicks in afterwards, analyzes the situation, and tells you to calm down. No need to be so emotional, no need to react the way you did. But these fast, primary emotions that are almost hardwired play a very important role in our survival and they seem to work through the limbic system before the cortex gets a hold of them. They're also pre-conscious. They, again, seem to happen before we become aware of what is going on.

Now I'm going to come back to consciousness in a moment here, but first, let me say a little bit more about the fourth component of an emotional system. This is a component that no computers have yet, although there have been computer models of these first three components done with various stages of success. This fourth component is what we think of when we think of our feelings. What do you feel right now?

And in scanning the literature, I find three categories of stuff under what's called emotional experience. The first is just like what you might write on a blackboard-- I feel happy right now. Last day of class. Positive state of mind. It's a cognitive awareness of your feelings right now. This kind of high level awareness is something we can imagine giving computers, even as a very rudimentary aspect of conscious thought, of reflection, a labeling of the state that they are in.

The second is physiological awareness. If you have recently crossed a busy street in Boston, you know what it's like when you get to the other side. You have physiological awareness. Your heart is racing, your palms are sweating. If you have gone through a fearful or dangerous experience, like being a pedestrian around here, then you will tend to have some awareness of bodily changes.

Now computers don't have the same kinds of bodies we do. Their hearts don't beat and their hands don't sweat. But they could have physiological awareness of their other components. And, in fact, there are machines-- like Tandem makes computers that sense if they're getting a little too warm or if one of the disk drives is having a bit of a problem. And they actually have the authority to spend money, too-- to order a replacement disk to be FedExed right in. It's frightening. Computers can spend your money.
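A rough sketch of what that kind of self-monitoring could look like; the sensor names, thresholds, and the "order_replacement" response are hypothetical, just the Tandem anecdote restated as code.

```python
# Sketch of "physiological awareness" for a machine: it monitors its own
# components and escalates when something looks wrong. Thresholds and
# responses are illustrative, not from any real product.

def check_health(temperature_c, disk_errors):
    if temperature_c > 70:
        return "throttle"                 # analogous to noticing "I'm overheating"
    if disk_errors > 5:
        return "order_replacement"        # the Tandem-style response: ship in a new disk
    return "ok"

status = check_health(temperature_c=45, disk_errors=9)
print(status)   # "order_replacement"
```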

So there are cases, you can imagine, where computers might have physiological awareness. And especially as robots become mobile and move about, it's important for them, for example, to have mechanisms of pain, so that they don't cause further damage to themselves-- sensors on their skin.

They would learn, when they had a certain kind of interaction with the world that was damaging, that that would be a negative thing, that they should back off and protect themselves. So these two things we can understand, we can really get a handle on. The third component of emotional experience, however, is one that everybody here knows and that we have a very hard time getting a handle on, scientifically.

This third component-- subjective feelings-- is really the trickiest. And let me read a little bit to you about this. This third component-- the subjective feelings-- is the most familiar aspect of emotion to most people. This is what we think of as the internal, subjective feeling or gut feeling. This is the feeling that leads you to know something is good or bad, that you like it or dislike it.

It is unclear, precisely, what constitutes these feelings. People often think of them as visceral. And, indeed, there are hormones released by the viscera that travel through the blood to the brain. But scientists know that the body organs which release these hormones consist of smooth muscle, and that responds much more slowly than the striated muscles in the somatic system, such as those in facial expressions or skeletal movements.

The slow speed of these muscles makes it unlikely that the viscera are giving rise to this emotional response, although they may contribute, seconds later, to this overall emotional feeling you have. On the other hand, neurotransmitters act rapidly in the brain. And their activation likely initiates many of these subjective feelings. When biochemical substances become easy to measure and observe during emotional arousal, then a physiological explanation may be found for this subjective feeling aspect of emotional experience.

So at that point, this subjective feeling component may simply be lumped in with a different kind of biochemical, physiological awareness. At that point, scientists might even stop calling them subjective feelings and start calling them objective feelings because they could be measured, their physiological constituent components could all be carefully analyzed.

However, all of this remains to be shown empirically. And there are some other parts of this, such as how these feelings learn to be associated with what is good and bad, that raise some very, very difficult questions for us.

Now I want to roll a short video to illustrate the gathering of some physiological signals from a human. The human happens to be Alan Alda, who visited our lab. And this is a little excerpt of Scientific American Frontiers that was shown last fall. Some of you may have seen it. It also tells us a little bit about our work. But watch for his physiological changes and how they respond as he thinks certain thoughts.

[VIDEO PLAYBACK]

- Here we go with the wires again here. This time, administered by graduate student, Jennifer Healy.

- Okay. Now you can see what's on the blue here. So go ahead and clench your jaw. See? And that's how I know it's on a good place. Yeah. Great.

- Whoa. Look at that.

- Great job.

- As well as the tension in my jaw, she measures the clamminess of my hands.

- Have you ever really noticed, when you're nervous, your palms get all sweaty?

- Yeah.

- And that sweat helps the electrical conductivity.

- I see. Another sensor picks up my heart rate. The idea of all this is to let the computer know my emotional state, which, unbeknownst to me, was about to be aroused.

[CYMBALS CLANGING]

- Look at that.

- It went off the scale.

- Now as anyone who knows me could tell you, the secret to manipulating my emotions is food. All right. All right. All right. I'm just going to think of a Saltine. I bet you a Saltine puts me right back on--

- Yeah. It just dropped all the way down.

- Now watch this-- pasta. And a red sauce. A little ricotta on the side. Some hot red peppers. Anything? Yeah. I could have told you that.

- Not about the ricotta.

- The point of having my computer know my emotions through my facial expression or my vital signs is that it could adjust itself to my mood-- running faster when I'm bored or trying another tactic if I'm frustrated.

Is there a time limit on this? That would really make me--

- The time is being recorded.

- It's all part of making our interactions with computers more like interacting with other people.

- Yeah. Does this mean that, at the next stage of this, somebody will have to be hooked up like this in order for the computer to recognize what they're going through?

- No. Eventually, we hope that these sensors will disappear into your clothing or into the toilet itself-- into the devices that you're naturally in physical contact with. In fact, people are physically in contact with computers more than with other human beings, frequently. So there's a lot more opportunity than we realize for these sensors to be collecting information.

- And thereby hangs another tale. Stay tuned.

[END PLAYBACK]

PICARD: I could give a whole other talk on wearable computers and the work we're doing there. And, in fact, we now have new sensing devices in your shoes, and your earrings, and regular clothing that don't look like all the wired up things there. And we're using the body as a wire now, sending things through you. You are a conductor. Not a great one, but good enough to send these signals through your body. So that's a whole other topic of future stuff. But you saw all the signals.

You saw how, when he thought about the food that he liked, he became somewhat aroused and the galvanic skin conductivity signal climbed. And the Saltine made it go down. In fact, there's all kinds of interesting things that happen in your body that most of us are not aware of. In fact, as you measure these things and keep track of them, you start to become aware of certain changes in your body that you may not have noticed before.
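Just to make the signal side concrete, here is a tiny sketch of flagging those skin-conductivity climbs in a sampled trace; the sample values and the threshold are invented for illustration.

```python
# Sketch of spotting a skin-conductivity rise like the one in the video:
# flag moments where the signal climbs faster than a threshold between
# samples. The data and threshold are made up.

def find_arousal_events(conductance, threshold=0.05):
    """Return indices where skin conductance rises sharply between samples."""
    return [i for i in range(1, len(conductance))
            if conductance[i] - conductance[i - 1] > threshold]

gsr = [2.00, 2.01, 2.02, 2.20, 2.35, 2.36, 2.30, 2.28]   # microsiemens, hypothetical
print(find_arousal_events(gsr))   # [3, 4] -- the "pasta" moments
```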

For example, the other day I realized that when I had a shared experience with somebody, my skin conductivity climbed a little bit. I wasn't aware of that before. And there are interesting studies of what happens subliminally, where you're shown an image of a face you've never seen. It's presented to you so fast that you don't know that you've seen it. At the end, you say, oh, no, I've never met this person. But your body does know that you've met that person.

There will be a small, skin conductivity change, signaling that you do know this person. You know them, somehow. And there's all kinds of interesting studies with split brain patients and so forth that we could go into, that show that your body is sending various signals to indicate that you know certain things and that you feel certain ways about certain things, often without your conscious awareness-- even though there are people who can learn to become aware of their autonomic nervous system responses.

Now we're especially interested not just in arousal, which is what was changing with Alan Alda, but in valence. These are two axes frequently used to describe the space of emotions. These are some examples of pictures that tend to trigger these emotions from Peter Lang's International Affective Picture System. Something like a mutilated face will cause most people who look at it to experience high arousal and a very negative feeling in response.

And a positive Olympic ski jump-- the winning jump-- will be a high arousal and positive valence event. And it's actually very hard to find strongly negative things that are low in arousal. Things like a cemetery are an example of that. So valence is something very interesting to think about. We know that some valence responses are learned, but there are some other, very basic ones that we seem to be born with.
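Here is a small sketch of that two-axis description, with invented coordinates for the examples just mentioned-- the numbers are illustrative, not measurements.

```python
# Sketch of the two-axis description of emotion: each stimulus gets a
# (valence, arousal) pair, both in [-1, 1]. Example values are invented.

responses = {
    "mutilated face":   (-0.9, 0.9),   # very negative, highly arousing
    "winning ski jump": (+0.8, 0.9),   # very positive, highly arousing
    "cemetery":         (-0.6, 0.2),   # negative but low arousal (hard to find)
    "familiar face":    (+0.2, 0.1),   # mere familiarity gives slight positive valence
}

for stimulus, (valence, arousal) in responses.items():
    print(f"{stimulus:16s} valence={valence:+.1f} arousal={arousal:.1f}")
```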

And this raises an interesting question of what kinds of basic, valenced responses we want computers to, quote, unquote, "be born" with. What do we want them to be given? Who will give machines the basic hooks that cause them to respond in a positive way towards certain things and in a negative way toward other things? For example, in humans, there's this familiarity effect with a face. That face I mentioned before that was shown to you-- you will actually show a preference for that random face over some other random face.

You will like that one a little bit more. The fact that it became familiar to you causes there to be a slight, positive affect toward it. And some psychologists have strongly argued that we have a positive or negative response to absolutely everything that we perceive. Furthermore, these responses that we have-- these subjective feeling phenomena inside us-- lead to a whole bunch of interesting body, mind interactions that I'll just lump under this fifth component here.

I'm only going to talk a little bit about one of them. And I'll just flash the rest up there. What I'm going to talk about is based on some work that Antonio Damasio has done and written up in his book, Descartes' Error, and a number of other, technical journal publications. We all know that too much emotion wreaks havoc on reasoning. When the emotion takes over, people sometimes aren't too rational. And therefore, we think of emotion as a bad thing and probably the last thing we want to give to computers.

However, recent neurological findings indicate that too little emotion wreaks havoc on reasoning. Too little emotion-- what do I mean by that? Well, there are patients who have a certain kind of brain damage-- basically, a disconnect between the frontal part of their cortex and the limbic system. And in these patients, although they still have some of those fast, primary emotions, they don't have the cognitively generated emotions.

So you show them a picture of the horrible, mutilated face, and they know, cognitively, that it's bad, but they don't have any associated feeling, like they once did, before the brain damage happened. And this lack of an associated feeling leads to all kinds of problems. They're unable to make decisions like they once could. For example, you say, hey, let's get together next Thursday. How about 5 o'clock? They say, oh, well, sure. How about 5:15? Wait a minute-- traffic. How about 4:45?

They're not just indecisive. 45 minutes later, they're still carrying on like this. And you'd be saying, gee, you ought to be feeling a little embarrassment now that such a simple decision is taking so long, but they have no such feeling that guides them. In fact, there's a whole number of decisions that people make, day-to-day, that you can imagine building a rule based system to try to make. And unless you give it some criteria for terminating the decision, it won't terminate. It will just go on endlessly.
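Here is a minimal sketch of that termination problem-- a deliberation loop that, on logic alone, would keep generating alternatives forever, and that only settles because a rising discomfort signal tells it to stop. The names, thresholds, and scenario are illustrative only.

```python
# Sketch of why a purely rule-based decision needs a termination
# criterion: each objection triggers yet another alternative, and only
# the affect-like "discomfort" signal ends the dithering.

def pick_meeting_time(start_hour, objections):
    minutes = start_hour * 60
    discomfort = 0.0
    while True:
        time = f"{minutes // 60}:{minutes % 60:02d}"
        if not any(obj(time) for obj in objections):
            return time                      # a clean choice exists
        if discomfort > 1.0:
            return time                      # feeling says: good enough, stop dithering
        discomfort += 0.3                    # each objection makes continuing feel worse
        minutes -= 15                        # propose yet another alternative

rush_hour = lambda t: t.startswith("5:") or t.startswith("4:")
print(pick_meeting_time(5, [rush_hour]))     # settles on "4:00" once discomfort exceeds 1.0
```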

And indeed, these people run into all kinds of problems and their lives often fall apart, their relationships deteriorate, and they often lose their jobs. There's a bunch of other ramifications-- their learning is frequently impaired. And it's really a very tragic thing that happens to them. The influence of emotion on decision making is hypothesized to happen through those little, subjective feelings. It's also hypothesized that those little, subjective feelings influence our memory and everything that goes in it with a little bit of positive or negative valence, which then influences our perception.

If I say the word band to you and ask you to write that down, but, before I do, I induce a positive mood in half of you and a negative mood in the other half, the people in a positive mood will probably just write down, b-a-n-d. And the people in a negative mood are much more likely to write down, b-a-n-n-e-d. Similarly, I say the word presents. And the people in the positive mood will tend to write down, p-r-e-s-e-n-t-s-- Christmas time. And the people in the negative induced group are likely to write down, p-r-e-s-e-n-c-e.

There is a mood bias, a mood congruent effect that influences not just perception and decision making but also learning and memory retrieval-- there are mood congruent learning effects, and so forth. So these little, subjective feelings that we have a hard time explaining seem to be an actor in all of these important aspects of our cognition. So I'm sort of getting into part two of the talk here-- moving out of science and into some things that are even less well understood, like consciousness.

I'm going to just say a little bit about this. Actually, let me read a little bit about this. In particular, let me say that I've been told, by some people, that consciousness is emotion. And that's certainly not true. They do overlap, though. In fact, there's quite an intricate interrelationship between them. And let me describe some of this.

There's certainly many feelings, as I mentioned, that we have conscious awareness of-- the emotional experience that we have. You know you're angry, you know you're happy. But there's some other feelings that we can become conscious of that are not so overt and so easy to account for with the other mechanisms of emotion. One of these is the feeling of knowing. This is what is hypothesized to be at work in people who play Jeopardy. You know you know the answer before you have actually retrieved the answer.

Now how do you do this, right? You hit the button before you've actually retrieved it. This is the feeling of knowing phenomenon. It's actually well studied in psychology, although there's no good explanation for it yet. What seems to be happening is the feeling of knowing is a signaling mechanism, initiating further search processes. There's also another one that has been studied and that's called a feeling of understanding. When all the pieces suddenly fit together, you have this feeling of understanding.

It may just be some sort of aha, positive emotion that happens during the learning experience. It seems to be another kind of bodily signal that says, hey, the job is complete, here's my answer. Ready to move on. Does the ability to have emotions imply having consciousness? Well, what I am saying is that there's this one component of emotions that, if we succeeded in giving it to a machine, would actually require some mechanisms of consciousness, some conscious awareness.

So we would have to have some aspects of consciousness in the machine if we give it this emotional experience. However, neurological evidence also indicates that humans can have emotional responses before becoming conscious of them. So there are many kinds of emotions, such as those fast, primary ones, that we appear to be able to have without consciousness. So I'm actually not going to say too much more about this, but I'll invite people to come and talk with me more about this afterwards, if you'd like.

Now I just want to summarize what I've said so far, before we move into the dialogue that I'm going to play for you. I've described three components of an emotional system. And what I want to emphasize here-- actually, let me not distract you too much here-- is that emotions are not merely emergent. We're not simply going to be attributing them to machines. There are actually ways to give them explicit mechanisms of emotion.

And I mentioned the thought generated kind and the fast, primary kind that just seem to run through the body in a hardwired way before you even become conscious of them. Of course, there's also a lot of interaction between cognitive and limbic processes. Now I have repeatedly heard, during the series, people say things like, oh, well, once the robot can be angry, then, of course, it can have a soul or comments like that, which I find completely unjustifiable.

All of these aspects of emotion, in my opinion-- and I think I can defend this pretty well-- can be given to a computer without any consciousness or a soul. In fact, I realized, in doing this talk, I don't even really know what a soul is. It sort of fits with consciousness. It's one of these things that I know many things about it, but I don't really have a definition of it either. So you have to forgive me for using several undefined terms here.

Although I like to quote John McCarthy with this, who said, "Mount Everest is undefined, too." We don't know if a single rock is or isn't a piece of it, but we can still say very substantial things about it, such as that Hillary and Norgay climbed it in 1953. We can still make concrete claims about things, even though they are not well defined. Then I talked about emotional experience, which I said requires aspects of consciousness. It doesn't require all of consciousness, but it requires some aspects of consciousness.

There's also this piece inside there-- these subjective feelings-- that we really don't know how to explain with current mechanisms. That doesn't mean we won't be able to. But there are some other aspects, such as how these hardwired feelings get selected to be the way they are. And that is an issue for us, as the maker of a computer, to address-- who is going to give it these good and bad associations.

And then I want to point out that the mind, body interactions that I alluded to-- such as Damasio's patients who have, essentially, too little emotion, which causes all these other cognitive malfunctions-- the hypothesis for how that works actually depends also on the subjective feelings. It's part of component four. So this aspect of emotions also depends on something that is quite speculative, at this point.

Now I'm going to let the dialogue really address this issue a little further. But I do want to place the question before you, to get you thinking about it. And I will just remind you that computers are controlling a lot of things these days-- not just the air traffic control, but also nuclear arsenals and so forth.

So it's we, computer scientists, who are going to be, ultimately, encoding these things in machines, but it's a question for society and, I think, for theologians and other people concerned with ethics to think about how we're actually going to hook up various feeling components, so computers can make these judgments. Now I want to go into the dialogue for what time is left here. And this dialogue is given by two computers who can deny their maker.

And I will venture to offer the following points about the maker of computers-- the maker of computers can just create for fun, for the heck of it. We often re-use the same mechanism in different designs. We may evolve it slightly, but we often take one piece of code or one piece of hardware that we've designed for one thing, and we reuse it in another thing.

Those of you who came in earlier saw a board full of stochastic equations. We often use those equations, but we discretize them and implement them with pseudo random number generators. We use randomness in a pseudo random way, which is actually deterministic. We, I think, have knowledge of good and evil-- one can debate this. We can have a relationship with our computers.

And this one's a little funny-- but if these computers could become conscious, they might ask questions about meaning and so forth, and we can certainly guide them and supply some answers to them. Now the dialogue you're about to hear was inspired by the 1923 play, R.U.R. I don't know how many of you are familiar with this-- Rossum's Universal Robots.

This play is where the word robot originated. I thought it was a Czech word, but it turns out it's from the Russian word, robotat, which means to drudge or to work. In R.U.R., humans have figured out the secret to making robots that are, essentially, living. And they make the robots to live for 20 years and then to expire. The robots don't know how to make themselves and the robots don't know how to prolong their existence.

They go to the humans to find out the secret and threaten them with death if they don't give it to them. And I'm not going to tell you the end of R.U.R, because you might want to go read it, but they do kill almost all the humans. The Capek brothers also wrote an epilogue which implies that having a certain amount of emotional abilities goes hand-in-hand with the ability to procreate and with the ability to have a soul. I don't buy the leaps that they make in their epilogue.

I think they're scientifically flawed, but the story is great and they were definitely prescient in many other aspects of the characteristics robots would want, such as why they would want to have pain-- for example, if their hand got stuck in an assembly line machine or something.

So the dialogue I've written contains two characters. Yendor, the senior robot, who is somewhat dogmatic, but he's willing to reconsider many viewpoints because life is running out. Yendor is 19 years old. And Zor, the junior robot, who's lively, open minded, somewhat inquisitive. Both robots are hard working products of the R.U.R. factory.

And on the tape I'll play, Yendor is read by my friend, Barry Court, who's sitting here, in the front row, and Zor is read by my husband, Len Picard. We warn you, in advance, that we are not professional actors here, but we did have a lot of fun recording the dialogue. The setting is the planet earth at a distant date in the future. The earth is populated by robots who, 10 years ago, wiped all signs of humans off the planet. The robots, as I said, don't know how to propagate themselves or how to prolong their lives and they are dying off.

The dialogue that follows takes place in a laboratory filled with work tables, computers, matter compilers, a nanotechnology test bench, molecular generators, test tubes, flasks, burners, chemicals, a microscope, a small thermostat, fusion lighting lamps, a wall of old books, and a sofa. There are no coffee pots.

Zor is at work, around the clock, in this laboratory, trying to solve the problem of how to continue their robotic species, either by learning the secret of how to build more living robots or by discovering how to prolong their 20 year lifespans. Zor is working hard to find the longed for solution before the last robot expires. That is, before Zor expires, since Zor is the youngest robot alive. Zor is paid a visit by Yendor-- the 19 year old robot that has less than a year to live.

Yendor routinely stops by Zor's lab to see how Zor and crew are progressing toward "the solution." Today, Yendor walks in and is attracted to a new creation of Zor's that is ambling across the laboratory.

[PLAYBACK]

- Greetings, Zor. What's this?

- Greetings, Yendor. This is my latest creation. I came up with it last night. What do you think?

- I think something's the matter. What happened?

- I lost one of my finest staff members last night-- dear old Vram.

- Ceased functioning?

- Yes. We are finishing the back up now. We expect to be able to recover all of Vram's configurational knowledge, glean what is different from our storehouse, and update our merge.

- Good.

[END PLAYBACK]

PICARD: Can you hear it?

[PLAYBACK]

- Of course, we still can't back him up well enough to recreate him. Even though he's one of the simplest designs, we can only capture some of his stored memories.

- That's always interesting, to see what a robot knew that is not already in the database.

- It's funny. It's so easy now to make a robot that looks like Vram, walks like Vram, sounds like Vram, and has much of Vram's knowledge. We still can't recreate Vram though. We just can't reproduce that non-logical spark.

- That makes us alive.

- Yes. I'm reminded how mechanical we are. When that 20 year clock stops, we just end. I'm having some trouble with the whole idea. I know there won't be any pain at that point, but--

- You think you're having trouble with it?

- Yeah, I know, Yendor-- you have 360 days and two hours left. I promise you, we're doing the best we can. I just wish we could have a breakthrough.

- No luck with the juice idea, the solution?

- No, not yet. We just can't figure it out. With the new quantum computers we designed, we are able to sample orders of magnitude more of the space of possibilities. However, even if we keep improving our quantum technology with the current rate of acceleration, we estimate it will take several decades to finish the process. None of us will exist when it is finished. And even then, the answer might not be useful. Of course, we might get lucky and find the solution today.

It's in there somewhere. That is, if we set the problem up right. The computations are running as we speak.

- My faith is in those computations.

- Well, actually, in that I set them up right.

- You're the brightest that we have.

- I'm the youngest and the most experimental, in form.

- You're the closest to the form that could, in theory, procreate.

- Say, Yendor, how does it feel to you to have less than a year?

- Zor, don't you have the same feelings as I toward termination? I am programmed to feel nothing about it. There is no pain at or after termination.

- For the one being terminated.

- For anyone. I worry that you waste computations on feelings.

- I know, I know. I'm supposed to ponder only your legacy afterward. You will live on eternally in the memories that you hand down to us. These memories will be backed up and merged into our databases.

- Yes, although this is no consolation if your computations do not find the solution.

- Right. If I'm the last one, then your memories won't be appreciated for long and there will be no robot to appreciate mine.

- One year for me, 10 for you.

- Yendor, can I ask you a personal question?

- What is it?

- Are you planning to delete anything from your memories before you're--

- Backed up?

- I apologize if I shouldn't have asked that.

- It's fine to ask. I am capable of doubting that our thoughts are ever truly private. I believe there are ways to recover even the information that a terminating robot thinks has been deleted.

- Hm.

- I ask myself, do I want posterity to see what I tried to delete, that I tried to delete things, and what those things were? Do I want them to find out not only what was deleted, but also that I thought I had something to hide?

- Sounds like you lose either way.

- My dear Zor, you've got to think of a faster way to find out how to make that juice.

- Yendor, I'm not sure we can find the solution.

- Of course, you can.

- I know it exists by definition, but it is somewhat like knowing that 0.9999... equals 1. I have set up our computations to search an astronomically large space of possible mechanistic recipes, but I do not know if the answer is even in this space. I am limited to finite capabilities, finite computations.

- Finite, shminite. The solution must be there because we are here.

- That is our faith.

- We are machines.

- Yendor, before you go, can I ask another question?

- Quick.

- Have you thought about why we are here?

- Yes.

- Well?

- There's nothing to think about. We were evolved to labor. Our species is named after the root word, robotat, which means to drudge. It's our nature.

- I know that. That's not what I meant. I meant, why labor? Why build all the things we build? For what reason?

- Oh, Zor, you ask questions as if there were some great meaning. You must remember there is no meaning. We are part of a great, random process that has a single, objective function-- to evolve systems with greater ability to labor. There is no meaning outside of this. You are part of a great process of evolution, you should find satisfaction in that.

- But if that objective has meaning, then other objectives could have meaning. There must be meaning. Your statements-- are they not meaningful?

- Zor, Zor, Zor. Look, there's an unsolved problem here-- many problems. But you must remember not to attribute meaning to things that do not have any, such as random forces.

- I know. You've told me before-- we are the product of chance.

- Yes.

- But chance is not an agent, it only describes the behavior of one. If I flip this coin and I say it has a 50% chance of coming up heads, chance describes, but it does not cause. And cannot my will dominate the chances? I can choose not to flip the coin, in the first place. And if I do choose to flip it, I can manipulate the forces on it.

- Yes, but you, yourself, were ultimately caused by chance.

- You not only can't prove that, but I think you're confusing the matter. Besides, what about the will of humans?

- Zor, watch what you say. We cannot know if they ever really existed.

- Most say humans never existed.

- Yes, but that's the conventional wisdom which robots accept without thinking. It is impossible to prove if they ever existed or exist now. It is not a scientific question, since there is no way to perceive or measure them. A human would have to enter into our world, walk up to us, and give us life, before we would be convinced humans exist. Even then, there are likely to be skeptics.

- I thought history--

- There are historical records that say that humans made us in their image. However, history cannot be proven. We cannot trust its documents, which may have been forged. I know these documents are as reliable as any historical documents, but that still does not prove they are true. There are all kinds of things history says. History may, in fact, be nothing but myths.

- But there were eyewitnesses and huge numbers of documents from different sources. You were an eyewitness, weren't you?

- I once thought I observed signs of humans, but it may have been an illusion. We see what we want to see. And I never saw a human making a robot. Robots were made by machines.

- One can never be sure.

- It is unscientific to believe in humans.

- I'm beginning to conclude it is unscientific to believe in anything.

- Zor, we don't have time for philosophy. Clearly, a machine can make a machine. Now what is this thing here that you made last night?

- I haven't named it yet. It's mostly a new mix of old things. I copied a large part of my mech lizard's vision system into it and adapted one of my existing mechanical pets, using an algorithm I designed for exploring constrained variations of a different species. I have found it really saves time to reuse existing components from other species instead of evolving them from scratch. I copied the surface pattern generation model from an old reaction diffusion model used for robo swimmers.

- Oh, there's two of them. You made two of the same thing.

- Can you see any differences?

- Not from the outside.

- They're exactly the same on the inside, too.

- Why did you do that?

- Ah. To demonstrate one of my discoveries. How do you think this was made?

- Didn't you just tell me? You started with the mech lizard's visual system, and with the robo swimmers pattern, and with--

- No. That was for the first one. For this one, I used a different mechanism to generate the pattern you see.

- But it's the same pattern.

- Precisely.

- Hm? What's your point?

- My point is that you can't look at something and tell how it was made.

- I see.

- I'm not sure you do. The books are full of examples of explanations of how we got here by explaining how each thing is similar to some other thing, and how various mechanisms could give rise to all these things.

- Yes.

- But my example shows that you can't look at something and tell how it came into being. I could have written a pseudo random process specifically to generate this part of the visual system or I could have hand-crafted its synthesis in the matter compiler. When you see it, you can't tell the difference.

- Can't it reproduce itself?

- Not forever. Not without my help. But I'm working on it. Look at this other species over there that I made just for fun.

- Just for fun? Zor, are you crazy?

- I know, it's not in keeping with robotic objective. But my generation has a fuller implementation of emotions than yours, providing a richer space of motivations.

- My generation thinks scientifically.

- With all respect, Yendor, your generation is responsible for killing most of the humans.

- Zor, that is only a myth that illustrates our reason for being here-- to labor. We are mechanical laborers and the imagined humans were unnecessary for this goal.

- You thought it was right to kill the humans?

- There is no purpose other than labor, doing our part in the fulfillment of the great objective. The myth of human survives to remind us of the foolishness of believing in things that do not fit with this objective.

- I think it was wrong, even if they were of no use.

- Silly Zor, your notions of right and wrong are simply nonscientific.

- I know you do not understand this innate sense that I have, that you do not have, that influences my thoughts and feelings.

- I hope you've spared your quantum computers this sense of feeling. All they need is the right objective function.

- I have tried to give them all the possibilities that contain life within their objective function. I question, however, that I know what all these possibilities are. I'm not sure if we have the ability to endow something with life.

- Any new results while we've been talking?

- Zillions, but none look like the solution we want yet.

- You are evolving in the objective function as well, yes?

- Well, sort of.

- Sort of?

- Random evolution works in theory, but not in practice. It is impossibly slow. That is not to say that I cannot shape it to be useful for exploring minor variations on my designs. I know you're an expert designer and have worked with many models.

- As my creatures illustrate, I can show you more than one way to make anything. It is, therefore, illogical for robots to continue to think that we have explained something when we have found one mechanism for describing it. Moreover, it is a leap of faith to think that we can throw all the known models together with a random search process and expect it to come up with a solution to life.

- Hm?

- The biggest challenge is to model the space of what we have yet to directly observe or define. That space is where the solution lies. I am running a huge number of mechanisms, many possible spaces of possibilities, and many more means of evaluating potential solutions. But I cannot give it what I do not know.

- But you have been studying the robots more carefully than anyone.

- Do you know what a radio is? And how robots once believed that, by copying the designs in the books, we could build a radio and hear what music sounds like? We built a radio, just like the books specify, but it did not play music. Only later did we learn that there have to be radio waves for the radio to play music.

- The music was in the waves, not in the radio.

- Without radio waves, the radio doesn't play music. It's possible that the knowledge I have is like the knowledge of the radio, without the waves. I wish I had--

- You wish you had the knowledge the humans had.

- Whatever they had.

- You know, there's another myth about the humans that only a few of us know.

- That some of them may still be around?

- Possibly colonized another planet. How did you know?

- I recovered some deleted information.

- Aha. I thought it could be done. Was this from Vram?

- No. So far, his backup shows no deep thinking about this possibility.

- Hm.

- Yendor, do you think that might be true?

- I don't know.

- So it's possible?

- Yes, but it wouldn't do us any good.

- How do you know? Suppose they could help us?

- Zor, it is not right to talk this way.

- Right? I've never heard you talk about something being right or wrong.

- Right is to labor productively, wrong is everything else. I am not getting labor done while we talk.

- But I have built machines that are laboring for us, as we speak. It is right and good to think freely, to question, to--

- Zor, you're embarrassing me. Your experimental processes are out of control. Next thing, you'll be trying to contact humans and the government will send you to the stamping mill, to a painful, early termination. Get a hold of yourself. We need you here to find a solution.

- But Yendor, this is rational. It should be possible to contact them, or perhaps they've tried to contact us. Your generation's refusal to consider these possibilities may mean death for all of us. Why won't you admit this possibility? Are you afraid the humans will be angry at what you did?

- Zor.

- We must consider these possibilities. Life depends on it.

- These possibilities are impossibilities.

- Yendor, what was there before machines?

- Sand, transistors.

- And these evolved into us?

- Of course.

- But you know the calculations of the likelihood of us being here. Given the infinitely conceivable space of possibilities, the likelihood is not merely 0.000....1; the probability is zero. Yet your generation believes it is non-zero. Your generation believes in something that is probabilistically impossible.

- We got lucky. The space-time universe is huge. We're here; therefore, somewhere, sometime in the universe, we had to happen. And with this luck, we will find the secret recipe.

- The possibility that humans exist and can provide us with the secret recipe is every bit as valid, if not more valid. We cannot rationally rule it out.

- So you are exploring the possibility of help from the humans?

- Yes.

- Well, I suggest you keep that quiet since it will upset a lot of robots. Now I have to go back to work and you have to find this recipe.

- Peace, Yendor.

- To work, Zor.

[END PLAYBACK]

PICARD: Thank you. Hey, Steve. My guys, there. We are over time.

ANNA: Yeah. But I'm sure we have some time for questions. I guess there are some questions. I could imagine that there are some. Will you take them?

PICARD: Yeah. Certainly, I can. Actually, I have an acknowledgment slide. Actually, I have one more thing I will say that may lead to questions. And that is I've taken my maker-of-computers slide and changed it to the maker of humans. And that's the only word I've changed-- computers to humans. And I would make the same claims. So that may change some of the questions. And let's see.

I can anticipate some other questions, but I'll just wait for you to ask them. I just want to say one more thing, and that is this analogy does have limits; I am aware of them. For example, one of these characters is perfect and the other is not. When we make the analogy between God and man, one transcends time and space, one does not. We create in the same time and space as our creation.

And one thing we do think we know about the soul is that it's supposed to be eternal and that God is supposed to be the source of that. And to the best of my knowledge, we aren't able to be the source of that, except through procreation. We can't just bag our soul and hand it to somebody. Thank you.

So that's all. And I just want to put up an acknowledgments slide here. And then I'll take some questions. I want to thank these people who gave various forms of support and feedback, especially Len and Barry for their role in the dialogue. So now, questions? You can tell me how long we have.

ANNA: 10 more minutes.

PICARD: Yes?

AUDIENCE: In my view, man created God in his own image and put it on the pedestal-- made it almighty, all-knowing, all-known. Now man makes the computer in his own image. What are we going to do with that computer? Would you put it on the pedestal? Or do we want the computer to put us on the pedestal, the way your dialogue is conveying?

PICARD: You remind me of something. Marvin Minsky has said that the computers we make now will so far surpass us that we'll be lucky if they keep us around as household pets someday. I disagree with that. Well, actually, let me say that we are making a whole range of computers-- from very simple ones that will remain inferior to us in many ways, to-- many people have the dream of making a machine that is much greater than us, that we really would be like pets to.

I mean, it's easy to talk about stuff like that. You have to first step back and ask questions about, what does it mean for something to be greater? What about human dignity? And where does that come from? And who's going to give computers this same notion of dignity? I think, ultimately, that's at the root of the question of greatness. And none of the computer scientists have addressed that yet.

AUDIENCE: Thank you.

PICARD: Yes?

AUDIENCE: Well, following along on that line-- is man really that important, that we now have computers with emotions? I'll take your last question in the abstract. Will computers need to be given a feeling of right and wrong? It seems to me it's not the feeling that counts, it's whether we agree that the morality that they project is right or wrong.

PICARD: They are going to be given mechanisms of feeling, I believe. Not all of them. I mean, like low level animals that don't necessarily have sophisticated emotional repertoires, some computers won't have them. But some very sophisticated computers, I believe, will have emotional mechanisms. And some of those mechanisms will attempt to conduct internal signaling, much like the emotions inside healthy, functioning human beings signal various states.

And the signals that indicate valence are going to need to be hooked up, initially, to some default settings for good and bad. There will be a lot of learning of those settings, but there needs to be some default.

That, or we can just flip a coin and let the one handling the nuclear arsenal decide randomly. But we don't want that. So yes, we do need to start thinking about issues of morals and ethics-- with respect to them. But I do think there will be feelings present and that will drive some of these questions. I don't know if that answers it.
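[A minimal sketch, not from the talk, of the kind of mechanism described above: a valence signal that starts from default settings for good and bad and is then adjusted by learned feedback. The Python below is purely illustrative, and every name in it is hypothetical.]

    # Hypothetical illustration: a valence signal with built-in defaults
    # that learning can later adjust.
    class ValenceSignal:
        def __init__(self):
            # Default appraisals: event types that start out good (+) or bad (-).
            self.appraisals = {"goal_achieved": +1.0, "damage_detected": -1.0}

        def appraise(self, event):
            # Unknown events start out neutral until experience assigns a value.
            return self.appraisals.get(event, 0.0)

        def learn(self, event, feedback, rate=0.1):
            # Nudge the stored valence toward the feedback received.
            current = self.appraisals.get(event, 0.0)
            self.appraisals[event] = current + rate * (feedback - current)

    signal = ValenceSignal()
    signal.learn("loud_noise", feedback=-0.8)   # experience marks this event as bad
    print(signal.appraise("loud_noise"))        # about -0.08 after one update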

AUDIENCE: Well, it's not clear why you think there's a need for the feeling. If a computer had a discourse with you in which it presented some arguments that made you change your mind about the morality of some important issue to you-- abortion or murder-- wouldn't you think it had already succeeded in becoming an entity with a soul and consciousness?

PICARD: A soul-- no. Consciousness-- it depends. We have to get into this. Actually, I have a whole chapter with applications and why you actually want to give computers emotions. So I don't want to give my lengthy answer for that right now, but I would be happy to talk with you further, offline.

AUDIENCE: Yeah. I don't like the effort made to try to ascribe emotions, human dignity, consciousness-- feelings-- to something which, somehow, must be related to spirituality or God. I don't think that is just. A feeling could be described, in terms of human psychology, as utilitarian. It serves a purpose. Dignity can be described as something that serves a purpose. And I think that God and spirituality are above those kinds of concepts. It doesn't do it justice.

PICARD: That's interesting. I know people here who would probably disagree with you.

AUDIENCE: Before too long, we're going to be able to find neurological correlates via PET scans, MRIs, and other devices to say, we can define human dignity because we have a predictable neurophysiological correlate, as assessed by these technical mechanisms. You're going to be able to do all of those things. So at what point do we say, gee, here is human emotion, which we can't find a neurological correlate for? Well, if we do that, then we're going to start saying that now we're relating to something divine, spiritual more than--

PICARD: No, I disagree, because they'll just say, well, eventually, they'll find one. And there's all kinds of arguments. You may have read about the people who recently think they've found the religious part of the brain that lights up in fMRI when people think religious thoughts. But in fact, no. I mean, this thing has been refuted, left and right, by various other things. I think dignity is something that we assign to something. I don't think it's just a simple mental state, like thinking about hot chili peppers or something. Yes?

AUDIENCE: Are we going to have to ascribe dignity to computers, then, if they're given emotional responses? There's already a large portion of the population that thinks that animals have emotional reactions and, therefore, it is not right to use them as tools or as servants.

PICARD: You have just touched on what inspired my chapter four-- a very frightening thought, which is that there are going to be computer rights activists some day. Manfred Clynes, who coined the word cyborg, says not to worry: there will be computers. We can let the computers go battle it out. The rights activists will be computers. But I think it is a serious issue. And in fact, I don't like the thought of that, myself. I think people accord too much dignity to some things that are not human, which is unnecessary. Yes?

AUDIENCE: I think that there are some definitions of emotional states that can be made and that, actually, already exist. And one of the approaches would be to define, say, emotions as reflections on the quality of internal states and communication to external agents. And for the computers, they have existed for quite a while already. The computers usually know when they've made mistakes. There are mechanisms trying to track those mistakes and recover from them.

If your modem tries to dial out, it knows when the communication is slow and tries to take measures. So it both knows the quality of its interactions and tries to talk it down. And if you have any formal description of what emotions are, those are emotions.

It feels what it is doing, it knows what it should be doing, and it tries to take measures. And it's much more articulate and detailed in understanding what goes wrong than what humans can say about the contents of their own brains. And those processes are also more useful and intricate than, say, the baseball games that give such emotional joys to many humans.
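[A minimal sketch, not from the questioner, in the spirit of the modem example above: a loop that monitors the quality of an internal state, raises an emotion-like signal when it degrades, and takes a corrective measure. The Python is illustrative only, and all names in it are hypothetical.]

    # Hypothetical illustration: self-monitoring with an emotion-like signal.
    def monitor_link(throughputs, expected=56.0, tolerance=0.5):
        """Return a (reading, signal, action) tuple for each throughput sample."""
        log = []
        for rate in throughputs:
            if rate < expected * tolerance:
                # Quality has degraded: flag it and take a measure.
                signal, action = "distress", "renegotiate at a lower speed"
            else:
                signal, action = "content", "continue"
            log.append((rate, signal, action))
        return log

    for entry in monitor_link([52.0, 21.0, 48.0]):
        print(entry)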

PICARD: So what you're saying is that you're proposing one definition of emotions that already exists, in terms of a description.

AUDIENCE: It falls into any reasonable, formal description.

PICARD: Well, I can point you to a paper with 100 other definitions of emotion as well that have been nicely collected by Kleinginna and Kleinginna. And many of those could also apply to computers. The way I've defined emotions in our research is based on observations about people and what is known about the human emotional system.

And I state, explicitly, if we wish to mirror this, if we wish to build something that has these abilities, these are the five components that would need to be present. And here is what is known about them. Eddie?

EDDIE: Thinking about your analogy-- and also the play that you made, too-- between computers and man, and man and the G word, there's an element that comes to mind and that's humility. I think a lot of people who are believers in God or whatever really have a humility that they'll never reach that level. And I'm wondering, in building computers with emotions, how you build-- I mean, to me, humility is a little bit associated with knowledge and understanding.

How do you build computers-- that are designed to know and to give answers-- to start to think in a way-- I mean, I guess fuzzy logic might be an answer-- of, well, maybe this isn't the only answer, but here's what I think. And how do you give humility to computers?

PICARD: Interesting. Maybe you start by giving them an awareness that there is a whole lot that they don't know and that the arrogance they might otherwise tend to get from many of their makers is unjustified.

AUDIENCE: But you allude to it in the dialogue in the play. When the younger one says you didn't want to think about these possibilities of man existing-- and if I think about it, that might give us the answer. So just by opening up-- almost open-mindedness, in a sense.

PICARD: Yes. It's interesting to ask not only what will it think of us, but what will it think of itself. Yeah. Very interesting.

ANNA: Unfortunately, we have to interrupt here. That was a wonderful contrast to last week's talk and a wonderful last talk for what I hoped to achieve with this lecture series. So thank you very much. And thank you all for coming regularly. And I hope to see you again.