Ray Kurzweil, "The Age of Spiritual Machines” - God and Computers: Minds, Machines, and Metaphysics (A.I. Lab Lecture Series)

[MUSIC PLAYING]

PROFESSOR: Yeah, welcome, all, to the ninth lecture in our fall series, God and Computers-- Minds, Machines, and Metaphysics. I welcome you all, and especially the students, who are at the end of the term and certainly have a lot of other things to do. Welcome here.

For all people who want to discuss a little bit beyond the question session we have tonight-- as we usually do after the lectures-- on Monday, December 7th, at Harvard Divinity School at 12 o'clock, Harvey Cox and I will again run a discussion group on this lecture series. 12 o'clock, Harvard Divinity School on Monday. So that's for all people who want to talk about issues which go a little bit beyond what we can answer in this lecture series.

So far, we have had two different parts in this lecture series. The first four lectures in the series dealt with, what does it mean to be human? What is the so-called human factor? The second part of the lecture series-- the next four lectures-- presented case studies from a Catholic, a Jewish, a Hindu, and a Buddhist perspective on how scientists deal with existential issues within and outside of their research. And the last two lectures, which we have today and next Wednesday, go back to computer science and deal with questions on computer science, computers, and God.

And so I'm particularly happy to welcome, today, Ray Kurzweil. Ray Kurzweil is actually the founder of no less than four companies. One is Kurzweil Technologies, Inc., which is a consulting firm working on patents and other consulting issues. The other one is Kurzweil Educational Systems.

Ray Kurzweil has invented a couple of very innovative technologies here. In that company, he distributes reading software for blind people. Then there is Kurzweil Applied Intelligence, Inc., where he invented voice command software. And finally, there's Kurzweil Music Systems, where he invented a lot of professional music systems for computers.

And so I was actually impressed, already, by that list of companies, but was even more impressed because his innovative technology is not the only reason he is so widely respected. He's also very much accepted within academia, because his contributions to computer science and academia are enormous.

And so he has received no less than nine honorary doctorate degrees in science, in engineering, and in music, and has received numerous awards-- and I'll only mention two. One is Inventor of the Year at MIT, and the other is the award for the Most Outstanding Computer Science Book of 1990 for his book The Age of Intelligent Machines, which was published in 1990 by MIT Press.

I can highly recommend that book. It's very fun to read. And he is currently working on another book, which is called The Age of Spiritual Machines. And I'm very, very happy that you are here today to share with us your thoughts about this book. Welcome.

KURZWEIL: My wife is annoyed that I skipped the age of emotional machines, but we'll try to cover that as well. It's a pleasure to be here at my old stomping grounds. When I was a student here, we had one IBM 7094, which was shared by several thousand students and professors.

You needed some influence to get more than a few seconds a day. And it ran at a quarter of a MIP and had 32K of RAM. Actually, it was core memory back then.

So things have progressed a bit, actually in lockstep with Moore's law, which we'll talk a little bit about. Let me start with a story that some of you may recognize. But I think for a lot of you, it might be before your time.

"The gambler had not expected to be here. But on reflection, he thought, yes, he had shown some kindness in his time. And this place was even more beautiful and satisfying than he had imagined. Everywhere there were magnificent crystal chandeliers, the finest handmade carpets, the most sumptuous foods, and, yes, the most beautiful women who seemed intrigued with their new heaven-mate.

He tried his hand at roulette and, amazingly, his number came up time after time. He tried the gaming tables and his luck was nothing short of remarkable, winning game after game. Indeed, his winnings were causing quite a stir, attracting much excitement by the attentive staff and by the beautiful women.

This continued day after day, week after week, with the gambler winning every game, accumulating bigger and bigger winnings. Everything was going his way. He just kept on winning and winning. And week after week, and month after month, the gambler's winning streak seemed unbreakable.

After a while this started to get tedious. The gambler was getting agitated. The winning was starting to lose its meaning.

But nothing changed. He just kept on winning every game, until one day, the now anguished gambler turned to the angel who seemed to be in charge and said that he couldn't take it anymore. Heaven was not for him after all. He had really figured he was destined for the other place after all. And indeed, that's where he wanted to be. But this is the other place, came the reply."

That's my recollection of an episode of the Twilight Zone that I saw as a young child. Anybody here happen to see that episode? I think the title might have been something like, be careful what you wish for. Or at least that's what I would call it.

And as this engaging series was wont to do, it illustrated many of the paradoxes of human nature. We like to solve problems, but we don't want them all solved. Not too quickly, anyway. We're more attached to the problems than the solutions-- maybe to the process of solving them.

Take death, for example. A great deal of our effort goes into avoiding it. We make extraordinary efforts to delay it and, indeed, often consider its intrusion as a tragic event. Yet we would find it hard to live without it.

Death gives meaning to our lives. It gives importance and value to time. Time would become meaningless if there were too much of it. If death were indefinitely put off or even just put off for a very long time in human terms, the human psyche would end up-- well, like the gambler in the Twilight Zone.

Now, we don't yet have this predicament. We have no shortage today of either death or human problems. Few observers feel that the 20th century has left us with too much of a good thing. There is growing prosperity fueled, not incidentally, by information technology. But the human species is still challenged with issues and difficulties not altogether different than those it has struggled with from the beginning of our recorded history.

The 21st century will be different, in my view. The human species, along with the computational technology that it initiated the creation of, will solve age-old problems of need, if not desire. And we'll be in a position to change the nature of mortality in a post-biological future. I'll explain what I mean by that a little bit later.

Now, do we have the psychological capacity for all the good things that await us? Probably not, but that might change as well. Before the next century is over, human beings will no longer be the most intelligent or capable type of entity on the planet.

Now, as the abstract pointed out, let me take that controversial statement back. The truth of that statement depends on how we define human, and that will be a key issue in the next century. Unlike in the 20th century, the primary political and philosophical issue will be the definition of who we are.

But I'm getting a little bit ahead of myself. To approach the 21st century, we need to start from the present. If we jump to the endgame, it's a little unsettling and maybe not quite believable. But if we take it step by step, I think we'll see the inexorable logic of the next century.

This last century has seen enormous technological change and the social upheavals that go along with it, which few pundits circa 1899 foresaw. And the pace of change is accelerating, and it has been since the inception of invention. And as we will discuss, this acceleration is an inherent feature of technology. That's what makes technology different from simply the use of tools.

There are a lot of animals that use tools, or at least there are some, but those tools don't evolve with time. Our tools evolve. The genes of that evolution are really our own knowledge base, which we use to advance technology. And that's an inherent nature of technology.

And the result will be far greater transformations in the first two decades of the 21st century than we saw in the entire 20th century. But to appreciate the logic of this, we have to go back and start with the present. Now, computers today exceed human intelligence in a broad variety of intelligent yet very narrow domains, such as playing chess, diagnosing certain medical conditions, timing buy-and-sell decisions, guiding cruise missiles, and so on.

Yet human intelligence remains far more supple and flexible. Computers are still unable to tell the difference between a dog and a cat-- although, actually, I've been saying that for years; I think we probably could program a neural net today to make that distinction. But computers can't tie a pair of shoelaces, they can't describe the objects on a crowded kitchen table, write a summary of a movie, recognize humor, or perform other subtle tasks at which their human creators excel.

Now, one reason for this disparity in capabilities is that our most advanced computers still remain far simpler devices than the human brain by a factor of approximately a million, give or take a couple of orders of magnitude, depending on what assumptions you use. But computers, as is well known, have been doubling in speed and capacity or complexity-- which, it's important to point out, is actually a quadrupling of computational ability-- every 18 months since the inception of calculating devices at the beginning of the century.
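
To make the compounding concrete, here's a minimal sketch of the arithmetic (the 18-month doubling of speed and of density is the figure just cited; the script itself is only an illustration):

```python
# Doubling speed and doubling density every 18 months compounds
# as a quadrupling of computational ability per period.

PERIOD_YEARS = 1.5  # one doubling period, i.e. 18 months

for years in (3, 6, 9, 12, 15):
    periods = years / PERIOD_YEARS
    print(f"{years:2d} yr: speed x{2**periods:>6.0f},  computation x{4**periods:>10.0f}")

# 15 years of this multiplies raw computation about a million-fold --
# the same order of magnitude as the brain/machine gap cited above.
```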

And this trend will continue. I'll come back to Moore's law in a moment. But if we extrapolate that out, computers will achieve the computing speed and capacity of the human brain by around the year 2020. Now of course, achieving the basic complexity and capacity of the human brain will not automatically result in computers matching the flexibility and range of human intelligence.

The organization and content of these resources-- that is, the software of intelligence-- is equally important. And in a few minutes, I'll talk about at least one scenario that I think is credible to achieve this. Once a computer achieves a human level of intelligence, it will necessarily roar past it. Since inception, computers have been far more powerful than humans in memory and speed.

The computer can remember billions or trillions of facts with extreme reliability, can extract information from billions of items in a database in a matter of seconds. We're hard pressed to remember a handful of phone numbers or the directions to get here. The combination of human-level intelligence in a machine-- with the fact that electronic circuits are a million times faster than neural circuits-- combined with the computer's inherent superiority in speed and reliability of memory will be a very formidable combination.

Now as I mentioned, electronic circuits are a million times faster than neural circuits, so once a computer achieves the human level of ability in understanding abstract concepts, recognizing patterns, the ability to read, for example, it'll be able to apply this ability to a knowledge base of all human acquired knowledge.

In fact, there's a project in Kyoto at ATR, the Advanced Telecommunications Research Institute, where they're building a billion-neuron neural net. And they plan to teach it the ability to read. And then they'll basically set it loose on all published human literature, at least what's available in electronic form.

Now, a common reaction to the proposition that computers will seriously compete with human intelligence is to dismiss this specter based primarily on an examination of contemporary capability. I mean after all, when you interact with your personal computer, its intelligence seems limited and brittle, if it seems intelligent at all. It's hard to imagine one's personal computer having a sense of humor, or holding an opinion, or displaying any of the other endearing qualities of human intelligence.

But the state of the art in computer technology is anything but static. Computer capabilities are emerging today that were considered impossible one or two decades ago.

Examples include the ability to accurately transcribe normal continuous speech, which all three leading companies in speech recognition-- Dragon, my own company, Kurzweil Applied Intelligence, and IBM-- recently demonstrated at COMDEX. The ability to intelligently respond to natural language, to recognize patterns in medical tests, such as electrocardiograms and blood cell tests with an accuracy exceeding human physicians. And of course, to play chess at a world championship level.

In the next decade, we'll see translating telephones. My company, Kurzweil Applied Intelligence, was recently acquired by Lernout and Hauspie that has language translation software. And we're putting together a translating telephone with continuous speech recognition, text-to-text language translation, and text-to-speech synthesis. And we plan to hold a demonstration early in 1998, where you'll be able to call up somebody in Germany and hold a conversation in real time with your speech translating to German and vice versa.

In the second decade of the next century, it'll become increasingly difficult to draw any clear distinction between the capabilities of human and machine intelligence. Evolution has been seen as a billion-year drama that led inexorably to its grandest creation-- human intelligence. The emergence in the early 21st century of a new form of intelligence that can compete with and, in some arenas at least, exceed that of human intelligence will be, in my view, a development of greater import than anything we saw in the 20th century.

Now, this specter is not yet here. Computers today are still a great deal simpler than the human brain. But with the emergence of computers that really rival and exceed the human brain in complexity will come a corresponding ability of machines to understand and respond to abstractions and to subtleties.

Human beings appear to be complex in part because of their competing internal goals-- the things we call values and emotions represent and engender subgoals that often conflict and, in my view, are an unavoidable byproduct of the complexity that is human nature. As computers achieve a comparable level of complexity, and as they are increasingly derived from models of human intelligence, they too will necessarily utilize goals that appear to exhibit values and emotions-- of course, not always the same values and emotions that humans exhibit.

And a variety of philosophical issues will emerge. Are computers thinking or are they just calculating? Conversely, are human beings thinking or are we just calculating? Presumably our brains follow the laws of physics so they must be machines, albeit very complex ones. Is there an inherent difference between machine thinking and human thinking? To put it another way, once computers are as complex as the human brain and can match the human brain in the subtlety and complexity of thought, are we to consider them conscious?

It's a difficult issue to even pose. Some philosophers feel it's not a meaningful question. Other philosophers feel it's the only meaningful question in philosophy. And the question actually goes back to Plato's time. They had debates on this issue, and imagined machines of the complexity of the human brain made up of pistons, and wires, and levers, and so on. But as we actually have machines that genuinely appear to possess volition and emotional responses, this issue will become increasingly compelling.

Now let's talk for a moment about what it will take to achieve just the hardware capacity of the human brain. As is well known, computers have been doubling every 18 months in speed and also in the density of computation-- which, if you have a massively parallel neural net, actually means you can put in twice as many neurons and run them twice as fast, which is really a quadrupling of the number of neural connection calculations you can do per second.

Computer memory today is about a billion times more powerful for the same cost that it was when I was born. And if the automobile industry had made as much progress in the last half century, cars today would cost a hundredth of a cent and go faster than the speed of light.

Now, this exponential progress is called Moore's law. How many people here are familiar with Moore's law? I expect a lot of hands to go up here.

Intel chairman Gordon Moore first commented on this in the mid-'70s. He said it was 24 months. In the mid-'80s, he revised that to 18 months. That's a corollary of a broader law-- since nobody's claimed it, I've named it Kurzweil's law-- on the exponentially quickening pace of technology that goes back to the dawn of recorded history. I mean, not much happened in, say, the 10th century, technologically speaking. There may have been one or two things.

In the 18th century, quite a bit happened. In the 19th century, a lot happened. And now in the 20th century, we have major paradigm shifts in a few years' time.

Now, some observers have commented that exponential trends can't go on forever. And we have lots of examples of exponential trends that go along and then just hit a wall-- a species happening upon a new habitat is the classic example. Its numbers will multiply geometrically for a time, and then it hits a wall and may even go into reverse when it exhausts the supply in its new habitat. But it would be highly premature to predict the end of Moore's law anytime soon.
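
The habitat example is the classic logistic, or S-curve: growth looks exponential until it approaches the carrying capacity. A minimal sketch, with illustrative numbers only:

```python
# Discrete logistic growth: exponential at first, then it hits the wall.
# The growth rate r and carrying capacity K are illustrative values only.

K = 1_000_000   # carrying capacity of the new habitat
r = 1.0         # per-generation growth rate
n = 10.0        # initial population

for gen in range(25):
    n += r * n * (1 - n / K)   # logistic update
    if gen % 4 == 0:
        print(f"gen {gen:2d}: {n:10.0f}")

# Early generations roughly double each step; later ones barely move
# as the population saturates the habitat.
```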

The chip companies themselves have said they can confidently foresee 20 more years of Moore's law, just using conventional approaches. That alone is enough to put the computational capacity of the human brain into a $1,000 personal computer.

Moreover, today's chips are essentially flat with only one layer of circuitry, whereas the brain uses all three dimensions. We live in a three-dimensional world. Why not use all three dimensions? And circuits, particularly superconducting circuits that don't generate heat, will enable the development of chips-- or I should say cubes-- with thousands of layers of circuitry and also smaller component geometries, which will multiply computing capacity by a factor of millions.

There are more than enough computing technologies to assure the continuation of Moore's law for a very long time. Now, the human brain has about 100 billion neurons. On average, they have about a thousand connections from one neuron to the next. That's 100 trillion connections. Each connection's capable of a computation. So that's a hundred trillion-fold parallelism.

These circuits are very slow. I mean, if we just copied the human brain in electronic form, it would run a million times faster because the neural circuits run 200 calculations per second. Typical electronic circuits, 200 million calculations a second. But 200 times 100 trillion is 20 million billion calculations per second. And that's about what you get in the year 2020 in a $1,000 personal computer if you just track Moore's law.
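
Here is that arithmetic as a minimal sketch (the brain figures are the ones just cited; the 1997 baseline for a $1,000 PC is an assumed figure, not from the talk):

```python
import math

# Brain capacity, using the figures cited above.
NEURONS = 100e9       # ~100 billion neurons
CONNECTIONS = 1_000   # ~1,000 connections per neuron
RATE = 200            # ~200 calculations per second per connection

brain_cps = NEURONS * CONNECTIONS * RATE
print(f"brain: {brain_cps:.0e} calc/sec")  # 2e+16 -- 20 million billion

# Extrapolate Moore's law: computation quadruples every 18 months.
# The $1,000-PC baseline below is an assumption, not from the talk.
pc_1997_cps = 2e7
periods = math.log(brain_cps / pc_1997_cps, 4)
print(f"parity year: {1997 + 1.5 * periods:.0f}")  # about 2019, i.e. ~2020
```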

And again, the chip companies feel that they can do that just with conventional approaches, not even assuming things like crystalline computers, and DNA computers, and optical computers, and quantum computers, which is itself an interesting subject because it's a whole different type of computation. But I won't get sidetracked on that. Oh, and by the way, if we were to harvest even a fraction of the unused computes on the internet, and there are some proposals for doing that, we could create virtual super computers with the computing capacity of the human brain today.

Now, as we achieve each new level of hardware capacity, new software technologies become feasible. And in fact, there's compelling economic pressure to create software that matches the hardware capability. And one of the exciting things about Moore's law is, every time that screw turns, whole new business opportunities become available.

As I mentioned, continuous speech with natural language understanding is now feasible. A product that Kurzweil Applied Intelligence showed at COMDEX recently, which got one of the four Best of COMDEX citations, allows you to say things like, I enjoyed my trip to Belgium last week. Make this paragraph two points bigger and change its font to Arial. I hope to go back to Belgium soon.

And it actually figures out that that second sentence is a command, and then carries it out. And you can say the commands in any way that you want.
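
As a toy illustration of the interface idea-- not the actual Kurzweil Applied Intelligence system, which used full natural language understanding-- a crude stand-in might flag sentences that open with an imperative editing verb:

```python
# A toy stand-in for spotting commands embedded in dictation: treat a
# sentence as a command if it opens with a known imperative editing verb.
# The verb list and the whole heuristic are illustrative assumptions.

COMMAND_VERBS = {"make", "change", "delete", "move", "set", "italicize"}

def is_command(sentence: str) -> bool:
    first_word = sentence.strip().split()[0].lower()
    return first_word in COMMAND_VERBS

dictation = [
    "I enjoyed my trip to Belgium last week.",
    "Make this paragraph two points bigger and change its font to Arial.",
    "I hope to go back to Belgium soon.",
]

for sentence in dictation:
    print("COMMAND" if is_command(sentence) else "text   ", sentence)
```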

These systems require 200-megahertz Pentiums with 32 megabytes. The point is that as hardware capacity becomes available, software capacities do follow. And we will see the same phenomenon with the full range of human intellect, which of course is comprised of many different types of skills.

Now, there are a number of convincing scenarios we could discuss, but let me concentrate on just one which I find particularly compelling, which is reverse engineering the human brain. I mean, the design is right there in front of us, and it's not at all impossible to get at the information. The hardware capacity to emulate the human brain is 20 million billion calculations per second-- and that's conservatively high. I've seen other estimates that are a thousand times lower.

But as that capacity becomes available-- we're not there yet, but we will be there in a couple of decades-- projects to scan and understand the algorithms of the brain will be initiated. In fact, that's already been started. Carver Mead's company, Synaptics, basically studied the neural organization of mammalian early vision by looking at cat cortexes. And by examining the topology of the interneuronal connections, the algorithms become apparent. And it's not hard to see them.

Now, these are not conventional, sequential von Neumann algorithms, and these are not all digital algorithms. These are hybrid digital-analog algorithms. They're massively parallel. They're also not conventional neural net algorithms. They're specialized neural net algorithms, but you can see them by examining the topology of the neural connections. And they created a chip that, in fact, duplicates mammalian early vision.
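
To give a flavor of the kind of operation involved, here is a toy center-surround filter of the sort found in early vision-- an illustrative sketch only, not the Synaptics design:

```python
import numpy as np

def center_surround(image: np.ndarray) -> np.ndarray:
    """Excitatory center minus the average of the 8 surrounding pixels --
    a crude analog of a retinal center-surround receptive field."""
    padded = np.pad(image, 1, mode="edge")
    h, w = image.shape
    surround = sum(
        padded[1 + dy : 1 + dy + h, 1 + dx : 1 + dx + w]
        for dy in (-1, 0, 1) for dx in (-1, 0, 1)
        if (dy, dx) != (0, 0)
    ) / 8.0
    return image - surround  # strong response only at luminance edges

# A step edge: dark half on the left, bright half on the right.
img = np.zeros((5, 6))
img[:, 3:] = 1.0
print(np.round(center_surround(img), 2))  # nonzero only near the edge
```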

Now, it's feasible, today, to perform a destructive scan of the human brain. You could freeze a brain and destructively scan it, and see all of the connections of someone recently deceased, say. Now, we might not want to base our templates of intelligence on someone who's recently deceased. Better to catch someone before they die.

Recently, a serial killer agreed to be scanned before he was executed, and you can buy all 500 megabytes of him on a CD-ROM. This is available. We may not want to base our designs on the mind of a serial killer, either.

But in terms of noninvasive scanning, magnetic resonance imaging today can image individual somas, or nerve cell bodies. We can see individual cells with today's MRI. And the limitation of the resolution and speed of MRI is the speed of computation.

So as computers get faster-- MRI is a very computationally intensive process-- our MRIs can be higher resolution. And in about 10 years-- in fact, the MRIs now being designed, which will be on the market in four or five years, will be able to see the individual connections: the axons, dendrites, synapses, and other neural components that make up the interneuronal wiring.

It will be feasible to actually reverse engineer these connections with MRI in about 10 years. Eventually, we'll be able to resolve the presynaptic vesicles that many scientists believe are the site of human learning.

Just as the Human Genome Project is providing us with the templates of the human genetic system, efforts to scan the neural organization of the brain will provide us with the templates of intelligence. And I will point out that there was a lot of skepticism about the Genome Project 20 years ago.

And I mentioned this project at ATR in Kyoto, where they are in fact now building-- they expect to have this in a couple of years-- a billion-neuron neural net, implemented in hardware, not one of these software neural nets. And that's actually pretty competitive with the human brain right there.

Now let's talk a little bit about the impact of the emergence of a thinking and feeling machine. In this book The Age of Intelligent Machines, which I wrote 10 years ago, I predicted that a computer would be world chess champion before the turn of the century. And I went on to say that when this happened, we would either think more of computers, less of human intelligence, or less of chess. And if history was a guide, we would think less of chess.

And indeed, that's exactly what we're seeing. Observers from Doug Hofstadter, to People Magazine, to Scientific American have discovered that chess is not so intelligent and creative an endeavor, after all. Rather, it's really a matter of brute force computation, for which computers are well suited.

But continuing to defend the superiority of human intellect in this way as we go through the next couple of decades will become increasingly difficult. Competition from intelligent machines may seem threatening, so one might expect constraints of one kind or another to be placed on this phenomenon while the human species is still in charge. But while the overall phenomenon may appear to be unsettling, defining and implementing any specific limitations will not be at all straightforward, and I don't think will be feasible.

The survival on Earth of intelligent entities that rival human thinking is not an alien invasion. Thinking machines are humankind's own creation, an evolution of tools intended for our own productivity and efficiency. And indeed, we're already enormously dependent on our intelligent machines, just the contemporary genre. If all our computers were to cease operating, society could hardly function.

The drive to add intelligence and knowledge to our computing devices is fueled by a compelling economic incentive. We've entered an age of knowledge in which the knowledge content of products and services is asymptoting to 100% of their value. The enormous market capitalization of the computer industry-- one recent estimate put the market cap of just Silicon Valley companies started in the last 10 years at approximately $1 trillion. And that's really where most of the wealth today is being created.

There was another large figure from MIT, but it was actually lower than Stanford's number. I'm not sure why that is. Might be the warm weather there.

But a computerized system that possesses more flexible intelligence and greater content renders its competition obsolete. Stopping this process would require repealing this basic engine of economic competition. Now, it may appear that we have full control of our machines today, yet computers already make the majority of buy-and-sell decisions in our financial markets, assemble products, evaluate the work product of human workers, render medical judgments and diagnoses, guide airplanes, and conduct a myriad of other tasks on which we depend and by which we are deeply affected.

And that's with machines that are still a million times simpler today than the human brain. When machines progress to be as complex and ultimately more complex than the human brain, which will happen in the next century, our dependence will only deepen. This process is gradual and ongoing. It's a seamless progression with no obvious discontinuities.

Now, computers don't yet appear to be conscious players in this drama. No one wondered what Deep Blue thought of its own victory or Garry Kasparov's press conference. It's brilliant at playing chess, but it's otherwise an idiot savant with apparently little ability to register opinions or emotions. No one worries today about causing pain and suffering to their computer programs. But all of this will change-- not suddenly, but inexorably, nonetheless.

And we can't address these issues by simply assuming that the human-machine nexus represents strictly an us-them relationship. Remember that computers are already deeply embedded in the grain of our civilization. And this relationship will ultimately become much more intimate than it is today.

We have today artificial skin, bones, joints, hearts, blood vessels, pancreatic implants. So why not replace at least some of our slow, unreliable, and aging neural circuits with faster, reliable, synthetic ones? We already have cochlear implants. Why not implants to improve vision, memory, and reasoning speed? Why stop there?

These things sound controversial. Of course, there was an article just in today's-- I think it was today's New York Times-- that human cloning was considered very controversial some weeks ago, and now people are saying, well, why not? So views of these things do change.

The computers of the 21st century will appear to have human-like emotions because, in at least some cases, their designs will have been derived from the reverse engineering of human circuits. And like humans, they will have-- or at least appear to have-- spiritual experiences. They will wonder, or at least profess to wonder, about the true nature of consciousness and free will. They'll wonder if it's really possible that humans are capable of being aware of their own existence, or if we only act as if we are.

And it's not adequate to state that such developments won't occur because of ethical or political proscriptions. The age of very intelligent machines will not be a single phenomenon. Human ethics will face a multiplicity of slippery slopes.

Now, the advent of machines that match and ultimately exceed the full range of human intellect won't arrive on a single day. It will be a seamless, ever-accelerating progression. And to get a feel for how this will unfold, I'd like to consider some milestones along the way. Now, with the impending millennium, there is no shortage of predictions about what the next century will be like, and futurism itself has a long history. Not a very impressive one.

One of the problems with anticipations of the future is that by the time it's apparent that these predictions have had little resemblance to future events, it's too late to get your money back. Fortunately, you didn't have to pay for this lecture.

Let me start with a sampling of some of the predictions I made 10 years ago about the 1990s. Of course, I picked out the better ones. But if you buy the book, you can see them all. But in 1988, I wrote, "A computer will defeat the world chess champion before the year 2000, and we'll think less of chess as a result. There'll be a sustained decline in the value of commodities, i.e. material resources, with most new wealth being created in the knowledge content of products and services, leading to sustained economic growth and prosperity."

And interestingly, Fed Chairman Greenspan did recently acknowledge that today's unprecedented, sustained prosperity and economic expansion is due to the increased efficiency provided by information technology. Occasionally you read studies that say that there's no increase in productivity from information technology.

The problem with these studies is they factor out the improvement. They put it in both the numerator and the denominator, and say, we still only get $10 of work done for $10. And we only get an hour of work done in an hour, because we expect to do a lot more. There's no question that productivity has increased. In my companies, two people can handle the operations that used to require 15 or 20 people before we had computers to organize these things.

But Greenspan is really only half right. He ignores the fact that most of the new wealth being created is, itself, comprised of information and knowledge-- $1 trillion in Silicon Valley alone. Increased efficiency is only part of the story.

"A worldwide information network linking almost all organizations and tens of millions of individuals," admittedly not by the name World Wide Web. "In warfare, almost total reliance and digital imaging, pattern recognition, and other software technologies. The side with the smarter machines will win. The vast majority of commercial music will be created on computer-based synthesizers.

With the advent of widespread electronic communication, uncontrollable political forces will be unleashed in the Soviet Union. These will be methods far more powerful than the copiers the authorities have traditionally banned. The authorities will be unable to control it. Totalitarian control of information will have been broken.

Continuous speech recognition for dictation." Although, I did have that for 1993. So we were off by a few years. "And then by the year 2000, chips with over a billion transistors."

So let me share with you a sampling of some predictions for particular points in time in the next century. And these are predictions I'm developing for the sequel to The Age of Intelligent Machines, which is a book with the same title as this lecture.

Start with 2009, the computer itself. "The majority of text is created using continuous speech recognition, but keyboards are still used. Also ubiquitous are LUIs, language user interfaces, using continuous speech and natural language understanding. Computer displays have the display qualities of paper. Chips are three-dimensional with many layers of circuitry.

Supercomputers match at least the hardware capacity of the human brain. Unused computes on the internet are being harvested, creating human brain-capacity, virtual, parallel supercomputers. There's increasing interest in massively parallel neural nets, genetic algorithms, and other forms of chaotic or complexity theory computing, although the majority of computes of computers still use conventional sequential processing.

Research has been initiated on reverse engineering the human brain. Education-- the majority of reading is done on displays, although the installed base of paper documents is still formidable. Students of all ages routinely have their own computer, which is a very thin slab weighing about a pound with a very high resolution display suitable for reading. Intelligent courseware has emerged as a primary means of learning.

Learning is becoming a significant portion of most jobs. Learning at a distance is commonplace. Translating telephone technology is commonly used for many language pairs. Telephone communication is primarily wireless and routinely includes high resolution moving images. Meetings of all kinds and sizes routinely take place with geographically separated participants. Haptic technologies allow you to touch and feel objects and people at a distance. People have sexual experiences at a distance with other persons, as well as virtual partners.

Business and economics-- despite occasional corrections, the 10 years leading up to 2009 have seen continuous economic expansion and prosperity. At least half of all transactions are conducted online. Intelligent assistants-- which combine continuous speech recognition, natural language understanding, problem solving, and animated personalities-- routinely assist with finding information, answering questions, and conducting transactions.

A company west of the Mississippi and north of the Mason-Dixon line achieves $1 trillion in market capitalization." I take no responsibility for that if you make investment decisions.

"Political and social issues-- privacy has emerged as a primary political issue. The virtual constant use of electronic communication technologies leaves a highly detailed trail of every person's move. There's a growing neo-Luddite movement as the skill ladder-- i.e. the ladder, level of skill needed for meaningful employment-- continues to move upwards. As with earlier Luddite movements, its influence is limited by the level of prosperity made possible by new technology. The movement does succeed in establishing continuing education as a primary right associated with employment.

Musicians routinely jam with cybernetic musicians. Philosophy-- there's renewed interest in the Turing Test," which I won't describe, but many of you are familiar with it. "Although computers still fail the test, there's greater confidence that they will be in a position to pass the test within another one or two decades." This is a test, proposed by Alan Turing, for seeing if computers have human-equivalent intelligence.

"There's increasingly serious speculation on the potential sentience or consciousness of computer-based intelligence. The increasing intelligence of computers has spurred an interest in philosophy."

Okay, now we'll jump to 2019, about 20 years from now. "The computer itself-- computers are now largely invisible. They are embedded everywhere-- in tables, chairs, desks, clothing, jewelry, vehicles. People routinely use three-dimensional displays built into their glasses. Computer displays create highly realistic virtual, visual environments overlaying the real environment. This display technology projects images directly onto the human retina, and is widely used regardless of visual impairment.

Keyboards are rare, although they still exist. Most interaction with computing is through physical gesture, which computers respond to both visually and physically, and through spoken communication. Computers are facile with two-way, natural language communication. People communicate with computers the same way they would communicate with a human assistant.

Significant attention is paid to the personality of computer-based personal assistants, with many choices available. Typically, people don't just own one specific personal computer, although computing is nonetheless very personal. The computational capacity of a $1,000 computing device exceeds the computational capability of the human brain. Of the total combined computing capacity of the human species-- i.e. all human brains-- together with the computing technology that the species has created, more than 10% is non-human.

The majority of computes of computers are now devoted to massively parallel neural nets and genetic algorithms. Significant progress has been made in the scanning-based reverse engineering of the human brain. It's now fully recognized that the brain is comprised of many specialized regions, with the topology and architecture of the interneuronal connections varying widely depending on the region's information processing function.

The massively parallel algorithms are beginning to be understood. It's recognized that the human genetic code does not specify the precise interneuronal wiring of any of these regions, but rather sets up a rapid evolutionary process in which connections are established and fight for survival. The standard process for wiring machine-based neural nets uses a similar genetic evolutionary algorithm.

Education-- most learning is accomplished using intelligent virtual teachers. To the extent that teaching is done by human teachers, the human teachers are often not in the local vicinity of the student. The majority of adult human workers spend the majority of their time learning new skills and knowledge.

Communication-- you can do virtually anything with anyone regardless of physical proximity. The technology to accomplish this is easy to use and ever-present. Routinely available technology includes high quality speech-to-speech language translation for most common language pairs.

Business and economics-- rapid economic expansion and prosperity has continued. The vast majority of transactions include a simulated person, which includes a realistic animated personality and two-way voice communication with high quality natural language understanding.

Political and social issues-- people are beginning to have relationships with automated personalities and use them as companions, teachers, caretakers, and lovers. Automated personalities are superior to humans in some ways, such as having very reliable memories and predictable personalities. They are not yet regarded as equal to humans in the subtlety of their personalities, although there is disagreement on this point.

An undercurrent of concern is developing with regard to the influence of machine intelligence. There continue to be differences between human and machine intelligence, but the advantages of human intelligence are becoming more difficult to identify and articulate. Computer intelligence is thoroughly interwoven into the mechanisms of civilization, and is designed to be outwardly subservient to apparent human control.

On the one hand, human transactions and decisions require, by law, a human agent of responsibility, even if fully initiated by machine intelligence. On the other hand, few decisions are made without significant involvement of and consultation with machine-based intelligence.

Virtual artists in all of the arts are emerging and are taken seriously. These virtual artists, musicians, and authors tend to be associated with humans or organizations, which in turn are comprised of collaborations of humans and machines, who have contributed to the knowledge base and techniques of these cybernetic artists.

However, interest in the output of these creative machines has gone beyond the mere novelty of machines being creative. Visual, musical, and literary art created by human artists is typically a collaboration between human and machine intelligence. The type of artistic and entertainment object in greatest demand continues to be virtual experience software.

Philosophy-- there are widespread reports of computers passing the Turing Test, although these tests do not meet the criteria established by knowledgeable observers. There is increasingly serious discussion of the subjective experience of computer-based intelligence, although discussion of the rights of machine intelligence is not yet in mainstream debate. Machine intelligence is still largely the product of a collaboration of human and machine intelligence, and has been programmed to maintain a subservient relationship to the species that created it."

Okay, we'll jump to 2029. "The computer itself-- a $1,000 unit of computation has a computing capacity of approximately 10,000 human brains. Of the total combined computing capacity of the human species along with the computing technology that it initiated the creation of, more than 99% is non-human. The vast majority of computes of non-human computing is now massively parallel neural nets, much of which is based on the reverse engineering of the human brain.

Many, but less than a majority, of the specialized regions of the human brain have been decoded, with their massively parallel algorithms understood. The number of specialized regions numbering in the hundreds is greater than was anticipated 20 years earlier.

Displays are now implanted in the eyes, with the choice of permanent implants or removable implants similar to contact lenses. Images are projected directly onto the retina, providing the usual high resolution, three-dimensional display overlaid on the physical world. Cochlear implants originally used just for the hearing impaired are now ubiquitously used. These implants provide both auditory input and output between the human user and the worldwide computing network.

Direct neural pathways have been perfected for direct high bandwidth connection to the human brain. A range of neural implants are becoming available to enhance visual and auditory perception and interpretation, memory, and reasoning.

Human learning is primarily accomplished using virtual teachers and enhanced by the widely available neural implants. Automated agents are learning on their own without human spoon-feeding of information and knowledge. Computers have read all available human and machine generated literature. Significant new knowledge is created by machines with little or no human intervention.

Human and non-human intelligences are primarily focused on the creation of knowledge in its myriad forms, and there is significant struggle over intellectual property rights, including ever-increasing levels of litigation. There is almost no human employment in production, agriculture, and transportation. The largest profession is education. There are many more lawyers than doctors.

Computers appear to be passing forms of the Turing Test deemed valid by both human and non-human authorities, although controversy on this point persists. There's not a sharp division between the human world and the machine world. Human cognition is being ported to machines, and many machines have personalities, skills, and knowledge bases derived from the reverse engineering of human intelligence.

Conversely, machine intelligence-based neural implants are providing enhanced perceptual and cognitive functions to humans. There's growing discussion on what constitutes a human being. This is emerging as a significant legal and political issue.

The rapidly growing capability of machines is controversial, but there is no effective resistance to it. Since machine intelligence was initially designed to be subservient to human control, it has not presented a threatening face to the human population. Humans realize that disengaging the now human-machine civilization from its dependence on machine intelligence is not possible.

There is growing discussion of the legal rights of machines, particularly machines that are independent of humans. Although not yet fully recognized in law, the pervasive influence of machines in all levels of decision making is providing significant protection to machines. Although computers routinely pass valid forms of the Turing Test, there continues to be controversy as to whether or not machine intelligence is the equal of human intelligence in all of its diversity.

The subjective experience of machine intelligence is increasingly accepted, particularly since machines participate in this discussion. Machines claim to be conscious, and these claims are largely accepted. They claim to have as wide a range of emotional and spiritual experiences as their human progenitors."

Okay, we'll jump finally to year 2099. "There's a strong trend towards a merger of human cognition and perception with the world of machine intelligence, which the human species initiated the creation of. The reverse engineering of the human brain appears to be complete. The hundreds of specialized regions have been fully scanned, analyzed, and understood. Machine analogs are based on these human models, which have been enhanced and extended along with many other massively parallel algorithms.

These enhancements combined with the enormous advantages in speed and capacity of electronic photonic circuits provide substantial advantages to machine-based intelligence. There is ubiquitous use of neural-implant technology, which provides enormous augmentation of human perceptual and cognitive abilities. Humans who do not utilize such implants have difficulty engaging in meaningful dialogues with those who do.

There are machine-based intelligences entirely derived from models of human intelligence, although substantially enhanced in terms of speed of processing, memory capacity, and perceptual and cognitive ability. These intelligences claim to be human, although their brains are not based on carbon-based cellular processes. And there are many gradations between these two scenarios.

The concept of what it is to be human has been significantly altered. The rights and powers of different manifestations of human and machine intelligences and their various combinations represent a primary political and philosophical issue."

Well, let me end this lecture with just a couple of philosophical mind experiments. And these are experiments that philosophers have thought about, actually, long before we had the computational technology to even conceivably implement these mind experiments.

Well, consider the statement, I am lonely and bored. Please keep me company. Now, if your computer displayed that on its screen, would that convince you that your notebook is conscious and has feelings? Well, clearly no. It's rather trivial for a program to display such a message, and presumably the message comes from the human author of the program. The computer is just a conduit for the message, much like a book or a fortune cookie.

Suppose we had speech synthesis, and the computer now speaks its plaintive message. Have we changed anything? Well, we've added some technical complexity to the program, and some human-like communication means, but we still don't regard the computer as the genuine author of the message.

Suppose now that the message is not explicitly programmed, but it's produced by a game-playing program that contains a complex model of its own situation. This specific message may never have been foreseen by the human authors of the program. It's created by the computer from the state of its own internal model as it interacts with you, the user.

Are we getting closer to considering the computer as a conscious, feeling entity? Well, maybe just a tad. But if we consider contemporary game software, this illusion is probably short-lived, as we run up against the limitations of the computer's capacity for small talk.

Now suppose the mechanisms behind the message grow to become a massive neural net built from silicon but based on the reverse engineering of the human brain. Suppose we develop a learning protocol for the neural net that enables it to learn human language and model human knowledge. Its circuits are a million times faster than human circuits, so it has plenty of time to read all human literature and develop its own conceptions of reality.

Its creators don't tell it how to respond to the world. Suppose now that it says, I'm lonely. At what point do we consider the computer to be a conscious agent with its own free will? And what exactly do these words mean, anyway? This has been, really, the most vexing problem in philosophy since the Platonic dialogues illuminated the inherent contradiction we have in these terms.

Let's consider this slippery slope from the opposite direction. Our friend Jack-- circa sometime in the 21st century-- has been complaining lately of some difficulty with his hearing. A diagnostic test indicates he needs more than a conventional hearing aid, so he gets a cochlear implant. Once used only by people with severe hearing impairments, these implants are now commonly used to extend people's ability to hear across the entire sonic spectrum.

The surgical procedure is successful, and Jack is pleased with his improved hearing. Is this still the same person? Well, sure he is. People have cochlear implants today, in 1997. We still regard them as the same person.

Now, Jack is so impressed with the success of his cochlear implants that he elects to switch on the built-in phonics-cognition circuits. These circuits are already built in, so in case he decides to use them, he doesn't need another insertion procedure. So by activating these neural replacement circuits, the phonics-detection nets built into the implant bypass his own aging neural-phonics regions. His cash account is also debited for use of this additional neural software.

And again, Jack is pleased with his improved ability to understand what people are saying. Do we still have the same Jack? Of course. No one gives it a second thought. Jack is now sold on the benefits of this emerging neural-implant technology. His retinas are working okay so he keeps them intact, but he decides to try out the newly introduced image-processing implants and is amazed at how more vivid and rapid his visual perception has become.

Same Jack? Why, sure. Jack notices that his memory is not what it was as he struggles to recall names, the details of earlier events, and so on. So he's back for memory implants. Now, these are amazing. Memories that had grown fuzzy over the years are now crystal clear. Some he would have preferred had remained fuzzy.

Still the same Jack? Well clearly, he's changed in some ways. And his friends are impressed with his improved faculties. But he still has the same self-deprecating humor, the same silly grin. Yep, it's still the same guy. So why stop here?

Ultimately, Jack will have the option of scanning his entire brain and neural system-- which is not entirely located in the skull-- and replacing it with electronic, photonic circuits of far greater capacity, speed, and reliability. There's also the benefit of keeping a backup copy in case anything happens to the physical Jack.

Now certainly this specter is unnerving, maybe amusing, maybe more frightening than appealing. And undoubtedly, it will be controversial for a long time, but just keep in mind how long human cloning was controversial.

And according to the exponentially growing pace of technological change, a long time will not be as long as it used to be. But ultimately, the overwhelming benefits of replacing unreliable neural circuits with improved ones will be too compelling to ignore. At some point-- and if not in the 21st century, then most likely in the next-- there will be Jacks who go all the way.

Have we lost Jack somewhere along the line? Jack's friends think not. Jack also claims he's the same old guy, just newer. His hearing, vision, memory, reasoning ability, perceptual ability have all improved, but it's still the same Jack.

Now, we could extend this scenario and mind experiment in many different ways, but I'd like to leave some time for discussion. So let me just touch on one other issue, which is to put aside the continuity of consciousness and just consider the fundamental issue of the existence of consciousness itself.

How do we know that it exists at all? I really only have one data point, which is my own consciousness. I am conscious. I have experiences. I have emotions. I can infer that I exist.

This is Descartes's famous dictum. And although it's been interpreted as a symbol of Western rational philosophy, which ascribes fundamental reality to the physical world, Descartes's simple statement is just this: I think-- i.e. I have conscious experiences-- therefore, I am. Therefore, I exist.

But I also know that I'm not the only conscious entity in the world. Or do I? Actually, I have no evidence for this, but I assume it to be the case, as otherwise it would be difficult to maintain my sanity. And I say this in the first person because that is where I have to start.

But what evidence do I have that other people are also conscious? Well, maybe after an hour of this lecture, I am the only conscious person in this room. But I maintain that the evidence is not scientific evidence at all because it's not a scientific question.

We can, of course, correlate certain expressions of conscious experience with certain patterns of observed neural functioning, and there is research along these lines. But that does not constitute scientific proof for the subjective experience of another entity, even if that entity is a person.

From the thought experiment at the beginning of this discussion-- i.e. the computer says, I'm lonely-- we know that an entity claiming to be having a particular experience is not proof that it is so. Not only might the entity be having a different experience than it's claiming to have, it might not be conscious at all.

Now I assume, and I assume we all assume, that other people have conscious experiences somewhat similar to mine because they are convincingly displaying behaviors that are similar to my own behaviors when having certain experiences. In other words, if I see somebody crying, I assume they're having an experience similar to ones I've had when I felt sad and displayed a similar behavior.

But despite my empathetic reaction, there's no direct way to measure the subjective experience of another person. Now, although we humans all tend to assume that each other is conscious, as we move away from shared human experience, human views of this issue begin to diverge quite radically.

For example, the issue of animal consciousness-- are animals conscious creatures making decisions and experiencing feelings? To many humans, including this speaker, it seems apparent that this is the case. Again, based on this empathetic perception of animals displaying behavior similar to my own. But opinion on this is far from unanimous. And if we disagree on the subjective experience of animals, then we might conclude that there's far greater opportunity for disagreement when it comes to the subjective experience of machines.

So far it doesn't seem like much of a debate. With our machines still a million times simpler than the human brain, their complexity and subtlety are comparable to those of insects. But as we've talked about, this philosophical debate will become far less abstract as the next century unfolds. We will have Jacks who have indeed replaced and supplemented some-- and eventually, someday, all-- of their neural circuitry with electronic and photonic equivalents.

Conversely, we'll have computers that have been derived, in part or in whole, from scans and reverse engineering of humans. And there will be many variations and combinations of these scenarios. So who and what is conscious? When are the professed feelings of an entity to be taken at face value? And when are they to be regarded as a programmed response?

When is destruction of an entity to be considered destruction of property, and when is it murder? When do we have to be concerned about causing pain and suffering to our computer programs?

In the 21st century, this polite debate of the last two millennia will become the primary political struggle. And how will this issue be resolved? In my view, the outcome will favor the machines.

First, recall that many of us are convinced that animals have their own subjective life, as a result of our noticing behaviors in animals that are similar to those in humans. This phenomenon will actually be stronger with regard to machines of the 21st century because these machines will be based, in large part, on human designs. They will appear very familiar to us. And as the scenarios above point out, they will indeed claim to be human-- to be your old friend, to be Jack-- even if instantiated in an electronic or photonic substrate.

Second, the machines will be part of the debate. They will be sufficiently persuasive to profoundly influence the discussion. Thank you.

PROFESSOR: Thank you, Ray, so much for this provocative and fascinating view of the future. So I guess there are some questions. Do you want to take the questions, or--

KURZWEIL: Questions, comments, flames?

PROFESSOR: Yeah, flames.

KURZWEIL: Yeah?

AUDIENCE: This is probably a bit off topic, but it concerns the economic side of these predictions, so if that's [INAUDIBLE], stop me now. But I think it deserves at least brief comment. You said one of your predictions was that essentially no human beings would be working in agriculture in 30 years, and that's what most human beings are working on now. Which brings me to my question: what are all these people going to be doing?

KURZWEIL: Well actually, in the United States, we've already gone through that. We had 40% or so of our population working in agriculture in 1900. Today it's 3%. Well, I mean, we've survived that transition. We're undergoing a similar transition now in production labor in factories. But within a few years, the percentage of the population actually working assembling products in factories will be under 5%.

I mean, this is the challenge of the Luddites. In the late 18th century, when these textile machines started coming out and one person could do the work of 10 with a machine, it seemed just obvious that employment was on its way out-- that jobs would be held only by an elite. What we've actually seen-- if you look at, say, the last 100 years, when most human automation has taken place-- is that in the United States we've gone from 12 million jobs in 1870 to 130 million jobs today, from employing something like 25% of the population to almost 60% of the population.

And the jobs are at a much higher level. A lot of the new employment is, in fact, in education, to provide the higher level of skills required. That's the skill ladder I referred to: we're adding jobs at the top of the skill ladder and removing them at the bottom.

And so far, the human race has been able to keep up with that. Jobs today are more interesting, and they pay eight times as much in constant dollars compared to 1870. So, so far, it's been a very good thing. But the challenge is, what happens-- I mean, can we keep pace with that indefinitely as the skill ladder moves up? I don't know.

PROFESSOR: Can we have a--

AUDIENCE: I kept hearing lots of [INAUDIBLE]. It appears there's some notion that our understanding of human identity, or personality, or even entity, will become obsolete to some degree. Like, if you have an entity, is it completely independent of the substrate? Once you have multiple backups, the issue of physical immortality just goes away. But what happens then? As you were saying, you can modify your identity very rapidly.

Once in a while, you will experience what I would call "death forward"-- modification beyond recognition. So the physical-substrate lifespan issue would be replaced with an identity lifespan issue.

Also, if you have complete memory of your experiences, you can do funny things like personality rewinds. And if you have a distributed, fluid architecture that is global, you can employ multiple sensors, actuators, and processors for a cognitive task on a contractual basis, and then dissolve the whole thing. So it's hard to say what exactly an entity is once-- what needs to be preserved, since much of it goes.

PROFESSOR: Can I have you repeat the question?

[LAUGHTER]

KURZWEIL: Well, the question has to do with what constitutes the identity of a person. You used the word entity, but I think that's what you were getting at. And it's the issue that I'd started to get into at the end of my remarks. It's an issue we could talk a long time about.

I mean, there are two schools of thought. One is that we are these particles, and that's what I am. And the other school of thought is, I'm not the particles at all, I'm the pattern. It's like the shape that you see in a stream. If you see water rushing around a rock, you see a certain shape. And that shape could remain constant for hours, but in fact the particles are changing every few milliseconds as the water rushes through.

Now, there's a stronger argument for the pattern than there is for the particles because, in fact, we are like the eddy in a brook: our particles are constantly changing. We change most of our cells every few years. Not nerve cells-- but even in nerve cells, we change the particles that comprise the cells every few years. I mean, every two or three years, we're a completely different set of particles. But it is, at least, a continuum of the same pattern.

So that would argue for the pattern. But you could construct a strong counterexample to that. Suppose you scanned somebody-- you now have that information-- and then instantiated the person in a new substrate.

And the new person walks out and thinks he's the same person, but the old person is still there. And that old person thinks he's the same person, and in fact may not even be aware that you've instantiated a copy of him somewhere else. And that would tend to contradict the pattern argument. So it is a real dilemma, and that's why it's been such a vexing philosophical issue.

AUDIENCE: The essential premise I gathered was that with all these extra supercomputing capabilities, we'll be able to emulate intelligence by reverse engineering brain circuits and replacing them with newer circuits. My question was actually posed a hundred years ago by Swami Vivekananda, and I'd like to pose it-- he posed it better than I could. And what he said was this.

"There is a big discussion going on as to whether the aggregate of materials we call the body is the cause of the manifestation of the force we call the soul, thought, and so on, or whether it is thought that manifests this body. The religions of the world, of course, hold that the force called thought manifests the body, and not the reverse. There are schools of modern thought which hold that that which we call thought is simply the outcome of the adjustment of the parts of the machine, which we call the body.

Taking the second position-- that the soul, or the mass of thought, or whatever we may call it, is the outcome of this machine, the outcome of the chemical and physical combinations of matter making up the body and brain-- leaves a question unanswered. What makes the body? What force combines the molecules into the body form? What force is there which takes up material from the mass of matter around it, and forms my body one way, another body another way, and so on?

What makes these infinite distinctions? To say that the force called the soul is the outcome of the combination of the molecules of the body is putting the cart before the horse. How did the combinations come? What force was there to make them?

If you say that some other force was the cause of these combinations, and the soul, which is now seen to be the mind of the particular mass of matter, is itself the result of the combinations of these material particles-- it is no answer. That theory ought to be taken which explains most, if not all, of the facts, without contradicting other existing theories. It is more logical to say that the force which takes up the matter and forms the body is the same which is manifested through that body."

KURZWEIL: Well, let me repeat the question. I think it's a good example of how this issue was being considered long before computers existed. There are really, again, two schools of thought. The Western school of thought is that the fundamental reality is stuff-- matter, particles-- and that they swirl around and evolve, and eventually evolve intelligent entities that are conscious.

The view that's been, say, associated with Eastern thought is that the fundamental reality is consciousness-- not matter or particles-- and thought, which is a manifestation of consciousness. And that matter is what we have to be conscious of. And essentially, we start with consciousness and then create matter as a projection of consciousness.

It's interesting that in the 20th century, Western science has come around to ascribing a fundamental reality to consciousness because in quantum mechanics, particles do not decide where they're going or even where they have gone until they are forced to do so by a conscious observer. And there's a fundamental reality to consciousness. And so we do see a blending of these two schools.

AUDIENCE: The examples you gave of reverse engineering basically talk about measuring electrical signals or visualizing what's going on in your brain. Those are sort of localized. But what about non-local processes? Consciousness, as I understand it, is hard to localize. But also the brain, as I understand it, produces hormones, which get into your circulatory system and affect your whole body. And I don't see how you're ever going to electronically model something like that.

KURZWEIL: Well, the endocrine system is certainly tied in strongly to our neural system. The chemical messengers are very strongly influenced by the endocrine system. And that's the whole genesis of mood-altering drugs, such as Prozac, and so on.

But those processes are not actually nearly as complex as the neural process itself. And there's no reason why they can't be emulated as well, to the extent that we want to emulate them or need to emulate them. As for consciousness being not a local process, we don't really know what consciousness is. I really don't think it's a scientifically observable process.

But it is true that human cognitive processes are not localized. They are more like holographic processes. Our memories are based on the patterns of connections in a region, and that's the essence of neural net processing. I mean, that's not inherent to human beings, but it is quite different from the way most computers work today, which is sequential and logically organized.

Our sequential, rational functions reside in the cortex-- the folded-up two-dimensional surface, which is a fairly late evolutionary development-- and are a very small part of our neural processing. Most human processing is pattern recognition, which is highly distributed. But we can emulate that in machines as well.
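To make that distributed, connection-based storage concrete, here is a minimal sketch, assuming a classic Hopfield-style associative memory-- my choice of illustration, not anything presented in the lecture. Each stored pattern is superimposed on the same weight matrix, so no single connection holds any one memory, and a noisy cue settles back to the nearest stored pattern:

```python
# A Hopfield-style associative memory: memories live in the pattern of
# connection weights, spread across every connection, rather than at
# any single address -- the distributed storage described above.
import numpy as np

def train(patterns):
    """Hebbian learning: superimpose every pattern on one weight matrix."""
    n = patterns.shape[1]
    w = np.zeros((n, n))
    for p in patterns:
        w += np.outer(p, p)
    np.fill_diagonal(w, 0)          # no self-connections
    return w / len(patterns)

def recall(w, cue, steps=10):
    """Update units repeatedly; a noisy cue converges to a stored memory."""
    s = cue.copy()
    for _ in range(steps):
        s = np.where(w @ s >= 0, 1, -1)
    return s

if __name__ == "__main__":
    stored = np.array([[1, -1, 1, -1, 1, -1, 1, -1],
                       [1, 1, 1, 1, -1, -1, -1, -1]])
    w = train(stored)
    noisy = stored[0].copy()
    noisy[0] = -noisy[0]            # corrupt one unit of the first memory
    print(recall(w, noisy))         # recovers the first stored pattern
```

Deleting any single weight degrades recall only slightly, which is the "holographic" property being described here.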

And a lot of the anti-AI backlash is really focused on just symbolic processing, which is one technique-- a valid technique. But we won't be able to emulate many types of intelligence, particularly the whole area of pattern recognition, by using sequential processing alone.

And I had an interesting discussion with Murray Campbell of the IBM Deep Blue team, because Deep Blue is the ultimate example of sequential processing taken to an extreme. The computer builds up this massive tree of moves and counter-moves. And at the end, it finally has to evaluate the [INAUDIBLE] position, and it does that with a grab bag of rules that they've hand-tuned-- pretty much a hodgepodge of rules.

And I said, why don't you replace that with a neural net? And you can train the neural net on every master game that's been played in this century, which they have on their hard drive. And he thought that was an interesting idea and started to work on that, but then they canceled the project. But it is possible, actually, to use both types of techniques.
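As a rough illustration of the two evaluation styles being contrasted here, the following is a minimal sketch-- emphatically not Deep Blue's actual code; the feature names, weights, and network shape are all illustrative assumptions. The point is simply that a minimax tree search is agnostic about its leaf evaluator, so swapping a hand-tuned weighted sum for a small net of the kind that could be trained on master games is architecturally easy:

```python
# Minimax with a pluggable leaf evaluator: hand-tuned rules vs. a tiny
# neural net. All features, weights, and shapes are illustrative.
import math
import random

FEATURES = ["material", "mobility", "king_safety", "pawn_structure"]

def hand_tuned_eval(position):
    """The 'grab bag of rules' style: fixed, hand-picked weights."""
    weights = {"material": 1.0, "mobility": 0.1,
               "king_safety": 0.5, "pawn_structure": 0.3}
    return sum(weights[f] * position[f] for f in FEATURES)

class TinyNetEval:
    """A one-hidden-layer net standing in for an evaluator that could
    be trained on master games (weights here are just random)."""
    def __init__(self, n_hidden=8, seed=0):
        rng = random.Random(seed)
        self.w1 = [[rng.uniform(-1, 1) for _ in FEATURES]
                   for _ in range(n_hidden)]
        self.w2 = [rng.uniform(-1, 1) for _ in range(n_hidden)]

    def __call__(self, position):
        x = [position[f] for f in FEATURES]
        hidden = [math.tanh(sum(w * xi for w, xi in zip(row, x)))
                  for row in self.w1]
        return sum(w * h for w, h in zip(self.w2, hidden))

def minimax(position, depth, maximizing, evaluate, moves, play):
    """The sequential part: build the tree of moves and counter-moves,
    then score the leaves with whichever evaluator is plugged in."""
    legal = moves(position)
    if depth == 0 or not legal:
        return evaluate(position)
    scores = [minimax(play(position, m), depth - 1, not maximizing,
                      evaluate, moves, play) for m in legal]
    return max(scores) if maximizing else min(scores)

if __name__ == "__main__":
    position = {"material": 2.0, "mobility": 0.4,
                "king_safety": -0.5, "pawn_structure": 0.1}
    print("hand-tuned score:", hand_tuned_eval(position))
    print("tiny-net score:  ", TinyNetEval()(position))
```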

PROFESSOR: We have time for one more question.

AUDIENCE: In your evocation of reverse engineering, you mention a number of strengths that would be gained by implementing algorithms derived from human processing in machines. Presumably, if we did that, lacking a perfect human to copy, we'd also implement pathologies of thought in machines. Have you considered the implications of machine pathologies? For example, socio-pathologies that might in fact paint a less benevolent picture than the benevolent synthesis you have depicted.

KURZWEIL: Well, there are many different scenarios. We wouldn't necessarily want to copy one person unless, in fact, that's what we're intending to do-- to capture a particular personality. But the primary scenario I was articulating was not just to scan the information in a dumb way-- which is basically what we've done today with the genetic code.

We have the entire human genetic code. Actually, we'll have it in about 12 months. But we don't understand it. It's kind of a machine code. And we're trying to do a disassembly of it and understand the source code, but we don't know what the source language is.

But what we really would like to do with the genetic code is understand it. And what we would like to do with the interneuronal connections is to really understand the algorithms. Now, in the one example I gave, Carver Mead's work, they've done exactly that. They looked at the connections, and they were actually able to figure out what the algorithm is and then implement it in silicon. It's very clever work.

And that's really the primary thrust of my argument, which is, we will find that there are hundreds of different specialized regions. The brain is not one tabula rasa-- not just one big neural net that organizes itself. There are very highly specialized regions that deal with vision, or hearing, or music, or memory-- different kinds of emotional regions, too. And each of these has a different structure that is highly specialized for its task.

And it certainly will be a great benefit to understanding human behavior to actually begin to understand how the brain works. And I believe that will be feasible. And you don't necessarily have to see every single connection, because these early vision circuits are very highly repetitive. So after you've seen 50 of them, you don't have to see all 50 million circuits.
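To give a feel for why seeing 50 circuits can suffice, here is a minimal sketch, assuming a center-surround receptive field of the kind found in early vision-- the 3x3 kernel values are an illustrative assumption, not measurements from any retina. One small local circuit, replicated identically across the whole image, does all the work, so characterizing one copy characterizes them all:

```python
# One local center-surround rule, tiled across the image: understanding
# the single repeated circuit is understanding all of its copies.
import numpy as np

# Excitatory center, inhibitory surround (a crude Laplacian kernel).
KERNEL = np.array([[-1, -1, -1],
                   [-1,  8, -1],
                   [-1, -1, -1]], dtype=float)

def center_surround(image):
    """Apply the same 3x3 circuit at every location (valid region only)."""
    h, w = image.shape
    out = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            out[i, j] = np.sum(KERNEL * image[i:i + 3, j:j + 3])
    return out

if __name__ == "__main__":
    img = np.zeros((8, 8))
    img[:, 4:] = 1.0                     # a vertical light/dark edge
    print(center_surround(img))          # strongest response at the edge
```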

So we'll gain a lot of insight into human behavior. And we'll also learn some intelligent algorithms that we can then apply to computer problems.

PROFESSOR: Can we have one last flame? You choose.

AUDIENCE: I'm wondering about your picture of the computers as being pretty subservient-- designed to serve man, and all that. And my impression is that if you get a really intelligent machine, you have to give it a lot of autonomy-- that's the way you get information from [INAUDIBLE] and into it. Keeping a machine like that subservient may be possible, but I--

KURZWEIL: I never said that they're subservient. I said, apparently subservient.

AUDIENCE: Okay, but anyway, I expect that they may try to set legal terms so that we won't let machines be completely autonomous-- so you'll have to have this human interface, so that someone takes responsibility for the machine. But I don't think that's practical. I suspect that we'll get very, very intelligent machines that will be very autonomous, and the question of whether we want to give them human rights is going to be extremely moot-- we won't decide. It will be decided by the machines.

PROFESSOR: Would you mind repeating that question? Repeating the question for the--

KURZWEIL: Well, the question dealt with whether or not machines would remain subservient, and whether or not we'll want to give them human rights. And the questioner pointed out that our deliberations won't really be necessary, since the machines won't really care about them. I think computers will present a benevolent and subservient face. But that could be as much a tactic as a strategy or a reality.

It is generally the case that the more intelligent organism, or species, or civilization-- or the civilization with the superior technology-- has been dominant, for better or for worse. So intelligence, whatever else it is, is a form of power. Knowledge is power. And machines will have power consistent with their capability.

But again, it's not an us-versus-them nexus. I think human beings-- we will enhance our abilities through this type of technology, and we will be part of that future. It's a combined civilization, really, which ultimately will merge human and machine capabilities. Thank you very much.