I'm trying to write about whatever I'm currently thinking about on this blog, but with the holiday, I've been mostly thinking about how relaxing it is to have some vacation. And, this Wednesday, I leave for San Francisco to visit Sanna's family (she's my girlfriend) and some friends. So, I want to make a post before I go, so that you don't think I've given up on this whole blog thing. Rather than talk about hanging out on the fourth, or politics or libertarianism or something else that's just gonna get me fired up and ruin the end of the weekend, I'm going to talk about what might be the most important thing in the world to me, outside of people: the singularity.
The singularity is the name given to a broad collection of theories concerning the future development of technology and refers to the idea of a critical mass of technology beyond which lies an explosion in human intelligence and technological capabilities. The term was coined about 25 years ago by mathematician and science fiction author Vernor Vinge to describe a startling trend in computer science: more and more, computers were performing basic tasks in place of humans, including circuit design and compiling executable code. Humans had to know and do fewer basic tasks as computers became able to do them. So, Vinge asked, what if this trend continues until all human thought is basic enough that computers can be made to do it?
Before I get too deep here, I'd also like to point out my influences in this area for context. I first encountered the singularity reading Ray Kurzweil's The Age of Spiritual Machines in college. He's probably the most optimistic singularitarian, and while I think his projected timeline may be a bit ambitious, he does serve as a deep inspiration to me. I also had the fortune to study under Marvin Minsky while in graduate school at the Media Lab, including taking his class on cognition and computation (for which I received an A+ :)). From a technical and philosophical standpoint, I identify closely with him, and feel his work on cognition and AI is the most significant relative to the singularity. I highly recommend his books The Society of Mind and The Emotion Machine for a lay (but rigorous) examination of consciousness, its origins, and how one might go about implementing a system to possess it. I'll also note that, in poor form and against Marvin's recommendations, I'm going to use the term consciousness because it's easy. Really, it's a dangerously overloaded term and shouldn't be used in a scientific context. Just know that when I refer to consciousness, I'm speaking exactly of cognition and personality, not being awake, or anything religious or classical psychologists mean.
There are many possible answers to the question of the extent to which computers can replicate or replace human thought, and of course you have to accept that its premise is valid to get started. If you believe in some sort of soul or other mystical force that makes humans somehow totally unique and outside the physical basis of the universe, you probably disagree with most singularitarians, and many of the things I'll say on this blog. But, if only to better understand your enemy, you can follow my thinking by knowing the two postulates about humans and cognition to which I subscribe:
(1) The brain is a finite system subject to the same physical laws as all other matter in the universe
(2) Everything that composes what we think of as a person's character or consciousness or whatever you want to call it arises solely from the brain
(NOTE- I'm going to use the word "brain" a lot without loss of generality. It may be that other parts of the central nervous system or body matter for consciousness, but we don't know for sure yet. In the end, adding a few more body parts to these postulates does not affect the validity of the argument. So, I'm going to keep it simple and just say brain.)
Postulate 2 implies there is no soul or animating force beyond biology. That may not sit well with you, especially if you're religious/spiritual/whatever, but I think it only increases my wonderment at existence. It's too easy to just invent all these mystical, infinite, unknowable things like spirits and grand designs and gods to explain things. Such simplifications detract from the stunning processes that actually govern the universe in verifiable manners. But I'm not here to convert you to atheism or defend it. I'm here to explain the singularity, which doesn't even need to imply anything religious in some interpretations. I'm just trying to show you where I'm coming from so you can understand my thinking.
So, from those postulates, we can begin making some projections about how far computers can go in helping humans perform tasks. If the brain is finite, we can model it. And if the sum of a person's consciousness is their brain, then we can model a person.
If you're reading this, you already use technology to do a lot of basic tasks for you. You're accessing my words without having to go pick up a letter at the post office, or if we're even more primitive, walking to my apartment to listen to me talk. Language itself is a form of technology, and one that was a long time coming: of Earth's roughly 4.5 billion years, language with modern linguistic characteristics has existed for only the last few tens of thousands.
You probably also don't do much arithmetic anymore, using calculators and spreadsheets to do math. You probably don't write by hand as much either, instead typing everything on a computer. All of these things are examples of technology we have created to replace basic, repetitive work we didn't want to do.
So, then, why should we stop applying technology more and more to help us? Are we going to wake up someday and decide we have enough convenience, enough tools to help us improve our lives? I don't think so. Are we going to run into some barrier or insurmountable problem we'll never be able to solve? Maybe, but I think it's unlikely. Humans have shown an incredible ability to think our way past seemingly insurmountable hurdles. So if we're at least going to try to continue to improve life with technology, what would that look like?
Here's the basic plot for the singularity: through a combination of hardware capabilities, software implementation, and further understanding of the biological and chemical basis for human intelligence, we are able to create a computer program which is as smart as a human. That's the fundamental characteristic of all singularity theories. How this happens and what else it enables is a vast field of speculation that forms a lot of the talk about the singularity now.
There are lots of ways we might get there. It could be on purpose, through groups like the Singularity Institute who are expressly dedicated to doing it. It could happen so gradually we don't even notice it, as people more and more begin incorporating technology into their lives and even bodies (if you are deaf, you can already get highly effective cochlear implants that wire directly into your auditory nerve). Or it could be an unguided machine like Google's web crawler developing so many connections and having so much information that intelligent behavior spontaneously arises. No one knows. Not everyone even believes it's possible. But, if we accept that the brain is a finite system, there is a strong possibility we can model it and emulate the intelligence of a human.
The implications of doing so are too many to address in a blog, but I'll note some. First, a computer as smart as a human will inevitably become smarter than a human, because it will be able to program itself as well as any computer scientist could. This is the concept of Seed AI, which shows how rapidly intelligence will grow once we reach that basic level.
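To make the Seed AI intuition concrete, here's a toy sketch (my own illustration, not anyone's actual model): if an AI's ability to improve its own code scales with its current intelligence, each generation multiplies the last, and growth is exponential. The 10%-per-generation factor and the generation count are arbitrary numbers chosen purely for illustration.

```python
# Toy model of the Seed AI idea: an AI whose rate of self-improvement
# is proportional to its current intelligence grows exponentially.
# The improvement factor (1.1) and generation count are illustrative only.

def self_improve(intelligence, generations, factor=1.1):
    """Return the intelligence level after each generation of self-rewriting."""
    history = [intelligence]
    for _ in range(generations):
        # A smarter AI writes a proportionally better successor.
        intelligence *= factor
        history.append(intelligence)
    return history

levels = self_improve(1.0, 50)  # start at "human level" = 1.0
print(f"After 50 generations: {levels[-1]:.1f}x the starting level")
```

Even a modest 10% gain per generation compounds past 100x the starting level within 50 generations, which is the basic reason people expect the takeoff to be fast once that first human-level threshold is crossed.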
Second is the idea of uploading and virtual immortality. If we can completely model a human brain in a computer, and we can scan a biological human brain well enough to input it into this model, then we can load a copy of an individual human into a computer. You can imagine this as effectively a prosthetic brain. As you age and your biological brain's physical medium begins to break down, you can replace it with a more stable technological medium, much as you might replace a lost limb or lost sense of hearing today. A radical idea, but one I accept as possible and the ultimate goal of the singularity.
There are all sorts of moral implications to uploading. Do you still have the same rights as a biological human brain? Are you the same person? To help answer them, let me summarize an argument by Kurzweil.
Suppose you were in an accident and lost your hearing. You get cochlear implants in your nervous system to restore it. You're still the same person, even though you have some technology wired into your brain, right? Then suppose you lose your vision and replace that with a set of cameras wired into your brain. Still the same person, right? 15 years later, you begin developing Alzheimer's, and get a memory prosthesis implanted in your brain. Now your memories are stored on a chip. But they're still your memories, and you're still the same person. Then, you get a math processor implanted in your brain to help you do basic calculations. Still the same person, except you can use the calculator in your brain without having to tell your fingers to push buttons, right? This continues, until biology accounts for very little or none of your actual brain. Are you still the same person? I'd say yes, but if you say no, then tell me, when did you stop being you? And if you have a soul that somehow prevents that technology from actually being you, when did your soul leave?
Tough questions to answer, especially if you're coming from a religious or anti-singularity point of view. Personally, I accept the simple solution that you are still the same person, and should be afforded all rights given to purely biological people. But, I'm sure that won't stop outcry from various religious or conservative humanist groups. I wouldn't even be surprised to see war or terrorism about it. But fundamentally, I think there is no difference between biological or technological people, and I don't doubt that we will have both in the future.
Another question about the singularity is when it will happen. Estimates vary wildly, from Kurzweil, who thinks less than 30 years, to others who think centuries or millennia from now, or even never. And of course all of this supposes we are mature enough as a species not to kill ourselves off on the way there. This post is long enough, so I won't get into details, but I'll mention that Kurzweil makes compelling arguments related to the strong exponential trend in technological advances, even going as far as showing how technology is really just an extension of evolution. Moore's Law is an illustration of this exponential advance in computing power, and Kurzweil's full timeline demonstrates how it dovetails nicely with the evolution of the universe. Personally, I think the singularity will occur in my lifetime, likely within 30-50 years (and I wouldn't be surprised if medical advances extend my life expectancy to 100 years or more, independent of the singularity; potentially much more than 100).
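To show what that exponential trend looks like in numbers, here's a quick back-of-the-envelope sketch of Moore's Law, the empirical observation that transistor counts double roughly every two years. The starting figure (about 2,300 transistors on the Intel 4004 in 1971) is a commonly cited round number, and the two-year doubling period is the usual rule-of-thumb form of the law, not a precise fit.

```python
# Back-of-the-envelope Moore's Law projection: transistor counts
# doubling roughly every two years. Inputs are rounded rule-of-thumb
# figures, not exact historical data.

def transistors(start_count, start_year, year, doubling_period=2.0):
    """Project a transistor count forward assuming steady doubling."""
    return start_count * 2 ** ((year - start_year) / doubling_period)

# From ~2,300 transistors in 1971, steady two-year doubling projects
# into the hundreds of millions by the mid-2000s -- the same order of
# magnitude as chips actually shipping around then.
projected_2006 = transistors(2300, 1971, 2006)
print(f"Projected transistors in 2006: {projected_2006:,.0f}")
```

Thirty-five years of doubling every two years is about 17.5 doublings, a factor of roughly 185,000, which is what makes exponential trends so counterintuitive to extrapolate by gut feel.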
I could go on about the singularity a lot more, from any number of technical and philosophical tangents, to what I'm actually working on to try to help. Ask me sometime about it if you like. I'll also probably write more posts about various aspects of it in the future. But I've summarized it here, at least as I see it, and it is very important to me. It's not the Matrix or science fiction, but a legitimate area of research involving hundreds or thousands of well-respected scientists. It's not a religion, and I don't need any faith to accept it, but it might be as important to me as religion is to other people. It's what got me interested in AI, set me on the path to a lifetime of learning about AI and cognition, and gives me hope for the future of the human race.
For more information on the singularity, I recommend Wikipedia surfing from the singularity entry, and the IEEE site for the singularity.