http://www.youtube.com/watch?v=As0-8Y7y1N4
The new movie Her is just one of many in which a mechanical or electronic construct becomes a character in a human’s story. HAL 9000 in 2001: A Space Odyssey, Commander Data, HARLIE, the robots of Lost in Space and Forbidden Planet, Asimov’s robots, and a hundred less-memorable movies and TV shows.
Okay, maybe Julie Newmar was memorable, but for other reasons.
Her carries it a little further: the main character falls in love with the personality that serves as the front end for a new operating system. They eventually consummate their love in what is supposed to be a rather steamy, and apparently mutually satisfying, episode that gives a whole new meaning to “phone sex.” (I say “supposed to be” because I haven’t seen the movie yet; in any case, this isn’t a review of the movie.)
So here’s a question for you: when Samantha, the operating system’s personality, has an orgasm, is it real or is she faking it?
Expressed a little more generally, it’s the same question Alan Turing started asking in 1950 in his famous paper “Computing Machinery and Intelligence,” which begins with:
I propose to consider the question, “Can machines think?”
Turing begins by describing a game, called the “imitation game,” which has become known as the Turing test. In his first example, an interrogator is interacting with two people, a man and a woman; the game is to decide which of them is male and which female. The trick is that one of the two may lie, and attempt to convince the interrogator that he is she, or she is he.
Turing goes on to make a comparison between the (then very new) idea of a digital computer and the human profession of “computer” that was common at the time: people, skilled at arithmetic, who followed written procedures to evaluate mathematical functions. Digital computers were quite effective at imitating the actions of human computers, and Turing wonders if they could also imitate more complex actions. So he proposes a variant of the imitation game — this is what we actually now call the “Turing test” — in which the interrogator is interacting through a teletype or computer screen with two others: one of them is a real person, the other is a computer, and now the computer has been instructed to attempt to pass as a human. The computer can choose to do an arithmetic problem wrong, can claim no knowledge of or ability at some task like writing poetry, and in particular would answer questions like “are you conscious?” by saying “Hell yes, I am! Who the hell do you think you are!” much as a human would.
The test comes down to this: after some indeterminate time, an observer asks the interrogator to decide which of the two entities he’s been chatting with is the computer and which the human. If the interrogator can’t tell, or chooses wrongly, the computer is said to have “passed the Turing test.”
The question then is whether that is really thinking or not. Or, to ask a related question, is that computer conscious or not?
Turing considers this in his paper — and I recommend you read it, as it disposes of all the usual arguments rather neatly, including, presciently, Gelernter’s. It’s not really mathematical at all, and Turing was a good writer.
There are a number of counters to Turing’s argument. The most famous is John Searle’s Chinese Room. Searle proposes this experiment: we have a room, and inside this room is a man with a (very complicated) table mapping phrases written in Chinese to appropriate replies, like
你好吗？→ 很好，谢谢！(“How are you?” → “Fine, thanks!”)
Now, you don’t need to understand that, as long as you know that if someone hands you a slip of paper under the door that has the glyphs 你好吗？on it, you should hand back a piece of paper with the glyphs 很好，谢谢！on it.
With a sufficiently complicated table of rules — which is all a computer program is — and a sufficiently diligent, obsessive, and hermitic occupant of the room, a native Chinese speaker might carry on an extended conversation with the Chinese Room and walk away convinced that the occupant actually speaks Chinese.
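That rule table really is all a computer program is, at bottom. Here’s a minimal sketch of the room in Python — the phrases and the fallback reply are illustrative stand-ins for Searle’s (enormously larger) table, not anything from his paper:

```python
# Searle's Chinese Room as a pure lookup procedure: the occupant
# matches incoming glyphs against a rule table and hands back the
# prescribed reply, with no understanding of what any of it means.
RULES = {
    "你好吗？": "很好，谢谢！",            # "How are you?" -> "Fine, thanks!"
    "你叫什么名字？": "我没有名字。",      # "What's your name?" -> "I have no name."
}

def chinese_room(slip: str) -> str:
    """Return the reply the rules dictate; a stock phrase otherwise."""
    return RULES.get(slip, "请再说一遍。")  # "Please say that again."

print(chinese_room("你好吗？"))  # → 很好，谢谢！
```

A real room would need rules conditioned on the whole conversation so far, not just the last slip of paper — but the principle is the same: lookup, not comprehension.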
Searle makes the claim that we wouldn’t then say that the room spoke Chinese, and we wouldn’t claim the room was “thinking” or “conscious.” But that, it seems to me, just raises the question: how do we know if anyone besides ourselves is conscious? I know I’m “conscious” — there’s someone here watching as I type and compose these words. But what about other people? (Be honest, haven’t you wondered if some of the people in Washington are actually “conscious”?)
Let’s try to answer that with another thought experiment, called the philosophical zombie. In this thought experiment, a zombie is an entity who looks and acts exactly like a human but is not conscious: he is capable of everything a human does, including extended philosophical discussion of consciousness, but doesn’t have the experience of consciousness that we have. What’s more, this zombie doesn’t believe in consciousness: he will argue at length that not only is he not conscious, but you aren’t either.
Now, how will you prove to the zombie — and to yourself — that you really are conscious?
Gelernter’s piece in Commentary basically answers using the famous mathematical proof method of repeated vehement assertion while insulting your opponents’ intellect and morals. It’s fun to read but scientifically unsatisfying, and leaves open the question: how can we test whether someone is “conscious” in a scientific — which is to say experimental — fashion?
Turing’s answer is to say that if you can’t propose an experiment that will reliably let you tell the human from the sufficiently facile computer program, then you must assume that the program is thinking, is conscious. And until someone can propose such an experiment, I suggest that we have to say scientifically that a “thinking machine” is just as conscious as we are.
If your computer program says she came, you can believe her.