I'm currently suffering from another attack of RSI, which gives me very little time to do any writing. Given that I have an ever-lengthening backlog of blog posts I want to write, Rachel and I are teaming up to write them together. This is the first of them; it is also posted at Bellis Perennis. More will hopefully follow soon. Many of them, like this one, will be collaborations, but the occasional post will just be a case of Rachel donating her wrists to my cause.
(This post has some mild spoilers for Prometheus.)
The film itself was not very well received, but most people agree that Fassbender cuts a very fine figure as David. Tall, blond and virile in appearance, David is stronger, tougher and probably smarter than the rest of the crew. Being a robot, he also displays an unshakeable calm and constant empathy. He is furthermore well-dressed and well-spoken, and can mix the perfect martini.
Early in the film, some of the crew prepare to go outside the spacecraft, pulling on protective suits. David also dons one and is asked by another character why he bothers, given that he doesn't need to breathe. His reply is that not putting on a spacesuit would dispel the human appearance with which he was designed. "You humans deal better with your own kind."
That is to say, David's appearance is a deliberate skeuomorph. Just like MP3 players that mimic the old technology of the hi-fi, or the artificial shutter clicks of digital cameras, his appearance is a message saying, "You can interact with me just as you would with a human being."
In fact, the crew spend a lot of time making jokes at David's expense and pointing out that he is just a robot. This constant hostility would seem a bit strange if directed at the paper towel dispenser or the fridge. The skeuomorphism has had unintended consequences. When they look at David, the crew see someone with all the markers of a person in charge. Historically, in their culture (and ours) at least, it is tall, virile men who have been the leaders, and David, the closest thing to a son that the tycoon Peter Weyland ever had, has been designed with all the qualities of a man in the top tier of society. As a robotic servant and drinks-mixer, he is at the bottom of the hierarchy. If he were human, though, he would be suited to any of the positions above him; thus everybody views him as a threat. The crew unconsciously see him as human and constantly put him down and humiliate him to impede his rise.
In building David, his designers had supposed that his human shape would cause people to treat him with respect and value him. After all, don't we believe in the intrinsic dignity of humanity? Each human being is inalienably granted a value of exactly 1.0 units—units which may not be compared with more mundane quantities, such as money, without committing a great breach of ethics (or at least etiquette). His creators had assumed that this idea was so ingrained as to be unconscious, but nothing could be further from the truth. Human beings find it all too easy not to value other people at all, and to care deeply about pets and inanimate objects. We care about what is close and known to us. Is it surprising that someone might be more concerned about the dog that has been their companion for ten years than about a stranger?
This is not some moral aberration but simply how people's emotions work. Of course universal human dignity is a noble idea and one that would ideally be ingrained in everyone. In some highly contrived scenario, where you would have to choose between the life of your faithful dog and that of a stranger, you would be expected to save the stranger's life, but you could not be faulted if your first impulse was to do the opposite. The danger lies in assuming that people will always act in this moral way without prompting, especially where the ethics of the situation are obscured. We are very good at justifying acting the way we want to—even if it goes against what we believe we believe.
***
Perhaps, then, it would be better not to make David look like a human—but what is the point in making him think like a human? The really amazing thing about David is not the way he looks, or that he's very strong, or that he doesn't need to breathe: it's his ability to think and learn like a human being.
Put too simply, there are two kinds of artificial intelligence we are working towards today. The first is the expert system, a computer program for reasoning about some specific topic. You might have an expert system for handling a company's logistics, or one for diagnosing illnesses given a list of symptoms. Computer programs can be much better than human beings at these tasks: they don't forget things; they can keep track of an arbitrarily large number of variables simultaneously; they are very patient, free of prejudice, and will always update their conclusions when new information becomes available. Studies have shown that the kind of diagnosis you get from a human doctor really depends on the year they left medical school. Doctors tend not to keep up with new medical science that emerges while they are practising, which is no big surprise given how overworked most of them are. Still, we haven't replaced our doctors with robots, because robots have a terrible bedside manner. The lack of fresh knowledge in older doctors can also be offset by their knowing their patients better and having earned their patients' trust.
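To make the idea concrete, here is a toy sketch of such a rule-based diagnostic system. The conditions, symptoms and scoring rule are all invented for illustration; a real expert system would be vastly more sophisticated.

```python
# A toy rule-based "expert system" for diagnosis. The diagnoses,
# symptoms and scoring are invented purely for illustration.

RULES = [
    # (diagnosis, symptoms that suggest it)
    ("common cold", {"sneezing", "sore throat", "runny nose"}),
    ("influenza",   {"fever", "aching muscles", "fatigue", "sore throat"}),
    ("hay fever",   {"sneezing", "itchy eyes", "runny nose"}),
]

def diagnose(observed):
    """Score each diagnosis by the fraction of its indicators observed."""
    scores = []
    for diagnosis, indicators in RULES:
        score = len(indicators & observed) / len(indicators)
        scores.append((score, diagnosis))
    # Unlike a human expert, the program weighs every rule, every
    # time, never forgets one, and never plays a hunch.
    return sorted(scores, reverse=True)

print(diagnose({"sneezing", "runny nose", "itchy eyes"}))
# Ranks hay fever first, then the common cold, then influenza.
```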
The other kind of artificial intelligence research tries to create a computer program whose behaviour is indistinguishable from that of a human being. The generally accepted measure of this is the Turing test: a human being communicates with some entity via text chat and must decide whether it is a human or a computer. Any question at all is allowed, from clever word puzzles to intimate questions about feelings. David could clearly pass the Turing test.
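The shape of the test is simple enough to sketch in a few lines of code. The candidate below is a trivial stand-in with canned deflections; building one that survives real interrogation is the entire research programme.

```python
# A skeletal Turing test session. The candidate is a hypothetical
# stand-in, invented for illustration only.

import random

def candidate_reply(question):
    # Canned deflections; these would not survive a single
    # clever word puzzle from an attentive judge.
    return random.choice([
        "That's an interesting question. Why do you ask?",
        "I'd rather hear what you think about that first.",
        "I'm not sure I can put that into words.",
    ])

def interrogate(reply_fn, questions):
    """The text-chat phase: any question at all is allowed."""
    return [(q, reply_fn(q)) for q in questions]

transcript = interrogate(candidate_reply, [
    "What does the word 'home' mean to you?",
    "Describe a time you felt embarrassed.",
])
for question, answer in transcript:
    print(f"Judge: {question}\nEntity: {answer}\n")
# The judge must now decide: human or computer?
```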
Why do we want to achieve this? Such a program could perform all kinds of subtle cognitive work: legal casework, the duties of a personal assistant or secretary, and jobs that are tedious but require the mental flexibility to deal with the unexpected. As science fiction writers through the ages have pointed out, however, this leads to a massive ethical dilemma. We would create what are, to all intents and purposes, human beings and then compel them to work for us. The idea of human dignity, unless tied awkwardly to conditions such as having a flesh body, would require us to grant these artificial intelligences the freedom to choose their own work and the right to be paid for it. At that point, why not use human beings? We know how to make those.
Or maybe we want to create humanlike artificial intelligence to learn more about human nature in the process. As we develop computer programs that are humanlike in some way, we can test our assumptions about what makes a mind. Once we create a computer program so similar in behaviour to us that we can't tell the difference, then looking at its structure should tell us a great deal about ourselves.
Of course, this presupposes that there are no major shortcuts to behaving like a human being. What happens if we create an AI completely indistinguishable from a human being, whose program clearly lacks any part that could be labelled 'consciousness'?
Halfway through Prometheus, David asks one of the crew members why it is so important to him to meet the alien race they have come to the planet to find, the race that originally engineered human life. He replies that they need to ask why human beings were created. To what purpose and in what spirit? Humanity deserves an answer.
"Why did you create me?" asks David.
"Because we could," the crew member jeers.
David, who has been reminded throughout the mission that he cannot feel real emotions, replies quietly, "Imagine how disappointing it would be for your creator to say that to you."
But what would a good answer be? Would it be preferable to be told that you had been created for a specific purpose? That you are really just an expert system? We want meaning to flow from creator to creation but are unable to define what such meaning would look like.
Maybe we want to create artificial intelligence that is not quite like us, to offer us some perspective. Do we think and feel the way we do because of accidental details in our nature? Meeting an entity just as smart but different would shed light on this, and in the absence of sentient space aliens, artificial intelligence might be our best bet. We can only hope that we will recognise its intelligence despite its differences from our own.