“A great science fiction detective story” - Ian Watson, author of The Universal Machine
Yesterday we began this series, now informally dubbed Robot Sex Week.
We started with a look at David Levy and his thoughts on sexual relations between humans and robots, and even the possibility that we might form loving attachments to synthetic humans.
Today it's time for some counterpoint, at least with respect to the love. (I promise we'll get back to the sex very soon.)
Meet Dylan Evans, a British academic and author. Evans has written several popular science books and is the founder and CEO of Projection Point, a company that designs risk intelligence training programs for corporate clients.
Evans has written one chapter in a book called Close Engagements with Artificial Companions: Key Social, Psychological, Ethical and Design Issues.
The chapter is entitled Wanting the Impossible: the Dilemma at the Heart of Intimate Human-Robot Relationships and confronts David Levy pretty directly, mano a mano as it were, on the issue of love with homo artificialis.
Very kindly, he's put the uncorrected proofs for his chapter online in their entirety (13 pages) so you can read what he has to say without having to pay US$149.00 for the book.
The opening summary sets out his terms:
In a recent book entitled Love and Sex with Robots, the British scholar David Levy has argued that relationships with robot Companions might be more satisfying than relationships with humans, a claim which I call “the greater satisfaction thesis” (GST). The main reason Levy provides in support of GST is that people will be able to specify the features of robot Companions precisely in accordance with their wishes (which I call the total specification argument or TSA). In this paper, I argue that TSA is wrong. In particular, the argument breaks down when we consider certain behavioral characteristics that we desire in our partners. I illustrate my argument with a thought-experiment involving two kinds of robot – the FREEBOT, which is capable of rejecting its owner permanently, and the RELIABOT, which is not.
Evans concludes that the Reliabot (I can't go along with the use of all upper-case letters) doesn't give us what we crave in a relationship:
People often say they want their partners to be reliable, faithful, always there for them, never to leave them, and so on. But they want these qualities to be the fruit of an active and ongoing choice. The most effective way to signal that there is a real choice involved here is for the partner to drop hints that there is a genuine possibility that they could leave, if they ever wanted to. So, paradoxically, for people to feel secure that their partners freely choose to be with them, and not with anyone else, they must occasionally be made aware of the partner’s freedom by occasional rejections (huffs, moods, and so on), and by the occasional sign that one’s partner finds other people attractive too. It can be very painful when one’s partner is grumpy, or seems attracted to someone else, but it is also strangely compelling.
At the same time he argues that the Freebot, which has the liberty to reject us just as a human would, has nothing to offer that human partners can't give us already.
It's a tidy little dichotomy, and without having read Levy's book I can't say for certain whether it convincingly refutes his position.
But is that enough? Putting Levy aside, does Evans' argument stand on its own? Crucially, the argument can't yet be tested, so let's imagine that it could be.
The fulcrum of the entire argument — and again, to be fair, this may derive from Levy — is that the attraction of the robot is its lack of free will. But if we imagine a time (possibly distant, possibly not, depending on your view of the technology) when a type of robot exists that could test Evans' theory, is unwavering obedience really the only thing it would have to offer us?
If the robot isn’t just a sophisticated sex doll (apologies for that wording to any Real Dolls out there, at least one of whom I know reads this page), but comes with a sufficiently advanced artificial intelligence, it could offer its human partner an intellectual challenge that is specific to his or her interests and pitched at an appropriate level — neither so high as to be insurmountable, nor so low as to be boring. Would that be enough to begin to evoke a genuine emotional reaction?
Imagine the companion and teacher that a robot might be if, in essence, it embodied all the information on the internet, but all that data about the world was mediated by an artificial personality. Isn't that something to which a person might begin to have a deep emotional attachment?
Now include within that mediating personality a set of realistic facial expressions, gestures, vocal inflections, and other non-linguistic forms of communication that tend to evoke emotional reactions in all of us — how about now?
My argument isn't that any of these examples is sufficient — this isn't an academic paper, so I'm not treating the question with complete rigour. My point is simply that I believe Evans' argument is too simplistic.
I say thanks to Dylan Evans for raising the stakes, and certainly it's good to hear an intelligent, serious argument against Levy's position, but I'm not convinced yet.
Tomorrow: “Sex times technology equals the future” - J.G. Ballard