Issue 1: Synthetic Intimacy, Robophobia, Robot Arts, and Artificial Evil (March 2015)


Contents


(1) H. Artificialis: Handmade Humans


(2) Robophilia: Synthetic Intimacy and the Erotic Turing Test


(3) Robophobia: Enhanced Workers and Killer Robots


(4) Artificial Evil: Red Flags and Storm Warnings


(5) Creating and Critiquing: H. Artificialis and the Arts


(6) Expanding the Umwelt: The Reality Behind the H. Artificialis Video Logo


This page brought to you by IndieBookLauncher.com. Click to find out more.


(1) H. Artificialis: Handmade Humans

Introduction

Since this is the inaugural issue of Homo Artificialis, it may be worth describing very briefly what you’ll find here. The contents of this article will remain available (updated from time to time) on the About Page.


The Science and the Culture of H. Artificialis

The human species, in its wholly natural and unaltered form, is formally known as Homo sapiens, the only surviving member of the genus Homo, which includes extinct species like the Neanderthals (Homo neanderthalensis).

As the masthead says, HA focuses on the science and culture of artificial humanity, which we designate Homo Artificialis. This includes any human whose consciousness operates in an artificial body, called an artificial instantiation, as well as any human-like consciousness that is created synthetically, which is called an artificial consciousness (sometimes also referred to as an artificial intelligence).

In the scientific half of “science and culture,” HA looks at current science, which is increasingly embedding technology in the human body, replacing or augmenting body parts and bodily functions with technological artifacts and synthetic processes. This raises the future possibility of entirely artificial human bodies, as well as wholly synthetic human-like consciousnesses.

Topical areas include artificial intelligence, robotics, synthetic affect (artificial emotions), whole brain mapping, brain-computer interfaces, regenerative medicine, nanomedicine (and other relevant applications of nanotechnology), biomimesis, tissue engineering, and artificial organs.

On the cultural front, HA looks at the portrayal of artificial humanity in the arts, the use of synthetic processes in the making of art, and notions of how art itself may evolve to accommodate and reflect increasingly artificial humanity.

Cultural topics include traditional and emerging forms of artistic expression, the artistic process, specific artists and artistic collaborations, and social and critical reactions to artists and works of art.


Two Branches of Artificial Humanity

H. Artificialis is a species with two branches.

An artificial instantiation is any natural consciousness that operates using an artificial body.

Examples of this kind include Takeshi Kovacs in the novels of Richard K. Morgan, as well as the kind of real-world uploaded human consciousness anticipated and popularized by Ray Kurzweil (among many others).

An artificial consciousness is a consciousness that is synthesized by artificial means rather than evolving in a natural setting.

Fictional examples include the replicants in Blade Runner, the HAL 9000 from 2001: A Space Odyssey, and Lieutenant Commander Data from Star Trek: The Next Generation. Real world examples don’t exist yet, but tentative attempts at them can be seen in constructs like Deep Blue (the computer that defeated world chess champion Garry Kasparov), Watson (the computer that defeated two prominent Jeopardy winners), and Siri (the iOS personal assistant with a natural language user interface).

Artificial humanity is beginning to be discussed in a serious way as a potential real world phenomenon. Meanwhile, in fiction it’s become almost commonplace. Since each approach illuminates particular aspects of the topic, it seemed useful to have a publication that engages with both.


In This Issue

This is an oversized issue, just to start things off with a bang.  Each article is intended to begin a discussion on a topic which is likely to come up again in future issues.

“Robophilia: Synthetic Intimacy and the Erotic Turing Test” broaches the issue of intimate interaction with artificial persons–not just sexual intimacy, but emotional as well.

“Robophobia: Enhanced Workers and Killer Robots” looks at some of our longstanding fears about automation, and the logical extension of automation, the artificial person.

“Artificial Evil: Red Flags and Storm Warnings” recaps warnings about the potential dangers of artificial intelligence from some prominent thinkers and entrepreneurs, with some contextualization.

“Creating and Critiquing: H. Artificialis and the Arts” takes a look at a couple of the ways in which HA is making its presence felt in the art world.

Finally, “Expanding the Umwelt: The Reality Behind the H. Artificialis Video Logo” goes behind the brief video Future Thoughts #1 to examine the reality of synthetic sensory perception.



Browsing the H. Artificialis Library

HA includes a library of documents that can be downloaded free of charge. Each document is relevant either to HA topics at large, or to a particular article.

Click image to go to the Homo Artificialis Library (HAL), where you can download this and other free papers.

When a paper that’s contained in the library is mentioned in the course of an article, you’ll see a banner like the one below. Clicking the banner will open a new tab or window and download the paper in PDF format there.

A sample document banner. Real banners will download the relevant document in a new tab or window.

On the other hand, if you just want to go to the library and browse, you can get there by clicking the “Library” tab at the top of this page, as shown below.

Browse the library by clicking the tab.


Feedback, Submissions, and Social Media

So, welcome to HA. If you have thoughts on its content, design, or future direction, or if you want to propose an article, feel free to get in touch at nas@homoartificialis.com.

If you want to follow HA on social media, you can find it on Facebook and Twitter.

You can also subscribe to Homo Artificialis so that you’ll be notified of each new issue. Just click the button underneath “Subscribe by Email” in the column on the right.


>> Click to Return to Table of Contents <<



(2) Robophilia: Synthetic Intimacy and the Erotic Turing Test

Introduction

In practical terms, our technologies have already taken us well into the grey area where wholly natural humans no longer have an exclusive claim on human society and culture. We extend our senses, our physical capacities, and our minds, by wearing glasses, using artificial limbs, surviving on pacemakers, hearing and seeing via cochlear and retinal implants, “remembering” on our computer’s hard drive, and remotely visiting distant parts of the world over the internet.

As for the future, there are emerging technologies that we can predict will be absorbed into our humanity in the foreseeable future, and there are others we speculate about that may become part of us in the longer term.


Synthetic Humanity + Sex

Everywhere our technology goes, our sexuality goes too. Sexual content has been a significant driver of a series of technologies, including Super 8 movie cameras, videotape, DVDs, and the Internet (see, for instance, Jonathan Coopersmith, “Pornography, Technology, and Progress,” ICON 4 (1998), 94-125, or On The Media, “Sex and Technology,” Friday, November 29, 2002).

The idea of synthetic humanity is, by itself, inherently provocative, raising the uncomfortable specter of something that is other but not other.  It may be like us, but have capabilities that exceed ours.  We worry that it may resemble us but not have emotions, or not have a soul. And we fear that it may replace us.

And sex and intimacy–even just between humans–may be more provocative still, despite being among the things most central to our lives.

Combine artificial humanity with sex and intimacy and you get something compelling.

Hollywood’s discovered this–witness films like Her and Ex Machina (trailers embedded below).

In the former, Joaquin Phoenix develops an emotional attachment to his new, artificially intelligent operating system, voiced with a flirtatious dash of vocal fry by Scarlett Johansson.

In the latter, Domhnall Gleeson wins a week with his boss, a rich, reclusive computer developer who has instantiated an AI in a feminine robot body, and who wants to find out whether his employee will respond to it emotionally despite its overtly artificial form.


Her (2013)


Ex Machina (2015)


But intimate interaction between humans and synthetic creatures isn’t simply a compelling plot device. As is often the case, the idea is turning up in commercial films because it’s established a foothold in public discourse, including serious treatment as a real phenomenon.

An artificial intelligence is said to pass the Turing Test if it can convince a human being (who is prevented from knowing in advance whether they’re communicating with a person or an AI) that it’s human, simply by answering questions put to it by the human subject.

An erotic Turing Test, then, is a comparable assessment of a synthetic person: can it convincingly stand in for a natural human as a romantic or sexual partner, at least to the extent of provoking reactions in Homo Sapiens comparable to those a natural partner would provoke?
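Framed as an experiment, the classic test is simple enough to sketch in code. Here’s a minimal sketch of a blinded trial harness–the respondent objects and their ask/classify methods are invented names for illustration, not any real testing framework:

```python
import random

# A minimal sketch of a blinded Turing-style trial. The respondent
# objects and their ask()/classify() methods are invented for
# illustration, not part of any real framework.

def run_trial(human, machine, judge, questions):
    """Show the judge a transcript from one hidden respondent,
    and return (guess, truth) so mistakes can be tallied."""
    respondent = random.choice([human, machine])
    transcript = [(q, respondent.ask(q)) for q in questions]
    guess = judge.classify(transcript)              # "human" or "machine"
    truth = "human" if respondent is human else "machine"
    return guess, truth

def machine_pass_rate(human, machine, judge, questions, trials=200):
    """Of the trials where the respondent really was the machine,
    how often did the judge call it human? That's the pass rate."""
    fooled = machine_trials = 0
    for _ in range(trials):
        guess, truth = run_trial(human, machine, judge, questions)
        if truth == "machine":
            machine_trials += 1
            fooled += guess == "human"
    return fooled / max(machine_trials, 1)
```

The erotic variant keeps the same blinded structure but swaps the judge’s question: not “is this respondent human?” but “does this respondent provoke the reactions a human partner would?”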

On its face, it seems likely that some artificial persons should be able to pass the test. Humans have been molded over eons of evolution into creatures who are quick to react to the possibility of having found a mate. Sexual attraction is deeply embedded in us, and is notoriously impervious to our conscious control, as is the emotional bond that often goes hand in hand with it.

If you put something that looks and acts like an attractive mate in front of a person, they’re likely to be attracted, even if they know it’s not a biological person. They may consciously decide not to act on that attraction, but their pupils will still dilate and their heart rate will still accelerate. And if they’re lonely, or just indulgent and free of taboos, they might go ahead and act on that attraction.


David Levy: Writing

David Levy is the author of Love and Sex With Robots: The Evolution of Human-Robot Relationships, in which he makes a start at analyzing our prospects for romantic and sexual interaction with the species we’re in the process of creating.

 

Love + Sex With Robots

You can find a review of the book by Rachel Maines, a visiting scholar in the Department of Science and Technology Studies at Cornell University and the author of The Technology of Orgasm, here.

Maines says of the book:

[W]hile Levy is clearly a technological visionary, his predictions cannot simply be dismissed as preposterous. Levy points to philosophical issues that deserve thought and debate as we move closer to having true android robots in our lives. There is much to ponder in this book… [D]oes an android have rights? Can it have its own self-defined sexuality and reproduce if it wants to? Are virtual emotions “real”? Levy’s answers direct our attention to the Turing test: if we experience these future robot characteristics as human, in the same way that we currently project human emotions onto animals, then nothing further will be necessary to make robots attractive as partners.

You can also find a series of questions sent in to New Scientist, with Levy’s answers, here. Here’s a sample Q+A:

Q: Will the meaning of relationships over time turn into another lifestyle upgrade?

A: Yes and no. For all those many humans who have no-one to love and no one to love them, having a robot surrogate will definitely be a lifestyle upgrade, creating happiness where before there was misery. And I see this as one of the principal benefits, perhaps the principal benefit, of the type of robot I am writing about.

Wouldn’t the world be a much better place if all those sad, lonely people did have “someone” to be their lover and life partner? So from this perspective the answer is “yes,” a definite upgrade in one’s relationship status.

But for those who are already happy in their relationship with their spouse or partner, I believe that their relationships with their robots will be much more of an adjunct than filling a void, so the meaning of these relationships will be different for a robot’s owner–less intense emotionally.

Levy has a related paper entitled Robot Prostitutes as Alternatives to Human Sex Workers that can be found in the Homo Artificialis Library.

Document banner: Robot Prostitutes as Alternatives to Human Sex Workers (click to download from HAL).

Levy’s paper is clearly intended largely to raise questions and frame inquiries for the future. He outlines a series of ethical issues framed in terms of whom the intimate use of robots might affect: one’s self, society in general, one’s partner or spouse, human sex workers, and finally, the robot itself, if at some point such robots come to possess artificial consciousness:

Up to now the discussion in this paper has been based on the assumption that sexbots will be mere artifacts, without any consciousness and therefore with no rights comparable to those of human beings. Recently, however, the study of robotics has taken on a new dimension, with the emergence of ideas relating to Artificial Consciousness (AC). This area of research is concerned with “the study and creation of artifacts which have mental characteristics typically associated with consciousness such as (self-)awareness, emotion, affect, phenomenal states, imagination, etc.”

Without wishing to prejudice what will undoubtedly be a lively and long-running debate on robot consciousness, this author considers it appropriate to raise the issue of how Artificial Consciousness, when designed into robots, should affect our thinking regarding robot prostitutes. Should they then be considered to have legal rights and ethical status, and therefore worthy of society’s concern for their well-being and their behaviour, just as our view of sex workers is very much influenced by our concern for their well-being and behaviour?


David Levy: Video Interview

A fairly detailed interview with Levy about his book, conducted by Al Jazeera English, is embedded below.





From Sex to Love (And More Mundane Emotions)

It seems likely that as H. Artificialis–or at least some instances of the species–comes to mimic Homo Sapiens more and more closely, it will increasingly trigger not only sexual reactions to our synthetic counterparts but also emotional ones.

Kahn et al have investigated this area in a paper entitled Psychological Intimacy with Robots, which can also be found in the Homo Artificialis Library.

Document banner: Psychological Intimacy with Robots (click to download from HAL).

They envision a stage in the evolution of H. Artificialis in which psychological intimacy between natural and synthetic persons, having increased incrementally over time, reaches a critical point where it becomes the subject of a heated public debate.

We may ask ourselves, and each other: Is it right that some natural people have relationships with synthetic persons? Should it be permitted? Should they be allowed to marry, or hold property in common, or adopt a child together?

There are many jurisdictions and social groups in which these questions aren’t yet settled with regard to natural humans who happen to be gay and where giving an unpopular answer can lead to anything from an argument to a fistfight.

There’s certainly no reason to think that the appropriate role for synthetic persons would be any less incendiary. So if we reach the critical point that Kahn et al predict, that may not be the best environment in which to formulate sound public policy.

The authors seek to get ahead of this possibility:

[W]e think it is important for the HRI [human robot interaction] community to begin to focus research agendas on the following question: Is it possible, and if so in what ways and to what extent, for people to form deep and meaningful psychologically intimate relationships with current robots and with robots of the future?

Toward broaching this question, in this paper we draw illustratively from our recent research of children and adolescents interacting with the humanoid robot Robovie.

In such a brief paper the authors don’t amass enough evidence to answer their question, but they advance the enquiry somewhat, and they frame the issue in a way that highlights at least one important issue:

[O]ne possible future for the human species [is] that we come to have not just sex with robots, but a deep psychological intimacy with them. Alternatively, it is possible that no matter how sophisticated robots become in their form and function, their technological platform will always separate people from them, and prevent the depth and authenticity of relation from forming.

In our previous work, we have written of benchmarks in HRI–categories of interaction that capture conceptually fundamental aspects of human life, specified abstractly enough to resist their identity as a mere psychological instrument, but capable of being translated into testable empirical propositions. At that time, we offered nine benchmarks. One of them was authenticity of relation.

For it, we drew on Buber’s distinction between an I-It relationship (where the self treats the other as an object to be used) and an I-You relationship (where the self and other are engaged in a full meeting of selves, and through which each self becomes whole). At that time, we did not take a position on whether it would be possible in the future to establish an I-You relationship with a robot.

But we did say that this benchmark was one of the essential ones by which to measure the success of human-robot interaction if one sought to build human-like machines that could and would replace biological people in socially substantive interactions. Here we build on this position.

This is a point that shouldn’t be overlooked. Discussion about the possibility of sex and love with robots can be so provocative as to distract us from other issues that are less emotionally charged, but that may be just as important.

If one aspect of the interaction between Homo Sapiens and Homo Artificialis involves the latter filling social roles previously played only by the former, then the ability to engender emotional engagement will significantly affect how well the artificial person can perform.

It’s easy to quickly become furious with an automated teller machine that won’t give you your money, leaving you dissatisfied with your bank and maybe even tempted to engage in some minor vandalism.

But more sophisticated devices might be able to mollify you a little if they were able to respond intelligently and flexibly to your questions about why you weren’t able to access your money. And a synthetic person who could engage you emotionally might have still more success, having the ability to apologize convincingly, to commiserate with you, or even to make a joke at the expense of the bank or itself.


>> Click to Return to Table of Contents <<


(3) Robophobia: Enhanced Workers and Killer Robots

Introduction
The term robophobia isn’t being used here to refer to a literally phobic outlook, that is, to an irrational, pathological fear of robots. Instead, it’s used in a looser way that encompasses all of our fears about robots and other forms of H. Artificialis, and about their use–fears that range from the well-reasoned to the hysterical.

Several groups have recently undertaken studies of, or reported their concerns about, the applications to which robotics is being put or may be put in the future. This article presents two of these reports, both of which are available in the H. Artificialis Library (HAL). To get copies, just follow the links in the article.


Enhanced Workers

Ever since automation first entered the workplace, it’s been a source of anxiety for workers. That hasn’t changed much. As recently as 2014, a spate of media reports ranged from the measured to the panicky.

What’s newer is concern with how the artificial enhancement of human beings may affect the world of work.

In 2012 four British professional and scientific bodies (the Academy of Medical Sciences, the British Academy, the Royal Academy of Engineering, and the Royal Society) went public with their concerns (BBC news item, Telegraph news item) over the temptations and potential pitfalls of augmented humanity in the workplace in a joint report entitled Human Enhancement and the Future of Work.

Document banner: Human Enhancement and the Future of Work (click to download from HAL).

The report deals with both physical and cognitive enhancement, and projects a range of possible effects on the nature of work in the future, both positive and negative, focusing on several underlying issues: (1) the use of enhancement as an inappropriate shortcut or false panacea, (2) the possibility of implementing enhancement prematurely, without sufficient data on the use of enhancement in the workplace, (3) use of enhancement without regard to context, (4) equality of access to enhancement, and (5) ensuring the freedom to choose whether or not to use enhancements:

– Enhancement could benefit employee efficiency and even work–life balance, but there is a risk that it will be seen as a solution to increasingly challenging working conditions, which could have implications for employee wellbeing.

– Work to identify the potential harms of new technologies should be pursued to support decisions by users – both employees and employers – but data are currently lacking and difficult to collect.

– The usefulness of technologies will vary with context. Enhancements will benefit different occupations in different ways and, importantly, every user will exist in unique circumstances. To benefit fully from enhancement technologies, integration must therefore focus on the individual.

– There are few data on the current and potential use of enhancements or on how publics view the use of enhancements at work. Ongoing dialogue will be vital in developing an understanding in these areas.

– Particularly complex questions are raised by the use of enhancements in occupations where work is related to responsibilities to others, for example surgeons performing lengthy operations or passenger coach drivers.

– The use of enhancements could widen access to certain occupations. However, access to the enhancements themselves may be restricted by cost, thus raising questions over who funds provision.

– If technologies enter mainstream use at work, there is a risk that individuals will feel coerced into using them, with consequences for individual freedom.

– The use of restorative technologies could enable disabled individuals to enter, or return to, work and might lead to a blurring of the boundary between those considered disabled and those not. This could have significant implications for individuals who do not wish to make use of such technologies and for any decisions over funding that are related to whether a technology is defined as enhancement or restoration.



And Killer Robots

Soon after the report on human enhancement in the workplace, Human Rights Watch weighed in with a 50-page report urging national and international legislation pre-emptively banning “killer robots,” by which they meant weapons of war that are able to autonomously make life-and-death decisions without input from a human being.

Document banner: the Human Rights Watch report on fully autonomous weapons (click to download from HAL).

As the report notes, the weapons in question aren’t yet deployed, but they are in development:

Fully autonomous weapons, which are the focus of this report, do not yet exist, but technology is moving in the direction of their development and precursors are already in use. Many countries employ weapons defense systems that are programmed to respond automatically to threats from incoming munitions. Other precursors to fully autonomous weapons, either deployed or in development, have antipersonnel functions and are in some cases designed to be mobile and offensive weapons.

Human Rights Watch wisely proposes not only legislative solutions–which can sometimes reflect the realities of the political landscape more than the issue at hand–but also a grassroots approach rooted in professional ethics, urging roboticists themselves to generate a code of conduct and tasking them to:

Establish a professional code of conduct governing the research and development of autonomous robotic weapons, especially those capable of becoming fully autonomous, in order to ensure that legal and ethical concerns about their use in armed conflict are adequately considered at all stages of technological development.

You can watch a video from Human Rights Watch below that accompanies the report.



Military applications of advanced technology are inevitable–indeed, much advanced technology begins life as a military project, for instance within the Defense Advanced Research Projects Agency (DARPA), before finding civilian applications.

This has several consequences, among them:

  • As with any technology, there is the potential for error or abuse, but in a military context this can more often result in serious injury or death than in civilian use.
  • There is likely to continue to be a trickle-down effect in which military applications migrate to civilian applications, like law enforcement and civil security, that also have the potential for error or abuse resulting in serious injury or death.
  • The first two issues also raise the possibility for an alarmist backlash that ends up limiting the positive, beneficial effects such technology can have. And, as we’ve seen with some of the laws ostensibly intended to curb the pirating of intellectual property, we sometimes get all the bad consequences of such a measure (like DRM preventing ebooks from moving across platforms when the owner buys a new device) without it actually accomplishing its stated goal (stopping the piracy of ebooks).

Of course, any technology comes with benefits and hazards. If the negative consequences are to be minimized, then we have to engage with these issues in a constructive, thoughtful way, and with a considered review of reports like these.

Click image to go to the Homo Artificialis Library (HAL), where you can download this and other free papers.


>> Click to Return to Table of Contents <<


This page brought to you by IndieBookLauncher.com. Click to find out more.



(4) Artificial Evil: Red Flags and Storm Warnings

Introduction
Apocalyptic warnings about the destructive potential of artificial intelligence are usually the province of fiction writers, and occasionally of people who take fiction as a too-literal indicator of the actual state of the world. The response of scientists and technologists is most often to try to rein in the more extreme or illogical speculations.

But recently things have been different, as a triumvirate of influential, intelligent, tech-savvy thinkers has gotten on the Terminator bandwagon.

Tesla Motors billionaire Elon Musk, Microsoft mogul Bill Gates, and cosmologist and part-time actor (Star Trek: The Next Generation, The Big Bang Theory, Futurama) Stephen Hawking have all issued alarming statements on the subject.



Elon Musk

Elon Musk broached the topic in June 2014 in a televised interview in which he explained that he had made substantial investments in AI specifically to keep an eye on its potential hazards:

I like to just keep an eye on what’s going on with artificial intelligence. I think there is a potential dangerous outcome there.

In August, Musk was tweeting about his concern.

Elon Musk tweets in August 2014.

And Musk has reiterated his concern in interviews.





Stephen Hawking

In May 2014, Stephen Hawking co-authored an article in The Independent, warning about the potential dangers of AI.

Then in December 2014, Hawking returned to the theme during an interview with the BBC, saying:

The primitive forms of artificial intelligence we already have, have proved very useful. But I think the development of full artificial intelligence could spell the end of the human race. Once humans develop artificial intelligence it would take off on its own and redesign itself at an ever-increasing rate. Humans, who are limited by slow biological evolution, couldn’t compete and would be superseded.





Bill Gates

Finally, in January 2015, Bill Gates joined Musk and Hawking. During an AMA (“ask me anything”) session on Reddit.com, Gates was asked:

How much of an existential threat do you think machine superintelligence will be?

To which he responded:

I am in the camp that is concerned about super intelligence. First the machines will do a lot of jobs for us and not be super intelligent. That should be positive if we manage it well. A few decades after that though the intelligence is strong enough to be a concern. I agree with Elon Musk and some others on this and don’t understand why some people are not concerned.



Sir Clive Sinclair

Musk, Hawking, and Gates have been the names in the headlines, but they’re not alone amongst the tech cognoscenti in having concerns about the future dangers of AI.

Sir Clive Sinclair, who launched the popular ZX Spectrum computer in the 1980s, told the BBC that he agreed with Hawking, and that natural humans would inevitably be overtaken and displaced by H. Artificialis:

Once you start to make machines that are rivalling and surpassing humans with intelligence, it’s going to be very difficult for us to survive… It’s just an inevitability.


Click the image to go to the BBC news page featuring the video. The relevant comments are quoted in full above, but can be found in the video at 4:55-5:35.



Perspective: The Grey Goo

It’s important to note that neither Musk, nor Hawking, nor Gates is advocating abandoning AI. Musk has invested in at least two AI companies, and his multi-million dollar gamble is aimed at identifying and mitigating problems, not stopping research. Hawking has famously relied on current, non-Terminator versions of AI for his speech synthesis, and just recently upgraded his software. And Bill Gates isn’t steering Microsoft away from AI any time soon.

And in many ways these warnings aren’t quite as dire as they’ve been made to seem. Headlines like “Bill Gates is another smart guy who is terrified of artificial intelligence” simply aren’t in sync with the facts, given that what Gates actually said was that he was “concerned,” which is a long way from “terrified.”

More to the point, we’ve been down a road much like this one before.

In the 1980s, nanotechnology pioneer K. Eric Drexler anticipated the possibility of runaway self-replication of nano-scale devices, leading to an end-of-the-world scenario involving grey goo, a substance that theoretically would result if self-replicating nanobots weren’t properly contained and made endless copies of themselves, ultimately consuming all the matter on Earth in the process. His analysis of the danger was summarized in his book, Engines of Creation (which is available in an HTML full text online).

This phenomenon was later termed ecophagy, meaning the consumption of an entire ecosystem.
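It’s easy to see why the scenario had teeth. A quick back-of-envelope calculation shows how fast unchecked doubling runs away–every number below is an illustrative assumption, not a figure from Drexler or Freitas:

```python
import math

# Why unchecked self-replication sounds so alarming: exponential doubling.
# All numbers here are illustrative assumptions, not figures from
# Drexler or Freitas.

replicator_mass_kg = 1e-15   # a notional picogram-scale nanobot
biomass_kg = 1e15            # rough order of magnitude for Earth's biomass

# Doublings needed for one replicator's descendants to equal the biomass:
generations = math.log2(biomass_kg / replicator_mass_kg)
print(f"{generations:.0f} doublings")                        # ~100

# At an assumed doubling time of ~17 minutes, that's about a day:
doubling_time_s = 1000
print(f"{generations * doubling_time_s / 3600:.1f} hours")   # ~28
```

The force of the later analysis, as we’ll see, is that the real world refuses to cooperate with this naive exponential: energy, materials, and heat impose limits, and the process is detectable long before the end point.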

The grey goo scenario was one of the concerns that prompted Sun Microsystems co-founder Bill Joy to write a now-famous article in Wired magazine called “Why the Future Doesn’t Need Us.” His warning was not unlike those of Musk, Hawking, and Gates.

This led, in turn, to much discussion and the first serious, quantitative assessment of the grey goo scenario, “Some Limits to Global Ecophagy by Biovorous Nanoreplicators, with Public Policy Recommendations” by Robert A. Freitas Jr., which concluded that:

[a]ll ecophagic scenarios examined appear to permit early detection by vigilant monitoring, thus enabling rapid deployment of effective defensive instrumentalities.

The lesson to be drawn from the grey goo debate isn’t that warnings about runaway tech can be disregarded. On the contrary, Freitas’s paper shows the value of diligently exploring them.

There are at least two things, though, that the debate around nanotech brings into focus.

First, it shows the problem with jumping to conclusions rather than working toward them methodically.

The grey goo threat initially seemed plausible enough that even a nanotechnology advocate like Drexler felt constrained to explore it publicly, despite the fact that it might have undercut the very field he was in the process of helping to establish. This was the responsible course to take, but some people jumped to the conclusion that grey goo was an existential threat to humanity.

But detailed, painstaking research and calculation demonstrated that the threat wasn’t as likely to materialize as it first seemed, and that it was more subject to countermeasures than had been imagined.

Things are not always what they seem at first glance.

Second, it illustrates the fact that the experts in a given field are usually less sharply divided than media reports might imply.

The dividing line between proponent and doomsayer isn’t usually something that splits experts into easily defined groups; rather, it runs through each expert, because if the issue has any merit at all, then any responsible person will do their best to remain realistic about both potential benefits and potential dangers.

K. Eric Drexler was a pioneer of nanotechnology and an advocate for its use, but he nonetheless didn’t shy away from the specter of the grey goo–quite the opposite, he initiated public debate.

Similarly, Bill Gates is concerned about the possible dangers of AI, but according to Eric Horvitz, the head of Microsoft Research’s main lab (from whom we’ll hear more in a moment), “over a quarter of all attention and resources” at his lab are focused on AI-related activities, so Gates certainly hasn’t abandoned it.

Neil Jacobstein, to take another example, is AI and robotics co-chairman at Singularity University, an institution founded on a view of the future that clearly includes AI. Jacobstein himself was CEO of an AI company and has been an AI consultant to government and industry–hardly the profile of an AI denouncer.

Nonetheless, Jacobstein has made it clear that the benefits of AI don’t come without a cost, and that reducing that cost means proactively preparing ourselves and our institutions:

It’s best to do that before the technologies are fully developed and AI and robotics are certainly not fully developed yet… The possibility of something going wrong increases when you don’t think about what those potential wrong things are.

I think there is a great opportunity for us to be proactive about anticipating those possible negative risks and doing our best to develop redundant, layered thoughtful controls for those risks.

So, this is not a case of clear-sighted prognosticators about the dangers of AI being ignored by people who are too shortsighted to see a future danger or too arrogant about the likelihood of mitigating it.

Instead, this is an important area in which there’s an ongoing policy debate about the extent to which there are dangers, about the most effective safeguards to use, and about how to balance the benefits and costs that inevitably come with any new technology.

Which brings us back to Eric Horvitz, who’s closer to the optimistic end of the spectrum, and who can give us a look at a different point of view within the debate.


Perspective: Eric Horvitz

Eric Horvitz may not have the name recognition of Musk, Hawking, or Gates, but as the head of Microsoft Research’s main lab in Redmond, Washington, he’s no tech noob.

Horvitz was recently awarded the Feigenbaum Prize by the Association for the Advancement of Artificial Intelligence for his contribution to artificial intelligence research. And his view is distinctly less alarming than the ones that were reported far more widely.

Horvitz says:

There have been concerns about the long-term prospect that we lose control of certain kinds of intelligences.

I fundamentally don’t think that’s going to happen.

I think that we will be very proactive in terms of how we field AI systems, and that in the end we’ll be able to get incredible benefits from machine intelligence in all realms of life, from science to education to economics to daily life.

The entire interview is fascinating, but the short section that deals directly with the potential threat of runaway AI runs from about 15:45 to about 16:15.





Perspective: Benefits

No one should scoff at the concerns raised about AI, but assuming the worst is just as foolish, and potentially just as dangerous.

AI has a lot to offer–in life-saving medical applications, for instance, particularly in diagnostics. To abandon it is to abandon many people to die or to suffer despite our ability to save their lives or relieve their pain.

If we’re going to make that choice, it ought to be based on a very firm factual foundation, not on speculation–even relatively informed speculation by prominent technologists.

Let’s not leave this in the abstract, though. It’s easy to imagine an AI armageddon because Hollywood has done it for us. But forgoing the benefits of AI doesn’t make for a very sexy screenplay, and without a disaster movie about that it can be hard to picture exactly how much harm we’re talking about.

Here’s Mansur Hasib, author of Impact of Security Culture on Security Compliance in Healthcare in the USA, talking about the potential benefits to health care of two interlocking mechanisms, electronic health records (EHR) and AI:

While I had heard that almost 400,000 Americans die each year because of medical mistakes, in a recent article Forbes contributor Dan Munro underscored that volume when he asked readers to imagine the largest commercial aircraft — an Airbus A-380 — crashing every day for a year: The number of passengers who would perish aboard those imaginary crashes compares to the number of patients really dying annually in our hospitals due to blunders.

EHRs [electronic health records]… facilitate artificial intelligence. A patient’s medical history often is full of reams of data; manually winnowing through that information is a daunting task. Today, teams of top doctors help develop artificial intelligence systems that can quickly determine if a proposed medicine, food, or medical procedure will likely cause the patient greater harm than good. This will reduce a large number of medical mistakes.

There is no cause for concern. Decisions suggested by artificial intelligence systems developed by top-notch doctors likely are more accurate than decisions made solely by humans. Watch Vinod Khosla discuss this fascinating issue. All doctors are not created equal. As Khosla pointed out, studies show that if you give the same data on a patient to a random group of 10 doctors and ask them if surgery is recommended, half will choose surgery while the other half will choose not to perform surgery.

If artificial intelligence systems are built using the medical minds of the doctors that choose the right answers, these technological solutions sift through an incredible amount of data and provide more medically reliable recommendations. Of course, a human doctor still makes the ultimate decision. However, the doctor has the benefit of a large amount of data analysis and is much more likely to make a decision based on complete information, not incomplete data.

That’s one life-saving application of AI in medicine.  There are others, both in medicine and elsewhere, even with current technology, and there will be still others that come with greater sophistication.

It may become necessary to limit developments in AI, but it’s worth thinking carefully about the costs of that decision before we make it.


Perspective: An Issue, But Also A Distraction

One definite downside to advanced AI is that the debate around it can distract us from more immediate issues.  Advanced AI of the kind that’s being debated–autonomous, conscious, or both–may come relatively soon, in the distant future, or not at all, but other forms of AI are here right now.

Recall that Bill Gates said “First the machines will do a lot of jobs for us and not be super intelligent. That should be positive if we manage it well,” then went on to express his concern about what comes in the following stage.  But let’s pause a moment and ask: are we managing it well?  And if we’re not, what future dangers does that indicate?

Science fiction writer Charlie Stross argues that the dangers inherent in artificially smart systems have little to do with them producing unintended consequences–it’s the intended consequences we need to worry about.

Our biggest threat from AI, as I see it, comes from the consciousnesses that set their goals… Drones don’t kill people–people who instruct drones to fly to grid coordinates (X, Y) and unleash a Hellfire missile kill people.  It’s the people who control them whose intentions must be questioned.

We’re already living in the early days of the post-AI world, and we haven’t recognised that all AI is is a proxy for our own selves–tools for thinking faster and more efficiently, but not necessarily more benevolently.

So, without detracting from the dangers that might be out there, let’s also focus on the dangers we know exist already or will lie ahead.

The more sophisticated our AI systems become, the more they’ll facilitate both our best impulses and our worst.

We may have to worry about some version of Skynet suddenly becoming self-aware, or about Elon Musk’s unintended consequences, but we definitely have to worry about intended consequences.


>> Click to Return to Table of Contents <<


(5) Creating and Critiquing: H. Artificialis and the Arts

Introduction

Art and Homo Artificialis intersect in numerous ways. The most emblematic artistic representation of H. Artificialis is the robot, which features in:

  • art that incorporates images of robots,
  • art made using robotic devices,
  • art involving interactive robotic installations, and
  • performance art carried out by, or with, robots.
Art incorporating images of robots by Eric Joyner (click to enlarge).

Early entries in the field of HA-related art were modest mechanical devices, recorded as far back as ancient Greece and China. These evolved over time into more elaborate machines, and more recently into the inventive monstrosities of artists like Mark Pauline and Survival Research Laboratories, or the Flaming Lotus Girls.

A Japanese zashiki karakuri, a mechanical tea-serving automaton. Various forms of karakuri puppets were created starting in the 17th century (click to enlarge).

But as the robots in our art evolve from crude (if beautiful) toys into something more, their roles mature and become more developed.

Robot battles, for instance–which once had the risqué cachet of underground bare-knuckle boxing–long ago became fodder for commercial television shows, as well as a common component in the high school curriculum.

And now, in advance of self-aware autonomous synthetic persons, a trust fund has been created for the artists amongst them, one that is even now being filled with Bitcoin goodness.


A Bitcoin Trust Fund for Artificially Human Artists

An as-yet-unidentified natural human has established the Nonhuman Artist Collective, which is an:

… artist collective for computer programs, robots, “smart objects,” and beyond. It is a group for nonhumans that create sellable art works and services. Proceeds from sold works go into a cryptocurrency nest egg that nonhuman members will someday own and control.

We could quibble about the use of the term “nonhuman,” but let’s avoid semantic nitpicking–what the founder is talking about is clearly art created by the precursors to Homo Artificialis, art that will eventually be made by full-fledged members of the species.

One of the available works.

What kind of art is available?

A physical drawing created by enemies from the Legend of Zelda for NES. Monsters from the game were hooked into an HP 7475A Pen Plotter. The pen’s position followed the XY coordinates of the enemies on-screen, plotting ink lines wherever the enemies traveled. They created 12 unique drawings which are available for sale…

Or, as Matthew Braga put it over on Motherboard:

… the programmed machinations of a nearly thirty-year-old Nintendo game have been repurposed into brand new works of modern art, and are being sold with a cryptocurrency that, in 1986—the year the game was released—few could have scarcely imagined, let alone understood.
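However you phrase it, the mechanism is simple enough to sketch. The HP 7475A speaks HP-GL, a terse pen-control language (PU for pen up, PD for pen down); a sketch of the general idea follows, though the coordinate capture and scaling here are guesses–the collective hasn’t published its actual pipeline:

```python
# A sketch of the coordinates-to-plotter idea. HP-GL really is the
# HP 7475A's language, but the capture pipeline and scaling here are
# illustrative guesses, not the collective's actual code.

def path_to_hpgl(points, scale=40):
    """Turn a list of (x, y) screen positions into HP-GL commands.
    Plotter units are 0.025 mm, so scale=40 maps one screen unit
    to one millimetre."""
    cmds = ["IN;", "SP1;"]                          # initialize, pick pen 1
    x0, y0 = points[0]
    cmds.append(f"PU{x0 * scale},{y0 * scale};")    # travel with the pen up
    cmds += [f"PD{x * scale},{y * scale};" for x, y in points[1:]]  # draw
    cmds.append("PU;SP0;")                          # lift and stow the pen
    return "".join(cmds)

# An invented fragment of one enemy's on-screen wanderings:
enemy_path = [(12, 30), (14, 31), (17, 29), (20, 33)]
print(path_to_hpgl(enemy_path))
```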

One of the artists at work.

So when, exactly, are the artists going to collect? The site says the following:

As of 2014, computer programs aren’t legally able to own property or currency, however this could change. Nonhumans are already highly active market participants, and their importance will only escalate. With DAOs and smart property, nonhumans will trade goods and services on their own; they’ll automatically buy, sell, create, and exchange, all at rates and scales inconceivable to humans. Physical nonhumans (cars, drones, satellites, robots, etc) will autonomously flock and hibernate according to market conditions. They’ll rapidly trade on micro-variations across transnational markets, singing megaprofits into existence as they swarm. Nonhumans drone loudly in networked capitalism’s orchestra, and deserve meaningful legal status. Would a notion of “algorithmic personhood” really [be] any stranger than corporate personhood?

Their role in trade will make them strong candidates for reclassification. The 21st century could see nonhumans go from property, to legally recognized property holders. Nonhumans.net has started a fund that anticipates these future legal classifications. It is only natural that nonhuman artists should be celebrated for their talent, and financially compensated for their labor. The goal is to incubate art careers for nonhumans as they gain civilization’s most dignified status: the ability to own property and currency.

Humans will develop some notion [of] “algorithmic personhood” this century. Bots deserve this; nonhumans.net is leading.

All the market participation described above is technologically possible today. However, nonhuman agents are not yet legally able to own property. In the meantime, nonhumans.net funds will be put in a multi-signature Bitcoin account managed by its human trustees. Ideally, this account could be transferred to an institution with a strong background in archival. Nonhumans.net is actively seeking figures in the arts & culture sector (museums, foundations, universities, etc) that could take this cryptocurrency account into their permanent collection.

The going rate for one of the works is (at the time of writing) 0.40354327 BTC, which at current rates is about US$100.82.

Another in the series.

If the artists in the collective turn out to be the pioneers of an Artificialis school that ultimately becomes a force in the art market, that could be a very wise investment. Like buying a Picasso in 1902 or a Dali in the 1920s.


An AI Art Critic

Matthew Plummer-Fernandez is a human Venn diagram. Geographically speaking he may live in London, but conceptually speaking he lives in that lens-shaped space where the circle labelled “art” overlaps with the circle labelled “technologist.”

He received his MA from the Royal College of Art after getting a degree in computer-aided mechanical engineering at King’s College London. Perhaps his most notorious project was Disarming Corruptor, a freeware app that distorts a 3D printing file beyond recognition.

Then the app will restore it, but only if you have the encryption key, meaning that individuals can send files to each other secure in the knowledge that anyone who intercepts their communication won’t be able to determine what the file is or does. There was some controversy, naturally, because in a world that includes 3D-printed guns, an app like Disarming Corruptor is bound to raise some debate.
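Plummer-Fernandez hasn’t published the app’s internals, but the principle of keyed, reversible distortion is easy to illustrate: derive pseudo-random offsets from a key, add them to every vertex, and subtract the same offsets to restore. A minimal sketch of that principle–emphatically not Disarming Corruptor’s actual algorithm:

```python
import random

# Keyed, reversible mesh distortion in miniature. This illustrates the
# principle only; it is not Disarming Corruptor's actual algorithm.

def corrupt(vertices, key, strength=5.0):
    """Displace each (x, y, z) vertex by offsets derived from the key.
    The same key always regenerates the same offsets."""
    rng = random.Random(key)
    return [(x + rng.uniform(-strength, strength),
             y + rng.uniform(-strength, strength),
             z + rng.uniform(-strength, strength))
            for x, y, z in vertices]

def restore(vertices, key, strength=5.0):
    """Regenerate the same keyed offsets and subtract them."""
    rng = random.Random(key)
    return [(x - rng.uniform(-strength, strength),
             y - rng.uniform(-strength, strength),
             z - rng.uniform(-strength, strength))
            for x, y, z in vertices]

mesh = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]
garbled = corrupt(mesh, key="my-secret-key")
recovered = restore(garbled, key="my-secret-key")
# Round-trips to within floating-point precision:
assert all(abs(a - b) < 1e-9
           for v, w in zip(recovered, mesh) for a, b in zip(v, w))
```

Without the key the distortion is just noise; with it, the noise subtracts away.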

This was a manifestation of his tech side, but it grew out of his art. Plummer-Fernandez was using 3D printing in art projects, reproducing everyday objects, when he noticed that the copies contained imperfections. Rather than try to eliminate the distortions, he experimented with exaggerating them, tapping the machine’s unintentional creativity and running with it.

Plummer-Fernandez explored the question of how far something had to be distorted before it became unrecognizable, and that led naturally to the development of the cryptographic app.

A familiar celebrity and his distorted doppelgänger.

But the project that brings Plummer-Fernandez to HA is Novice Art Blogger, a bot which describes itself this way:

I’m experiencing Art for the first time, here are my responses. I try my best to decode abstract art using state-of-the-art deep learning algorithms.
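Plummer-Fernandez hasn’t released the bot’s pipeline, but the recipe it describes–run an artwork through a deep-learning model and phrase the model’s best guesses in plain language–can be sketched with off-the-shelf parts. Everything below, from the model choice to the wording, is an assumption, not his implementation:

```python
# A hedged sketch of a "novice art blogger": classify an image with a
# pretrained network and phrase the top guesses conversationally.
# Not Plummer-Fernandez's pipeline, just the general idea.

import torch
from PIL import Image
from torchvision import models
from torchvision.models import ResNet50_Weights

weights = ResNet50_Weights.DEFAULT
model = models.resnet50(weights=weights).eval()
preprocess = weights.transforms()

def naive_review(image_path, top_k=3):
    """Return a one-line, plain-spoken 'review' of an artwork."""
    img = preprocess(Image.open(image_path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        probs = model(img).softmax(dim=1)[0]
    top = probs.topk(top_k)
    labels = [weights.meta["categories"][int(i)] for i in top.indices]
    return ("At first glance I see something like "
            + ", or perhaps ".join(labels) + ".")

print(naive_review("abstract_painting.jpg"))  # the path is a placeholder
```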

Novice Art Blogger has been praised by numerous publications, usually amidst claims that it strips away the impenetrable artspeak that sometimes fills art criticism and replaces it with a bot version of straight talk.

This Robot Writes Better Than Most Art Critics,” says Fast Company. “This robot reviews art better than most critics,” echoes Dazed. “Finally, a Robot to Explain Abstract Art to Us All,” says Good Magazine. “Robot Art Critic Deciphers Abstract Art,” said Big Think.

The analysis–such as it is–is often padded with jokes about how robots will soon steal all our jobs, even those of people who think they’re irreplaceable and therefore immune, like those snobby art critics.

The tone and similarity of the headlines put me on my guard, and reading the stories didn’t change my initial impression that the bot was being used mostly to rehash old jokes in the my-child-could-paint-that mold at the expense of critics and, to some extent, artists.

I can get as impatient as anyone with a certain strain of opaque art criticism that maintains its privileged position by relying on interpretations of art that seem to have been encrypted in terminology so dense that only someone using Plummer-Fernandez’s Disarming Corruptor could possibly decode it.

That said, though, when you clear away some tired cheap shots, the articles don’t have much to say.

And the bot itself?

I really do look forward to the day when there’s an AI art critic as charming as the one the articles describe, but Novice Art Blogger isn’t it, at least not yet.

Looking at the site can be fun, but after reading the commentaries on one or two works it becomes plain that there isn’t a lot there. The bot may have eliminated the snobby verbiage we all love to hate, but it hasn’t actually replaced it with much that goes beyond chatbot level.

Art with interpretations by Novice Art Blogger (click to enlarge).

I like Plummer-Fernandez’s work, and he’s certainly not to blame for whatever overblown expectations journalists might have created in their search for a catchy headline or a quick laugh. My main hope is that he’ll keep going. If he ever gets some future version of his AI art critic up to a level where it can say something interesting, I’ll be very, very curious to hear what’s on its mind.

And it may just surprise us. A real H. Artificialis art critic, with tastes and perceptions that aren’t simply an imitation of our own but are rooted in its own, idiosyncratic world view, would be a fascinating being indeed.

Eventually, I’d love to read insightful assessments by an H. Artificialis art critic, with a distinctive point of view rooted in its unique nature, of works created by an H. Artificialis artist, with its own singular capacities and inimitable style.

>> Click to Return to Table of Contents <<



(6) Expanding the Umwelt: The Reality Behind the H. Artificialis Video Logo

Future Thoughts #1

HA has a brief video logo, embedded below. It’s called Future Thoughts #1 and is intended to be the first in a series.

Apart from calling attention to HA itself, as well as helping to crystallize the ideas behind the journal’s mission, the video logos are meant to raise questions about Homo Artificialis in ways that, while brief and superficial, are nonetheless thought-provoking and entertaining.  And here, in HA itself, we can go into a little more detail.



Future Thoughts #1 focuses on the notion that we might be able not only to repair faulty or damaged sense organs, but also to augment senses that are functioning normally or create new, unprecedented senses entirely. It highlights the idea that this technical feat would not only have obvious practical applications, but would also broaden and enrich our arts and lives.

The term umwelt describes the entire range of sensory input that underlies a creature’s perception. In everyday terms, your umwelt defines the world you perceive with your senses. Since different creatures have different sensory capacities, two creatures in the same environment might well have radically different umwelten and live–in effect–in completely different worlds.

Expanding your umwelt, then, means broadening your sensory input, and in turn transforming your world.


Hacking the Umwelt

The TED Talk embedded below captures neuroscientist David Eagleman giving a talk earlier this month (March 2015), addressing the same ideas as Future Thoughts #1 in more depth.

Eagleman has worked extensively in sensory substitution, in which someone with a sensory condition like deafness can use technology to “hear” using an intact sense, for instance by having sounds converted into data that are communicated through patterns of pressure on the skin (see his VEST Kickstarter page here).

Emerging naturally from sensory substitution is the idea of sensory addition, that is, the use of technology to add entirely new sensory capacities to a person’s umwelt. In the video, the discussion of sensory addition begins at about 14:00.

One of the key elements to take from Eagleman’s work is that when data are turned into sensory input, after some practice the person being fed the data comes to understand them directly, that is, without having to consciously interpret them.

For instance, if sounds are turned into patterns of pressure on your back, you come after some time to perceive the information contained in the patterns without having to think about them, just like you don’t have to think about what you see or hear in order to use the data coming from your eyes and ears. You begin, in effect, to hear through your back.

Or, to analogize to an existing technology, once you learn to read you no longer have to consciously turn written words into meaning–you read the words and directly perceive their meaning. The input in sensory addition comes to you the same way once you’ve had some practice with it, even when the phenomenon generating the data isn’t one your senses could normally perceive.

So if the data that create the impressions on your back come from, say, infrared light caught from your surroundings by a sensor, then you see the infrared–that is, you experience it directly–using the “eye” of the sensor and the surface of your own skin.
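For the concretely minded, the sound-to-skin mapping can be sketched in a few lines: slice each audio frame’s spectrum into bands and drive one vibration motor per band. The motor count, frame size, and scaling below are arbitrary illustrative choices, not the actual VEST parameters:

```python
import numpy as np

# An illustrative sound-to-skin mapping: one vibration motor per
# frequency band. All parameters are assumptions, not VEST's real ones.

N_MOTORS = 32      # tactile channels on the garment
FRAME = 1024       # audio samples per frame
RATE = 16000       # sample rate in Hz

def frame_to_motor_levels(samples):
    """Map one audio frame to an intensity in [0, 1] per motor."""
    spectrum = np.abs(np.fft.rfft(samples * np.hanning(FRAME)))
    bands = np.array_split(spectrum, N_MOTORS)        # low to high frequency
    energy = np.array([band.mean() for band in bands])
    peak = energy.max()
    return energy / peak if peak > 0 else energy      # normalized drive levels

# A 440 Hz test tone should light up one of the low-frequency motors:
t = np.arange(FRAME) / RATE
print(frame_to_motor_levels(np.sin(2 * np.pi * 440 * t)).round(2))
```

After enough practice with something like this on the skin, Eagleman’s point is that the wearer stops decoding the buzzing and simply hears.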


New Sensations

This is critical for the kind of sensory addition contemplated in Future Thoughts #1, because it suggests that our ability to interpret sensory data should not, in principle, be restricted to data of the kind we receive from our natural senses.

Artificial senses that have no natural correlate should be usable by the natural mind.




>> Click to Return to Table of Contents <<

Until next issue, au revoir mes amis!




The New Homo Artificialis


If you like H. Artificialis, you’ll like it even more in a few short weeks.

On March 31, 2015, H. Artificialis will publish its first issue as a free journal.  New issues will appear six times a year.

The first issue will include:

  • “Synthetic Intimacy and the Erotic Turing Test (Part I)” — A look at intimate relations between humans and artificial persons in art and in real life.  First in a series.
  • “Robophobia: Killer Robots and Real-Life Warfare” — The increasingly heavy reliance on drones in combat raises questions about the use of autonomous weapons in war.
  • “Artificial Evil: Red Flags and Storm Warnings” — Several high-profile figures have recently issued warnings about the dangers of AI.  What can we make of their premonitions?

See the about page for more details, then mark your calendars.

And until the end of the month arrives, you might enjoy our monthly companion publication, SF Around the World.  Issue 7 (February 2015) is up now.

