On the Moral Implications of Willful Acts of Virtual Harm

Perhaps you’ve seen the clip below in which a dog-like robot developed by Boston Dynamics, a Google-owned robotics company, receives a swift kick and manages to maintain its balance:

I couldn’t resist tweeting that clip with this text: “The mechanical Hound slept but did not sleep, lived but did not live in … a dark corner of the fire house.” That line, of course, is from Ray Bradbury’s Fahrenheit 451, in which the mechanical Hound is deployed to track down dissidents. The apt association was first suggested to me a few months back by a reader’s email occasioned by an earlier Boston Dynamics robot.

My glib tweet aside, many have found the clip disturbing for a variety of reasons. One summary of the concerns can be found in a CNN piece by Phoebe Parke titled, “Is It Cruel to Kick a Robot Dog?” (via Mary Chayko). That question reminded me of a 2013 essay by Richard Fisher posted at BBC Future, “Is It OK to Torture or Murder a Robot?”

Both articles discuss our propensity to anthropomorphize non-human entities and artifacts. Looked at in that way, the ethical concerns seem misplaced if not altogether silly. So, according to one AI researcher quoted by Parke, “The only way it’s unethical is if the robot could feel pain.” A robot cannot feel pain; thus, there is nothing unethical about the way we treat robots.

But is that really all that needs to be said about the ethical implications?

Consider these questions raised by Fisher:

“To take another example: if a father is torturing a robot in front of his 4-year-old son, would that be acceptable? The child can’t be expected to have the sophisticated understanding of adults. Torturing a robot teaches them that acts that cause suffering – simulated or not – are OK in some circumstances.

Or to take it to an extreme: imagine if somebody were to take one of the childlike robots already being built in labs, and sell it to a paedophile who planned to live out their darkest desires. Should a society allow this to happen?

Such questions about apparently victimless evil are already playing out in the virtual world. Earlier this year, the New Yorker described the moral quandaries raised when an online forum discussing Grand Theft Auto asked players if rape was acceptable inside the game. One replied: ‘I want to have the opportunity to kidnap a woman, hostage her, put her in my basement and rape her everyday, listen to her crying, watching her tears.’ If such unpleasant desires could be actually lived with a physical robotic being that simulates a victim, it may make it more difficult to tolerate.”

These are challenging questions that, to my mind, expose the inadequacy of thinking about the ethics of technology, or ethics more broadly, from a strictly instrumental perspective.

Recently, philosopher Charlie Huenemann offered a similarly provocative reflection on killing dogs in Minecraft. His reflections led him to consider the moral standing of the attachments we form to objects, whether they be material or virtual, in a manner I found helpful. Here are his concluding paragraphs:

The point is that we form attachments to things that may have no feelings or rights whatsoever, but by forming attachments to them, they gain some moral standing. If you really care about something, then I have at least some initial reason to be mindful of your concern. (Yes, lots of complications can come in here – “What if I really care for the fire that is now engulfing your home?” – but the basic point stands: there is some initial reason, though not necessarily a final or decisive one.) I had some attachment to my Minecraft dogs, which is why I felt sorry when they died. Had you come along in a multiplayer setting and chopped them to death for the sheer malicious pleasure of doing so, I could rightly claim that you did something wrong.

Moreover, we can also speak of attachments – even to virtual objects – that we should form, just as part of being good people. Imagine if I were to gain a Minecraft dog that accompanied me on many adventures. I even offer it rotten zombie flesh to eat on several occasions. But then one day I tire of it and chop it into nonexistence. I think most of us would be surprised: “Why did you do that? You had it a long time, and even took care of it. Didn’t you feel attached to it?” Suppose I say, “No, no attachment at all”. “Well, you should have”, we would mumble. It just doesn’t seem right not to have felt some attachment, even if it was overcome by some other concern. “Yes, I was attached to it, but it was getting in the way too much”, would have been at least more acceptable as a reply. (“Still, you didn’t have to kill it. You could have just clicked on it to sit forever….”)

The first of my 41 questions about the ethics of technology was a simple one: What sort of person will the use of this technology make of me?

It’s a simple question, but one we often fail to ask because we assume that ethical considerations apply only to what people do with technology, to the acts themselves. It is a question, I think, that helps us imagine the moral implications of willful acts of virtual harm.

Of course, it is also worth asking, “What sort of person does my use of this technology reveal me to be?”

Lethal Autonomous Weapons and Thoughtlessness

In the mid-twentieth century, Hannah Arendt wrote extensively about the critical importance of learning to think in the aftermath of a great rupture in our tradition of thought. She wrote of the desperate situation when “it began to dawn upon modern man that he had come to live in a world in which his mind and his tradition of thought were not even capable of asking adequate, meaningful questions, let alone of giving answers to its own perplexities.”

Frequently, Arendt linked this rupture in the tradition, this loss of a framework that made our thinking meaningful, to the appearance of totalitarianism in the early twentieth century. But she also recognized that the tradition had by then been unraveling for some time, and technology played a not insignificant role in this unraveling and the final rupture. In “Tradition and the Modern Age,” for example, she argues that the “priority of reason over doing, of the mind’s prescribing its rules to the actions of men” had been lost as a consequence of “the transformation of the world by the Industrial Revolution–a transformation the success of which seemed to prove that man’s doings and fabrications prescribe their rules to reason.”

Moreover, in the Prologue to The Human Condition, after reflecting on Sputnik, computer automation, and the pursuit of what we would today call bio-engineering, Arendt worried that our Thinking would prove inadequate to our technologically-enhanced Doing. “If it should turn out to be true,” she added, “that knowledge (in the modern sense of know-how) and thought have parted company for good, then we would indeed become the helpless slaves, not so much of our machines as of our know-how, thoughtless creatures at the mercy of every gadget which is technically possible, no matter how murderous it is.”

That seems as good an entry as any into a discussion of Lethal Autonomous Robots. A short Wired piece on the subject has been making the rounds the past day or two with the rather straightforward title, “We Can Now Build Autonomous Killing Machines. And That’s a Very, Very Bad Idea.” The story takes as its point of departure the recent pledge on the part of a robotics company, Clearpath Robotics, never to build “killer robots.”

Clearpath’s Chief Technology Officer, Ryan Gariepy, explained the decision: “The potential for lethal autonomous weapons systems to be rolled off the assembly line is here right now, but the potential for lethal autonomous weapons systems to be deployed in an ethical way or to be designed in an ethical way is not, and is nowhere near ready.”

Not everyone shares Gariepy’s trepidation. Writing for the blog of the National Defense Industrial Association, Sarah Sicard discussed the matter with Ronald Arkin, a dean at Georgia Tech’s School of Interactive Computing. “Unless regulated by international treaties,” Arkin believes, “lethal autonomy is inevitable.”

It’s worth pausing for a moment to explore the nature of this claim. It’s a Borg Complex claim, of course, although masked slightly by the conditional construction, but that doesn’t necessarily make it wrong. Indeed, claims of inevitability are especially plausible in the context of military technology, and it’s not hard to imagine why. Even if one nation entertained ethical reservations about a certain technology, it could never assure itself that other nations would share its qualms. Better, then, to set those reservations aside than to be outpaced on the battlefield with disastrous consequences. The force of the logic is compelling. In such a case, however, the inevitability, such as it is, does not reside in the technology per se; it resides in human nature. But even to put it that way threatens to obscure the fact that choices are being made and that they could be made otherwise. The example set by Clearpath Robotics, a conscious decision on principle to forgo research and development, only reinforces this conclusion.

But Arkin doesn’t just believe the advent of Lethal Autonomous Robots to be inevitable; he seems to think that it will be a positive good. Arkin believes that human beings are the “weak link” in the “kill chain.” The question for roboticists is this: “Can we find out ways that can make them outperform human warfighters with respect to ethical performance?” Arkin appears to be fairly certain that the answer will be a rather uncomplicated “yes.”

For a more complicated look at the issue, consider the report (PDF) on Lethal Autonomous Weapons presented to the UN’s Human Rights Council by special rapporteur, Christof Heyns. The report was published in 2013 and first brought to my attention by a post on Nick Carr’s blog. The report explores a variety of arguments for and against the development and deployment of autonomous weapons systems and concludes, “There is clearly a strong case for approaching the possible introduction of LARs with great caution.” It continues:

“If used, they could have far-reaching effects on societal values, including fundamentally on the protection and the value of life and on international stability and security. While it is not clear at present how LARs could be capable of satisfying IHL and IHRL requirements in many respects, it is foreseeable that they could comply under certain circumstances, especially if used alongside human soldiers. Even so, there is widespread concern that allowing LARs to kill people may denigrate the value of life itself.”

Among the more salient observations made by the report there is this note of concern about unintended consequences:

“Due to the low or lowered human costs of armed conflict to States with LARs in their arsenals, the national public may over time become increasingly disengaged and leave the decision to use force as a largely financial or diplomatic question for the State, leading to the ‘normalization’ of armed conflict. LARs may thus lower the threshold for States for going to war or otherwise using lethal force, resulting in armed conflict no longer being a measure of last resort.”

As with the concern about the denigration of the value of life itself, this worry about the normalization of armed conflict is difficult to empirically verify (although US drone operations in Afghanistan, Pakistan, and the Arabian peninsula are certainly far from irrelevant to the discussion). Consequently, such considerations tend to carry little weight when the terms of the debate are already compromised by technocratic assumptions regarding what counts as compelling reasons, proofs, or evidence.

Such assumptions appear to be all that we have left to go on in light of the rupture in the tradition of thought that Arendt described. Or, to put that a bit more precisely, it may not be all that we have left, but it is what we have gotten. We have precious little to fall back on when we begin to think about what we are doing when what we are doing involves, for instance, the fabrication of Lethal Autonomous Robots. There are no customs of thought and action, no traditions of justice, no culturally embodied wisdom to guide us, at least not in any straightforward and directly applicable fashion. We are thinking without a banister, as Arendt put it elsewhere, if we are thinking at all.

Perhaps it is because I have been reading a good bit of Arendt lately, but I’m increasingly struck by situations we encounter, both ordinary and extraordinary, in which our default problem-solving, cost/benefit analysis mode of thinking fails us. In such situations, we must finally decide what is undecidable and take action, action for which we can be held responsible, action for which we can only hope for forgiveness, action made meaningful by our thinking.

Arendt distinguished this mode of thinking, that which seeks meaning and is a ground for action, from that which seeks to know with certainty what is true. This helps explain, I believe, what she meant in the passage I cited above when she feared that we would become “thoughtless” and slaves to our “know-how.” We are in such cases calculating and measuring, but not thinking, or willing, or judging. Consequently, under such circumstances, we are also perpetually deferring responsibility.

Considered in this light, Lethal Autonomous Weapons threaten to become a symbol of our age; not in their clinical lethality, but in their evacuation of human responsibility from one of the most profound and terrible of actions, the taking of a human life. They will be an apt symbol for an age in which we will grow increasingly accustomed to holding algorithms responsible for all manner of failures, mistakes, and accidents, both trivial and tragic. Except, of course, that algorithms cannot be held accountable and they cannot be forgiven.

We cannot know, neither exhaustively nor with any degree of certainty, what the introduction of Lethal Autonomous Weapons will mean for human society, at least not by the standards of techno-scientific thinking. In the absence of such certainty, because we do not seem to know how to think or judge otherwise, they will likely be adopted and eventually deployed as a matter of seemingly banal necessity.

____________________________

Update: Dale Carrico has posted some helpful comments, particularly on Arendt.

Laborers Without Labor

Kevin Drum in Mother Jones (2013):

“This is a story about the future. Not the unhappy future, the one where climate change turns the planet into a cinder or we all die in a global nuclear war. This is the happy version. It’s the one where computers keep getting smarter and smarter, and clever engineers keep building better and better robots. By 2040, computers the size of a softball are as smart as human beings. Smarter, in fact. Plus they’re computers: They never get tired, they’re never ill-tempered, they never make mistakes, and they have instant access to all of human knowledge.

The result is paradise. Global warming is a problem of the past because computers have figured out how to generate limitless amounts of green energy and intelligent robots have tirelessly built the infrastructure to deliver it to our homes. No one needs to work anymore. Robots can do everything humans can do, and they do it uncomplainingly, 24 hours a day. Some things remain scarce—beachfront property in Malibu, original Rembrandts—but thanks to super-efficient use of natural resources and massive recycling, scarcity of ordinary consumer goods is a thing of the past. Our days are spent however we please, perhaps in study, perhaps playing video games. It’s up to us.”

Hannah Arendt in The Human Condition (1958):

“Closer at hand and perhaps equally decisive is another no less threatening event. This is the advent of automation, which in a few decades probably will empty the factories and liberate mankind from its oldest and most natural burden, the burden of laboring and the bondage to necessity. Here, too, a fundamental aspect of the human condition is at stake, but the rebellion against it, the wish to be liberated from labor’s ‘toil and trouble,’ is not modern but as old as recorded history. Freedom from labor itself is not new; it once belonged among the most firmly established privileges of the few. In this instance, it seems as though scientific progress and technical developments had been only taken advantage of to achieve something about which all former ages dreamed but which none had been able to realize.

However, this is so only in appearance. The modern age has carried with it a theoretical glorification of labor and has resulted in a factual transformation of the whole of society into a laboring society. The fulfillment of the wish, therefore, like the fulfillment of wishes in fairy tales, comes at a moment when it can only be self-defeating. It is a society of laborers which is about to be liberated from the fetters of labor, and this society does no longer know of those other higher and more meaningful activities for the sake of which this freedom would deserve to be won. Within this society, which is egalitarian because this is labor’s way of making men live together, there is no class left, no aristocracy of either a political or spiritual nature from which a restoration of the other capacities of man could start anew . . . What we are confronted with is the prospect of a society of laborers without labor, that is, without the only activity left to them. Surely, nothing could be worse.”

Update: A short while after I published this post, I was reminded of an article by Philip Blond I’d linked to a couple of years ago. It included this:

… according to Blond, “Neither Left nor Right can offer an answer because both ideologies have collapsed as both have become the same.”  The left lives by an “agenda of cultural libertarianism” while the right espouses an agenda of “economic libertarianism,” and there is, in Blond’s view, little or no difference between them.  They have both contributed to a shattered society.  “A vast body of citizens,” Blond argues, “has been stripped of its culture by the Left and its capital by the Right, and in such nakedness they enter the trading floor of life with only their labor to sell.”

“With only their labor to sell” – an arresting phrase that, in the present context, raises the question: What if even this is taken away?