On the Moral Implications of Willful Acts of Virtual Harm

Perhaps you’ve seen the clip below in which a dog-like robot developed by Boston Dynamics, a Google-owned robotics company, receives a swift kick and manages to maintain its balance:

I couldn’t resist tweeting that clip with this text: “The mechanical Hound slept but did not sleep, lived but did not live in … a dark corner of the fire house.” That line, of course, is from Ray Bradbury’s Fahrenheit 451, in which the mechanical Hound is deployed to track down dissidents. The apt association was first suggested to me a few months back by a reader’s email occasioned by an earlier Boston Dynamics robot.

My glib tweet aside, many have found the clip disturbing for a variety of reasons. One summary of the concerns can be found in a CNN piece by Phoebe Parke titled, “Is It Cruel to Kick a Robot Dog?” (via Mary Chayko). That question reminded me of a 2013 essay by Richard Fisher posted at BBC Future, “Is It OK to Torture or Murder a Robot?”

Both articles discuss our propensity to anthropomorphize non-human entities and artifacts. Looked at in that way, the ethical concerns seem misplaced if not altogether silly. So, according to one AI researcher quoted by Parke, “The only way it’s unethical is if the robot could feel pain.” A robot cannot feel pain; thus, the reasoning goes, there is nothing unethical about the way we treat robots.

But is that really all that needs to be said about the ethical implications?

Consider these questions raised by Fisher:

“To take another example: if a father is torturing a robot in front of his 4-year-old son, would that be acceptable? The child can’t be expected to have the sophisticated understanding of adults. Torturing a robot teaches them that acts that cause suffering – simulated or not – are OK in some circumstances.

Or to take it to an extreme: imagine if somebody were to take one of the childlike robots already being built in labs, and sell it to a paedophile who planned to live out their darkest desires. Should a society allow this to happen?

Such questions about apparently victimless evil are already playing out in the virtual world. Earlier this year, the New Yorker described the moral quandaries raised when an online forum discussing Grand Theft Auto asked players if rape was acceptable inside the game. One replied: ‘I want to have the opportunity to kidnap a woman, hostage her, put her in my basement and rape her everyday, listen to her crying, watching her tears.’ If such unpleasant desires could be actually lived with a physical robotic being that simulates a victim, it may make it more difficult to tolerate.”

These are challenging questions that, to my mind, expose the inadequacy of thinking about the ethics of technology, or ethics more broadly, from a strictly instrumental perspective.

Recently, the philosopher Charlie Huenemann offered a similarly provocative reflection on killing dogs in Minecraft. His reflections led him to consider, in a manner I found helpful, the moral standing of the attachments we form to objects, whether material or virtual. Here are his concluding paragraphs:

The point is that we form attachments to things that may have no feelings or rights whatsoever, but by forming attachments to them, they gain some moral standing. If you really care about something, then I have at least some initial reason to be mindful of your concern. (Yes, lots of complications can come in here – “What if I really care for the fire that is now engulfing your home?” – but the basic point stands: there is some initial reason, though not necessarily a final or decisive one.) I had some attachment to my Minecraft dogs, which is why I felt sorry when they died. Had you come along in a multiplayer setting and chopped them to death for the sheer malicious pleasure of doing so, I could rightly claim that you did something wrong.

Moreover, we can also speak of attachments – even to virtual objects – that we should form, just as part of being good people. Imagine if I were to gain a Minecraft dog that accompanied me on many adventures. I even offer it rotten zombie flesh to eat on several occasions. But then one day I tire of it and chop it into nonexistence. I think most of us would be surprised: “Why did you do that? You had it a long time, and even took care of it. Didn’t you feel attached to it?” Suppose I say, “No, no attachment at all”. “Well, you should have”, we would mumble. It just doesn’t seem right not to have felt some attachment, even if it was overcome by some other concern. “Yes, I was attached to it, but it was getting in the way too much”, would have been at least more acceptable as a reply. (“Still, you didn’t have to kill it. You could have just clicked on it to sit forever….”)

The first of my 41 questions about the ethics of technology was a simple one: What sort of person will the use of this technology make of me?

It’s a simple question, but one we often fail to ask because we assume that ethical considerations apply only to what people do with technology, to the acts themselves. It is a question, I think, that helps us imagine the moral implications of willful acts of virtual harm.

Of course, it is also worth asking, “What sort of person does my use of this technology reveal me to be?”

Lethal Autonomous Weapons and Thoughtlessness

In the mid-twentieth century, Hannah Arendt wrote extensively about the critical importance of learning to think in the aftermath of a great rupture in our tradition of thought. She wrote of the desperate situation when “it began to dawn upon modern man that he had come to live in a world in which his mind and his tradition of thought were not even capable of asking adequate, meaningful questions, let alone of giving answers to its own perplexities.”

Frequently, Arendt linked this rupture in the tradition, this loss of a framework that made our thinking meaningful, to the appearance of totalitarianism in the early twentieth century. But she also recognized that the tradition had by then been unraveling for some time, and that technology played a not insignificant role in this unraveling and the final rupture. In “Tradition and the Modern Age,” for example, she argues that the “priority of reason over doing, of the mind’s prescribing its rules to the actions of men” had been lost as a consequence of “the transformation of the world by the Industrial Revolution–a transformation the success of which seemed to prove that man’s doings and fabrications prescribe their rules to reason.”

Moreover, in the Prologue to The Human Condition, after reflecting on Sputnik, computer automation, and the pursuit of what we would today call bio-engineering, Arendt worried that our Thinking would prove inadequate to our technologically-enhanced Doing. “If it should turn out to be true,” she added, “that knowledge (in the modern sense of know-how) and thought have parted company for good, then we would indeed become the helpless slaves, not so much of our machines as of our know-how, thoughtless creatures at the mercy of every gadget which is technically possible, no matter how murderous it is.”

That seems as good an entry as any into a discussion of Lethal Autonomous Robots. A short Wired piece on the subject has been making the rounds the past day or two with the rather straightforward title, “We Can Now Build Autonomous Killing Machines. And That’s a Very, Very Bad Idea.” The story takes as its point of departure the recent pledge on the part of a robotics company, Clearpath Robotics, never to build “killer robots.”

Clearpath’s Chief Technology Officer, Ryan Gariepy, explained the decision: “The potential for lethal autonomous weapons systems to be rolled off the assembly line is here right now, but the potential for lethal autonomous weapons systems to be deployed in an ethical way or to be designed in an ethical way is not, and is nowhere near ready.”

Not everyone shares Gariepy’s trepidation. Writing for the blog of the National Defense Industrial Association, Sarah Sicard discussed the matter with Ronald Arkin, a dean at Georgia Tech’s School of Interactive Computing. “Unless regulated by international treaties,” Arkin believes, “lethal autonomy is inevitable.”

It’s worth pausing for a moment to explore the nature of this claim. It’s a Borg Complex claim, of course, although masked slightly by the conditional construction, but that doesn’t necessarily make it wrong. Indeed, claims of inevitability are especially plausible in the context of military technology, and it’s not hard to imagine why. Even if one nation entertained ethical reservations about a certain technology, it could never assure itself that other nations would share its qualms. Better, then, to set those reservations aside than to be outpaced on the battlefield, with disastrous consequences. The force of the logic is compelling. In such a case, however, the inevitability, such as it is, does not reside in the technology per se; it resides in human nature. But even to put it that way threatens to obscure the fact that choices are being made and that they could be made otherwise. The example set by Clearpath Robotics, a conscious decision to forgo research and development on principle, only reinforces this conclusion.

But Arkin doesn’t just believe the advent of Lethal Autonomous Robots to be inevitable; he seems to think that it will be a positive good. Arkin believes that human beings are the “weak link” in the “kill chain.” The question for roboticists is this: “Can we find out ways that can make them outperform human warfighters with respect to ethical performance?” Arkin appears to be fairly certain that the answer will be a rather uncomplicated “yes.”

For a more complicated look at the issue, consider the report (PDF) on Lethal Autonomous Weapons presented to the UN’s Human Rights Council by special rapporteur, Christof Heyns. The report was published in 2013 and first brought to my attention by a post on Nick Carr’s blog. The report explores a variety of arguments for and against the development and deployment of autonomous weapons systems and concludes, “There is clearly a strong case for approaching the possible introduction of LARs with great caution.” It continues:

“If used, they could have far-reaching effects on societal values, including fundamentally on the protection and the value of life and on international stability and security. While it is not clear at present how LARs could be capable of satisfying IHL and IHRL requirements in many respects, it is foreseeable that they could comply under certain circumstances, especially if used alongside human soldiers. Even so, there is widespread concern that allowing LARs to kill people may denigrate the value of life itself.”

Among the report’s more salient observations is this note of concern about unintended consequences:

“Due to the low or lowered human costs of armed conflict to States with LARs in their arsenals, the national public may over time become increasingly disengaged and leave the decision to use force as a largely financial or diplomatic question for the State, leading to the ‘normalization’ of armed conflict. LARs may thus lower the threshold for States for going to war or otherwise using lethal force, resulting in armed conflict no longer being a measure of last resort.”

As with the concern about the denigration of the value of life itself, this worry about the normalization of armed conflict is difficult to empirically verify (although US drone operations in Afghanistan, Pakistan, and the Arabian peninsula are certainly far from irrelevant to the discussion). Consequently, such considerations tend to carry little weight when the terms of the debate are already compromised by technocratic assumptions regarding what counts as compelling reasons, proofs, or evidence.

Such assumptions appear to be all that we have left to go on in light of the rupture in the tradition of thought that Arendt described. Or, to put that a bit more precisely, it may not be all that we have left, but it is what we have gotten. We have precious little to fall back on when we begin to think about what we are doing when what we are doing involves, for instance, the fabrication of Lethal Autonomous Robots. There are no customs of thought and action, no traditions of justice, no culturally embodied wisdom to guide us, at least not in any straightforward and directly applicable fashion. We are thinking without a banister, as Arendt put it elsewhere, if we are thinking at all.

Perhaps it is because I have been reading a good bit of Arendt lately, but I’m increasingly struck by situations we encounter, both ordinary and extraordinary, in which our default problem-solving, cost/benefit analysis mode of thinking fails us. In such situations, we must finally decide what is undecidable and take action, action for which we can be held responsible, action for which we can only hope for forgiveness, action made meaningful by our thinking.

Arendt distinguished this mode of thinking, that which seeks meaning and is a ground for action, from that which seeks to know with certainty what is true. This helps explain, I believe, what she meant in the passage I cited above when she feared that we would become “thoughtless” and slaves to our “know-how.” We are in such cases calculating and measuring, but not thinking, or willing, or judging. Consequently, under such circumstances, we are also perpetually deferring responsibility.

Considered in this light, Lethal Autonomous Weapons threaten to become a symbol of our age: not in their clinical lethality, but in their evacuation of human responsibility from one of the most profound and terrible of actions, the taking of a human life. They will be an apt symbol for an age in which we will grow increasingly accustomed to holding algorithms responsible for all manner of failures, mistakes, and accidents, both trivial and tragic. Except, of course, that algorithms cannot be held accountable and they cannot be forgiven.

We cannot know, either exhaustively or with any degree of certainty, what the introduction of Lethal Autonomous Weapons will mean for human society, at least not by the standards of techno-scientific thinking. In the absence of such certainty, because we do not seem to know how to think or judge otherwise, they will likely be adopted and eventually deployed as a matter of seemingly banal necessity.

____________________________

Update: Dale Carrico has posted some helpful comments, particularly on Arendt.

The Tourist and the Pilgrim: Essays on Life and Technology in the Digital Age

A few days ago, I noted, thanks to a WordPress reminder, that The Frailest Thing had turned three. I had little idea what I was doing when I started blogging, and wasn’t even very clear on why I was doing so. I had just started my graduate program in earnest, so I was reading a good bit and, in part at least, I thought it would be useful to process the ideas I was engaging by writing about them. Because I was devoting myself to course work, I was also out of the classroom for the first time in ten years, and the teacher in me wanted to keep teaching somehow.

So I began blogging and have kept it up these three years and counting.

The best of these three years of writing is, I’m happy to announce, now available in an e-book titled, The Tourist and the Pilgrim: Essays on Life and Technology in the Digital Age.

Forty-six essays are gathered into eight chapters:

1. Technology Criticism
2. Technology Has a History
3. Technology and Memory
4. Technology and the Body
5. Ethics, Religion, and Technology
6. Being Online
7. Our Mediated Lives
8. Miscellany

Not surprisingly, these chapters represent fairly well the major areas of interest that have animated my writing.

Right now, the e-book is only available through Gumroad. Of course, feel free to share the link: https://gumroad.com/l/UQBM. You will receive four file formats (PDF, .epub, .mobi, .azw3). The .mobi file will work best with your Kindle. Some formatting issues are holding up availability through Amazon, but it should also be available there in the next couple of days for those who find that more convenient.

Each of the essays can be found in some form online, but I have revised many of them to correct obvious errors, improve the quality of the prose, and make them read more naturally as stand-alone pieces. Nonetheless, the substance remains freely available through this site.

Convenience and a few improvements aside, those of you who have been reading along with me for some time will not find much you haven’t seen before. You might then consider Gumroad something akin to a tip jar!

Finally, because I would not presume they would see it otherwise, I’d like to share the Acknowledgements section here:

Each of these essays first appeared in some form on The Frailest Thing, a blog that I launched in the summer of 2010. I’m not sure how long the blogging venture would have lasted were it not for the encouragement of readers along the way. I’m especially grateful for those who through their kind words, generous linking, and invitations to write for their publications have given my writing a wider audience than it would’ve had otherwise. On that score, my thanks especially to Adam Thierer, Nathan Jurgenson, Rob Horning, Emily Anne Smith, Alan Jacobs, Nick Carr, Cheri Lucas Rowlands, Matthew Lee Anderson, and Evan Selinger.

But I must also acknowledge a small cadre of friends who read and engaged with my earliest offerings when there was no other audience of which to speak. JT, Kevin, Justin, Mark, David, Randy – Cheers!

And, of course, my thanks and love to my wife, Sarah, who has patiently tolerated and supported my online scribblings these three years.

Deo Gratias

My thanks, of course, are owed to all of you who have stopped by along the way. While it may sound sappy and trite, I have to say there is still something quite humbling about the fact that when I offer up my words, which is to say something of my self, there are those who come around and take the time to read them.

There is a sense in which I’ve written for myself. The writing has helped me in my effort to understand, or, as Hannah Arendt put it, “think what we are doing.” It is no small thing to me that in making that process public, some have found a thing or two of some value.

Cheers!


The Transhumanist Logic of Technological Innovation

What follows are a series of underdeveloped thoughts for your consideration:

Advances in robotics, AI, and automation promise to liberate human beings from labor.

The Programmable World promises to liberate us from mundane, routine, everyday tasks.

Big Data and algorithms promise to liberate us from the imperatives of understanding and deliberation.

Google promises to liberate us from the need to learn things, drive cars, or even become conscious of what we need before it is provided for us.

But what are we being liberated for? What is the end which this freedom will enable us to pursue?

What sort of person do these technologies invite us to become?

Or, if we maximized their affordances, what sort of engagement with the world would they facilitate?

In the late 1950s, Hannah Arendt worried that automated technology was closing in on the elusive promise of a world without labor at a point in history when human beings could understand themselves only as laborers. She knew that in earlier epochs the desire to transcend labor was animated by a political, philosophical, or theological anthropology that assumed there was a teleology inherent in human nature — the contemplation of the true, the good, and the beautiful or of the beatific vision of God.

But she also knew that no such teleology now animates Western culture. In fact, a case could be made that Western culture now assumes that such a teleology does not and could not exist. Unless, that is, we made it for ourselves. This is where transhumanism, extropianism, and the Singularity come in. If there is no teleology inherent to human nature, then the transcendence of human nature becomes the default teleology.

This quasi-religious pursuit has deep historical roots, but the logic of technological innovation may make the ideology more plausible.

Around this time last year, Nick Carr proposed that technological innovation tracks neatly with Maslow’s hierarchy of human needs (see Carr’s chart below). I found this a rather compelling and elegant thesis. But what if innovation is finally determined by something other than strictly human needs? What if, beyond self-actualization, there lies the realm of self-transcendence?

After all, when innovation becomes an article of faith and no normative account of human nature serves to constrain it, we arrive at a point where we ourselves become the final field for innovation.

The technologies listed above, while not directly implicated in the transhumanist project (excepting perhaps dreams of a Google implant), tend in the same direction to the degree that they render human action in the world obsolete. The liberation they implicitly offer, in other words, is a liberation from fundamental aspects of what it has meant to be a human being.

[Chart: Nick Carr’s “hierarchy of innovation”]

Miscellaneous Observations

So here are a few thoughts in no particular order:

I have nothing of great depth to say about Google’s decision to shut down Google Reader. I use it, and I’m sorry to hear that it’s going away. (At the moment, I’m planning to use Feedly as a replacement. If you’ve got a better option, let me know.) But it is clear that a lot of folks are not at all happy with Google. My Twitter feed lit up with righteous indignation seconds after the announcement was made. What came to my mind was a wonderfully understated line from Conrad’s Heart of Darkness. When a relative goes on and on about the Belgian trading company bringing the light of civilization to the Congo, etc., etc., Marlow responds: “I ventured to suggest that the Company was run for profit.”

Over at Cyborgology, more work is being done to refine the critique of digital dualism, especially by Whitney Boesel. She does a remarkably thorough job of documenting the digital dualism debates of the last year or two here, and here she offers the first part of her own effort to further clarify the terms of those debates. I may make some comments when the series of posts is complete; for now, I’ll just throw out a reminder of my own effort a few months ago to provide a phenomenological taxonomy of online experience, “Varieties of Online Experience.”

Speaking of online and offline, or of the Internet and technology more broadly – definitions can be elusive. A lot of time and effort has been, and continues to be, spent trying to delineate the precise referent for these terms. But what if we took a lesson from Wittgenstein? Crudely speaking, Wittgenstein came to believe that meaning was a function of use (in many, but not all, cases). Instead of trying to fix an external referent for these terms and then calling out those who do not use them as we have decided they must (or must not) be used, perhaps we should, as Wittgenstein put it, “look and see” the diversity of uses to which the words are meaningfully put in ordinary conversation. I understand the impulse to demystify terms, such as technology, whose elasticity allows for a great deal of confusion and obfuscation. But perhaps we ought also to allow that even when these terms are being used without analytic precision, they are still conveying sense.

As an example, take the way the names of certain philosophers are tossed around by folks whose expertise is not philosophy. Descartes, I think, is a common example. The word Descartes, or better yet Cartesian, no longer has a strong correlation to the man Rene Descartes or his writings. The word tends to be used by non-philosophers as a placeholder for the idea of pernicious dualism (another word that is used in casually imprecise ways). The word has a sense and a meaning, but it is not narrowly constrained by its ostensible referent. When this is the case, it might be valuable to correct the speaker by saying something like, “Descartes didn’t actually believe …” or “You’re glossing over some important nuances …” or “Have you ever read a page of Descartes?” Alternatively, it may be helpful to realize that the speaker doesn’t really care about Descartes and is only using the word Descartes as a carrier of some notions that may best be addressed without reference to the philosopher.

This, in turn, leads me to say that, while I’ve always admired the generalist or interdisciplinary tendency, it is difficult to pull off well. In the midst of making a series of astute observations about the difference between academics and intellectuals, Jack Miles writes, “A generalist is someone with a keener-than-average awareness of how much there is to be ignorant about.” This seems to me to be the indispensable starting point for generalist or interdisciplinary work that will be of value. The faux-generalist or the lazy interdisciplinarian merely recombines shallow forms of knowledge. This accomplishes very little, if anything at all.

Come to think of it, I think we would all be better off if we were to develop a “keener-than-average awareness” of our own ignorance.