Lethal Autonomous Weapons and Thoughtlessness

In the mid-twentieth century, Hannah Arendt wrote extensively about the critical importance of learning to think in the aftermath of a great rupture in our tradition of thought. She wrote of the desperate situation when “it began to dawn upon modern man that he had come to live in a world in which his mind and his tradition of thought were not even capable of asking adequate, meaningful questions, let alone of giving answers to its own perplexities.”

Frequently, Arendt linked this rupture in the tradition, this loss of a framework that made our thinking meaningful, to the appearance of totalitarianism in the early twentieth century. But she also recognized that the tradition had by then been unraveling for some time, and technology played a not insignificant role in this unraveling and the final rupture. In “Tradition and the Modern Age,” for example, she argues that the “priority of reason over doing, of the mind’s prescribing its rules to the actions of men” had been lost as a consequence of “the transformation of the world by the Industrial Revolution–a transformation the success of which seemed to prove that man’s doings and fabrications prescribe their rules to reason.”

Moreover, in the Prologue to The Human Condition, after reflecting on Sputnik, computer automation, and the pursuit of what we would today call bio-engineering, Arendt worried that our Thinking would prove inadequate to our technologically-enhanced Doing. “If it should turn out to be true,” she added, “that knowledge (in the modern sense of know-how) and thought have parted company for good, then we would indeed become the helpless slaves, not so much of our machines as of our know-how, thoughtless creatures at the mercy of every gadget which is technically possible, no matter how murderous it is.”

That seems as good an entry as any into a discussion of Lethal Autonomous Robots. A short Wired piece on the subject has been making the rounds the past day or two with the rather straightforward title, “We Can Now Build Autonomous Killing Machines. And That’s a Very, Very Bad Idea.” The story takes as its point of departure the recent pledge on the part of a robotics company, Clearpath Robotics, never to build “killer robots.”

Clearpath’s Chief Technology Officer, Ryan Gariepy, explained the decision: “The potential for lethal autonomous weapons systems to be rolled off the assembly line is here right now, but the potential for lethal autonomous weapons systems to be deployed in an ethical way or to be designed in an ethical way is not, and is nowhere near ready.”

Not everyone shares Gariepy’s trepidation. Writing for the blog of the National Defense Industrial Association, Sarah Sicard discussed the matter with Ronald Arkin, a dean at Georgia Tech’s School of Interactive Computing. “Unless regulated by international treaties,” Arkin believes, “lethal autonomy is inevitable.”

It’s worth pausing for a moment to explore the nature of this claim. It’s a Borg Complex claim, of course, although masked slightly by the conditional construction, but that doesn’t necessarily make it wrong. Indeed, claims of inevitability are especially plausible in the context of military technology, and it’s not hard to imagine why. Even if one nation entertained ethical reservations about a certain technology, it could never assure itself that other nations would share its qualms. Better, then, to set those reservations aside than to be outpaced on the battlefield with disastrous consequences. The force of the logic is compelling. In such a case, however, the inevitability, such as it is, does not reside in the technology per se; it resides in human nature. But even to put it that way threatens to obscure the fact that choices are being made and that they could be made otherwise. The example set by Clearpath Robotics, a conscious decision to forgo research and development on principle, only reinforces this conclusion.

But Arkin doesn’t just believe the advent of Lethal Autonomous Robots to be inevitable; he seems to think that it will be a positive good. Arkin believes that human beings are the “weak link” in the “kill chain.” The question for roboticists is this: “Can we find out ways that can make them outperform human warfighters with respect to ethical performance?” Arkin appears to be fairly certain that the answer will be a rather uncomplicated “yes.”

For a more complicated look at the issue, consider the report (PDF) on Lethal Autonomous Weapons presented to the UN’s Human Rights Council by special rapporteur Christof Heyns. The report was published in 2013 and first brought to my attention by a post on Nick Carr’s blog. The report explores a variety of arguments for and against the development and deployment of autonomous weapons systems and concludes, “There is clearly a strong case for approaching the possible introduction of LARs with great caution.” It continues:

“If used, they could have far-reaching effects on societal values, including fundamentally on the protection and the value of life and on international stability and security. While it is not clear at present how LARs could be capable of satisfying IHL and IHRL requirements in many respects, it is foreseeable that they could comply under certain circumstances, especially if used alongside human soldiers. Even so, there is widespread concern that allowing LARs to kill people may denigrate the value of life itself.”

Among the more salient observations made by the report is this note of concern about unintended consequences:

“Due to the low or lowered human costs of armed conflict to States with LARs in their arsenals, the national public may over time become increasingly disengaged and leave the decision to use force as a largely financial or diplomatic question for the State, leading to the ‘normalization’ of armed conflict. LARs may thus lower the threshold for States for going to war or otherwise using lethal force, resulting in armed conflict no longer being a measure of last resort.”

As with the concern about the denigration of the value of life itself, this worry about the normalization of armed conflict is difficult to verify empirically (although US drone operations in Afghanistan, Pakistan, and the Arabian Peninsula are certainly far from irrelevant to the discussion). Consequently, such considerations tend to carry little weight when the terms of the debate are already compromised by technocratic assumptions regarding what counts as compelling reasons, proofs, or evidence.

Such assumptions appear to be all that we have left to go on in light of the rupture in the tradition of thought that Arendt described. Or, to put that a bit more precisely, it may not be all that we have left, but it is what we have gotten. We have precious little to fall back on when we begin to think about what we are doing when what we are doing involves, for instance, the fabrication of Lethal Autonomous Robots. There are no customs of thought and action, no traditions of justice, no culturally embodied wisdom to guide us, at least not in any straightforward and directly applicable fashion. We are thinking without a banister, as Arendt put it elsewhere, if we are thinking at all.

Perhaps it is because I have been reading a good bit of Arendt lately, but I’m increasingly struck by situations we encounter, both ordinary and extraordinary, in which our default problem-solving, cost/benefit analysis mode of thinking fails us. In such situations, we must finally decide what is undecidable and take action, action for which we can be held responsible, action for which we can only hope for forgiveness, action made meaningful by our thinking.

Arendt distinguished this mode of thinking, that which seeks meaning and is a ground for action, from that which seeks to know with certainty what is true. This helps explain, I believe, what she meant in the passage I cited above when she feared that we would become “thoughtless” and slaves to our “know-how.” We are in such cases calculating and measuring, but not thinking, or willing, or judging. Consequently, under such circumstances, we are also perpetually deferring responsibility.

Considered in this light, Lethal Autonomous Weapons threaten to become a symbol of our age; not in their clinical lethality, but in their evacuation of human responsibility from one of the most profound and terrible of actions, the taking of a human life. They will be an apt symbol for an age in which we will grow increasingly accustomed to holding algorithms responsible for all manner of failures, mistakes, and accidents, both trivial and tragic. Except, of course, that algorithms cannot be held accountable and they cannot be forgiven.

We cannot know, either exhaustively or with any degree of certainty, what the introduction of Lethal Autonomous Weapons will mean for human society, at least not by the standards of techno-scientific thinking. In the absence of such certainty, because we do not seem to know how to think or judge otherwise, they will likely be adopted and eventually deployed as a matter of seemingly banal necessity.

____________________________

Update: Dale Carrico has posted some helpful comments, particularly on Arendt.

Friday Links: Questioning Technology Edition

My previous post, which raised 41 questions about the ethics of technology, is turning out to be one of the most viewed on this site. That is, admittedly, faint praise, but I’m glad that it is, because helping us to think about technology is why I write this blog. The post has also prompted a few valuable recommendations from readers, and I wanted to pass these along to you in case you missed them in the comments.

Matt Thomas reminded me of two earlier lists of questions we should be asking about our technologies. The first of these is Jacques Ellul’s list of 76 Reasonable Questions to Ask of Any Technology (update: see Doug Hill’s comment below about the authorship of this list.) The second is Neil Postman’s more concise list of Six Questions to Ask of New Technologies. Both are worth perusing.

Also, Chad Kohalyk passed along a link to Shannon Vallor’s module, An Introduction to Software Engineering Ethics.

Greg Lloyd provided some helpful links to the (frequently misunderstood) Amish approach to technology, including one to this IEEE article by Jameson Wetmore: “Amish Technology: Reinforcing Values and Building Communities” (PDF). In it, we read, “When deciding whether or not to allow a certain practice or technology, the Amish first ask whether it is compatible with their values.” What a radical idea; the rest of us should try it sometime! While we’re on the topic, I wrote about the Tech-Savvy Amish a couple of years ago.

I can’t remember who linked to it, but I also came across an excellent 1994 article in Ars Electronica that is composed entirely of questions about what we would today call a Smart Home: “How smart does your bed have to be, before you are afraid to go to sleep at night?”

And while we’re talking about lists, here’s a post on Kranzberg’s Six Laws of Technology and a list of 11 things I try to do, often with only marginal success, to achieve a healthy relationship with the Internet.

Enjoy these, and thanks again to those of you who provided the links.

Do Artifacts Have Ethics?

Writing about “technology and the moral dimension,” tech writer and Gigaom founder Om Malik made the following observation:

“I can safely say that we in tech don’t understand the emotional aspect of our work, just as we don’t understand the moral imperative of what we do. It is not that all players are bad; it is just not part of the thinking process the way, say, ‘minimum viable product’ or ‘growth hacking’ are.”

I’m not sure how many people in the tech industry would concur with Malik’s claim, but it is a remarkably telling admission from at least one well-placed individual. Happily, Malik realizes that “it is time to add an emotional and moral dimension to products.” But what exactly does it mean to add an emotional and moral dimension to products?

Malik’s own ensuing discussion is brief and deals chiefly with using data ethically and producing clear, straightforward terms of service. This suggests that Malik is mostly encouraging tech companies to treat their customers in an ethically responsible manner. If so, it’s rather disconcerting that Malik takes this to be a discovery that he feels compelled to announce, prophetically, to his colleagues. Leaving that unfortunate indictment of the tech community aside, I want to suggest that there is no need to add a moral dimension to technology.

Years ago, Langdon Winner famously asked, “Do artifacts have politics?” In the article that bears that title, Winner went on to argue that they most certainly do. We might also ask, “Do artifacts have ethics?” I would argue that they do indeed. The question is not whether technology has a moral dimension; the question is whether we recognize it or not. In fact, technology’s moral dimension is inescapable, layered, and multi-faceted.

When we do think about technology’s moral implications, we tend to think about what we do with a given technology. We might call this the “guns don’t kill people, people kill people” approach to the ethics of technology. What matters most about a technology on this view is the use to which it is put. This is, of course, a valid consideration. A hammer may indeed be used either to build a house or to bash someone’s head in. On this view, technology is morally neutral and the only morally relevant question is this: What will I do with this tool?

But is this really the only morally relevant question one could ask? For instance, pursuing the example of the hammer, might I not also ask how having the hammer in hand encourages me to perceive the world around me? Or what feelings having a hammer in hand arouses?

Below are a few other questions that we might ask in order to get at the wide-ranging “moral dimension” of our technologies. There are, of course, many others that we could ask, but this is a start.

  1. What sort of person will the use of this technology make of me?
  2. What habits will the use of this technology instill?
  3. How will the use of this technology affect my experience of time?
  4. How will the use of this technology affect my experience of place?
  5. How will the use of this technology affect how I relate to other people?
  6. How will the use of this technology affect how I relate to the world around me?
  7. What practices will the use of this technology cultivate?
  8. What practices will the use of this technology displace?
  9. What will the use of this technology encourage me to notice?
  10. What will the use of this technology encourage me to ignore?
  11. What was required of other human beings so that I might be able to use this technology?
  12. What was required of other creatures so that I might be able to use this technology?
  13. What was required of the earth so that I might be able to use this technology?
  14. Does the use of this technology bring me joy?
  15. Does the use of this technology arouse anxiety?
  16. How does this technology empower me? At whose expense?
  17. What feelings does the use of this technology generate in me toward others?
  18. Can I imagine living without this technology? Why, or why not?
  19. How does this technology encourage me to allocate my time?
  20. Could the resources used to acquire and use this technology be better deployed?
  21. Does this technology automate or outsource labor or responsibilities that are morally essential?
  22. What desires does the use of this technology generate?
  23. What desires does the use of this technology dissipate?
  24. What possibilities for action does this technology present? Is it good that these actions are now possible?
  25. What possibilities for action does this technology foreclose? Is it good that these actions are no longer possible?
  26. How does the use of this technology shape my vision of a good life?
  27. What limits does the use of this technology impose upon me?
  28. What limits does my use of this technology impose upon others?
  29. What does my use of this technology require of others who would (or must) interact with me?
  30. What assumptions about the world does the use of this technology tacitly encourage?
  31. What knowledge has the use of this technology disclosed to me about myself?
  32. What knowledge has the use of this technology disclosed to me about others? Is it good to have this knowledge?
  33. What are the potential harms to myself, others, or the world that might result from my use of this technology?
  34. Upon what systems, technical or human, does my use of this technology depend? Are these systems just?
  35. Does my use of this technology encourage me to view others as a means to an end?
  36. Does using this technology require me to think more or less?
  37. What would the world be like if everyone used this technology exactly as I use it?
  38. What risks will my use of this technology entail for others? Have they consented?
  39. Can the consequences of my use of this technology be undone? Can I live with those consequences?
  40. Does my use of this technology make it easier to live as if I had no responsibilities toward my neighbor?
  41. Can I be held responsible for the actions which this technology empowers? Would I feel better if I couldn’t?


You can subscribe to The Convivial Society, my newsletter on tech, society, and the good life, here.

Friday Night Links

Here’s another round of items for your consideration.

At Balkinization, Frank Pasquale is interviewed about his forthcoming book, The Black Box Society: The Secret Algorithms that Control Money and Information.

Mike Bulajewski offers a characteristically insightful and well-written review of the movie Her. And while at his site, I was reminded of his essay on civility from late last year. In light of the recent discussion about civility and its uses, I’d encourage you to read it.

At the New Yorker, Nick Paumgarten reflects on experience and memory in the age of GoPro.

In the LARB, Nick Carr has a sharp piece on Facebook’s social experiments from earlier this year.

At Wired, Patrick Lin looks at robot cars with adjustable ethics settings and, at The Boston Globe, Leon Neyfakh asks, “Can Robots Be Too Nice?”

And lastly, Evan Selinger considers one critical review of Nick Carr’s The Glass Cage: Automation and Us and takes a moment to explore some of the fallacies deployed against critics of technology.

Cheers!

“Civility” Reconsidered

A few days ago, I wrote about why online communication so often turns vile and toxic. I did not, however, provide any examples of the problem; rather, I relied on a series of posts in which others had lodged their own complaints and provided illustrative instances of Internet awfulness. Basically, I took it for granted that readers would already know what I had in mind, and, of course, that’s always a hazardous assumption to make. I was, at the time, more interested in identifying the sources of the problem than in clearly delineating it.

As I’ve thought about that post over the last couple of days, I’ve found myself a bit dissatisfied with what I had written. I couldn’t quite put my finger on the problem, but a couple of recent posts, by Freddie deBoer and Elizabeth Stoker Bruenig respectively, have helped me think more clearly about it. DeBoer and Bruenig both vigorously criticized the rhetoric of civility. This initially struck me as a rather odd tack to take; after all, I’d just written myself about the lack of civility in public debate, particularly as it unfolds online. But, from a different angle, I’d already half-formulated my own critique of the concept of civility. I’ll start with that fledgling critique and then move on to the more developed concerns articulated by deBoer and Bruenig.

As I thought about my post, specifically its vagueness about the exact nature of the problem I was addressing, I wondered if I’d not inadvertently negated the possibility of vigorous, impassioned exchanges–exchanges which might verge on the uncivil, or at least seem to. I remembered, then, that I’d written about this very thing nearly three years ago in a post about civility and friendship occasioned by the passing of Christopher Hitchens. Think what you will of Hitchens, I wasn’t a great fan myself, but the man knew how to turn an acerbic phrase. In any event, I went on to make the following (slightly edited) observations about civility.

To some, the problems with our current public and political discourse stem from a lack of civility. Yet, this depends on what we might mean by civility. A friend recently suggested that the inverse is probably true. We are too civil to speak forthrightly and honestly; it is all obfuscation. In which case, it is not civility that is the problem, but civility’s unseemly counterfeits — slimy flattery, ingratiation, or cowardice. In any case, compared with previous ages, our political discourse is, in fact, remarkably tame.

More to the point, I would say, what we have is not so much a failure of civility as it is a failure of eloquence, made all the worse for the narcissism that frequently attends it. Few, I presume, would mind a little incivility so long as it was to the point and artfully delivered. Hitchens was the master of this sort of artfully acerbic incivility, and he deployed it to great effect. Nothing of the sort characterizes our political discourse. We are plagued instead with the shallow and inelegant shouting matches of cable news programs or that manner of speaking without saying anything mastered by politicians.

I closed, riffing on Aristotle, by suggesting that when people are friends they have no need of civility. In a subsequent comment on the post, I went on to clarify that claim as follows:

Aristotle’s claim is that when people are friends they have no need of justice. I read this along the same lines as C. S. Lewis’ observation that humility makes modesty unnecessary. One is a posture that becomes unnecessary when the true virtue is present. I realize a lot of this comes down to how we are defining terms, but what I was trying to capture is the sense that among friends I have to worry less about “civility” if civility is understood as a kind of artificial restraint. I rely instead on the bonds of friendship, which allow for greater freedom of expression and even a little well-placed humor or “incivility.”

I’d still stand by that, although, again, much of it hangs on how civility is defined.

What’s more, it struck me that, given my own standards of what is right and admirable, I’d better leave some room for the flipping of tables and rather pointed criticisms of personal character.

Taking all of this together, then, it would seem best to say that, first, civility can be a fuzzy category, and, second, that civility is not the only or final word in human communication. Indeed, in some situations, demands for civility may be downright perverse.

This is where deBoer and Bruenig come in. DeBoer’s post was occasioned by a heated controversy in the academic world, one, I’m afraid, I have simply not kept up with. Bruenig’s post, cited by deBoer, appears to have been inspired by her own recent experience with online debates. Both of them remind us that calls for civility sometimes mask and perpetuate asymmetrical relations of power. To put that less clinically, calls for civility sometimes allow the corrupt and powerful to obscure their corruption and retain their power.

For instance, deBoer closes his post with the following summation: “That’s what civility is, in real life: the powerful telling us that we must speak to them with deference and respect, while they are under no similar responsibility to us.”

Bruenig’s thoughts are more extensive and organized with almost scholastic clarity, so it is harder to select a shorter representative sample. That said, here is one passage for consideration: “If you don’t know how to ‘talk the talk’, if you’ve grown up speaking in slang and playing the dozens and you’re not really clear on the delicacies of civility, you’re going to be ruled out of the discourse at every turn. Not for any real reason of course, but because you can’t speak the way upper class parlor sitters do.”

And here is the passage that deBoer cited in his own post:

“It’s not an accident that civility forces you to adopt the framework it is premised upon — the one which preferences no values, which automatically considers all arguments potentially equal in merit, the one which supposes the particular aesthetics of the afternoon salon produce the richest debates, and that the richness of a debate is really its goal. It’s not an accident because — as even people who argue for civility will tell you — civility is about, at some level, establishing common ground. Supposedly this works the arguers to a mutually satisfactory resolution.

But there simply isn’t always common ground, and to be artificially placed on common ground is necessarily to lose some of the ground you were holding. So if you are arguing, for instance, that poor people are being mistreated, should be angry about it, and should lobby for change — civility will force you to give up the ‘angry’ part, or at least to hide it. But that was part of your ground! Now you’ve been muzzled.”

I’m not sure I would’ve said that civility is merely about establishing common ground, but I think Bruenig makes a sensible point here. She forced me to think more carefully about what I am asking for when I make my pleas for civility or lament the lack of it.

Indeed, I am at some level simply asking for people to employ the sort of rhetoric with which I am most comfortable. I prefer, as she puts it, “the aesthetics of the afternoon salon.” I’d like to think, of course, that I have good reasons for this and that it is not merely a matter of self-serving preference. But the rhetoric of civility, insofar as it presumes a neutral common ground, can be deceptive. We might think of it the way communitarian critics of the liberal democratic project think of the modern secular state’s pretensions of neutrality toward competing visions of the good life.

In fact, by assuming a posture of ostensible neutrality, the liberal democratic state already smuggles in certain substantive judgements. In cases of morality, for example, the enforcement of neutrality is equitable only on the assumption that the matter is, finally, not one of moral consequence. The deck is stacked against those who would argue otherwise, and, coming back to the point at hand, it is easy to see how calls for civility may analogously stifle the voices of those who are morally outraged. From this view then, civility is, like certain calls for tolerance, the thin gruel we’re left with when we’ve been stripped of a more robust and sustaining moral grammar.

I’m not sure, however, that I want to abandon the pursuit of all that is wrapped up with the concept of civility. Perhaps we simply need a better, richer grammar of virtuous discourse. Maybe we do better to speak of humane discourse, rather than civil discourse. When, for instance, we condemn the death of innocents, it may not be very humane at all to speak with civility as some might define it. To speak of humane discourse also gestures toward an acknowledgement of the fullness of our humanity. We are not, as certain modern versions of the self have it, merely thinking things. We are feeling beings as well, and a well-ordered soul is one which not only thinks clearly about the world, but one whose whole being responds appropriately to the world it experiences. We should, in other words, be revolted by what is revolting, we should be enraged by pervasive injustice, and so on. Calls for civility may only be a way of hamstringing legitimate human responses to the very broken world we inhabit.

But, aye, there’s the rub. As I write that, I immediately realize that if only we could all agree on what is revolting and unjust, we wouldn’t have a problem adjudicating the proper place of civility rightly understood. I find myself coming back to one of my complaints in last week’s post. Part of our problem, as I see it, is that we are too damn cocksure about the moral uprightness of our own positions. But, again, perhaps civility is the wrong antidote to prescribe. Humility is what is needed, and humility is at once a more challenging and more effective cure. Unlike bare civility, which may only deal with the surface, humility goes to the root.

All in all, then, even as I’ve been writing this post, I’ve talked myself into deeper agreement with Bruenig. I encourage you to read all of what she has to say (as well as her follow-up post). I’ll leave you with her own closing remarks, which suggest that we might do well to reframe our civility talk as a matter of rightly ordered love instead.

“None of this is to argue for being cruel, vulgar, intentionally insulting, etc. But there’s a peculiar tyranny of ‘civility’, and it’s to argue that the good of civility should be judged according to the particular conditions of argument, and should always be balanced against the stakes of the actual content of the debate. We should all want to be the kind of person who is charitable, merciful, quick to forgive and quick to ask forgiveness; these are all better virtues than ‘civility’ anyway, which is by its own admission little more than a veneer of these genuine virtues. But we should also see that love is at times bracing, especially when it is operating in defense, and that a little rupture and agonism are sometimes necessary for an honest reconciliation.”

I take that back. I think I’ll leave you, instead, with W.H. Auden, who, as Richard Wilbur put it, “sustained the civil tongue / In a scattering time.” Here is Auden’s deceptively simple plea to which we should all frequently return: “You shall love your crooked neighbor with your crooked heart.”

__________________________________

UPDATE: Compare Alan Jacobs’ take on this whole “civility” thing. Basically, he thought Bruenig and deBoer went in the wrong direction with their mostly accurate assessment of the problem.