Lethal Autonomous Weapons and Thoughtlessness

In the mid-twentieth century, Hannah Arendt wrote extensively about the critical importance of learning to think in the aftermath of a great rupture in our tradition of thought. She wrote of the desperate situation when “it began to dawn upon modern man that he had come to live in a world in which his mind and his tradition of thought were not even capable of asking adequate, meaningful questions, let alone of giving answers to its own perplexities.”

Frequently, Arendt linked this rupture in the tradition, this loss of a framework that made our thinking meaningful, to the appearance of totalitarianism in the early twentieth century. But she also recognized that the tradition had by then been unraveling for some time, and that technology played a not insignificant role in this unraveling and the final rupture. In “Tradition and the Modern Age,” for example, she argues that the “priority of reason over doing, of the mind’s prescribing its rules to the actions of men” had been lost as a consequence of “the transformation of the world by the Industrial Revolution–a transformation the success of which seemed to prove that man’s doings and fabrications prescribe their rules to reason.”

Moreover, in the Prologue to The Human Condition, after reflecting on Sputnik, computer automation, and the pursuit of what we would today call bio-engineering, Arendt worried that our Thinking would prove inadequate to our technologically enhanced Doing. “If it should turn out to be true,” she added, “that knowledge (in the modern sense of know-how) and thought have parted company for good, then we would indeed become the helpless slaves, not so much of our machines as of our know-how, thoughtless creatures at the mercy of every gadget which is technically possible, no matter how murderous it is.”

That seems as good an entry as any into a discussion of Lethal Autonomous Robots. A short Wired piece on the subject has been making the rounds the past day or two with the rather straightforward title, “We Can Now Build Autonomous Killing Machines. And That’s a Very, Very Bad Idea.” The story takes as its point of departure the recent pledge on the part of a robotics company, Clearpath Robotics, never to build “killer robots.”

Clearpath’s Chief Technology Officer, Ryan Gariepy, explained the decision: “The potential for lethal autonomous weapons systems to be rolled off the assembly line is here right now, but the potential for lethal autonomous weapons systems to be deployed in an ethical way or to be designed in an ethical way is not, and is nowhere near ready.”

Not everyone shares Gariepy’s trepidation. Writing for the blog of the National Defense Industrial Association, Sarah Sicard discussed the matter with Ronald Arkin, a dean at Georgia Tech’s School of Interactive Computing. “Unless regulated by international treaties,” Arkin believes, “lethal autonomy is inevitable.”

It’s worth pausing for a moment to explore the nature of this claim. It’s a Borg Complex claim, of course, although masked slightly by the conditional construction, but that doesn’t necessarily make it wrong. Indeed, claims of inevitability are especially plausible in the context of military technology, and it’s not hard to imagine why. Even if one nation entertained ethical reservations about a certain technology, it could never assure itself that other nations would share its qualms. Better, then, to set the reservations aside than to be outpaced on the battlefield with disastrous consequences. The force of the logic is compelling. In such a case, however, the inevitability, such as it is, does not reside in the technology per se; it resides in human nature. But even to put it that way threatens to obscure the fact that choices are being made and that they could be made otherwise. The example set by Clearpath Robotics, a conscious decision to forgo research and development on principle, only reinforces this conclusion.

But Arkin doesn’t just believe the advent of Lethal Autonomous Robots to be inevitable; he seems to think that it will be a positive good. Arkin believes that human beings are the “weak link” in the “kill chain.” The question for roboticists is this: “Can we find out ways that can make them outperform human warfighters with respect to ethical performance?” Arkin appears to be fairly certain that the answer will be a rather uncomplicated “yes.”

For a more complicated look at the issue, consider the report (PDF) on Lethal Autonomous Weapons presented to the UN’s Human Rights Council by special rapporteur Christof Heyns. The report was published in 2013 and first brought to my attention by a post on Nick Carr’s blog. The report explores a variety of arguments for and against the development and deployment of autonomous weapons systems and concludes, “There is clearly a strong case for approaching the possible introduction of LARs with great caution.” It continues:

“If used, they could have far-reaching effects on societal values, including fundamentally on the protection and the value of life and on international stability and security. While it is not clear at present how LARs could be capable of satisfying IHL and IHRL requirements in many respects, it is foreseeable that they could comply under certain circumstances, especially if used alongside human soldiers. Even so, there is widespread concern that allowing LARs to kill people may denigrate the value of life itself.”

Among the report’s more salient observations is this note of concern about unintended consequences:

“Due to the low or lowered human costs of armed conflict to States with LARs in their arsenals, the national public may over time become increasingly disengaged and leave the decision to use force as a largely financial or diplomatic question for the State, leading to the ‘normalization’ of armed conflict. LARs may thus lower the threshold for States for going to war or otherwise using lethal force, resulting in armed conflict no longer being a measure of last resort.”

As with the concern about the denigration of the value of life itself, this worry about the normalization of armed conflict is difficult to verify empirically (although US drone operations in Afghanistan, Pakistan, and the Arabian Peninsula are certainly far from irrelevant to the discussion). Consequently, such considerations tend to carry little weight when the terms of the debate are already compromised by technocratic assumptions regarding what counts as compelling reasons, proofs, or evidence.

Such assumptions appear to be all that we have left to go on in light of the rupture in the tradition of thought that Arendt described. Or, to put that a bit more precisely, it may not be all that we have left, but it is what we have gotten. We have precious little to fall back on when we begin to think about what we are doing when what we are doing involves, for instance, the fabrication of Lethal Autonomous Robots. There are no customs of thought and action, no traditions of justice, no culturally embodied wisdom to guide us, at least not in any straightforward and directly applicable fashion. We are thinking without a banister, as Arendt put it elsewhere, if we are thinking at all.

Perhaps it is because I have been reading a good bit of Arendt lately, but I’m increasingly struck by situations we encounter, both ordinary and extraordinary, in which our default problem-solving, cost/benefit analysis mode of thinking fails us. In such situations, we must finally decide what is undecidable and take action, action for which we can be held responsible, action for which we can only hope for forgiveness, action made meaningful by our thinking.

Arendt distinguished this mode of thinking, that which seeks meaning and is a ground for action, from that which seeks to know with certainty what is true. This helps explain, I believe, what she meant in the passage I cited above when she feared that we would become “thoughtless” and slaves to our “know-how.” We are in such cases calculating and measuring, but not thinking, or willing, or judging. Consequently, under such circumstances, we are also perpetually deferring responsibility.

Considered in this light, Lethal Autonomous Weapons threaten to become a symbol of our age; not in their clinical lethality, but in their evacuation of human responsibility from one of the most profound and terrible of actions, the taking of a human life. They will be an apt symbol for an age in which we will grow increasingly accustomed to holding algorithms responsible for all manner of failures, mistakes, and accidents, both trivial and tragic. Except, of course, that algorithms cannot be held accountable and they cannot be forgiven.

We cannot know, either exhaustively or with any degree of certainty, what the introduction of Lethal Autonomous Weapons will mean for human society, at least not by the standards of techno-scientific thinking. In the absence of such certainty, because we do not seem to know how to think or judge otherwise, they will likely be adopted and eventually deployed as a matter of seemingly banal necessity.

____________________________

Update: Dale Carrico has posted some helpful comments, particularly on Arendt.

The Tourist and the Pilgrim: Essays on Life and Technology in the Digital Age

A few days ago, I noted, thanks to a WordPress reminder, that The Frailest Thing had turned three. I had little idea what I was doing when I started blogging, and I wasn’t even very clear on why I was doing so. I had just started my graduate program in earnest, so I was reading a good bit and, in part at least, I thought it would be useful to process the ideas I was engaging by writing about them. Because I was devoting myself to course work, I was also out of the classroom for the first time in ten years, and the teacher in me wanted to keep teaching somehow.

So I began blogging and have kept it up these three years and counting.

The best of these three years of writing is, I’m happy to announce, now available in an e-book titled The Tourist and the Pilgrim: Essays on Life and Technology in the Digital Age.

Forty-six essays are gathered into eight chapters:

1. Technology Criticism
2. Technology Has a History
3. Technology and Memory
4. Technology and the Body
5. Ethics, Religion, and Technology
6. Being Online
7. Our Mediated Lives
8. Miscellany

Not surprisingly, these chapters represent fairly well the major areas of interest that have animated my writing.

Right now, the e-book is only available through Gumroad. Of course, feel free to share the link: https://gumroad.com/l/UQBM. You will receive four file formats (PDF, .epub, .mobi, .azw3). The .mobi file will work best with your Kindle. Some formatting issues are holding up availability through Amazon, but it should also be available there in the next couple of days for those who find that more convenient.

Each of the essays can be found in some form online, but I have revised many of them to correct obvious errors, improve the quality of the prose, and make them read more naturally as stand-alone pieces. Nonetheless, the substance remains freely available through this site.

Convenience and a few improvements aside, those of you who have been reading along with me for some time will not find much you haven’t seen before. You might then consider Gumroad something akin to a tip jar!

Finally, because I would not presume they would see it otherwise, I’d like to share the Acknowledgements section here:

Each of these essays first appeared in some form on The Frailest Thing, a blog that I launched in the summer of 2010. I’m not sure how long the blogging venture would have lasted were it not for the encouragement of readers along the way. I’m especially grateful to those who, through their kind words, generous linking, and invitations to write for their publications, have given my writing a wider audience than it would’ve had otherwise. On that score, my thanks especially to Adam Thierer, Nathan Jurgenson, Rob Horning, Emily Anne Smith, Alan Jacobs, Nick Carr, Cheri Lucas Rowlands, Matthew Lee Anderson, and Evan Selinger.

But I must also acknowledge a small cadre of friends who read and engaged with my earliest offerings when there was no other audience of which to speak. JT, Kevin, Justin, Mark, David, Randy – Cheers!

And, of course, my thanks and love to my wife, Sarah, who has patiently tolerated and supported my online scribblings these three years.

Deo Gratias

My thanks, of course, are owed to all of you who have stopped by along the way. While it may sound sappy and trite, I have to say there is still something quite humbling about the fact that when I offer up my words, which is to say something of my self, there are those who come around and take the time to read them.

There is a sense in which I’ve written for myself. The writing has helped me in my effort to understand, or, as Hannah Arendt put it, “think what we are doing.” It is no small thing to me that in making that process public, some have found a thing or two of some value.

Cheers!


The Transhumanist Logic of Technological Innovation

What follows is a series of underdeveloped thoughts for your consideration:

Advances in robotics, AI, and automation promise to liberate human beings from labor.

The Programmable World promises to liberate us from mundane, routine, everyday tasks.

Big Data and algorithms promise to liberate us from the imperatives of understanding and deliberation.

Google promises to liberate us from the need to learn things, drive cars, or even become conscious of what we need before it is provided for us.

But what are we being liberated for? What is the end which this freedom will enable us to pursue?

What sort of person do these technologies invite us to become?

Or, if we maximized their affordances, what sort of engagement with the world would they facilitate?

In the late 1950s, Hannah Arendt worried that automated technology was closing in on the elusive promise of a world without labor at a point in history when human beings could understand themselves only as laborers. She knew that in earlier epochs the desire to transcend labor was animated by a political, philosophical, or theological anthropology that assumed there was a teleology inherent in human nature — the contemplation of the true, the good, and the beautiful or of the beatific vision of God.

But she also knew that no such teleology now animates Western culture. In fact, a case could be made that Western culture now assumes that such a teleology does not and could not exist. Unless, that is, we made it for ourselves. This is where transhumanism, extropianism, and singularitarianism come in. If there is no teleology inherent to human nature, then the transcendence of human nature becomes the default teleology.

This quasi-religious pursuit has deep historical roots, but the logic of technological innovation may make the ideology more plausible.

Around this time last year, Nick Carr proposed that technological innovation tracks neatly with Maslow’s hierarchy of human needs (see Carr’s chart below). I found this a rather compelling and elegant thesis. But what if innovation is finally determined by something other than strictly human needs? What if, beyond self-actualization, there lay the realm of self-transcendence?

After all, when, as an article of faith, we must innovate, and no normative account of human nature serves to constrain innovation, then we arrive at a point where we ourselves will be the final field for innovation.

The technologies listed above, while not directly implicated in the transhumanist project (excepting perhaps dreams of a Google implant), tend in the same direction to the degree that they render human action in the world obsolete. The liberation they implicitly offer, in other words, is a liberation from fundamental aspects of what it has meant to be a human being.

[Chart: Nick Carr’s “hierarchy of innovation”]

Miscellaneous Observations

So here are a few thoughts in no particular order:

I have nothing of great depth to say about Google’s decision to shut down Google Reader. I use it, and I’m sorry to hear that it’s going away. (At the moment, I’m planning to use Feedly as a replacement. If you’ve got a better option, let me know.) But it is clear that a lot of folks are not at all happy with Google. My Twitter feed lit up with righteous indignation seconds after the announcement was made. What came to my mind was a wonderfully understated line from Conrad’s Heart of Darkness. When a relative goes on and on about the Belgian trading company bringing the light of civilization to the Congo, etc., etc., Marlow responds: “I ventured to suggest that the Company was run for profit.”

Over at Cyborgology, more work is being done to refine the critique of digital dualism, especially by Whitney Boesel. She does a remarkably thorough job of documenting the digital dualism debates over the last year or two here, and here she offers the first part of her own effort to further clarify the terms of the debate. I may make some comments when the series of posts is complete; for now, I’ll just throw out a reminder of my own effort a few months ago to provide a phenomenological taxonomy of online experience, “Varieties of Online Experience.”

Speaking of online and offline, and also the Internet or technology – definitions can be elusive. A lot of time and effort has been and continues to be spent trying to delineate the precise referents for these terms. But what if we took a lesson from Wittgenstein? Crudely speaking, Wittgenstein came to believe that meaning was a function of use (in many, but not all, cases). Instead of trying to fix an external referent for these terms and then calling out those who do not use them as we have decided they must be used, perhaps we should, as Wittgenstein put it, “look and see” the diversity of uses to which the words are meaningfully put in ordinary conversation. I understand the impulse to demystify terms, such as technology, whose elasticity allows for a great deal of confusion and obfuscation. But perhaps we ought also to allow that even when these terms are being used without analytic precision, they are still conveying sense.

As an example, take the way the names of certain philosophers are tossed around by folks whose expertise is not philosophy. Descartes, I think, is a common example. The word Descartes, or better yet Cartesian, no longer has a strong correlation to the man René Descartes or his writings. The word tends to be used by non-philosophers as a placeholder for the idea of pernicious dualism (another word that is used in casually imprecise ways). The word has a sense and a meaning, but it is not narrowly constrained by its ostensible referent. When this is the case, it might be valuable to correct the speaker by saying something like, “Descartes didn’t actually believe …” or “You’re glossing over some important nuances …” or “Have you ever read a page of Descartes?” Alternatively, it may be helpful to realize that the speaker doesn’t really care about Descartes and is only using the word Descartes as a carrier of some notions that may best be addressed without reference to the philosopher.

This, in turn, leads me to say that, while I’ve always admired the generalist or interdisciplinary tendency, it is difficult to pull off well. In the midst of making a series of astute observations about the difference between academics and intellectuals, Jack Miles writes, “A generalist is someone with a keener-than-average awareness of how much there is to be ignorant about.” This seems to me to be the indispensable starting point for generalist or interdisciplinary work that will be of value. The faux-generalist or the lazy interdisciplinarian merely recombines shallow forms of knowledge. This accomplishes very little, if anything at all.

Come to think of it, I think we would all be better off if we were to develop a “keener-than-average awareness” of our own ignorance.

What Motivates the Critic of Technology?

Last year I wrote a few posts considering the motives that animate tech critics. I’ve slightly revised and collated three of those posts below.

_____________________________________________

Some time ago, I confessed my deeply rooted Arcadian disposition. I added, “The Arcadian is the critic of technology, the one whose first instinct is to mourn what is lost rather than celebrate what is gained.” This phrase prompted a reader to suggest that the critic of technology is preferably neither an Arcadian nor a Utopian. This better sort of critic, he wrote, “doesn’t ‘mourn what is lost’ but rather seeks an understanding of how the present arrived from the past and what it means for the future.” The reader also referenced an essay by the philosopher of technology Don Ihde in which Ihde reflected on the role of the critic of technology by analogy to the literary critic or the art critic. The comment triggered a series of questions in my mind: What exactly makes for a good critic of technology? What stance, if any, is appropriate to the critic of technology toward technology? Can the good critic mourn?

First, let me reiterate what I’ve written elsewhere: Neither unbridled optimism nor thoughtless pessimism regarding technology fosters the sort of critical distance required to live wisely with technology. I stand by that.

Second, it is worth asking: what exactly does a critic of technology criticize? The objects of criticism are rather straightforward when we think of the food critic, the art critic, the music critic, the film critic, and so on. But what about the critic of technology? The trouble here, of course, stems from the challenge of defining technology. More often than not the word suggests the gadgets with which we surround ourselves. A little more reflection brings to mind a variety of different sorts of technologies: communication, military, transportation, energy, medical, agricultural, etc. The wheel, the factory, the power grid, the pen, the iPhone, the hammer, the space station, the water wheel, the plow, the sword, the ICBM, the film projector – it is a Procrustean concept indeed that can accommodate all of this. What does it mean to be a critic of a field that includes such a diverse set of artifacts and systems?

I’m not entirely sure; let’s say, for present purposes, that critics of technology find their niche within certain subsets of the set that includes all of the above. The more interesting question, to me, is this: What does the critic love?*

If we think of all of the other sorts of critics, it seems reasonable to suppose that they are driven by a love for the objects and practices they criticize. The music critic loves music. The film critic loves film. The food critic loves food. (We might also grant that a certain variety of critic probably loves nothing so much as the sound of their own writing.) But does the technology critic love technology? Some of the best critics of technology have seemed to love technology not at all. What do we make of that?

What does the critic of technology love that is analogous to the love of the music critic for music, the food critic for food, etc.? Or does the critic of technology love anything at all in this way? Ihde seems not to think so when he writes that, unlike other sorts of critics, the critic of technology does not become so because they are “passionate” about the object of criticism.

Perhaps there is something about the instrumental character of technology that makes it difficult to complete the analogy. Music, art, literature, food, film – each of these requires technology of some sort. There are exceptions: dance and voice, for example. But for the most part, technology is involved in the creation of the works that are the objects of criticism. The pen, the flute, the camera – these tools are essential, but they are also subordinate to the finished works that they yield. The musician loves the instrument for the sake of the music that it allows them to play. It would be odd indeed if a musician were to tell us that he loves his instrument but is rather indifferent to the music itself. And this is our clue. The critic of technology is a critic of artifacts and systems that are always for the sake of something else. The critic of technology does not love technology because technology rarely exists for its own sake. Ihde is right in saying that the critic of technology is not, and in fact should not be, passionate about the object of their criticism. But it doesn’t necessarily follow that no passion at all motivates their work.

So what does the critic of technology love? Perhaps it is the environment. Perhaps it is an ideal of community or friendship. Perhaps it is an ideal civil society. Perhaps it is health and vitality. Perhaps it is sound education. Perhaps liberty. Perhaps joy. Perhaps a particular vision of human flourishing. The critic of technology is animated by a love for something other than the technology itself. Returning to where we began, I would suggest that the critic may indeed mourn just as they may celebrate. They may do either to the degree that their critical work reveals technology’s complicity in either the destruction or promotion of that which they love.

––

Criticism of technology, if it moves beyond something like mere description and analysis, implies making what amount to moral and ethical judgments. The critic of technology, if they reach conclusions about the consequences of technology for the lives of individual persons and the health of institutions and communities, will be doing work that rests on ethical principles and carries ethical implications.

In this they are not altogether unlike the music critic or the literary critic who is expected to make judgments about the merits of a work of art given the established standards of their field. These standards take shape within an institutionalized tradition of criticism. Likewise, the critic of technology — if they move beyond questions such as “Does this technology work?” or “How does this technology work?” to questions such as “What are the social consequences of this technology?” — is implicated in judgments of value and worth.

But according to what standards and from within which tradition? Not the standards of “technology,” if such could even be delineated, because these would merely consider efficiency and functionality (although even these are not exactly “value neutral”). It was, for example, a refusal to evaluate technology on its own terms that characterized the vigorous critical work of the late Jacques Ellul. As Ellul saw it, technology had achieved its nearly autonomous position in society because it was shielded from substantive criticism — criticism, that is, which refused to evaluate technology by its own standards. The critic of technology, then, proceeds with an evaluative framework that is independent of the logic of “technoscience,” as philosopher Don Ihde called it, and so they become an outsider to the field.

__________________

“The contrast between art and literary criticism and what I shall call ‘technoscience criticism’ is marked. Few would call art or literary critics ‘anti-art’ or ‘anti-literature’ in the working out, however critically, of their products. And while it may indeed be true that given works of art or given texts are excoriated, demeaned, or severely dealt with, one does not usually think of the critic as generically ‘anti-art’ or ‘anti-literature.’ Rather, it is precisely because the critic is passionate about his or her subject matter that he or she becomes a ‘critic.’ That is simply not the case with science or technoscience criticism …. The critic—as I shall show below—is either regarded as an outsider, or if the criticism arises from the inside, is soon made to be a quasi-outsider.”

___________________

The libertarian critic, the Marxist critic, the Roman Catholic critic, the posthumanist critic, and so on — each advances their criticism of technology informed by their ethical commitments. Their criticism of technology flows from their loves. Each criticizes technology according to the larger moral and ethical framework implied by the movements, philosophies, and institutions that have shaped their identity. And, of course, so it must be. We are limited beings whose knowledge is always situated within particular contexts. There is no avoiding this, and there is nothing particularly undesirable about this state of affairs. The best critics will be self-aware of their commitments and work hard to sympathetically entertain divergent perspectives. They will also work patiently and diligently to understand a given technology before reaching conclusions about its moral and ethical consequences. But I suspect this work of understanding, precisely because it can be demanding, is typically driven by some deeper commitment that lends urgency and passion to the critic’s work.

Such underlying commitments are often veiled within certain rhetorical contexts that demand as much, such as the academy. But debates about the merits of technology might be more fruitful if the participants acknowledged the tacit ethical frameworks that underlie the positions they stake out. This is because, in such cases, the technology in question is only a proxy for something else — the object of the critic’s love.

________________________

*Ultimately, I mean love in the Augustinian sense: the deep commitments and desires which drive and motivate action.