One Does Not Simply Add Ethics To Technology

In a Twitter thread that has been retweeted over 17,000 times to date, the actor Kumail Nanjiani took the tech industry to task for its apparent indifference to the ethical consequences of its work.

Nanjiani stars in the HBO series Silicon Valley and, as part of his research for the role, he spends a good deal of time at tech conferences and visiting tech companies. When he brings up possible ethical concerns, he realizes “that ZERO consideration seems to be given to the ethical implications of tech.” “They don’t even have a pat rehearsed answer,” Nanjiani adds, “They are shocked at being asked. Which means nobody is asking those questions.” Read the whole thread. It ends on this cheery note: “You can’t put this stuff back in the box. Once it’s out there, it’s out there. And there are no guardians. It’s terrifying. The end.”

Nanjiani’s thread appears to have struck a nerve. It was praised by many of the folks I follow on Twitter, and rightly so. Yes, he’s an actor, not a philosopher, historian, or sociologist, etc., but there’s much to commend in his observations and warnings.

But here’s what Nanjiani may not know: we had, in fact, been warned. Nanjiani believes that “nobody is asking those questions,” questions about technology’s ethical consequences, but this is far from the truth. Technology critics have been warning us for a very long time about the disorders and challenges, ethical and otherwise, that attend contemporary technology. In 1977, for example, Langdon Winner wrote the following:

Different ideas of social and political life entail different technologies for their realization. One can create systems of production, energy, transportation, information handling, and so forth that are compatible with the growth of autonomous, self-determining individuals in a democratic polity. Or one can build, perhaps unwittingly, technical forms that are incompatible with this end and then wonder how things went strangely wrong. The possibilities for matching political ideas with technological configurations appropriate to them are, it would seem, almost endless. If, for example, some perverse spirit set out deliberately to design a collection of systems to increase the general feeling of powerlessness, enhance the prospects for the dominance of technical elites, create the belief that politics is nothing more than a remote spectacle to be experienced vicariously, and thereby diminish the chance that anyone would take democratic citizenship seriously, what better plan to suggest than that we simply keep the systems we already have?

It would not take very much time or effort to find similar expressions of critical concern about technology’s social and moral consequences from a wide array of writers, critics, historians, philosophers, sociologists, political theorists, etc. dating back at least a century.

My first response to Nanjiani’s thread is thus mild irritation, bemusement really, at how novel and daring his comments appear when, in fact, so many have been saying as much, and more trenchantly, for so long and at great length.

Beyond this, however, there are a few other points worth noting.

First, we are, as a society, deeply invested in the belief that technology is ethically neutral if not, in fact, an unalloyed good. There are complex and longstanding reasons for this, which, in my view, involve both the history of politics and of religion in western society over the last few centuries. Crudely put, we have invested an immense measure of hope in technology, and in order for these hopes to be realized it must be assumed that technology is ethically neutral or unfailingly beneficent. For example, if technology, in the form of Big Data driven algorithmic processes, is to function as arbiter of truth, it can do so only to the degree that we perceive these processes to be neutral and above the biases and frailties that plague human reasoning.

Second, the tech industry is deeply invested in the belief that technology is ethically neutral. If technology is ethically neutral, then those who design, market, and manufacture technology cannot be held responsible for the consequences of their work. Moreover, we are, as consumers, more likely to adopt new technologies if we are wholly untroubled by ethical considerations. If it occurred to us that every device we buy was a morally fraught artifact, we might be more circumspect about what we purchase and adopt.

Third, it’s not as easy as saying we should throw some ethics at our technology. One should immediately wonder whose ethics are in view. We should not forget that ours is an ethically diverse society, and simply noting that technology is ethically fraught does not immediately resolve the question of whose ethical vision should guide the design, development, and deployment of new technology. Indeed, this is one of the reasons we are invested in the myth of technology’s neutrality in the first place: it promises an escape from the messiness of living with competing ethical frameworks and accounts of human flourishing.


Fourth, in seeking to apply ethics to technology we would not be entering into a void. In Autonomous Technology, Langdon Winner observed that “while positive, utopian principles and proposals can be advanced, the real field is already taken. There are, one must admit, technologies already in existence—apparatus occupying space, techniques shaping human consciousness and behavior, organizations giving pattern to the activities of the whole society.”

Likewise, when we seek to apply ethics to technology, we must recognize that the field is already taken. Not only are particular artifacts and devices not ethically neutral, they also partake of a pattern that informs the broader technological project. Technology is not neutral and, in its contemporary manifestations, it embodies a positive ethic. It is unfashionable to say as much, but it seems no less true to me. I am here thinking of something like what Jacques Ellul called la technique or what Albert Borgmann called the device paradigm. The principles of this overarching but implicit ethic embodied by contemporary technology include axioms such as “faster is always better,” “efficiency is always good,” “reducing complexity is always desirable,” “means are always indifferent and interchangeable.”

Fifth, the very idea of a free-floating, abstract system of ethics that can simply be applied to technology is itself misleading and a symptom of the problem. Ethics are sustained within communities whose moral visions are shaped by narratives and practices. As Langdon Winner has argued, drawing on the work of Alasdair MacIntyre, “debates about technology policy confirm MacIntyre’s argument that modern societies lack the kinds of coherent social practice that might provide firm foundations for moral judgments and public policies.” “[T]he trouble,” Winner adds, “is not that we lack good arguments and theories, but rather that modern politics simply does not provide appropriate roles and institutions in which the goal of defining the common good in technology policy is a legitimate project.”

Contemporary technology undermines the communal and political structures that might sustain an ethical vision capable of directing and channeling the development of technology (creative destruction and what not). And, consequently, it thrives all the more because these structures are weakened. Indeed, alongside Ellul’s la technique and Borgmann’s device paradigm, we might add another pattern that characterizes contemporary technology: the design of contemporary technology is characterized by a tendency to veil or obscure its ethical ramifications. We can call it, with a nod to Borgmann, the ethical neutrality paradigm: contemporary technologies are becoming more ethically consequential while their design all the more successfully obscures their ethical import.

I do not mean to suggest that it is futile to think ethically about technology. That’s been more or less what I’ve been trying to do for the past seven years. But under these circumstances, what can be done? I have no obvious solutions. It would be helpful, though, if designers worked to foreground rather than veil the ethical consequences of their tools. That may be, in fact, the best we can hope for at present: technology that resists the ethical neutrality paradigm, yielding moral agency back to the user or, at least, bringing the moral valence of its use, distributed and mediated as it may be, more clearly into view.

28 thoughts on “One Does Not Simply Add Ethics To Technology”

  1. Implicit in your argument, it seems to me, is that we have no choice about the technologies we use and how we use them. Of course technologists cannot easily take an ethical standpoint in an ethically diverse society. In a sense all they can do is put stuff out there and see what sells, and how it is used. That is our diverse society “making its mind up” and “changing its mind” as technologies are rolled out.

    But we can choose how and if we use technology. I can (and do) use social media to keep in touch with my real, if remote, physical friends and family. I also use it to stay in touch with individuals and groups (such as yourself) who share my ethical, moral, economic, political and intellectual concerns and interests. I use technology to inform myself about the world (NOT to be informed by technology). If the technology doesn’t work well or in the way that I want it to, I change the way I use it, or I abandon it and look for something better. I recently switched off all Facebook notifications, and channelled all my various newsfeeds and other information sources such as medium.com into folders marked as read. (Not this one, by the way). I could just unsubscribe, but I want to stay in touch. But I also want to control how much information, opinion, “news” and hysteria I have to process each day. So far this is working quite well. Even my personal email contacts get an auto-response saying I’m not listening (every day, all the time) – if they want me they can call my mobile. So instead of spending hours each day processing all this noise, I spend minutes. And then spend my time on things that matter to me. Handwriting letters to good friends. Telephoning children and others. Spending more time working with my hands and going outside. I don’t think I am alone. I saw yesterday that young people in the UK spent an average of 3.8 hours on their smartphones this year, down from 3.9 hours last year.

    The alternative, also implicit in your piece, seems to be some species of top down ethical guidelines being used to steer technology in this or that direction. I think I prefer to keep my autonomy, thank you.

    And that perhaps is the nub – are there technologies which affect us, over which we have no control, no possibility of choosing not to use, and if so, how can we re-assert our individual autonomy over these technologies? The advances in automation, robotics and AI and developments in the defence industries spring to mind.

    Perhaps the answer is to provide every citizen with the means to protect herself from the effects of technologies which she does not like, want or approve of. The equivalent of the tinfoil hat (more effective, hopefully). In other words, to give everyone a genuine choice and a voice in the direction of travel that technologists and technology are heading. Improve, tighten and speed up the feedback loops that tell technologists where we as a society want to go.

    1. Ethics and technology get so much more complex when an environment with limited resources is thrown into the mix.
      For example, what remains of your autonomy when keeping Facebook and the phones and so on running causes massive pollution and ecological destruction?

      How does choosing to redirect a Facebook newsfeed affect strip mining in China, which poisons the groundwater to harvest the minerals for building Facebook’s servers?

      People are missing the big picture if they only talk about the ethics of end-consumer products.
      I am not saying we shouldn’t talk about ethics in end-consumer products, but we shouldn’t be under the illusion that that is all there is.

      Thank you for reading. Enjoy every moment of your day.

  2. Cactusneedle makes an important point. I would further point out that this otherwise good article continues to confuse the meaning of ‘technology’, as if it were only digital technology, as do many popular articles. Whereas ‘technology’ has a much wider meaning. Weaving, metallurgy, the steam engine and plastics are also key facets of ‘technology’, with which humanity has had positive as well as negative relationships. But certainly technology is not neutral. Heidegger wrote extensively about this issue.

  3. Michael,

    I share your chagrin that people seem to think there’s something new about the notion that technology isn’t neutral, or the realization that it can lead, dangerously, to unintended consequences. In fact, concerns about tech’s inherent potential for deleterious impacts go back not one but several centuries, most directly to the early days of the Industrial Revolution and in the broadest sense as far back as the ancient Greeks. See Langdon Winner’s description of “the counter-tradition to the dream of mastery.” One of the reasons I find the writings of 19th-century writers like Thomas Carlyle, Ralph Waldo Emerson, Herman Melville and Henry Adams so compelling is that they lived at a time when they could see, from direct experience, what was being lost. They were ignored, of course, just as later prophets (Lewis Mumford, Jacques Ellul, Wendell Berry, and Václav Havel as well as Winner come to mind) have been ignored ever since. Technological enthusiasts seem congenitally uninterested in history. (I elaborate on these points in a recent blog post, “Silicon Valley’s Morning After”: http://thequestionconcerningtechnology.blogspot.com/2017/10/silicon-valleys-morning-after.html)

    Thank you, also, Michael, for noting Winner’s other point, that “modern politics simply does not provide appropriate roles and institutions in which the goal of defining the common good in technology policy is a legitimate project.” This is a crucial and dispiriting reality that contributes to a condition I call, in my book, “de facto technological autonomy.” Citizen apathy, fed by the proliferation of technologies of entertainment and propaganda, and corporate compromise of the political process have led not only to a lack of will but even a lack of interest in monitoring, not to mention challenging, the tech powers that be, both on the part of regulators and their constituents. (See my Boston Globe piece on technological assessment: https://www.bostonglobe.com/ideas/2016/08/20/staying-ahead-technology-curves/8PNs2JwSANX0G4WmByJJiI/story.html)

    Will the current wave of disillusionment with Silicon Valley spur a shift toward more engagement? The Congressional hearings regarding Russia’s use of Facebook, Twitter, and Google suggest it’s possible, but it’s too soon to tell. In any event, thanks as always for your insights and perspective on these vital questions.

  4. I really enjoyed reading about this. I’ve been reading articles lately about Stephen Hawking’s opinion on technology, and he believes that contemporary technology is the worst thing that has happened to mankind. He describes the growth of technology and AI as dangerous to society. I recommend looking into Hawking’s views if you enjoyed this article.

  5. “Contemporary technology undermines the communal and political structures that might sustain an ethical vision capable of directing and channeling the development of technology (creative destruction and what not). And, consequently, it thrives all the more because these structures are weakened.” Nicely put!

  6. It’s simply a fact that once you’re compromised and your files are out there, there is no telling what can become of them. It’s only their public use by a big franchise that could be argued and dragged into a court of law.
    Once technology checks into the building, ethics et al. leaves through the window.

  8. Fine, but study Libby, Montana, and W. R. Grace as a rebuttal to your post. Then get back after you have digested the ultimate ethical horror and the fact that corporations aren’t ethical. And that’s low-level technology, which as a test case suffices to show, at scale, how technology and ethics aren’t compatible. Bitcoin harvesting is, however, ethical, given the use of power variation to mine a cryptocurrency.

    I don’t respond to much here on WordPress but had to say something here.

    Thanks for posting and it gets my blood boiling.

  9. There’s a big difference between technology in general and technology that is specifically designed to advance unethical purposes. The most “ethical” technological innovation can always be used for unethical purposes in the wrong hands. So the real issue is the tendency of human beings to use whatever is at hand to further themselves at the expense of someone else.

  10. I appreciate the historical perspective and thoughtful analysis your well-articulated essay provides; thank you. And I was struck organically by the parallel to pro-gun arguments in your discussion of technology’s supposed ethical neutrality. Sadly, “the common good” seems outdated — if not flatly irrelevant — in modern American culture.

  11. Ethical technology doesn’t sound erroneous to us. If we can replicate and apply human intelligence in A.I., and it is clearly on the same level as us, if not smarter, I don’t see why we should not respect it. After all, we are all made of the same chemical elements found on planet Earth. Check out my blog for my view on A.I.
