The Ethics of Technological Mediation

Where do we look when we’re looking for the ethical implications of technology? A few would say that we look at the technological artifact itself. Many more would counter that the only place to look for matters of ethical concern is to the human subject. The philosopher of technology Peter-Paul Verbeek argues that there is another, perhaps more important place for us to look: the point of mediation, the point where the artifact and human subjectivity come together to create effects that cannot be located in either the artifact or the subject taken alone.

Early on in Moralizing Technology: Understanding and Designing the Morality of Things (2011), Verbeek briefly outlines the emergence of the field known as “ethics of technology.” “In its early days,” Verbeek notes, “ethical approaches to technology took the form of critique. Rather than addressing specific ethical problems related to actual technological developments, ethical reflection on technology focused on criticizing the phenomenon of ‘Technology’ itself.” Here we might think of Heidegger, critical theory, or Jacques Ellul. In time, “ethics of technology” emerged “seeking increased understanding of and contact with actual technological practices and developments,” and soon a host of sub-fields appeared: biomedical ethics, ethics of information technology, ethics of nanotechnology, engineering ethics, ethics of design, etc.

This approach remains, according to Verbeek, “merely instrumentalist.” “The central focus of ethics,” on this view, “is to make sure that technology does not have detrimental effects in the human realm and that human beings control the technological realm in morally justifiable ways.” It’s not that these considerations are unimportant, quite the contrary, but Verbeek believes that this approach “does not yet go far enough.”

Verbeek explains the problem:

“What remains out of sight in this externalist approach is the fundamental intertwining of these two domains [the human and the technological]. The two simply cannot be separated. Humans are technological beings, just as technologies are social entities. Technologies, after all, play a constitutive role in our daily lives. They help to shape our actions and experiences, they inform our moral decisions, and they affect the quality of our lives. When technologies are used, they inevitably help to shape the context in which they function. They help specific relations between human beings and reality to come about and coshape new practices and ways of living.”

Observing that technologies mediate both perception (how we register the world) and action (how we act into the world), Verbeek elaborates a theory of technological mediation, built upon a postphenomenological approach to technology pioneered by Don Ihde. Rather than focus exclusively on either the artifact “out there,” the technological object, or the will “in here,” the human subject, Verbeek invites us to focus ethical attention on the constitution of both the perceived object and the subject’s intention in the act of technological mediation. In other words, how technology shapes perception and action is also of ethical consequence.

As Verbeek rightly insists, “Artifacts are morally charged; they mediate moral decisions, shape moral subjects, and play an important role in moral agency.”

Verbeek turns to the work of Ihde for some analytic tools and categories. Among the many ways humans might relate to technology, Ihde notes two relations of “mediation.” The first of these he calls “embodiment relations,” in which a tool is incorporated by the user and the world is experienced through it (think of the blind man’s stick). The second he calls a “hermeneutic relation.” Verbeek explains:

“In this relation, technologies provide access to reality not because they are ‘incorporated,’ but because they provide a representation of reality, which requires interpretation […] Ihde shows that technologies, when mediating our sensory relationship with reality, transform what we perceive. According to Ihde, the transformation of perception always has the structure of amplification and reduction.”

Verbeek gives us the example of looking at a tree through an infrared camera: most of what we see when we look at a tree unaided is “reduced,” but the heat signature of the tree is “amplified” and the tree’s health may be better assessed. Ihde calls this capacity of a tool to transform our perception “technological intentionality.” In other words, the technology directs and guides our perception and our attention. It says to us, “Look at this here, not that over there” or “Look at this thing in this way.” This function is not morally irrelevant, especially when you consider that this effect is not contained within the tool itself but spills out into our experience of the world.

Verbeek also believes that our reflection on the moral consequences of technology would do well to take virtue ethics seriously. With regards to the ethics of technology, we typically ask, “What should I or should I not do with this technology?” and thus focus our attention on our actions. In this, we follow the lead of the two dominant modern ethical traditions: the deontological tradition stemming from Immanuel Kant, on the one hand, and the consequentialist tradition, closely associated with Bentham and Mill, on the other. In the case of both traditions, a particular sort of moral subject or person is in view—an autonomous and rational individual who acts freely and in accord with the dictates of reason.

In the Kantian tradition, the individual, having decided upon the right course of action through the right use of their reason, is duty bound to act thusly, regardless of consequences. In the consequentialist tradition, the individual rationally calculates which action will yield the greatest degree of happiness, variously understood, and acts accordingly.

If technology comes into play in such reasoning by such a person, it is strictly as an instrument of the individual will. The question, again, is simply, “What should I do or not do with it?” We ascertain the answer either by determining the dictates of subjective reasoning or by calculating the objective consequences of an action; the latter approach is perhaps more appealing for its resonance with the ethos of technique.

We might conclude, then, that the popular instrumentalist view of technology—a view which takes technology to be a mere tool, a morally neutral instrument of a sovereign will—is the natural posture of the sort of individual or moral subject that modernity yields. It is unlikely to occur to such an individual that technology is not only a tool with which moral and immoral actions are performed but also an instrument of moral formation, informing and shaping the moral subject.

It is not that the instrumentalist posture is of no value, of course. On the contrary, it raises important questions that ought to be considered and investigated. The problem is that this approach is incomplete and too easily co-opted by the very realities that it seeks to judge. It is, on its own, ultimately inadequate to the task because it takes as its starting point an inadequate and incomplete understanding of the human person.

There is, however, another, older approach to ethics that may help us fill out the picture and take into account other important aspects of our relation to technology: the tradition of virtue ethics in both its classical and medieval manifestations.

Verbeek comments on some of the advantages of virtue ethics. To begin with, virtue ethics does not ask, “What am I to do?” Rather, it asks, in Verbeek’s formulation, “What is the good life?” We might also add a related question that virtue ethics raises: “What sort of person do I want to be?” This is a question that Verbeek also considers, taking his cues from the later work of Michel Foucault.

The question of the good life, Verbeek adds,

“does not depart from a separation of subject and object but from the interwoven character of both. A good life, after all, is shaped not only on the basis of human decisions but also on the basis of the world in which it plays itself out (de Vries 1999). The way we live is determined not only by moral decision making but also by manifold practices that connect us to the material world in which we live. This makes ethics not a matter of isolated subjects but, rather, of connections between humans and the world in which they live.”

Virtue ethics, with its concern for habits, practices, and communities of moral formation, illuminates the various ways technologies impinge upon our moral lives. For example, a technologically mediated action that, taken on its own and in isolation, may be judged morally right or indifferent may appear in a different light when considered as one instance of a habit-forming practice that shapes our disposition and character.

Moreover, virtue ethics, which predates the advent of modernity, does not necessarily assume the sovereign individual as its point of departure. For this reason, it is more amenable to the ethics of technological mediation elaborated by Verbeek. Verbeek argues for “the distributed character of moral agency,” distributed, that is, among the subject and the various technological artifacts that mediate the subject’s perception of and action in the world.

At the very least, asking the sorts of questions raised within a virtue ethics framework fills out our picture of technology’s ethical consequences.

In Susanna Clarke’s delightful novel, Jonathan Strange & Mr. Norrell, a fantastical story cast in realist guise about two magicians recovering the lost tradition of English magic in the context of the Napoleonic Wars, one of the main characters, Strange, has the following exchange with the Duke of Wellington:

“Can a magician kill a man by magic?” Lord Wellington asked Strange. Strange frowned. He seemed to dislike the question. “I suppose a magician might,” he admitted, “but a gentleman never would.”

Strange’s response is instructive and the context of magic more apropos than might be apparent. Technology, like magic, empowers the will, and it raises the sort of question that Wellington asks: can such and such be done?

Not only does Strange’s response make the ethical dimension paramount, he approaches the ethical question as a virtue ethicist. He does not run consequentialist calculations nor does he query the deliberations of a supposedly universal reason. Rather, he frames the empowerment availed to him by magic with a consideration of the kind of person he aspires to be, and he subjects his will to this larger project of moral formation. In so doing, he gives us a good model for how we might think about the empowerments availed to us by technology.

As Verbeek, reflecting on the aptness of the word subject, puts it, “The moral subject is not an autonomous subject; rather, it is the outcome of active subjection.” It is, paradoxically, this kind of subjection that can ground the relative freedom with which we might relate to technology.


Most of this material originally appeared on the blog of the Center for the Study of Ethics and Technology. I repost it here in light of recent interest in the ethical consequences of technology. Verbeek’s work does not, it seems to me, get the attention it deserves.

Solitude and Loneliness

In her posthumously published The Life of the Mind, Hannah Arendt distinguished between solitude and loneliness. The former is the condition that makes thought possible; in the latter state, even the consolations of thinking are absent.

“… to be by myself and to have intercourse with myself is the outstanding characteristic of the life of the mind. The mind can be said to have a life of its own only to the extent that it actualizes this intercourse in which, existentially speaking, plurality is reduced to the duality already implied in the fact and the word ‘consciousness,’ or syneidenai–to know with myself. I call this existential state in which I keep myself company ‘solitude’ to distinguish it from ‘loneliness,’ where I am also alone but now deserted not only by human company but also by the possible company of myself.”

To be clear, Arendt understands thinking in a rather specific sense. For her, thinking is not mere problem solving or calculation or the pursuit of truth. It is rather the pursuit of meaning and the work of clearing the ground for the possibility of judgment.

That said, it would seem that in our desire to avoid loneliness we are eroding our capacity for solitude, and thus our ability to think.

The allure of our devices lies in the promise of connection. With smartphone in hand, I never have to be alone again. But in this constant connection we lose our taste and capacity for solitude. Moreover, we may find that connection does not necessarily alleviate loneliness. It does not alleviate loneliness because the devices and platforms that mediate connection are explicitly designed to keep us coming back to them. We will keep coming back to them only if we feel we need what they offer; we will keep coming back, that is, if we feel lonely. Furthermore, it is becoming ever more obvious that connection is like a drug we were offered, at no cost, of course, only to keep us coming back for more at ruinous cost to us and great profit to others.

The dark paradox, then, is this: the more we seek to alleviate our loneliness through digital connectivity, the more lonely we will feel. Along the way, we will forsake solitude as a matter of course. Curiously, it may not even be loneliness as a desire for companionship that the design of social media fosters in us. Rather, it is a counterfeit longing that is generated: for stimulation rather than companionship.

In the end, we will be left with the most profound loneliness: perpetually feeling a need for connection that we cannot satisfy and finding that we have not even our own company.

To recap: no abiding sense of companionship, no solitude, no place for thought.

____________________________________________

See also Nicholas Carr’s recent post, How smartphones hijack our minds.

The Dystopia Is Already Here

Science fiction writer William Gibson coined the phrase, “The future is already here — it’s just not very evenly distributed.” It’s a well-known and oft-repeated line.

I’m proposing a slight variation, or perhaps a corollary principle: The dystopia is already here — it’s just not very evenly distributed.

Consider these comments by Facebook’s founding president, Sean Parker: “It’s a social-validation feedback loop … exactly the kind of thing that a hacker like myself would come up with, because you’re exploiting a vulnerability in human psychology.” The aim of Facebook’s designers: “How do we consume as much of your time and conscious attention as possible?”

Or take a look at Zeynep Tufekci’s recent TED talk, “We’re building a dystopia just to make people click on ads.”

Then there’s this fine company, Dopamine Labs, which is developing an “automated, intelligent approach to hooking people on apps” with an AI agent aptly named Skinner.

Here is James Bridle’s long exploration of the weird and disturbing world of Kids YouTube. “This is a deeply dark time,” Bridle concludes, “in which the structures we have built to sustain ourselves are being used against us — all of us — in systematic and automated ways.” Another writer, looking at this same content, concluded, “We can’t predict what wider impact a medium that incentivizes factory line production of mindless visual slurry for kids’ consumption might have on children’s development and on society as a whole.”

And this article title would have seemed implausibly dystopian just a few years ago: Facebook is hiring 3,000 people to stop users from broadcasting murder and rape.

Meanwhile, Beijing is becoming a “frontline laboratory for surveillance,” setting the pace for 21st-century police states, and Facebook has found itself at the center of the brutal campaign against the Rohingya minority in Myanmar.

An early investor in Facebook and Google, doing penance, tells us that these two companies have “consciously combined persuasive techniques developed by propagandists and the gambling industry with technology in ways that threaten public health and democracy.” “Thanks to smartphones,” he adds, “the battle for attention now takes place on a single platform that is available every waking moment.”

So, I don’t know, you tell me?

Lest we think that we cannot be in a dystopia because we appear to be relatively free, prosperous, and safe, the final word goes to Neil Postman:

… we had forgotten that alongside Orwell’s dark vision, there was another – slightly older, slightly less well known, equally chilling: Aldous Huxley’s Brave New World. Contrary to common belief even among the educated, Huxley and Orwell did not prophesy the same thing. Orwell warns that we will be overcome by an externally imposed oppression. But in Huxley’s vision, no Big Brother is required to deprive people of their autonomy, maturity and history. As he saw it, people will come to love their oppression, to adore the technologies that undo their capacities to think.

What Orwell feared were those who would ban books. What Huxley feared was that there would be no reason to ban a book, for there would be no one who wanted to read one. Orwell feared those who would deprive us of information. Huxley feared those who would give us so much that we would be reduced to passivity and egoism. Orwell feared that the truth would be concealed from us. Huxley feared the truth would be drowned in a sea of irrelevance. Orwell feared we would become a captive culture. Huxley feared we would become a trivial culture …. As Huxley remarked in Brave New World Revisited, the civil libertarians and rationalists who are ever on the alert to oppose tyranny “failed to take into account man’s almost infinite appetite for distractions.” In 1984, Orwell added, people are controlled by inflicting pain. In Brave New World, they are controlled by inflicting pleasure. In short, Orwell feared that what we fear will ruin us. Huxley feared that what we desire will ruin us.

[I’ve decided to make this post an archive of sorts, so I’ll keep adding items as I come across them. Feel free to offer submissions in the comments.]

One Does Not Simply Add Ethics To Technology

In a Twitter thread that has been retweeted over 17,000 times to date, the actor Kumail Nanjiani took the tech industry to task for its apparent indifference to the ethical consequences of its work.

Nanjiani stars in the HBO series Silicon Valley and, as part of his research for the role, spends a good deal of time attending tech conferences and visiting tech companies. When he brings up possible ethical concerns, he realizes “that ZERO consideration seems to be given to the ethical implications of tech.” “They don’t even have a pat rehearsed answer,” Nanjiani adds. “They are shocked at being asked. Which means nobody is asking those questions.” Read the whole thread. It ends on this cheery note: “You can’t put this stuff back in the box. Once it’s out there, it’s out there. And there are no guardians. It’s terrifying. The end.”

Nanjiani’s thread appears to have struck a nerve. It was praised by many of the folks I follow on Twitter, and rightly so. Yes, he’s an actor, not a philosopher, historian, or sociologist, etc., but there’s much to commend in his observations and warnings.

But here’s what Nanjiani may not know: we had, in fact, been warned. Nanjiani believes that “nobody is asking those questions,” questions about technology’s ethical consequences, but this is far from the truth. Technology critics have been warning us for a very long time about the disorders and challenges, ethical and otherwise, that attend contemporary technology. In 1977, for example, Langdon Winner wrote the following:

Different ideas of social and political life entail different technologies for their realization. One can create systems of production, energy, transportation, information handling, and so forth that are compatible with the growth of autonomous, self-determining individuals in a democratic polity. Or one can build, perhaps unwittingly, technical forms that are incompatible with this end and then wonder how things went strangely wrong. The possibilities for matching political ideas with technological configurations appropriate to them are, it would seem, almost endless. If, for example, some perverse spirit set out deliberately to design a collection of systems to increase the general feeling of powerlessness, enhance the prospects for the dominance of technical elites, create the belief that politics is nothing more than a remote spectacle to be experienced vicariously, and thereby diminish the chance that anyone would take democratic citizenship seriously, what better plan to suggest than that we simply keep the systems we already have?

It would not take very much time or effort to find similar expressions of critical concern about technology’s social and moral consequences from a wide array of writers, critics, historians, philosophers, sociologists, political theorists, etc. dating back at least a century.

My first response to Nanjiani’s thread is thus mild irritation, bemusement really, at how novel and daring his comments appear when, in fact, so many have for so long been saying just as much, and saying it more trenchantly and at greater length.

Beyond this, however, there are a few other points worth noting.

First, we are, as a society, deeply invested in the belief that technology is ethically neutral if not, in fact, an unalloyed good. There are complex and longstanding reasons for this, which, in my view, involve both the history of politics and of religion in western society over the last few centuries. Crudely put, we have invested an immense measure of hope in technology, and in order for these hopes to be realized it must be assumed that technology is ethically neutral or unfailingly beneficent. For example, if technology, in the form of Big Data-driven algorithmic processes, is to function as arbiter of truth, it can do so only to the degree that we perceive these processes to be neutral and above the biases and frailties that plague human reasoning.

Second, the tech industry is deeply invested in the belief that technology is ethically neutral. If technology is ethically neutral, then those who design, market, and manufacture technology cannot be held responsible for the consequences of their work. Moreover, we are, as consumers, more likely to adopt new technologies if we are wholly untroubled by ethical considerations. If it occurred to us that every device we buy was a morally fraught artifact, we might be more circumspect about what we purchase and adopt.

Third, it’s not as easy as saying we should throw some ethics at our technology. One should immediately wonder: whose ethics are in view? We should not forget that ours is an ethically diverse society, and simply noting that technology is ethically fraught does not immediately resolve the question of whose ethical vision should guide the design, development, and deployment of new technology. Indeed, this is one of the reasons we are invested in the myth of technology’s neutrality in the first place: it promises an escape from the messiness of living with competing ethical frameworks and accounts of human flourishing.

Fourth, in seeking to apply ethics to technology we would not be entering into a void. In Autonomous Technology, Langdon Winner observed that “while positive, utopian principles and proposals can be advanced, the real field is already taken. There are, one must admit, technologies already in existence — apparatus occupying space, techniques shaping human consciousness and behavior, organizations giving pattern to the activities of the whole society.”

Likewise, when we seek to apply ethics to technology, we must recognize that the field is already taken. Not only are particular artifacts and devices not ethically neutral, they also partake of a pattern that informs the broader technological project. Technology is not neutral and, in its contemporary manifestations, it embodies a positive ethic. It is unfashionable to say as much, but it seems no less true to me. I am here thinking of something like what Jacques Ellul called la technique or what Albert Borgmann called the device paradigm. The principles of this overarching but implicit ethic embodied by contemporary technology include axioms such as “faster is always better,” “efficiency is always good,” “reducing complexity is always desirable,” “means are always indifferent and interchangeable.”

Fifth, the very idea of a free-floating, abstract system of ethics that can simply be applied to technology is itself misleading and a symptom of the problem. Ethics are sustained within communities whose moral visions are shaped by narratives and practices. As Langdon Winner has argued, drawing on the work of Alasdair MacIntyre, “debates about technology policy confirm MacIntyre’s argument that modern societies lack the kinds of coherent social practice that might provide firm foundations for moral judgments and public policies.” “[T]he trouble,” Winner adds, “is not that we lack good arguments and theories, but rather that modern politics simply does not provide appropriate roles and institutions in which the goal of defining the common good in technology policy is a legitimate project.”

Contemporary technology undermines the communal and political structures that might sustain an ethical vision capable of directing and channeling the development of technology (creative destruction and whatnot). Consequently, it thrives all the more because these structures are weakened. Indeed, alongside Ellul’s la technique and Borgmann’s device paradigm, we might name another pattern: the design of contemporary technology tends to veil or obscure its ethical ramifications. We can call it, with a nod to Borgmann, the ethical neutrality paradigm: contemporary technologies are becoming more ethically consequential while their design all the more successfully obscures their ethical import.

I do not mean to suggest that it is futile to think ethically about technology. That’s been more or less what I’ve been trying to do for the past seven years. But under these circumstances, what can be done? I have no obvious solutions. It would be helpful, though, if designers worked to foreground rather than veil the ethical consequences of their tools. That may be, in fact, the best we can hope for at present: technology that resists the ethical neutrality paradigm, yielding moral agency back to the user or, at least, bringing the moral valence of its use, distributed and mediated as it may be, more clearly into view.

The Meaning of Luddism

In his recent book about the future of technology, Tim O’Reilly, sometimes called the Oracle of Silicon Valley, faults the Luddites for a failure of imagination. According to O’Reilly, they did not imagine

… that their descendants would have more clothing than the kings and queens of Europe, that ordinary people would eat the fruits of summer in the depths of winter. They couldn’t imagine that we’d tunnel through mountains and under the sea, that we’d fly through the air, crossing continents in hours, that we’d build cities in the desert with buildings a half mile high, that we’d stand on the moon and put spacecraft in orbit around distant planets.…

Of course, O’Reilly doesn’t care about the Luddites in their historical particularity, as actual human beings who lived and suffered. The Luddites are merely a placeholder for an idea: that opponents of technological “progress” are ridiculous, misguided, and doomed. Never mind that the Luddites were not opposed to new technology, only to the disempowering and inequitable deployment of new technology.

In a fine critical review of O’Reilly’s book, Molly Sauter offers this bracing rejoinder to the contemporary application of this logic:

If you’ve lost your job, and can’t find another one, or were never able to find steady full time employment in the first place between automation, outsourcing, and strings of financial meltdowns, Tim O’Reilly wants you to know you shouldn’t be mad. If you’ve been driven into the exploitative arms of the gig economy because the jobs you have been able to find don’t pay a living wage, Tim O’Reilly wants you to know this is a great opportunity. If ever you find yourself being evicted from an apartment you can’t afford because Airbnb has fatally distorted the rental economy in your city, wondering how you’ll pay for the health care you need and the food you need and the student loans you carry with your miscellaneous collection of gigs and jobs and plasma donations, feeling like you’re part of a generational sacrifice zone, Tim O’Reilly wants you to know that it will be worth it, someday, for someone, a long time from now, somewhere in the future.

This is exactly right. There is a certain moral tone-deafness to O’Reilly’s rhetoric. He imagines that a family faced with destitution would bear up happily if only they knew that their suffering was a necessary step toward a future of technological marvels. Your family may not be able to put food on the table, but, not to worry, somewhere down the line, a man will walk on the moon.

In fact, it would seem as if O’Reilly would fault them not only for failing to stoically bear their role as the stepping stones of progress but for not celebrating while they were being trampled on.

There is a cold, calculating utilitarianism at work here. Consequently, the enduring meaning of the Luddites may best be captured in Ursula Le Guin’s short story, “The Ones Who Walk Away from Omelas.” The people of Omelas are prosperous and happy beyond our wildest dreams, but, when they come of age, they are each let in on a secret: the city’s happiness depends on the suffering of one lone child who is kept in perpetual squalor and isolation. Upon discovering this fact about their glittering city, most overcome their initial horror and settle back into the enjoyments the city provides. There are a few, however, who walk away. They forsake their happiness because they can no longer live with the knowledge of the price at which it is purchased.

“The place they go towards is a place even less imaginable to most of us than the city of happiness,” the narrator concludes. “I cannot describe it at all. It is possible it does not exist. But they seem to know where they are going, the ones who walk away from Omelas.”

The point is a simple one: the story of technological progress is often told at the expense of those who have no share in that progress or whose prosperity and well-being were sacrificed for its sake. This is true of individuals, institutions, communities, whole peoples, and swaths of the non-human world.

Here, then, is the meaning of Luddism: the Luddites are a sign to us of the often hidden costs of our prosperity. Perhaps this is why they are the objects of our willful misunderstanding and ridicule. Better to heap scorn upon the dead than reckon with our own failures.

In truth then, the failure of imagination is ours, not theirs. It is we who have not been able to imagine a more just society in which technological progress is directed toward human flourishing and its costs, such as they must be, are more equitably distributed.


The blog Librarian Shipwreck has published a number of thoughtful posts on Luddism, its history, and its contemporary significance. They are collected here. I encourage you not only to read these posts but also to follow the blog.