One Does Not Simply Add Ethics To Technology

In a Twitter thread that has been retweeted over 17,000 times to date, the actor Kumail Nanjiani took the tech industry to task for its apparent indifference to the ethical consequences of its work.

Nanjiani stars in the HBO series Silicon Valley and, as part of his research for the role, he spends a good deal of time attending tech conferences and visiting tech companies. When he brings up possible ethical concerns, he realizes “that ZERO consideration seems to be given to the ethical implications of tech.” “They don’t even have a pat rehearsed answer,” Nanjiani adds, “They are shocked at being asked. Which means nobody is asking those questions.” Read the whole thread. It ends on this cheery note: “You can’t put this stuff back in the box. Once it’s out there, it’s out there. And there are no guardians. It’s terrifying. The end.”

Nanjiani’s thread appears to have struck a nerve. It was praised by many of the folks I follow on Twitter, and rightly so. Yes, he’s an actor, not a philosopher, historian, or sociologist, etc., but there’s much to commend in his observations and warnings.

But here’s what Nanjiani may not know: we had, in fact, been warned. Nanjiani believes that “nobody is asking those questions,” questions about technology’s ethical consequences, but this is far from the truth. Technology critics have been warning us for a very long time about the disorders and challenges, ethical and otherwise, that attend contemporary technology. In 1977, for example, Langdon Winner wrote the following:

Different ideas of social and political life entail different technologies for their realization. One can create systems of production, energy, transportation, information handling, and so forth that are compatible with the growth of autonomous, self-determining individuals in a democratic polity. Or one can build, perhaps unwittingly, technical forms that are incompatible with this end and then wonder how things went strangely wrong. The possibilities for matching political ideas with technological configurations appropriate to them are, it would seem, almost endless. If, for example, some perverse spirit set out deliberately to design a collection of systems to increase the general feeling of powerlessness, enhance the prospects for the dominance of technical elites, create the belief that politics is nothing more than a remote spectacle to be experienced vicariously, and thereby diminish the chance that anyone would take democratic citizenship seriously, what better plan to suggest than that we simply keep the systems we already have?

It would not take very much time or effort to find similar expressions of critical concern about technology’s social and moral consequences from a wide array of writers, critics, historians, philosophers, sociologists, political theorists, etc. dating back at least a century.

My first response to Nanjiani’s thread is thus mild irritation, bemusement really, at how novel and daring his comments appear when, in fact, so many have been saying as much for so long, more trenchantly and at greater length.

Beyond this, however, there are a few other points worth noting.

First, we are, as a society, deeply invested in the belief that technology is ethically neutral if not, in fact, an unalloyed good. There are complex and longstanding reasons for this, which, in my view, involve both the history of politics and of religion in western society over the last few centuries. Crudely put, we have invested an immense measure of hope in technology and in order for these hopes to be realized it must be assumed that technology is ethically neutral or unfailingly beneficent. For example, if technology, in the form of Big Data driven algorithmic processes, is to function as arbiter of truth, it can do so only to the degree that we perceive these processes to be neutral and above the biases and frailties that plague human reasoning.

Second, the tech industry is deeply invested in the belief that technology is ethically neutral. If technology is ethically neutral, then those who design, market, and manufacture technology cannot be held responsible for the consequences of their work. Moreover, we are, as consumers, more likely to adopt new technologies if we are wholly untroubled by ethical considerations. If it occurred to us that every device we buy was a morally fraught artifact, we might be more circumspect about what we purchase and adopt.

Third, it’s not as easy as saying we should throw some ethics at our technology. One should immediately wonder whose ethics are in view. We should not forget that ours is an ethically diverse society, and simply noting that technology is ethically fraught does not immediately resolve the question of whose ethical vision should guide the design, development, and deployment of new technology. Indeed, this is one of the reasons we are invested in the myth of technology’s neutrality in the first place: it promises an escape from the messiness of living with competing ethical frameworks and accounts of human flourishing.


Fourth, in seeking to apply ethics to technology we would not be entering into a void. In Autonomous Technology, Langdon Winner observed that “while positive, utopian principles and proposals can be advanced, the real field is already taken. There are, one must admit, technologies already in existence—apparatus occupying space, techniques shaping human consciousness and behavior, organizations giving pattern to the activities of the whole society.”

Likewise, when we seek to apply ethics to technology, we must recognize that the field is already taken. Not only are particular artifacts and devices not ethically neutral, they also partake of a pattern that informs the broader technological project. Technology is not neutral and, in its contemporary manifestations, it embodies a positive ethic. It is unfashionable to say as much, but it seems no less true to me. I am here thinking of something like what Jacques Ellul called la technique or what Albert Borgmann called the device paradigm. The principles of this overarching but implicit ethic embodied by contemporary technology include axioms such as “faster is always better,” “efficiency is always good,” “reducing complexity is always desirable,” “means are always indifferent and interchangeable.”

Fifth, the very idea of a free-floating, abstract system of ethics that can simply be applied to technology is itself misleading and a symptom of the problem. Ethics are sustained within communities whose moral visions are shaped by narratives and practices. As Langdon Winner has argued, drawing on the work of Alasdair MacIntyre, “debates about technology policy confirm MacIntyre’s argument that modern societies lack the kinds of coherent social practice that might provide firm foundations for moral judgments and public policies.” “[T]he trouble,” Winner adds, “is not that we lack good arguments and theories, but rather that modern politics simply does not provide appropriate roles and institutions in which the goal of defining the common good in technology policy is a legitimate project.”

Contemporary technology undermines the communal and political structures that might sustain an ethical vision capable of directing and channeling the development of technology (creative destruction and whatnot). And, consequently, it thrives all the more because these structures are weakened. Indeed, alongside Ellul’s la technique and Borgmann’s device paradigm, we might add another pattern that characterizes contemporary technology: the design of contemporary technology is characterized by a tendency to veil or obscure its ethical ramifications. We can call it, with a nod to Borgmann, the ethical neutrality paradigm: contemporary technologies are becoming more ethically consequential while their design all the more successfully obscures their ethical import.

I do not mean to suggest that it is futile to think ethically about technology. That’s been more or less what I’ve been trying to do for the past seven years. But under these circumstances, what can be done? I have no obvious solutions. It would be helpful, though, if designers worked to foreground rather than veil the ethical consequences of their tools. That may be, in fact, the best we can hope for at present: technology that resists the ethical neutrality paradigm, yielding moral agency back to the user or, at least, bringing the moral valence of its use, distributed and mediated as it may be, more clearly into view.

You can subscribe to my weekly newsletter, The Convivial Society, here.

The Meaning of Luddism

In his recent book about the future of technology, Tim O’Reilly, sometimes called the Oracle of Silicon Valley, faults the Luddites for a failure of imagination. According to O’Reilly, they did not imagine

… that their descendants would have more clothing than the kings and queens of Europe, that ordinary people would eat the fruits of summer in the depths of winter. They couldn’t imagine that we’d tunnel through mountains and under the sea, that we’d fly through the air, crossing continents in hours, that we’d build cities in the desert with buildings a half mile high, that we’d stand on the moon and put spacecraft in orbit around distant planets.…

Of course, O’Reilly doesn’t care about the Luddites in their historical particularity, as actual human beings who lived and suffered. The Luddites are merely a placeholder for an idea: that opponents of technological “progress” are ridiculous, misguided, and doomed. Never mind that the Luddites were not opposed to new technology, only to the disempowering and inequitable deployment of new technology.

In a fine critical review of O’Reilly’s book, Molly Sauter offers this bracing rejoinder to the contemporary application of this logic:

If you’ve lost your job, and can’t find another one, or were never able to find steady full time employment in the first place between automation, outsourcing, and strings of financial meltdowns, Tim O’Reilly wants you to know you shouldn’t be mad. If you’ve been driven into the exploitative arms of the gig economy because the jobs you have been able to find don’t pay a living wage, Tim O’Reilly wants you to know this is a great opportunity. If ever you find yourself being evicted from an apartment you can’t afford because Airbnb has fatally distorted the rental economy in your city, wondering how you’ll pay for the health care you need and the food you need and the student loans you carry with your miscellaneous collection of gigs and jobs and plasma donations, feeling like you’re part of a generational sacrifice zone, Tim O’Reilly wants you to know that it will be worth it, someday, for someone, a long time from now, somewhere in the future.

This is exactly right. There is a certain moral tone-deafness to O’Reilly’s rhetoric. He imagines that a family faced with destitution would bear up happily if only they knew that their suffering was a necessary step toward a future of technological marvels. Your family may not be able to put food on the table, but, not to worry, somewhere down the line, a man will walk on the moon.

In fact, it would seem that O’Reilly would fault them not only for failing to stoically bear their role as the stepping stones of progress but for not celebrating while they were being trampled on.

There is a cold, calculating utilitarianism at work here. Consequently, the enduring meaning of the Luddites may best be captured in Ursula Le Guin’s short story, “The Ones Who Walk Away from Omelas.” The people of Omelas are prosperous and happy beyond our wildest dreams, but, when they come of age, they are each let in on a secret: the city’s happiness depends on the suffering of one lone child who is kept in perpetual squalor and isolation. Upon discovering this fact about their glittering city, most overcome their initial horror and settle back into the enjoyments the city provides. There are a few, however, who walk away. They forsake their happiness because they can no longer live with the knowledge of the price at which it is purchased.

“The place they go towards is a place even less imaginable to most of us than the city of happiness,” the narrator concludes. “I cannot describe it at all. It is possible it does not exist. But they seem to know where they are going, the ones who walk away from Omelas.”

The point is a simple one: the story of technological progress is often told at the expense of those who have no share in that progress or whose prosperity and well-being were sacrificed for its sake. This is true of individuals, institutions, communities, whole peoples, and swaths of the non-human world.

Here, then, is the meaning of Luddism: the Luddites are a sign to us of the often hidden costs of our prosperity. Perhaps this is why they are the objects of our willful misunderstanding and ridicule. Better to heap scorn upon the dead than reckon with our own failures.

In truth then, the failure of imagination is ours, not theirs. It is we who have not been able to imagine a more just society in which technological progress is directed toward human flourishing and its costs, such as they must be, are more equitably distributed.



The blog Librarian Shipwreck has published a number of thoughtful posts on Luddism, its history and contemporary significance. They are collected here. I encourage you to not only read these posts, but to also follow the blog.


You can sign up for my newsletter, The Convivial Society, here.

The Paradoxes of Digitally Mediated Parenting

The novelist Karen Russell recently reflected on her experience as a new parent with a baby monitor, one that streams footage directly to a smartphone app and sends notifications whenever it registers certain kinds of movement in the room.

“I’ve become addicted to live-streaming plotless footage of our baby,” Russell admits, but her brief, poetic essay ends with a reflection on the limitations of such pervasive surveillance and documentation:

“Children vanish without dying,” Joy Williams wrote. Every time the app refreshes and shows an empty crib, I feel a stab of surprise. Children do endure in space and time, but they’re always changing, and no camera is sensitive enough to record the uncanny speed at which this transformation happens. Already the baby has doubled in size. “A slow-motion instant,” a friend and veteran parent told me, describing how the years would now pass. A camera is a tool that spools up time, but of course it cannot stop it.

This paragraph reveals a paradox at the heart of our obsessive documentation. When our documentation is motivated, as it so often is, by a rebellion against the unremitting passage of time, it will only accelerate the rate at which we experience time’s passing. If a camera spools up time without stopping it, then those same spools of time, the moments we have captured and hoarded, heighten our awareness of time’s ephemeral and fleeting nature.

It is striking, upon reflection, that we use the word capture to describe what we think we are doing when we visually document a moment. We seek to capture these moments, these experiences, or, even more straightforwardly, these memories, as if we wanted to bypass the experience altogether and pass directly to its recollection. That capturing is what we think we are doing discloses our apprehension of time as something wild and unruly. We seek to master time, but it refuses to be domesticated.

Upon further reflection, it is also striking that it is a moment and not, say, a scene that we think we are capturing. That time and not space is the default category by which we understand what an image records suggests the true nature of the desires driving so much of our documentation. This is why we can never satisfactorily recreate a photograph. It was never the external and physical facts that we sought to document in the first place. It was more like a river of commingled sensation and emotion, one into which we can most certainly never step twice.

Concluding her essay, Russell writes, “When I’m unable to sleep, I can watch our baby. I am watching right now. I can see the bottoms of his feet and count his ten toes, virtually ruffle the pale cap of his hair. He is breathing, I am almost certain. Go in and check, says the Dark Voice of Unreason. Go in and touch him.”

And with these words, a second paradox emerges: we monitor in order to relieve anxiety, but our anxiety is heightened by our monitoring. Put another way, we will grow anxious about whatever we are able to monitor. We will monitor all the more insistently to relieve this anxiety, and our anxiety will intensify in turn.

There is nothing new about the anxieties parents feel when it comes to the security of their children, of course. And I suspect most parents have always felt a bittersweet joy in watching their children grow: saddened by the loss of all their child has been but gladdened by who they are becoming. But I do wonder whether or not these experiences are heightened and amplified by the very tools we deploy to overcome them.

Since Bacon’s day at least, we have turned to technology for “the relief of man’s estate.” At what point, however, does the reasonable desire to alleviate human suffering morph into a misguided quest to escape the human condition altogether? Finding whatever joy and contentment we may reasonably aspire to in this life seems to depend on our answer to this question (at least, of course, in affluent and prosperous societies).

It is, to be sure, a line that is difficult to perceive and wisely navigate. But when our techniques yield only a heightened experience of the very disorders we seek to ameliorate, we may justly wonder whether we have crossed it.


This post is part of a series on being a parent in the digital age.

Democracy and Technology

Alexis Madrigal has written a long and thoughtful piece on Facebook’s role in the last election. He calls the emergence of social media, Facebook especially, “the most significant shift in the technology of politics since the television.” Madrigal is pointed in his estimation of the situation as it now consequently stands.

Early on, describing the widespread (but not total) failure to understand the effect Facebook could have on an election, Madrigal writes, “The informational underpinnings of democracy have eroded, and no one has explained precisely how.”

Near the end of the piece, he concludes, “The point is that the very roots of the electoral system—the news people see, the events they think happened, the information they digest—had been destabilized.”

Madrigal’s piece brought to mind, not surprisingly, two important observations by Neil Postman that I’ve cited before.

My argument is limited to saying that a major new medium changes the structure of discourse; it does so by encouraging certain uses of the intellect, by favoring certain definitions of intelligence and wisdom, and by demanding a certain kind of content–in a phrase, by creating new forms of truth-telling.

Also:

Surrounding every technology are institutions whose organization–not to mention their reason for being–reflects the world-view promoted by the technology. Therefore, when an old technology is assaulted by a new one, institutions are threatened. When institutions are threatened, a culture finds itself in crisis.

In these two passages, I find the crux of Postman’s enduring insights, the insights, more generally, of the media ecology school of tech criticism. It seems to me that this is more or less where we are: a culture in crisis, as Madrigal’s comments suggest. Read what he has to say.

On Twitter, replying to a tweet from Christopher Mims endorsing Madrigal’s work, Zeynep Tufekci took issue with Madrigal’s framing. Madrigal, in fact, cited Tufekci as one of the few people who understood a good deal of what was happening and, indeed, saw it coming years ago. But Tufekci nonetheless challenged Madrigal’s point of departure, which is that the entirety of Facebook’s role caught nearly everyone by surprise and couldn’t have been foreseen.

Tufekci has done excellent work exploring the political consequences of Big Data, algorithms, etc. This 2014 article, for example, is superb. But in reading Tufekci’s complaint that her work and the work of many other academics was basically ignored, my first thought was that the similarly prescient work of technology critics has been more or less ignored for much longer. I’m thinking of Mumford, Jaspers, Ellul, Jonas, Grant, Winner, Mander, Postman and a host of others. They have been dismissed as too pessimistic, too gloomy, too conservative, too radical, too broad in their criticism and too narrow, as Luddites and reactionaries, etc. Yet here we are.

In a 1992 article about democracy and technology, Ellul wrote, “In my view, our Western political institutions are no longer in any sense democratic. We see the concept of democracy called into question by the manipulation of the media, the falsification of political discourse, and the establishment of a political class that, in all countries where it is found, simply negates democracy.”

Writing in the same special issue of the journal Philosophy and Technology edited by Langdon Winner, Albert Borgmann wrote, “Modern technology is the acknowledged ruler of the advanced industrial democracies. Its rule is not absolute. It rests on the complicity of its subjects, the citizens of the democracies. Emancipation from this complicity requires first of all an explicit and shared consideration of the rule of technology.”

It is precisely such an “explicit and shared consideration of the rule of technology” that we have failed to seriously undertake. Again, Tufekci and her colleagues are hardly the first to have their warnings, measured, cogent, urgent as they may be, ignored.

Roger Berkowitz of the Hannah Arendt Center for Politics and the Humanities recently drew attention to a commencement speech given by John F. Kennedy at Yale in 1962. Kennedy noted the many questions that America had faced throughout her history, from slavery to the New Deal. These were questions “on which the Nation was sharply and emotionally divided.” But now, Kennedy believed, we were ready to move on:

Today these old sweeping issues very largely have disappeared. The central domestic issues of our time are more subtle and less simple. They relate not to basic clashes of philosophy or ideology but to ways and means of reaching common goals — to research for sophisticated solutions to complex and obstinate issues.

These issues were “administrative and executive” in nature. They were issues “for which technical answers, not political answers, must be provided,” Kennedy concluded. You should read the rest of Berkowitz’s reflections on the prejudices exposed by our current crisis, but I want to take Kennedy’s technocratic faith as a point of departure for some observations.

Kennedy’s faith in the technocratic management of society was just the latest iteration of modernity’s political project, the quest for a neutral and rational mode of politics for a pluralistic society.

I will put it this way: liberal democracy is a “machine” for the adjudication of political differences and conflicts, independently of any faith, creed, or otherwise substantive account of the human good.

It was machine-like in its promised objectivity and efficiency. But, of course, it would work only to the degree that it generated the subjects it required for its own operation. (A characteristic it shares with all machines.) Human beings have been, on this score, rather recalcitrant, much to the chagrin of the administrators of the machine.

Kennedy’s own hopes were just a renewed version of this vision, only they had become more explicitly a-political and technocratic in nature. It was not enough that citizens check certain aspects of their person at the door to the public sphere, now it would seem that citizens would do well to entrust the political order to experts, engineers, and technicians.

Leo Marx recounts an important part of this story, unfolding from the 19th through the early 20th century, in an article accounting for what he calls “postmodern pessimism” about technology. Marx outlines how “the simple [small r] republican formula for generating progress by directing improved technical means to societal ends was imperceptibly transformed into a quite different technocratic commitment to improving ‘technology’ as the basis and the measure of — as all but constituting — the progress of society.” I would also include the emergence of bureaucratic and scientific management in the telling of this story.

Presently we are witnessing a further elaboration of this same project along the same trajectory. It is the rise of governance by algorithm, a further, apparent distancing of the human from the political. I say apparent because, of course, the human is never fully out of the picture; we just create more elaborate technical illusions to mask the irreducibly human element. We buy into these illusions, in part, because of the initial trajectory set for the liberal democratic order, that of machine-like objectivity, rationality, and efficiency. It is on this ideal that Western society staked its hopes for peace and prosperity. At every turn, when the human element, in its complexity and messiness, broke through the facade, we doubled down on the ideal rather than question the premises. Initially, at least, the idea was that the “machine” would facilitate the deliberation of citizens by establishing rules and procedures to govern their engagement. When it became apparent that this would no longer work, we explicitly turned to technique as the common frame by which we would proceed. Now that technique has failed because again the human manifested itself, we overtly turn to machines.

This new digital technocracy takes two, seemingly paradoxical paths. One of these paths is the increasing reliance on Big Data and computing power in the actual work of governing. The other, however, is the deployment of these same tools for the manipulation of the governed. It is darkly ironic that this latter deployment of digital technology is intended to agitate the very passions liberal democracy was initially advanced to suppress (at least according to the story liberal democracy tells about itself). It is as if, having given up on the possibility of reasonable political discourse and deliberation within a pluralistic society, those with the means to control the new apparatus of government have simply decided to manipulate those recalcitrant elements of human nature to their own ends.

It is this latter path that Madrigal and Tufekci have done their best to elucidate. However, my rambling contention here is that the full significance of our moment is only intelligible within a much broader account of the relationship between technology and democracy. It is also my contention that we will remain blind to the true nature of our situation so long as we are unwilling to submit our technology to the kind of searching critique Borgmann advocated and Ellul thought hardly possible. But we are likely too invested in the promise of technology and too deeply compromised in our habits and thinking to undertake such a critique.