Hot Off the Digital Presses: A New Collection

Ethics of technology is suddenly a rather hot topic, and, as most of you reading this know, I’m a bit ambivalent about the development. Consider, for example, the attention garnered by Mark Zuckerberg’s testimony before Congress last week. The whole affair was widely anticipated, watched, and commented upon. Yet, it is likely that all of this attention will not ultimately amount to much of consequence. It was a blip on most people’s radar, and, even for those who cared, the moment has passed and the urgency of the immediate will direct our attention fleetingly elsewhere.

This is not to say that we ought to be fatalistic or despairing. It only means that thinking about technology’s moral and political implications, not to mention taking meaningful action based on that thinking, is not exactly an easy or straightforward affair. The roots of the problems we face often run much deeper than most of us, myself included, realize. And, if I may be permitted to stretch the metaphor a bit, when we do start to get at those roots, we discover a vast network of roots that extend farther and wider and feed more of our culture than we imagined.

Over the past few years, I’ve been attempting to do a little work in the direction of helping us see more clearly how the technological touches on, shapes, and otherwise relates to the moral and the political. Most of this work, persisting in my metaphor, I tend to think of as a modest bit of ground clearing that exposes the roots and at least helps us to see our situation a little more clearly.

Given the current interest in such matters, I’ve decided to collect into an eBook some of what I’ve written over the years, going back to 2011, on the intersections of technology, ethics, and politics. It is a rather slim volume, about 70 pages were it printed. You can pick it up at Amazon should you so desire: Do Artifacts Have Ethics?: Technology, Politics, and the Moral Life. I think it can be a useful collection of pieces that will at least spur some important questions.

If you were to pass along the link or take a moment to write a review, I would be grateful.


[Update: Some of you inquired about alternatives to buying through Amazon. Below is one option via PayPal. For those who, understandably, would like to avoid both Amazon and PayPal, you may purchase a copy via Gumroad.]




Do Artifacts Have Ethics?: Technology, Politics, and the Moral Life


When “Maximizing Freedom” Becomes A Form of Bondage

In his brief but insightful book, Nature and Altering It, ethicist Allen Verhey discusses a series of myths that underlie our understanding of nature (at the outset of the book he cataloged 16 uses of the idea of “nature”). While discussing one of these myths, the myth of the project of liberal society, Verhey writes,

“Finally, however, the folly of the myth of liberal society is displayed in the pretense that ‘maximizing freedom’ is always morally innocent. ‘Maximizing freedom,’ however, can ironically increase our bondage. What is introduced as a way to increase our options can become socially enforced. The point can easily be illustrated with technology. New technologies are frequently introduced as ways to increase our options, as ways to maximize our freedom, but they can become socially enforced. The automobile was introduced as an option, as an alternative to the horse, but it is now socially enforced …. The technology that surrounds our dying was introduced to give doctors and patients options in the face of disease and death, but such ‘options’ have become socially enforced; at least one sometimes still hears, “We have no choice!” And the technology that may come to surround birth, including pre-natal diagnosis, for example, may come to be socially enforced. ‘What? You knew you were at risk for bearing a child with XYZ, and you did nothing about it? And now you expect help with this child?’ Now it is possible, of course, to claim that cars and CPR and pre-natal diagnosis are the path of progress, but then the argument has shifted from the celebration of options and the maximizing of freedom to something else, to the meaning of progress.”

It should not be hard to multiply examples.

It is worth noting, as well, that it is in his discussion of the myth associated with the project of liberal society that Verhey draws on examples from the realm of technology. The two phenomena are deeply intertwined as I’ve suggested a time or two.

Beyond the Trolley Car: The Moral Pedagogy of Ethical Tools

It is almost impossible to read about the ethics of autonomous vehicles without encountering some version of the trolley car problem. You’re familiar with the general outline of the problem, I’m sure. An out-of-control trolley car is barreling toward five unsuspecting people on a track. You are able to pull a lever and redirect the trolley toward another track, but there is one person on this track who will be hit by the trolley as a result. What do you do? Nothing and let five people die, or pull the lever and save five at the expense of one person’s life?

The thought experiment has its origins in a paper by the philosopher Philippa Foot on abortion and the concept of double effect. I’m not sure when it was first invoked in the context of autonomous vehicles, but I first came across trolley car-style hypothesizing about the ethics of self-driving cars in a 2012 essay by Gary Marcus, which I learned about in a post on Nick Carr’s blog. The comments on that blog post, by the way, are worth reading. In response, I wrote my own piece reflecting on what I took to be more subtle issues arising from automated ethical systems.

More recently, following the death of a pedestrian who was struck by one of Uber’s self-driving vehicles in Arizona, Evan Selinger and Brett Frischmann co-authored a piece at Motherboard using the trolley car problem as a way of thinking about the moral and legal issues at stake. It’s worth your consideration. As Selinger and Frischmann point out, the trolley car problem tends to highlight drastic and deadly outcomes, but there are a host of non-lethal actions of moral consequence that an autonomous vehicle may be programmed to take. It’s important that serious thought be given to such matters now before technological momentum sets in.

“So, don’t be fooled when engineers hide behind technical efficiency and proclaim to be free from moral decisions,” the authors conclude. “’I’m just an engineer‘ isn’t an acceptable response to ethical questions. When engineered systems allocate life, death and everything in between, the stakes are inevitably moral.”

In a piece at the Atlantic, however, Ian Bogost recommends that we ditch the trolley car problem as a way of thinking about the ethics of autonomous vehicles. It is, in his view, too blunt an instrument for serious thinking about the ethical ramifications of autonomous vehicles. Bogost believes “that much greater moral sophistication is required to address and respond to autonomous vehicles.” The trolley car problem blinds us to the contextual complexity of morally consequential incidents that will inevitably arise as more and more autonomous vehicles populate our roads.

I wouldn’t go so far as to say that trolley car-style thought experiments are useless, but, with Bogost, I am inclined to believe that they threaten to eclipse the full range of possible ethical and moral considerations in play when we talk about autonomous vehicles.

For starters, the trolley car problem, as Bogost suggests, loads the deck in favor of a utilitarian mode of ethical reflection. I’d go further and say that it stacks the deck in favor of action-oriented approaches to moral reflection, whether rule-based or consequentialist. Of course, it is not altogether surprising that when thinking about moral decision making that must be programmed or engineered, one is tempted by ethical systems that may appear to reduce ethics to a set of rules to be followed or calculations to be executed.

In trolley car scenarios involving autonomous vehicles, it seems to me that two things are true: a choice must be made and there is no right choice.

There is no right answer to the trolley car problem. It is a tragedy either way. The trolley car problem is best thought of as a question to think with not a question to answer definitively. The point is not to find the one morally correct way to act but to come to feel the burden of moral responsibility.

Moreover, when faced with trolley car-like situations in real life, rare as they may be, human beings do not ordinarily have the luxury to reason their way to a morally acceptable answer. They react. It may be impossible to conclusively articulate the sources of that reaction. If there is an ethical theory that can account for it, it would be virtue ethics not varieties of deontology or consequentialism.

If there is no right answer, then, what are we left with?

Responsibility. Living with the consequences of our actions. Justice. The burdens of guilt. Forgiveness. Redemption.

Such things are obviously beyond the reach of programmable ethics. The machine, with which our moral lives are entwined, is oblivious to such subjective states. It cannot be meaningfully held to account. But this is precisely the point. The really important consideration is not what the machine will do, but what the human being will or will not experience and what human capacities will be sustained or eroded.

In short, the trolley car problem leads us astray in at least two related ways. First, it blinds us to the true nature of the equivalent human situation: we react, we do not reason. Second, building on this initial misconstrual, we then fail to see that what we are really outsourcing to the autonomous vehicle is not moral reasoning but moral responsibility.

Katherine Hayles has noted that distributed cognition (distributed, that is, among human and non-humans) implies distributed agency. I would add that distributed agency implies distributed moral responsibility. But it seems to me that moral responsibility is the sort of thing that does not survive such distribution. (At the very least, it requires new categories of moral, legal, and political thought.) And this, as I see it, is the real moral significance of autonomous vehicles: they are but one instance of a larger trend toward a material infrastructure that undermines the plausibility of moral responsibility.

Distributed moral responsibility is just another way of saying deferred or evaded moral responsibility.

The trajectory is longstanding. Here is Jacques Ellul commenting on the challenge modern society poses to the possibility of responsibility.

Let’s consider this from another angle. The trolley car problem focuses our ethical reflection on the accident. As I’ve suggested before, what if we were to ask not “What could go wrong?” but “What if it all goes right?” My point in inverting this query is to remind us that technologies that function exactly as they should and fade seamlessly into the background of our lived experience are at least as morally consequential as those that cause dramatic accidents.

Well-functioning technologies we come to trust become part of the material infrastructure of our experience, which plays an important role in our moral formation. This material infrastructure, the stuff of life with which we as embodied creatures constantly interact, both consciously and unconsciously, is partially determinative of our habitus, the set of habits, inclinations, judgments, and dispositions we bring to bear on the world. This includes, for example, our capacity to perceive the moral valence of our experiences or our capacity to subjectively experience the burden of moral responsibility. In other words, it is not so much a matter of specific decisions, although these are important, but of underlying capacities, orientations, and dispositions.

I suppose the question I’m driving at is this: What is the implicit moral pedagogy of tools to which we outsource acts of moral judgment?

While it might be useful to consider the trolley car, it’s important as well that we leave it behind for the sake of exploring the fullest possible range of challenges posed by emerging technologies with which our moral lives are increasingly entangled.


Tip the Writer


Survival in Justice


Reading Ivan Illich is an intellectually and morally challenging business. Below are two excerpts from Tools for Conviviality. I offer them to you for your consideration. I cannot say that I would endorse them without reservation. Nonetheless, they confront us with the uncomfortable possibility that the cure of our technological malaise will be more radical than most of us have wanted to believe and will require more than most of us are prepared to sacrifice.

These passages also remind us that the prospect of subjecting technology to serious moral critique amounts to a great deal more than intellectual parlor games or mere tinkering with the design of our digital tools. They remind us as well that when we finally come to the roots of our most serious problems, we will find wildly different conceptions of the good life and human flourishing in conflict with one another.

First, against the ideology of growth at all costs.

Our imaginations have been industrially deformed to conceive only what can be molded into an engineered system of social habits that fit the logic of large-scale production. We have almost lost the ability to frame in fancy a world in which sound and shared reasoning sets limits to everybody’s power to interfere with anybody’s equal power to shape the world […] Men with industrially distorted minds cannot grasp the rich texture of personal accomplishments within the range of modern though limited tools. There is no room in their imaginations for the qualitative change that the acceptance of a stable-state industry would mean; a society in which members are free from most of the multiple restraints of schedules and therapies now imposed for the sake of growing tools. Much less do most of our contemporaries experience the sober joy of life in this voluntary though relative poverty which lies within our grasp.

Second, what Illich considers will be the sacrifices required to move toward a more just society.

I argue that survival in justice is possible only at the cost of those sacrifices implicit in the adoption of a convivial mode of production and the universal renunciation of unlimited progeny, affluence, and power on the part of both individuals and groups. This price cannot be extorted by some despotic Leviathan, nor elicited by social engineering. People will rediscover the value of joyful sobriety and liberating austerity only if they relearn to depend on each other rather than on energy slaves. The price for a convivial society will be paid only as the result of a political process which reflects and promotes the society-wide inversion of present industrial consciousness. This political process will find its concrete expression not in some taboo, but in a series of temporary agreements on one or the other concrete limitation of means, constantly adjusted under the pressure of conflicting insights and interests.

As I’ve suggested before, it is often the case that “we want what we cannot possibly have on the terms that we want it.”





Facebook’s Very Own Mr. Kurtz

Facebook has been getting knocked around the ring pretty badly for some time now. Actually, it’s easy to forget, in the midst of a steady stream of controversies, just how long this sort of thing has been going on.

The very latest trouble for Zuckerberg and company came with the leak of an internal post written by Vice President Andrew Bosworth. Bosworth’s post was published in June 2016. Shortly after the post was leaked this past Thursday, Bosworth deleted it and distanced himself from what he had then written.

You can read the whole post at Buzzfeed, where the story was broken.

The post was titled “The Ugly,” and it put forward what I can only describe as a growth-at-all-cost philosophy for the company grounded in an ideology of connection.

After acknowledging that Facebook may be used alternatively to save the life of someone on the brink of suicide or to coordinate a terrorist attack, Bosworth writes, “The ugly truth is that we believe in connecting people so deeply that anything that allows us to connect more people more often is *de facto* good.”

“That isn’t something we are doing for ourselves,” he adds. “Or for our stock price (ha!). It is literally just what we do. We connect people. Period.” It is for this reason, he goes on to explain, that Facebook’s questionable practices and its “pushing the envelope on growth” are justified.

This sentiment— “It is literally just what we do …. Period”—pronounced with such unblinking zeal and untroubled by doubt or conscience, amounts to an assertion of religious or ideological dogma. There is no rationale. There is no weighing of consequences. There is no higher purpose. Theirs not to reason why, theirs but to do and die.

In the aftermath of the leak, Bosworth published a new post in which he claimed the 2016 post “was definitely designed to provoke a response. It served effectively as a call for people across the company to get involved in the debate about how we conduct ourselves amid the ever changing mores of the online community.”

That valuable internal debate was now, in his view, effectively shut down because of the leak, and, Bosworth added, “I won’t be the one to bring it back for fear it will be misunderstood by a broader population that doesn’t have full context on who we are and how we work.”

Do note the condescending gesture: hoi polloi, uninitiated into the company and its mission, will simply not understand.

In an earlier tweet, Bosworth claimed he did not agree with that post today and did not even agree with what it claimed when it was written; it was intended as a provocation. Which raises the question of whether what Bosworth thinks the “broader population” is incapable of comprehending is simply the idea that something might be said hyperbolically so as to generate discussion. It makes more sense, it seems to me, to take his fear of being misunderstood to refer back to the contents of the post itself, which was never merely a provocation.

For his part, Mark Zuckerberg has said the post was something “that most people at Facebook including myself disagreed with strongly.” “We recognize that connecting people isn’t enough by itself,” Zuckerberg explained. “We also need to work to bring people closer together. We changed our whole mission and company focus to reflect this last year.” Like much of what Zuckerberg utters, of course, this borders on the meaningless. Zuckerberg is skilled in talking without saying anything.

Reaction among Facebook employees appears to be somewhat mixed, but mostly defensive of the company and desirous of both more stringent hiring practices and more punitive action against leakers.

Some outside the company have also spoken in measured defense of Bosworth’s post. Reporting on the leak in The Verge, Casey Newton concluded,

It’s ugly to read, but it also stands in stark counterpoint to a popular strain of Facebook criticism which holds that the company’s “move fast and break things” ethos is driven by an executive team that acts without considering its effect on the broader world. For better and for worse, the Bosworth memo shows the company reckoning with its unintended consequences and the ethics of its behavior — even before the 2016 election that caused so many of Facebook’s current problems.

At The Atlantic, Conor Friedersdorf approvingly cites Newton’s conclusion. Making some unfortunate and facile analogies to the advent of the printing press and radio, Friedersdorf generally takes Bosworth’s post as evidence that the company is having serious internal debates about its role in society, including the possible harm it may be causing.*

It seems that we have two options here. On the one hand, we can take Bosworth’s most recent claims about the intent of his post at face value; on the other, we can take the post itself at face value.

Bosworth’s defense is not altogether implausible. Provocation is a well-worn strategy for promoting serious thought and debate. Moreover, the post has been torn from its original context, and we now encounter it amid a wider social climate of not undeserved hostility toward the company. In short, it would be fair to acknowledge that we are more or less primed to read the post in the least favorable light.

As I’ve already suggested, however, there appear to be good reasons to take the post itself at face value. Of course, it is impossible to say conclusively what someone was thinking or intending when they did this or that. Ordinarily, what we have to go on is a person’s character as it is revealed to us by the confluence and divergence of their words and actions over time. Or, to put it in an older, more elegant idiom, by their fruit you shall know them.

In this case, what we have to work with is the company’s own record and, based on that record, I’m more inclined to take Bosworth’s post at something close to face value. It may have been put more pointedly than it would have been by others in the company, but, based on what we do know about Facebook’s practices, the post seems to be a fair representation of at least one powerful ideological current running through the company, one that may be justly summed up as “growth at all cost for the sake of connecting people.”

There is nothing particularly shocking about the revelation that a company is pursuing a growth-at-all-costs strategy. What is more disturbing is the rationale, which transmutes the practice into an ideology. It is one thing to pursue growth-at-all-costs because you want to maximize profits; it is another to do so because you believe that you are serving some higher, benevolent purpose.

At this point I’ll note one sense in which I would be inclined to believe Bosworth’s claim that he did not believe what he himself wrote. Not in the sense that he was innocently offering up a provocation toward deeper thought, but in the sense that he himself did not really believe the business about connecting people but was happy to have his employees motivated by that belief. Frankly, I’m not sure which would be worse.

Regarding the idea that connecting people is an ultimate good for the sake of which all else is done, it is impossible to know the degree to which Bosworth or anybody else at Facebook actually believes it, but it is worth noting that it is not without precedent in the history of communication technology.

As Carolyn Marvin has documented in When Old Technologies Were New, the telegraph and the telephone were often promoted as tools of communication that would bring about cross-cultural harmony and peace merely by connecting people together. “The more any medium triumphed over distance, time, and embodied presence,” Marvin noted, “the more exciting it was, and the more it seemed to tread the path of the future.”

“And as always,” Marvin continued, “new media were thought to hail the dawning of complete cross-cultural understanding, since contact with other cultures would reveal people like those at home. Only physical barriers between cultures were acknowledged. When these were overcome, appreciation and friendliness would reign.”

Of course, such hopes never materialized. In part, this was because those who believed as much never imagined that the world they were connecting was neither like them nor eager to become like them. “Assumptions like this,” Marvin observed, “required their authors to position themselves at the moral center of the universe, and they did. They were convinced that it belonged to them on the strength of their technological achievements.”

Some things never change. The myth of connection, then as now, fails on just this point: “The capacity to reach out to the Other seemed rarely to involve any obligation to behave as a guest in the Other’s domain, to learn or appreciate the Other’s customs, to speak his language, to share his victories and disappointments, or to change as a result of any encounter with him.” The true believers in the myth of connection never seem to understand just how impoverished and superficial the ideal of connection actually is.

But this may all be beside the point for someone who has bought into the mission: “It is literally just what we do. We connect people. Period.”

Bosworth, as he comes across in this post, resembles no one so much as Facebook’s very own Mr. Kurtz, a talented, charismatic man in whom an idea, comprehensible only to the true disciple, has taken hold with morally blinding intensity.

* Even on a generous reading, I’m not sure that the Newton/Friedersdorf defense holds up. I cannot know if robust debate was the point of the post or not, but the post itself does not show anything like what Newton and Friedersdorf claim. There is no reckoning whatsoever implied in anything that Bosworth wrote. There is no invitation to consider the implications of the company’s actions. There is only an overzealous coach delivering a locker room harangue to stiffen the will of his players.



