Beyond the Trolley Car: The Moral Pedagogy of Ethical Tools

It is almost impossible to read about the ethics of autonomous vehicles without encountering some version of the trolley car problem. You’re familiar with the general outline of the problem, I’m sure. An out-of-control trolley car is barreling toward five unsuspecting people on a track. You are able to pull a lever and redirect the trolley toward another track, but there is one person on this track who will be hit by the trolley as a result. What do you do? Nothing and let five people die, or pull the lever and save five at the expense of one person’s life?

The thought experiment has its origins in a paper by the philosopher Philippa Foot on abortion and the concept of double effect. I’m not sure when it was first invoked in the context of autonomous vehicles, but I first came across trolley car-style hypothesizing about the ethics of self-driving cars in a 2012 essay by Gary Marcus, which I learned about in a post on Nick Carr’s blog. The comments on that blog post, by the way, are worth reading. In response, I wrote my own piece reflecting on what I took to be more subtle issues arising from automated ethical systems.

More recently, following the death of a pedestrian who was struck by one of Uber’s self-driving vehicles in Arizona, Evan Selinger and Brett Frischmann co-authored a piece at Motherboard using the trolley car problem as a way of thinking about the moral and legal issues at stake. It’s worth your consideration. As Selinger and Frischmann point out, the trolley car problem tends to highlight drastic and deadly outcomes, but there are a host of non-lethal actions of moral consequence that an autonomous vehicle may be programmed to take. It’s important that serious thought be given to such matters now before technological momentum sets in.

“So, don’t be fooled when engineers hide behind technical efficiency and proclaim to be free from moral decisions,” the authors conclude. “‘I’m just an engineer’ isn’t an acceptable response to ethical questions. When engineered systems allocate life, death and everything in between, the stakes are inevitably moral.”

In a piece at The Atlantic, however, Ian Bogost recommends that we ditch the trolley car problem as a way of thinking about the ethics of autonomous vehicles. It is, in his view, too blunt an instrument for serious thinking about their ethical ramifications. Bogost believes “that much greater moral sophistication is required to address and respond to autonomous vehicles.” The trolley car problem blinds us to the contextual complexity of morally consequential incidents that will inevitably arise as more and more autonomous vehicles populate our roads.

I wouldn’t go so far as to say that trolley car-style thought experiments are useless, but, with Bogost, I am inclined to believe that they threaten to eclipse the full range of possible ethical and moral considerations in play when we talk about autonomous vehicles.

For starters, the trolley car problem, as Bogost suggests, stacks the deck in favor of a utilitarian mode of ethical reflection. I’d go further and say that it favors action-oriented approaches to moral reflection generally, whether rule-based or consequentialist. Of course, it is not altogether surprising that when thinking about moral decision making that must be programmed or engineered, one is tempted by ethical systems that appear to reduce ethics to a set of rules to be followed or calculations to be executed.

In trolley car scenarios involving autonomous vehicles, it seems to me that two things are true: a choice must be made and there is no right choice.

There is no right answer to the trolley car problem. It is a tragedy either way. The trolley car problem is best thought of as a question to think with, not a question to answer definitively. The point is not to find the one morally correct way to act but to come to feel the burden of moral responsibility.

Moreover, when faced with trolley car-like situations in real life, rare as they may be, human beings do not ordinarily have the luxury to reason their way to a morally acceptable answer. They react. It may be impossible to conclusively articulate the sources of that reaction. If there is an ethical theory that can account for it, it would be virtue ethics, not varieties of deontology or consequentialism.

If there is no right answer, then, what are we left with?

Responsibility. Living with the consequences of our actions. Justice. The burdens of guilt. Forgiveness. Redemption.

Such things are obviously beyond the scope of programmable ethics. The machine, with which our moral lives are entwined, is oblivious to such subjective states. It cannot be meaningfully held to account. But this is precisely the point. The really important consideration is not what the machine will do, but what the human being will or will not experience, and what human capacities will be sustained or eroded.

In short, the trolley car problem leads us astray in at least two related ways. First, it blinds us to the true nature of the equivalent human situation: we react, we do not reason. Second, building on this initial misconstrual, we then fail to see that what we are really outsourcing to the autonomous vehicle is not moral reasoning but moral responsibility.

Katherine Hayles has noted that distributed cognition (distributed, that is, among human and non-humans) implies distributed agency. I would add that distributed agency implies distributed moral responsibility. But it seems to me that moral responsibility is the sort of thing that does not survive such distribution. (At the very least, it requires new categories of moral, legal, and political thought.) And this, as I see it, is the real moral significance of autonomous vehicles: they are but one instance of a larger trend toward a material infrastructure that undermines the plausibility of moral responsibility.

Distributed moral responsibility is just another way of saying deferred or evaded moral responsibility.

The trajectory is longstanding. Jacques Ellul, for one, commented on the challenge modern society poses to the very possibility of responsibility.

Let’s consider this from another angle. The trolley car problem focuses our ethical reflection on the accident. As I’ve suggested before, what if we were to ask not “What could go wrong?” but “What if it all goes right?” My point in inverting this query is to remind us that technologies that function exactly as they should and fade seamlessly into the background of our lived experience are at least as morally consequential as those that cause dramatic accidents.

Well-functioning technologies we come to trust become part of the material infrastructure of our experience, which plays an important role in our moral formation. This material infrastructure, the stuff of life with which we as embodied creatures constantly interact, both consciously and unconsciously, is partially determinative of our habitus, the set of habits, inclinations, judgments, and dispositions we bring to bear on the world. This includes, for example, our capacity to perceive the moral valence of our experiences or our capacity to subjectively experience the burden of moral responsibility. In other words, it is not so much a matter of specific decisions, although these are important, but of underlying capacities, orientations, and dispositions.

I suppose the question I’m driving at is this: What is the implicit moral pedagogy of tools to which we outsource acts of moral judgment?

While it might be useful to consider the trolley car, it’s important as well that we leave it behind for the sake of exploring the fullest possible range of challenges posed by emerging technologies with which our moral lives are increasingly entangled.

The Rhetorical “We” and the Ethics of Technology

“Questioning AI ethics does not make you a gloomy Luddite,” or so the title of a recent article in a London business newspaper assures us. The most important thing to be learned here is that someone feels this needs to be said. Beyond that, there is also something instructive about the concluding paragraphs. If we read them against the grain, these paragraphs teach us something about how difficult it is to bring ethics to bear on technology.

“I’m simply calling for us to use all the tools at our disposal to build a better digital future,” the author tells us.

In practice, this means never forgetting what makes us human. It means raising awareness and entering into dialogue about the issue of ethics in AI. It means using our imaginations to articulate visions for a future that’s appealing to us.

If we can decide on the type of society we’d like to create, and the type of existence we’d like to have, we can begin to forge a path there.

All in all, it’s essential that we become knowledgeable, active, and influential on AI in every small way we can. This starts with getting to grips with the subject matter and past extreme and sensationalised points of view. The decisions we collectively make today will influence many generations to come.

Here are the challenges:

We have no idea what makes us human. You may, but we don’t.

We have nowhere to conduct meaningful dialogue; we don’t even know how to have meaningful dialogue.

Our imaginations were long ago surrendered to technique.

We can’t decide on the type of society we’d like to create or the type of existence we’d like to have, chiefly because this “we” is rhetorical. It is abstract and amorphous.

There is no meaningful antecedent to the pronouns we and us used throughout these closing paragraphs. Ethics is a communal business. This is no less true with regard to technology; perhaps it is all the more true. There is, however, no we there.

As individuals, we are often powerless against larger forces dictating how we are to relate to technology. The state is in many respects beholden to the technological–ideologically, politically, economically. Regrettably, we have very few communities located between the individual and the state constituting a we that can meaningfully deliberate and effectively direct the use of technology.

Technologies like AI emerge and evolve in social spaces that are resistant to substantial ethical critique. They also operate at a scale that undermines the possibility of ethical judgment and responsibility. Moreover, our society is ordered in such a way that there is very little to be done about it, chiefly because of the absence of structures that would sustain and empower ethical reflection and practice, the absence, in other words, of a we that is not merely rhetorical.


Growing Up with AI

In an excerpt from her forthcoming book, Who Can You Trust? How Technology Brought Us Together and Why It Might Drive Us Apart, Rachel Botsman reflects on her three-year-old’s interactions with Amazon’s AI assistant, Alexa.

Botsman found that her daughter took quite readily to Alexa and was soon asking her all manner of questions and even asking Alexa to make choices for her, about what to wear, for instance, or what she should do that day. “Grace’s easy embrace of Alexa was slightly amusing but also alarming,” Botsman admits. “Today,” she adds, “we’re no longer trusting machines just to do something, but to decide what to do and when to do it.” She then goes on to observe that the next generation will grow up surrounded by AI agents, so that the question will not be “Should we trust robots?” but rather “Do we trust them too much?”

Along with issues of privacy and data gathering, Botsman was especially concerned with the intersection of AI technology and commercial interests: “Alexa, after all, is not ‘Alexa.’ She’s a corporate algorithm in a black box.”

To these concerns, philosopher Mark White, elaborating on Botsman’s reflections, adds the following:

Again, this would not be as much of a problem if the choices we cede to algorithms only dealt with songs and TV shows. But as Botsman’s story shows, the next generation may develop a degree of faith in the “wisdom” of technology that leads them to give up even more autonomy to machines, resulting in a decline in individual identity and authenticity as more and more decisions are left to other parties to make in interests that are not the person’s own—but may be very much in the interests of those programming and controlling the algorithms.

These concerns are worth taking into consideration. I’m ambivalent about framing a critique of technology in terms of authenticity, or even individual identity, but I’m not opposed to a conversation along these lines. Such a conversation at least encourages us to think a bit more deeply about the role of technology in shaping the sorts of people we are always in the process of becoming. This is, of course, especially true of children.

Our identity, however, does not emerge in pristine isolation from other human beings or independently of the fabric of our material culture, technologies included. Nor is that the ideal to which we should aspire. Technology will unavoidably be part of our children’s lives and ours. But which technologies? Under what circumstances? For what purposes? With what consequences? These are some of the questions we should be asking.

Of an AI assistant that becomes part of a child’s taken-for-granted environment, other more specific questions also come to mind.

What conversations or interactions will the AI assistant displace?

How will it affect the development of a child’s imagination?

How will it direct a child’s attention?

How will a child’s language acquisition be affected?

What expectations will it create regarding the solicitude the world will show them?

How will their curiosity be shaped by what the AI assistant can and cannot answer?

Will AI assistants undermine the development of critical cognitive skills by immediately answering simple questions?

Will their communication and imaginative life shrink to the narrow parameters within which they can interact with AI?

Will parents be tempted to offload their care and attentiveness to the AI assistant, and with what consequences?

Of AI assistants generally, we might conclude that what they do well–answer simple direct questions, for example–may, in fact, prove harmful to a child’s development, and what they do poorly–provide for rich, complex engagement with the world–is what children need most.

We tend to bend ourselves to fit the shape of our tools. Even as tech-savvy adults we do this. It seems just as likely that children will do likewise. For this reason, we do well to think long and hard about the devices that we bring to bear upon their lives.

We make all sorts of judgments as a society about when it is appropriate for children to experience certain realities, and this care for children is one of the marks of a healthy society. We do this through laws, policy, and cultural norms. With regard to the norms that govern the technology we introduce into our children’s lifeworld, we would do well, it seems to me, to adopt a more precautionary stance. Sometimes this means shielding children from certain technologies when it is not altogether obvious that their impact will be beneficial. We should, in other words, shift the burden of proof so that a technology must earn its place in our children’s lives.

Botsman finally concluded that her child was not ready for Alexa to be a part of her life and that it was possibly usurping her own role as parent:

Our kids are going to need to know where and when it is appropriate to put their trust in computer code alone. I watched Grace hand over her trust to Alexa quickly. There are few checks and balances to deter children from doing just that, not to mention very few tools to help them make informed decisions about A.I. advice. And isn’t helping Gracie learn how to make decisions about what to wear — and many more even important things in life — my job? I decided to retire Alexa to the closet.

It is even better when companies recognize some of these problems and decide (from mixed motives, I’m sure) to pull a device whose place in a child’s life is at best ambiguous.


This post is part of a series on being a parent in the digital age.

The Consolations of a Technologically Re-enchanted World

Navneet Alang writes about digital culture with a rare combination of insight and eloquence. In a characteristically humane meditation on the perennial longings expressed by our use of social media and digital devices, Alang recounts a brief exchange he found himself having with Alexa, the AI assistant that accompanies Amazon Echo.

Alang had asked Alexa about the weather while he was traveling in an unfamiliar city. Alexa alerted him to the forecasted rain, and, without knowing why exactly, Alang thanked the device. “No problem,” Alexa replied.

It was Alang’s subsequent reflection on that exchange that I found especially interesting:

In retrospect, I had what was a very strange reaction: a little jolt of pleasure. Perhaps it was because I had mostly spent those two weeks alone, but Alexa’s response was close enough to the outline of human communication to elicit a feeling of relief in me. For a moment, I felt a little less lonely.

From there, Alang considers apps which allow users to anonymously publish their secrets to the world or to the void–who can tell–and little-used social media sites on which users compose surprisingly revealing messages seemingly directed at no one in particular. A reminder that, as Elizabeth Stoker Bruenig has noted, “Confession, once rooted in religious practice, has assumed a secular importance that can be difficult to describe.”

Part of what makes the effort to understand technology so fascinating and challenging is that we are not, finally, trying to understand discrete artifacts or even expansive systems; what we are really trying to understand is the human condition, alternately and sometimes simultaneously expressed, constituted, and frustrated by our use of all that we call technology.

As Alang notes near the end of his essay, “what digital technologies do best, to our benefit and detriment, is to act as a canvas for our desires.” And, in his discussion, social media and confessional apps express “a wish to be seen, to be heard, to be apprehended as nothing less than who we imagine ourselves to be.” In the most striking paragraph of the piece, Alang expands on this point:

“Perhaps, then, that Instagram shot or confessional tweet isn’t always meant to evoke some mythical, pretend version of ourselves, but instead seeks to invoke the imagined perfect audience—the non-existent people who will see us exactly as we want to be seen. We are not curating an ideal self, but rather, an ideal Other, a fantasy in which our struggle to become ourselves is met with the utmost empathy.”

This strikes me as being rather near the mark. We might also consider the possibility that we seek this ideal Other precisely so that we might receive back from it a more coherent version of ourselves. The empathetic Other who comes to know me may then tell me what I need to know about myself. A trajectory begins to come into focus, one that takes in both the confessional booth and the therapist’s office. Perhaps this presses the point too far; I don’t know. It is, in any case, a promise implicit in the rhetoric of Big Data: that it is the Other that knows us better than we know ourselves. If, to borrow St. Augustine’s formulation, we have become a question to ourselves, then the purveyors of Big Data proffer to us the answer.

It also strikes me that the yearning Alang describes, in another era, would have been understood chiefly as a deeply religious longing. We may see it as fantasy, or, as C.S. Lewis once put it, we may see it as “the truest index of our real situation.”

Interestingly, the paragraph from which that line is taken may bring us back to where we started: with Alang deriving a “little jolt of pleasure” from his exchange with Alexa. Here is the rest of it:

“Apparently, then, our lifelong nostalgia, our longing to be reunited with something in the universe from which we now feel cut off, to be on the inside of some door which we have always seen from the outside, is no mere neurotic fancy, but the truest index of our real situation.”

For some time now, I’ve entertained the idea that the combination of technologies that promises to animate our mute and unresponsive material environment–think Internet of Things, autonomous machines, augmented reality, AI–entices us with a re-enchanted world: the human-built world, technologically enchanted. Which is to say, a material world that flatters us by appearing to be responsive to our wishes and desires, even speaking to us when spoken to–in short, noting us and thereby marginally assuaging the loneliness for which our social media posts are just another sort of therapy.

On the Moral Implications of Willful Acts of Virtual Harm

Perhaps you’ve seen the clip below in which a dog-like robot developed by Boston Dynamics, a Google-owned robotics company, receives a swift kick and manages to maintain its balance:

I couldn’t resist tweeting that clip with this text: “The mechanical Hound slept but did not sleep, lived but did not live in … a dark corner of the fire house.” That line, of course, is from Ray Bradbury’s Fahrenheit 451, in which the mechanical Hound is deployed to track down dissidents. The apt association was first suggested to me a few months back by a reader’s email occasioned by an earlier Boston Dynamics robot.

My glib tweet aside, many have found the clip disturbing for a variety of reasons. One summary of the concerns can be found in a CNN piece by Phoebe Parke titled, “Is It Cruel to Kick a Robot Dog?” (via Mary Chayko). That question reminded me of a 2013 essay by Richard Fisher posted at BBC Future, “Is It OK to Torture or Murder a Robot?”

Both articles discuss our propensity to anthropomorphize non-human entities and artifacts. Looked at in that way, the ethical concerns seem misplaced if not altogether silly. So, according to one AI researcher quoted by Parke, “The only way it’s unethical is if the robot could feel pain.” A robot cannot feel pain, thus there is nothing unethical about the way we treat robots.

But is that really all that needs to be said about the ethical implications?

Consider these questions raised by Fisher:

“To take another example: if a father is torturing a robot in front of his 4-year-old son, would that be acceptable? The child can’t be expected to have the sophisticated understanding of adults. Torturing a robot teaches them that acts that cause suffering – simulated or not – are OK in some circumstances.

Or to take it to an extreme: imagine if somebody were to take one of the childlike robots already being built in labs, and sell it to a paedophile who planned to live out their darkest desires. Should a society allow this to happen?

Such questions about apparently victimless evil are already playing out in the virtual world. Earlier this year, the New Yorker described the moral quandaries raised when an online forum discussing Grand Theft Auto asked players if rape was acceptable inside the game. One replied: ‘I want to have the opportunity to kidnap a woman, hostage her, put her in my basement and rape her everyday, listen to her crying, watching her tears.’ If such unpleasant desires could be actually lived with a physical robotic being that simulates a victim, it may make it more difficult to tolerate.”

These are challenging questions that, to my mind, expose the inadequacy of thinking about the ethics of technology, or ethics more broadly, from a strictly instrumental perspective.

Recently, philosopher Charlie Huenemann posed a similarly provocative reflection on killing dogs in Minecraft. His reflections led him to consider the moral standing of the attachments we form to objects, whether they be material or virtual, in a manner I found helpful. Here are his concluding paragraphs:

The point is that we form attachments to things that may have no feelings or rights whatsoever, but by forming attachments to them, they gain some moral standing. If you really care about something, then I have at least some initial reason to be mindful of your concern. (Yes, lots of complications can come in here – “What if I really care for the fire that is now engulfing your home?” – but the basic point stands: there is some initial reason, though not necessarily a final or decisive one.) I had some attachment to my Minecraft dogs, which is why I felt sorry when they died. Had you come along in a multiplayer setting and chopped them to death for the sheer malicious pleasure of doing so, I could rightly claim that you did something wrong.

Moreover, we can also speak of attachments – even to virtual objects – that we should form, just as part of being good people. Imagine if I were to gain a Minecraft dog that accompanied me on many adventures. I even offer it rotten zombie flesh to eat on several occasions. But then one day I tire of it and chop it into nonexistence. I think most of us would be surprised: “Why did you do that? You had it a long time, and even took care of it. Didn’t you feel attached to it?” Suppose I say, “No, no attachment at all”. “Well, you should have”, we would mumble. It just doesn’t seem right not to have felt some attachment, even if it was overcome by some other concern. “Yes, I was attached to it, but it was getting in the way too much”, would have been at least more acceptable as a reply. (“Still, you didn’t have to kill it. You could have just clicked on it to sit forever….”)

The first of my 41 questions about the ethics of technology was a simple one: What sort of person will the use of this technology make of me?

It’s a simple question, but one we often fail to ask because we assume that ethical considerations apply only to what people do with technology, to the acts themselves. It is a question, I think, that helps us imagine the moral implications of willful acts of virtual harm.

Of course, it is also worth asking, “What sort of person does my use of this technology reveal me to be?”