Beyond the Trolley Car: The Moral Pedagogy of Ethical Tools

It is almost impossible to read about the ethics of autonomous vehicles without encountering some version of the trolley car problem. You’re familiar with the general outline of the problem, I’m sure. An out-of-control trolley car is barreling toward five unsuspecting people on a track. You are able to pull a lever and redirect the trolley toward another track, but there is one person on this track who will be hit by the trolley as a result. What do you do? Nothing and let five people die, or pull the lever and save five at the expense of one person’s life?

The thought experiment has its origins in a paper by the philosopher Philippa Foot on abortion and the concept of double effect. I’m not sure when it was first invoked in the context of autonomous vehicles, but I first came across trolley car-style hypothesizing about the ethics of self-driving cars in a 2012 essay by Gary Marcus, which I learned about in a post on Nick Carr’s blog. The comments on that blog post, by the way, are worth reading. In response, I wrote my own piece reflecting on what I took to be more subtle issues arising from automated ethical systems.

More recently, following the death of a pedestrian who was struck by one of Uber’s self-driving vehicles in Arizona, Evan Selinger and Brett Frischmann co-authored a piece at Motherboard using the trolley car problem as a way of thinking about the moral and legal issues at stake. It’s worth your consideration. As Selinger and Frischmann point out, the trolley car problem tends to highlight drastic and deadly outcomes, but there are a host of non-lethal actions of moral consequence that an autonomous vehicle may be programmed to take. It’s important that serious thought be given to such matters now before technological momentum sets in.

“So, don’t be fooled when engineers hide behind technical efficiency and proclaim to be free from moral decisions,” the authors conclude. “‘I’m just an engineer’ isn’t an acceptable response to ethical questions. When engineered systems allocate life, death and everything in between, the stakes are inevitably moral.”

In a piece at The Atlantic, however, Ian Bogost recommends that we ditch the trolley car problem as a way of thinking about the ethics of autonomous vehicles. It is, in his view, too blunt an instrument for serious thinking about their ethical ramifications. Bogost believes “that much greater moral sophistication is required to address and respond to autonomous vehicles.” The trolley car problem blinds us to the contextual complexity of morally consequential incidents that will inevitably arise as more and more autonomous vehicles populate our roads.

I wouldn’t go so far as to say that trolley car-style thought experiments are useless, but, with Bogost, I am inclined to believe that they threaten to eclipse the full range of possible ethical and moral considerations in play when we talk about autonomous vehicles.

For starters, the trolley car problem, as Bogost suggests, loads the deck in favor of a utilitarian mode of ethical reflection. I’d go further and say that it stacks the deck in favor of action-oriented approaches to moral reflection, whether rule-based or consequentialist. Of course, it is not altogether surprising that when thinking about moral decision making that must be programmed or engineered, one is tempted by ethical systems that may appear to reduce ethics to a set of rules to be followed or calculations to be executed.
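To see how crude that reduction looks once it is actually written down, consider the following minimal sketch. It is purely illustrative and drawn from no actual vehicle system; the function names, harm scores, and hard-coded prohibition are invented for the example.

```python
# A deliberately crude illustration of ethics reduced to rules and calculations.
# Nothing here models a real autonomous vehicle; the harm estimates and the
# forbidden-action rule are invented placeholders.

def consequentialist_choice(outcomes):
    """Pick the action with the lowest projected harm score."""
    return min(outcomes, key=outcomes.get)

def rule_based_choice(actions, forbidden):
    """Pick the first action that does not violate a hard-coded prohibition."""
    for action in actions:
        if action not in forbidden:
            return action
    return None  # every available action is forbidden

# The trolley scenario, flattened into numbers and rules:
print(consequentialist_choice({"stay_course": 5.0, "swerve": 1.0}))        # swerve
print(rule_based_choice(["swerve", "stay_course"], forbidden={"swerve"}))  # stay_course
```

Everything such a sketch cannot represent (guilt, responsibility, the tragic character of the choice) is precisely what the rest of this discussion is concerned with.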

In trolley car scenarios involving autonomous vehicles, it seems to me that two things are true: a choice must be made and there is no right choice.

There is no right answer to the trolley car problem. It is a tragedy either way. The trolley car problem is best thought of as a question to think with, not a question to answer definitively. The point is not to find the one morally correct way to act but to come to feel the burden of moral responsibility.

Moreover, when faced with trolley car-like situations in real life, rare as they may be, human beings do not ordinarily have the luxury to reason their way to a morally acceptable answer. They react. It may be impossible to conclusively articulate the sources of that reaction. If there is an ethical theory that can account for it, it would be virtue ethics, not varieties of deontology or consequentialism.

If there is no right answer, then, what are we left with?

Responsibility. Living with the consequences of our actions. Justice. The burdens of guilt. Forgiveness. Redemption.

Such things are obviously beyond the reach of programmable ethics. The machine, with which our moral lives are entwined, is oblivious to such subjective states. It cannot be meaningfully held to account. But this is precisely the point. The really important consideration is not what the machine will do, but what the human being will or will not experience and what human capacities will be sustained or eroded.

In short, the trolley car problem leads us astray in at least two related ways. First, it blinds us to the true nature of the equivalent human situation: we react, we do not reason. Second, building on this initial misconstrual, we then fail to see that what we are really outsourcing to the autonomous vehicle is not moral reasoning but moral responsibility.

N. Katherine Hayles has noted that distributed cognition (distributed, that is, among humans and non-humans) implies distributed agency. I would add that distributed agency implies distributed moral responsibility. But it seems to me that moral responsibility is the sort of thing that does not survive such distribution. (At the very least, it requires new categories of moral, legal, and political thought.) And this, as I see it, is the real moral significance of autonomous vehicles: they are but one instance of a larger trend toward a material infrastructure that undermines the plausibility of moral responsibility.

Distributed moral responsibility is just another way of saying deferred or evaded moral responsibility.

The trajectory is longstanding. Jacques Ellul commented at length on the challenge modern society poses to the possibility of responsibility; see the excerpts from his work further below.

Let’s consider this from another angle. The trolley car problem focuses our ethical reflection on the accident. As I’ve suggested before, what if we were to ask not “What could go wrong?” but “What if it all goes right?” My point in inverting this query is to remind us that technologies that function exactly as they should and fade seamlessly into the background of our lived experience are at least as morally consequential as those that cause dramatic accidents.

Well-functioning technologies we come to trust become part of the material infrastructure of our experience, which plays an important role in our moral formation. This material infrastructure, the stuff of life with which we as embodied creatures constantly interact, both consciously and unconsciously, is partially determinative of our habitus, the set of habits, inclinations, judgments, and dispositions we bring to bear on the world. This includes, for example, our capacity to perceive the moral valence of our experiences or our capacity to subjectively experience the burden of moral responsibility. In other words, it is not so much a matter of specific decisions, although these are important, but of underlying capacities, orientations, and dispositions.

I suppose the question I’m driving at is this: What is the implicit moral pedagogy of tools to which we outsource acts of moral judgment?

While it might be useful to consider the trolley car, it’s important as well that we leave it behind for the sake of exploring the fullest possible range of challenges posed by emerging technologies with which our moral lives are increasingly entangled.


 


Jacques Ellul on Technique As An Obstacle To Ethics

The following excerpts are taken from “The Search for Ethics In a Technicist Society” (1983) by Jacques Ellul. In this essay, Ellul considers the challenges posed to traditional morality in a society dominated by technique.

James Fowler on what Ellul meant by technique: “Ellul’s issue was not with technological machines but with a society necessarily caught up in efficient methodological techniques. Technology, then, is but an expression and by-product of the underlying reliance on technique, on the proceduralization whereby everything is organized and managed to function most efficiently, and directed toward the most expedient end of the highest productivity.”

In Ellul’s view, “The ethical problem, that is human behavior, can only be considered in relation to this system, not in relation to some particular technical object or other.” “If technique is a milieu and a system,” he adds, “the ethical problem can only be posed in terms of this global operation. Behavior and particular choices no longer have much significance. What is required is thus a global change in our habits or values, the rediscovery of either an existential ethics or a new ontology.”

Emphasis in boldface below is mine.

On the call to subordinate means to ends:

It is quite right to say that technique is only made of means, it is an ensemble of means (We shall return to this later), but only with the qualification that these means obey their own laws and are no longer subordinated to ends. Besides, one must distinguish ideal ends (values, for example), goals (national, for example), and the objectives (immediate objectives: a researcher who tries to solve some particular problem). Science and technique develop according to objectives, rarely and accidentally in relation to more general goals, and never for ethical or spiritual ideals. There is no relation between the proclamation of values (justice, freedom, etc.) and the orientation of technical development. Those who are concerned with values (theologians, philosophers, etc.) have no influence on the specialists of technique and cannot require, for example, that some aspect of current research or other means should be abandoned for the sake of some value.

On the difficulty of determining who exactly must act to subordinate technique to moral ends:

To adopt one of these first two ethical orientations is to argue that it is human beings who must create a good use for technique or impose ends on it, but always neglecting to specify which human beings. Is the “who” not important? Is technique able to be mastered by just any passer-by, every worker, some ordinary person? Is this person the politician? The public at large? The intellectual and technician? Some collectivity? Humanity as a whole? For the most part politicians cannot grasp technique, and each specialist can understand an infinitesimal portion of the technical universe, just as each citizen only makes use of an infinitesimal piece of the technical apparatus. How could such a person possibly modify the whole? As for the collectivity or some class (if they exist as specific entities) they are wholly ignorant of the problem of technique as a system. Finally, what might be called “Councils of the Wise” […] have often been set up only to demonstrate their own importance, just as have international commissions and international treaties [….] Who is supposed to impose ends or get hold of the technical apparatus? No one knows.

On the compromised position from which we try to think ethically about technique:

At the same time, one should not forget the fact that human beings are themselves already modified by the technical phenomenon. When infants are born, the environment in which they find themselves is technique, which is a “given.” Their whole education is oriented toward adaptation to the conditions of technique (learning how to cross streets at traffic lights) and their instruction is destined to prepare them for entrance into some technical employment. Human beings are psychologically modified by consumption, by technical work, by news, by television, by leisure activities (currently, the proliferation of computer games), etc., all of which are techniques. In other words, it must not be forgotten that it is this very humanity which has been pre-adapted to and modified by technique that is supposed to master and reorient technique. It is obvious that this will not be able to be done with any independence.

On the pressure to adapt to technique:

Finally, one other ethical orientation in regard to technique is that of adaptation. And this can be added to the entire ideology of facts: technique is the ultimate Fact. Humanity must adapt to facts. What prevents technique from operating better is the whole stock of ideologies, feelings, principles, beliefs, etc. that people continue to carry around and which are derived from traditional situations. It is necessary (and this is the ethical choice!) to liquidate all such holdovers, and to lead humanity to a perfect operational adaptation that will bring about the greatest possible benefit from the technique. Adaptation becomes a moral criterion.

 

The Rhetorical “We” and the Ethics of Technology

“Questioning AI ethics does not make you a gloomy Luddite,” or so the title of a recent article in a London business newspaper assures us. The most important thing to be learned here is that someone feels this needs to be said. Beyond that, there is also something instructive about the concluding paragraphs. If we read them against the grain, these paragraphs teach us something about how difficult it is to bring ethics to bear on technology.

“I’m simply calling for us to use all the tools at our disposal to build a better digital future,” the author tells us.

In practice, this means never forgetting what makes us human. It means raising awareness and entering into dialogue about the issue of ethics in AI. It means using our imaginations to articulate visions for a future that’s appealing to us.

If we can decide on the type of society we’d like to create, and the type of existence we’d like to have, we can begin to forge a path there.

All in all, it’s essential that we become knowledgeable, active, and influential on AI in every small way we can. This starts with getting to grips with the subject matter and past extreme and sensationalised points of view. The decisions we collectively make today will influence many generations to come.

Here are the challenges:

We have no idea what makes us human. You may, but we don’t.

We have nowhere to conduct meaningful dialogue; we don’t even know how to have meaningful dialogue.

Our imaginations were long ago surrendered to technique.

We can’t decide on the type of society we’d like to create or the type of existence we’d like to have, chiefly because this “we” is rhetorical. It is abstract and amorphous.

There is no meaningful antecedent to the pronouns we and us used throughout these closing paragraphs. Ethics is a communal business. This is no less true with regard to technology; perhaps it is all the more true. There is, however, no we there.

As individuals, we are often powerless against larger forces dictating how we are to relate to technology. The state is in many respects beholden to the technological–ideologically, politically, economically. Regrettably, we have very few communities located between the individual and the state constituting a we that can meaningfully deliberate and effectively direct the use of technology.

Technologies like AI emerge and evolve in social spaces that are resistant to substantial ethical critique. They also operate at a scale that undermines the possibility of ethical judgment and responsibility. Moreover, our society is ordered in such a way that there is very little to be done about it, chiefly because of the absence of structures that would sustain and empower ethical reflection and practice, the absence, in other words, of a we that is not merely rhetorical.



The Ethics of Technological Mediation

Where do we look when we’re looking for the ethical implications of technology? A few would say that we look at the technological artifact itself. Many more would counter that the only place to look for matters of ethical concern is to the human subject. The philosopher of technology Peter-Paul Verbeek argues that there is another, perhaps more important, place for us to look: the point of mediation, the point where the artifact and human subjectivity come together to create effects that cannot be located in either the artifact or the subject taken alone.

Early on in Moralizing Technology: Understanding and Designing the Morality of Things (2011), Verbeek briefly outlines the emergence of the field known as “ethics of technology.” “In its early days,” Verbeek notes, “ethical approaches to technology took the form of critique. Rather than addressing specific ethical problems related to actual technological developments, ethical reflection on technology focused on criticizing the phenomenon of ‘Technology’ itself.” Here we might think of Heidegger, critical theory, or Jacques Ellul. In time, “ethics of technology” emerged “seeking increased understanding of and contact with actual technological practices and developments,” and soon a host of sub-fields appeared: biomedical ethics, ethics of information technology, ethics of nanotechnology, engineering ethics, ethics of design, etc.

This approach remains, according to Verbeek, “merely instrumentalist.” “The central focus of ethics,” on this view, “is to make sure that technology does not have detrimental effects in the human realm and that human beings control the technological realm in morally justifiable ways.” It’s not that these considerations are unimportant, quite the contrary, but Verbeek believes that this approach “does not yet go far enough.”

Verbeek explains the problem:

“What remains out of sight in this externalist approach is the fundamental intertwining of these two domains [the human and the technological]. The two simply cannot be separated. Humans are technological beings, just as technologies are social entities. Technologies, after all, play a constitutive role in our daily lives. They help to shape our actions and experiences, they inform our moral decisions, and they affect the quality of our lives. When technologies are used, they inevitably help to shape the context in which they function. They help specific relations between human beings and reality to come about and coshape new practices and ways of living.”

Observing that technologies mediate perception (how we register the world) and action (how we act into the world), Verbeek elaborates a theory of technological mediation, built upon a postphenomenological approach to technology pioneered by Don Ihde. Rather than focus exclusively on either the artifact “out there,” the technological object, or the will “in here,” the human subject, Verbeek invites us to focus ethical attention on the constitution of both the perceived object and the subject’s intention in the act of technological mediation. In other words, how technology shapes perception and action is also of ethical consequence.

As Verbeek rightly insists, “Artifacts are morally charged; they mediate moral decisions, shape moral subjects, and play an important role in moral agency.”

Verbeek turns to the work of Ihde for some analytic tools and categories. Among the many ways humans might relate to technology, Ihde notes two relations of “mediation.” The first of these he calls “embodiment relations” in which the tools are incorporated by the user and the world is experienced through the tool (think of the blind man’s stick). The second he calls a “hermeneutic relation.” Verbeek explains:

“In this relation, technologies provide access to reality not because they are ‘incorporated,’ but because they provide a representation of reality, which requires interpretation [….] Ihde shows that technologies, when mediating our sensory relationship with reality, transform what we perceive. According to Ihde, the transformation of perception always has the structure of amplification and reduction.”

Verbeek gives us the example of looking at a tree through an infrared camera: most of what we see when we look at a tree unaided is “reduced,” but the heat signature of the tree is “amplified” and the tree’s health may be better assessed. Ihde calls this capacity of a tool to transform our perception “technological intentionality.” In other words, the technology directs and guides our perception and our attention. It says to us, “Look at this here not that over there” or “Look at this thing in this way.” This function is not morally irrelevant, especially when you consider that this effect is not contained within the digital platform but spills out into our experience of the world.

Verbeek also believes that our reflection on the moral consequences of technology would do well to take virtue ethics seriously. With regards to the ethics of technology, we typically ask, “What should I or should I not do with this technology?” and thus focus our attention on our actions. In this, we follow the lead of the two dominant modern ethical traditions: the deontological tradition stemming from Immanuel Kant, on the one hand, and the consequentialist tradition, closely associated with Bentham and Mill, on the other. In the case of both traditions, a particular sort of moral subject or person is in view—an autonomous and rational individual who acts freely and in accord with the dictates of reason.

In the Kantian tradition, the individual, having decided upon the right course of action through the right use of their reason, is duty bound to act thusly, regardless of consequences. In the consequentialist tradition, the individual rationally calculates which action will yield the greatest degree of happiness, variously understood, and acts accordingly.

If technology comes into play in such reasoning by such a person, it is strictly as an instrument of the individual will. The question, again, is simply, “What should I do or not do with it?” We ascertain the answer by either determining the dictates of subjective reasoning or calculating the objective consequences of an action; the latter approach is perhaps more appealing for its resonance with the ethos of technique.

We might conclude, then, that the popular instrumentalist view of technology—a view which takes technology to be a mere tool, a morally neutral instrument of a sovereign will—is the natural posture of the sort of individual or moral subject that modernity yields. It is unlikely to occur to such an individual that technology is not only a tool with which moral and immoral actions are performed but also an instrument of moral formation, informing and shaping the moral subject.

It is not that the instrumentalist posture is of no value, of course. On the contrary, it raises important questions that ought to be considered and investigated. The problem is that this approach is incomplete and too easily co-opted by the very realities that it seeks to judge. It is, on its own, ultimately inadequate to the task because it takes as its starting point an inadequate and incomplete understanding of the human person.

There is, however, another older approach to ethics that may help us fill out the picture and take into account other important aspects of our relation to technology: the tradition of virtue ethics in both its classical and medieval manifestations.

Verbeek comments on some of the advantages of virtue ethics. To begin with, virtue ethics does not ask, “What am I to do?” Rather, it asks, in Verbeek’s formulation, “What is the good life?” We might also add a related question that virtue ethics raises: “What sort of person do I want to be?” This is a question that Verbeek also considers, taking his cues from the later work of Michel Foucault.

The question of the good life, Verbeek adds,

“does not depart from a separation of subject and object but from the interwoven character of both. A good life, after all, is shaped not only on the basis of human decisions but also on the basis of the world in which it plays itself out (de Vries 1999). The way we live is determined not only by moral decision making but also by manifold practices that connect us to the material world in which we live. This makes ethics not a matter of isolated subjects but, rather, of connections between humans and the world in which they live.”

Virtue ethics, with its concern for habits, practices, and communities of moral formation, illuminates the various ways technologies impinge upon our moral lives. For example, a technologically mediated action that, taken on its own and in isolation, may be judged morally right or indifferent may appear in a different light when considered as one instance of a habit-forming practice that shapes our disposition and character.

Moreover, virtue ethics, which predates the advent of modernity, does not necessarily assume the sovereign individual as its point of departure. For this reason, it is more amenable to the ethics of technological mediation elaborated by Verbeek. Verbeek argues for “the distributed character of moral agency,” distributed, that is, among the subject and the various technological artifacts that mediate the subject’s perception of and action in the world.

At the very least, asking the sorts of questions raised within a virtue ethics framework fills out our picture of technology’s ethical consequences.

In Susanna Clarke’s delightful novel, Jonathan Strange & Mr. Norrell, a fantastical story cast in realist guise about two magicians recovering the lost tradition of English magic in the context of the Napoleonic Wars, one of the main characters, Strange, has the following exchange with the Duke of Wellington:

“Can a magician kill a man by magic?” Lord Wellington asked Strange. Strange frowned. He seemed to dislike the question. “I suppose a magician might,” he admitted, “but a gentleman never would.”

Strange’s response is instructive and the context of magic more apropos than might be apparent. Technology, like magic, empowers the will, and it raises the sort of question that Wellington asks: can such and such be done?

Not only does Strange’s response make the ethical dimension paramount, but he also approaches the ethical question as a virtue ethicist. He does not run consequentialist calculations, nor does he query the deliberations of a supposedly universal reason. Rather, he frames the empowerment availed to him by magic with a consideration of the kind of person he aspires to be, and he subjects his will to this larger project of moral formation. In so doing, he gives us a good model for how we might think about the empowerments availed to us by technology.

As Verbeek, reflecting on the aptness of the word subject, puts it, “The moral subject is not an autonomous subject; rather, it is the outcome of active subjection.” It is, paradoxically, this kind of subjection that can ground the relative freedom with which we might relate to technology.


Most of this material originally appeared on the blog of the Center for the Study of Ethics and Technology. I repost it here in light of recent interest in the ethical consequences of technology. Verbeek’s work does not, it seems to me, get the attention it deserves.

Evaluating the Promise of Technological Outsourcing

“It is crucial for a resilient democracy that we better understand how these powerful, ubiquitous websites are changing the way we think, interact and behave.” The websites in question are chiefly Google and Facebook. The admonition to better understand their impact on our thinking and civic deliberations comes from an article in The Guardian by Evan Selinger and Brett Frischmann, “Why it’s dangerous to outsource our critical thinking to computers.”

Selinger and Frischmann are the authors of one of the forthcoming books I am most eagerly anticipating, Being Human in the 21st Century, to be published by Cambridge University Press. I’ve frequently cited Selinger’s outsourcing critique of digital technology (e.g., here and here), which the authors will be expanding and deepening in this study. In short, Selinger has explored how a variety of apps and devices outsource labor that is essential or fundamental to our humanity. It’s an approach that immediately resonated with me, primed as I had been for it by Albert Borgmann’s work. (You can read about Borgmann in the latter link above and here.)

In this case, the crux of Selinger and Frischmann’s critique can be found in these two key paragraphs:

Facebook is now trying to solve a problem it helped create. Yet instead of using its vast resources to promote media literacy, or encouraging users to think critically and identify potential problems with what they read and share, Facebook is relying on developing algorithmic solutions that can rate the trustworthiness of content.

This approach could have detrimental, long-term social consequences. The scale and power with which Facebook operates means the site would effectively be training users to outsource their judgment to a computerised alternative. And it gives even less opportunity to encourage the kind of 21st-century digital skills – such as reflective judgment about how technology is shaping our beliefs and relationships – that we now see to be perilously lacking.

Their concern, then, is that we may be encouraged to outsource an essential skill to a device or application that promises to do the work for us. In this case, the skill we are tempted to outsource is a critical component of a healthy citizenry. As they put it, “Democracies don’t simply depend on well-informed citizens – they require citizens to be capable of exerting thoughtful, independent judgment.”

As I’m sure Selinger and Frischmann would agree, this outsourcing dynamic is one of the dominant features of the emerging techno-social landscape, and we should work hard to understand its consequences.

As some of you may remember, I’m fond of questions. They are excellent tools for thinking, including thinking about the ethical implications of technology. “Questioning is the piety of thought,” Heidegger once claimed in a famous essay about technology. With that in mind I’ll work my way to a few questions we can ask of outsourcing technologies.

My approach will take its point of departure from Marshall McLuhan’s Laws of Media, sometimes called the Four Effects or McLuhan’s tetrad. These four effects were offered by McLuhan as a complement to Aristotle’s Four Causes, and they were presented as a paradigm by which we might evaluate the consequences of both intellectual and material things, ideas and tools.

The four effects were Retrieval, Reversal, Obsolescence, and Enhancement. Here is a series of questions McLuhan and his son, Eric McLuhan, offered to unpack these four effects:

A. “What recurrence or RETRIEVAL of earlier actions and services is brought into play simultaneously by the new form? What older, previously obsolesced ground is brought back and inheres in the new form?”

B. “When pushed to the limits of its potential, the new form will tend to reverse what had been its original characteristics. What is the REVERSAL potential of the new form?”

C. “If some aspect of a situation is enlarged or enhanced, simultaneously the old condition or un-enhanced situation is displaced thereby. What is pushed aside or OBSOLESCED by the new ‘organ’?”

D. “What does the artefact ENHANCE or intensify or make possible or accelerate? This can be asked concerning a wastebasket, a painting, a steamroller, or a zipper, as well as about a proposition in Euclid or a law of physics. It can be asked about any word or phrase in any language.”
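Purely as an aid to working with these prompts, here is a minimal sketch, in my own framing rather than McLuhan’s, that treats the tetrad as a worksheet: four questions to be answered in prose for whatever artifact is under review. The artifact named and the sample answer below are placeholders, not analysis.

```python
# A sketch treating McLuhan's tetrad as a worksheet: four prompts answered in
# prose for a given artifact. The prompts paraphrase the questions quoted above.
from dataclasses import dataclass, field

TETRAD_PROMPTS = {
    "enhancement": "What does the artefact enhance, intensify, make possible, or accelerate?",
    "obsolescence": "What older condition is pushed aside or obsolesced by the new form?",
    "retrieval": "What previously obsolesced ground is brought back by the new form?",
    "reversal": "Pushed to the limits of its potential, what does the new form reverse into?",
}

@dataclass
class TetradWorksheet:
    artifact: str
    answers: dict = field(default_factory=dict)

    def unanswered(self):
        """Yield each prompt that has not yet been answered."""
        for effect, prompt in TETRAD_PROMPTS.items():
            if effect not in self.answers:
                yield effect, prompt

# Hypothetical usage; the answer given is a placeholder, not an analysis.
ws = TetradWorksheet("GPS navigation")
ws.answers["obsolescence"] = "Unaided wayfinding and the attention it required."
for effect, prompt in ws.unanswered():
    print(f"{effect}: {prompt}")
```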

These are all useful questions, but for our purposes the focus will be on the third effect, Obsolescence. It’s in this class of effects that I think we can locate what Selinger calls digital outsourcing. I began by introducing all four, however, so that we wouldn’t be tempted to think that displacement or outsourcing is the only dynamic to which we should give our attention.

When McLuhan invites us to ask what a new technology renders obsolete, we may immediately imagine older technologies that are set aside in favor of the new. Following Borgmann, however, we can also frame the question as a matter of human labor or involvement. In other words, it is not only about older tools that we set aside but also about human faculties, skills, and subjective engagement with the world–these, too, can be displaced or outsourced by new tools. The point, of course, is not to avoid every form of technological displacement; this would be impossible and undesirable. Rather, what we need is a better way of thinking about and evaluating these displacements so that we might, when possible, make wise choices about our use of technology.

So we can begin to elaborate McLuhan’s third effect with this question:

1. What kind of labor does the tool/device/app displace? 

This question yields at least five possible responses:

a. Physical labor, the work of the body
b. Cognitive labor, the work of the mind
c. Emotional labor, the work of the heart
d. Ethical labor, the work of the conscience
e. Volitional labor, the work of the will

The schema implied by these five categories is, of course, like all such schemas, too neat. Take it as a heuristic device.

Other questions follow that help clarify the stakes. After all, what we’re after is not only a taxonomy but also a framework for evaluation.

2. What is the specific end or goal at which the displaced labor is aimed?

In other words, what am I trying to accomplish by using the technology in question? But the explicit objective I set out to achieve may not be the only effect worth considering; there are implicit effects as well. Some of these implicit effects may be subjective and others may be social; in either case they are not always evident and may, in fact, be difficult to perceive. For example, in using GPS, navigating from Point A to Point B is the explicit objective. However, the use of GPS may also impact my subjective experience of place, and this may carry political implications. So we should also consider a corollary question:

2a. Are there implicit effects associated with the displaced labor?

Consider the work of learning: If the work of learning is ultimately subordinate to becoming a certain kind of person, then it matters very much how we go about learning. This is because the manner in which we go about acquiring knowledge constitutes a kind of practice that, over the long haul, shapes our character and disposition in non-trivial ways. Acquiring knowledge through apprenticeship, for example, shapes people in a certain way, acquiring knowledge through extensive print reading in another, and through web-based learning in still another. The practice which constitutes our learning, if we are to learn by it, will instill certain habits, virtues, and, potentially, vices — it will shape the kind of person we are becoming.

3. Is the labor we are displacing essential or accidental to the achievement of that goal?

As I’ve written before, when we think of ethical and emotional labor, it’s hard to separate the labor itself from the good that is sought or the end that is pursued. For example, someone who pays another person to perform acts of charity on their behalf has undermined part of what might make such acts virtuous. An objective outcome may have been achieved, but at the expense of the subjective experience that would constitute the action as ethically virtuous.

A related question arises when we remember the implicit effects we discussed above:

3a. Is the labor essential or accidental to the implicit effects associated with the displaced labor?

4. What skills are sustained by the labor being displaced? 

4a. Are these skills valuable for their own sake and/or transferable to other domains?

These two questions seem more straightforward, so I will say less about them. The key point is essentially the one made by Selinger and Frischmann in the article with which we began: the kind of critical thinking that democracies require of their citizens should be actively cultivated. Outsourcing that work to an algorithm may, in fact, weaken the very skill it seeks to support.
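For readers who find it helpful to see the framework at a glance, here is a minimal sketch gathering the questions above into a single checklist. The categories and the wording of the questions are transcribed from the discussion; the worksheet function and the technology named in the usage example are hypothetical conveniences, not a method.

```python
# A sketch collecting the outsourcing questions above into one checklist.
# The wording is transcribed from the discussion; nothing here is prescriptive.

LABOR_KINDS = ["physical", "cognitive", "emotional", "ethical", "volitional"]

OUTSOURCING_QUESTIONS = [
    "1.  What kind of labor does the tool/device/app displace?",
    "2.  What is the specific end or goal at which the displaced labor is aimed?",
    "2a. Are there implicit effects associated with the displaced labor?",
    "3.  Is the labor essential or accidental to the achievement of that goal?",
    "3a. Is the labor essential or accidental to the implicit effects?",
    "4.  What skills are sustained by the labor being displaced?",
    "4a. Are these skills valuable for their own sake and/or transferable to other domains?",
]

def worksheet(technology: str) -> str:
    """Return a blank evaluation checklist for a given technology."""
    lines = [f"Evaluating: {technology}",
             "Kinds of displaced labor to consider: " + ", ".join(LABOR_KINDS)]
    lines.extend(OUTSOURCING_QUESTIONS)
    return "\n".join(lines)

# Hypothetical usage:
print(worksheet("GPS navigation"))
```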

These questions should help us think more clearly about the promise of technological outsourcing. They may also help us to think more clearly about what we have been doing all along. After all, new technologies often cast old experiences in new light. Even when we are wary or critical of the technologies in question, we may still find that their presence illuminates aspects of our experience by inviting us to think about what we had previously taken for granted.