Technology and the Inadequacy of Values Talk

Albert Borgmann has some useful, possibly urgent things to say to us in Technology and the Character of Contemporary Life: A Philosophical Inquiry. The book is, I believe, Borgmann’s most important work in the philosophy of technology. I recently discovered that Evgeny Morozov listed this work as one of the five books he commended a few years back as the best on the philosophy of technology (it’s a solid list all around). Of Borgmann’s book, Morozov observed, “it was all hardcore philosophical theory, how to think and evaluate practices, and what to do about technology and what should be done.” Indeed. Borgmann’s work, though now 35 years old, seems to me as relevant as ever. I’ve drawn on his writing a number of times, and you can dip into the archive to read some of those posts if you’re so inclined.

For present purposes, I wanted simply to share a handful of excerpts drawn from the chapters dealing with technology and politics.

Writing about technology and social order, Borgmann observes that “it is widely admitted that there is a problem of orientation in the technologically advanced countries.” Citing a cheery paragraph from Buckminster Fuller, he acknowledges that not everyone is debilitated by the disorientation occasioned by modern technology, but he suspects that Fuller is an outlier. For those who do find modern technology disorienting, he notes that more often than not it is believed that “we can find our bearings in relation to technology by raising the questions of values”—ethics talk, we might say today.

But Borgmann is not impressed: “Such a procedure may only strengthen and conceal the reign of what we seek to question.” It would do so chiefly by reinforcing the means-ends distinction that Borgmann finds rather pernicious. “The relative stability of ends and the radical variability of means that again comes to fruition in the device is likewise congenial to values talk …”

Device is a technical term in Borgmann’s work and a key component of what he termed the device paradigm (or pattern), which Borgmann argues characterizes modern technology. It’s not an easy concept to summarize. It describes the tendency of machines to become simultaneously more commodious and more opaque, or, to put it another way, easier to use and harder to understand. Borgmann contrasted devices with focal things, and the difference was chiefly a matter of the form of engagement they generated. Basically, Borgmann believed that devices encourage what we might think of as shallow, superficial, ultimately unsatisfying engagement. I’ve suggested that we could get at this distinction by noting how we tend to call those who take up a device users. Such a term does not quite fit for those who take up with the sort of tool, artifact, or technology which Borgmann labels a focal thing. It may be better to think of them as practitioners.

One aspect of the device paradigm is the radical interchangeability of means to which he alludes in the lines I cited above. The point of the device, in fact, is to offer us the same end we might have achieved through a focal thing but without the hassle, so to speak. Elsewhere, Borgmann spoke of what devices make technologically available as being “instantaneous, ubiquitous, safe, and easy.” In so doing, however, they have radically altered the nature of the end they procure. One cannot get at the meaning or significance of technology by presuming at the outset that means are basically indifferent and inconsequential so long as we arrive at the desired end or goal.

We might begin to see, then, why “values talk” simply unfolds within the device paradigm rather than challenging it. “No matter how the question of value is raised and settled,” Borgmann writes, “the pattern of technology itself is never in question. Technology comes into play as the indispensable and unequaled procurement of the means that allow us to realize our preferred values.”

Borgmann acknowledges both that it is politically useful to resort to values talk and that, for the same reasons, it is difficult to commend focal things. Values talk typically centers on “hard” or “measurable” values: employment, resources, or productivity, for example. These are instrumental values, Borgmann notes, but “one can appeal to them as guides or ends in political controversies because the ends proper that they serve are understood and granted by almost everyone. Those final values are commodities.”

Commodities, he adds, “are sharply defined and easily measured. Focal things, on the other hand, engage us in so many and subtle ways that no quantification can capture them.” This is not a matter of “mysterious unquantifiable properties”; rather, “their significance is composed of so many, if not all, of their physically ascertainable properties that an explicit quantitative account must always impoverish them greatly.”

Penultimate thought from Borgmann: “When values talk is about [focal] things, it falters, and the object of discourse slips from our grasp. Discourse that is appropriate to things must in its crucial occurrences abandon the means-ends distinction. It must be open to and guided by the fullness of the focal thing in the world, and it can communicate the thing only through testimony and appeal.”

Final word: “In spite of its shortcomings one should, as a matter of prudence and pedagogy, encourage discussions that raise the value question. Without this familiar if inadequate approach, a fundamental analysis of technology remains forbidding. Moreover, values will remain indispensable as ways of summarizing, recollecting, and preparing for our experience with things.”

More from Borgmann forthcoming.

Beyond the Trolley Car: The Moral Pedagogy of Ethical Tools

It is almost impossible to read about the ethics of autonomous vehicles without encountering some version of the trolley car problem. You’re familiar with the general outline of the problem, I’m sure. An out-of-control trolley car is barreling toward five unsuspecting people on a track. You are able to pull a lever and redirect the trolley toward another track, but there is one person on this track who will be hit by the trolley as a result. What do you do? Nothing and let five people die, or pull the lever and save five at the expense of one person’s life?

The thought experiment has its origins in a paper by the philosopher Philippa Foot on abortion and the doctrine of double effect. I’m not sure when it was first invoked in the context of autonomous vehicles, but I first came across trolley car-style hypothesizing about the ethics of self-driving cars in a 2012 essay by Gary Marcus, which I learned about in a post on Nick Carr’s blog. The comments on that blog post, by the way, are worth reading. In response, I wrote my own piece reflecting on what I took to be more subtle issues arising from automated ethical systems.

More recently, following the death of a pedestrian who was struck by one of Uber’s self-driving vehicles in Arizona, Evan Selinger and Brett Frischmann co-authored a piece at Motherboard using the trolley car problem as a way of thinking about the moral and legal issues at stake. It’s worth your consideration. As Selinger and Frischmann point out, the trolley car problem tends to highlight drastic and deadly outcomes, but there are a host of non-lethal actions of moral consequence that an autonomous vehicle may be programmed to take. It’s important that serious thought be given to such matters now before technological momentum sets in.

“So, don’t be fooled when engineers hide behind technical efficiency and proclaim to be free from moral decisions,” the authors conclude. “‘I’m just an engineer’ isn’t an acceptable response to ethical questions. When engineered systems allocate life, death and everything in between, the stakes are inevitably moral.”

In a piece at the Atlantic, however, Ian Bogost recommends that we ditch the trolley car problem as a way of thinking about the ethics of autonomous vehicles. It is, in his view, too blunt an instrument for serious thinking about the ethical ramifications of autonomous vehicles. Bogost believes “that much greater moral sophistication is required to address and respond to autonomous vehicles.” The trolley car problem blinds us to the contextual complexity of morally consequential incidents that will inevitably arise as more and more autonomous vehicles populate our roads.

I wouldn’t go so far as to say that trolley car-style thought experiments are useless, but, with Bogost, I am inclined to believe that they threaten to eclipse the full range of possible ethical and moral considerations in play when we talk about autonomous vehicles.

For starters, the trolley car problem, as Bogost suggests, loads the deck in favor of a utilitarian mode of ethical reflection. I’d go further and say that it stacks the deck in favor of action-oriented approaches to moral reflection, whether rule-based or consequentialist. Of course, it is not altogether surprising that when thinking about moral decision making that must be programmed or engineered, one is tempted by ethical systems that may appear to reduce ethics to a set of rules to be followed or calculations to be executed.

In trolley car scenarios involving autonomous vehicles, it seems to me that two things are true: a choice must be made and there is no right choice.

There is no right answer to the trolley car problem. It is a tragedy either way. The trolley car problem is best thought of as a question to think with, not a question to answer definitively. The point is not to find the one morally correct way to act but to come to feel the burden of moral responsibility.

Moreover, when faced with trolley car-like situations in real life, rare as they may be, human beings do not ordinarily have the luxury to reason their way to a morally acceptable answer. They react. It may be impossible to conclusively articulate the sources of that reaction. If there is an ethical theory that can account for it, it would be virtue ethics not varieties of deontology or consequentialism.

If there is no right answer, then, what are we left with?

Responsibility. Living with the consequences of our actions. Justice. The burdens of guilt. Forgiveness. Redemption.

Such things are obviously beyond the pale of programmable ethics. The machine, with which our moral lives are entwined, is oblivious to such subjective states. It cannot be meaningfully held to account. But this is precisely the point. The really important consideration is not what the machine will do, but what the human being will or will not experience and what human capacities will be sustained or eroded.

In short, the trolley car problem leads us astray in at least two related ways. First, it blinds us to the true nature of the equivalent human situation: we react, we do not reason. Second, building on this initial misconstrual, we then fail to see that what we are really outsourcing to the autonomous vehicle is not moral reasoning but moral responsibility.

Katherine Hayles has noted that distributed cognition (distributed, that is, among human and non-humans) implies distributed agency. I would add that distributed agency implies distributed moral responsibility. But it seems to me that moral responsibility is the sort of thing that does not survive such distribution. (At the very least, it requires new categories of moral, legal, and political thought.) And this, as I see it, is the real moral significance of autonomous vehicles: they are but one instance of a larger trend toward a material infrastructure that undermines the plausibility of moral responsibility.

Distributed moral responsibility is just another way of saying deferred or evaded moral responsibility.

The trajectory is longstanding. Jacques Ellul, for one, commented at length on the challenge modern society poses to the possibility of responsibility; some excerpts from his work on this theme appear below.

Let’s consider this from another angle. The trolley car problem focuses our ethical reflection on the accident. As I’ve suggested before, what if we were to ask not “What could go wrong?” but “What if it all goes right?” My point in inverting this query is to remind us that technologies that function exactly as they should and fade seamlessly into the background of our lived experience are at least as morally consequential as those that cause dramatic accidents.

Well-functioning technologies we come to trust become part of the material infrastructure of our experience, which plays an important role in our moral formation. This material infrastructure, the stuff of life with which we as embodied creatures constantly interact, both consciously and unconsciously, is partially determinative of our habitus, the set of habits, inclinations, judgments, and dispositions we bring to bear on the world. This includes, for example, our capacity to perceive the moral valence of our experiences or our capacity to subjectively experience the burden of moral responsibility. In other words, it is not so much a matter of specific decisions, although these are important, but of underlying capacities, orientations, and dispositions.

I suppose the question I’m driving at is this: What is the implicit moral pedagogy of tools to which we outsource acts of moral judgment?

While it might be useful to consider the trolley car, it’s important as well that we leave it behind for the sake of exploring the fullest possible range of challenges posed by emerging technologies with which our moral lives are increasingly entangled.


 

Jacques Ellul on Technique As An Obstacle To Ethics

The following excerpts are taken from “The Search for Ethics In a Technicist Society” (1983) by Jacques Ellul. In this essay, Ellul considers the challenges posed to traditional morality in a society dominated by technique.

James Fowler on what Ellul meant by technique: “Ellul’s issue was not with technological machines but with a society necessarily caught up in efficient methodological techniques. Technology, then, is but an expression and by-product of the underlying reliance on technique, on the proceduralization whereby everything is organized and managed to function most efficiently, and directed toward the most expedient end of the highest productivity.”

In Ellul’s view, “The ethical problem, that is human behavior, can only be considered in relation to this system, not in relation to some particular technical object or other.” “If technique is a milieu and a system,” he adds, “the ethical problem can only be posed in terms of this global operation. Behavior and particular choices no longer have much significance. What is required is thus a global change in our habits or values, the rediscovery of either an existential ethics or a new ontology.”

Emphasis in boldface below is mine.

On the call to subordinate means to ends:

It is quite right to say that technique is only made of means, it is an ensemble of means (We shall return to this later), but only with the qualification that these means obey their own laws and are no longer subordinated to ends. Besides, one must distinguish ideal ends (values, for example), goals (national, for example), and the objectives (immediate objectives: a researcher who tries to solve some particular problem). Science and technique develop according to objectives, rarely and accidentally in relation to more general goals, and never for ethical or spiritual ideals. There is no relation between the proclamation of values (justice, freedom, etc.) and the orientation of technical development. Those who are concerned with values (theologians, philosophers, etc.) have no influence on the specialists of technique and cannot require, for example, that some aspect of current research or other means should be abandoned for the sake of some value.

On the difficulty of determining who exactly must act to subordinate technique to moral ends:

To adopt one of these first two ethical orientations is to argue that it is human beings who must create a good use for technique or impose ends on it, but always neglecting to specify which human beings. Is the “who” not important? Is technique able to be mastered by just any passer-by, every worker, some ordinary person? Is this person the politician? The public at large? The intellectual and technician? Some collectivity? Humanity as a whole? For the most part politicians cannot grasp technique, and each specialist can understand an infinitesimal portion of the technical universe, just as each citizen only makes use of an infinitesimal piece of the technical apparatus. How could such a person possibly modify the whole? As for the collectivity or some class (if they exist as specific entities), they are wholly ignorant of the problem of technique as a system. Finally, what might be called “Councils of the Wise” […] have often been set up only to demonstrate their own impotence, just as have international commissions and international treaties [….] Who is supposed to impose ends or get hold of the technical apparatus? No one knows.

On the compromised position from which we try to think ethically about technique:

At the same time, one should not forget the fact that human beings are themselves already modified by the technical phenomenon. When infants are born, the environment in which they find themselves is technique, which is a “given.” Their whole education is oriented toward adaptation to the conditions of technique (learning how to cross streets at traffic lights) and their instruction is destined to prepare them for entrance into some technical employment. Human beings are psychologically modified by consumption, by technical work, by news, by television, by leisure activities (currently, the proliferation of computer games), etc., all of which are techniques. In other words, it must not be forgotten that it is this very humanity which has been pre-adapted to and modified by technique that is supposed to master and reorient technique. It is obvious that this will not be able to be done with any independence.

On the pressure to adapt to technique:

Finally, one other ethical orientation in regard to technique is that of adaptation. And this can be added to the entire ideology of facts: technique is the ultimate Fact. Humanity must adapt to facts. What prevents technique from operating better is the whole stock of ideologies, feelings, principles, beliefs, etc. that people continue to carry around and which are derived from traditional situations. It is necessary (and this is the ethical choice!) to liquidate all such holdovers, and to lead humanity to a perfect operational adaptation that will bring about the greatest possible benefit from the technique. Adaptation becomes a moral criterion.

 

The Rhetorical “We” and the Ethics of Technology

“Questioning AI ethics does not make you a gloomy Luddite,” or so the title of a recent article in a London business newspaper assures us. The most important thing to be learned here is that someone feels this needs to be said. Beyond that, there is also something instructive about the concluding paragraphs. If we read them against the grain, these paragraphs teach us something about how difficult it is to bring ethics to bear on technology.

“I’m simply calling for us to use all the tools at our disposal to build a better digital future,” the author tells us.

In practice, this means never forgetting what makes us human. It means raising awareness and entering into dialogue about the issue of ethics in AI. It means using our imaginations to articulate visions for a future that’s appealing to us.

If we can decide on the type of society we’d like to create, and the type of existence we’d like to have, we can begin to forge a path there.

All in all, it’s essential that we become knowledgeable, active, and influential on AI in every small way we can. This starts with getting to grips with the subject matter and past extreme and sensationalised points of view. The decisions we collectively make today will influence many generations to come.

Here are the challenges:

We have no idea what makes us human. You may, but we don’t.

We have nowhere to conduct meaningful dialogue; we don’t even know how to have meaningful dialogue.

Our imaginations were long ago surrendered to technique.

We can’t decide on the type of society we’d like to create or the type of existence we’d like to have, chiefly because this “we” is rhetorical. It is abstract and amorphous.

There is no meaningful antecedent to the pronouns we and us used throughout these closing paragraphs. Ethics is a communal business. This is no less true with regard to technology; perhaps it is all the more true. There is, however, no we there.

As individuals, we are often powerless against larger forces dictating how we are to relate to technology. The state is in many respects beholden to the technological: ideologically, politically, economically. Regrettably, we have very few communities located between the individual and the state constituting a we that can meaningfully deliberate and effectively direct the use of technology.

Technologies like AI emerge and evolve in social spaces that are resistant to substantial ethical critique. They also operate at a scale that undermines the possibility of ethical judgment and responsibility. Moreover, our society is ordered in such a way that there is very little to be done about it, chiefly because of the absence of structures that would sustain and empower ethical reflection and practice, the absence, in other words, of a we that is not merely rhetorical.


The Ethics of Technological Mediation

Where do we look when we’re looking for the ethical implications of technology? A few would say that we look at the technological artifact itself. Many more would counter that the only place to look for matters of ethical concern is to the human subject. The philosopher of technology Peter-Paul Verbeek argues that there is another, perhaps more important, place for us to look: the point of mediation, the point where the artifact and human subjectivity come together to create effects that cannot be located in either the artifact or the subject taken alone.

Early on in Moralizing Technology: Understanding and Designing the Morality of Things (2011), Verbeek briefly outlines the emergence of the field known as “ethics of technology.” “In its early days,” Verbeek notes, “ethical approaches to technology took the form of critique. Rather than addressing specific ethical problems related to actual technological developments, ethical reflection on technology focused on criticizing the phenomenon of ‘Technology’ itself.” Here we might think of Heidegger, critical theory, or Jacques Ellul. In time, “ethics of technology” emerged “seeking increased understanding of and contact with actual technological practices and developments,” and soon a host of sub-fields appeared: biomedical ethics, ethics of information technology, ethics of nanotechnology, engineering ethics, ethics of design, etc.

This approach remains, according to Verbeek, “merely instrumentalist.” “The central focus of ethics,” on this view, “is to make sure that technology does not have detrimental effects in the human realm and that human beings control the technological realm in morally justifiable ways.” It’s not that these considerations are unimportant (quite the contrary), but Verbeek believes that this approach “does not yet go far enough.”

Verbeek explains the problem:

“What remains out of sight in this externalist approach is the fundamental intertwining of these two domains [the human and the technological]. The two simply cannot be separated. Humans are technological beings, just as technologies are social entities. Technologies, after all, play a constitutive role in our daily lives. They help to shape our actions and experiences, they inform our moral decisions, and they affect the quality of our lives. When technologies are used, they inevitably help to shape the context in which they function. They help specific relations between human beings and reality to come about and coshape new practices and ways of living.”

Observing that technologies mediate both perception (how we register the world) and action (how we act into the world), Verbeek elaborates a theory of technological mediation, built upon a postphenomenological approach to technology pioneered by Don Ihde. Rather than focus exclusively on either the artifact “out there,” the technological object, or the will “in here,” the human subject, Verbeek invites us to focus ethical attention on the constitution of both the perceived object and the subject’s intention in the act of technological mediation. In other words, how technology shapes perception and action is also of ethical consequence.

As Verbeek rightly insists, “Artifacts are morally charged; they mediate moral decisions, shape moral subjects, and play an important role in moral agency.”

Verbeek turns to the work of Ihde for some analytic tools and categories. Among the many ways humans might relate to technology, Ihde notes two relations of “mediation.” The first of these he calls “embodiment relations” in which the tools are incorporated by the user and the world is experienced through the tool (think of the blind man’s stick). The second he calls a “hermeneutic relation.” Verbeek explains:

“In this relation, technologies provide access to reality not because they are ‘incorporated,’ but because they provide a representation of reality, which requires interpretation [….] Ihde shows that technologies, when mediating our sensory relationship with reality, transform what we perceive. According to Ihde, the transformation of perception always has the structure of amplification and reduction.”

Verbeek gives us the example of looking at a tree through an infrared camera: most of what we see when we look at a tree unaided is “reduced,” but the heat signature of the tree is “amplified,” and the tree’s health may be better assessed. Ihde calls this capacity of a tool to transform our perception “technological intentionality.” In other words, the technology directs and guides our perception and our attention. It says to us, “Look at this here, not that over there” or “Look at this thing in this way.” This function is not morally irrelevant, especially when you consider that this effect is not contained within the technology itself but spills out into our experience of the world.

Verbeek also believes that our reflection on the moral consequences of technology would do well to take virtue ethics seriously. With regard to the ethics of technology, we typically ask, “What should I or should I not do with this technology?” and thus focus our attention on our actions. In this, we follow the lead of the two dominant modern ethical traditions: the deontological tradition stemming from Immanuel Kant, on the one hand, and the consequentialist tradition, closely associated with Bentham and Mill, on the other. In the case of both traditions, a particular sort of moral subject or person is in view—an autonomous and rational individual who acts freely and in accord with the dictates of reason.

In the Kantian tradition, the individual, having decided upon the right course of action through the right use of their reason, is duty bound to act thusly, regardless of consequences. In the consequentialist tradition, the individual rationally calculates which action will yield the greatest degree of happiness, variously understood, and acts accordingly.

If technology comes into play in such reasoning by such a person, it is strictly as an instrument of the individual will. The question, again, is simply, “What should I do or not do with it?” We ascertain the answer either by determining the dictates of subjective reasoning or by calculating the objective consequences of an action; the latter approach is perhaps more appealing for its resonance with the ethos of technique.

We might conclude, then, that the popular instrumentalist view of technology—a view which takes technology to be a mere tool, a morally neutral instrument of a sovereign will—is the natural posture of the sort of individual or moral subject that modernity yields. It is unlikely to occur to such an individual that technology is not only a tool with which moral and immoral actions are performed but also an instrument of moral formation, informing and shaping the moral subject.

It is not that the instrumentalist posture is of no value, of course. On the contrary, it raises important questions that ought to be considered and investigated. The problem is that this approach is incomplete and too easily co-opted by the very realities that it seeks to judge. It is, on its own, ultimately inadequate to the task because it takes as its starting point an inadequate and incomplete understanding of the human person.

There is, however, another, older approach to ethics that may help us fill out the picture and take into account other important aspects of our relation to technology: the tradition of virtue ethics in both its classical and medieval manifestations.

Verbeek comments on some of the advantages of virtue ethics. To begin with, virtue ethics does not ask, “What am I to do?” Rather, it asks, in Verbeek’s formulation, “What is the good life?” We might also add a related question that virtue ethics raises: “What sort of person do I want to be?” This is a question that Verbeek also considers, taking his cues from the later work of Michel Foucault.

The question of the good life, Verbeek adds,

“does not depart from a separation of subject and object but from the interwoven character of both. A good life, after all, is shaped not only on the basis of human decisions but also on the basis of the world in which it plays itself out (de Vries 1999). The way we live is determined not only by moral decision making but also by manifold practices that connect us to the material world in which we live. This makes ethics not a matter of isolated subjects but, rather, of connections between humans and the world in which they live.”

Virtue ethics, with its concern for habits, practices, and communities of moral formation, illuminates the various ways technologies impinge upon our moral lives. For example, a technologically mediated action that, taken on its own and in isolation, may be judged morally right or indifferent may appear in a different light when considered as one instance of a habit-forming practice that shapes our disposition and character.

Moreover, virtue ethics, which predates the advent of modernity, does not necessarily assume the sovereign individual as its point of departure. For this reason, it is more amenable to the ethics of technological mediation elaborated by Verbeek. Verbeek argues for “the distributed character of moral agency,” distributed, that is, among the subject and the various technological artifacts that mediate the subject’s perception of and action in the world.

At the very least, asking the sorts of questions raised within a virtue ethics framework fills out our picture of technology’s ethical consequences.

In Susanna Clarke’s delightful novel, Jonathan Strange & Mr. Norrell, a fantastical story cast in realist guise about two magicians recovering the lost tradition of English magic in the context of the Napoleonic Wars, one of the main characters, Strange, has the following exchange with the Duke of Wellington:

“Can a magician kill a man by magic?” Lord Wellington asked Strange. Strange frowned. He seemed to dislike the question. “I suppose a magician might,” he admitted, “but a gentleman never would.”

Strange’s response is instructive and the context of magic more apropos than might be apparent. Technology, like magic, empowers the will, and it raises the sort of question that Wellington asks: can such and such be done?

Not only does Strange’s response make the ethical dimension paramount, he approaches the ethical question as a virtue ethicist. He does not run consequentialist calculations nor does he query the deliberations of a supposedly universal reason. Rather, he frames the empowerment availed to him by magic with a consideration of the kind of person he aspires to be, and he subjects his will to this larger project of moral formation. In so doing, he gives us a good model for how we might think about the empowerments availed to us by technology.

As Verbeek, reflecting on the aptness of the word subject, puts it, “The moral subject is not an autonomous subject; rather, it is the outcome of active subjection.” It is, paradoxically, this kind of subjection that can ground the relative freedom with which we might relate to technology.


Most of this material originally appeared on the blog of the Center for the Study of Ethics and Technology. I repost it here in light of recent interest in the ethical consequences of technology. Verbeek’s work does not, it seems to me, get the attention it deserves.