A Modest Defense of Political Talk at the Thanksgiving Table

Earlier this week, I listened to a radio segment that took as its point of departure a recent poll claiming that Americans are more anxious than usual about conversations around the Thanksgiving dinner table. Yesterday, a similar news story was trending on Facebook. Naturally, this anxiety is closely linked to the possibility of heated political debates emerging as relatives from across the political spectrum are forced by convention to gather near one another and possibly even talk to one another. While such anxiety is nothing new, its intensity surely owes something to the increasingly polarized and toxic political climate.

A few thoughts followed for me. First, I thought of the Friendsgiving trend, popular among younger adults. If Thanksgiving dinners revolve around relationships we did not choose, Friendsgiving meals center on relationships of affinity. We can’t choose our family, but we can and do choose our friends.

I then thought about how this seems to reflect a larger trend that extends beyond Thanksgiving: we are increasingly able to arrange our lives around relationships of affinity and escape relationships of necessity or obligation. I am, of course, thinking of relationships outside of the realm of work, wherein many know only relationships of necessity or obligation.

The Internet has played no small role in this. Whatever your niche, you’re likely to find an Internet community with which to connect. However obscure your interests, with a few clicks you’ll find others who share them. This is a fine development in many respects, and I suspect that a great deal of loneliness has been assuaged as a result. But there is a darker side to this as well. Not too long ago, for example, Facebook ran a series of commercials explicitly selling its app as a means of escaping, mentally and emotionally if not physically, from dinners with family members who drone on about matters we care little to nothing about.

Yet, we do not share our world, our country, our cities, or most of our public institutions only with those for whom we have some affinity. How and where, then, do we learn to relate with those we would not choose as our friends but with whom we must nonetheless share the world?

Perhaps there is no better place than a table. “To live together in the world,” Hannah Arendt observed in The Human Condition, “means essentially that a world of things is between those that have it in common, as a table is located between those who sit around it.” The table, she adds, “relates and separates men at the same time.”

The image is instructive. As we gather around a table, we constitute something like a micro-community. We are brought together but retain our separate identities. Perhaps today, around the Thanksgiving Day table, our micro-community will more perfectly reflect the larger, more fractious and contentious community we inhabit as citizens.

Under such circumstances, we ordinarily remember the counsel to avoid talk of religion and politics. Ordinarily, I would think this sound advice. There is something to be said for keeping the peace and for preserving spaces and times untouched by the turmoil of the political, for relating to another person without reducing them to their political opinions.

But today I am wondering if this is not one of the few opportunities we have left to us to practice the art of civil discourse. Social media has become, almost certainly for the worse, our default public sphere, and social media is an awful school of civil discourse: its structures actively undermine the possibility. The table, on the other hand, may be a more humane school, although it becomes so only by putting a great deal at risk.

Maybe the risk is too great. Maybe we do well to preserve the peace at the family table. I don’t know. It is the sort of thing we must each judge for ourselves, guided by wisdom and love. I do know, however, that there is another risk we are running: the risk of never learning how to relate well with those who do not share our own preferences and proclivities, especially on matters of grave and enduring importance.

Relating well with such individuals does not mean, in my view, that we will necessarily come to a point of agreement. It does mean that we never lose sight of our common humanity and that, as far as it is possible, we gain a deeper understanding of one another. It means as well that we learn the art of listening and the art of speaking so as to be heard, remembering that “it’s a curse to speak without some regard for the one I’m talking to.” It means finding grounds for hope rather than despair.

May each of your tables, however the conversation turns, be filled with joy and gratitude. For my part, I’m grateful that my affinities do not structure all of my relationships; I think I’d be poorer for it.

Mumford: “Men had become mechanical before they perfected complicated machines”

From the opening section of Lewis Mumford’s Technics and Civilization (1934):

While people often call our period the “Machine Age,” very few have any perspective on modern technics or any clear notion as to its origins. Popular historians usually date the great transformation in modern industry from Watt’s supposed invention of the steam engine; and in the conventional economics textbook the application of automatic machinery to spinning and weaving is often treated as an equally critical turning point. But the fact is that in Western Europe the machine had been developing steadily for at least seven centuries before the dramatic changes that accompanied the “industrial revolution” took place. Men had become mechanical before they perfected complicated machines to express their new bent and interest; and the will-to-order had appeared once more in the monastery and the army and the counting-house before it finally manifested itself in the factory. Behind all the great material inventions of the last century and a half was not merely a long internal development of technics: there was also a change of mind. Before the industrial processes could take hold on a great scale, a reorientation of wishes, habits, ideas, goals was necessary.

For reasons that I may outline in a later post, I plan to regularly post excerpts from older and often forgotten works of tech criticism. I’ll usually refrain from adding any commentary to these excerpts. My hope is that they will sharpen our thinking and yield useful insights. We’ll call these posts readings in the tech critical canon.

The Ethics of Technological Mediation

Where do we look when we’re looking for the ethical implications of technology? A few would say that we look at the technological artifact itself. Many more would counter that the only place to look for matters of ethical concern is to the human subject. The philosopher of technology Peter-Paul Verbeek argues that there is another, perhaps more important, place for us to look: the point of mediation, the point where the artifact and human subjectivity come together to create effects that cannot be located in either the artifact or the subject taken alone.

Early on in Moralizing Technology: Understanding and Designing the Morality of Things (2011), Verbeek briefly outlines the emergence of the field known as “ethics of technology.” “In its early days,” Verbeek notes, “ethical approaches to technology took the form of critique. Rather than addressing specific ethical problems related to actual technological developments, ethical reflection on technology focused on criticizing the phenomenon of ‘Technology’ itself.” Here we might think of Heidegger, critical theory, or Jacques Ellul. In time, “ethics of technology” emerged “seeking increased understanding of and contact with actual technological practices and developments,” and soon a host of sub-fields appeared: biomedical ethics, ethics of information technology, ethics of nanotechnology, engineering ethics, ethics of design, etc.

This approach remains, according to Verbeek, “merely instrumentalist.” “The central focus of ethics,” on this view, “is to make sure that technology does not have detrimental effects in the human realm and that human beings control the technological realm in morally justifiable ways.” It’s not that these considerations are unimportant (quite the contrary), but Verbeek believes that this approach “does not yet go far enough.”

Verbeek explains the problem:

“What remains out of sight in this externalist approach is the fundamental intertwining of these two domains [the human and the technological]. The two simply cannot be separated. Humans are technological beings, just as technologies are social entities. Technologies, after all, play a constitutive role in our daily lives. They help to shape our actions and experiences, they inform our moral decisions, and they affect the quality of our lives. When technologies are used, they inevitably help to shape the context in which they function. They help specific relations between human beings and reality to come about and coshape new practices and ways of living.”

Observing that technologies mediate both perception (how we register the world) and action (how we act into the world), Verbeek elaborates a theory of technological mediation, built upon a postphenomenological approach to technology pioneered by Don Ihde. Rather than focus exclusively on either the artifact “out there,” the technological object, or the will “in here,” the human subject, Verbeek invites us to focus ethical attention on the constitution of both the perceived object and the subject’s intention in the act of technological mediation. In other words, how technology shapes perception and action is also of ethical consequence.

As Verbeek rightly insists, “Artifacts are morally charged; they mediate moral decisions, shape moral subjects, and play an important role in moral agency.”

Verbeek turns to the work of Ihde for some analytic tools and categories. Among the many ways humans might relate to technology, Ihde notes two relations of “mediation.” The first of these he calls “embodiment relations,” in which the tool is incorporated by the user and the world is experienced through the tool (think of the blind man’s stick). The second he calls a “hermeneutic relation.” Verbeek explains:

“In this relation, technologies provide access to reality not because they are ‘incorporated,’ but because they provide a representation of reality, which requires interpretation [….] Ihde shows that technologies, when mediating our sensory relationship with reality, transform what we perceive. According to Ihde, the transformation of perception always has the structure of amplification and reduction.”

Verbeek gives us the example of looking at a tree through an infrared camera: most of what we see when we look at a tree unaided is “reduced,” but the heat signature of the tree is “amplified” and the tree’s health may be better assessed. Ihde calls this capacity of a tool to transform our perception “technological intentionality.” In other words, the technology directs and guides our perception and our attention. It says to us, “Look at this here, not that over there” or “Look at this thing in this way.” This function is not morally irrelevant, especially when you consider that this effect is not contained within the device itself but spills out into our experience of the world.

Verbeek also believes that our reflection on the moral consequences of technology would do well to take virtue ethics seriously. With regard to the ethics of technology, we typically ask, “What should I or should I not do with this technology?” and thus focus our attention on our actions. In this, we follow the lead of the two dominant modern ethical traditions: the deontological tradition stemming from Immanuel Kant, on the one hand, and the consequentialist tradition, closely associated with Bentham and Mill, on the other. In the case of both traditions, a particular sort of moral subject or person is in view—an autonomous and rational individual who acts freely and in accord with the dictates of reason.

In the Kantian tradition, the individual, having decided upon the right course of action through the right use of their reason, is duty bound to act thusly, regardless of consequences. In the consequentialist tradition, the individual rationally calculates which action will yield the greatest degree of happiness, variously understood, and acts accordingly.

If technology comes into play in such reasoning by such a person, it is strictly as an instrument of the individual will. The question, again, is simply, “What should I do or not do with it?” We ascertain the answer either by determining the dictates of subjective reasoning or by calculating the objective consequences of an action; the latter approach is perhaps more appealing for its resonance with the ethos of technique.

We might conclude, then, that the popular instrumentalist view of technology—a view which takes technology to be a mere tool, a morally neutral instrument of a sovereign will—is the natural posture of the sort of individual or moral subject that modernity yields. It is unlikely to occur to such an individual that technology is not only a tool with which moral and immoral actions are performed but also an instrument of moral formation, informing and shaping the moral subject.

It is not that the instrumentalist posture is of no value, of course. On the contrary, it raises important questions that ought to be considered and investigated. The problem is that this approach is incomplete and too easily co-opted by the very realities that it seeks to judge. It is, on its own, ultimately inadequate to the task because it takes as its starting point an inadequate and incomplete understanding of the human person.

There is, however, another older approach to ethics that may help us fill out the picture and take into account other important aspects of our relation to technology: the tradition of virtue ethics in both its classical and medieval manifestations.

Verbeek comments on some of the advantages of virtue ethics. To begin with, virtue ethics does not ask, “What am I to do?” Rather, it asks, in Verbeek’s formulation, “What is the good life?” We might also add a related question that virtue ethics raises: “What sort of person do I want to be?” This is a question that Verbeek also considers, taking his cues from the later work of Michel Foucault.

The question of the good life, Verbeek adds,

“does not depart from a separation of subject and object but from the interwoven character of both. A good life, after all, is shaped not only on the basis of human decisions but also on the basis of the world in which it plays itself out (de Vries 1999). The way we live is determined not only by moral decision making but also by manifold practices that connect us to the material world in which we live. This makes ethics not a matter of isolated subjects but, rather, of connections between humans and the world in which they live.”

Virtue ethics, with its concern for habits, practices, and communities of moral formation, illuminates the various ways technologies impinge upon our moral lives. For example, a technologically mediated action that, taken on its own and in isolation, may be judged morally right or indifferent may appear in a different light when considered as one instance of a habit-forming practice that shapes our disposition and character.

Moreover, virtue ethics, which predates the advent of modernity, does not necessarily assume the sovereign individual as its point of departure. For this reason, it is more amenable to the ethics of technological mediation elaborated by Verbeek. Verbeek argues for “the distributed character of moral agency,” distributed, that is, among the subject and the various technological artifacts that mediate the subject’s perception of and action in the world.

At the very least, asking the sorts of questions raised within a virtue ethics framework fills out our picture of technology’s ethical consequences.

In Susanna Clarke’s delightful novel Jonathan Strange & Mr. Norrell, a fantastical story cast in realist guise about two magicians recovering the lost tradition of English magic during the Napoleonic Wars, one of the main characters, Strange, has the following exchange with the Duke of Wellington:

“Can a magician kill a man by magic?” Lord Wellington asked Strange. Strange frowned. He seemed to dislike the question. “I suppose a magician might,” he admitted, “but a gentleman never would.”

Strange’s response is instructive and the context of magic more apropos than might be apparent. Technology, like magic, empowers the will, and it raises the sort of question that Wellington asks: can such and such be done?

Not only does Strange’s response make the ethical dimension paramount; he also approaches the ethical question as a virtue ethicist. He does not run consequentialist calculations, nor does he query the deliberations of a supposedly universal reason. Rather, he frames the empowerment availed to him by magic with a consideration of the kind of person he aspires to be, and he subjects his will to this larger project of moral formation. In so doing, he gives us a good model for how we might think about the empowerments availed to us by technology.

As Verbeek, reflecting on the aptness of the word subject, puts it, “The moral subject is not an autonomous subject; rather, it is the outcome of active subjection.” It is, paradoxically, this kind of subjection that can ground the relative freedom with which we might relate to technology.


Most of this material originally appeared on the blog of the Center for the Study of Ethics and Technology. I repost it here in light of recent interest in the ethical consequences of technology. Verbeek’s work does not, it seems to me, get the attention it deserves.

Solitude and Loneliness

In her posthumously published The Life of the Mind, Hannah Arendt distinguished between solitude and loneliness. The former is the condition that makes thought possible; in the latter state, even the consolations of thinking are absent.

“… to be by myself and to have intercourse with myself is the outstanding characteristic of the life of the mind. The mind can be said to have a life of its own only to the extent that it actualizes this intercourse in which, existentially speaking, plurality is reduced to the duality already implied in the fact and the word ‘consciousness,’ or syneidenai–to know with myself. I call this existential state in which I keep myself company ‘solitude’ to distinguish it from ‘loneliness,’ where I am also alone but now deserted not only by human company but also by the possible company of myself.”

To be clear, Arendt understands thinking in a rather specific sense. For her, thinking is not mere problem solving or calculation or the pursuit of truth. It is rather the pursuit of meaning and the work of clearing the ground for the possibility of judgment.

That said, it would seem that in our desire to avoid loneliness we are eroding our capacity for solitude, and thus our ability to think.

The allure of our devices lies in the promise of connection. With smartphone in hand, I never have to be alone again. But in this constant connection we lose our taste and capacity for solitude. Moreover, we may find that connection does not necessarily alleviate loneliness. It does not alleviate loneliness because the devices and platforms that mediate connection are explicitly designed to keep us coming back to them. We will keep coming back to them only if we feel we need what they offer; we will keep coming back, that is, if we feel lonely. Furthermore, it is becoming ever more obvious that connection is like a drug we were offered, at no cost, of course, only to keep us coming back for more at ruinous cost to us and great profit to others.

The dark paradox, then, is this: the more we seek to alleviate our loneliness through digital connectivity, the more lonely we will feel. Along the way, we will forsake solitude as a matter of course. Curiously, it may not even be loneliness as a desire for companionship that the design of social media fosters in us. Rather, it is a counterfeit longing that is generated: for stimulation rather than companionship.

In the end, we will be left with the most profound loneliness: perpetually feeling a need for connection that we cannot satisfy and finding that we have not even our own company.

To recap: no abiding sense of companionship, no solitude, no place for thought.

____________________________________________

See also Nicholas Carr’s recent post, How smartphones hijack our minds.

The Dystopia Is Already Here

[This post is periodically updated.]

Science fiction writer William Gibson coined the phrase, “The future is already here — it’s just not very evenly distributed.” It’s a well-known and oft-repeated line.

I’m proposing a slight variation, or perhaps a corollary principle: The dystopia is already here — it’s just not very evenly distributed.

Consider these comments by Facebook’s founding president, Sean Parker: “It’s a social-validation feedback loop … exactly the kind of thing that a hacker like myself would come up with, because you’re exploiting a vulnerability in human psychology.” The aim of Facebook’s designers: “How do we consume as much of your time and conscious attention as possible?”

Or take a look at Zeynep Tufekci’s recent TED talk, “We’re building a dystopia just to make people click on ads.”

Then there’s this fine company, Dopamine Labs, which is developing an “automated, intelligent approach to hooking people on apps” with an AI agent aptly named Skinner.

Here is James Bridle’s long exploration of the weird and disturbing world of Kids YouTube. “This is a deeply dark time,” Bridle concludes, “in which the structures we have built to sustain ourselves are being used against us — all of us — in systematic and automated ways.” Another writer, looking at this same content, concluded, “We can’t predict what wider impact a medium that incentivizes factory line production of mindless visual slurry for kids’ consumption might have on children’s development and on society as a whole.”

And this article title would have seemed implausibly dystopian just a few years ago: Facebook is hiring 3,000 people to stop users from broadcasting murder and rape.

Meanwhile, Beijing is becoming a “frontline laboratory for surveillance,” setting the pace for 21st-century police states, and Facebook has found itself at the center of the brutal campaign against the Rohingya minority in Myanmar.

An early investor in Facebook and Google, now doing penance, tells us that these two companies have “consciously combined persuasive techniques developed by propagandists and the gambling industry with technology in ways that threaten public health and democracy.” “Thanks to smartphones,” he adds, “the battle for attention now takes place on a single platform that is available every waking moment.”

“Across YouTube,” Buzzfeed reports, “an unsettling trend has emerged: Accounts are publishing disturbing and exploitative videos aimed at and starring children in compromising, predatory, or creepy situations — and racking up millions of views.”

So, I don’t know, you tell me?

Lest we think that we cannot be in a dystopia because we appear to be relatively free, prosperous, and safe, I’ll give the final word to Neil Postman:

… we had forgotten that alongside Orwell’s dark vision, there was another – slightly older, slightly less well known, equally chilling: Aldous Huxley’s Brave New World. Contrary to common belief even among the educated, Huxley and Orwell did not prophesy the same thing. Orwell warns that we will be overcome by an externally imposed oppression. But in Huxley’s vision, no Big Brother is required to deprive people of their autonomy, maturity and history. As he saw it, people will come to love their oppression, to adore the technologies that undo their capacities to think.

What Orwell feared were those who would ban books. What Huxley feared was that there would be no reason to ban a book, for there would be no one who wanted to read one. Orwell feared those who would deprive us of information. Huxley feared those who would give us so much that we would be reduced to passivity and egoism. Orwell feared that the truth would be concealed from us. Huxley feared the truth would be drowned in a sea of irrelevance. Orwell feared we would become a captive culture. Huxley feared we would become a trivial culture …. As Huxley remarked in Brave New World Revisited, the civil libertarians and rationalists who are ever on the alert to oppose tyranny “failed to take into account man’s almost infinite appetite for distractions.” In 1984, Orwell added, people are controlled by inflicting pain. In Brave New World, they are controlled by inflicting pleasure. In short, Orwell feared that what we fear will ruin us. Huxley feared that what we desire will ruin us.

[I’ve decided to make this post an archive of sorts, so I’ll keep adding items as I come across them. Feel free to offer submissions in the comments.]