Democracy and Technology

Alexis Madrigal has written a long and thoughtful piece on Facebook’s role in the last election. He calls the emergence of social media, Facebook especially, “the most significant shift in the technology of politics since the television.” Madrigal is pointed in his assessment of where the situation now stands.

Early on, describing the widespread (but not total) failure to understand the effect Facebook could have on an election, Madrigal writes, “The informational underpinnings of democracy have eroded, and no one has explained precisely how.”

Near the end of the piece, he concludes, “The point is that the very roots of the electoral system—the news people see, the events they think happened, the information they digest—had been destabilized.”

Madrigal’s piece brought to mind, not surprisingly, two important observations by Neil Postman that I’ve cited before.

My argument is limited to saying that a major new medium changes the structure of discourse; it does so by encouraging certain uses of the intellect, by favoring certain definitions of intelligence and wisdom, and by demanding a certain kind of content–in a phrase, by creating new forms of truth-telling.

Also:

Surrounding every technology are institutions whose organization–not to mention their reason for being–reflects the world-view promoted by the technology. Therefore, when an old technology is assaulted by a new one, institutions are threatened. When institutions are threatened, a culture finds itself in crisis.

In these two passages, I find the crux of Postman’s enduring insights, the insights, more generally, of the media ecology school of tech criticism. It seems to me that this is more or less where we are: a culture in crisis, as Madrigal’s comments suggest. Read what he has to say.

On Twitter, replying to a tweet from Christopher Mims endorsing Madrigal’s work, Zeynep Tufekci took issue with Madrigal’s framing. Madrigal, in fact, cited Tufekci as one of the few people who understood a good deal of what was happening and, indeed, saw it coming years ago. But Tufekci nonetheless challenged Madrigal’s point of departure, which is that the entirety of Facebook’s role caught nearly everyone by surprise and couldn’t have been foreseen.

Tufekci has done excellent work exploring the political consequences of Big Data, algorithms, etc. This 2014 article, for example, is superb. But in reading Tufekci’s complaint that her work and the work of many other academics were basically ignored, my first thought was that the similarly prescient work of technology critics has been more or less ignored for much longer. I’m thinking of Mumford, Jaspers, Ellul, Jonas, Grant, Winner, Mander, Postman, and a host of others. They have been dismissed as too pessimistic, too gloomy, too conservative, too radical, too broad in their criticism and too narrow, as Luddites and reactionaries, etc. Yet here we are.

In a 1992 article about democracy and technology, Ellul wrote, “In my view, our Western political institutions are no longer in any sense democratic. We see the concept of democracy called into question by the manipulation of the media, the falsification of political discourse, and the establishment of a political class that, in all countries where it is found, simply negates democracy.”

Writing in the same special issue of the journal Philosophy and Technology edited by Langdon Winner, Albert Borgmann wrote, “Modern technology is the acknowledged ruler of the advanced industrial democracies. Its rule is not absolute. It rests on the complicity of its subjects, the citizens of the democracies. Emancipation from this complicity requires first of all an explicit and shared consideration of the rule of technology.”

It is precisely such an “explicit and shared consideration of the rule of technology” that we have failed to seriously undertake. Again, Tufekci and her colleagues are hardly the first to have their warnings, measured, cogent, urgent as they may be, ignored.

Roger Berkowitz of the Hannah Arendt Center for Politics and the Humanities recently drew attention to a commencement speech given by John F. Kennedy at Yale in 1962. Kennedy noted the many questions that America had faced throughout her history, from slavery to the New Deal. These were questions “on which the Nation was sharply and emotionally divided.” But now, Kennedy believed, we were ready to move on:

Today these old sweeping issues very largely have disappeared. The central domestic issues of our time are more subtle and less simple. They relate not to basic clashes of philosophy or ideology but to ways and means of reaching common goals — to research for sophisticated solutions to complex and obstinate issues.

These issues were “administrative and executive” in nature. They were issues “for which technical answers, not political answers, must be provided,” Kennedy concluded. You should read the rest of Berkowitz’s reflections on the prejudices exposed by our current crisis, but I want to take Kennedy’s technocratic faith as a point of departure for some observations.

Kennedy’s faith in the technocratic management of society was just the latest iteration of modernity’s political project, the quest for a neutral and rational mode of politics for a pluralistic society.

I will put it this way: liberal democracy is a “machine” for the adjudication of political differences and conflicts, independently of any faith, creed, or otherwise substantive account of the human good.

It was machine-like in its promised objectivity and efficiency. But, of course, it would work only to the degree that it generated the subjects it required for its own operation. (A characteristic it shares with all machines.) Human beings have been, on this score, rather recalcitrant, much to the chagrin of the administrators of the machine.

Kennedy’s own hopes were just a renewed version of this vision, only they had become more explicitly a-political and technocratic in nature. It was no longer enough that citizens check certain aspects of their person at the door to the public sphere; now, it seemed, citizens would do well to entrust the political order to experts, engineers, and technicians.

Leo Marx recounts an important part of this story, unfolding from the 19th into the early 20th century, in an article accounting for what he calls “postmodern pessimism” about technology. Marx outlines how “the simple [small r] republican formula for generating progress by directing improved technical means to societal ends was imperceptibly transformed into a quite different technocratic commitment to improving ‘technology’ as the basis and the measure of — as all but constituting — the progress of society.” I would also include the emergence of bureaucratic and scientific management in the telling of this story.

Presently we are witnessing a further elaboration of this same project along the same trajectory: the rise of governance by algorithm, a further, apparent distancing of the human from the political. I say apparent because, of course, the human is never fully out of the picture; we just create more elaborate technical illusions to mask the irreducibly human element. We buy into these illusions, in part, because of the initial trajectory set for the liberal democratic order, that of machine-like objectivity, rationality, and efficiency. It is on this ideal that Western society staked its hopes for peace and prosperity. At every turn, when the human element, in its complexity and messiness, broke through the facade, we doubled down on the ideal rather than question the premises. Initially, at least, the idea was that the “machine” would facilitate the deliberation of citizens by establishing rules and procedures to govern their engagement. When it became apparent that this would no longer work, we explicitly turned to technique as the common frame by which we would proceed. Now that technique has failed, because the human once again asserted itself, we turn overtly to machines.

This new digital technocracy takes two seemingly paradoxical paths. One of these paths is the increasing reliance on Big Data and computing power in the actual work of governing. The other, however, is the deployment of these same tools for the manipulation of the governed. It is darkly ironic that this latter deployment of digital technology is intended to agitate the very passions liberal democracy was initially advanced to suppress (at least according to the story liberal democracy tells about itself). It is as if, having given up on the possibility of reasonable political discourse and deliberation within a pluralistic society, those with the means to control the new apparatus of government have simply decided to manipulate those recalcitrant elements of human nature to their own ends.

It is this latter path that Madrigal and Tufekci have done their best to elucidate. However, my rambling contention here is that the full significance of our moment is only intelligible within a much broader account of the relationship between technology and democracy. It is also my contention that we will remain blind to the true nature of our situation so long as we are unwilling to submit our technology to the kind of searching critique Borgmann advocated and Ellul thought hardly possible. But we are likely too invested in the promise of technology and too deeply compromised in our habits and thinking to undertake such a critique.

Growing Up with AI

In an excerpt from her forthcoming book, Who Can You Trust? How Technology Brought Us Together and Why It Might Drive Us Apart, Rachel Botsman reflects on her three-year-old’s interactions with Amazon’s AI assistant, Alexa.

Botsman found that her daughter took quite readily to Alexa and was soon asking her all manner of questions and even asking Alexa to make choices for her, about what to wear, for instance, or what she should do that day. “Grace’s easy embrace of Alexa was slightly amusing but also alarming,” Botsman admits. “Today,” she adds, “we’re no longer trusting machines just to do something, but to decide what to do and when to do it.” She then goes on to observe that the next generation will grow up surrounded by AI agents, so that the question will not be “Should we trust robots?” but rather “Do we trust them too much?”

Along with issues of privacy and data gathering, Botsman was especially concerned with the intersection of AI technology and commercial interests: “Alexa, after all, is not ‘Alexa.’ She’s a corporate algorithm in a black box.”

To these concerns, philosopher Mark White, elaborating on Botsman’s reflections, adds the following:

Again, this would not be as much of a problem if the choices we cede to algorithms only dealt with songs and TV shows. But as Botsman’s story shows, the next generation may develop a degree of faith in the “wisdom” of technology that leads them to give up even more autonomy to machines, resulting in a decline in individual identity and authenticity as more and more decisions are left to other parties to make in interests that are not the person’s own—but may be very much in the interests of those programming and controlling the algorithms.

These concerns are worth taking into consideration. I’m ambivalent about framing a critique of technology in terms of authenticity, or even individual identity, but I’m not opposed to a conversation along these lines. Such a conversation at least encourages us to think a bit more deeply about the role of technology in shaping the sorts of people we are always in the process of becoming. This is, of course, especially true of children.

Our identity, however, does not emerge in pristine isolation from other human beings or independently from the fabric of our material culture, technologies included. That is not the ideal toward which we should aim. Technology will unavoidably be part of our children’s lives and ours. But which technologies? Under what circumstances? For what purposes? With what consequences? These are some of the questions we should be asking.

Of an AI assistant that becomes part of a child’s taken-for-granted environment, other more specific questions also come to mind.

What conversations or interactions will the AI assistant displace?

How will it affect the development of a child’s imagination?

How will it direct a child’s attention?

How will a child’s language acquisition be affected?

What expectations will it create regarding the solicitude they can expect from the world?

How will their curiosity be shaped by what the AI assistant can and cannot answer?

Will the AI assistants undermine the development of critical cognitive skills by their ability to immediately respond to simple questions?

Will their communication and imaginative life shrink to the narrow parameters within which they can interact with AI?

Will parents be tempted to offload their care and attentiveness to the AI assistant, and with what consequences?

Of AI assistants generally, we might conclude that what they do well–answer simple direct questions, for example–may, in fact, prove harmful to a child’s development, and what they do poorly–provide for rich, complex engagement with the world–is what children need most.

We tend to bend ourselves to fit the shape of our tools. Even as tech-savvy adults we do this. It seems just as likely that children will do likewise. For this reason, we do well to think long and hard about the devices that we bring to bear upon their lives.

We make all sorts of judgments as a society about when it is appropriate for children to experience certain realities, and this care for children is one of the marks of a healthy society. We do this through laws, policy, and cultural norms. With regard to the norms that govern the technology we introduce into our children’s lifeworld, we would do well, it seems to me, to adopt a more cautious stance. Sometimes this means shielding children from certain technologies if it is not altogether obvious that their impact will be helpful and beneficial. We should, in other words, shift the burden of proof so that a technology must earn its place in our children’s lives.

Botsman finally concluded that her child was not ready for Alexa to be a part of her life and that it was possibly usurping her own role as parent:

Our kids are going to need to know where and when it is appropriate to put their trust in computer code alone. I watched Grace hand over her trust to Alexa quickly. There are few checks and balances to deter children from doing just that, not to mention very few tools to help them make informed decisions about A.I. advice. And isn’t helping Gracie learn how to make decisions about what to wear — and many more even important things in life — my job? I decided to retire Alexa to the closet.

It is even better when companies recognize some of these problems and decide (from mixed motives, I’m sure) to pull a device whose place in a child’s life is at best ambiguous.


This post is part of a series on being a parent in the digital age.

Finding A Place For Thought

Yesterday, I wrote briefly about how difficult it can be to find a place for thought when our attention, in both its mental and emotional dimensions, is set aimlessly adrift on the currents of digital media. Digital media, in fact, amounts to an environment that is inhospitable and, indeed, overtly hostile to thought.

Many within the tech industry are coming to a belated sense of responsibility for this world they helped fashion. A recent article in the Guardian tells their story. They include Justin Rosenstein, who helped design the “Like” button for Facebook but now realizes that it is common “for humans to develop things with the best of intentions and for them to have unintended, negative consequences,” and James Williams, who worked on analytics for Google but who experienced an epiphany “when he noticed he was surrounded by technology that was inhibiting him from concentrating on the things he wanted to focus on.”

Better late than never, one might say, or perhaps it is too late. As per usual, there is a bit of ancient wisdom that speaks to the situation. In this case, the story of Pandora’s Box comes to mind. Nonetheless, when so many in the industry seem bent on evading responsibility for the consequences of their work, it is mildly refreshing to read about some who are at least willing to own those consequences and are even striving to somehow make amends.

It is telling, though, that, as the article observes, “These refuseniks are rarely founders or chief executives, who have little incentive to deviate from the mantra that their companies are making the world a better place. Instead, they tend to have worked a rung or two down the corporate ladder: designers, engineers and product managers who, like Rosenstein, several years ago put in place the building blocks of a digital world from which they are now trying to disentangle themselves.”

Tristan Harris, formerly at Google, has been especially pointed in his criticism of the tech industry’s penchant for addictive design. Perhaps the most instructive part of Harris’s story is how he experienced a promotion to an ethics position within Google as, in effect, a marginalization and silencing.

(It is also edifying to consider the steady drumbeat of stories about how tech executives stringently monitor and limit their own children’s access to devices and the Internet, and about why they send their children to expensive low-tech schools.)

Informed as my own thinking has been by the work of Hannah Arendt, I see this hostility to thought as a serious threat to our society. Arendt believed that thinking was somehow intimately related to our moral judgment and an inability to think a gateway to grave evils. Of course, it was a particular kind of thinking that Arendt had in mind–thinking, one might say, for thinking’s sake. Or, thinking that was devoid of instrumentality.

Writing in Aeon recently, Jennifer Stitt drew on Arendt to argue for the importance of solitude for thought and thought for conscience and conscience for politics. As Stitt notes, Arendt believed that “living together with others begins with living together with oneself.” Here is Stitt’s concluding paragraph:

But, Arendt reminds us, if we lose our capacity for solitude, our ability to be alone with ourselves, then we lose our very ability to think. We risk getting caught up in the crowd. We risk being ‘swept away’, as she put it, ‘by what everybody else does and believes in’ – no longer able, in the cage of thoughtless conformity, to distinguish ‘right from wrong, beautiful from ugly’. Solitude is not only a state of mind essential to the development of an individual’s consciousness – and conscience – but also a practice that prepares one for participation in social and political life.

Solitude, then, is at least one practice that can help create a place for thought.

Paradoxically, in a connected world it is challenging to find either solitude or companionship. If we submit to a regime of constant connectivity, we end up with hybrid versions of both, versions which fail to yield their full satisfactions.

Additionally, as someone who works one and a half jobs and is also raising a toddler and an infant, I understand how hard it can be to find anything approaching solitude. In a real sense it is a luxury, but it is a necessary luxury, and if the world won’t offer it freely, then we must fight for it as best we can.

There was one thing left in Pandora’s Box after all the evils had flown irreversibly into the world: it was hope.

No Place for Thought

A few days ago Nathan Jurgenson tweeted a short thread commenting on what Twitter has become. “[I]t feels weird to tweet about things that arent news lately,” Jurgenson noted. A sociologist with an interest in social media and identity, he found that it felt rude to tweet about his interests when his feed seemed only concerned with the latest political news. About this tendency Jurgenson wisely observed, “following the news all day is the opposite of being more informed and it certainly isnt a kind ‘resistance.'”

These observations resonated with me. I’ve had a similar experience when logging in to Twitter, only to find that Twitter is fixated on the political (pseudo-)event of the moment. In those moments it seems if not rude then at least quixotic to link to a post that is not at all related to what everyone happens to be talking about. Sometimes, of course, it is not only political news that has this effect; it is also the all too frequent tragedy that can consume Twitter’s collective attention, or the frivolous faux-controversy, etc.

In moments like these Twitter — and the same is true to some degree of other platforms — demands something of us, but it is not thought. It demands a reaction, one that is swift, emotionally charged, and in keeping with the affective tenor of the platform. In many respects, this entails not only an absence of thought but conditions that are overtly hostile to thought.

Even apart from crises, controversies, and tragedies, however, the effect is consistent: the focus is inexorably on the fleeting present. The past has no hold, the future does not come into play. Our time is now, our place is everywhere. Of course, social media has only heightened a tendency critics have noted since at least Kierkegaard’s time. To be well-informed, in the sense of keeping up with current events, undermines the possibility of serious thinking, mature emotional responses, sound judgment, and wise action.

It is important, in my view, to make clear that this is not merely a problem of information overload. If it were only information we were dealing with, then we might be better able to recognize the nature of the problem and act to correct it. It is also, as I’ve noted briefly before, a problem of affect overload. It is the emotional register that accounts for the Pavlovian alacrity with which we attend to our devices and the digital flows for which they are a portal. These devices, then, are, in effect, Skinner boxes we willingly inhabit, conditioning our cognitive and emotional lives. Twitter says “feel this,” and we say “how intensely?” Social media never invites us to step away, to think and reflect, to remain silent, to refuse a response for now or maybe indefinitely.

Under these circumstances, there is no place for thought.

For the sake of the world thought must, at least for a time, take leave of the world, especially the world mediated to us by social media. We must, in other words, by deliberate action, make a place for thought.

Machines for the Evasion of Moral Responsibility

The title of a recent article by Virginia Heffernan in Wired asked, “Who Will Take Responsibility for Facebook?”

The answer, of course, is that no one will. Our technological systems, by nature of their design and the ideology that sustains them, are machines for the evasion of moral responsibility.

Heffernan focused on Facebook’s role in spreading misinformation during the last election, which has recently come to fuller and more damning light. Not long afterwards, in a post titled “Google and Facebook Failed Us,” Alexis Madrigal explored how misinformation about the Las Vegas shooting spread on both Google and Facebook. Castigating both companies for their failure to take responsibility for the results their algorithms generated, Madrigal concluded: “There’s no hiding behind algorithms anymore. The problems cannot be minimized. The machines have shown they are not up to the task of dealing with rare, breaking news events, and it is unlikely that they will be in the near future. More humans must be added to the decision-making process, and the sooner the better.”

Writing on the same topic, William Turton noted of Google that “the company’s statement cast responsibility on an algorithm as if it were an autonomous force.” “It’s not about the algorithm,” he adds. “It’s not about what the algorithm was supposed to do, except that it went off and did a bad thing instead. Google’s business lives and dies by these things we call algorithms; getting this stuff right is its one job.”

Siva Vaidhyanathan, a scholar at UVA whose book on Facebook, Anti-Social Media, is to be released next year, described his impression of Zuckerberg to Heffernan in this way: “He lacks an appreciation for nuance, complexity, contingency, or even difficulty. He lacks a historical sense of the horrible things that humans are capable of doing to each other and the planet.”

This leads Heffernan to conclude the following: “Zuckerberg may just lack the moral framework to recognize the scope of his failures and his culpability [….] It’s hard to imagine he will submit to truth and reconciliation, or use Facebook’s humiliation as a chance to reconsider its place in the world. Instead, he will likely keep lawyering up and gun it on denial and optics, as he has during past litigation and conflict.”

This is an arresting observation: “Zuckerberg may just lack the moral framework to recognize the scope of his failures and his culpability.” Frankly, I suspect Zuckerberg is not the only one among our technologists who fits this description.

It immediately reminded me of Hannah Arendt’s efforts to understand the unique evils of mid-twentieth-century totalitarianism, specifically the evil of the Holocaust. Thoughtlessness, or, better, an inability to think, was, Arendt believed, near the root of this new kind of evil. Arendt insisted that the “absence of thought is not stupidity; it can be found in highly intelligent people, and a wicked heart is not its cause; it is probably the other way round, that wickedness may be caused by absence of thought.”

I should immediately make clear that I do not mean to equate Facebook’s and Google’s very serious failures with the Holocaust. This is not at all my point. Rather, it is that, following Arendt’s analysis, we can see more clearly how a certain inability to think (not merely calculate or problem solve) and consequently to assume moral responsibility for one’s actions, takes hold and yields a troubling and pernicious species of ethical and moral failures.

It is one thing to expose and judge individuals whose actions are consciously intended to cause harm and work against the public good. It is another thing altogether to encounter individuals who, while clearly committing such acts, are, in fact, themselves oblivious to the true nature of their actions. They are so enclosed within an ideological shell that they seem unable to see what they are doing, much less assume responsibility for it.

It would seem that whatever else we may say about algorithms as technical entities, they also function as the symbolic base of an ideology that abets thoughtlessness and facilitates the evasion of responsibility. As such, however, they are just a new iteration of the moral myopia that many of our best tech critics have been warning us about for a very long time.


Two years ago, I published a post, Resisting the Habits of the Algorithmic Mind, in which I explored this topic of thoughtlessness and moral responsibility. I think it remains a useful way of making sense of the peculiar contours of our contemporary moral topography.