Reframing Technocracy

In his 1977 classic, Autonomous Technology: Technics-out-of-Control as a Theme in Political Thought, Langdon Winner invites us to consider not the question “Who governs?” but rather “What governs?”

He further elaborates:

“Are there certain conditions, constraints, necessities, requirements, or imperatives effectively governing how an advanced technological society operates? Do such conditions predominate regardless of the specific character of the men who ostensibly hold power? This, it seems to me, is the most crucial problem raised by the conjunction of politics and technics. It is certainly the point at which the idea of autonomous technology has its broadest significance.”

Earlier, he had discussed one way in which technocracy had been envisioned throughout the 20th century: as the emergence of an elite class of scientists, technicians, and engineers, who displace the traditional political class and become the rulers of society. This vision was popular among science fiction writers and theorists who were overtly technocratic. This vision never quite played out as these writers and theorists imagined. But this does not mean, in Winner’s view, that there is no meaningful sense in which we might speak about our political order being technocratic.

This is the significance of the question “What rules?” rather than “Who rules?”

Here is Winner again:

“If one returns to the modern writings on technocracy in this light, one finds that parallel to the conceptions about scientific and technical elites and their power is a notion of order— a technological order— in which in a true sense no persons or groups rule at all. Individuals and elites are present, but their roles and actions conform so closely to the framework established by the structures and processes of the technical system that any claim to determination by human choice becomes purely illusory. In this way of looking at things, technology itself is seen to have a distinctly political form. The technological order built since the scientific revolution now encompasses, interpenetrates, and incorporates all of society. Its standards of operation are the rules men must obey. Within this comprehensive order, government becomes the business of recognizing what is necessary and efficient for the continued functioning and elaboration of large-scale systems and the rational implementation of their manifest requirements. Politics becomes the acting out of the technical hegemony.”

Winner takes this to be the view generally represented, despite their differences, by Spengler, Juenger, Jaspers, Mumford, Marcuse, Giedion, and Ellul. “[I]n them,” Winner writes, “one finds a roughly shared notion of society and politics, a common set of observations, assumptions, modes of thinking and sense of the whole, which, I believe, unites them as an identifiable tradition.”

Throughout this section of Autonomous Technology, Winner sets out to update and refine their argument.

Along the way, Winner takes some passing shots at what we might call the popular tech criticism current at the time he was writing, a little over forty years ago.

“Much of what now passes for incisive analysis,” Winner notes, “is actually nothing more than elaborate landscape, impressionistic, futuristic razzle-dazzle spewing forth in an endless stream of paperback non-books, media extravaganzas, and global village publicity.”

A little further on, he catalogs the recurring tropes of popular writing about technology: overcrowded cities, “labyrinthine bureaucracies,” consumerism and waste, the rise of the military-industrial complex, and so on.

Winner goes on:

“To go on describing such things endlessly does little to advance our insight. Neither is it helpful to devise new names for the world produced. The Postindustrial Society? The Technetronic Society? The Posthistoric Society? The Active Society? In an unconscious parody of the ancient belief that he who knows God’s secret name will have extraordinary powers, the idea seems to be that a stroke of nomenclature will bring light to the darkness. This does make for captivating book titles but little else. The fashion, furthermore, is to exclaim in apparent horror at the incredible scenes unfolding before one’s eyes and yet deep in one’s heart relish the excitement and perversity of it all. Alleged critiques turn out to be elaborate advertisements for the situations they ostensibly abhor.”

On all counts, it seems to me that Winner’s book has aged well.

Democracy and Technology

Alexis Madrigal has written a long and thoughtful piece on Facebook’s role in the last election. He calls the emergence of social media, Facebook especially, “the most significant shift in the technology of politics since the television.” Madrigal is pointed in his assessment of the situation as it now stands.

Early on, describing the widespread (but not total) failure to understand the effect Facebook could have on an election, Madrigal writes, “The informational underpinnings of democracy have eroded, and no one has explained precisely how.”

Near the end of the piece, he concludes, “The point is that the very roots of the electoral system—the news people see, the events they think happened, the information they digest—had been destabilized.”

Madrigal’s piece brought to mind, not surprisingly, two important observations by Neil Postman that I’ve cited before.

“My argument is limited to saying that a major new medium changes the structure of discourse; it does so by encouraging certain uses of the intellect, by favoring certain definitions of intelligence and wisdom, and by demanding a certain kind of content–in a phrase, by creating new forms of truth-telling.”

Also:

“Surrounding every technology are institutions whose organization–not to mention their reason for being–reflects the world-view promoted by the technology. Therefore, when an old technology is assaulted by a new one, institutions are threatened. When institutions are threatened, a culture finds itself in crisis.”

In these two passages, I find the crux of Postman’s enduring insight and, more generally, of the media ecology school of tech criticism. It seems to me that this is more or less where we are: a culture in crisis, as Madrigal’s comments suggest. Read what he has to say.

On Twitter, replying to a tweet from Christopher Mims endorsing Madrigal’s work, Zeynep Tufekci took issue with Madrigal’s framing. Madrigal, in fact, cited Tufekci as one of the few people who understood a good deal of what was happening and, indeed, saw it coming years ago. But Tufekci nonetheless challenged Madrigal’s point of departure, which is that the entirety of Facebook’s role caught nearly everyone by surprise and couldn’t have been foreseen.

Tufekci has done excellent work exploring the political consequences of Big Data, algorithms, etc. This 2014 article, for example, is superb. But in reading Tufekci’s complaint that her work and the work of many other academics were basically ignored, my first thought was that the similarly prescient work of technology critics has been more or less ignored for much longer. I’m thinking of Mumford, Jaspers, Ellul, Jonas, Grant, Winner, Mander, Postman, and a host of others. They have been dismissed as too pessimistic, too gloomy, too conservative, too radical, too broad in their criticism and too narrow, as Luddites and reactionaries, etc. Yet here we are.

In a 1992 article about democracy and technology, Ellul wrote, “In my view, our Western political institutions are no longer in any sense democratic. We see the concept of democracy called into question by the manipulation of the media, the falsification of political discourse, and the establishment of a political class that, in all countries where it is found, simply negates democracy.”

Writing in the same special issue of the journal Philosophy and Technology edited by Langdon Winner, Albert Borgmann wrote, “Modern technology is the acknowledged ruler of the advanced industrial democracies. Its rule is not absolute. It rests on the complicity of its subjects, the citizens of the democracies. Emancipation from this complicity requires first of all an explicit and shared consideration of the rule of technology.”

It is precisely such an “explicit and shared consideration of the rule of technology” that we have failed to seriously undertake. Again, Tufekci and her colleagues are hardly the first to have their warnings ignored, however measured, cogent, and urgent they may be.

Roger Berkowitz of the Hannah Arendt Center for Politics and the Humanities recently drew attention to a commencement speech given by John F. Kennedy at Yale in 1962. Kennedy noted the many questions that America had faced throughout her history, from slavery to the New Deal. These were questions “on which the Nation was sharply and emotionally divided.” But now, Kennedy believed, we were ready to move on:

“Today these old sweeping issues very largely have disappeared. The central domestic issues of our time are more subtle and less simple. They relate not to basic clashes of philosophy or ideology but to ways and means of reaching common goals — to research for sophisticated solutions to complex and obstinate issues.”

These issues were “administrative and executive” in nature. They were issues “for which technical answers, not political answers, must be provided,” Kennedy concluded. You should read the rest of Berkowitz’s reflections on the prejudices exposed by our current crisis, but I want to take Kennedy’s technocratic faith as a point of departure for some observations.

Kennedy’s faith in the technocratic management of society was just the latest iteration of modernity’s political project, the quest for a neutral and rational mode of politics for a pluralistic society.

I will put it this way: liberal democracy is a “machine” for the adjudication of political differences and conflicts, independently of any faith, creed, or other substantive account of the human good.

It was machine-like in its promised objectivity and efficiency. But, of course, it would work only to the degree that it generated the subjects it required for its own operation. (A characteristic it shares with all machines.) Human beings have been, on this score, rather recalcitrant, much to the chagrin of the administrators of the machine.

Kennedy’s own hopes were just a renewed version of this vision, only they had become more explicitly apolitical and technocratic in nature. It was no longer enough that citizens check certain aspects of their person at the door to the public sphere; now, it seemed, citizens would do well to entrust the political order to experts, engineers, and technicians.

Leo Marx recounts an important part of this story, unfolding from the 19th into the early 20th century, in an article accounting for what he calls “postmodern pessimism” about technology. Marx outlines how “the simple [small r] republican formula for generating progress by directing improved technical means to societal ends was imperceptibly transformed into a quite different technocratic commitment to improving ‘technology’ as the basis and the measure of — as all but constituting — the progress of society.” I would also include the emergence of bureaucratic and scientific management in the telling of this story.

Presently we are witnessing a further elaboration of this same project along the same trajectory: the rise of governance by algorithm, a further, apparent distancing of the human from the political. I say apparent because, of course, the human is never fully out of the picture; we just create more elaborate technical illusions to mask the irreducibly human element. We buy into these illusions, in part, because of the initial trajectory set for the liberal democratic order, that of machine-like objectivity, rationality, and efficiency. It is on this ideal that Western society staked its hopes for peace and prosperity. At every turn, when the human element, in its complexity and messiness, broke through the facade, we doubled down on the ideal rather than question the premises. Initially, at least, the idea was that the “machine” would facilitate the deliberation of citizens by establishing rules and procedures to govern their engagement. When it became apparent that this would no longer work, we turned explicitly to technique as the common frame by which we would proceed. Now that technique has failed, because again the human manifested itself, we turn overtly to machines.

This new digital technocracy takes two seemingly paradoxical paths. One of these paths is the increasing reliance on Big Data and computing power in the actual work of governing. The other, however, is the deployment of these same tools for the manipulation of the governed. It is darkly ironic that this latter deployment of digital technology is intended to agitate the very passions liberal democracy was initially advanced to suppress (at least according to the story liberal democracy tells about itself). It is as if, having given up on the possibility of reasonable political discourse and deliberation within a pluralistic society, those with the means to control the new apparatus of government have simply decided to manipulate those recalcitrant elements of human nature to their own ends.

It is this latter path that Madrigal and Tufekci have done their best to elucidate. However, my rambling contention here is that the full significance of our moment is only intelligible within a much broader account of the relationship between technology and democracy. It is also my contention that we will remain blind to the true nature of our situation so long as we are unwilling to submit our technology to the kind of searching critique Borgmann advocated and Ellul thought hardly possible. But we are likely too invested in the promise of technology and too deeply compromised in our habits and thinking to undertake such a critique.

What Do We Think We Are Doing When We Are Thinking?

Over the past few weeks, I’ve drafted about half a dozen posts in my mind that, sadly, I’ve not had the time to write. Among those mental drafts in progress is a response to Evgeny Morozov’s latest essay. The piece is ostensibly a review of Nick Carr’s The Glass Cage, but it’s really a broadside against the whole enterprise of tech criticism (as Morozov sees it). I’m not sure about the other mental drafts, but that is one I’m determined to see through. Look for it in the next few days … maybe.

In the meantime, here’s a quick reaction to a post by Steve Coast that has been making the rounds today.

In “The World Will Only Get Weirder,” Coast opens with some interesting observations about aviation safety. Taking the recent spate of bizarre aviation incidents as his point of departure, Coast argues that rules as a means of managing safety will only get you so far.

The history of aviation safety is the history of rule-making and checklists. Over time, this approach successfully addressed the vast majority of aviation safety issues. Eventually, however, you hit peak rules, as it were, and you enter a byzantine phase of rule-making. Here’s the heart of the piece:

“We’ve reached the end of the useful life of that strategy and have hit severely diminishing returns. As illustration, we created rules to make sure people can’t get in to cockpits to kill the pilots and fly the plane in to buildings. That looked like a good rule. But, it’s created the downside that pilots can now lock out their colleagues and fly it in to a mountain instead.

It used to be that rules really helped. Checklists on average were extremely helpful and have saved possibly millions of lives. But with aircraft we’ve reached the point where rules may backfire, like locking cockpit doors. We don’t know how many people have been saved without locking doors since we can’t go back in time and run the experiment again. But we do know we’ve lost 150 people with them.

And so we add more rules, like requiring two people in the cockpit from now on. Who knows what the mental capacity is of the flight attendant that’s now allowed in there with one pilot, or what their motives are. At some point, if we wait long enough, a flight attendant is going to take over an airplane having only to incapacitate one, not two, pilots. And so we’ll add more rules about the type of flight attendant allowed in the cockpit and on and on.”

This struck me as a rather sensible take on the limits of a rule-oriented, essentially bureaucratic approach to problem solving, which is to say the limits of technocracy or technocratic rationality. Limits, incidentally, that apply as well to our increasing dependence on algorithmic automation.

Of course, this is not to say that rule-oriented, bureaucratic reason is useless. Far from it. As a mode of thinking it is, in fact, capable of solving a great number of problems. It is eminently useful, if also profoundly limited.

Problems arise, however, when this one mode of thought crowds out all others, when we can’t even conceive of an alternative.

This dynamic is, I think, illustrated by a curious feature of Coast’s piece. The engaging argument that characterizes the first half or so of the post gives way to a far less cogent and, frankly, troubling attempt at a solution:

“The primary way we as a society deal with this mess is by creating rule-free zones. Free trade zones for economics. Black budgets for military. The internet for intellectual property. Testing areas for drones. Then after all the objectors have died off, integrate the new things in to society.”

So, it would seem, Coast would have us address the limits of rule-oriented, bureaucratic reason by throwing out all rules, at least within certain contexts, until everyone gets on board or dies off. This stark opposition is plausible only if you can’t imagine an alternative mode of thought that might direct your actions. “We only have one way of thinking” seems to be the unspoken premise. Given that premise, once that mode of thinking fails, there’s nothing left to do but discard thinking altogether.

As I was working on this post I came across a story on NPR that also illustrates our unfortunately myopic understanding of what counts as thought. The story discusses a recent study that identifies a tendency the researchers labeled “algorithm aversion”:

“In a paper just published in the Journal of Experimental Psychology: General, researchers from the University of Pennsylvania’s Wharton School of Business presented people with decisions like these. Across five experiments, they found that people often chose a human — themselves or someone else — over a model when it came to making predictions, especially after seeing the model make some mistakes. In fact, they did so even when the model made far fewer mistakes than the human. The researchers call the phenomenon ‘algorithm aversion,’ where ‘algorithm’ is intended broadly, to encompass — as they write — ‘any evidence-based forecasting formula or rule.'”

After considering what might account for algorithm aversion, the author, psychology professor Tania Lombrozo, closes with this:

“I’m left wondering how people are thinking of their own decision process if not in algorithmic terms — that is, as some evidence-based forecasting formula or rule. Perhaps the aversion — if it is that — is not to algorithms per se, but to the idea that the outcomes of complex, human processes can be predicted deterministically. Or perhaps people assume that human ‘algorithms’ have access to additional information that they (mistakenly) believe will aid predictions, such as cultural background knowledge about the sorts of people who select different majors, or about the conditions under which someone might do well versus poorly on the GMAT. People may simply think they’re implementing better algorithms than the computer-based alternatives.

So, here’s what I want to know. If this research reflects a preference for ‘human algorithms’ over ‘nonhuman algorithms,’ what is it that makes an algorithm human? And if we don’t conceptualize our own decisions as evidence-based rules of some sort, what exactly do we think they are?”

Maybe it’s just me, but it seems Lombrozo can’t quite imagine how people might understand their own thinking if not on the model of an algorithm.

These two pieces raise a series of questions for me, and I’ll leave you with them:

What is thinking? What do we think we are doing when we are thinking? Can we imagine thinking as something more and other than rule-oriented problem solving or cost/benefit analysis? Have we surrendered our thinking to the controlling power of one master metaphor, the algorithm?

(Spoiler alert: I think the work of Hannah Arendt is of immense help in these matters.)