Data-Driven Regimes of Truth

Below are excerpts from three items that came across my browser this past week. I thought it useful to juxtapose them here.

The first is Andrea Turpin’s review in The Hedgehog Review of Science, Democracy, and the American University: From the Civil War to the Cold War, a new book by Andrew Jewett about the role of science as a unifying principle in American politics and public policy.

“Jewett calls the champions of that forgotten understanding ‘scientific democrats.’ They first articulated their ideas in the late nineteenth century out of distress at the apparent impotence of culturally dominant Protestant Christianity to prevent growing divisions in American politics—most violently in the Civil War, then in the nation’s widening class fissure. Scientific democrats anticipated educating the public on the principles and attitudes of scientific practice, looking to succeed in fostering social consensus where a fissiparous Protestantism had failed. They hoped that widely cultivating the habit of seeking empirical truth outside oneself would produce both the information and the broader sympathies needed to structure a fairer society than one dominated by Gilded Age individualism.

Questions soon arose: What should be the role of scientific experts versus ordinary citizens in building the ideal society? Was it possible for either scientists or citizens to be truly disinterested when developing policies with implications for their own economic and social standing? Jewett skillfully teases out the subtleties of the resulting variety of approaches in order to ‘reveal many of the insights and blind spots that can result from a view of science as a cultural foundation for democratic politics.’”

The second piece, “When Fitbit is the Expert,” appeared in The Atlantic. In it, Kate Crawford discusses how data gathered by wearable devices can be used for and against its users in court.

“Self-tracking using a wearable device can be fascinating. It can drive you to exercise more, make you reflect on how much (or little) you sleep, and help you detect patterns in your mood over time. But something else is happening when you use a wearable device, something that is less immediately apparent: You are no longer the only source of data about yourself. The data you unconsciously produce by going about your day is being stored up over time by one or several entities. And now it could be used against you in court.”

[….]

“Ultimately, the Fitbit case may be just one step in a much bigger shift toward a data-driven regime of ‘truth.’ Prioritizing data—irregular, unreliable data—over human reporting, means putting power in the hands of an algorithm. These systems are imperfect—just as human judgments can be—and it will be increasingly important for people to be able to see behind the curtain rather than accept device data as irrefutable courtroom evidence. In the meantime, users should think of wearables as partial witnesses, ones that carry their own affordances and biases.”

The final excerpt comes from an interview with Mathias Döpfner in the Columbia Journalism Review. Döpfner is the CEO of the largest publishing company in Europe and has been outspoken in his criticisms of American technology firms such as Google and Facebook.

“It’s interesting to see the difference between the US debate on data protection, data security, transparency and how this issue is handled in Europe. In the US, the perception is, ‘What’s the problem? If you have nothing to hide, you have nothing to fear. We can share everything with everybody, and being able to take advantage of data is great.’ In Europe it’s totally different. There is a huge concern about what institutions—commercial institutions and political institutions—can do with your data. The US representatives tend to say, ‘Those are the back-looking Europeans; they have an outdated view. The tech economy is based on data.’”

Döpfner goes out of his way to indicate that he is a regulatory minimalist and that he deeply admires American-style tech-entrepreneurship. But ….

“In Europe there is more sensitivity because of the history. The Europeans know that total transparency and total control of data leads to totalitarian societies. The Nazi system and the socialist system were based on total transparency. The Holocaust happened because the Nazis knew exactly who was a Jew, where a Jew was living, how and at what time they could get him; every Jew got a number as a tattoo on his arm before they were gassed in the concentration camps.”

Perhaps that’s a tad alarmist, I don’t know. The thing about alarmism is that only in hindsight can it be definitively identified.

Here’s the thread that united these pieces in my mind. Jewett’s book, assuming the reliability of Turpin’s review, is about an earlier attempt to find a new frame of reference for American political culture. Deliberative democracy works best when citizens share a moral framework from which their arguments and counter-arguments derive their meaning. Absent such a broadly shared moral framework, competing claims can never really be meaningfully argued for or against; they can only be asserted or denounced. What Jewett describes, it seems, is just the particular American case of a pattern that is characteristic of secular modernity writ large. The eclipse of traditional religious belief leads to a search for new sources of unity and moral authority.

For a variety of reasons, the project to ground American political culture in publicly accessible science did not succeed. (It appears, by the way, that Jewett’s book is an attempt to revive the effort.) It failed, in part, because it became apparent that science itself was not exactly value-free, at least not as it was practiced by actual human beings. Additionally, it seems to me, the success of the project assumed that all political problems, that is, all problems that arise when human beings try to live together, were subject to scientific analysis and resolution. This strikes me as an unwarranted assumption.

In any case, it would seem that proponents of a certain strand of Big Data ideology now want to offer Big Data as the framework that unifies society and resolves political and ethical issues related to public policy. This is part of what I read into Crawford’s suggestion that we are moving into “a data-driven regime of ‘truth.’” “Science says” replaced “God says”; and now “Science says” is being replaced by “Big Data says.”

To put it another way, Big Data offers to fill the cultural role that was vacated by religious belief. It was a role that, in their turn, Reason, Art, and Science have all tried to fill. In short, certain advocates of Big Data need to read Nietzsche’s Twilight of the Idols. Big Data may just be another God-term, an idol that needs to be sounded with a hammer and found hollow.

Finally, Döpfner’s comments are just a reminder of the darker uses to which data can be and has been put, particularly when thoughtfulness and judgment have been marginalized.

Thinking About Big Data

I want to pass on to you three pieces on what has come to be known as Big Data, a diverse set of practices enabled by the power of modern computing to accumulate and process massive amounts of data. The first piece, “View from Nowhere,” is by Nathan Jurgenson. Jurgenson argues that the aspirations attached to Big Data, particularly in the realm of human affairs, amount to a revival of Positivism:

“The rationalist fantasy that enough data can be collected with the ‘right’ methodology to provide an objective and disinterested picture of reality is an old and familiar one: positivism. This is the understanding that the social world can be known and explained from a value-neutral, transcendent view from nowhere in particular.”

Jurgenson goes on to challenge these positivist assumptions through a critical reading of OkCupid CEO Christian Rudder’s new book Dataclysm: Who We Are (When We Think No One’s Looking).

The second piece is an op-ed in the NY Times by Frank Pasquale, “The Dark Market for Personal Data.” Pasquale considers the risks to privacy associated with the gathering and selling of personal information by companies equipped to mine and package such data. Pasquale concludes,

“We need regulation to help consumers recognize the perils of the new information landscape without being overwhelmed with data. The right to be notified about the use of one’s data and the right to challenge and correct errors is fundamental. Without these protections, we’ll continue to be judged by a big-data Star Chamber of unaccountable decision makers using questionable sources.”

Finally, here is a journal article, “Obscurity and Privacy,” by Evan Selinger and Woodrow Hartzog. Selinger and Hartzog offer obscurity as an explanatory concept to help clarify our thinking about the sorts of issues that usually get lumped together as matters of privacy. Privacy, however, may not be a sufficiently robust concept to meet the challenges posed by Big Data.

“Obscurity identifies some of the fundamental ways information can be obtained or kept out of reach, correctly interpreted or misunderstood. Appeals to obscurity can generate explanatory power, clarifying how advances in the sciences of data collection and analysis, innovation in domains related to information and communication technology, and changes to social norms can alter the privacy landscape and give rise to three core problems: 1) new breaches of etiquette, 2) new privacy interests, and 3) new privacy harms.”

In each of these areas, obscurity names the relative confidence individuals can have that the data trail they leave behind as a matter of course will not be readily accessible:

“When information is hard to understand, the only people who will grasp it are those with sufficient motivation to push past the layer of opacity protecting it. Sense-making processes of interpretation are required to understand what is communicated and, if applicable, whom the communications concerns. If the hermeneutic challenge is too steep, the person attempting to decipher the content can come to faulty conclusions, or grow frustrated and give up the detective work. In the latter case, effort becomes a deterrent, just like in instances where information is not readily available.”

Big Data practices have made it increasingly difficult to achieve this relative obscurity, thus posing a novel set of social and personal challenges. For example, the risks Pasquale identifies in his op-ed may be understood as risks that follow from a loss of obscurity. Read the whole piece for a better understanding of these challenges. In fact, be sure to read all three pieces. Jurgenson, Selinger, and Pasquale are among our most thoughtful guides in these matters.

Allow me to wrap this post up with a couple of additional observations. Returning to Jurgenson’s thesis about Big Data–that Big Data is a neo-Positivist ideology–I’m reminded that positivist sociology, or social physics, was premised on the assumption that the social realm operated in predictable, law-like fashion, much as the natural world operated according to the Newtonian world picture. In other words, human action was, at root, rational and thus predictable. The early twentieth century profoundly challenged this confidence in human rationality. Think, for instance, of the carnage of the Great War and the advent of Freudianism. Suddenly, humanity seemed less rational and, consequently, the prospect of uncovering law-like principles of human society must have seemed far more implausible. Interestingly, this irrationality preserved our humanity, insofar as our humanity was understood to consist of an irreducible spontaneity, freedom, and unpredictability. In other words, our humanity was preserved so long as the Other against which it was defined was the Machine.

If Big Data is neo-Positivist, and I think Jurgenson is certainly on to something with that characterization, it aims to transcend the earlier failure of Comtean Positivism. It acknowledges the irrationality of human behavior, but it construes it, paradoxically, as Predictable Irrationality. In other words, it suggests that we can know what we cannot understand. And this recalls Evgeny Morozov’s critical remarks in “Every Little Byte Counts”:

“The predictive models Tucker celebrates are good at telling us what could happen, but they cannot tell us why. As Tucker himself acknowledges, we can learn that some people are more prone to having flat tires and, by analyzing heaps of data, we can even identify who they are — which might be enough to prevent an accident — but the exact reasons defy us.

Such aversion to understanding causality has a political cost. To apply such logic to more consequential problems — health, education, crime — could bias us into thinking that our problems stem from our own poor choices. This is not very surprising, given that the self-tracking gadget in our hands can only nudge us to change our behavior, not reform society at large. But surely many of the problems that plague our health and educational systems stem from the failures of institutions, not just individuals.”

It also suggests that some of the anxieties associated with Big Data may not be unlike those occasioned by the earlier positivism–they are anxieties about our humanity. If we buy into the story Big Data tells about itself, then it threatens, finally, to make our actions scrutable and predictable, suggesting that we are not as free, independent, spontaneous, or unique as we might imagine ourselves to be.

Thinking Without a Bannister

In politics and religion, especially, moderates are in high demand, and understandably so. The demand for moderates reflects growing impatience with polarization, extremism, and vacuous partisan rancor. But perhaps these calls for moderation are misguided, or, at best, incomplete.

To be clear, I have no interest in defending extremism, political or otherwise. But having said that, we immediately hit on part of the problem as I see it. While there are some obvious cases of broad agreement about what constitutes extremism–beheadings, say–it seems pretty clear that, in the more prosaic realms of everyday life, one person’s extremism may very well be another’s principled stand. In such cases, genuine debate and deliberation should follow. But if the way of the moderate is valued as an end in itself, then debate and deliberation may very well be undermined.

I use the phrase “the way of the moderate” in order to avoid using the word moderation. The reason for this is that moderation, to my mind anyway, suggests something a bit different than what I have in view here in talking about the hankering for moderates. Moderation, for instance, may be associated with Aristotle’s approach to virtue, which I rather appreciate.

But moderation in that sense is not really what I have in mind here. I may agree with Aristotle, for instance, that courage is the mean between cowardice on the one hand and foolhardiness on the other. But I’m not sure that such a methodology, which may work rather well in helping us understand the virtues, can be usefully transferred into other realms of life. To be more specific, I do not think that you can approach, to put it quaintly, matters of truth in that fashion, at least not as a rule.

In other words, it does not follow that if two people are arguing about a complex political, social, or economic problem I can simply split the difference between the two and thereby arrive at the truth. It may be that both are desperately wrong and a compromise position between the two would be just as wrong. It may be that one of the two parties is, in fact, right and that a compromise between the two would, again, turn out to be wrong.

The way of the moderate, then, amounts to a kind of intellectual triangulation between two perceived extremes. One need not think about what might be right, true, or just; rather, one takes stock of the positions on the far right and the far left and aims for some sort of mean between the two, even if the position that results is incoherent or unworkable. This sort of intellectual triangulation is also a form of intellectual sloth.

Where the way of the moderate is reflexively favored, it would be enough to successfully frame an opponent as being either “far right” or “far left.” Further debate and deliberation would be superfluous and mere pretense. And, of course, that is exactly what we see in our political discourse.

Again, given our political culture, it is easy to see why the way of the moderate is appealing and tempting. But, sadly, the way of the moderate as I’ve described it does not escape the extremism and rancor that it bemoans. In fact, it is still controlled by it. If I seek to move forward by triangulating a position between two perceived extreme coordinates, I am allowing those extremes to determine my own path. We may very well need a third path, or even a fourth and fifth, but we should not assume that such a path can be found by passing through the middle of the extremes we seek to avoid. Such an assumption is the very opposite of the “independence” that is supposedly demonstrated by pursuing it.

Paradoxically, then, we might understand the way of the moderate as the flip side of the extremism and partisanship it seeks to counteract. What they both have in common is thoughtlessness. On the one hand you get the thoughtlessness of sheer conformity; the line is toed, platitudes are professed, and dissent is silenced. On the other, you sidestep the responsibility for independent thought by splitting the presumed difference between the two perceived extremes.

We do not need moderation of this sort; we need more thought.

In the conference transcripts I mentioned a few days ago, Hannah Arendt was asked about her political leanings and her position on capitalism. She responded this way: “So you ask me where I am. I am nowhere. I am really not in the mainstream of present or any other political thought. But not because I want to be so original–it so happens that I somehow don’t fit.”

A little further on she went on to discuss what she calls thinking without a bannister:

“You said ‘groundless thinking.’ I have a metaphor which is not quite that cruel, and which I have never published but kept for myself. I call it thinking without a bannister. In German, Denken ohne Geländer. That is, as you go up and down the stairs you can always hold onto the bannister so that you don’t fall down. But we have lost this bannister. That is the way I tell it to myself. And this is indeed what I try to do.”

And she added:

“This business that the tradition is broken and the Ariadne thread is lost. Well, that is not quite as new as I made it out to be. It was, after all, Tocqueville who said that ‘the past has ceased to throw its light onto the future, and the mind of man wanders in darkness.’ This is the situation since the middle of the last century, and, seen from the viewpoint of Tocqueville, entirely true. I always thought that one has got to start thinking as though nobody had thought before, and then start learning from everybody else.”

I’m not sure that I agree with Arendt in every respect, but I think we should take her call to start thinking as though nobody had thought before quite seriously.

I’ll leave you with one more encouragement in that general direction, this one from a recent piece by Alan Jacobs.

“I guess what I’m asking for is pretty simple: for writers of all kinds, journalists as well as fiction writers, and artists and academics, to strive to extricate themselves from an ‘artificial obvious’ that has been constructed for us by the dominant institutions of our culture. Simple; also probably impossible. But it’s worth trying. Few things are more worth trying.”

One step in this direction, I think, is to avoid the temptation presented to us by the way of the moderate as I’ve described it here. Very often what is needed is to, somehow, break altogether from the false dilemmas and binary oppositions presented to us.

Building Worlds In Which We Matter

Sometimes, when you start writing, you end up somewhere very different from where you thought you were going at the outset. That’s what happened with my last post, which ended up framing Peter Thiel as a would-be, latter-day Francis Bacon. What I set out to write about, however, was the closing line of this paragraph:

“Indefinite longevity—as opposed to the literal immortality promised by the Singularity—might be considered to be in the spirit of the great founder Machiavelli. At the end of ‘You Are Not a Lottery Ticket,’ however, Thiel calls for a ‘cultural revolution’ that allows us to plan to make our futures as definite as possible. That means no more taking orders from John Rawls or Malcolm Gladwell; they are too accepting of the place of luck (or fortuna, to use Machiavelli’s word) in human affairs. It also means ‘rejecting the unjust tyranny of Chance’ by seeing that ‘You can have agency not just over your own life, but over a small and important part of the world.’”

That’s from Peter Lawler’s discussion of Thiel’s understanding of the role of luck in a start-up’s success, or lack thereof. In short, Thiel thinks we are in danger of making too much of luck. In fact, he hopes to mitigate the role of chance as much as possible. He would have us maximize our control and mastery over the chaotic flow of time. This was in part what elicited the comparison to Francis Bacon.

What first caught my attention, however, was that line from Thiel quoted at the end of the paragraph: “You can have agency not just over your own life, but over a small and important part of the world.”

More specifically, it was the last clause that piqued my interest. I read this desire to have agency “over a small and important part of the world” in light of Hannah Arendt’s theory of action. Action, in her view, is the most important shape our “doing” takes in this world. In her three-fold account of our doing, there is labor, which seeks to meet our basic bodily needs; there is also work, through which we build up an enduring life-world of artifice and culture; and then there is action.

Here is how Arendt describes action in an oft-cited passage from The Human Condition:

“Action, the only activity that goes on directly between men without the intermediary of things or matter, corresponds to the human condition of plurality … this plurality is specifically the condition — not only the conditio sine qua non, but the conditio per quam — of all political life ”

Action is a political category; it is possible only in the context of plurality, when men and women are present to each other. Plurality is one condition of action; the other is freedom. This is not freedom understood merely as a lack of constraint, but rather as the ability to initiate, to begin something, to create (the power Arendt called natality). It is through this action that men and women disclose their true selves and ground their identity. Action discloses not only “what” we are, but “who” we are.

There’s much more that could be said (the category of action is central to Arendt’s political theory), but that should be enough to ground what follows. Arendt worried about the loss of public spaces in which human beings might engage in action. She distinguished between the private, the social, and the public. Roughly put, the private is the sphere of the family and the household. The public is the sphere in which we may act in a self-disclosing manner as described above, where we might express the fullness of our humanity. The social is the realm of bureaucracy and the faceless crowd; rather than self-disclosure, it is the realm of anonymity that forecloses the possibility of action.

I find Arendt’s conception of action and identity compelling. If Aristotle is right and we are political animals, then to some degree we seek to appear in a meaningful fashion among our peers, to act and to be acknowledged. And this is how I read the unspoken subtext of Thiel’s desire to exert agency over a small and important part of the world.

But what if, as Arendt worried in the mid-twentieth century, the world we have built is not amenable to action and self-disclosure? What if we have gradually eliminated the public spaces in which action was possible? Remember, public here does not simply refer to any physical space that someone might freely enter, like a park. Rather, it is a space constituted by the gathering of people and in which individuals can act in a meaningful and consequential manner.

If our world is one in which we find it increasingly difficult to appear before others in a meaningful fashion, then we have perhaps two options left to us. One might be to force our appearance upon the world by actions of such dramatic scale that they are able to register in the social world, even if just fleetingly. This sort of action, which is not truly action in Arendt’s sense, tends to be rare and frequently destructive.

The other option, of course, is to find or create a world in which action matters. I immediately thought of the immensely popular online game Minecraft. To be clear, I have never played Minecraft; these are the observations of an outsider. I’ve only seen it played and read about it. In fact, Robin Sloan’s recent piece about the game explains why it is kicking around in my mind just now.

According to Sloan, the secret of Minecraft is that “it does not merely allow […] co-creation but requires it.” In other words, Minecraft is a virtual world in which you work in the midst of others and create with them. Behold: freedom and plurality (of sorts), and thus action. And, of course, it is not only Minecraft. Consider as well the tremendous popularity of multiplayer online role-playing games like World of Warcraft. In these cases, players inhabit virtual worlds in which they may appear before others and act with consequence to win a victory or secure a goal.

I understand, of course, that, at best, I am using the words appear, act, and consequence in a manner that is merely analogical to what Arendt meant by these same words. But the analogical relationship may tell us something about the appeal of these virtual worlds. Conversely, their popularity may also tell us something about our (non-virtual) world.

This brings me, finally, to the point I first set out to make–really, it is a question I’d like to pose (at the expense of egregious digital dualism): Are we building and participating in virtual worlds where our actions matter because in the real world, they don’t?

Let me expand on that question with a hypothetical scenario. I read recently about one man’s vision for ameliorating dire living conditions in a potential future when urban housing is reduced to 100-square-foot windowless apartments. The solution: “‘Mixed Reality Living Spaces,’ where technology is used to create immersive environments that give the inhabitant an illusion of living in a much larger, well-lit space.” I came across that article on Twitter via Christopher Mims. Frank Pasquale likened the scenario to a variation of the living arrangements in Forster’s “The Machine Stops.” I kicked in a comparison to the multimedia walls that Ray Bradbury imagined in Fahrenheit 451, where the well-to-do could afford four screen-walls for total immersion.

Imagine, if you will, a future in which people have retreated into immersive media environments, private worlds masquerading as faux-public spheres, where they find it possible to engage in something that approximates meaningful action in the (virtual) presence of others.

In imagining such a scenario it would be far too easy to complain about the hapless masses that so easily retreat into their own personal holodecks, abandoning the world in favor of their escapist fantasies. It would be far too easy because it would avoid the more important consideration. How did we arrive at a society in which virtual worlds afforded the only possibility for meaningful, self-disclosing action that most people would ever encounter?

That scenario, of course, extrapolates in exaggerated fashion from a few present realities. Nonetheless, it gets at questions worth considering. If Arendt is right about the role and significance of what she calls action, then it is right and appropriate that individuals seek for it. Where might individuals find these public spaces today? How can we ensure the possibility for meaningful action in the world? How can we avoid a world in which people are drawn into virtual worlds because it is only there that they feel they matter?

The Political Perils of “Big Data”

In “Every Little Byte Counts,” a recent review of two books on “advances in our ability to store, analyze and profit from vast amounts of data generated by our gadgets” (otherwise known as Big Data), Evgeny Morozov makes two observations to which I want to draw your attention. 

The first of these he makes with the help of the Italian philosopher, Giorgio Agamben. Here are Morozov’s first two paragraphs: 

In “On What We Can Not Do,” a short and pungent essay published a few years ago, the Italian philosopher Giorgio Agamben outlined two ways in which power operates today. There’s the conventional type that seeks to limit our potential for self-­development by restricting material resources and banning certain behaviors. But there’s also a subtler, more insidious type, which limits not what we can do but what we can not do. What’s at stake here is not so much our ability to do things but our capacity not to make use of that very ability.

While each of us can still choose not to be on Facebook, have a credit history or build a presence online, can we really afford not to do any of those things today? It was acceptable not to have a cellphone when most people didn’t have them; today, when almost everybody does and when our phone habits can even be used to assess whether we qualify for a loan, such acts of refusal border on the impossible.

This is a profoundly important observation, and it is hardly ever made. In his brief but insightful book, Nature and Altering It, ethicist Allen Verhey articulated a similar concern. Verhey discusses a series of myths that underlie our understanding of nature (earlier he cataloged 16 uses of the idea of “nature”). While discussing one of these myths, the myth of the project of liberal society, Verhey writes,

“Finally, however, the folly of the myth of liberal society is displayed in the pretense that ‘maximizing freedom’ is always morally innocent. ‘Maximizing freedom,’ however, can ironically increase our bondage. What is introduced as a way to increase our options can become socially enforced. The point can easily be illustrated with technology. New technologies are frequently introduced as ways to increase our options, as ways to maximize our freedom, but they can become socially enforced. The automobile was introduced as an option, as an alternative to the horse, but it is now socially enforced …. The technology that surrounds our dying was introduced to give doctors and patients options in the face of disease and death, but such ‘options’ have become socially enforced; at least one sometimes still hears, “We have no choice!” And the technology that may come to surround birth, including pre-natal diagnosis, for example, may come to be socially enforced. ‘What? You knew you were at risk for bearing a child with XYZ, and you did nothing about it? And now you expect help with this child?’ Now it is possible, of course, to claim that cars and CPR and pre-natal diagnosis are the path of progress, but then the argument has shifted from the celebration of options and the maximizing of freedom to something else, to the meaning of progress.”

The second point from Morozov’s review that I want to draw your attention to involves the political consequences of tools that harness the predictive power of Big Data, a power divorced from understanding:

“The predictive models Tucker celebrates are good at telling us what could happen, but they cannot tell us why. As Tucker himself acknowledges, we can learn that some people are more prone to having flat tires and, by analyzing heaps of data, we can even identify who they are — which might be enough to prevent an accident — but the exact reasons defy us.

Such aversion to understanding causality has a political cost. To apply such logic to more consequential problems — health, education, crime — could bias us into thinking that our problems stem from our own poor choices. This is not very surprising, given that the self-tracking gadget in our hands can only nudge us to change our behavior, not reform society at large. But surely many of the problems that plague our health and educational systems stem from the failures of institutions, not just individuals.”

Moreover, as Hannah Arendt put it in The Human Condition, politics is premised on the ability of human beings to “talk with and make sense to each other and to themselves.” Divorcing action from understanding jeopardizes the premise upon which democratic self-governance is founded, the possibility of deliberative judgment. Is it an exaggeration to speak of the prospective tyranny of the algorithm?

I’ll give Morozov the penultimate word:

“It may be that the first kind of power identified by Agamben is actually less pernicious, for, in barring us from doing certain things, it at least preserves, even nurtures, our capacity to resist. But as we lose our ability not to do — here Agamben is absolutely right — our capacity to resist goes away with it. Perhaps it’s easier to resist the power that bars us from using our smartphones than the one that bars us from not using them. Big Data does not a free society make, at least not without basic political judgment.”

I draw your attention to these concerns not because I have an adequate response to them, but because I am increasingly convinced that they are among the most pressing concerns we must grapple with in the years ahead.