Thinking About Big Data

I want to pass on to you three pieces on what has come to be known as Big Data, a diverse set of practices enabled by the power of modern computing to accumulate and process massive amounts of data. The first piece, “View from Nowhere,” is by Nathan Jurgenson. Jurgenson argues that the aspirations attached to Big Data, particularly in the realm of human affairs, amount to a revival of Positivism:

“The rationalist fantasy that enough data can be collected with the ‘right’ methodology to provide an objective and disinterested picture of reality is an old and familiar one: positivism. This is the understanding that the social world can be known and explained from a value-neutral, transcendent view from nowhere in particular.”

Jurgenson goes on to challenge these positivist assumptions through a critical reading of OkCupid CEO Christian Rudder’s new book Dataclysm: Who We Are (When We Think No One’s Looking).

The second piece is an op-ed in the NY Times by Frank Pasquale, “The Dark Market for Personal Data.” Pasquale considers the risks to privacy associated with the gathering and selling of personal information by companies equipped to mine and package such data. Pasquale concludes,

“We need regulation to help consumers recognize the perils of the new information landscape without being overwhelmed with data. The right to be notified about the use of one’s data and the right to challenge and correct errors is fundamental. Without these protections, we’ll continue to be judged by a big-data Star Chamber of unaccountable decision makers using questionable sources.”

Finally, here is a journal article, “Obscurity and Privacy,” by Evan Selinger and Woodrow Hartzog. Selinger and Hartzog offer obscurity as an explanatory concept to help clarify our thinking about the sorts of issues that usually get lumped together as matters of privacy. Privacy, however, may not be a sufficiently robust concept to meet the challenges posed by Big Data.

“Obscurity identifies some of the fundamental ways information can be obtained or kept out of reach, correctly interpreted or misunderstood. Appeals to obscurity can generate explanatory power, clarifying how advances in the sciences of data collection and analysis, innovation in domains related to information and communication technology, and changes to social norms can alter the privacy landscape and give rise to three core problems: 1) new breaches of etiquette, 2) new privacy interests, and 3) new privacy harms.”

In each of these areas, obscurity names the relative confidence individuals can have that the data trail they leave behind as a matter of course will not be readily accessible:

“When information is hard to understand, the only people who will grasp it are those with sufficient motivation to push past the layer of opacity protecting it. Sense-making processes of interpretation are required to understand what is communicated and, if applicable, whom the communications concerns. If the hermeneutic challenge is too steep, the person attempting to decipher the content can come to faulty conclusions, or grow frustrated and give up the detective work. In the latter case, effort becomes a deterrent, just like in instances where information is not readily available.”

Big Data practices have made it increasingly difficult to achieve this relative obscurity, thus posing a novel set of social and personal challenges. For example, the risks Pasquale identifies in his op-ed may be understood as risks that follow from a loss of obscurity. Read the whole piece for a better understanding of these challenges. In fact, be sure to read all three pieces. Jurgenson, Selinger, and Pasquale are among our most thoughtful guides in these matters.

Allow me to wrap this post up with a couple of additional observations. Returning to Jurgenson’s thesis about Big Data–that Big Data is a neo-Positivist ideology–I’m reminded that positivist sociology, or social physics, was premised on the assumption that the social realm operated in predictable, law-like fashion, much as the natural world operated according to the Newtonian world picture. In other words, human action was, at root, rational and thus predictable. The early twentieth century profoundly challenged this confidence in human rationality. Think, for instance, of the carnage of the Great War, or of Freudianism. Suddenly, humanity seemed less rational and, consequently, the prospect of uncovering law-like principles of human society must have seemed far less plausible. Interestingly, this irrationality preserved our humanity, insofar as our humanity was understood to consist of an irreducible spontaneity, freedom, and unpredictability. It did so, in other words, so long as the Other against which our humanity was defined was the Machine.

If Big Data is neo-Positivist, and I think Jurgenson is certainly on to something with that characterization, it aims to transcend the earlier failure of Comteian Positivism. It acknowledges the irrationality of human behavior, but it construes it, paradoxically, as Predictable Irrationality. In other words, it suggests that we can know what we cannot understand. And this recalls Evgeny Morozov’s critical remarks in “Every Little Byte Counts,”

“The predictive models Tucker celebrates are good at telling us what could happen, but they cannot tell us why. As Tucker himself acknowledges, we can learn that some people are more prone to having flat tires and, by analyzing heaps of data, we can even identify who they are — which might be enough to prevent an accident — but the exact reasons defy us.

Such aversion to understanding causality has a political cost. To apply such logic to more consequential problems — health, education, crime — could bias us into thinking that our problems stem from our own poor choices. This is not very surprising, given that the self-tracking gadget in our hands can only nudge us to change our behavior, not reform society at large. But surely many of the problems that plague our health and educational systems stem from the failures of institutions, not just individuals.”

It also suggests that some of the anxieties associated with Big Data may not be unlike those occasioned by the earlier positivism–they are anxieties about our humanity. If we buy into the story Big Data tells about itself, then it threatens, finally, to make our actions scrutable and predictable, suggesting that we are not as free, independent, spontaneous, or unique as we might imagine ourselves to be.

Thinking Without a Bannister

In politics and religion, especially, moderates are in high demand, and understandably so. The demand for moderates reflects growing impatience with polarization, extremism, and vacuous partisan rancor. But perhaps these calls for moderation are misguided, or, at best, incomplete.

To be clear, I have no interest in defending extremism, political or otherwise. But having said that, we immediately hit on part of the problem as I see it. While there are some obvious cases of broad agreement about what constitutes extremism–beheadings, say–it seems pretty clear that, in the more prosaic realms of everyday life, one person’s extremism may very well be another’s principled stand. In such cases, genuine debate and deliberation should follow. But if the way of the moderate is valued as an end in itself, then debate and deliberation may very well be undermined.

I use the phrase “the way of the moderate” in order to avoid using the word moderation. The reason for this is that moderation, to my mind anyway, suggests something a bit different than what I have in view here in talking about the hankering for moderates. Moderation, for instance, may be associated with Aristotle’s approach to virtue, which I rather appreciate.

But moderation in that sense is not really what I have in mind here. I may agree with Aristotle, for instance, that courage is the mean between cowardice on the one hand and foolhardiness on the other. But I’m not sure that such a methodology, which may work rather well in helping us understand the virtues, can be usefully transferred into other realms of life. To be more specific, I do not think that you can approach, to put it quaintly, matters of truth in that fashion, at least not as a rule.

In other words, it does not follow that if two people are arguing about a complex political, social, or economic problem I can simply split the difference between the two and thereby arrive at the truth. It may be that both are desperately wrong and a compromise position between the two would be just as wrong. It may be that one of the two parties is, in fact, right and that a compromise between the two would, again, turn out to be wrong.

The way of the moderate, then, amounts to a kind of intellectual triangulation between two perceived extremes. One need not think about what might be right, true, or just; rather, one takes stock of the positions on the far right and the far left and aims for some sort of mean between the two, even if the position that results is incoherent or unworkable. This sort of intellectual triangulation is also a form of intellectual sloth.

Where the way of the moderate is reflexively favored, it would be enough to successfully frame an opponent as being either “far right” or “far left.” Further debate and deliberation would be superfluous and mere pretense. And, of course, that is exactly what we see in our political discourse.

Again, given our political culture, it is easy to see why the way of the moderate is appealing and tempting. But, sadly, the way of the moderate as I’ve described it does not escape the extremism and rancor that it bemoans. In fact, it is still controlled by it. If I seek to move forward by triangulating a position between two perceived extreme coordinates, I am allowing those extremes to determine my own path. We may very well need a third path, or even a fourth and fifth, but we should not assume that such a path can be found by passing through the middle of the extremes we seek to avoid. Such an assumption is the very opposite of the “independence” that is supposedly demonstrated by pursuing it.

Paradoxically, then, we might understand the way of the moderate as the flip side of the extremism and partisanship it seeks to counteract. What they both have in common is thoughtlessness. On the one hand you get the thoughtlessness of sheer conformity; the line is toed, platitudes are professed, and dissent is silenced. On the other, you sidestep the responsibility for independent thought by splitting the presumed difference between the two perceived extremes.

We do not need moderation of this sort; we need more thought.

In the conference transcripts I mentioned a few days ago, Hannah Arendt was asked about her political leanings and her position on capitalism. She responded this way: “So you ask me where I am. I am nowhere. I am really not in the mainstream of present or any other political thought. But not because I want to be so original–it so happens that I somehow don’t fit.”

A little further on she went on to discuss what she calls thinking without a bannister:

“You said ‘groundless thinking.’ I have a metaphor which is not quite that cruel, and which I have never published but kept for myself. I call it thinking without a bannister. In German, Denken ohne Geländer. That is, as you go up and down the stairs you can always hold onto the bannister so that you don’t fall down. But we have lost this bannister. That is the way I tell it to myself. And this is indeed what I try to do.”

And she added:

“This business that the tradition is broken and the Ariadne thread is lost. Well, that is not quite as new as I made it out to be. It was, after all, Tocqueville who said that ‘the past has ceased to throw its light onto the future, and the mind of man wanders in darkness.’ This is the situation since the middle of the last century, and, seen from the viewpoint of Tocqueville, entirely true. I always thought that one has got to start thinking as though nobody had thought before, and then start learning from everybody else.”

I’m not sure that I agree with Arendt in every respect, but I think we should take her call to start thinking as though nobody had thought before quite seriously.

I’ll leave you with one more encouragement in that general direction, this one from a recent piece by Alan Jacobs.

“I guess what I’m asking for is pretty simple: for writers of all kinds, journalists as well as fiction writers, and artists and academics, to strive to extricate themselves from an ‘artificial obvious’ that has been constructed for us by the dominant institutions of our culture. Simple; also probably impossible. But it’s worth trying. Few things are more worth trying.”

One step in this direction, I think, is to avoid the temptation presented to us by the way of the moderate as I’ve described it here. Very often what is needed is to, somehow, break altogether from the false dilemmas and binary oppositions presented to us.

Building Worlds In Which We Matter

Sometimes, when you start writing, you end up somewhere very different from where you thought you were going at the outset. That’s what happened with my last post which ended up framing Peter Thiel as a would-be, latter-day Francis Bacon. What I set out to write about, however, was the closing line of this paragraph:

“Indefinite longevity—as opposed to the literal immortality promised by the Singularity—might be considered to be in the spirit of the great founder Machiavelli. At the end of ‘You Are Not a Lottery Ticket,’ however, Thiel calls for a ‘cultural revolution’ that allows us to plan to make our futures as definite as possible. That means no more taking orders from John Rawls or Malcolm Gladwell; they are too accepting of the place of luck (or fortuna, to use Machiavelli’s word) in human affairs. It also means ‘rejecting the unjust tyranny of Chance’ by seeing that ‘You can have agency not just over your own life, but over a small and important part of the world.’”

That’s from Peter Lawler’s discussion of Thiel’s understanding of the role of luck in a start-up’s success, or lack thereof. In short, Thiel thinks we are in danger of making too much of luck. In fact, he hopes to mitigate the role of chance as much as possible. He would have us maximize our control and mastery over the chaotic flow of time. This was in part what elicited the comparison to Francis Bacon.

What first caught my attention, however, was that line from Thiel quoted at the end of the paragraph: “You can have agency not just over your own life, but over a small and important part of the world.”

More specifically, it was the last clause that piqued my interest. I read this desire to have agency “over a small and important part of the world” in light of Hannah Arendt’s theory of action. Action, in her view, is the most important shape our “doing” takes in this world. In her three-fold account of our doing, there is labor, which seeks to meet our basic bodily needs; there is also work, through which we build up an enduring life-world of artifice and culture; and then there is action.

Here is how Arendt describes action in an oft-cited passage from The Human Condition:

“Action, the only activity that goes on directly between men without the intermediary of things or matter, corresponds to the human condition of plurality … this plurality is specifically the condition — not only the conditio sine qua non, but the conditio per quam — of all political life ”

Action is a political category; it is possible only in the context of plurality, when men and women are present to each other. Plurality is one condition of action; the other is freedom. This is not freedom understood merely as a lack of constraint, but rather as the ability to initiate, to begin something, to create (the power Arendt called natality). It is through this action that men and women disclose their true selves and ground their identity. Action discloses not only “what” we are, but “who” we are.

There’s much more that could be said, since the category of action is central to Arendt’s political theory, but that should be enough to ground what follows. Arendt worried about the loss of public spaces in which human beings might engage in action. She distinguished between the private, the social, and the public. Roughly put, the private is the sphere of the family and the household. The public is the sphere in which we may act in a self-disclosing manner as described above, where we might express the fullness of our humanity. The social is the realm of bureaucracy and the faceless crowd; rather than self-disclosure, it is the realm of anonymity that forecloses the possibility of action.

I find Arendt’s conception of action and identity compelling. If Aristotle is right and we are political animals, then to some degree we seek to appear in a meaningful fashion among our peers, to act and to be acknowledged. And this is how I read the unspoken subtext of Thiel’s desire to exert agency over a small and important part of the world.

But what if, as Arendt worried in the mid-twentieth century, the world we have built is not amenable to action and self-disclosure? What if we have gradually eliminated the public spaces in which action was possible? Remember, public here does not simply refer to any physical space that someone might freely enter, like a park. Rather, it is a space constituted by the gathering of people and in which individuals can act in a meaningful and consequential manner.

If our world is one in which we find it increasingly difficult to appear before others in a meaningful fashion, then we have perhaps two options left to us. One might be to force our appearance upon the world by actions of such dramatic scale that they are able to register in the social world, even if just fleetingly. This sort of action, which is not truly action in Arendt’s sense, tends to be rare and frequently destructive.

The other option, of course, is to find or create a world in which action matters. I immediately thought of the immensely popular online game Minecraft. To be clear, I have never played Minecraft; these are the observations of an outsider. I’ve only seen it played and read about it. In fact, Robin Sloan’s recent piece about the game explains why it is kicking around in my mind just now.

According to Sloan, the secret of Minecraft is that “it does not merely allow [...] co-creation but requires it.” In other words, Minecraft is a virtual world in which you work in the midst of others and create with them. Behold: freedom and plurality (of sorts), and thus action. And, of course, it is not only Minecraft. Consider as well the tremendous popularity of multiplayer online role-playing games like World of Warcraft. In these cases, players inhabit virtual worlds in which they may appear before others and act with consequence to win a victory or secure a goal.

I understand, of course, that, at best, I am using the words appear, act, and consequence in a manner that is merely analogical to what Arendt meant by these same words. But the analogical relationship may tell us something about the appeal of these virtual worlds. Conversely, their popularity may also tell us something about our (non-virtual) world.

This brings me, finally, to the point I first set out to make–really, it is a question I’d like to pose (at the expense of egregious digital dualism): Are we building and participating in virtual worlds where our actions matter because in the real world, they don’t?

Let me expand on that question with a hypothetical scenario. I read recently about one man’s vision for ameliorating dire living conditions in a potential future when urban housing is reduced to 100 square-foot windowless apartments. The solution: “‘Mixed Reality Living Spaces,’ where technology is used to create immersive environments that give the inhabitant an illusion of living in a much larger, well-lit space.” I came across that article on Twitter via Christopher Mims. Frank Pasquale likened the scenario to a variation of the living arrangements in Forster’s “The Machine Stops.” I kicked in a comparison to the multimedia walls that Ray Bradbury imagined in Fahrenheit 451, where the well-to-do could afford four screen-walls for total immersion.

Imagine, if you will, a future in which people have retreated into immersive media environments, private worlds masquerading as faux-public spheres, where they find it possible to engage in something that approximates meaningful action in the (virtual) presence of others.

In imagining such a scenario it would be far too easy to complain about the hapless masses that so easily retreat into their own personal holodecks, abandoning the world in favor of their escapist fantasies. It would be far too easy because it would avoid the more important consideration. How did we arrive at a society in which virtual worlds afforded the only possibility for meaningful, self-disclosing action that most people would ever encounter?

That scenario, of course, extrapolates in exaggerated fashion from a few present realities. Nonetheless, it gets at questions worth considering. If Arendt is right about the role and significance of what she calls action, then it is right and appropriate that individuals seek for it. Where might individuals find these public spaces today? How can we ensure the possibility for meaningful action in the world? How can we avoid a world in which people are drawn into virtual worlds because it is only there that they feel they matter?

The Political Perils of “Big Data”

In “Every Little Byte Counts,” a recent review of two books on “advances in our ability to store, analyze and profit from vast amounts of data generated by our gadgets” (otherwise known as Big Data), Evgeny Morozov makes two observations to which I want to draw your attention. 

The first of these he makes with the help of the Italian philosopher, Giorgio Agamben. Here are Morozov’s first two paragraphs: 

“In ‘On What We Can Not Do,’ a short and pungent essay published a few years ago, the Italian philosopher Giorgio Agamben outlined two ways in which power operates today. There’s the conventional type that seeks to limit our potential for self-­development by restricting material resources and banning certain behaviors. But there’s also a subtler, more insidious type, which limits not what we can do but what we can not do. What’s at stake here is not so much our ability to do things but our capacity not to make use of that very ability.

While each of us can still choose not to be on Facebook, have a credit history or build a presence online, can we really afford not to do any of those things today? It was acceptable not to have a cellphone when most people didn’t have them; today, when almost everybody does and when our phone habits can even be used to assess whether we qualify for a loan, such acts of refusal border on the impossible.”

This is a profoundly important observation, and it is hardly ever made. In his brief but insightful book, Nature and Altering It, ethicist Allen Verhey articulated a similar concern. Verhey discusses a series of myths that underlie our understanding of nature (earlier he cataloged 16 uses of the idea of “nature”). While discussing one of these myths, the myth of the project of liberal society, Verhey writes,

“Finally, however, the folly of the myth of liberal society is displayed in the pretense that ‘maximizing freedom’ is always morally innocent. ‘Maximizing freedom,’ however, can ironically increase our bondage. What is introduced as a way to increase our options can become socially enforced. The point can easily be illustrated with technology. New technologies are frequently introduced as ways to increase our options, as ways to maximize our freedom, but they can become socially enforced. The automobile was introduced as an option, as an alternative to the horse, but it is now socially enforced …. The technology that surrounds our dying was introduced to give doctors and patients options in the face of disease and death, but such ‘options’ have become socially enforced; at least one sometimes still hears, “We have no choice!” And the technology that may come to surround birth, including pre-natal diagnosis, for example, may come to be socially enforced. ‘What? You knew you were at risk for bearing a child with XYZ, and you did nothing about it? And now you expect help with this child?’ Now it is possible, of course, to claim that cars and CPR and pre-natal diagnosis are the path of progress, but then the argument has shifted from the celebration of options and the maximizing of freedom to something else, to the meaning of progress.”

The second point from Morozov’s review that I want to draw your attention to involves the political consequences of tools that harness the predictive power of Big Data, a power divorced from understanding:

“The predictive models Tucker celebrates are good at telling us what could happen, but they cannot tell us why. As Tucker himself acknowledges, we can learn that some people are more prone to having flat tires and, by analyzing heaps of data, we can even identify who they are — which might be enough to prevent an accident — but the exact reasons defy us.

Such aversion to understanding causality has a political cost. To apply such logic to more consequential problems — health, education, crime — could bias us into thinking that our problems stem from our own poor choices. This is not very surprising, given that the self-tracking gadget in our hands can only nudge us to change our behavior, not reform society at large. But surely many of the problems that plague our health and educational systems stem from the failures of institutions, not just individuals.”

Moreover, as Hannah Arendt put it in The Human Condition, politics is premised on the ability of human beings to “talk with and make sense to each other and to themselves.” Divorcing action from understanding jeopardizes the premise upon which democratic self-governance is founded, the possibility of deliberative judgment. Is it an exaggeration to speak of the prospective tyranny of the algorithm?

I’ll give Morozov the penultimate word:

“It may be that the first kind of power identified by Agamben is actually less pernicious, for, in barring us from doing certain things, it at least preserves, even nurtures, our capacity to resist. But as we lose our ability not to do — here Agamben is absolutely right — our capacity to resist goes away with it. Perhaps it’s easier to resist the power that bars us from using our smartphones than the one that bars us from not using them. Big Data does not a free society make, at least not without basic political judgment.”

I draw your attention to these concerns not because I have an adequate response to them, but because I am increasingly convinced that they are among the most pressing concerns we must grapple with in the years ahead.

Technology, Moral Discourse, and Political Communities

According to Langdon Winner, neither ancient nor modern culture has been able to bring politics and technology together: classical culture because of its propensity to look down its nose, ontologically speaking, at the mechanical arts and manual labor; modern culture because of its relegation of science and technology to the private sphere and its assumptions about the nature of technological progress. (For more, see the previous post.)

The assumptions about technological progress that Winner alludes to in his article are of the sort that I’ve grouped under the Borg Complex. Fundamentally, they are assumptions about the inevitability and unalloyed goodness of technological progress. If technological development is inevitable, for better or for worse, then there is little use deliberating about it.

Interestingly, Winner elaborates his point by reference to the work of moral philosopher Alasdair MacIntyre. In his now classic work, After Virtue: A Study in Moral Theory, MacIntyre argued that contemporary moral discourse consistently devolves into acrimonious invective because it proceeds in the absence of a shared moral community or tradition.

Early in After Virtue, MacIntyre imagines a handful of typical moral debates that we are accustomed to hearing about or participating in. The sort of debates that convince no one to change their minds, and the sort, as well, in which both sides are convinced of the rationality of their position and the irrationality of their opponents’. Part of what MacIntyre argues is that neither side is necessarily more rational than the other. The problem is that the reasoning of both sides proceeds from incommensurable sets of moral communities, traditions, and social practices. In the absence of a shared moral vision that contextualizes specific moral claims and frames moral arguments there can be no meaningful moral discourse, only assertions and counter-assertions made with more or less civility.

Here is how Winner brings MacIntyre into his discussion:

“Another characteristic of contemporary discussions about technology policy is that, as Alasdair MacIntyre might have predicted, they involve what seem to be interminable moral controversies. In a typical dispute, one side offers policy proposals based upon what seem to be ethically sound moral arguments. Then the opposing side urges entirely different policies using arguments that appear equally well-grounded. The likelihood that the two (or more) sides can locate common ground is virtually nil.”

Winner then goes on to provide his own examples of how such seemingly fruitless debates play out. For instance,

“1a. Conditions of international competitiveness require measures to reduce production costs. Automation realized through the computerization of office and factory work is clearly the best way to do this at present. Even though it involves eliminating jobs, rapid automation is the way to achieve the greatest good for the greatest number in advanced industrial societies.

b. The strength of any economy depends upon the skills of people who actually do the work. Skills of this kind arise from traditions of practice handed down from one generation to the next. Automation that de-skills the work process ought to be rejected because it undermines the well-being of workers and harms their ability to contribute to society.”

“In this way,” Winner adds, “debates about technology policy confirm MacIntyre’s argument that modern societies lack the kinds of coherent social practice that might provide firm foundations for moral judgments and public policies.”

Again, the problem is not simply a breakdown of moral discourse; it is also the absence of a political community of public deliberation and action in which moral discourse might take shape and find traction. Again, Winner:

“[...] the trouble is not that we lack good arguments and theories, but rather that modern politics simply does not provide appropriate roles and institutions in which the goal of defining the common good in technology policy is a legitimate project.”

The exception that proves Winner’s rule is, I think, the Amish. Granted, of course, the scale and complexity of modern society are hardly comparable to those of an Amish community. That said, it is nonetheless instructive to appreciate Amish communities as tangible, lived examples of what it might look like to live in a political community whose moral traditions circumscribed the development of technology.

By contrast, as Winner put it in the title of one of his books, in modern society “technics-out-of-control” is a theme of political thought. It is a cliché for us to observe that technology barrels ahead leaving ethics and law a generation behind.

Given those two alternatives, it is not altogether unreasonable for someone to conclude that they would rather live with the promise and peril of modern technology than live within the constraints imposed by an Amish-style community. Fair enough. It’s worth wondering, however, whether our alternatives are, in fact, quite so stark.

In any case, Winner raises, as I see it, two important considerations. Our thinking about technology, if it is to be about more than private action, must reckon with the larger moral traditions, the sometimes unarticulated and unacknowledged visions of the good life, that frame our evaluations of technology. It must also find some way of reconstituting a meaningful political context for acting. Basically, then, we are talking not only about technology, but about democracy itself.