Orality and Literacy Revisited

“‘Tis all in pieces, all coherence gone,” lamented John Donne in 1611. The line is from An Anatomy of the World, a poem written by Donne to mark the death of his patron’s daughter at the age of sixteen. Immediately before and after this line, Donne alludes to ruptures in the social, political, philosophical, religious, and scientific assumptions of his age. In short, Donne is registering the sense of bewilderment and disorientation that marked the transition from the premodern to the modern world.

“And new philosophy calls all in doubt,
The element of fire is quite put out,
The sun is lost, and th’earth, and no man’s wit
Can well direct him where to look for it.
And freely men confess that this world’s spent,
When in the planets and the firmament
They seek so many new; they see that this
Is crumbled out again to his atomies.
‘Tis all in pieces, all coherence gone,
All just supply, and all relation;
Prince, subject, father, son, are things forgot,
For every man alone thinks he hath got
To be a phoenix, and that then can be
None of that kind, of which he is, but he.”

In the philosopher Stephen Toulmin’s compelling analysis of this transition, Cosmopolis: The Hidden Agenda of Modernity, Donne is writing before the defining structures of modernity would fully emerge to stabilize the social order. Donne’s was still a time of flux between the dissolution of an old order and the emergence of a new one that would take its place–his time, we might say, was the time of death throes and birth pangs.

As Alan Jacobs, among others, has noted, the story of the emergence of modernity is a story that cannot be told without paying close attention to the technological background against which intellectual, political, and religious developments unfolded. Those technologies that we tend to think of as media technologies–technologies of word, image, and sound or technologies of representation–have an especially important role to play in this story.

I mention all of this because we find ourselves in a position not unlike Donne’s: we too are caught in a moment of instability. Traditional institutions and old assumptions that passed for common sense are proving inadequate in the face of new challenges, but we are as yet uncertain about what new institutions and guiding assumptions will take their place. Right now, Donne’s lament resonates with us: “‘Tis all in pieces, all coherence gone.”

And for us no less than for those who witnessed the emergence of modernity, technological developments, especially new media technologies, are inextricably bound up with the social and cultural turbulence we are experiencing.

One useful way of thinking about these developments is provided by the work of the late Walter Ong. Ong was a scholar of immense learning who is best known for Orality and Literacy: The Technologizing of the Word, a study of the cultural consequences of writing. In Ong’s view, the advent of the written word dramatically reconfigured our mental and social worlds. Primary oral cultures, cultures that have no notion of writing at all, operate quite differently from literate cultures, cultures into which writing has been introduced.

Likewise, Ong argued, the consciousness of individuals in literate cultures differs markedly from that of individuals living in an oral culture. Writing in the late twentieth century, Ong also posited the emergence of a state of secondary orality created by electronic media.

I’ve been pleasantly surprised to see Ong and his work invoked, directly and indirectly, in a handful of pieces about media, politics, and the 2016 election.

In August 2015, Jeet Heer wrote a piece titled, “Donald Trump, Epic Hero.” In it, he proposed the following: “Trump’s rhetorical braggadocio and spite might seem crude, even juvenile. But his special way with words has a noble ancestry: flyting, a recurring trope in epic poetry that eerily resembles the real estate magnate’s habit of self-celebration and cruel mockery.” Heer, who wrote a 2004 essay on Ong for Books and Culture, grounded his defense of this thesis on Ong’s work.

In a post for Nieman Reports, Danielle Allen does not cite Ong, but she does invoke the distinction between oral and literate cultures. “Trump appears to have understood that the U.S. is transitioning from a text-based to an oral culture,” Allen concluded after discussing her early frustration with the lack of traditional policy position papers produced by the Trump campaign and its reliance on short video clips.

In Technology Review, Hossein Derakhshan, relying on Neil Postman rather than Walter Ong, argues that the image-based discourse that has, in his view, come to dominate the Internet has contributed to the rise of post-truth politics and that we would do well, for the sake of our democracy, to return to text-based discourse. “For one thing,” Derakhshan writes,

we need more text than videos in order to remain rational animals. Typography, as Postman describes, is in essence much more capable of communicating complex messages that provoke thinking. This means we should write and read more, link more often, and watch less television and fewer videos—and spend less time on Facebook, Instagram, and YouTube.

Writing for Bloomberg, Joe Weisenthal, like Heer, cites Ong’s Orality and Literacy to help explain Donald Trump’s rhetorical style. Building on scholarship that looked to Homer’s epic poetry for the residue of oral speech patterns, Ong identified various features of oral communication. Weisenthal chose three to explain “why someone like Donald Trump would thrive in this new oral context”: oral communication was “additive, not analytic,” relied on repetition, and was aggressively polemical. Homer gives us swift-footed Achilles, man-killing Hector, and wise Odysseus; Trump gives us Little Marco, Lyin’ Ted, and Crooked Hillary; his speeches were jammed with mind-numbing repetition; and his style was clearly combative.

There’s something with which to quibble in each of these pieces, but raising these questions about oral, print, and image-based discourse is helpful. As Ong and Postman recognized, innovations in media technology have far-reaching consequences: they enable new modes of social organization and new modes of thought; they reconfigure the cognitive load of remembering; they alter the relation between self and community, sometimes creating new communities and new understandings of the self; and they generate new epistemic ecosystems.

As Postman puts it in Technopoly,

Surrounding every technology are institutions whose organization–not to mention their reason for being–reflects the world-view promoted by the technology. Therefore, when an old technology is assaulted by a new one, institutions are threatened. When institutions are threatened, a culture finds itself in crisis. This is serious business, which is why we learn nothing when educators ask, Will students learn mathematics better by computers than by textbooks? Or when businessmen ask, Through which medium can we sell more products? Or when preachers ask, Can we reach more people through television than through radio? Or when politicians ask, How effective are messages sent through different media? Such questions have an immediate, practical value to those who ask them, but they are diversionary. They direct our attention away from the serious social, intellectual, and institutional crisis that new media foster.

Given the seriousness of what is at stake, then, I’ll turn to some of my quibbles as a way of moving toward a better understanding of our situation. Most of my quibbles involve the need for some finer distinctions. For example, in her piece, Allen suggests that we are moving back toward an oral culture. But it is important to make Ong’s distinction: if we are, then we are moving toward something like what Ong called secondary orality. A primary oral culture has never known literacy, and that makes a world of difference. However much we might revert to oral forms of communication, we cannot erase our knowledge of or dependence upon text, and this realization must inform whatever it is we mean by “oral culture.”

Moreover, I wonder whether it is best to characterize our move as one toward orality. What about the visual, the image? Derakhshan, for example, frames his piece in this way. Contrasting the Internet before and after a six-year stay in an Iranian prison, Derakhshan observed, “Facebook and Twitter had replaced blogging and had made the Internet like TV: centralized and image-centered, with content embedded in pictures, without links.” But Alan Jacobs took exception to this line of thinking. “Much of the damage done to truth and charity in this past election was done with text,” Jacobs notes, adding that Donald Trump rarely tweets images. “[I]t’s not the predominance of image over text that’s hurting us,” Jacobs then concludes, “It’s the use of platforms whose code architecture promotes novelty, instantaneous response, and the quick dissemination of lies.”

My sense is that Jacobs is right, but Derakhshan is not wrong, which means more distinctions are in order. After all, text is visual.

My plan in the coming days, possibly weeks depending on the cracks of time into which I am able to squeeze some writing, is to take a closer look at Walter Ong, particularly but not exclusively Orality and Literacy, in order to explore what resources his work may offer us as we try to understand the changes that are afoot all about us.

Truth, Facts, and Politics in the Digital Age

On election night, one tweet succinctly summed up the situation: “Smart people spent 2016 being wrong about everything.”

Indeed. I can, however, think of one smart person who may have seen more clearly had he been alive: Neil Postman. As I’ve suggested on more than a few occasions, #NeilPostmanWasRight would be a wonderfully apt hashtag with which to sum up this fateful year. Naturally, I don’t think Neil Postman’s work on media ecology and politics explains everything about our present political culture, but his insights go a long way. I wrote a bit about why that is the case after the first presidential debate a couple of months ago. Here I’ll only remind you of this paragraph from Amusing Ourselves to Death:

“My argument is limited to saying that a major new medium changes the structure of discourse; it does so by encouraging certain uses of the intellect, by favoring certain definitions of intelligence and wisdom, and by demanding a certain kind of content–in a phrase, by creating new forms of truth-telling.”

It is that last line that I want you to consider as I pass along a few items that help us better understand the relationship among media, truth, and politics.

The first two pieces are from Nathan Jurgenson. The first is a post written in the immediate aftermath of the election. Here is a key section:

And it also seems that the horror I’m seeing being expressed right now is partly the shock about being so dreadfully wrong. It’s the terror of having to come to terms with the fact that your information diet is deeply flawed. It’s the obvious fact that misinformation isn’t a problem over there on the right wing meme pages but is also our problem.

On the right, they have what Stephen Colbert called “truthiness,” which we might define as ignoring facts in the name of some larger truth. The facts of Obama’s birthplace mattered less for them than their own racist “truth” of white superiority. Perhaps we need to start articulating a left-wing version of truthiness: let’s call it “factiness.” Factiness is the taste for the feel and aesthetic of “facts,” often at the expense of missing the truth. From silly self-help-y TED talks to bad NPR-style neuroscience science updates to wrapping ourselves in the misleading scientism of Fivethirtyeight statistics, factiness is obsessing over and covering ourselves in fact after fact while still missing bigger truths.

The second is an essay from October, “Chaos of Facts,” that more deeply explores similar terrain. Here are two excerpts, but do read the whole thing.

It’s easy to see how Trump’s rise was the culmination of image-based politics rather than some unprecedented and aberrant manifestation of them. Yet much of the political apparatus — conventional politicians and the traditional media outlets accustomed to a monopoly in covering them — still rarely admits this out loud. Instead, it tried to use Trump’s obvious performativity as an opportunity to pass off the rest of the conventional politics it has been practicing — the image-based, entertainment-driven politics we’ve been complaining about since Boorstin and before — as real. Perhaps it was more real than ever, given how strenuously many outlets touted the number of fact-checkers working a debate, and how they pleaded that democracy depends on their gatekeeping.

And:

It’s been repeated that the theme of the 2016 campaign is that we’re now living in a “post-truth” world. People seem to live in entirely different realities, where facts and fact-checking don’t seem to matter, where disagreement about even the most basic shape of things seems beyond debate. There is a broad erosion of credibility for truth gatekeepers. On the right, mainstream “credibility” is often regarded as code for “liberal,” and on the left, “credibility” is reduced to a kind of taste, a gesture toward performed expertism. This decline of experts is part of an even longer-term decline in the trust and legitimacy of nearly all social institutions. Ours is a moment of epistemic chaos.

You should also read Adam Elkus’ post, “It’s the Memes, Stupid.” Here is his concluding paragraph:

Subcultural memes, bots, and other forms of technology that represent, shape, distort, mutate, select, reproduce, combine, or generate information are not only sources of political power, they are also significant and under-analyzed features of contemporary society. Memes and bots are both alike in that they are forms of automation – memes (in the Dawkins telling) almost robotically replicate themselves, and computer programs of varying degrees of complexity or simplicity also increasingly outnumber humans in social forums like Twitter. The Puppetmaster said in Ghost in the Shell that humankind has underestimated the consequences of computerization. This was a gross understatement. If there is no distinction between politics and memes (or other forms of cyberculture), we have a long road ahead in which we have to adapt to the consequences.

It would be a mistake, however, to think that the moment we inhabit has emerged out of nowhere, breaking altogether with some placid, unmediated past. (Neither Elkus nor Jurgenson makes this mistake.) Thinking about how the past relates to the present is not a straightforward affair. It is too easy, on the one hand, to fall into the trap of thinking that we are merely repeating the past in a different key, or, on the other, that our moment is, indeed, wholly discontinuous with the past. The truth, difficult to ascertain, is always more complicated.

That said, consider the closing paragraphs of Søren Kierkegaard’s “The Present Age”:

The public is an idea, which would never have occurred to people in ancient times, for the people themselves en masse in corpora took steps in any active situation, and bore responsibility for each individual among them, and each individual had to personally, without fail, present himself and submit his decision immediately to approval or disapproval. When first a clever society makes concrete reality into nothing, then the Media creates that abstraction, “the public,” which is filled with unreal individuals, who are never united nor can they ever unite simultaneously in a single situation or organization, yet still stick together as a whole. The public is a body, more numerous than the people which compose it, but this body can never be shown, indeed it can never have only a single representation, because it is an abstraction. Yet this public becomes larger, the more the times become passionless and reflective and destroy concrete reality; this whole, the public, soon embraces everything. . . .

The public is not a people, it is not a generation, it is not a simultaneity, it is not a community, it is not a society, it is not an association, it is not those particular men over there, because all these exist because they are concrete and real; however, no single individual who belongs to the public has any real commitment; some times during the day he belongs to the public, namely, in those times in which he is nothing; in those times that he is a particular person, he does not belong to the public. Consisting of such individuals, who as individuals are nothing, the public becomes a huge something, a nothing, an abstract desert and emptiness, which is everything and nothing. . . .

The Media is an abstraction (because a newspaper is not concrete and only in an abstract sense can be considered an individual), which in association with the passionlessness and reflection of the times creates that abstract phantom, the public, which is the actual leveler. . . . More and more individuals will, because of their indolent bloodlessness, aspire to become nothing, in order to become the public, this abstract whole, which forms in this ridiculous manner: the public comes into existence because all its participants become third parties. This lazy mass, which understands nothing and does nothing, this public gallery seeks some distraction, and soon gives itself over to the idea that everything which someone does, or achieves, has been done to provide the public something to gossip about. . . . The public has a dog for its amusement. That dog is the Media. If there is someone better than the public, someone who distinguishes himself, the public sets the dog on him and all the amusement begins. This biting dog tears up his coat-tails, and takes all sort of vulgar liberties with his leg–until the public bores of it all and calls the dog off. That is how the public levels.

I’d encourage you to take a closer look at those last six or so lines.

I first encountered “The Present Age” in philosopher Hubert Dreyfus’s On the Internet. You can read what I presume is an earlier version of Dreyfus’s thoughts in his paper, “Kierkegaard on the Internet: Anonymity vs. Commitment in the Present Age.” In On the Internet, Dreyfus summed up Kierkegaard’s argument this way:

. . . the new massive distribution of desituated information was making every sort of information immediately available to anyone, thereby producing a desituated, detached spectator.  Thus, the new power of the press to disseminate information to everyone in a nation led its readers to transcend their local, personal involvement . . . . Kierkegaard saw that the public sphere was destined to become a detached world in which everyone had an opinion about and commented on all public matters without needing any first-hand experience and without having or wanting any responsibility.

I’ll leave you with that.



Presidential Debates and Social Media, or Neil Postman Was Right

I’ve chosen to take my debates on Twitter. I’ve done so mostly in the interest of exploring what difference it might make to take in the debates on social media rather than on television.

Of course, the first thing to know is that the first televised debate, the famous 1960 Kennedy/Nixon debate, is something of a canonical case study in media studies. Most of you, I suspect, have heard at some point about how polls conducted after the debate found that those who listened on the radio were inclined to think that Nixon had gotten the better of Kennedy while those who watched the debate on television were inclined to think that Kennedy had won the day.

As it turns out, this is something like a political urban legend. At the very least, it is fair to say that the facts of the case are somewhat more complicated. Media scholar W. Joseph Campbell of American University, leaning heavily on a 1987 article by David L. Vancil and Sue D. Pendell, has shown that the evidence for viewer-listener disagreement is surprisingly scant and suspect. What little empirical evidence did point to a disparity between viewers and listeners depended on less-than-rigorous methodology.

Campbell, who’s written a book on media myths, is mostly interested in debunking the idea that viewer-listener disagreement was responsible for the outcome of the election. His point, well-taken, is simply that the truth of the matter is more complicated. With this we can, of course, agree. It would be a mistake, however, to write off the consequences over time of the shift in popular media. We may, for instance, take the first Clinton/Trump debate and contrast it to the Kennedy/Nixon debate and also to the famous Lincoln/Douglas debates. It would be hard to maintain that nothing has changed. But what is the cause of that change?

Does the evolution of media technology alone account for it? Probably not, if only because in the realm of human affairs we are unlikely to ever encounter singular causes. The emergence of new media itself, for instance, requires explanation, which would lead us to consider economic, scientific, and political factors. However, it would be impossible to discount how new media shape, if nothing else, the conditions under which political discourse evolves.

Not surprisingly, I turned to the late Neil Postman for some further insight. Indeed, I’ve taken of late to suggesting that the hashtag for 2016, should we want one, ought to be #NeilPostmanWasRight. This was a sentiment that I initially encountered in a fine post by Adam Elkus on the Internet culture wars. During the course of his analysis, Elkus wrote, “And at this point you accept that Neil Postman was right and that you were wrong.”

I confess that I rather agreed with Postman all along, and on another occasion I might take the time to write about how well Postman’s writing about technology holds up. Here, I’ll only cite this statement of his argument in Amusing Ourselves to Death:

“My argument is limited to saying that a major new medium changes the structure of discourse; it does so by encouraging certain uses of the intellect, by favoring certain definitions of intelligence and wisdom, and by demanding a certain kind of content—in a phrase, by creating new forms of truth-telling.”

This is the argument Postman presents in a chapter aptly titled “Media as Epistemology.” Postman went on to add, admirably, that “I am no relativist in this matter, and that I believe the epistemology created by television not only is inferior to a print-based epistemology but is dangerous and absurdist.”

Let us make a couple of supporting observations in passing, neither of which is original or particularly profound. First, what is it that we remember about the televised debates prior to the age of social media? Do any of us, old enough to remember, recall anything other than an adroitly delivered one-liner? And you know exactly which I have in mind already. Go ahead, before reading any further, call to mind your top three debate memories. Tell me if at least one of these is not among the three.

Reagan, when asked about his age, joking that we would not make an issue out of his opponent’s youth and inexperience.

Sen. Bentsen reminding Dan Quayle that he is no Jack Kennedy.

Admiral Stockdale, seemingly lost on stage, wondering, “Who am I? Why am I here?”

So how did we do? Did we have at least one of those in common? Here’s my point: what was memorable and what counted as “winning” or “losing” a debate in the age of television had precious little to do with the substance of an argument. It had everything to do with style and image. Again, I claim no great insight in saying as much. In fact, this is, I presume, conventional wisdom by now.

(By the way, Postman gets all the more credit if your favorite presidential debate memories involved an SNL cast member–Dana Carvey, for example.)

Consider as well an example fresh from the first Clinton/Trump debate.

You tell me what “over-prepared” could possibly mean. Moreover, you tell me if that was a charge that you can even begin to imagine being leveled against Lincoln or Douglas or, for that matter, Nixon or Kennedy.

Let’s let Marshall McLuhan take a shot at explaining what Mr. Todd might possibly have meant.

I know, you’re not going to watch the whole thing. Who’s got the time? [#NeilPostmanWasRight] But if you did, you would hear McLuhan explaining why the 1976 Carter/Ford debate was an “atrocious misuse of the TV medium” and “the most stupid arrangement of any debate in the history of debating.” Chiefly, the content and the medium were mismatched. The style of debating both candidates embodied was ill-suited for what television prized, something approaching casual ease, warmth, and informality. Being unable to achieve that style means “losing” the debate regardless of how well you knew your stuff. As McLuhan tells Tom Brokaw, “You’re assuming that what these people say is important. All that matters is that they hold that audience on their image.”

Incidentally, writing in Slate about this clip in 2011, David Haglund wrote, “What seems most incredible to me about this cultural artifact is that there was ever a time when The Today Show would spend ten uninterrupted minutes talking about the presidential debates with a media theorist.” [#NeilPostmanWasRight]

So where does this leave us? Does social media, like television, present us with what Postman calls a new epistemology? Perhaps. We keep hearing a lot of talk about post-factual politics. If that describes our political climate, and I have little reason to doubt as much, then we did not suddenly land here after the advent of social media or the Internet. Facts, or simply the truth, have been fighting a rear-guard action for some time now.

I will make one passing observation, though, about the dynamics of following a debate on Twitter. While the entertainment on offer in the era of television was the thrill of hearing the perfect zinger, social media encourages each of us to become part of the action. Reading tweet after tweet of running commentary on the debate, from left, right, and center, I was struck by the near unanimity of tone: either snark or righteous indignation. Or, better, the near unanimity of apparent intent. No one, it seems to me, was trying to persuade anybody of anything. Insofar as I could discern a motive, I might suggest, on the one hand, something like catharsis, a satisfying expunging of emotions; on the other, the desire to land the zinger ourselves, to compose that perfect tweet that would suddenly go viral and garner thousands of retweets. I saw more than a few cross my timeline–some from accounts with thousands and thousands of followers and others from accounts with a meager few hundred–and I felt that it was not unlike watching someone hit the jackpot in the slot machine next to me. Just enough incentive to keep me playing.

A citizen may have attended a Lincoln/Douglas debate to be informed and also, in part, to be entertained. The consumer of the television era tuned in to a debate ostensibly to be informed, but in reality to be entertained. The prosumer of the digital age aspires to do the entertaining.

#NeilPostmanWasRight

Fit the Tool to the Person, Not the Person to the Tool

I recently had a conversation with a student about the ethical quandaries raised by the advent of self-driving cars. Hypothetically, for instance, how would a self-driving car react to a pedestrian who stepped out in front of it? Whose safety would it be programmed to privilege?

The relatively tech-savvy student was unfazed. Obviously this would only be a problem until pedestrians were forced out of the picture. He took it for granted that the recalcitrant human element would be eliminated as a matter of course in order to perfect the technological system. I don’t think he took this to be a “good” solution, but he intuited the sad truth that we are more likely to bend the person to fit the technological system than to design the system to fit the person.

Not too long ago, I made a similar observation:

… any system that encourages machine-like behavior from its human components is a system poised to eventually eliminate the human element altogether. To give it another turn, we might frame it as a paradox of complexity. As human beings create powerful and complex technologies, they must design complex systemic environments to ensure their safe operation. These environments sustain further complexity by disciplining human actors to abide by the necessary parameters. Complexity is achieved by reducing human action to the patterns of the system; consequently, there comes a point when further complexity can only be achieved by discarding the human element altogether. When we design systems that work best the more machine-like we become, we shouldn’t be surprised when the machines ultimately render us superfluous.

A few days ago, Elon Musk put it all very plainly:

“Tesla co-founder and CEO Elon Musk believes that cars you can control will eventually be outlawed in favor of ones that are controlled by robots. The simple explanation: Musk believes computers will do a much better job than us to the point where, statistically, humans would be a liability on roadways [….] Musk said that the obvious move is to outlaw driving cars. ‘It’s too dangerous,’ Musk said. ‘You can’t have a person driving a two-ton death machine.'”

Mind you, such a development, were it to transpire, would be quite a boon for the owner of a company working on self-driving cars. And we should also bear in mind Dale Carrico’s admonition “to consider what these nonsense predictions symptomize in the way of present fears and desires and to consider what present constituencies stand to benefit from the threats and promises these predictions imply.”

If autonomous cars become the norm and transportation systems are designed to accommodate their needs, it will not have happened because of some force inherent in the technology itself. It will happen because interested parties will make it happen, with varying degrees of acquiescence from the general public.

This was precisely the case with the emergence of the modern highway system that we take for granted. Its development was not a foregone conclusion. It was heavily promoted by government and industry. As Walter Lippmann observed during the 1939 World’s Fair, “General Motors has spent a small fortune to convince the American public that if it wishes to enjoy the full benefit of private enterprise in motor manufacturing, it will have to rebuild its cities and its highways by public enterprise.”

Consider as well the film below produced by Dow Chemical in support of the 1956 Federal-Aid Highway Act:

Whatever you think about the virtues or vices of the highway system and a transportation system premised on the primacy of the automobile, my point is that such a system did not emerge in a cultural or political vacuum. Choices were made; political will was exerted; money was spent. So it is now, and so it will be tomorrow.

Data-Driven Regimes of Truth

Below are excerpts from three items that came across my browser this past week. I thought it useful to juxtapose them here.

The first is Andrea Turpin’s review in The Hedgehog Review of Science, Democracy, and the American University: From the Civil War to the Cold War, a new book by Andrew Jewett about the role of science as a unifying principle in American politics and public policy.

“Jewett calls the champions of that forgotten understanding ‘scientific democrats.’ They first articulated their ideas in the late nineteenth century out of distress at the apparent impotence of culturally dominant Protestant Christianity to prevent growing divisions in American politics—most violently in the Civil War, then in the nation’s widening class fissure. Scientific democrats anticipated educating the public on the principles and attitudes of scientific practice, looking to succeed in fostering social consensus where a fissiparous Protestantism had failed. They hoped that widely cultivating the habit of seeking empirical truth outside oneself would produce both the information and the broader sympathies needed to structure a fairer society than one dominated by Gilded Age individualism.

Questions soon arose: What should be the role of scientific experts versus ordinary citizens in building the ideal society? Was it possible for either scientists or citizens to be truly disinterested when developing policies with implications for their own economic and social standing? Jewett skillfully teases out the subtleties of the resulting variety of approaches in order to ‘reveal many of the insights and blind spots that can result from a view of science as a cultural foundation for democratic politics.’”

The second piece, “When Fitbit is the Expert,” appeared in The Atlantic. In it, Kate Crawford discusses how data gathered by wearable devices can be used for and against its users in court.

“Self-tracking using a wearable device can be fascinating. It can drive you to exercise more, make you reflect on how much (or little) you sleep, and help you detect patterns in your mood over time. But something else is happening when you use a wearable device, something that is less immediately apparent: You are no longer the only source of data about yourself. The data you unconsciously produce by going about your day is being stored up over time by one or several entities. And now it could be used against you in court.”

[….]

“Ultimately, the Fitbit case may be just one step in a much bigger shift toward a data-driven regime of ‘truth.’ Prioritizing data—irregular, unreliable data—over human reporting, means putting power in the hands of an algorithm. These systems are imperfect—just as human judgments can be—and it will be increasingly important for people to be able to see behind the curtain rather than accept device data as irrefutable courtroom evidence. In the meantime, users should think of wearables as partial witnesses, ones that carry their own affordances and biases.”

The final excerpt comes from an interview with Mathias Döpfner in the Columbia Journalism Review. Döpfner is the CEO of the largest publishing company in Europe and has been outspoken in his criticisms of American technology firms such as Google and Facebook.

“It’s interesting to see the difference between the US debate on data protection, data security, transparency and how this issue is handled in Europe. In the US, the perception is, ‘What’s the problem? If you have nothing to hide, you have nothing to fear. We can share everything with everybody, and being able to take advantage of data is great.’ In Europe it’s totally different. There is a huge concern about what institutions—commercial institutions and political institutions—can do with your data. The US representatives tend to say, ‘Those are the back-looking Europeans; they have an outdated view. The tech economy is based on data.’”

Döpfner goes out of his way to indicate that he is a regulatory minimalist and that he deeply admires American-style tech-entrepreneurship. But ….

“In Europe there is more sensitivity because of the history. The Europeans know that total transparency and total control of data leads to totalitarian societies. The Nazi system and the socialist system were based on total transparency. The Holocaust happened because the Nazis knew exactly who was a Jew, where a Jew was living, how and at what time they could get him; every Jew got a number as a tattoo on his arm before they were gassed in the concentration camps.”

Perhaps that’s a tad alarmist; I don’t know. The thing about alarmism is that only in hindsight can it be definitively identified.

Here’s the thread that unites these pieces in my mind. Jewett’s book, assuming the reliability of Turpin’s review, is about an earlier attempt to find a new frame of reference for American political culture. Deliberative democracy works best when citizens share a moral framework from which their arguments and counter-arguments derive their meaning. Absent such a broadly shared moral framework, competing claims can never really be meaningfully argued for or against; they can only be asserted or denounced. What Jewett describes, it seems, is just the particular American case of a pattern characteristic of secular modernity writ large: the eclipse of traditional religious belief leads to a search for new sources of unity and moral authority.

For a variety of reasons, the project to ground American political culture in publicly accessible science did not succeed. (It appears, by the way, that Jewett’s book is an attempt to revive the effort.) It failed, in part, because it became apparent that science itself was not exactly value-free, at least not as it was practiced by actual human beings. Additionally, it seems to me, the success of the project assumed that all political problems—that is, all problems that arise when human beings try to live together—were subject to scientific analysis and resolution. This strikes me as an unwarranted assumption.

In any case, it would seem that proponents of a certain strand of Big Data ideology now want to offer Big Data as the framework that unifies society and resolves political and ethical issues related to public policy. This is part of what I read into Crawford’s suggestion that we are moving into “a data-driven regime of ‘truth.’” “Science says” replaced “God says”; and now “Science says” is being replaced by “Big Data says.”

To put it another way, Big Data offers to fill the cultural role that was vacated by religious belief. It was a role that, in their turn, Reason, Art, and Science have all tried to fill. In short, certain advocates of Big Data need to read Nietzsche’s Twilight of the Idols. Big Data may just be another God-term, an idol that needs to be sounded with a hammer and found hollow.

Finally, Döpfner’s comments are a reminder of the darker uses to which data can be and has been put, particularly when thoughtfulness and judgment have been marginalized.