Truth, Facts, and Politics in the Digital Age

On election night, one tweet succinctly summed up the situation: “Smart people spent 2016 being wrong about everything.”

Indeed. I can, however, think of one smart person who might have seen more clearly had he been alive: Neil Postman. As I’ve suggested on more than a few occasions, #NeilPostmanWasRight would be a wonderfully apt hashtag with which to sum up this fateful year. Naturally, I don’t think Neil Postman’s work on media ecology and politics explains everything about our present political culture, but his insights go a long way. I wrote a bit about why that is the case after the first presidential debate a couple of months ago. Here I’ll only remind you of this paragraph from Amusing Ourselves to Death:

“My argument is limited to saying that a major new medium changes the structure of discourse; it does so by encouraging certain uses of the intellect, by favoring certain definitions of intelligence and wisdom, and by demanding a certain kind of content–in a phrase, by creating new forms of truth-telling.”

It is that last line that I want you to consider as I pass along a few items that help us better understand the relationship among media, truth, and politics.

The first two pieces are from Nathan Jurgenson. The first is a post written in the immediate aftermath of the election. Here is a key section:

And it also seems that the horror I’m seeing being expressed right now is partly the shock about being so dreadfully wrong. It’s the terror of having to come to terms with the fact that your information diet is deeply flawed. It’s the obvious fact that misinformation isn’t a problem over there on the right wing meme pages but is also our problem.

On the right, they have what Stephen Colbert called “truthiness,” which we might define as ignoring facts in the name of some larger truth. The facts of Obama’s birthplace mattered less for them than their own racist “truth” of white superiority. Perhaps we need to start articulating a left-wing version of truthiness: let’s call it “factiness.” Factiness is the taste for the feel and aesthetic of “facts,” often at the expense of missing the truth. From silly self-help-y TED talks to bad NPR-style neuroscience updates to wrapping ourselves in the misleading scientism of Fivethirtyeight statistics, factiness is obsessing over and covering ourselves in fact after fact while still missing bigger truths.

The second is an essay from October, “Chaos of Facts,” that more deeply explores similar terrain. Here are two excerpts, but do read the whole thing.

It’s easy to see how Trump’s rise was the culmination of image-based politics rather than some unprecedented and aberrant manifestation of them. Yet much of the political apparatus — conventional politicians and the traditional media outlets accustomed to a monopoly in covering them — still rarely admits this out loud. Instead, it tried to use Trump’s obvious performativity as an opportunity to pass off the rest of the conventional politics it has been practicing — the image-based, entertainment-driven politics we’ve been complaining about since Boorstin and before — as real. Perhaps it was more real than ever, given how strenuously many outlets touted the number of fact-checkers working a debate, and how they pleaded that democracy depends on their gatekeeping.

And:

It’s been repeated that the theme of the 2016 campaign is that we’re now living in a “post-truth” world. People seem to live in entirely different realities, where facts and fact-checking don’t seem to matter, where disagreement about even the most basic shape of things seems beyond debate. There is a broad erosion of credibility for truth gatekeepers. On the right, mainstream “credibility” is often regarded as code for “liberal,” and on the left, “credibility” is reduced to a kind of taste, a gesture toward performed expertism. This decline of experts is part of an even longer-term decline in the trust and legitimacy of nearly all social institutions. Ours is a moment of epistemic chaos.

You should also read Adam Elkus’ post, “It’s the Memes, Stupid.” Here is his concluding paragraph:

Subcultural memes, bots, and other forms of technology that represent, shape, distort, mutate, select, reproduce, combine, or generate information are not only sources of political power, they are also significant and under-analyzed features of contemporary society. Memes and bots are both alike in that they are forms of automation – memes (in the Dawkins telling) almost robotically replicate themselves, and computer programs of varying degrees of complexity or simplicity also increasingly outnumber humans in social forums like Twitter. The Puppetmaster said in Ghost in the Shell that humankind has underestimated the consequences of computerization. This was a gross understatement. If there is no distinction between politics and memes (or other forms of cyberculture), we have a long road ahead in which we have to adapt to the consequences.

It would be a mistake, however, to think that the moment we inhabit has emerged out of nowhere, breaking altogether with some placid, unmediated past. (Neither Elkus nor Jurgenson makes this mistake.) Thinking about how the past relates to the present is not a straightforward affair. It is too easy, on the one hand, to fall into the trap of thinking that we are merely repeating the past in a different key, or, on the other, that our moment is, indeed, wholly discontinuous with the past. The truth, difficult to ascertain, is always more complicated.

That said, consider the closing paragraphs of Søren Kierkegaard’s “The Present Age”:

The public is an idea, which would never have occurred to people in ancient times, for the people themselves en masse in corpora took steps in any active situation, and bore responsibility for each individual among them, and each individual had to personally, without fail, present himself and submit his decision immediately to approval or disapproval. When first a clever society makes concrete reality into nothing, then the Media creates that abstraction, “the public,” which is filled with unreal individuals, who are never united nor can they ever unite simultaneously in a single situation or organization, yet still stick together as a whole. The public is a body, more numerous than the people which compose it, but this body can never be shown, indeed it can never have only a single representation, because it is an abstraction. Yet this public becomes larger, the more the times become passionless and reflective and destroy concrete reality; this whole, the public, soon embraces everything. . . .

The public is not a people, it is not a generation, it is not a simultaneity, it is not a community, it is not a society, it is not an association, it is not those particular men over there, because all these exist because they are concrete and real; however, no single individual who belongs to the public has any real commitment; some times during the day he belongs to the public, namely, in those times in which he is nothing; in those times that he is a particular person, he does not belong to the public. Consisting of such individuals, who as individuals are nothing, the public becomes a huge something, a nothing, an abstract desert and emptiness, which is everything and nothing. . . .

The Media is an abstraction (because a newspaper is not concrete and only in an abstract sense can be considered an individual), which in association with the passionlessness and reflection of the times creates that abstract phantom, the public, which is the actual leveler. . . . More and more individuals will, because of their indolent bloodlessness, aspire to become nothing, in order to become the public, this abstract whole, which forms in this ridiculous manner: the public comes into existence because all its participants become third parties. This lazy mass, which understands nothing and does nothing, this public gallery seeks some distraction, and soon gives itself over to the idea that everything which someone does, or achieves, has been done to provide the public something to gossip about. . . . The public has a dog for its amusement. That dog is the Media. If there is someone better than the public, someone who distinguishes himself, the public sets the dog on him and all the amusement begins. This biting dog tears up his coat-tails, and takes all sort of vulgar liberties with his leg–until the public bores of it all and calls the dog off. That is how the public levels.

I’d encourage you to take a closer look at those last six or so lines.

I first encountered “The Present Age” in philosopher Hubert Dreyfus’s On the Internet. You can read what I presume is an earlier version of Dreyfus’s thoughts in his paper, “Kierkegaard on the Internet: Anonymity vrs. Commitment in the Present Age.” In On the Internet, Dreyfus summed up Kierkegaard’s argument this way:

. . . the new massive distribution of desituated information was making every sort of information immediately available to anyone, thereby producing a desituated, detached spectator.  Thus, the new power of the press to disseminate information to everyone in a nation led its readers to transcend their local, personal involvement . . . . Kierkegaard saw that the public sphere was destined to become a detached world in which everyone had an opinion about and commented on all public matters without needing any first-hand experience and without having or wanting any responsibility.

I’ll leave you with that.



Presidential Debates and Social Media, or Neil Postman Was Right

I’ve chosen to take my debates on Twitter. I’ve done so mostly in the interest of exploring what difference it might make to take in the debates on social media rather than on television.

Of course, the first thing to know is that the first televised debate, the famous 1960 Kennedy/Nixon debate, is something of a canonical case study in media studies. Most of you, I suspect, have heard at some point about how polls conducted after the debate found that those who listened on the radio were inclined to think that Nixon had gotten the better of Kennedy while those who watched the debate on television were inclined to think that Kennedy had won the day.

As it turns out, this is something like a political urban legend. At the very least, it is fair to say that the facts of the case are somewhat more complicated. Media scholar W. Joseph Campbell of American University, leaning heavily on a 1987 article by David L. Vancil and Sue D. Pendell, has shown that the evidence for viewer-listener disagreement is surprisingly scant and suspect. What little empirical evidence did point to a disparity between viewers and listeners depended on less-than-rigorous methodology.

Campbell, who’s written a book on media myths, is mostly interested in debunking the idea that viewer-listener disagreement was responsible for the outcome of the election. His point, well-taken, is simply that the truth of the matter is more complicated. With this we can, of course, agree. It would be a mistake, however, to write off the consequences over time of the shift in popular media. We may, for instance, take the first Clinton/Trump debate and contrast it to the Kennedy/Nixon debate and also to the famous Lincoln/Douglas debates. It would be hard to maintain that nothing has changed. But what is the cause of that change?

Does the evolution of media technology alone account for it? Probably not, if only because in the realm of human affairs we are unlikely to ever encounter singular causes. The emergence of new media itself, for instance, requires explanation, which would lead us to consider economic, scientific, and political factors. However, it would be impossible to discount how new media shape, if nothing else, the conditions under which political discourse evolves.

Not surprisingly, I turned to the late Neil Postman for some further insight. Indeed, I’ve taken of late to suggesting that the hashtag for 2016, should we want one, ought to be #NeilPostmanWasRight. This was a sentiment that I initially encountered in a fine post by Adam Elkus on the Internet culture wars. During the course of his analysis, Elkus wrote, “And at this point you accept that Neil Postman was right and that you were wrong.”

I confess that I rather agreed with Postman all along, and on another occasion I might take the time to write about how well Postman’s writing about technology holds up. Here, I’ll only cite this statement of his argument in Amusing Ourselves to Death:

“My argument is limited to saying that a major new medium changes the structure of discourse; it does so by encouraging certain uses of the intellect, by favoring certain definitions of intelligence and wisdom, and by demanding a certain kind of content—in a phrase, by creating new forms of truth-telling.”

This is the argument Postman presents in a chapter aptly titled “Media as Epistemology.” Postman went on to add, admirably, that “I am no relativist in this matter, and that I believe the epistemology created by television not only is inferior to a print-based epistemology but is dangerous and absurdist.”

Let us make a couple of supporting observations in passing, neither of which is original or particularly profound. First, what is it that we remember about the televised debates prior to the age of social media? Do any of us, old enough to remember, recall anything other than an adroitly delivered one-liner? You know exactly which ones I have in mind already. Go ahead, before reading any further, call to mind your top three debate memories. Tell me if at least one of the following is not among your three.

Reagan, when asked about his age, joking that he would not make an issue out of his opponent’s youth and inexperience.

Sen. Bentsen reminding Dan Quayle that he was no Jack Kennedy.

Admiral Stockdale, seemingly lost on stage, wondering, “Who am I? Why am I here?”

So how did we do? Did we have at least one of those in common? Here’s my point: what was memorable, and what counted as “winning” or “losing” a debate in the age of television, had precious little to do with the substance of an argument. It had everything to do with style and image. Again, I claim no great insight in saying as much. In fact, this is, I presume, conventional wisdom by now.

(By the way, Postman gets all the more credit if your favorite presidential debate memories involve an SNL cast member, say, Dana Carvey.)

Consider as well an example fresh from the first Clinton/Trump debate.

You tell me what “over-prepared” could possibly mean. Moreover, you tell me if that was a charge that you can even begin to imagine being leveled against Lincoln or Douglas or, for that matter, Nixon or Kennedy.

Let’s let Marshall McLuhan take a shot at explaining what Mr. Todd might possibly have meant.

I know, you’re not going to watch the whole thing. Who’s got the time? [#NeilPostmanWasRight] But if you did, you would hear McLuhan explaining why the 1976 Carter/Ford debate was an “atrocious misuse of the TV medium” and “the most stupid arrangement of any debate in the history of debating.” Chiefly, the content and the medium were mismatched. The style of debating both candidates embodied was ill-suited for what television prized: something approaching casual ease, warmth, and informality. Being unable to achieve that style meant “losing” the debate regardless of how well you knew your stuff. As McLuhan tells Tom Brokaw, “You’re assuming that what these people say is important. All that matters is that they hold that audience on their image.”

Incidentally, writing in Slate about this clip in 2011, David Haglund wrote, “What seems most incredible to me about this cultural artifact is that there was ever a time when The Today Show would spend ten uninterrupted minutes talking about the presidential debates with a media theorist.” [#NeilPostmanWasRight]

So where does this leave us? Does social media, like television, present us with what Postman calls a new epistemology? Perhaps. We keep hearing a lot of talk about post-factual politics. If that describes our political climate, and I have little reason to doubt as much, then we did not suddenly land here after the advent of social media or the Internet. Facts, or simply the truth, have been fighting a rear-guard action for some time now.

I will make one passing observation, though, about the dynamics of following a debate on Twitter. While the entertainment on offer in the era of television was the thrill of hearing the perfect zinger, social media encourages each of us to become part of the action. Reading tweet after tweet of running commentary on the debate, from left, right, and center, I was struck by the near unanimity of tone: either snark or righteous indignation. Or, better, the near unanimity of apparent intent. No one, it seems to me, was trying to persuade anybody of anything. Insofar as I could discern a motive, I might suggest, on the one hand, something like catharsis, a satisfying expunging of emotions; on the other, the desire to land the zinger ourselves, to compose that perfect tweet that would suddenly go viral and garner thousands of retweets. I saw more than a few cross my timeline–some from accounts with thousands and thousands of followers and others from accounts with a meager few hundred–and I felt that it was not unlike watching someone hit the jackpot in the slot machine next to me. Just enough incentive to keep me playing.

A citizen may have attended a Lincoln/Douglas debate to be informed and also, in part, to be entertained. The consumer of the television era tuned in to a debate ostensibly to be informed, but in reality to be entertained. The prosumer of the digital age aspires to do the entertaining.

#NeilPostmanWasRight

Maybe the Kids Aren’t Alright

Consider the following statements regarding the place of digital media in the lives of a cohort of thirteen-year-olds:

“One teenager, Fesse, was usually late – partly because he played Xbox till late into the night ….”

“We witnessed a fair number of struggles to make the technology work, or sometimes to engage pupils with digital media in the classroom.”

“Homework was often accompanied by Facebook, partly as a distraction and partly for summoning help from friends. Some became quickly absorbed in computer games.”

“Adam [played] with people from the online multi-player game in which he could adopt an identity he felt was truly himself.”

“Megan worked on creating her private online space in Tumblr – hours passing by unnoticed.”

“Each found themselves drawn, to varying degrees, into their parents’ efforts to gather as a family, at supper, through shared hobbies, looking after pets, or simply chatting in front of the television – albeit each with phones or tablets at the ready – before peeling off in separate directions.”

“Digital devices and the uses they put them to have become teenagers’ way of asserting their agency – a shield from bossy parents or annoying younger siblings or seemingly critical teachers, a means to connect with sympathetic friends or catching up with ongoing peer ‘drama.'”

Okay, now what would be your initial thoughts about the state of affairs described by these statements? Generally speaking, presented with these observations about the lives of thirteen-year-olds, I’d think that we might be forgiven a bit of concern. Sure, some of this describes the generally recognizable behavior of “teenagers” writ large, and nothing here suggests life-or-death matters, necessarily, but, nonetheless, it seems to me that we might wish things were a touch different in some respects. At least, we might want a little more information about how these factors play out over the long run.

But the author framed these statements with interpretive comments of this sort:

“… the more we know about teenagers’ lives the clearer it becomes that young people are no more interested in being constantly plugged in than are the adults around them.”

“As adults and parents, we might spend less time worrying about what they get up to as teenagers and more time with them, discussing the challenges that lie ahead for them as adults in an increasingly connected world.”

Couple that with the opening paragraph, which begins thus: “With each generation the public consciousness conjures up a new fear for our youth ….” There is no quicker way to signal that you are not at all concerned about something than by leading with “each generation, blah, blah, blah.”

When I first read this piece, I felt a certain dissonance, and I couldn’t quite figure out its source. After thinking about it a bit more, I realized that the dissonance arose from the incongruity between the cheery, “the kids are alright” tone of the article and what the article actually reported.

(I might add that part of my unease also concerns methodology. Why would we think that the students were any more transparent with this adult researcher in their midst than they were with the teachers whose halting attempts to connect with them via digital media they hold in apparent contempt? Mind you, this may very well be addressed in a perfectly adequate manner by the author in the book that this article introduces.)

Let me be clear, I’m not calling for what is conventionally and dismissively referred to as a “moral panic.” But I don’t think our only options are “everything is going to hell” and “we live in a digital paradise, quit complaining.” And what is reported in this article suggests to me that we should not be altogether unconcerned about how digital media floods every aspect of our lives and the lives of our children.

To the author’s point that “the more we know about teenagers’ lives the clearer it becomes that young people are no more interested in being constantly plugged in than are the adults around them,” I reply, that’s a damnably low bar and, thus, little comfort.

And when the author preaches “As adults and parents, we might spend less time worrying about what they get up to as teenagers and more time with them, discussing the challenges that lie ahead for them as adults in an increasingly connected world,” I reply, that’s exactly what many adults and parents are trying to do, but many of them feel as if they are fighting a losing battle against the very thing you don’t want them to worry about.

One last thought: we are deeply invested in the comforting notion that “the kids are alright,” aren’t we? I’m not saying they aren’t, or that they won’t be, alright, necessarily. I’m just not sure. Maybe some will and some won’t. Some of the very stories linked by the website to the article in question suggest that there are at least some troubling dimensions to the place of digital media in the lives of teens. I’ve spent the better part of the last fifteen years teaching teens in multiple contexts. In my experience, with a much larger data set, mind you, there are indeed reasons to be hopeful, but there are also reasons to be concerned. But never mind that; we really want to believe that they will be just fine regardless.

That desire to believe the “kids are alright” couples all too well with the desire to hold our technology innocent of all wrong. My technological habits are no different, maybe they’re worse, so if the kids are alright then so am I. Perhaps the deeper desire underlying these tendencies is the desire to hold ourselves blameless and deflect responsibility for our own actions. If the “kids are alright” no matter what we do or how badly we screw up, then I’ve got nothing to worry about as an adult and a parent. And if the technologies that I’ve allowed to colonize my life and theirs are never, ever to blame, then I can indulge in them to my heart’s content without so much as a twinge of compunction. I get a pass either way, and who doesn’t want that? But maybe the kids are not altogether alright, and maybe it is not altogether their fault but ours.

Finally, one more thought occurred to me. Do we even know what it would mean to be alright anymore? Sometimes I think all we’re aiming at is something like a never-ending and exhausting management of perpetual chaos. Maybe we’ve forgotten how our lives might be alternatively ordered. Maybe our social and cultural context inhibits us from pursuing a better ordered life. Perhaps out of resignation, perhaps for lack of imagination, perhaps because we lack the will, we dare not ask what might be the root causes of our disorders. If we did, we might find that some cherished and unquestioned value, like our own obsession with unbridled individual autonomy, might be complicit. Easier to go on telling ourselves that everything will be alright.

Et in Facebook ego

Today is the birthday of the friend whose death elicited this post two years ago. I republish it today for your consideration. 

In Nicolas Poussin’s mid-seventeenth century painting, Et in Arcadia ego, shepherds have stumbled upon an ancient tomb on which the titular words are inscribed. Understood to be the voice of death, the Latin phrase may be roughly translated, “Even in Arcadia there am I.” Because Arcadia symbolized a mythic pastoral paradise, the painting suggested the ubiquity of death. To the shepherds, the tomb was a memento mori: a reminder of death’s inevitability.

Nicolas Poussin, Et in Arcadia ego, 1637-38

Poussin was not alone among artists of the period in addressing the certainty of death. During the seventeenth and eighteenth centuries, vanitas art flourished. The designation stems from the Latin phrase vanitas vanitatum omnia vanitas, a recurring refrain throughout the biblical book of Ecclesiastes: “vanity of vanities, all is vanity,” in the King James translation. Paintings in the genre were still lifes depicting an assortment of objects which represented all that we might pursue in this life: love, power, fame, fortune, happiness. In their midst, however, one might also find a skull or an hourglass. These were symbols of death and the brevity of life. The idea, of course, was to encourage people to make the most of their living years.

Edwart Collier, 1690

For the most part, we don’t go in for this sort of thing anymore. Few people, if any, operate under the delusion that we might escape death (excepting, perhaps, the Singularity crowd), but we do a pretty good job of forgetting what we know about death. We keep death out of sight and, hence, out of mind. We’re certainly not going out of our way to remind ourselves of death’s inevitability. And, who knows, maybe that’s for the better. Maybe all of those skulls and hourglasses were morbidly unhealthy.

But while vanitas art has gone out of fashion, a new class of memento mori has emerged: the social media profile.

I’m one of those on again, off again Facebook users. Lately, I’ve been on again, and recently I noticed one of those birthday reminders Facebook places in the column where it puts all of the things Facebook would like you to click on. It was for a high school friend whom I had not spoken to in over eight years. It was in that respect a very typical Facebook friendship: the sort that probably wouldn’t exist at all were it not for Facebook. And that’s not necessarily a knock on the platform. For the most part, I appreciate being able to maintain at least minimal ties to old friends. In this case, though, it demonstrated just how weak those ties can be.

Upon clicking over to his profile, I read a few odd notes, and very quickly it became disconcertingly clear that my friend had died over a year ago. Naturally, I was taken aback and saddened. He died while I was off Facebook, and news had not reached me by any other channel. But there it was. Out of nowhere and without warning my browser was haunted by the very real presence of death. Memento mori.

Just a few days prior I logged on to Facebook and was greeted by the tragic news of a former student’s sudden passing. Because we had several mutual connections, photographs of the young man found their way into my news feed for several days. It was odd and disconcerting and terribly sad all at once. I don’t know what I think of social media mourning. It makes me uneasy, but I won’t criticize what might bring others solace. In any case, it is, like death itself, an unavoidable reality of our social media experience. Death is no digital dualist.

Facebook sometimes feels like a modern-day Arcadia. It is a carefully cultivated space in which life appears Edenic. The pictures are beautiful, the events exciting, the faces always smiling, the children always amusing, the couples always adoring. Some studies even suggest that comparing our own experience to these immaculately curated slices of life leads to envy, discontent, and unhappiness. Understandably so … if we assume that these slices of life are comprehensive representations of the lives people actually lead. Of course, they are not.

Lest we be fooled, however, there, alongside the pets and witty status updates and wedding pictures and birth announcements, we will increasingly find our virtual Arcadias haunted by the digital, disembodied presence of the dead. Our digital memento mori.

Et in Facebook ego.

On the Merits of Inconclusive Debates

On social media, criticism too often takes the form of aggressively ironic derision performed for those who are already prone to agree. It can also be challenging, although not impossible, to find sustained discussions that are both civil and well-reasoned. Relatedly, one of the complaints I frequently hear about online debates, one that I’ve made myself, is that no one ever changes their mind as a result of their online exchanges, no matter how prolonged or passionate those exchanges might be.

Of course, there are many reasons for this. For instance, we are, it seems to me, much less likely to surrender our positions, particularly our cherished convictions, in a public forum. Most of us are not so humble. Moreover, shifts in perspective or intellectual reversals tend to happen gradually, so much so that one may not even be able to pinpoint the moment of conversion. In any case, they rarely happen in the heat of intellectual battle. And that last metaphor is also part of the problem. There’s a tendency to characterize our intellectual life as a quest to vanquish all foes rather than as a mutual, dialectical pursuit of knowledge and wisdom. A change of mind, then, is experienced as a defeat rather than a step toward better understanding.

All of this is really just a way of introducing the following passage from Oliver O’Donovan’s Self, World, and Time. O’Donovan reminds us, reminds me, that there is value even in an inconclusive debate or conversation, because, again, the point is not to be proven right.

“Let us suppose that I disapprove of the death penalty, and take up the cudgels against someone who defends it. As our discussion proceeds, certain things will become clear. One is that there are various reasons for disapproving of the death penalty, some of which may plausibly claim a perennial moral truth, while others are more circumstantial. If my opponent forces me to think hard, I shall understand better what social and historical conditions have made the death penalty appear reasonable to past generations, and I shall have to ask if those conditions could ever recur. I shall come to see that my view of the matter is part and parcel of a wider philosophy of penal justice and governmental responsibility, and I shall be forced to elucidate that philosophy more fully and to test its capacity to shed illumination on other questions, too. None of this could I have gained from talking to those who agreed with me. What it amounts to is that if at the end of the argument I still say, ‘I disapprove of the death penalty!’ I know much better than before what I mean by it.”

Thanks to Alastair Roberts for drawing my attention to it.