Presidential Debates and Social Media, or Neil Postman Was Right

I’ve chosen to take in my debates on Twitter. I’ve done so mostly in the interest of exploring what difference it might make to take in the debates on social media rather than on television.

Of course, the first thing to know is that the first televised debate, the famous 1960 Kennedy/Nixon debate, is something of a canonical case study in media studies. Most of you, I suspect, have heard at some point about how polls conducted after the debate found that those who listened on the radio were inclined to think that Nixon had gotten the better of Kennedy while those who watched the debate on television were inclined to think that Kennedy had won the day.

As it turns out, this is something like a political urban legend. At the very least, it is fair to say that the facts of the case are somewhat more complicated. Media scholar W. Joseph Campbell of American University, leaning heavily on a 1987 article by David L. Vancil and Sue D. Pendell, has shown that the evidence for viewer-listener disagreement is surprisingly scant and suspect. What little empirical evidence did point to a disparity between viewers and listeners depended on less-than-rigorous methodology.

Campbell, who’s written a book on media myths, is mostly interested in debunking the idea that viewer-listener disagreement was responsible for the outcome of the election. His point, well-taken, is simply that the truth of the matter is more complicated. With this we can, of course, agree. It would be a mistake, however, to write off the consequences over time of the shift in popular media. We may, for instance, take the first Clinton/Trump debate and contrast it to the Kennedy/Nixon debate and also to the famous Lincoln/Douglas debates. It would be hard to maintain that nothing has changed. But what is the cause of that change?

Does the evolution of media technology alone account for it? Probably not, if only because in the realm of human affairs we are unlikely to ever encounter singular causes. The emergence of new media itself, for instance, requires explanation, which would lead us to consider economic, scientific, and political factors. However, it would be impossible to discount how new media shape, if nothing else, the conditions under which political discourse evolves.

Not surprisingly, I turned to the late Neil Postman for some further insight. Indeed, I’ve taken of late to suggesting that the hashtag for 2016, should we want one, ought to be #NeilPostmanWasRight. This was a sentiment that I initially encountered in a fine post by Adam Elkus on the Internet culture wars. During the course of his analysis, Elkus wrote, “And at this point you accept that Neil Postman was right and that you were wrong.”

I confess that I rather agreed with Postman all along, and on another occasion I might take the time to write about how well Postman’s writing about technology holds up. Here, I’ll only cite this statement of his argument in Amusing Ourselves to Death:

“My argument is limited to saying that a major new medium changes the structure of discourse; it does so by encouraging certain uses of the intellect, by favoring certain definitions of intelligence and wisdom, and by demanding a certain kind of content–in a phrase, by creating new forms of truth-telling.”

This is the argument Postman presents in a chapter aptly titled “Media as Epistemology.” Postman went on to add, admirably, that he is “no relativist in this matter,” and that he believes “the epistemology created by television not only is inferior to a print-based epistemology but is dangerous and absurdist.”

Let us make a couple of supporting observations in passing, neither of which is original or particularly profound. First, what is it that we remember about the televised debates prior to the age of social media? Do any of us, old enough to remember, recall anything other than an adroitly delivered one-liner? You know exactly which ones I have in mind already. Go ahead: before reading any further, call to mind your top three debate memories. Then tell me whether at least one of the following is not among them.

Reagan, when asked about his age, joking that he would not make an issue out of his opponent’s youth and inexperience.

Sen. Bentsen reminding Dan Quayle that he was no Jack Kennedy.

Admiral Stockdale, seemingly lost on stage, wondering, “Who am I? Why am I here?”

So how did we do? Did we have at least one of those in common? Here’s my point: what was memorable, and what counted for “winning” or “losing” a debate in the age of television, had precious little to do with the substance of an argument. It had everything to do with style and image. Again, I claim no great insight in saying as much. In fact, this is, I presume, conventional wisdom by now.

(By the way, Postman gets all the more credit if your favorite presidential debate memories involved an SNL cast member, say, Dana Carvey.)

Consider as well an example fresh from the first Clinton/Trump debate: Chuck Todd’s charge that Clinton had been “over-prepared.”

You tell me what “over-prepared” could possibly mean. Moreover, you tell me if that was a charge that you can even begin to imagine being leveled against Lincoln or Douglas or, for that matter, Nixon or Kennedy.

Let’s let Marshall McLuhan take a shot at explaining what Mr. Todd might possibly have meant.

I know, you’re not going to watch the whole clip. Who’s got the time? [#NeilPostmanWasRight] But if you did, you would hear McLuhan explaining why the 1976 Carter/Ford debate was an “atrocious misuse of the TV medium” and “the most stupid arrangement of any debate in the history of debating.” Chiefly, the content and the medium were mismatched. The style of debating both candidates embodied was ill-suited to what television prized: something approaching casual ease, warmth, and informality. Being unable to achieve that style meant “losing” the debate regardless of how well you knew your stuff. As McLuhan tells Tom Brokaw, “You’re assuming that what these people say is important. All that matters is that they hold that audience on their image.”

Incidentally, writing in Slate about this clip in 2011, David Haglund wrote, “What seems most incredible to me about this cultural artifact is that there was ever a time when The Today Show would spend ten uninterrupted minutes talking about the presidential debates with a media theorist.” [#NeilPostmanWasRight]

So where does this leave us? Does social media, like television, present us with what Postman calls a new epistemology? Perhaps. We keep hearing a lot of talk about post-factual politics. If that describes our political climate, and I have little reason to doubt as much, then we did not suddenly land here after the advent of social media or the Internet. Facts, or simply the truth, have been fighting a rear-guard action for some time now.

I will make one passing observation, though, about the dynamics of following a debate on Twitter. While the entertainment on offer in the era of television was the thrill of hearing the perfect zinger, social media encourages each of us to become part of the action. Reading tweet after tweet of running commentary on the debate, from left, right, and center, I was struck by the near unanimity of tone: either snark or righteous indignation. Or, better, the near unanimity of apparent intent. No one, it seems to me, was trying to persuade anybody of anything. Insofar as I could discern a motivating factor, I might suggest, on the one hand, something like catharsis, a satisfying expunging of emotions; on the other, the desire to land the zinger ourselves, to compose that perfect tweet that would suddenly go viral and garner thousands of retweets. I saw more than a few cross my timeline–some from accounts with thousands and thousands of followers and others from accounts with a meager few hundred–and I felt that it was not unlike watching someone hit the jackpot in the slot machine next to me: just enough incentive to keep me playing.

A citizen may have attended a Lincoln/Douglas debate to be informed and also, in part, to be entertained. The consumer of the television era tuned in to a debate ostensibly to be informed, but in reality to be entertained. The prosumer of the digital age aspires to do the entertaining.


Fit the Tool to the Person, Not the Person to the Tool

I recently had a conversation with a student about the ethical quandaries raised by the advent of self-driving cars. Hypothetically, for instance, how would a self-driving car react to a pedestrian who stepped out in front of it? Whose safety would it be programmed to privilege?

The relatively tech-savvy student was unfazed. Obviously this would only be a problem until pedestrians were forced out of the picture. He took it for granted that the recalcitrant human element would be eliminated as a matter of course in order to perfect the technological system. I don’t think he took this to be a “good” solution, but he intuited the sad truth that we are more likely to bend the person to fit the technological system than to design the system to fit the person.

Not too long ago, I made a similar observation:

… any system that encourages machine-like behavior from its human components is a system poised to eventually eliminate the human element altogether. To give it another turn, we might frame it as a paradox of complexity. As human beings create powerful and complex technologies, they must design complex systemic environments to ensure their safe operation. These environments sustain further complexity by disciplining human actors to abide by the necessary parameters. Complexity is achieved by reducing human action to the patterns of the system; consequently, there comes a point when further complexity can only be achieved by discarding the human element altogether. When we design systems that work best the more machine-like we become, we shouldn’t be surprised when the machines ultimately render us superfluous.

A few days ago, Elon Musk put it all very plainly:

“Tesla co-founder and CEO Elon Musk believes that cars you can control will eventually be outlawed in favor of ones that are controlled by robots. The simple explanation: Musk believes computers will do a much better job than us to the point where, statistically, humans would be a liability on roadways […]. Musk said that the obvious move is to outlaw driving cars. ‘It’s too dangerous,’ Musk said. ‘You can’t have a person driving a two-ton death machine.’”

Mind you, such a development, were it to transpire, would be quite a boon for the owner of a company working on self-driving cars. And we should also bear in mind Dale Carrico’s admonition “to consider what these nonsense predictions symptomize in the way of present fears and desires and to consider what present constituencies stand to benefit from the threats and promises these predictions imply.”

If autonomous cars become the norm and transportation systems are designed to accommodate their needs, it will not have happened because of some force inherent in the technology itself. It will happen because interested parties will make it happen, with varying degrees of acquiescence from the general public.

This was precisely the case with the emergence of the modern highway system that we take for granted. Its development was not a foregone conclusion. It was heavily promoted by government and industry. As Walter Lippmann observed during the 1939 World’s Fair, “General Motors has spent a small fortune to convince the American public that if it wishes to enjoy the full benefit of private enterprise in motor manufacturing, it will have to rebuild its cities and its highways by public enterprise.”

Consider as well the film Dow Chemical produced in support of the 1956 Federal-Aid Highway Act.

Whatever you think about the virtues or vices of the highway system, and of a transportation system premised on the primacy of the automobile, my point is that such a system did not emerge in a cultural or political vacuum. Choices were made; political will was exerted; money was spent. So it is now, and so it will be tomorrow.

Data-Driven Regimes of Truth

Below are excerpts from three items that came across my browser this past week. I thought it useful to juxtapose them here.

The first is Andrea Turpin’s review in The Hedgehog Review of Science, Democracy, and the American University: From the Civil War to the Cold War, a new book by Andrew Jewett about the role of science as a unifying principle in American politics and public policy.

“Jewett calls the champions of that forgotten understanding ‘scientific democrats.’ They first articulated their ideas in the late nineteenth century out of distress at the apparent impotence of culturally dominant Protestant Christianity to prevent growing divisions in American politics—most violently in the Civil War, then in the nation’s widening class fissure. Scientific democrats anticipated educating the public on the principles and attitudes of scientific practice, looking to succeed in fostering social consensus where a fissiparous Protestantism had failed. They hoped that widely cultivating the habit of seeking empirical truth outside oneself would produce both the information and the broader sympathies needed to structure a fairer society than one dominated by Gilded Age individualism.

Questions soon arose: What should be the role of scientific experts versus ordinary citizens in building the ideal society? Was it possible for either scientists or citizens to be truly disinterested when developing policies with implications for their own economic and social standing? Jewett skillfully teases out the subtleties of the resulting variety of approaches in order to ‘reveal many of the insights and blind spots that can result from a view of science as a cultural foundation for democratic politics.’”

The second piece, “When Fitbit is the Expert,” appeared in The Atlantic. In it, Kate Crawford discusses how data gathered by wearable devices can be used for and against their users in court.

“Self-tracking using a wearable device can be fascinating. It can drive you to exercise more, make you reflect on how much (or little) you sleep, and help you detect patterns in your mood over time. But something else is happening when you use a wearable device, something that is less immediately apparent: You are no longer the only source of data about yourself. The data you unconsciously produce by going about your day is being stored up over time by one or several entities. And now it could be used against you in court.”


“Ultimately, the Fitbit case may be just one step in a much bigger shift toward a data-driven regime of ‘truth.’ Prioritizing data—irregular, unreliable data—over human reporting, means putting power in the hands of an algorithm. These systems are imperfect—just as human judgments can be—and it will be increasingly important for people to be able to see behind the curtain rather than accept device data as irrefutable courtroom evidence. In the meantime, users should think of wearables as partial witnesses, ones that carry their own affordances and biases.”

The final excerpt comes from an interview with Mathias Döpfner in the Columbia Journalism Review. Döpfner is the CEO of the largest publishing company in Europe and has been outspoken in his criticisms of American technology firms such as Google and Facebook.

“It’s interesting to see the difference between the US debate on data protection, data security, transparency and how this issue is handled in Europe. In the US, the perception is, ‘What’s the problem? If you have nothing to hide, you have nothing to fear. We can share everything with everybody, and being able to take advantage of data is great.’ In Europe it’s totally different. There is a huge concern about what institutions—commercial institutions and political institutions—can do with your data. The US representatives tend to say, ‘Those are the back-looking Europeans; they have an outdated view. The tech economy is based on data.’”

Döpfner goes out of his way to indicate that he is a regulatory minimalist and that he deeply admires American-style tech-entrepreneurship. But ….

“In Europe there is more sensitivity because of the history. The Europeans know that total transparency and total control of data leads to totalitarian societies. The Nazi system and the socialist system were based on total transparency. The Holocaust happened because the Nazis knew exactly who was a Jew, where a Jew was living, how and at what time they could get him; every Jew got a number as a tattoo on his arm before they were gassed in the concentration camps.”

Perhaps that’s a tad alarmist, I don’t know. The thing about alarmism is that only in hindsight can it be definitively identified.

Here’s the thread that united these pieces in my mind. Jewett’s book, assuming the reliability of Turpin’s review, is about an earlier attempt to find a new frame of reference for American political culture. Deliberative democracy works best when citizens share a moral framework from which their arguments and counter-arguments derive their meaning. Absent such a broadly shared moral framework, competing claims can never really be meaningfully argued for or against; they can only be asserted or denounced. What Jewett describes, it seems, is just the particular American case of a pattern that is characteristic of secular modernity writ large. The eclipse of traditional religious belief leads to a search for new sources of unity and moral authority.

For a variety of reasons, the project to ground American political culture in publicly accessible science did not succeed. (It appears, by the way, that Jewett’s book is an attempt to revive the effort.) It failed, in part, because it became apparent that science itself was not exactly value-free, at least not as it was practiced by actual human beings. Additionally, it seems to me, the success of the project assumed that all political problems, that is, all problems that arise when human beings try to live together, were subject to scientific analysis and resolution. This strikes me as an unwarranted assumption.

In any case, it would seem that proponents of a certain strand of Big Data ideology now want to offer Big Data as the framework that unifies society and resolves political and ethical issues related to public policy. This is part of what I read into Crawford’s suggestion that we are moving into “a data-driven regime of ‘truth.’” “Science says” replaced “God says”; and now “Science says” is being replaced by “Big Data says.”

To put it another way, Big Data offers to fill the cultural role that was vacated by religious belief. It was a role that, in their turn, Reason, Art, and Science have all tried to fill. In short, certain advocates of Big Data need to read Nietzsche’s Twilight of the Idols. Big Data may just be another God-term, an idol that needs to be sounded with a hammer and found hollow.

Finally, Döpfner’s comments are just a reminder of the darker uses to which data can be and has been put, particularly when thoughtfulness and judgement have been marginalized.

Thinking About Big Data

I want to pass on to you three pieces on what has come to be known as Big Data, a diverse set of practices enabled by the power of modern computing to accumulate and process massive amounts of data. The first piece, “View from Nowhere,” is by Nathan Jurgenson. Jurgenson argues that the aspirations attached to Big Data, particularly in the realm of human affairs, amount to a revival of Positivism:

“The rationalist fantasy that enough data can be collected with the ‘right’ methodology to provide an objective and disinterested picture of reality is an old and familiar one: positivism. This is the understanding that the social world can be known and explained from a value-neutral, transcendent view from nowhere in particular.”

Jurgenson goes on to challenge these positivist assumptions through a critical reading of OkCupid CEO Christian Rudder’s new book Dataclysm: Who We Are (When We Think No One’s Looking).

The second piece is an op-ed in the NY Times by Frank Pasquale, “The Dark Market for Personal Data.” Pasquale considers the risks to privacy associated with the gathering and selling of personal information by companies equipped to mine and package such data. Pasquale concludes,

“We need regulation to help consumers recognize the perils of the new information landscape without being overwhelmed with data. The right to be notified about the use of one’s data and the right to challenge and correct errors is fundamental. Without these protections, we’ll continue to be judged by a big-data Star Chamber of unaccountable decision makers using questionable sources.”

Finally, here is a journal article, “Obscurity and Privacy,” by Evan Selinger and Woodrow Hartzog. Selinger and Hartzog offer obscurity as an explanatory concept to help clarify our thinking about the sorts of issues that usually get lumped together as matters of privacy. Privacy, however, may not be a sufficiently robust concept to meet the challenges posed by Big Data.

“Obscurity identifies some of the fundamental ways information can be obtained or kept out of reach, correctly interpreted or misunderstood. Appeals to obscurity can generate explanatory power, clarifying how advances in the sciences of data collection and analysis, innovation in domains related to information and communication technology, and changes to social norms can alter the privacy landscape and give rise to three core problems: 1) new breaches of etiquette, 2) new privacy interests, and 3) new privacy harms.”

In each of these areas, obscurity names the relative confidence individuals can have that the data trail they leave behind as a matter of course will not be readily accessible:

“When information is hard to understand, the only people who will grasp it are those with sufficient motivation to push past the layer of opacity protecting it. Sense-making processes of interpretation are required to understand what is communicated and, if applicable, whom the communications concerns. If the hermeneutic challenge is too steep, the person attempting to decipher the content can come to faulty conclusions, or grow frustrated and give up the detective work. In the latter case, effort becomes a deterrent, just like in instances where information is not readily available.”

Big Data practices have made it increasingly difficult to achieve this relative obscurity, thus posing a novel set of social and personal challenges. For example, the risks Pasquale identifies in his op-ed may be understood as risks that follow from a loss of obscurity. Read the whole piece for a better understanding of these challenges. In fact, be sure to read all three pieces. Jurgenson, Selinger, and Pasquale are among our most thoughtful guides in these matters.

Allow me to wrap this post up with a couple of additional observations. Returning to Jurgenson’s thesis about Big Data–that Big Data is a neo-Positivist ideology–I’m reminded that positivist sociology, or social physics, was premised on the assumption that the social realm operated in a predictable, law-like fashion, much as the natural world operated according to the Newtonian world picture. In other words, human action was, at root, rational and thus predictable. The early twentieth century profoundly challenged this confidence in human rationality. Think, for instance, of the carnage of the Great War or of the advent of Freudianism. Suddenly, humanity seemed less rational and, consequently, the prospect of uncovering law-like principles of human society must have seemed far more implausible. Interestingly, this irrationality preserved our humanity, insofar as our humanity was understood to consist of an irreducible spontaneity, freedom, and unpredictability. It did so, in other words, so long as the Other against which our humanity was defined was the Machine.

If Big Data is neo-Positivist, and I think Jurgenson is certainly on to something with that characterization, it aims to transcend the earlier failure of Comtean Positivism. It acknowledges the irrationality of human behavior, but it construes it, paradoxically, as Predictable Irrationality. In other words, it suggests that we can know what we cannot understand. And this recalls Evgeny Morozov’s critical remarks in “Every Little Byte Counts,”

“The predictive models Tucker celebrates are good at telling us what could happen, but they cannot tell us why. As Tucker himself acknowledges, we can learn that some people are more prone to having flat tires and, by analyzing heaps of data, we can even identify who they are — which might be enough to prevent an accident — but the exact reasons defy us.

Such aversion to understanding causality has a political cost. To apply such logic to more consequential problems — health, education, crime — could bias us into thinking that our problems stem from our own poor choices. This is not very surprising, given that the self-tracking gadget in our hands can only nudge us to change our behavior, not reform society at large. But surely many of the problems that plague our health and educational systems stem from the failures of institutions, not just individuals.”

It also suggests that some of the anxieties associated with Big Data may not be unlike those occasioned by the earlier positivism–they are anxieties about our humanity. If we buy into the story Big Data tells about itself, then it threatens, finally, to make our actions scrutable and predictable, suggesting that we are not as free, independent, spontaneous, or unique as we might imagine ourselves to be.

Thinking Without a Bannister

In politics and religion, especially, moderates are in high demand, and understandably so. The demand for moderates reflects growing impatience with polarization, extremism, and vacuous partisan rancor. But perhaps these calls for moderation are misguided, or, at best, incomplete.

To be clear, I have no interest in defending extremism, political or otherwise. But having said that, we immediately hit on part of the problem as I see it. While there are some obvious cases of broad agreement about what constitutes extremism–beheadings, say–it seems pretty clear that, in the more prosaic realms of everyday life, one person’s extremism may very well be another’s principled stand. In such cases, genuine debate and deliberation should follow. But if the way of the moderate is valued as an end in itself, then debate and deliberation may very well be undermined.

I use the phrase “the way of the moderate” in order to avoid using the word moderation. The reason for this is that moderation, to my mind anyway, suggests something a bit different than what I have in view here in talking about the hankering for moderates. Moderation, for instance, may be associated with Aristotle’s approach to virtue, which I rather appreciate.

But moderation in that sense is not really what I have in mind here. I may agree with Aristotle, for instance, that courage is the mean between cowardice on the one hand and foolhardiness on the other. But I’m not sure that such a methodology, which may work rather well in helping us understand the virtues, can be usefully transferred into other realms of life. To be more specific, I do not think that you can approach, to put it quaintly, matters of truth in that fashion, at least not as a rule.

In other words, it does not follow that if two people are arguing about a complex political, social, or economic problem I can simply split the difference between the two and thereby arrive at the truth. It may be that both are desperately wrong and a compromise position between the two would be just as wrong. It may be that one of the two parties is, in fact, right and that a compromise between the two would, again, turn out to be wrong.

The way of the moderate, then, amounts to a kind of intellectual triangulation between two perceived extremes. One need not think about what might be right, true, or just; rather, one takes stock of the positions on the far right and the far left and aims for some sort of mean between the two, even if the position that results is incoherent or unworkable. This sort of intellectual triangulation is also a form of intellectual sloth.

Where the way of the moderate is reflexively favored, it would be enough to successfully frame an opponent as being either “far right” or “far left.” Further debate and deliberation would be superfluous and mere pretense. And, of course, that is exactly what we see in our political discourse.

Again, given our political culture, it is easy to see why the way of the moderate is appealing and tempting. But, sadly, the way of the moderate as I’ve described it does not escape the extremism and rancor that it bemoans. In fact, it is still controlled by it. If I seek to move forward by triangulating a position between two perceived extreme coordinates, I am allowing those extremes to determine my own path. We may very well need a third path, or even a fourth and fifth, but we should not assume that such a path can be found by passing through the middle of the extremes we seek to avoid. Such an assumption is the very opposite of the “independence” that is supposedly demonstrated by pursuing it.

Paradoxically, then, we might understand the way of the moderate as the flip side of the extremism and partisanship it seeks to counteract. What they both have in common is thoughtlessness. On the one hand you get the thoughtlessness of sheer conformity; the line is toed, platitudes are professed, and dissent is silenced. On the other, you sidestep the responsibility for independent thought by splitting the presumed difference between the two perceived extremes.

We do not need moderation of this sort; we need more thought.

In the conference transcripts I mentioned a few days ago, Hannah Arendt was asked about her political leanings and her position on capitalism. She responded this way: “So you ask me where I am. I am nowhere. I am really not in the mainstream of present or any other political thought. But not because I want to be so original–it so happens that I somehow don’t fit.”

A little further on she went on to discuss what she calls thinking without a bannister:

“You said ‘groundless thinking.’ I have a metaphor which is not quite that cruel, and which I have never published but kept for myself. I call it thinking without a bannister. In German, Denken ohne Geländer. That is, as you go up and down the stairs you can always hold onto the bannister so that you don’t fall down. But we have lost this bannister. That is the way I tell it to myself. And this is indeed what I try to do.”

And she added:

“This business that the tradition is broken and the Ariadne thread is lost. Well, that is not quite as new as I made it out to be. It was, after all, Tocqueville who said that ‘the past has ceased to throw its light onto the future, and the mind of man wanders in darkness.’ This is the situation since the middle of the last century, and, seen from the viewpoint of Tocqueville, entirely true. I always thought that one has got to start thinking as though nobody had thought before, and then start learning from everybody else.”

I’m not sure that I agree with Arendt in every respect, but I think we should take her call to start thinking as though nobody had thought before quite seriously.

I’ll leave you with one more encouragement in that general direction, this one from a recent piece by Alan Jacobs.

“I guess what I’m asking for is pretty simple: for writers of all kinds, journalists as well as fiction writers, and artists and academics, to strive to extricate themselves from an ‘artificial obvious’ that has been constructed for us by the dominant institutions of our culture. Simple; also probably impossible. But it’s worth trying. Few things are more worth trying.”

One step in this direction, I think, is to avoid the temptation presented to us by the way of the moderate as I’ve described it here. Very often what is needed is to, somehow, break altogether from the false dilemmas and binary oppositions presented to us.