In December 2016, I observed, alluding to Tolkien’s Lord of the Rings, that Twitter was akin to Trump’s ring of power. Just as the One Ring in Tolkien’s fictional world had been devised to control all other rings of power, so Trump deployed Twitter to manipulate other media. With one tweet, Trump could own the news cycle.
Here’s a brief elaboration of the analogy without reference to Trump.
In Tolkien’s world, the One Ring makes the wearer invisible to those around them, but with a catch: it makes the wearer visible to Sauron, the true master of the Ring, and corrupts the soul of the wearer. The power Twitter confers, on the other hand, is precisely the power to become visible, potentially to a great many people. The catch is that such visibility is ephemeral and distorting. It is gone in a flash, and what it makes visible is often a reduction of the person, distorted to fit the demands of the medium.
First point, then, starkly put: Use of Twitter, like wearing the ring, can disfigure and corrupt the soul of the user as it disfigured and corrupted Gollum.
As with the Ring of Power, it is often the case that the wisest among us, like Gandalf, refuse to bear it at all.
With Twitter as with the Ring, those like Boromir who are most naively intent on using its power for good are also those who are most likely to be corrupted by its power.
Yet it is also the case with the ring of power that some were called upon to bear it for a time, Bilbo and Frodo most notably.
It is true that Frodo took up the burden of bearing the ring only so that it might be destroyed. While destroying Twitter in the fires of Mount Doom is not in the cards, it is not impossible to imagine using Twitter in a way that countermands its dominant tendencies. At the council of Elrond, it was made clear that, if the Hobbits succeeded, it would be because they were working against the logic of the ring and its maker, refusing its power rather than seeking to maximize it.
But … Bilbo and Frodo are not left unchanged by the ring. Humbly noble as they may have been, they were not beyond its corrupting influence. So it is with Twitter: there will be very few who are so virtuous, indeed so holy, as to take it up and be left unchanged. About this we should be clear-eyed.
An expanded version of these observations will appear on the Front Porch Republic sometime next week as part of an ongoing discussion about Twitter, social media, Wendell Berry, and localism. The initial post in that discussion is here.
“I often tell students,” media scholar Henry Jenkins once noted, “that the history of new media has been shaped again and again by four key innovative groups – evangelists, pornographers, advertisers, and politicians, each of whom is constantly looking for new ways to interface with their public.”
It’s a provocative grouping, which gives the observation its punch, and, as far as I can tell, it is also an accurate assessment. The engine, so to speak, that drives the media-related transformations of political or religious culture is the imperative to “get your message out.” Of course, as media theorists have observed, how you get your message out may transform the message itself and the audience.
We might say, then, that Twitter is to Trump what radio was to FDR or TV was to Kennedy or Reagan.
It is not that FDR was the first to use radio, or Kennedy TV, or Trump Twitter–those firsts were Coolidge, Truman, and Obama, respectively. Rather it is that these were the first presidents to fully exploit the potential of each medium, for better or for worse. Their success depended upon a confluence of personal qualities, existing cultural dynamics, and the affordances of the medium. Their success also reconfigured the norms of political culture and discourse–there was no going back and no way to undo the consequences.
The striking thing about Trump’s use of Twitter is how he deploys it, not only to circumvent the press but to control the other media as well. Need to shake up the news cycle? No problem, a tweet will do it. Thus the real power of Twitter was not necessarily that of reaching the audience on Twitter itself, which is quite small compared to TV, but in setting the agenda for how other media would cover the election and transition. It is as if Twitter were Sauron’s ring, the one ring to rule them all. In this case, the one medium to rule all media.
Incidentally, not unlike Sauron’s ring, Twitter also tempts users with power, the power to torment and destroy ideological opponents by unleashing armies of underlings, for example. But, like the One Ring, the power it offers to those who would wield it is ultimately illusory and destructive. The wise refuse it. They even refuse the temptation to do good by its use, for they know the ring serves its own ends and ultimately they cannot control the forces they unleash.
You can subscribe to my newsletter, The Convivial Society, here.
I’ve chosen to take in the debates on Twitter. I’ve done so mostly in the interest of exploring what difference it might make to follow the debates on social media rather than on television.
Of course, the first thing to know is that the first televised debate, the famous 1960 Kennedy/Nixon debate, is something of a canonical case study in media studies. Most of you, I suspect, have heard at some point about how polls conducted after the debate found that those who listened on the radio were inclined to think that Nixon had gotten the better of Kennedy while those who watched the debate on television were inclined to think that Kennedy had won the day.
As it turns out, this is something like a political urban legend. At the very least, it is fair to say that the facts of the case are somewhat more complicated. Media scholar W. Joseph Campbell of American University, leaning heavily on a 1987 article by David L. Vancil and Sue D. Pendell, has shown that the evidence for viewer-listener disagreement is surprisingly scant and suspect. What little empirical evidence did point to a disparity between viewers and listeners depended on less than rigorous methodology.
Campbell, who’s written a book on media myths, is mostly interested in debunking the idea that viewer-listener disagreement was responsible for the outcome of the election. His point, well-taken, is simply that the truth of the matter is more complicated. With this we can, of course, agree. It would be a mistake, however, to write off the consequences over time of the shift in popular media. We may, for instance, take the first Clinton/Trump debate and contrast it to the Kennedy/Nixon debate and also to the famous Lincoln/Douglas debates. It would be hard to maintain that nothing has changed. But what is the cause of that change?
Does the evolution of media technology alone account for it? Probably not, if only because in the realm of human affairs we are unlikely to ever encounter singular causes. The emergence of new media itself, for instance, requires explanation, which would lead us to consider economic, scientific, and political factors. However, it would be impossible to discount how new media shape, if nothing else, the conditions under which political discourse evolves.
Not surprisingly, I turned to the late Neil Postman for some further insight. Indeed, I’ve taken of late to suggesting that the hashtag for 2016, should we want one, ought to be #NeilPostmanWasRight. This was a sentiment that I initially encountered in a fine post by Adam Elkus on the Internet culture wars. During the course of his analysis, Elkus wrote, “And at this point you accept that Neil Postman was right and that you were wrong.”
I confess that I rather agreed with Postman all along, and on another occasion I might take the time to write about how well Postman’s writing about technology holds up. Here, I’ll only cite this statement of his argument in Amusing Ourselves to Death:
“My argument is limited to saying that a major new medium changes the structure of discourse; it does so by encouraging certain uses of the intellect, by favoring certain definitions of intelligence and wisdom, and by demanding a certain kind of content—in a phrase, by creating new forms of truth-telling.”
This is the argument Postman presents in a chapter aptly titled “Media as Epistemology.” Postman went on to add, admirably, that “I am no relativist in this matter, and that I believe the epistemology created by television not only is inferior to a print-based epistemology but is dangerous and absurdist.”
Let us make a couple of supporting observations in passing, neither of which is original or particularly profound. First, what is it that we remember about the televised debates prior to the age of social media? Do any of us, old enough to remember, recall anything other than an adroitly delivered one-liner? And you know exactly which ones I have in mind already. Go ahead, before reading any further, call to mind your top three debate memories. Tell me if at least one of the following is not among your three.
Reagan, when asked about his age, joking that he would not make an issue out of his opponent’s youth and inexperience.
Sen. Bentsen reminding Dan Quayle that he was no Jack Kennedy.
Admiral Stockdale, seemingly lost on stage, wondering, “Who am I? Why am I here?”
So how did we do? Did we have at least one of those in common? Here’s my point: what is memorable and what counts for “winning” or “losing” a debate in the age of television had precious little to do with the substance of an argument. It had everything to do with style and image. Again, I claim no great insight in saying as much. In fact, this is, I presume, conventional wisdom by now.
(By the way, Postman gets all the more credit if your favorite presidential debate memories involved an SNL cast member, say Dana Carvey, for example.)
Consider as well an example fresh from the first Clinton/Trump debate.
You tell me what “over-prepared” could possibly mean. Moreover, you tell me if that was a charge that you can even begin to imagine being leveled against Lincoln or Douglas or, for that matter, Nixon or Kennedy.
Let’s let Marshall McLuhan take a shot at explaining what Mr. Todd might possibly have meant.
I know, you’re not going to watch the whole thing. Who’s got the time? [#NeilPostmanWasRight] But if you did, you would hear McLuhan explaining why the 1976 Carter/Ford debate was an “atrocious misuse of the TV medium” and “the most stupid arrangement of any debate in the history of debating.” Chiefly, the content and the medium were mismatched. The style of debating both candidates embodied was ill-suited for what television prized: something approaching casual ease, warmth, and informality. Being unable to achieve that style meant “losing” the debate regardless of how well you knew your stuff. As McLuhan tells Tom Brokaw, “You’re assuming that what these people say is important. All that matters is that they hold that audience on their image.”
Incidentally, writing in Slate about this clip in 2011, David Haglund wrote, “What seems most incredible to me about this cultural artifact is that there was ever a time when The Today Show would spend ten uninterrupted minutes talking about the presidential debates with a media theorist.” [#NeilPostmanWasRight]
So where does this leave us? Does social media, like television, present us with what Postman calls a new epistemology? Perhaps. We keep hearing a lot of talk about post-factual politics. If that describes our political climate, and I have little reason to doubt as much, then we did not suddenly land here after the advent of social media or the Internet. Facts, or simply the truth, have been fighting a rear-guard action for some time now.
I will make one passing observation, though, about the dynamics of following a debate on Twitter. While the entertainment on offer in the era of television was the thrill of hearing the perfect zinger, social media encourages each of us to become part of the action. Reading tweet after tweet of running commentary on the debate, from left, right, and center, I was struck by the near unanimity of tone: either snark or righteous indignation. Or, better, the near unanimity of apparent intent. No one, it seems to me, was trying to persuade anybody of anything. Insofar as I could discern a motive, I might on the one hand suggest something like catharsis, a satisfying purging of emotions. On the other, the desire to land the zinger ourselves. To compose that perfect tweet that would suddenly go viral and garner thousands of retweets. I saw more than a few cross my timeline–some from accounts with thousands and thousands of followers and others from accounts with a meager few hundred–and I felt that it was not unlike watching someone hit the jackpot in the slot machine next to me. Just enough incentive to keep me playing.
A citizen may have attended a Lincoln/Douglas debate to be informed and also, in part, to be entertained. The consumer of the television era tuned in to a debate ostensibly to be informed, but in reality to be entertained. The prosumer of the digital age aspires to do the entertaining.
I’ve never read the tweets of John Mayer, but I suspect they are an improvement on the tweets of Kanye West. In any case, Mayer apparently tweeted a lot; in fact, he recently owned up to a case of Twitter addiction. At a clinic for aspiring musicians at the Berklee Performance Center, he had this to say:
“The tweets are getting shorter, but the songs are still 4 minutes long. You’re coming up with 140-character zingers, and the song is still 4 minutes long…I realized about a year ago that I couldn’t have a complete thought anymore. And I was a tweetaholic. I had four million twitter followers, and I was always writing on it. And I stopped using twitter as an outlet and I started using twitter as the instrument to riff on, and it started to make my mind smaller and smaller and smaller. And I couldn’t write a song.”
Mayer’s comments came to the attention of John Piper, a prominent Christian pastor with his own not insignificant Twitter following. In a blog post, Piper offered his own experience as an alternative to Mayer’s:
My experience of publishing three Tweets a day (usually written and scheduled a week or two ahead of time) is different. Mayer said, “I couldn’t have a complete thought anymore.” To me this is almost the opposite of what happens. But that may depend on what we aim to do with Twitter.
Piper goes on to add, in what amounts to his philosophy of tweeting, that he aims to be capacious, concise, and compelling when he composes his tweets (preachers seem to have a hard time resisting alliteration). Along the way he likens tweets to proverbs and explains that, “Tweets for me are a kind of poetry.” This is all very nicely put, and I suspect it makes for pretty decent tweets.
Two users, admittedly two very different users, and two quite different experiences. Of course, there is nothing particularly surprising about this. We shouldn’t necessarily expect any two users to have the same experience with any technology. Yet, I couldn’t help but wonder what might account for the difference, or if one were more typical.
We get a hint at what the difference might be when Piper writes, “My experience of publishing three Tweets a day (usually written and scheduled a week or two ahead of time) is different.”
That parenthetical statement suggests to me that Piper is not really tweeting. Obviously, he is composing words that are eventually shared via Twitter, but he is doing so in a manner that almost renders the platform irrelevant from the standpoint of personal experience. I would even bet that Piper is not the one interacting with the Twitter interface; in other words, I wonder whether he passes his composed tweets on to a third party who then types them in and publishes them. (There wouldn’t be a thing wrong with this, of course.) A quick look at Piper’s Twitter feed confirms the suspicion that his use of the interface is minimal, since we find only the carefully composed statements and occasional links, but no interaction with other Twitter users. He has 185,000+ followers, and follows only 66. There are no @so-and-sos, no signs of conversation.
So let me suggest that Piper might as well be composing fortune cookie messages. Piper’s experience and habits are solidly in the world of old media. Piper’s thinking is not being influenced by Twitter because he is not using Twitter. In other words, Piper’s experience tells us nothing about the consequences of Twitter for someone who is actually robustly engaged with the interface, like John Mayer for example.
Piper’s closing paragraph further suggests that he is not exactly experiencing Twitter:
I don’t ask that others Tweet the way I try to. I only write this blog post to explain why I don’t experience Twitter the way John Mayer did, and why you don’t have to either. If your goal is to spread capacious, concise, compelling truth about God and his ways, the Tweet is a fruitfully demanding form.
To describe a Tweet as a “fruitfully demanding form” is to view Twitter through the lens of the literary. Piper is measuring Twitter strictly in light of its verbal qualities, in the same way he might view a sonnet or a haiku. This is a valid level of engagement and analysis, but it has little to do with the way Twitter is experienced by most of its users. Moreover, it misses the significant points of contrast between print and digital media environments.
I hope it is clear that I’m certainly not criticizing Piper. My point is to understand the influence of technology, and I suspect that it is found in part in the habits formed by use of technology, or the practice of an interface. Piper has not experienced the effects of Twitter because he has not entered into the practice of Twitter defined as a robust and sustained engagement with the interface. I suspect that his Twitter account isn’t open on his desktop or smartphone. He is probably not immersed in the flow of TwitterTime. This is probably a good thing. By not really using the medium, he is not being used by it either. In any case, I would suggest that a tool’s influence will not really be felt until its use becomes a practice integrated into our form of life.
“Twitter relies on people’s desire to be the same.” At least that’s what A. C. Goodall claims in a recent New Statesman article, “Is Twitter the Enemy of Self-Expression?” This is, it would seem, a rather vague and unsubstantiated claim. In his brief comments, Alan Jacobs writes that Goodall’s piece amounts to “assertions without evidence.” Jacobs goes on to argue that it is unhelpful to make sweeping claims about something like Twitter which is “a platform and a medium,” rather than an organized, coherent unit with an integral “character.” A medium or platform is subject to countless implementations by users, and, as the history of technology has shown, these uses are often surprising and unexpected.
On the whole I’m sympathetic to Jacobs’s comments. His main point echoes Michel de Certeau’s insistence that we pay close attention to the use that consumers make of products. In his time, the critical focus had fallen on the products and producers; consumers were tacitly assumed to be passive and docile recipients/victims of the powers of production. De Certeau made it a point, especially in The Practice of Everyday Life, to throw light on the multifarious, and often impertinent, uses to which consumers put products. Likewise, Jacobs is reminding us that generalizations about a medium can be misleading and unhelpful because users put any medium to widely disparate ends.
This is a fair point. However (and if there weren’t a “however” I wouldn’t be writing this), I’m a bit of a recalcitrant McLuhanist and tend to think that the medium may have its influence regardless of the uses to which it is put. And perhaps, I might better label myself an Aristotelian McLuhanist, which is to say that I’m tending toward localizing the impact of a medium in the realm of habit and inclination. The use of a medium over time creates certain habits of mind and body. These habits of mind and body together yield, in my own way of using this language, a habituated sensibility. The difficulty this influence poses to critique is that, precisely because it is habituated, it tends to operate below the level of conscious awareness.
I don’t think the focus on use and the attention to the effects of a medium are necessarily mutually exclusive. Habits after all are only formed through significant and repeated use. Perhaps they are two axes of a grid on which the impact of technology may be plotted. In any case, it would help to provide an example.
Consider our experience of time. It seems that the human experience of time, how we sense and process the passage of time, is not a fixed variable of human nature. My sense is that we habituate ourselves to a certain experience of time and it is difficult to immediately adjust to another mode. Consider those rare moments when we find ourselves having nothing to do. How often do we then report that we were unable to just relax; we had the urge to do something, anything. We were restless precisely at the moment when we could have taken a rest. Or, at a wider scale, consider the various ways cultures approach time. We tend to naturalize the Western habits of precise time keeping and partitioning until we enter another culture which operates by a very different set of attitudes toward time. It would take something much longer than a blog post to explore this fully, but it would seem plausible that certain technologies — some, like the mechanical clock, very old — mold our experience of time.
Bernard Stiegler has commented along similar lines on the media environment and the consequent experience of time fostered by television. To begin with, he notes, going back to the establishment of the first press agency in Paris in 1835 near a new telegraph, that the “value of information as commodity drops precipitously with time …” He goes on to describe industrial time in the following context:
“…. an event becomes an event — it literally takes place — only in being ‘covered.’ Industrial time is always at least coproduced by the media. ‘Coverage’ — what is to be covered — is determined by criteria oriented toward producing surplus value. Mass broadcasting is a machine to produce ready-made ideas, ‘clichés.’ Information must be ‘fresh’ and this explains why the ideal for all news organs is the elimination of delay in transmission time.”
To be sure, more than the logic of the medium is at play here, but it may be difficult and beside the point to parse out the logic of the medium from other factors.
The elimination of the delay between event and transmission that characterized industrial time has been radically democratized by digital media. We are all operating under these conditions now. You may vaguely remember, by contrast, the time that elapsed between snapping a picture, getting it developed, and finally showing it to others. That time has collapsed, not only for large news organizations, but for anyone with an internet-enabled smartphone. In the interest of creating catchy labels, perhaps we may call this not industrial time but Twitter time. “Twitter” here is just a synecdoche for the ability to immediately capture and broadcast information, an ability that is now widely available. My guess is that this capacity, admittedly used in various ways, will affect the sensibility that we label our “experience of time.”
Stiegler continues (with my apologies for subjecting you to the rather dense prose):
“With an effect of the real (of presence) resulting from the coincidence of the event and its seizure and with the real-time or ‘live’ transmission resulting from the coincidence of the event and its reception, a new experience of time, collective as well as individual, emerges. This new time betokens an exit from the properly historical epoch, insofar as the latter is defined by an essentially deferred time — that is, by a constitutive opposition, posited in principle, between the narrative and that which is narrated. This is why Pierre Nora can claim that the speed of transmission of analog and digital transmissions promotes ‘the immediate to historical status’:
‘Landing on the moon was the model of the modern event. Its condition remained live retransmission by Telstar . . . . What is proper to the modern event is that it implies an immediately public scene, always accompanied by the reporter-spectator, who sees events taking place. This ‘voyeurism’ gives to current events both their specificity with regard to history and their already historical feel as immediately out of the past.’
There is a lot to unpack in all of that. We are all reporter-spectators now. Deferred time, time between event and narration, is eclipsed. Everything is immediately “out of the past,” or, at least as I understand it, the whole of the past is collapsed into a moment that is not now. The earthquake and tsunami in Japan, just two months past, might as well have taken place five years ago. The killing of bin Laden, likewise, will very soon appear to be buried in the indiscriminate past.
Twitter as a medium, used to the point of fostering a habituated sensibility (but regardless of particularized uses), would seem to accelerate this economy of time and expand its province into private life. It doesn’t create this economy of time, but it does heighten and reinforce its trajectory. In fact, the relentless flow of the Twitter “timeline” (not an insignificant designation), or better, our effort to keep up with it and make sense of it, may be an apt metaphor for our overall experience of time.
All of this to say that while a medium or platform can be used variously and flexibly, it is not infinitely malleable; a certain underlying logic is more or less fixed and this logic has its own consequences. Of course, none of this necessarily amounts to saying Twitter is “bad,” only to note that its use can have consequences.
Speaking of habit, I’m curious whether anyone felt the urge to click the “1 New Tweet” image.