When Silence is Power

In The Human Condition, Hannah Arendt wrote, “What first undermines and then kills political communities is loss of power and final impotence.” She went on to add, “Power is actualized only where word and deed have not parted company, where words are not empty and deeds not brutal, where words are not used to veil intentions but to disclose realities, and deeds are not used to violate and destroy but to establish relations and create new realities.”

In our present media environment, the opposite of this formula may be closer to the truth, at least in certain situations. In these cases, the refusal to speak is action. Silence is power.

The particular situation I have in view is the hijacking of public discourse (and consequently the political order) by the endless proliferation of manufactured news and fabricated controversy.

These pseudo-events are hyperreal. They are media events that exist as such only in so far as they are spoken about. “To go viral” is just another way of describing the achievement of hyperreality. To be “spoken about” is to be addressed within our communication networks. In a networked society, we are the relays and hyperreality is an emergent property of our networked acts of communication.

Every interest that constitutes our media environment and media economy is invested in the perpetuation of hyperreality.

Daily, these pseudo-events consume our attention and our mental and emotional energy. They feed off of and inspire frustration, rage, despair, paranoia, revenge, and, ultimately, cynicism. It is a daily boom/bust cycle of the soul.

Because they are constituted by speech, the pseudo-events are immune to critical speech. Speaking of them, even to criticize them, strengthens them.

When speaking is the only perceived form of action (it is, after all, the only way of existing on our social media networks), then that which thrives by being spoken about will persist.

How does one protest when acts of protest are consistently swallowed up by that which is being protested? When the act of protest has the perverse effect of empowering that which is being protested?

Silence.

Silence is the only effective form of boycott. Traditional boycotts, the refusal to purchase goods or patronize establishments, are ineffective against hyperreality. They are sucked up into the pseudo-events.

Finally, the practice of silence must be silent about itself.

Here the practice of subversive silence threatens to fray against the edge of our media environment. When the self is itself constituted by acts of speech within the same network, then refusal to speak feels like self-deprivation. And it is. Silence under these conditions is an ascetic practice, a denial of the self that requires considerable discipline.

But if we are relays in the network, then self-sabotage becomes a powerful act of protest.

Perhaps the practice of this kind of self-imposed, unacknowledged silence may be the power that helps resuscitate public discourse.



Flame Wars, Punctuation Cartoons, and Net-heads: The Internet in 1988

I remember having, in the late 1980s, an old PC with a monochrome monitor, amber on black, that was hooked up to the Internet through Prodigy. I was on the Internet, but I had no idea what the Internet was. All I knew was that I could now get near-realtime updates on the score of Mets games.

Back in November, the Washington Post’s tech page posted the text of an article from the newspaper covering the Internet in 1988, right about the time I was messing around on Prodigy: “Here’s how The Post covered the ‘grand social experiment’ of the Internet in 1988.” It’s a fascinating portal into what feels like another world, sort of. I usually think of the modern Internet experience first taking shape with the World Wide Web in the early and mid-90s. Reading this article, I was struck by how early many of the contours of the web as we know it, and the language we use to describe it, began to appear. To be sure, there are also striking discontinuities, but the Internet in 1988 — before AOL, WWW, the dot-com bubble, Web 2.0, social, mobile, and all that — exhibited characteristics that are still readily recognizable.

Consider the early example of “crowd-sourcing” or collective intelligence with which the author opens the article:

Kreisel was looking for an efficient way to paint patterns inside computer-drawn shapes. Paul Heckbert, a graduate student in California, did not know Kreisel, but he had a pretty good piece of code, or computer programming, to do the job. He dispatched it to Kreisel’s machine, and seconds later the New Yorker’s problem was solved.

Of course, the snake was already in the garden. The article is, in fact, occasioned by “a rogue program, known as a ‘virus,’” designed by a Cornell student, which “did not destroy any data but clogged computers and wasted millions of dollars’ worth of skilled labor and computer time.”

The virus, we’re told, “frightens many network visionaries, who dream of a ‘worldnet’ with ever more extensive connections and ever fewer barriers to the exchange of knowledge.” (Cyber-utopianism? Check. Of course, the roots of cyber-utopianism go back well before 1988.) According to a Harvard astrophysicist, the Internet is a “community far more than a network of computers and cables.” “When your neighbors become paranoid of one another,” he added, “they no longer cooperate, they no longer share things with each other. It takes only a very, very few vandals to … destroy the trust that glues our community together.”

The scale clearly appeared massive, but today it seems quaint: “Together the news groups produce about 4 million characters of new material a day, the equivalent of about five average books.” But don’t worry about trying to keep up with it all, “90 percent of it is complete and utter trash,” at least as far as that astrophysicist was concerned. Hard to imagine that he was far off the mark (or that the ratio has shifted too much in the ensuing years).

At the time, “thousands of men and women in 17 countries swap recipes and woodworking tips, debate politics, religion and antique cars, form friendships and even fall in love.” Honestly, that’s not a bad sample of the sorts of things we’re still doing on the Internet. In part, this is because it is not a bad sample of what human beings do generally, so it’s what we end up doing in whatever social spaces we end up creating with our tools.

And when human beings find themselves interacting in new contexts created by new communication technologies, there are bound to be what we might kindly call “issues” while norms and conventions adapt to account for the new techno-social configuration. So already in 1988, we read that the Internet “has evolved its own language, social norms and ‘netiquette.’” More than twenty years on, that project is still ongoing.

The author focused chiefly on the tone of discourse on Internet forums and, rightly I think, attributed its volatility to the characteristics of the medium:

Wouk’s riposte is a good example of why arguments sometimes intensify into bitter feuds known as flame wars, after the tendency of one character in Marvel Comics’ Fantastic Four to burst into flames (“Flame on!” he shouts) when he is angry. Is Wouk truly angry or just having a good time? Because the written word conveys no tone of voice, it isn’t always easy to tell.

“In a normal social setting,” said Chuq von Rospach of Sun Microsystems, “chances are the two of them would have a giggle over it and it would go away. On a network it tends to escalate. The feedback mechanisms that tell you to back off, or tell you that this person’s joking, aren’t there. The words are basically lifeless.”

Not only would I have failed to guess that “flame wars” was already in use in 1988, but I also had no idea of its etymology. And the author doesn’t use the word emoticon when describing these symbols: “True net-heads sometimes resort to punctuation cartoons to get around the absence of inflection. They may append a :-) if they are making a joke (turn your head to the left) or use :-( for an ersatz frown.”

“Net-heads” appears not to have caught on, but what the author calls “punctuation cartoons” certainly have. (What we now call emoticons, by the way, first appeared in the late 19th century.)

Finally, lest the Internet of 1988 appear all too familiar, here’s one glaring point of discontinuity on which to close: “The one unbending rule is that thou shalt not post commercial announcements. It isn’t written anywhere, but heaven help the user who tries to broadcast an advertisement.”

And for good measure, here’s another snippet of the not-too-distant past (1994) that nonetheless manages to feel as if it were ages ago. “Allison, can you explain what Internet is?”

Recent Readings

This past Friday I took the first of my comprehensive exams. I think it went well, well enough anyway. If my committee agrees, I’ll have two more to go, which I’ll be taking within the next three months or so.

The first exam was over selections from my program’s core list of readings. The list features a number of authors that I’ve mentioned before, including Walter Ong, Lev Manovich, Jerome McGann, N. Katherine Hayles, and Gregory Ulmer. There were also a number of “classic” theorists: Benjamin, Barthes, Foucault, Baudrillard.

Additionally, there were a few titles that were new to me or that I had never gotten around to reading. I thought it would be worthwhile to briefly note a few of these.

Daniel Headrick’s When Information Came of Age: Technologies of Knowledge in the Age of Reason and Revolution tells the story of a number of information systems — for classifying, storing, transforming, and transmitting information — that preceded the advent of what we ordinarily think of as the digital information revolution.

I finally read one of Donald Norman’s books, Living with Complexity. Hands down the easiest read on the list. Engaging and enlightening on the design of everyday objects and experiences.

From Papyrus to Hypertext: Toward the Universal Digital Library, a translated work by French scholar Christian Vandendorpe, is laid out as a series of short reflections on the history of texts and reading. It was originally published more than a decade ago, so it is a little dated. Nonetheless, I found it useful.

The most important title that I encountered was Bruno Latour’s Reassembling the Social: An Introduction to Actor-Network-Theory. It’s a shame that it has taken me so long to finally read something from Latour. I heartily recommend it. I’ll be giving it another read soon and may have something to say about it in the future, but for now I’ll simply note that I found it quite enlightening.

Historian Thomas Misa gives a fine account of the entanglement of technology and Western culture in Leonardo to the Internet: Technology and Culture from the Renaissance to the Present.

Then there was this 2010 blog post by Ian Bogost, which was anthologized in Debates in the Digital Humanities: “The Turtlenecked Hairshirt: Fetid and Fragrant Futures for the Humanities.” It is, how shall I put it, bracing.

Finally, this wasn’t part of my readings for the exam, but I did stumble upon a 1971 review of Foucault’s The Order of Things: An Archaeology of the Human Sciences by George Steiner: “The Mandarin of the Hour-Michel Foucault.” The review is not quite so dismissive as the title might suggest.

There you have it, that’s what I have to show for the past couple of months’ reading. As the semester winds down and grades get turned in, who knows but some new blog posts may appear.

On the xkcd Philosophy of Technology, Briefly

Before I say anything else, let me say this: I get where this is coming from, really, I do.

[xkcd comic: “Simple Answers”]

This is a recent xkcd comic that you may have seen in the past day or two. It makes a valid point, sort of. The point that I’d say is worth taking from this comic is this: both unbridled enthusiasm and apocalyptic fear of any new technology are probably misguided. Some people need to hear that.

That said, this comic is similar in spirit to an earlier xkcd offering that Alan Jacobs rightfully picked apart with some searching questions. Although it is more pronounced in the earlier piece, both exhibit a certain cavalier “nothing-to-see-here-let’s-move-it-along” attitude with regard to technological change. In fact, one might be tempted to conclude that according to the xkcd philosophy of technology, technology changes nothing at all. Or, worse yet, that with technology it is always que sera, sera and we do well to stop worrying and enjoy the ride wherever it may lead.

In truth, I might’ve let the whole thing go without comment were it not for that last entry on the chart. It’s a variation of a recurring, usually vacuous rhetorical move that latches on to any form of continuity in order to dismiss concerns, criticism, etc. Since we have always already been X, this new form of X is inconsequential. It is as if reality were an undifferentiated, uncaused monistic affair in which the consequences of every apparent change are always contained and neutralized by its antecedents. But, of course, the fact that human beings have always died does not suggest to anyone in their right mind that we should never investigate the causes of particular deaths so as to prevent them when it is reasonable and possible to do so.

Similarly, pointing out that human beings have always used technology is perhaps the least interesting observation one could make about the relationship between human beings and any given technology. Continuity of this sort is the ground against which figures of discontinuity appear, and it is the figures that are of interest. Alienation may be a fact of life (or maybe that is simply the story that moderns tell themselves in order to bear it), but it has been so to greater and lesser extents and for a host of different reasons. Pointing out that we have always been alienated is, consequently, the least interesting and the least helpful thing one could say about the matter.

