Clever Blog Post Title

Reading about Jon Stewart’s recent D. C. rally, I noticed a picture of a participant wearing a “V for Vendetta” mask and carrying a sign.  It wasn’t the mask, but the words on the sign that caught my eye.  The sign read, “ANGRY SIGN.”

I wouldn’t have paid much attention to it had I not recently crossed paths with a guy wearing a T-shirt that read: “Words on a T-shirt.” Both of these then reminded me of the very entertaining “Academy Award Winning Trailer” — a self-referential spoof of movie trailers embedded at the end of this post. The “trailer” opens with a character saying, “A toast — establishing me as the wealthy, successful protagonist.” And on it goes in that vein. The comments section on the clip’s YouTube page likewise yielded a self-referential parody of typical YouTube comments. Example: “Quoting what I think is the funniest part of the video and adding ‘LOL’ at the end of the comment.”

There is a long tradition of self-reference in the art world; Magritte’s pipe is a classic example. But now it seems the self-referentiality that was a mark of the avant-garde is an established feature of popular culture. Call it vulgar self-referentiality if you like. Our sophistication, in fact, seems to be measured by the degree of our reflexivity and self-awareness — “I know that I know that I know, etc.” — which elides with a mood of ironic detachment. Earnestness in this environment becomes a vice. We are in a sense “mediating” or re-presenting our own selves through this layer of reflexivity, and we’re pretty sure everyone else is too.

On the surface, perhaps, there is a certain affinity with the Socratic injunction, “Know thyself.” But this is meta-Socrates: “Knowingly know thyself.” At issue for some is whether there is a subject there to know that would localize and contain the knowing, or whether, in the absence of a subject, all there is to know is a process of knowing, knowing itself.

Leaning on Thomas de Zengotita’s Mediated: How the Media Shapes Your World and the Way You Live in It and Hannah Arendt’s The Human Condition, here’s a grossly oversimplified thumbnail genealogy of our heightened self-consciousness. On the heels of the Copernican revolution, we lost confidence in the ability of our senses to apprehend the world truthfully (my eyes tell me the world is stationary and the sun moves; it turns out my eyes lied), but following Descartes we sought certainty in our inner subjective processes — “I think, therefore I am” and all that. Philosophically, then, our attention turned to our own thinking — from the object out there to the subject in here.

Sociologically, modernity has been marked by an erosion of cultural givens; very little attains a taken-for-granted status relative to the pre-modern and early modern world. In contrast to the medieval peasant, consider how much of your life is not pre-determined by cultural necessity. Modernity, then, is marked by the expansion of choice, and choice ultimately foregrounds the act of choosing, which yields an attendant lack of certainty in the choice – I am always aware that I could have chosen otherwise. In other words, identity grounded in the objective (reified) structures of traditional society is superseded by an identity that is the aggregate of the choices we make — choice stands in for fate, nature, providence, and all the rest. Eventually an awareness of this process throws even the notion of the self into question: I could have chosen otherwise, thus I could be otherwise. The self, as the story goes, is decentered. And whether or not that is really the case, it certainly seems to be what we feel to be the case.

So the self-referentialism that marked the avant-garde and the theories of social constructivism that were controversial a generation ago are by now old news. They are widely accepted by most people under 35, who picked it all up not by visiting the art houses or reading Derrida and company, but by living with and through the material conditions of their society.

First the camera, then the sound recorder, the video recorder, and finally the digitization and consequent democratization of the means of production and distribution (cheap cameras/YouTube, etc.) created a society in which we all know that we know that we are enacting multiple roles and that no action yields a window to the self, if by self we mean some essential, unchanging core of identity. Foucault’s surveillance society has produced a generation of method actors. In de Zengotita’s words,

Whatever the particulars, to the extent that you are mediated, your personality becomes an extensive and adaptable tool kit of postures . . . You become an elaborate apparatus of evolving shtick that you deploy improvisationally as circumstances warrant.  Because it is all so habitual, and because you are so busy, you can almost forget the underlying reflexivity.

Almost.

There seems to be a tragically hip quality to this kind of hyper-reflexivity, but it is also like looking into a mirror image of a mirror — we get mired in an infinite regress of our own consciousness.  We risk self-referential paralysis, individually and culturally, and we experience a perpetual crisis of identity.  My sense is that a good deal of our cynicism and apathy is also tied into these dynamics.  Not sure where we go from here, but this seems to be where we are.

Or maybe this is all just me.

“The Things We Make, Make Us”

A while ago I posted about a rather elaborate Droid X commercial which featured a man’s arm morphing into a mechanical, cyborg arm from which the Droid phone then emerges.  This commercial struck me as a useful and vivid illustration of an important theoretical metaphor, deployed by Marshall McLuhan among others, to help us understand the way we relate to our technologies:  technology as prosthesis.

This weekend I came across another commercial that once again captured, intentionally or otherwise, an important element of our relationship with technology.  This time it was a commercial (which you can watch below) for the 2011 Jeep Grand Cherokee.  The commercial situates the new Grand Cherokee within the mythic narrative of American technology.  “The things that make us Americans,” we are told in the opening lines, “are the things we make.”  In 61 seconds flat we are presented with a series of iconic American technologies:  the railroad, the airplane, the steel mill, the cotton gin, the Colt .45, the skyscraper, the telegraph, the light bulb, and, naturally, the classic Jeep depicted in World War II era footage.  As if to throw in the mythical/American kitchen sink, at one point the image of a baseball hitting a catcher’s mitt is flashed on the screen.

(Never mind that a rather dark tale could be woven out of the history of the development and deployment of many of these technologies and that their sacred quality was lost on some rather consequential Americans including Nathaniel Hawthorne and Herman Melville.  For more on the place of technology in American history and culture take a look at David Nye’s American Technological Sublime and Leo Marx’s The Machine in the Garden: Technology and the Pastoral Ideal in America.)

In any case, the commercial closes with the line, “The things we make, make us.” I suspect that this is an instance of someone speaking better than they know. My guess is that the intended meaning is something like, “Making and inventing is part of what it means to be an American.” We build amazing machines; that’s who we are. But there is a deeper sense in which it is true that “the things we make, make us.” We are conditioned (although, I would want to argue, not wholly determined) creatures. That is to say that we are to some degree a product of our environment, and that environment is shaped in important respects by the tools and technologies encompassed by it.

Nothing new here; this is a point made in one way or another by a number of observers and critics. For example, consider the argument advanced by Walter Ong in Orality and Literacy. The technology of writing, Ong contends, transforms oral societies and the way their members experience the world. Ong and others have explored the similar significance of printing to the emergence of modern society and modern consciousness. Lewis Mumford famously suggested that it is to the invention of the clock that we owe the rise of the modern world and the particular disposition toward time that characterizes it. Historians and social critics have also explored the impact of the steam engine, the car, the telephone, the radio, the television, and, most recently, the Internet on humans and their world. Needless to say, we are who we are in part because of the tools that we have made and that now are in turn making us. And as I’ve noted before, Katherine Hayles (and she is not alone) goes so far as to suggest that as a species we have “codeveloped with technologies; indeed, it is no exaggeration,” she writes in Electronic Literature, “to say modern humans literally would not have come into existence without technology.”

Now this may be a bit more than what Jeep had in mind, but thanks to their commercial we are reminded of an interesting and important facet of the human condition.

Medium Matters

“The medium is the message.” Or so Marshall McLuhan would have it. The idea behind the catchy line is simple: the medium is at least as significant as, if not more significant than, the content of a message. In Understanding Media, McLuhan puts it this way:

Our conventional response to all media, namely that it is how they are used that counts, is the numb stance of the technological idiot. For the “content” of a medium is like the juicy piece of meat carried by the burglar to distract the watchdog of the mind. (UM, 18)

Or, in case that wasn’t straightforward enough,

The content or message of any particular medium has about as much importance as the stenciling on the casing of an atomic bomb. (The Essential McLuhan, 238)

This has remained one of media studies’ guiding principles. However, earlier this week, in a post titled “Content Matters,” Jonah Lehrer offers the following comments on an article in the journal Neuron:

One of the recurring themes in the article is that it’s very difficult to generalize about “technology” in the abstract. We squander a lot of oxygen and ink worrying about the effects of “television” and the “internet,” but the data quickly demonstrates that these broad categories are mostly meaningless. When it comes to changing the brain, content is king. Here are the scientists:

In the same way that there is no single effect of “eating food,” there is also no single effect of “watching television” or “playing video games.” Different foods contain different chemical components and thus lead to different physiological effects; different kinds of media have different content, task requirements, and attentional demands and thus lead to different behavioral effects.

You can read the study, “Children, Wired: For Better or for Worse,” online. The article makes the case that different content presented by the same medium will impact children in different ways. So, for example, children who watch Sesame Street test better for literacy than do children who watch Teletubbies. The report also concluded that while media intended to be educational, such as Baby Einstein videos, can sometimes have detrimental consequences, media intended for entertainment, such as action video games, can sometimes yield positive educational outcomes. On that note, Lehrer quoted the following excerpt:

A burgeoning literature indicates that playing action video games is associated with a number of enhancements in vision, attention, cognition, and motor control. For instance, action video game experience heightens the ability to view small details in cluttered scenes and to perceive dim signals, such as would be present when driving in fog (Green and Bavelier, 2007; Li et al., 2009). Avid players display enhanced top-down control of attention and choose among different options more rapidly (Hubert-Wallander et al., 2010; Dye et al., 2009a). They also exhibit better visual short-term memory (Boot et al., 2008; Green and Bavelier, 2006), and can more flexibly switch from one task to another (Boot et al., 2008; Colzato et al., 2010; Karle et al., 2010).

Now perhaps I’m being somewhat of a curmudgeon, but it seems to me that, a heightened ability to drive in the fog notwithstanding, most of this amounts to saying that people who play video games get better at the skills needed to play video games. All in all, I think we might prefer that people learn to make certain kinds of decisions more deliberately, rather than more rapidly. In any case, the article goes on to conclude that more research is needed and that researchers are just now beginning to get their footing in the field.

The point Lehrer seizes on, that content matters, is true enough. I don’t know too many people who would argue that all content on any given medium is necessarily equal. However, this is not to say that content is all that matters. The studies cited by the article focused on different content within the same medium, but what of those who don’t use the medium at all, compared with those who do, regardless of the content they receive? In other words, is there more of a difference between those who grow up watching television and those who don’t than there is between those who watch two different kinds of television programs? Unless I missed something, the article (and the studies it cites) does not really address that issue.

By way of contrast, in “How to Raise Boys That Read,” Thomas Spence cites a study that seems to get at that question:

Dr. Robert Weis, a psychology professor at Denison University, confirmed this suspicion in a randomized controlled trial of the effect of video games on academic ability. Boys with video games at home, he found, spend more time playing them than reading, and their academic performance suffers substantially. Hard to believe, isn’t it, but Science has spoken.

The secret to raising boys who read, I submit, is pretty simple—keep electronic media, especially video games and recreational Internet, under control (that is to say, almost completely absent). Then fill your shelves with good books.

Ignore the unfortunate “Science has spoken” bit — I’m not sure what the capitalization is supposed to suggest anyway — and notice that this study is considering not differences in content within a medium (which is not insignificant), but differences between media.

To use a taxonomy coined by Joshua Meyrowitz, the first study focuses on media as conduits or vessels that merely transmit information.  On this model the vessel is less important than the content being transmitted.  There is certainly a place for this kind of analysis, but there is usually more going on.  Meyrowitz encourages us to look at media not only as conduits, but as environments that have significant consequences beyond the particular effects of the content.  As Meyrowitz puts it,

Of course media content is important, especially in the short term. Political, economic, and religious elites have always attempted to maintain control by shaping the content of media . . . But content questions alone, while important, do not foster sufficient understanding of the underlying changes in social structures encouraged or enabled by new forms of communication.

Content matters, but so does the medium (arguably more so).

“The storm is what we call progress”

Via Alan Jacobs at Text Patterns, I read the following excerpt from Arikia Millikan’s short piece “I Am a Cyborg and I Want My Google Implant Already” on The Atlantic’s web site:

By the time I finished elementary school, writing letters to communicate across great distances was an archaic practice. When I graduated middle school, pirating music on Napster was the norm; to purchase was a fool’s errand. At the beginning of high school, it still may have been standard practice to manually look up the answer to a burning question (or simply be content without knowing the answer). Internet connection speeds and search algorithms improved steadily over the next four years such that when I graduated in the class of 2004, having to wait longer than a minute to retrieve an answer was an unbearable annoyance and only happened on road trips or nature walks. The summer before my freshman year of college was the year the Facebook was released to a select 15 universities, and almost every single relationship formed in the subsequent four years was prefaced by a flood of intimate personal information.

Now, I am always connected to the Web. The rare exceptions to the rule cause excruciating anxiety. I work online. I play online. I have sex online. I sleep with my smartphone at the foot of my bed and wake up every few hours to check my email in my sleep (something I like to call dreamailing).

But it’s not enough connectivity. I crave an existence where batteries never die, wireless connections never fail, and the time between asking a question and having the answer is approximately zero. If I could be jacked in at every waking hour of the day, I would, and I think a lot of my peers would do the same. So Hal, please hurry up with that Google implant. We’re getting antsy.

Well, hard to beat honesty, I suppose. I did find it slightly ironic that the Google executive interviewed for this piece was named Hal.

Jacobs aptly titled his post “The saddest thing I have read in some time,” and he added simply, “There’s a name for this condition: Stockholm Syndrome.”  Well put, of course.

Perhaps it was reading that piece that prepared me to read the ninth of Walter Benjamin’s “Theses on the Philosophy of History” later that day with a certain melancholy resonance:

A Klee painting named “Angelus Novus” shows an angel looking as though he is about to move away from something he is fixedly contemplating. His eyes are staring, his mouth is open, his wings are spread. This is how one pictures the angel of history. His face is turned toward the past. Where we perceive a chain of events, he sees one single catastrophe which keeps piling wreckage upon wreckage and hurls it in front of his feet. The angel would like to stay, awaken the dead, and make whole what has been smashed. But a storm is blowing from Paradise; it has got caught in his wings with such violence that the angel can no longer close them. This storm irresistibly propels him into the future to which his back is turned, while the pile of debris before him grows skyward. The storm is what we call progress.

In any case, I tend to agree with Jacobs — it was rather sad.

When Words and Action Part Company

I’ve not been one to jump on the Malcolm Gladwell bandwagon; I can’t quite get past the disconcerting hair.  That said, his recent piece in The New Yorker, “Small Change:  Why the revolution will not be tweeted,” makes a compelling case for the limits of social media when it comes to generating social action.

Gladwell frames his piece as a study in contrasts.  He begins by recounting the evolution of the 1960 sit-in movement that began when four freshmen from North Carolina A & T sat down and ordered coffee at the lunch counter of the local Woolworth’s and refused to move when the waitress insisted, “We don’t serve Negroes here.”  Within days the protest grew and spread across state lines and tensions mounted.

Some seventy thousand students eventually took part. Thousands were arrested and untold thousands more radicalized. These events in the early sixties became a civil-rights war that engulfed the South for the rest of the decade—and it happened without e-mail, texting, Facebook, or Twitter.

Almost reflexively now, the devotees of social media power will trot out the Twitter-enabled 2009 Iranian protests as an example of what social media can do. Gladwell, anticipating as much, quotes Mark Pfeifle, a former national-security adviser, who believes that, “Without Twitter the people of Iran would not have felt empowered and confident to stand up for freedom and democracy.” Pfeifle went so far as to call for Twitter’s nomination for the Nobel Peace Prize. One is inclined to believe that is a bit of a stretch, and Gladwell explains why:

In the Iranian case … the people tweeting about the demonstrations were almost all in the West. “It is time to get Twitter’s role in the events in Iran right,” Golnaz Esfandiari wrote, this past summer, in Foreign Policy. “Simply put: There was no Twitter Revolution inside Iran.” The cadre of prominent bloggers, like Andrew Sullivan, who championed the role of social media in Iran, Esfandiari continued, misunderstood the situation. “Western journalists who couldn’t reach—or didn’t bother reaching?—people on the ground in Iran simply scrolled through the English-language tweets posted with tag #iranelection,” she wrote. “Through it all, no one seemed to wonder why people trying to coordinate protests in Iran would be writing in any language other than Farsi.”

You can read Esfandiari’s Foreign Policy article, “Misreading Tehran: The Twitter Devolution,” online. Gladwell argues that social media are unable to promote significant and lasting social change because they foster weak-tie rather than strong-tie relationships. Promoting and achieving social change very often means coming up against entrenched cultural norms and standards that will not easily give way. And as we know from the civil rights movement, the resistance is often violent. As Gladwell reminds us,

. . . Within days of arriving in Mississippi, three [Freedom Summer Project] volunteers—Michael Schwerner, James Chaney, and Andrew Goodman—were kidnapped and killed, and, during the rest of the summer, thirty-seven black churches were set on fire and dozens of safe houses were bombed; volunteers were beaten, shot at, arrested, and trailed by pickup trucks full of armed men. A quarter of those in the program dropped out. Activism that challenges the status quo—that attacks deeply rooted problems—is not for the faint of heart.

A subsequent study of the Freedom Summer participants was conducted by Doug McAdam:

“All  of the applicants—participants and withdrawals alike—emerge as highly committed, articulate supporters of the goals and values of the summer program,” he concluded. What mattered more was an applicant’s degree of personal connection to the civil-rights movement . . . . [P]articipants were far more likely than dropouts to have close friends who were also going to Mississippi. High-risk activism, McAdam concluded, is a “strong-tie” phenomenon.

Gladwell also goes on to explain why hierarchy, another feature typically absent from social media activism, is indispensable to successful movements while taking some shots along the way at Clay Shirky’s much more optimistic view of social media outlined in Here Comes Everybody: The Power of Organizing Without Organizations.

Not surprisingly, Gladwell’s piece has been making the rounds online the past few days. In response to Gladwell, Jonah Lehrer posted “Weak Ties, Twitter and the Revolution” on his blog The Frontal Cortex. Lehrer begins by granting, “These are all worthwhile and important points, and a necessary correction to the (over)hyping of Twitter and Facebook.” But he believes Gladwell has erred in the other direction. Basing his comments on Mark Granovetter’s 1973 paper, “The Strength of Weak Ties,” Lehrer concludes:

. . . I would quibble with Gladwell’s wholesale rejection of weak ties as a means of building a social movement. (I have some issues with Shirky, too.) It turns out that such distant relationships aren’t just useful for getting jobs or spreading trends or sharing information. According to Granovetter, they might also help us fight back against the Man, or at least the redevelopment agency.

Read the whole post to get the full argument and definitely read Lehrer’s excellent review of Shirky’s book linked in the quotation above.  Essentially Lehrer is offering a kind of middle ground between Shirky and Gladwell.  Since I tend toward mediating positions myself, I think he makes a valid point; but I do lean toward Gladwell’s end of the spectrum nonetheless.

Here, however, is one more angle on the issue:  perhaps the factors working against the potential of social media are not only inherent in the form itself, but also a condition of society that predates the arrival of digital media by generations.  In The Human Condition, Hannah Arendt argued that power, the kind of power to transform society that Gladwell has in view,

. . . is actualized only where word and deed have not parted company, where words are  not empty and deeds not brutal, where words are not used to veil intentions but to disclose realities, and deeds are not used to violate and destroy but to establish relations and create new realities.

Arendt made that claim in the late 1950s, and she argued that even then words and deeds had been drifting apart for some time. I suspect that since then the chasm has yawned ever wider and that social media participate in and reinforce that disjunction. It would be unfair, however, to single out social media, since the problem extends to most forms of public discourse, of which social media are but one example.

In The Disenchantment of Secular Discourse, Steven D. Smith argues that

It is hardly an exaggeration to say that the very point of ‘public reason’ is to keep the public discourse shallow – to keep it from drowning in the perilous depths of questions about ‘the nature of the universe,’ or ‘the end and object of life,’ or other tenets of our comprehensive doctrines.

If Smith is right — you can read Stanley Fish’s review in the NY Times to get more of a feel for his argument — social media already operate within a context in which the habits of public discourse have undermined our ability to take words seriously.  To put it another way, the assumptions shaping our public discourse encourage the divorce of words and deeds by stripping our language of its appeal to the deeper moral and metaphysical resources necessary to compel social action.  We tend to get stuck in the analysis and pseudo-debate without ever getting to action. As Fish puts it:

While secular discourse, in the form of statistical analyses, controlled experiments and rational decision-trees, can yield banks of data that can then be subdivided and refined in more ways than we can count, it cannot tell us what that data means or what to do with it . . . . Once the world is no longer assumed to be informed by some presiding meaning or spirit (associated either with a theology or an undoubted philosophical first principle) . . . there is no way, says Smith, to look at it and answer normative questions, questions like “what are we supposed to do?” and “at the behest of who or what are we to do it?”

Combine this with Kierkegaard’s 19th century observations about the Press that now appear all the more applicable to the digital world.  Consider the following summary of Kierkegaard’s fears offered by Hubert Dreyfus in his little book On the Internet:

. . . the new massive distribution of desituated information was making every sort of information immediately available to anyone, thereby producing a desituated, detached spectator.  Thus, the new power of the press to disseminate information to everyone in a nation led its readers to transcend their local, personal involvement . . . . Kierkegaard saw that the public sphere was destined to become a detached world in which everyone had an opinion about and commented on all public matters without needing any first-hand experience and without having or wanting any responsibility.

Kierkegaard suggested the following motto for the press:

Here men are demoralized in the shortest possible time on the largest possible scale, at the cheapest possible price.

I’ll let you decide whether or not that motto may be applied even more aptly to existing media conditions.  In any case, the situation Kierkegaard believed was created by the daily print press in his own day is at least a more likely possibility today.  A globally connected communications environment geared toward creating a constant, instantaneous, and indiscriminate flow of information, together with the assumptions of public discourse described by Smith, numbs us into docile indifference — an indifference social media may be powerless to overthrow, particularly when the stakes are high.  We are offered instead the illusion of action and involvement, the sense of participation in the debate.  But there is no meaningful debate, and by next week the issue, whatever the issue is, will still be there, and we’ll be busy discussing the next thing.  Meanwhile action walks further down a lonely path, long since parted from words.