When Words and Action Part Company

I’ve not been one to jump on the Malcolm Gladwell bandwagon; I can’t quite get past the disconcerting hair.  That said, his recent piece in The New Yorker, “Small Change:  Why the revolution will not be tweeted,” makes a compelling case for the limits of social media when it comes to generating social action.

Gladwell frames his piece as a study in contrasts.  He begins by recounting the evolution of the 1960 sit-in movement that began when four freshmen from North Carolina A & T sat down and ordered coffee at the lunch counter of the local Woolworth’s and refused to move when the waitress insisted, “We don’t serve Negroes here.”  Within days the protest grew, spreading across state lines as tensions mounted.

Some seventy thousand students eventually took part. Thousands were arrested and untold thousands more radicalized. These events in the early sixties became a civil-rights war that engulfed the South for the rest of the decade—and it happened without e-mail, texting, Facebook, or Twitter.

Almost reflexively now, the devotees of social media power will trot out the Twitter-enabled 2009 Iranian protests as an example of what social media can do.  Gladwell, anticipating as much, quotes Mark Pfeifle, a former national-security adviser, who believes, “Without Twitter the people of Iran would not have felt empowered and confident to stand up for freedom and democracy.”  Pfeifle went so far as to call for Twitter’s nomination for the Nobel Peace Prize.  One is inclined to think that a bit of a stretch, and Gladwell explains why:

In the Iranian case … the people tweeting about the demonstrations were almost all in the West. “It is time to get Twitter’s role in the events in Iran right,” Golnaz Esfandiari wrote, this past summer, in Foreign Policy. “Simply put: There was no Twitter Revolution inside Iran.” The cadre of prominent bloggers, like Andrew Sullivan, who championed the role of social media in Iran, Esfandiari continued, misunderstood the situation. “Western journalists who couldn’t reach—or didn’t bother reaching?—people on the ground in Iran simply scrolled through the English-language tweets posted with tag #iranelection,” she wrote. “Through it all, no one seemed to wonder why people trying to coordinate protests in Iran would be writing in any language other than Farsi.”

You can read Esfandiari’s Foreign Policy article, “Misreading Tehran:  The Twitter Devolution,” online.  Gladwell argues that social media are unable to promote significant and lasting social change because they foster weak-tie rather than strong-tie relationships.  Promoting and achieving social change very often means coming up against entrenched cultural norms and standards that will not easily give way.  And as we know from the civil rights movement, the resistance is often violent.  As Gladwell reminds us,

. . . Within days of arriving in Mississippi, three [Freedom Summer Project] volunteers—Michael Schwerner, James Chaney, and Andrew Goodman—were kidnapped and killed, and, during the rest of the summer, thirty-seven black churches were set on fire and dozens of safe houses were bombed; volunteers were beaten, shot at, arrested, and trailed by pickup trucks full of armed men. A quarter of those in the program dropped out. Activism that challenges the status quo—that attacks deeply rooted problems—is not for the faint of heart.

A subsequent study of the participants in the Freedom Summer project was conducted by Doug McAdam:

“All  of the applicants—participants and withdrawals alike—emerge as highly committed, articulate supporters of the goals and values of the summer program,” he concluded. What mattered more was an applicant’s degree of personal connection to the civil-rights movement . . . . [P]articipants were far more likely than dropouts to have close friends who were also going to Mississippi. High-risk activism, McAdam concluded, is a “strong-tie” phenomenon.

Gladwell also goes on to explain why hierarchy, another feature typically absent from social media activism, is indispensable to successful movements while taking some shots along the way at Clay Shirky’s much more optimistic view of social media outlined in Here Comes Everybody: The Power of Organizing Without Organizations.

Not surprisingly, Gladwell’s piece has been making the rounds online the past few days.  In response to Gladwell, Jonah Lehrer posted “Weak Ties, Twitter and the Revolution” on his blog The Frontal Cortex.  Lehrer begins by granting, “These are all worthwhile and important points, and a necessary correction to the (over)hyping of Twitter and Facebook.”  But he believes Gladwell has erred in the other direction.  Basing his comments on Mark Granovetter’s 1973 paper, “The Strength of Weak Ties,” Lehrer concludes:

. . . I would quibble with Gladwell’s wholesale rejection of weak ties as a means of building a social movement. (I have some issues with Shirky, too.) It turns out that such distant relationships aren’t just useful for getting jobs or spreading trends or sharing information. According to Granovetter, they might also help us fight back against the Man, or at least the redevelopment agency.

Read the whole post to get the full argument and definitely read Lehrer’s excellent review of Shirky’s book linked in the quotation above.  Essentially Lehrer is offering a kind of middle ground between Shirky and Gladwell.  Since I tend toward mediating positions myself, I think he makes a valid point; but I do lean toward Gladwell’s end of the spectrum nonetheless.

Here, however, is one more angle on the issue:  perhaps the factors working against the potential of social media are not only inherent in the form itself, but also a condition of society that predates the arrival of digital media by generations.  In The Human Condition, Hannah Arendt argued that power, the kind of power to transform society that Gladwell has in view,

. . . is actualized only where word and deed have not parted company, where words are  not empty and deeds not brutal, where words are not used to veil intentions but to disclose realities, and deeds are not used to violate and destroy but to establish relations and create new realities.

Arendt made that claim in the late 1950’s and she argued that even then words and deeds had been drifting apart for some time.  I suspect that since then the chasm has yawned ever wider and that social media participates in and reinforces that disjunction.  It would be unfair, however, to single out social media since the problem extends to most forms of public discourse, of which social media is but one example.

In The Disenchantment of Secular Discourse, Steven D. Smith argues that

It is hardly an exaggeration to say that the very point of ‘public reason’ is to keep the public discourse shallow – to keep it from drowning in the perilous depths of questions about ‘the nature of the universe,’ or ‘the end and object of life,’ or other tenets of our comprehensive doctrines.

If Smith is right — you can read Stanley Fish’s review in the NY Times to get more of a feel for his argument — social media already operate within a context in which the habits of public discourse have undermined our ability to take words seriously.  To put it another way, the assumptions shaping our public discourse encourage the divorce of words and deeds by stripping our language of its appeal to the deeper moral and metaphysical resources necessary to compel social action.  We tend to get stuck in the analysis and pseudo-debate without ever getting to action. As Fish puts it:

While secular discourse, in the form of statistical analyses, controlled experiments and rational decision-trees, can yield banks of data that can then be subdivided and refined in more ways than we can count, it cannot tell us what that data means or what to do with it . . . . Once the world is no longer assumed to be informed by some presiding meaning or spirit (associated either with a theology or an undoubted philosophical first principle) . . . there is no way, says Smith, to look at it and answer normative questions, questions like “what are we supposed to do?” and “at the behest of who or what are we to do it?”

Combine this with Kierkegaard’s 19th-century observations about the press, which now appear all the more applicable to the digital world.  Consider the following summary of Kierkegaard’s fears offered by Hubert Dreyfus in his little book On the Internet:

. . . the new massive distribution of desituated information was making every sort of information immediately available to anyone, thereby producing a desituated, detached spectator.  Thus, the new power of the press to disseminate information to everyone in a nation led its readers to transcend their local, personal involvement . . . . Kierkegaard saw that the public sphere was destined to become a detached world in which everyone had an opinion about and commented on all public matters without needing any first-hand experience and without having or wanting any responsibility.

Kierkegaard suggested the following motto for the press:

Here men are demoralized in the shortest possible time on the largest possible scale, at the cheapest possible price.

I’ll let you decide whether or not that motto may be applied even more aptly to existing media conditions.  In any case, the situation Kierkegaard believed was created by the daily print press in his own day is at least a more likely possibility today.  A globally connected communications environment geared toward creating a constant, instantaneous, and indiscriminate flow of information, together with the assumptions of public discourse described by Smith, numbs us into docile indifference — an indifference social media may be powerless to overthrow, particularly when the stakes are high.  We are offered instead the illusion of action and involvement, the sense of participation in the debate.  But there is no meaningful debate, and by next week the issue, whatever the issue is, will still be there, and we’ll be busy discussing the next thing.  Meanwhile action walks further down a lonely path, long since parted from words.

Warning: A Liberal Education Leads to Independent Thinking

File this one under “Unintended Consequences.”

In the 1950’s, at the height of the Cold War, Bell Telephone Company of Pennsylvania put its most promising young managers through a rigorous 10-month training program in the Humanities with the help of the University of Pennsylvania.  During that time they participated in lectures and seminars, read voraciously, visited museums, attended the symphony, and toured Philadelphia, New York, and Washington.  To top it off, many of the leading intellectuals of the time were brought in to lecture these privileged few and discuss their books.  Among the luminaries were poet W. H. Auden and sociologist David Riesman, whose 1950 book, The Lonely Crowd, was a classic study of the set to which these men belonged.

The idea behind the program was simple.  Managers with only a technical background were competent at their present jobs, but they were not sufficiently well-rounded for the responsibilities of upper management.  As sociologist E. Digby Baltzell put it, “A well-trained man knows how to answer questions, they reasoned; an educated man knows what questions are worth asking.”  Already in the early 20th century “information overload” was deemed a serious problem for managers, but by the early 1950’s it was believed that computers were going to solve the problem. (I know.  That in itself is worth elaboration, but it will have to wait for another post.)  The automation associated with computers, however, ushered in a new problem — the danger that the manager would become a thoughtless, unoriginal, technically competent conformist.  Writing in 1961, Walter Buckingham warned against the possibility that automation would lead not only to a “standardization of products,” but also to a “standardization of thinking.”

But there were other worries as well.  It was feared that the Soviet Union was pulling ahead in the sheer number of scientists and engineers, creating a talent gap between the USSR and America.  As a way of undercutting this advantage, many looked to the Humanities and a liberal education.  According to Thomas Woody, writing in 1950, “Liberal education was an education for free men, competent to fit them for freedom.”  Thus a humanistic education became not only a tool to better prepare business executives for the complexity of their jobs but also a weapon against Communism.

In one sense, the program was a success.  The young men were reading more, their intellectual curiosity was heightened, and they were more open minded and able to see an argument from both sides.  There was one problem, however.  The Bell students were now less willing to be a cog in the corporate machinery.  Their priorities were reordered around family and community.  According to one participant, “Now things are different.  I still want to get along in the company, but I now realize that I owe something to myself, my family, and my community.”  Another put it this way,

Before this course, I was like a straw floating with the current down the stream.  The stream was the Bell Telephone Company.  I don’t think I will ever be like that straw again.

Consequently, the program began to appear as a threat to the company.  There was one other strike against it as well:  a survey revealed that, after passing through the program, participants were likely to become more tolerant of socialism and less certain that a free democracy depended upon free business enterprise.  By 1960, the program was disbanded.

This is a fascinating story about the power of an education in the humanities to enlarge the mind and fit one for freedom.  But it is also a reminder that in an age of conformity, thinking for oneself is not always welcomed even if it is paid lip service.  After all, remember how well things turned out for Socrates.

__________________

A note about sources:  I first read about the Institute for Humanistic Studies for Executives in an op-ed piece by Wes Davis in the NY Times.  The story fascinated me and I subsequently found an article on the program written in the journal The Historian in 1998 by Mark D. Bowles titled, “The Organization Man Goes To College:  AT&T’s Experiment in Humanistic Education, 1953-1960.”  Quotes in the post are drawn from Bowles’ article.

Life Amid Ruins

In Status Anxiety — his part philosophically-minded self-help book, part social history — Alain de Botton describes two fashions that were popular in the art world during the 17th and 18th centuries, respectively.  The first, vanitas art, took its name from the biblical book of Ecclesiastes, in which it is written, “Vanity of vanities, all is vanity.”  Vanitas art, which flourished especially in the Netherlands, and also in Paris, was, as the biblical citation implies, concerned with life’s fleeting nature.  As de Botton describes them,

Each still-life featured a table or sideboard on which was arranged a contrasting muddle of objects.  There might be flowers, coins, a guitar or mandolin, chess pieces, a book of verse, a laurel wreath or wine bottle: symbols of frivolity and temporal glory.  And somewhere among these would be set the two great symbols of death and the brevity of life:  a skull and an hourglass.

A bit morbid we might think, but as de Botton explains,

The purpose of such works was not to send their viewers into a depression over the vanity of all things; rather, it was to embolden them to find fault with particular aspects of their own experience, while at the same time attending more closely to the virtues of love, goodness, sincerity, humility and kindness.

Okay, still a bit morbid you might be thinking, but fascinating nonetheless.  Here is the first of two examples provided in Status Anxiety:

Philippe de Champaigne, circa 1671

Here is the second example:

Simon Renard de Saint-André, circa 1662

And here are a few others from among the numerous examples one can find online:

Edwart Collier, 1640
Pieter Boel, 1663
Adam Bernaert, circa 1665
Edwart Collier, 1690

Less morbid and more nostalgic, the second art fashion de Botton examines is the 18th- and 19th-century fascination with ruins.  This fascination was no doubt inspired in part by the unearthing of Pompeii’s sister city, Herculaneum, in 1738.  The most intriguing subset of these paintings of ancient ruins, however, was those that imagined in ruins not the past, but the future.  “A number of artists,” according to de Botton, “have similarly delighted in depicting their own civilization in a tattered future form, as a warning to, and reprisal against, the pompous guardians of the age.”  Consider these the antecedents of the classic Hollywood trope in which some famous city and its monuments lie in ruins — think Planet of the Apes and the Statue of Liberty.

Status Anxiety provides three examples of these future ruins.  The first depicts the Louvre in ruins:

Hubert Robert, Imaginary View of the Grande Galerie of the Louvre in Ruins, 1796

The second depicts the ruins of the Bank of England:

Joseph Gandy, View of the Rotunda of the Bank of England in Ruins, 1798

And the third, from a later period, depicts the city of London in ruins, being sketched by a man from New Zealand, “the country that in Doré’s day symbolized the future,” in much the same way that Englishmen on their Grand Tours would sketch the ruins of Athens or Rome.

Gustave Doré, The New Zealander, 1871

Finally, both of these art fashions suggested to my mind Nicolas Poussin’s Et in Arcadia ego from 1637-1638:

Nicolas Poussin, Et in Arcadia ego, 1637-38

Here, shepherds stumble upon an ancient tomb on which they read the inscription, Et in Arcadia ego.  There has been some debate about precisely how the phrase should be taken.  It may be read as the voice of death personified saying “even in Arcadia I exist,” or it may mean “the person buried in this tomb lived in Arcadia.”  In either case the moral is clear:  death comes for the living.  It is a memento mori, a reminder of death (note the appearance of that phrase in the last piece of vanitas art above).

Admittedly, these are not the most uplifting of reflections.  However, de Botton’s point, and the point of the artists who painted these works, strikes me as sound:  we make a better go of the present if we live with the kind of perspective engendered by these works of art.  Our tendency to ignore our mortality and our refusal to acknowledge the limitations of a single human life may be animating much of our discontent and alienation.  Perhaps.  Certainly there is some wisdom here we may tap into.  This is pure conjecture, of course, but I wonder how many, having contemplated Gandy’s painting, would have found the phrase “too big to fail” plausible?  Might we not, with a renewed sense of our mortality, reorder some of our priorities, bringing into sharper focus the more meaningful elements of life?

It is also interesting to consider that not only do we have few contemporary equivalents of the kind of artwork we’ve considered, but neither do we have any actual ruins in our midst.  America seems uniquely suited to being a country without a sense of the past.  Not only are we an infant society by historical standards, but even the ancient inhabitants of our lands, unlike those further south, left no monumental architecture, no tokens of antiquity.  Those of us who live in suburbia may go days without casting our eyes on anything older than twenty years.  We have been a forward-looking society whose symbolic currency has not been — could not have been — the ruins of the past, but rather the Frontier, with its accent on the future and what may be.

I would not call into question the whole of this cultural sensibility, but perhaps we could have used just a few ruins.

Multitasking Monks?

In her essay, “Medieval Multitasking:  Did We Ever Focus?”, Elizabeth Drescher addresses the Nicholas Carr/Clay Shirky debate on the relative merits of the Internet.  Drescher’s piece distinguishes itself by taking, as her title suggests, a long view of the issue and by its breezy, phenomenological style.  I think she is right to look for historical antecedents that shed light on our use of new media; however, I have reservations about where she ends up.  I tend to see more discontinuity than she does, particularly in the kind of relationship with the text encouraged by certain features of new media.  You can read some of my thoughts in the Letters section below Drescher’s essay or here.  Quick excerpt:

Modularity, or what Manovich also calls the “fractal structure of new media,” allows for individual elements of a hypertext (text, image, video, chart, audio, etc.) to retain their integrity and be easily abstracted and recombined in another setting. Now to get a sense of the significance of this development, imagine a medieval monk attempting to easily abstract the graphic elements of an illuminated manuscript for use in another setting.

I single out modularity because it gets at an important distinction that gets lost if we lay all the emphasis on continuity.  Modularity has contributed to a massive reconfiguration of the relationship between the media artifact and the user.  The conditions of new media have allowed us to approach texts (and I use that term in the widest possible sense) on the Internet as potential creators, as well as users . . .

We now seem less adept at receiving a text and, at least to begin with, submitting ourselves to it.  This is a particularly important development in religious contexts.  We are now more likely to jump into the creation of our own meaning and our own texts without first allowing the texts to read us, as it were.  We are less likely to listen to the text before wanting to speak back to it or speak it anew.  We are first disposed to shape the text rather than being open to how the text may shape us.

Along the way Drescher links to the op-ed piece by Steven Pinker that we noted here earlier, but she also links to an op-ed by David Brooks, “The Medium is the Medium”, which I had missed.  In his piece Brooks makes some interesting distinctions and observations, yet my initial response is mixed.  Perhaps more on that later.

Jack Kerouac (possibly drunk)

That is how The New Republic titled one of the many fascinating archival video clips it has assembled on its website.  When you watch that particular clip you realize that inserting “possibly” into the parenthetical statement was an act of inspired generosity.

Along with the Kerouac clip you will find videos of J. R. R. Tolkien, Isaiah Berlin, Sigmund Freud, Georges Bataille, Leon Trotsky, Aldous Huxley, Michel Foucault, Jean-Paul Sartre, Walker Percy, Eudora Welty, Samuel Beckett, George Bernard Shaw, and several more.  Most of the clips are quite short, many are a bit grainy, and a few are not in English (such as Camus at a soccer match).

Just in case you needed something to help you waste away some time.  Although it’s not quite a waste, I don’t think.