A Political Question of the First Order

In the course of urging us to “think what we are doing,” Hannah Arendt made the following observations in her 1958 work, The Human Condition:

“This future man, whom scientists tell us they will produce in no more than a hundred years, seems to be possessed by a rebellion against human existence as it has been given, a free gift from nowhere (secularly speaking), which he wishes to exchange, as it were, for something he has made himself. There is no reason to doubt our abilities to accomplish such an exchange, just as there is no reason to doubt our present ability to destroy all organic life on earth. The question is only whether we wish to use our new scientific and technical knowledge in this direction, and this question cannot be decided by scientific means; it is a political question of the first order and therefore can hardly be left to the decision of professional scientists or professional politicians.”

Later on she adds:

“But it could be that we, who are earth-bound creatures and have begun to act as though we were dwellers of the universe, will forever be unable to understand, that is, to think and speak about the things which nevertheless we are able to do. In this case, it would be as though our brain, which constitutes the physical, material condition of our thoughts, were unable to follow what we do, so that from now on we would indeed need artificial machines to do our thinking and speaking. If it should turn out to be true that knowledge (in the modern sense of know-how) and thought have parted company for good, then we would indeed become the helpless slaves, not so much of our machines as of our know-how, thoughtless creatures at the mercy of every gadget which is technically possible, no matter how murderous it is.”

As is always the case with such excerpts, I offer them for your consideration because I find them interesting and provocative.

Laborers Without Labor

Kevin Drum in Mother Jones (2013):

“This is a story about the future. Not the unhappy future, the one where climate change turns the planet into a cinder or we all die in a global nuclear war. This is the happy version. It’s the one where computers keep getting smarter and smarter, and clever engineers keep building better and better robots. By 2040, computers the size of a softball are as smart as human beings. Smarter, in fact. Plus they’re computers: They never get tired, they’re never ill-tempered, they never make mistakes, and they have instant access to all of human knowledge.

The result is paradise. Global warming is a problem of the past because computers have figured out how to generate limitless amounts of green energy and intelligent robots have tirelessly built the infrastructure to deliver it to our homes. No one needs to work anymore. Robots can do everything humans can do, and they do it uncomplainingly, 24 hours a day. Some things remain scarce—beachfront property in Malibu, original Rembrandts—but thanks to super-efficient use of natural resources and massive recycling, scarcity of ordinary consumer goods is a thing of the past. Our days are spent however we please, perhaps in study, perhaps playing video games. It’s up to us.”

Hannah Arendt in The Human Condition (1958):

“Closer at hand and perhaps equally decisive is another no less threatening event. This is the advent of automation, which in a few decades probably will empty the factories and liberate mankind from its oldest and most natural burden, the burden of laboring and the bondage to necessity. Here, too, a fundamental aspect of the human condition is at stake, but the rebellion against it, the wish to be liberated from labor’s ‘toil and trouble,’ is not modern but as old as recorded history. Freedom from labor itself is not new; it once belonged among the most firmly established privileges of the few. In this instance, it seems as though scientific progress and technical developments had been only taken advantage of to achieve something about which all former ages dreamed but which none had been able to realize.

However, this is so only in appearance. The modern age has carried with it a theoretical glorification of labor and has resulted in a factual transformation of the whole of society into a laboring society.  The fulfillment of the wish, therefore, like the fulfillment of wishes in fairy tales, comes at a moment when it can only be self-defeating. It is a society of laborers which is about to be liberated from the fetters of labor, and this society does no longer know of those other higher and more meaningful activities for the sake of which this freedom would deserve to be won. Within this society, which is egalitarian because this is labor’s way of making men live together, there is no class left, no aristocracy of either a political or spiritual nature from which a restoration of the other capacities of man could start anew . . . What we are confronted with is the prospect of a society of laborers without labor, that is, without the only activity left to them.  Surely, nothing could be worse.”

Update: A short while after I published this post, I was reminded of an article by Philip Blond I’d linked to a couple of years ago. It included this:

… according to Blond, “Neither Left nor Right can offer an answer because both ideologies have collapsed as both have become the same.”  The left lives by an “agenda of cultural libertarianism” while the right espouses an agenda of “economic libertarianism,” and there is, in Blond’s view, little or no difference between them.  They have both contributed to a shattered society.  “A vast body of citizens,” Blond argues, “has been stripped of its culture by the Left and its capital by the Right, and in such nakedness they enter the trading floor of life with only their labor to sell.”

“With only their labor to sell” – an arresting phrase that, in present context, raises the question: What if even this is taken away?

Landlines, Cell Phones, and Their Social Consequences

If you’re of a certain age, you’ll remember the pre-cellular days of household phones. One line for everyone, and only one person on the phone at a time. Under the best of circumstances this situation would often lead to more than a few inconveniences. In less than ideal cases, inconvenience could yield to much, much worse. I’m not entirely sure what got me thinking about the place of the phone in my high school years, but once I started collecting memories, I began to realize that a number of experiences and situations that were then common have disappeared following the emergence of cell phones. And, it seems to me, not all of these transformations are altogether trivial.

For the record, my high school years were in the 1990s; cell phones were not quite rare and they had already evolved well past the “brick” era. Yet, they were not exactly common either, and they certainly had not displaced the landline. Beepers were then the trendy communication accessory of choice.

As I thought back to the pre-cellular era, it was the rather public nature of the landline conversation that most caught my attention. The household phone was not a subtle creature. Placing a call to a friend meant, in a sense, placing a call to their whole family. The ring of a phone was indiscriminate, and so your call was a matter of public record. If your friend picked up, they might later be asked who it was that called, because everyone knew someone had called. If they did not pick up, then you might end up talking to a family member, hopefully one who was kind and polite. So not, for example, a bratty sibling or a cranky parent. Or both, since there was always the possibility that more than one person would pick up and the awkward process of determining who the call was for, and getting them and them alone on the line, would ensue.

That possibility alone, of perforce having to interact with someone other than the person you intended to speak to, functioned as a form of socialization. It meant that you got to know your friend’s family, including adults, whether you wanted to or not. Consider that it is not altogether unusual for us now to resort to texting so as not to talk to even the person with whom we intend to communicate. (Not that there is anything wrong with this … necessarily.) Back then, we not only aimed to talk to someone, but we ran the risk of talking to other people as well. This strikes me as somewhat consequential.

Then, of course, there were all of those not quite licit conversations and the devious ingenuity they occasioned. For example, aiming to talk past a curfew or after other members of the family had gone to bed, one would arrange a set time for the call and then sit waiting with hand on phone, maybe even finger on hook, in order to pick up the call at the very first vibration of sound. Or the more serious variety, which often involved the maintenance of unacknowledged and disapproved relationships. Again, if you are of a certain age, I suspect you will be able to supply a number of anecdotes on that score.

This dynamic was recently dramatized in the series Mad Men, set in the early 1960s: both Don and Betty Draper maintain illicit relationships, and their phone calls, placed and received, constantly threaten to unravel their secrets.

Also, the landline was public not only in that it made phone calls a matter of public record; it was also a shared resource. If you were on the phone, someone else could not be, and so some equitable system of sharing this resource, which was at times in heavy demand, had to be devised. The difficulty of arriving at such an equitable distribution of resources was, naturally, directly proportional to the number of teenagers in the house.

All of this together led me to recall the distinctions Hannah Arendt made in her hefty book, The Human Condition, among the private, public, and social realms. I want to borrow these distinctions to think about the differences between landlines and cell phones, but I won’t be using the terms in quite the same way that she does. On one point, though, I do want to track more closely to her usage, and that is her conception of what constitutes the public realm: disclosure. The public realm was one in which individuals acted in such a manner that they disclosed themselves to others and were, in turn, acknowledged by others. The public realm was a function of scale: the individual acted among many, but not so many that identity was lost and action rendered unintelligible.

The social realm featured a multiplicity of individuals as well — it was not private — but it took place on a mass scale and even though (or, because) it included multitudes, it was, in fact, a realm of anonymity — its image was the faceless crowd. This differentiation between the public and the social is especially useful now that the digital social realm has emerged over the last decade. Even though we can’t simply conflate what we call social media with Arendt’s social realm, the awareness of a distinction among ways of not being by oneself is all the more important.

In Arendt’s analysis, what counted as the private realm shifted its terms according to whether it was paired with the public or the social. In relation to the public realm, the private was the relative seclusion of the household, a publicly respected zone. But as the household itself became a province of the social, privacy was reconfigured as anonymity.

Consider the landline an instance of the public dynamic and the cell phone a manifestation of the social dynamic, loosely following Arendt’s model. For all the reasons listed above, the landline brought the user into public view. It entailed a necessary appearing in the midst of others, the taking of a certain responsibility for one’s actions, and the negotiation of rights to a shared resource, and it yielded a privacy that had to be granted by others rather than seized through seclusion.

On that last point, consider that while you could lock yourself in your room to have some privacy, the holy grail of teenage life back then, this privacy could rather easily be violated through numerous forms of eavesdropping. To be actualized, this privacy had to be conceived of as a transaction of public trust.

By contrast, the cell phone allows for a form of privacy that is closer to mere anonymity than to a publicly acknowledged and respected right. The cell phone also encourages concealment rather than disclosure. If my phone is silenced, there is hardly any necessary reason why anyone would know that I have received a call, and if I require privacy I simply take myself and my phone where no one can hear me. I absent myself, I make myself disappear, and consequently make no claims upon the civility or trust of others in order to have my privacy. What’s more, the cell phone is typically not shared materially, even though something abstract, like minutes, may be shared in a family plan. No limits are therefore placed on use of the resource, at least for those who can afford high-end plans.

If we take the habits of phone use to be a practice that reinforces certain ways of being, then the differences between the landline and the cell phone are not insignificant. Landlines yielded a public self, constituted privacy as a right premised upon public virtues, and instilled a sense of limits that come from the use of a shared and bounded resource. Cell phones, by contrast, yield an anonymous self, constitute privacy as a function of anonymity and dis-appearing, and instill habits of unbounded and unlimited consumption.

Now my question to you: is this all overblown and overwrought analysis? Or does it amount to a development of individual and social consequence?

The Internet, the Body, and Unconscious Dimensions of Thought

Thinking What We Are Doing

Part One of Three (projected).

Writing near the midpoint of the last century, Hannah Arendt worried that we were losing the ability “to think and speak about the things which nevertheless we are able to do.” The advances of science were such that representing what we knew about the world could be done only in the language of mathematics, and efforts to represent this knowledge in a publicly meaningful and accessible manner would become increasingly difficult, if not altogether impossible.  Under such circumstances speech and thought would part company and political life, premised as it is on the possibility of meaningful speech, would be undone.  Consequently, “it would be as though our brain, which constitutes the physical, material condition of our thoughts, were unable to follow what we do, so that from now on we would indeed need artificial machines to do our thinking and speaking.”

Arendt was nearly prescient.  She clearly believed this to be a dystopian scenario that would result in the enslavement of humanity, not so much to our machines, but to one narrow constituent element of our humanity – our “know-how,” that is our ability to make tools.  What Arendt did not imagine was the possibility that digitally, and thus artificially, augmented human thought might avert the very enslavement she foresaw.

On the eve of the 21st century, similar concerns were articulated by Paul Virilio, who believed that our technologies, particularly the Internet, created a situation in which a total and integral accident was possible – an accident unlike anything we have heretofore experienced and one that we could not, as of yet, imagine.  Virilio termed this possibility the general accident.  Like Arendt, Virilio believed that the emerging shape of our technological society threatened the possibility of politics; and if politics failed, Virilio claimed, the general accident would be inevitable. Again, like Arendt, Virilio seems unable to imagine that the way forward may lie through, not against, technology, particularly the Internet.

If the concerns expressed by both Arendt and Virilio continue to resonate, it is because the structure of the challenge they articulated remains intact.  The pace of technological development outstrips our ability to think through its attendant social and ethical implications; moreover, the political sphere appears so captivated by the ensuing spectacle that it is ensnared by the very problems we call upon it to solve.  We are confronted, then, with a technologically induced failure of thought and politics, along the lines anticipated by Arendt and Virilio.

Gregory Ulmer is likewise concerned about the challenges presented to our thinking and our politics by technology, specifically the Internet; but Ulmer is more sanguine about the possibility of inventing new forms of thought adequate to our circumstances.  Electracy, according to Ulmer, will be to the digital age what literacy has been to the age of print: an apparatus of thought and practice directed toward the perennial question:  “why do things go wrong?”

Ulmer further elaborates the function of electracy in reference to subjectivity:

If the literate apparatus produced subjectivation in the mode of individual selves organized collectively in democratic nation-states, electracy seems to allow the possibility of a group subjectivation with a self-conscious interface between individual and collective . . .

Ulmer begins Electronic Monuments with a discussion of Paul Virilio’s general accident because, in Ulmer’s view, Virilio has “most forcefully” articulated concerns about “the Internet as the potential source of a general accident.” Unlike Virilio, however, Ulmer believes the best response to the potential of the general accident lies not in opposition to the Internet, but in the possibilities created by the Internet.

In The Human Condition, Arendt set for society a very straightforward goal:  “What I propose, therefore, is very simple:  it is nothing more than to think what we are doing.” While Arendt goes on to help the reader understand “what we are doing,” the matter of thinking what we are doing remains an elusive task.

Ulmer attributes our inability to think what we are doing to the blindness that plagues us, both individually and collectively, and he draws on a combination of Greek tragedy and psychoanalysis to frame and theorize this blindness.  Reflecting on Greek tragedy, an “oral-literate hybrid” bridging oral and literate forms of problem recognition, Ulmer explains, “The aspect of tragedy of most interest in our context is (in Greek) ATH (até in lowercase), which means ‘blindness’ or ‘foolishness’ in an individual, and ‘calamity’ or ‘disaster’ in a collectivity.”

The sources of ATH, according to Ulmer, are “those circumstances already in place and into which we are thrown at birth, providing the default moods enforcing in us the institutional construction of identity.” Marshall McLuhan captures a similar point in characteristically pithy fashion when he observes that, “Environments are invisible. Their groundrules, pervasive structure and overall patterns elude easy perception.”

In the concluding chapter of Electronic Monuments, Ulmer further clarifies the concept of ATH with reference to Jacques Lacan’s exposition of Antigone:  “Lacan is interested in ATH as showing that exterior that is at the heart of me, the intersubjective nature of human identity.” Ulmer also refers to the intersubjective nature of human identity in describing the Internet as a “prosthesis of the unconscious (intersubjective) mind.” On more than one occasion, Ulmer identifies this metaphor – the Internet as prosthesis of the unconscious – as one of the key assumptions informing his development of the apparatus of electracy.

Taking Ulmer’s discussions of ATH, intersubjectivity, and the unconscious together, the following picture emerges:  For Ulmer the unconscious is not necessarily a realm of repressed trauma or libidinal desire, but rather is shorthand for the countless, unarticulated ways in which subjectivity is constructed by the social world it inhabits.  From one angle, Ulmer has given Freud, not a semiotic spin as Lacan had done, but a sociological spin.  The unconscious names the group subject – the exteriority at the heart of me.

The Internet is a prosthesis of this unconscious in the sense that it is a virtually limitless digital repository of all of the features of the social world that have imprinted themselves on the subject.  On YouTube, to take one example, a viewer can locate the still vaguely remembered toy commercial from their childhood, and then be offered links to a multitude of other, more thoroughly forgotten commercials, theme songs, and cartoons that, once seen, are remembered, and whose significance can be startling. Like T. S. Eliot’s “unknown, unremembered gate” in “Little Gidding,” the Internet operating as a prosthesis of the unconscious allows the user to “arrive where we started/And know the place for the first time.”

This collective element of group subjectivity, until it is made accessible through the practices of electracy Ulmer develops, functions as a blind spot (ATH).  It is a source of judgment and action that remains hidden from conscious thought analogously to the traditional psychoanalytic unconscious.  This blindness, therefore, presents a powerful obstacle to Arendt’s plea, that we think what we are doing.  Ulmer’s project, then, may be understood as an attempt to employ the Internet in an effort to make conscious thought aware of the way in which it has been constructed by the social.

When Words and Action Part Company

I’ve not been one to jump on the Malcolm Gladwell bandwagon; I can’t quite get past the disconcerting hair.  That said, his recent piece in The New Yorker, “Small Change:  Why the revolution will not be tweeted,” makes a compelling case for the limits of social media when it comes to generating social action.

Gladwell frames his piece as a study in contrasts.  He begins by recounting the evolution of the 1960 sit-in movement that began when four freshmen from North Carolina A & T sat down and ordered coffee at the lunch counter of the local Woolworth’s and refused to move when the waitress insisted, “We don’t serve Negroes here.”  Within days the protest grew and spread across state lines and tensions mounted.

Some seventy thousand students eventually took part. Thousands were arrested and untold thousands more radicalized. These events in the early sixties became a civil-rights war that engulfed the South for the rest of the decade—and it happened without e-mail, texting, Facebook, or Twitter.

Almost reflexively now, the devotees of social media power will trot out the Twitter-enabled 2009 Iranian protests as an example of what social media can do.  Gladwell, anticipating as much, quotes Mark Pfeifle, a former national-security adviser, who believes that, “Without Twitter the people of Iran would not have felt empowered and confident to stand up for freedom and democracy.”  Pfeifle went so far as to call for Twitter’s nomination for the Nobel Peace Prize.  One is inclined to believe that is a bit of a stretch, and Gladwell explains why:

In the Iranian case … the people tweeting about the demonstrations were almost all in the West. “It is time to get Twitter’s role in the events in Iran right,” Golnaz Esfandiari wrote, this past summer, in Foreign Policy. “Simply put: There was no Twitter Revolution inside Iran.” The cadre of prominent bloggers, like Andrew Sullivan, who championed the role of social media in Iran, Esfandiari continued, misunderstood the situation. “Western journalists who couldn’t reach—or didn’t bother reaching?—people on the ground in Iran simply scrolled through the English-language tweets post with tag #iranelection,” she wrote. “Through it all, no one seemed to wonder why people trying to coordinate protests in Iran would be writing in any language other than Farsi.”

You can read Esfandiari’s Foreign Policy article, “Misreading Tehran: The Twitter Devolution,” online.  Gladwell argues that social media are unable to promote significant and lasting social change because they foster weak rather than strong-tie relationships.  Promoting and achieving social change very often means coming up against entrenched cultural norms and standards that will not easily give way.  And as we know from the civil rights movement, the resistance is often violent.  As Gladwell reminds us,

. . . Within days of arriving in Mississippi, three [Freedom Summer Project] volunteers—Michael Schwerner, James Chaney, and Andrew Goodman—were kidnapped and killed, and, during the rest of the summer, thirty-seven black churches were set on fire and dozens of safe houses were bombed; volunteers were beaten, shot at, arrested, and trailed by pickup trucks full of armed men. A quarter of those in the program dropped out. Activism that challenges the status quo—that attacks deeply rooted problems—is not for the faint of heart.

A subsequent study of the participants in the Freedom Summer project was conducted by Doug McAdam:

“All  of the applicants—participants and withdrawals alike—emerge as highly committed, articulate supporters of the goals and values of the summer program,” he concluded. What mattered more was an applicant’s degree of personal connection to the civil-rights movement . . . . [P]articipants were far more likely than dropouts to have close friends who were also going to Mississippi. High-risk activism, McAdam concluded, is a “strong-tie” phenomenon.

Gladwell also goes on to explain why hierarchy, another feature typically absent from social media activism, is indispensable to successful movements while taking some shots along the way at Clay Shirky’s much more optimistic view of social media outlined in Here Comes Everybody: The Power of Organizing Without Organizations.

Not surprisingly, Gladwell’s piece has been making the rounds online the past few days. In response to Gladwell, Jonah Lehrer posted “Weak Ties, Twitter and the Revolution” on his blog The Frontal Cortex.  Lehrer begins by granting, “These are all worthwhile and important points, and a necessary correction to the (over)hyping of Twitter and Facebook.”  But he believes Gladwell has erred in the other direction.  Basing his comments on Mark Granovetter’s 1973 paper, “The Strength of Weak Ties,” Lehrer concludes:

. . . I would quibble with Gladwell’s wholesale rejection of weak ties as a means of building a social movement. (I have some issues with Shirky, too.) It turns out that such distant relationships aren’t just useful for getting jobs or spreading trends or sharing information. According to Granovetter, they might also help us fight back against the Man, or at least the redevelopment agency.

Read the whole post to get the full argument and definitely read Lehrer’s excellent review of Shirky’s book linked in the quotation above.  Essentially Lehrer is offering a kind of middle ground between Shirky and Gladwell.  Since I tend toward mediating positions myself, I think he makes a valid point; but I do lean toward Gladwell’s end of the spectrum nonetheless.

Here, however, is one more angle on the issue:  perhaps the factors working against the potential of social media are not only inherent in the form itself, but also a condition of society that predates the arrival of digital media by generations.  In The Human Condition, Hannah Arendt argued that power, the kind of power to transform society that Gladwell has in view,

. . . is actualized only where word and deed have not parted company, where words are  not empty and deeds not brutal, where words are not used to veil intentions but to disclose realities, and deeds are not used to violate and destroy but to establish relations and create new realities.

Arendt made that claim in the late 1950s, and she argued that even then words and deeds had been drifting apart for some time.  I suspect that since then the chasm has yawned ever wider and that social media participate in and reinforce that disjunction.  It would be unfair, however, to single out social media, since the problem extends to most forms of public discourse, of which social media are but one example.

In The Disenchantment of Secular Discourse, Steven D. Smith argues that

It is hardly an exaggeration to say that the very point of ‘public reason’ is to keep the public discourse shallow – to keep it from drowning in the perilous depths of questions about ‘the nature of the universe,’ or ‘the end and object of life,’ or other tenets of our comprehensive doctrines.

If Smith is right — you can read Stanley Fish’s review in the NY Times to get more of a feel for his argument — social media already operate within a context in which the habits of public discourse have undermined our ability to take words seriously.  To put it another way, the assumptions shaping our public discourse encourage the divorce of words and deeds by stripping our language of its appeal to the deeper moral and metaphysical resources necessary to compel social action.  We tend to get stuck in the analysis and pseudo-debate without ever getting to action. As Fish puts it:

While secular discourse, in the form of statistical analyses, controlled experiments and rational decision-trees, can yield banks of data that can then be subdivided and refined in more ways than we can count, it cannot tell us what that data means or what to do with it . . . . Once the world is no longer assumed to be informed by some presiding meaning or spirit (associated either with a theology or an undoubted philosophical first principle) . . . there is no way, says Smith, to look at it and answer normative questions, questions like “what are we supposed to do?” and “at the behest of who or what are we to do it?”

Combine this with Kierkegaard’s 19th-century observations about the press, which now appear all the more applicable to the digital world.  Consider the following summary of Kierkegaard’s fears offered by Hubert Dreyfus in his little book On the Internet:

. . . the new massive distribution of desituated information was making every sort of information immediately available to anyone, thereby producing a desituated, detached spectator.  Thus, the new power of the press to disseminate information to everyone in a nation led its readers to transcend their local, personal involvement . . . . Kierkegaard saw that the public sphere was destined to become a detached world in which everyone had an opinion about and commented on all public matters without needing any first-hand experience and without having or wanting any responsibility.

Kierkegaard suggested the following motto for the press:

Here men are demoralized in the shortest possible time on the largest possible scale, at the cheapest possible price.

I’ll let you decide whether or not that motto may be applied even more aptly to existing media conditions.  In any case, the situation Kierkegaard believed was created by the daily print press in his own day is at least as likely a possibility today.  A globally connected communications environment geared toward creating a constant, instantaneous, and indiscriminate flow of information, together with the assumptions of public discourse described by Smith, numbs us into docile indifference — an indifference social media may be powerless to overthrow, particularly when the stakes are high.  We are offered instead the illusion of action and involvement, the sense of participation in the debate.  But there is no meaningful debate, and by next week the issue, whatever the issue is, will still be there, and we’ll be busy discussing the next thing.  Meanwhile action walks further down a lonely path, long since parted from words.