The Triumph of the Social

“Social” is big, in case you missed it.

The most noticeable and significant development in our media environment over the last decade or so has been the emergence of social elements across digital media platforms and the subsequent migration of social media into traditional media fields.  We might, to borrow a phrase, call this the triumph of the social.  Whatever we call it, this development marks a significant departure from the trend toward individualism that characterized the modern era.

Modernity, according to the standard storyline, was characterized by the individual’s liberation from the constraints of place, tradition, institutions, and, to some degree, biology.  The Renaissance, the Reformation, the Enlightenment — each is an episode in the rise of the individual.  Protestantism, democracy, capitalism — each features the individual and his soul, his rights, his property prominently.  From this angle, postmodernity is, in fact, hyper-modernity; it is not a break with the trajectory of modernity with respect to identity, it is its consummation.   The individual is liberated even from the notion of the persistent or essential self.  Identity, according to the usual suspect theorists, is constructed all the way down (except, of course, that it is not).

I can reasonably follow this story through to the early Internet age, but then something changes. Social media reasserts the social self.  This could be read as a further development of the individualist trajectory – the liberated individual is simply given a larger stage from which to pursue the project of creating their identity free of constraints.  If so, then what we might analyze is the new mode of identity construction.  For example, we might note that we now perform our identity by sharing.  The “Like” button becomes the instrument of identity construction.  The bands, television shows, websites, products, news stories, movies, clothes, cars, companies, causes, etc. that we “Like” signal to our social network who we “are”.

Whatever truth there may be to that, and there is some, there is one other way to read the rise of the social.  In a recent blog post, sociologist Peter Berger offered some reflections on the July 2011 issue of The Annals of the American Academy of Political and Social Science devoted to the topic of “Patrimonial Power in the Modern World.”  In Berger’s summary, patrimonial power “is power on the basis of kinship and other patron-client relationships. It is the most common form of political authority in traditional societies before the rise of centralized states and empires. Such authority is exercised by way of personal loyalties rather than formal rules. The tribal chief is the prototypical leader in patrimonial regimes.”  Patrimonial power functioned within traditional societies grounded in personal ties and relationships.

Modernity, on the other hand, is characterized by other forms of power and authority.  Berger continues:

The counter-type is the bureaucrat … In a patrimonial system one trusts the chief because he belongs to one’s tribe and embodies its tradition. In what Weber called a “legal-rational” system one trusts the bureaucrat because he occupies an office established by proper procedures; indeed one trusts these procedures rather than the particular individual they have placed in the office.

Bureaucracy, however, not only abstracts power, it abstracts the personal.  In a bureaucracy the person is reduced to a number or an algorithm.  So one of the ironies of modernity is that the rise of the individual was accompanied by the rise of institutions that de-faced the individual.

What’s more, the rise of the individual throughout the modern period was coupled with the simultaneous rise of modern notions of privacy.  The extreme end of the privacy spectrum is complete anonymity, and here too the individual is de-faced, left without personal connection.

Just to be clear, this move toward individualism and privacy was not all bad.  In many respects it was a healthy corrective.  But the pendulum may have swung too far, and Berger does a nice job of explaining where the problem lies:

Robert Musil, in his great novel The Man Without Qualities, recounts an incident when Ulrich, the central character, is arrested and processed in a police station. He experiences what he calls a “statistical disaggregation” of his person. He is reduced to the minimal characteristics relevant to the police investigation, while all the characteristics essential to his self-esteem are ignored. In one way or another, we experience something like this depersonalization in many situations. We are abstract objects of the juridical system in court, abstract patients in a hospital,  abstract consumers in the marketplace. Everything we cherish most about ourselves is strictly irrelevant—our intellectual achievements, our sense of humor or capacity for affection, not to mention the prerogatives of age. In such situations we instinctively reach out for “tribal” connections—for someone who knows who we are, with whom we share an ethnic or religious identity, or even someone who laughs at a joke we tell: in sum, someone who recognizes us in a personal way.

Berger goes on to recall the term he coined with Richard Neuhaus, “mediating structures,” to describe neighborhood, family, church, and voluntary associations.  These mediating structures buffered the individual from the impersonal, bureaucratic power of the state, but they have themselves been severely compromised, leaving the individual isolated and disconnected.  This state of affairs, along with the presumption that we are indeed political, which is to say social, animals helps explain, in part at least, the triumph of the social.  Social media functions as a mediating structure, a realm in which we are addressed by name and find our individual self publicly acknowledged.

This is not the whole story, of course. Social media in its own way also reduces us to numbers or algorithms, and it cannot provide all that traditional mediating structures, at their best, are able to offer.  There are also temptations to narcissism and worse.  But, the risks notwithstanding, social media owes its success to the way it addresses a fundamental dimension of being human.

______________________________________________________

Related post:  The (Un)Naturalness of Privacy.

There Can Be Only One: Google+ Takes On Facebook

You are most likely not one of the favored few who have been invited to take Google’s new social networking platform out for a spin, and neither am I, but now we get a glimpse of what Google has been up to.  When it does go live, Google+ will open a new front in the battle against Facebook, and one that appears more promising than the ill-fated Google Buzz.

The Google+ experience is in large measure reminiscent of Facebook with at least one major exception:  Circles.  Facebook’s glaring weakness is its insistence that you indiscriminately present the same persona to every one of your “friends,” a list which may include your best friend from childhood, your ex-girlfriend, your boss, your co-workers, your grandmother, and that kid who lived down the street when you were growing up.  Amusingly, Zuckerberg turns moral philosopher on this point and declares that maintaining more than one online identity signals a lack of integrity.  Google+ more sensibly assumes that not all human relationships are created equal and that our social media experience ought to acknowledge that reality.  It allows you to create Circles into which you can drag and drop names from your contact list.  Whenever you post a picture or a link or a comment, you may designate which Circles will be able to see what you have posted.  In other words, it lets you effectively manage the presentation of your self to your multifaceted social media audience.  Google+ appears to have thus solved Facebook’s George Costanza problem:  colliding worlds.

Facebook has gestured in this direction with Groups and Friend lists, but this remains an awkward experience, perhaps because it is at odds with the logic at the core of Facebook’s DNA.  Google+, having taken note of the rumblings of discontent with Facebook’s at times cavalier attitude toward privacy,  also allows users to permanently delete their information from Google’s servers and otherwise presents a more privacy-friendly front.

Even with these features aimed at exposing Facebook’s weaknesses, and despite recent news about chinks in Facebook’s armor, Google+ is not expected to challenge Facebook’s social media supremacy.  Inertia is the main obstacle to the success of Google+.  Many users have committed an immense amount of data to their Facebook profiles, and Facebook has worked hard to integrate itself into the whole online experience of its users.  Additionally, Facebook has more or less become a memory archive for many of its users, and we don’t easily part with our memory.  Most significantly, perhaps, Google+ starts from a position of relative weakness as far as social media platforms are concerned — it has few users.  Most people will, for a long time to come, more readily find those they know on Facebook.

That said, Facebook’s early success against Myspace was predicated on a certain exclusivity.  It may be that an early disadvantage — relatively few members — may present itself as an important advantage in the eyes of enough people to generate momentum for Google+.  It is also hard to tell how many would-be social media users have been kept at bay by Facebook’s shortcomings and will now venture into social media waters given the refinements offered by Google+.  Casual Facebook users may also find it relatively painless to make a move.

Hard to tell from here, as is most of the future, but I wouldn’t be too surprised if Google+ significantly eroded Facebook’s base. Despite my Highlander-esque title, the most likely outcome may be that both platforms co-exist by appealing to different sets of sensibilities, priorities, and expectations.

Internet Pleasures

Brain science is an endlessly fascinating field.  Each day, it seems, a new neurological study is published revealing a link between this or that activity and this or that region of the brain, or that a certain neurotransmitter is related to the regulation of a certain behavior, and so on.  Yesterday’s encounter with the wonders of neurology came while listening to David Linden, professor of neuroscience at the Johns Hopkins University School of Medicine, being interviewed on NPR.  Linden’s new book, The Compass of Pleasure: How Our Brains Make Fatty Foods, Orgasm, Exercise, Marijuana, Generosity, Vodka, Learning, and Gambling Feel So Good, as the inelegant subtitle more than suggests, explores the role of the brain in the experience of pleasure.

Much of the interview focuses on a discussion of the neurology of addiction leading Linden to warn,

“Any one of us could be an addict at any time,” Linden says. “Addiction is not fundamentally a moral failing — it’s not a disease of weak-willed losers. When you look at the biology, the only model of addiction that makes sense is a disease-based model, and the only attitude towards addicts that makes sense is one of compassion.”

Initially, I was struck by two considerations after listening to the interview, both relating to the practical consequences of the science Linden discussed.  First, how oddly Aristotelian all the practical considerations come out sounding:  virtues and vices, habits, and moderation.   Secondly, how little difference this knowledge made for Linden in his own lived experience.  Here is the very last exchange from the interview:

NPR: Since you have studied pleasure and the pleasure circuitry of the brain, has that affected your own relationship with pleasure and the things that you seek or try not to get pleasure from?

Linden: Well, I try deeply not to let it do that.  I certainly — when I’m enjoying a glass of wine I don’t want to be thinking about dopamine levels and, for the most part, fortunately I have been able to avoid doing that. I’m blessed with not having a particularly addictive personality — although I’m a bit of a hedonist — so it hasn’t actually made too much of an impact on my own life.

This is a rather jarring note on which to wrap up the interview.  I’ve ordinarily been one to subscribe to A.E. Housman’s line, “All Human Knowledge is precious whether or not it serves the slightest human use.”  And mostly, I would still want to defend something like that claim.  Yet, there is something peculiar about our coming to know more about the biological and neurological base of human life, purportedly the real stuff on which all human life and action rests, only to find that for an ordinary, healthy adult steeped in this knowledge, it makes not much of a difference at all, and, in fact, that he consciously tries to disassociate his knowledge from his experience.  This bears more reflection, but there was one final thought, more directly related to the usual themes on this blog that I wanted to note.

Understanding the Internet’s personal and social consequences involves venturing into the territory mapped out by Linden and others in his field.  Pleasure of some sort — whether benign, problematic, or illicit — is involved in our daily interactions with the Internet.  If there is a certain compulsiveness to our online experience, then it is because our internet experience shares in an economy of desire, pleasure, and cycles of stimulation and diminishing return that potentially lead to addictive behavior.

We know that society tolerates certain addictive behaviors more than others, sometimes in seemingly arbitrary fashion.  Internet addiction may carry only a slight social stigma if any at all;  one is tempted rather to conclude that it carries a certain social cachet.  Whether socially acceptable or not, compulsive (or addictive, take your pick) Internet use does appear to have certifiably negative physical consequences in the brain.  A study just published in PLoS ONE suggests that heavy Internet use, particularly online gaming, leads to significant alterations in brain structure with detrimental consequences for cognitive function.  You can read more about the study here, here, and here.

Perhaps not surprisingly, the first of those three articles concludes its report with an appeal to the ancient Roman writer Petronius: “Moderation in all things, including moderation.”  I’m not sure if the writer meant to endorse Petronius’ playful, perhaps satirical tone; more likely it was intended as a straightforward prescription of moderation.

The Unsettledness at the Heart of our Experience

Unsettled — I’m beginning to think that is a helpful word to capture what it feels like to be alive at present.

[Okay, fair warning, what follows is more speculative and exploratory than what I usually feel comfortable writing on here.  Thoughts and criticism welcome.]

Unsettled is usually used in conversation to mean something like troubled or worried or disconcerted.  More literally it suggests being unanchored, untethered, without grounding, deracinated, adrift, without center.  To view it another way, it is to speak of alienation.

Is it legitimate to speak of alienation in the context of ubiquitous social networks and communication?  Might it be that our connectedness veils a deeper alienation that bubbles up to the surface of consciousness as a pervasive unsettledness?  This is my hypothesis for the moment.

We have known for a long time that as moderns we are no longer connected to place in any significant sense.  Mobility and the autonomy that it purchases come at a cost.  We hardly expect to die in the place we were born.  Most of us will move many times, from city to city, or state to state, or even country to country, before we finally move to Florida or Arizona.  Each move uproots us.  With each move we start over again to some degree.  Many of us are hard pressed to name our home in any traditional sense, so home is simply where we happen to be.  We are, then, spatially or geographically unsettled.

Is there a sense in which we are also temporally unsettled?  Is there an alienation at the heart of our experience of time as well as place?  Here I am thinking again of our mediated experience of the present.  Consider what we might call simply lived experience as a kind of baseline.  Life carried on with a certain immediacy, life lived as a subject interacting with the world beyond our skin.  Now consider what I’m going to call, perhaps problematically*, mediated experience.  This is life lived with a view to its own (re)presentation, life as conscious performance — for the camera, for Facebook, for our blog, etc.  At such times it seems we have inserted a layer of mediation between the present and our experience of it.  If so, might we then speak of a temporal alienation, a temporal unsettledness? Are we not only untethered from place, but also from time?

When we experience life with a view to its future presentation, with what Nathan Jurgenson has aptly called “documentary vision”, we are no longer in the moment as subject.  We are, so to speak, no longer acting in our own lives but directing; we have become spectators of our own lives.  In a sense we have objectified ourselves; we are looking at our selves. In my memories of events, I often see only the image of pictures I am in.  The memory is not my own first person memory, it is an image that stands in for my own lived experience of the event in which I am an object and not the subject — perhaps because I was not, properly speaking, experiencing the event as a lived experience.

If there is, in fact, a vague unsettled quality to our experience, perhaps it is because we have managed to uproot ourselves not only from place and the stability it brings, but also from the flow of time, from the lived present, in such a way that there is something like an oddly disjointed quality to our sense of self — as if we were watching a film with a time lag between the image and the sound.

While not exactly what T. S. Eliot had in mind, we might say that this begins to answer his poetic query, “Where is the Life we have lost in living?”

_______________________________________________________________________

* I say “problematically” because at some level, in some sense all experience is mediated even if only by our own use of language in our minds.

Place and Image, Death and Memory

The sociability of social networking sites such as Facebook is built upon an archive of memories.  Facebook trades in memory in at least two ways.  On the one hand, and perhaps especially for older users, Facebook is a platform that facilitates the search for memories.  Old friends and old flames can be found on Facebook.  Reconnecting with a high school buddy activates a surge of interconnected memories that lead to other long-forgotten memories and so on.  On the other hand, and this perhaps especially for younger users, Facebook also renders present experience already a depository of potential memories.  The future past impinges upon the present.  Our experience is conducted as a search for memories yet to be formed which will be archived on Facebook.  In a sense then we hunt for memories past and, paradoxically, for memories future.

This post is the second in a series situating Facebook within the memory theater/arts of memory tradition.  In the first post I set the stage by describing how social networking sites and internet-enabled smartphones have constituted experience as a field of potential memories.  I also suggested that how we store and access our memories makes a difference.  The cultural and personal significance of memory is not a static category of human nature. Memory and its significance evolve over time, often in response to changing technologies.  So the question, then, is something like this:  What difference does it make, personally and culturally, that Facebook has become such a prominent mode of memory?

In order to explore that question, I’ll delve briefly into the history of the art of memory, a set of memory practices with which, I believe, Facebook shares interesting similarities.  But as promised at the end of the last post, we’ll start with a story.

Spatiality, images, and death have long been woven together in the complex history of remembering.  Each appears prominently in the founding myth of what Frances Yates has called the “art of memory” as recounted by Cicero in his De oratore. According to the story, the poet Simonides of Ceos was contracted by Scopas, a Thessalian nobleman, to compose a poem in his honor.  To the nobleman’s chagrin, Simonides devoted half of his oration to the praise of the gods Castor and Pollux.  Feeling himself cheated out of half of the honor, Scopas brusquely paid Simonides only half the agreed upon fee and told him to seek the rest from the twin gods.  Not long afterward that same evening, Simonides was summoned from the banqueting table by news that two young men were calling for him at the door.  Simonides sought the two callers, but found no one.  While he was out of the house, however, the roof caved in killing all of those gathered around the table including Scopas. As Yates puts it, “The invisible callers, Castor and Pollux, had handsomely paid for their share in the panegyric by drawing Simonides away from the banquet just before the crash.”

Cicero

The bodies of the victims were so disfigured by the manner of death that they were rendered unidentifiable even by family and friends.  Simonides, however, found that he was able to recall where each person was seated around the table and in this way he identified each body.  This led Simonides to the realization that place and image were the keys to memory, and in this case, also a means of preserving identity through the calamity of death.  In Cicero’s words,

[Simonides] inferred that persons desiring to train [their memory] must select places and form mental images of the things they wish to remember and store those images in the places, so that the order of the places will preserve the order of the things, and the images of the things will denote the things themselves, and we shall employ the places and images respectively as a wax writing-tablet and the letters written on it.

Cicero is one of three classical sources on the principles of artificial memory that evolved in the ancient world as a component of rhetorical training.  The other two sources are Quintilian’s Institutio oratoria and the anonymous Ad Herennium.  It is through the Ad Herennium, mistakenly attributed to Cicero, that the art of memory migrates into Medieval culture where it is eventually assimilated into the field of ethics.  Cicero’s allusion to the wax-writing table, however, reminds us that discussion of memory in the ancient world was not limited to the rhetorical schools.  Memory as a block of wax upon which we make impressions is a metaphor attributed to Socrates in Plato’s Theaetetus where it appears as a gift of Mnemosyne, the mother of the muses:

Imagine, then, for the sake of argument, that our minds contain a block of wax, which in this or that individual may be larger or smaller, and composed of wax that is comparatively pure or muddy, and harder in some, softer in others, and sometimes of just the right consistency.

Let us call it the gift of the Muses’ mother, Memory, and say that whenever we wish to remember something we see or hear or conceive in our own minds, we hold this wax under the perceptions or ideas and imprint them on it as we might stamp the impressions of a seal ring.  Whatever is so imprinted we remember and know so long as the image remains; whatever is rubbed out or has not succeeded in leaving an impression we have forgotten and do not know.

Plato and Aristotle in Raphael's "School of Athens"

The Platonic understanding of memory was grounded in a metaphysic and epistemology which located the ability to apprehend truth in an act of recollection.  Plato believed that the highest forms of knowledge were not derived from sense experience, but were first apprehended by the soul in a pre-existent state and remain imprinted deep in a person’s memory.  Truth consists in matching the sensible experience of physical reality to the imprint of eternal Forms or Ideas whose images or imprints reside in memory.  Consequently the chief aim of education is the remembering of these Ideas and this aim is principally attained through “dialectical enquiry,” a process, modeled by Plato’s dialogs, by which a student may arrive at a remembering of the Ideas.

At this point, we should notice that the anteriority, or “pastness,” of the knowledge in question is, strictly speaking, incidental.  What is important is the presence of the absent Idea or Form.  It is to evoke the presence of this absence that remembering is deployed.  It is the presence of eternal Ideas that secures the apprehension of truth, goodness, or beauty in the present.  Locating the memory within the span of time past does not bear upon its value which rests in its being possessed as a model against which to measure experience.

Paul Ricoeur, in Memory, History, Forgetting, begins his consideration of the heritage of Greek reflections on memory with the following observation:

Socratic philosophy bequeathed to us two rival and complementary topoi on this subject, one Platonic, the other Aristotelian.  The first, centered on the theme of the eikōn [image], speaks of the present representation of an absent thing; it argues implicitly for enclosing the problematic of memory within that of imagination.  The second, centered on the theme of the representation of a thing formerly perceived, acquired, or learned, argues for including the problematic of the image within that of remembering.

As he goes on to note, from these two framings of the problematic of memory “we can never completely extricate ourselves.”

Reflecting for just a moment on the nature of our own memories it is not difficult to see why this might be the case.  If we remember our mother, for example, we may do so either by contemplating some idealized image of her in our mind’s eye or else by recollecting a moment from our shared past.  In both cases we may be said to be remembering our mother, but the memories differ along the Platonic/Aristotelian divide suggested by Ricoeur.  In the former case I remember her in a way that seeks her presence without reference to time past; in the latter, I remember her in a way that situates her chronologically in the past.

At this point, I’m sure it seems that we’ve wandered a bit from the art of memory and farther still from social networking sites.  There is a method to this madness, but demonstrating that will have to wait for the next post.  Already, I am pushing the limits of acceptable blog post length.

Looking forward to the next post in this series, then, here are the tasks that remain:

  • Exploring memory as an index of desire.
  • Setting the art of memory tradition, and Facebook, within Ricoeur’s schema.
  • Asking what difference all of this makes.