Social Media, Social Memory: Remembering with Facebook

I’m a casual Facebook user.  I have a profile, I’ve got friends, I occasionally check in.  I rarely post anything other than links to this blog (shameless self-promotion), I don’t post status updates, and I haven’t uploaded a picture in over a year.  Clearly, I’m not heavily invested, and I’ve posted more than a few critical remarks about Facebook’s hegemony and its consequences on this blog.  But recently I’ve been thinking about Facebook in relation to memory, memory being a recurring theme of late.

For example, in light of the research article I summarized yesterday, “Is Memory in the Brain? Remembering as Social Behavior”, one could view Facebook as a form of social remembering.  Rather than reminiscing in person, we have asynchronous reminiscences with friends, past and present, often centered on posted photographs.  We might even view tagging photos as a kind of social remembering, a collective curating of shared memories.  Often a very old photograph will be posted by one of a circle of friends, the other friends will be tagged, and a round of reminiscing will follow on the comments.

One other analogy that I’ve been toying with is Facebook as memory theater.  I’ve mentioned memory theaters in at least a couple of past posts (here and here), the basic idea is that one constructs an imagined space in the mind (similar to the work of the Architect in the film Inception, only you’re awake) and then populates the space with images that stand in for certain ideas, people, words, or whatever else you want to remember.  The theory is that we remember images and places better than we do abstract ideas or concepts.  During the Renaissance these mental constructs were sometimes externalized in built structures that housed all sorts of artifacts visually representing the store of human knowledge.

Perhaps it is a stretch, but Facebook seems to function in some respects as a kind of externalized memory theater.  Instead of storing speeches or knowledge of the world, it is used to store autobiographical memory.  The architecture of the application is the constructed space, and profiles are like the images kept in the places, each profile carrying with it by association a trove of particular memories.  Many people report that one of the joys of Facebook is reconnecting with an old friend from childhood.  While certainly some of these reconnections lead to renewed and sustained contact, most, I imagine, do not.  We exchange a message or two, we look over their life as it is now, and then we don’t really keep in touch any better than we used to.  But the memories have been activated, and now their profile takes its place in our memory theater, happily recalling those same memories whenever we like.  The profile is not the friend, of course; it simply becomes a placeholder for a particular set of memories.

Facebook taps into more than one aspect of our psychology.  I have often explained its appeal as a function of our desire to be noticed, to receive attention; and surely this is part of the mix. Lately Facebook’s role in the political sphere has been receiving a good deal of attention.  But it may be that its trade in our memories gives Facebook its uncanny persistence.  Increasingly we hear people taking issue with Facebook’s privacy protocols or otherwise complaining about the pressures of always-on social media.  Not too long ago, I noted the grumblings over Facebook’s bid to become the ambient background of the Internet and Zuckerberg’s disingenuous push for online “integrity.”  Recent studies have also drawn attention to the potentially negative effects of Facebook on psychological well-being, particularly for women.  But for all of this, most people struggle to kill their accounts permanently. Like a bad high school romance, we break up with Facebook, only to flirt and make up, and then break up again.

This begins to make sense when we realize that Facebook has become a prosthetic of our memory.  But it is not merely a prosthetic of memory in general (a simple list on a scrap of paper is that much); it is a prosthetic of our autobiographical memory.  It’s a part of our identity, and it is very difficult to kill off a part of one’s self.

One last thought, only a suggestive one at that: it is one thing to artificially condition one’s memory to store up vast amounts of information about the world or large chunks of poetry, it is quite another to artificially store up one’s autobiographical memory.  Our technology has made the storage of memory cheap and easy, but there is something to be said for forgetting.  The artificial extension of autobiographical memory involves us in some of the more complex regions of human psychology and personality.  We enter into the realm of mourning, catharsis, obsession, fantasy, and more.   We might consider as well that healing and forgetting very often go hand in hand.  In any case, we have a good deal to contemplate.

“They tell, and here is the enigma, that those consulting the oracle of Trophonios in Boeotia found there two springs and were supposed to drink from each, from the spring of memory and from the spring of forgetting.”  Jacques Derrida (Memoires for Paul de Man)

__________________________________________________________

If this post was of interest, you may also want to consider Social Media and the Arts of Memory.

No, I Don’t Want To Be A Medieval Peasant

Admittedly, most days I tend toward critique, not praise, of digital media and technology.  Aware of my proclivity, I try to compensate and hope that I strike a reasonable balance.  I’m sure everyone thinks they are successful in their efforts to achieve balanced views, so I’m probably the last person to judge how well I do or don’t.  That said, I do get into a fair number of discussions about technology in a variety of settings, and, more often than not, I’m raising certain questions and concerns, urging discernment, moderation, et cetera, et cetera, et cetera … (The last bit best read in Yul Brynner tones.)  What this usually is taken to mean, judging from typical responses, is that I would like to be Amish, live without electricity, farm my own food, and wear black homespun clothes in the heat of the summer sun.   Okay, so maybe there is a tiny part of me that wouldn’t mind trying that out for a while, but generally speaking this actually isn’t my goal, and much less, the point.

What the reaction reveals, however, is that we tend to think in binary oppositions — this or that, either/or — and that the binary opposite of contemporary technology, in many people’s minds, is some past technological state, for some reason often associated with 19th-century religious sectarianism or medieval Europe.  So it seems that, on the assumption of this binary opposition, any critique of present technology necessarily groups you with either the Amish or the “bring out your dead” crowd.  In fact, there is a good deal of wisdom residing in the past and in intentional communities, but this is beside the more narrow point I’d like to make here.

Binary oppositions are often inherently unstable or else false dilemmas.  But even if we were to set up a binary opposition with present-day technology being one member of the pair, who says that some past technological state must be the other member?  We could just as easily imagine the other member being some ideal future state.  I don’t mean this in some strong utopian sense.  The idealized future is more dangerous than the idealized past.  However, most of us have certain ideas about what a marginally better world might look like, even if only on the very limited scale of our own personal lives.   So why not make this desire for a better way, which at its best is informed by the past, the other of the present ecology of technology?  In this light, we might consider reasonable critiques of our technologies not as interventions in favor of an unrecoverable past, but rather as steps toward a better, attainable future.

It may be worth remembering that one very famous critic of technology, Marshall McLuhan, believed  that, “There is absolutely no inevitability as long as there is a willingness to contemplate what is happening” (The Medium is the Massage).  Sometimes, however, it is precisely the contemplation part that we struggle with.  Perhaps because in technology, as in politics, binary oppositions tend to undermine, rather than encourage, thought.

Mark Zuckerberg, Moral Philosopher of Identity

In a recent blog post, Steve Cheney bemoans the ongoing progress that Facebook is making toward becoming the ambient background of the Internet.  Specifically, he is concerned that Facebook is killing your authenticity:

… now Facebook’s sheer scale is pushing it in a new direction, one that encroaches on your authenticity.

Facebook is no longer a social network. They stopped being one long before the movie. Facebook is really a huge broadcast platform. Everything that happens between its walls is one degree away from being public, one massive auditorium filled with everyone you’ve ever met, most of whom you haven’t seen or spoken to in years.

Cheney’s post was triggered by the recent adoption of Facebook commenting by a number of large websites, a move that builds on the earlier integration of the “Like” button into almost every commercial, news, and entertainment site of note as part of Facebook’s “Open Graph” platform.  The trajectory here seems fairly clear.  Facebook is forging a global internet identity for you, one that it owns, of course, and with which it stands to make a fair bit of money.

Helpfully, Cheney did not frame his complaint within a denial of the basically social nature of human beings along the lines suggested by Andrew Keen not too long ago.  On the contrary, Cheney acknowledges our social impulses and is concerned that one singular online identity will not do justice to the complexity of human personality and truly social interaction.  One indiscriminate identity will result in one inauthentic and shallow identity that will inhibit rather than promote meaningful sociability.

“A George divided against itself cannot stand!”

The George in question is, of course, the character of George Costanza on Seinfeld.  In one of the more memorable exchanges from the remarkably memorable series, George explains what would happen if Relationship George were to come into contact with Independent George – Independent George would be no more.  We can relate to George in this situation because most of us maintain a handful of different personas that we cycle through as we navigate our way through life.  There are elements of our personality we reveal in some settings that we do not disclose in others; we present some aspects of our selves to certain people and not to others.  When for some reason these roles come into contact with one another it is possible that a little tension and confusion may ensue.  No news here.

In the early days of the Internet, when a kind of felicitous anarchy seemed to reign, it was fashionable to view the anonymity of the web as a playhouse of identity.  Individuals were able to try on and experiment with all sorts of identities — for better or for ill —  with relative safety and little worry of being found out.  It would have been unthinkable that one single and fully transparent identity would mark us across our Internet experience.

But that is exactly the trajectory we have been on for the last several years, and this increases the odds of our many worlds colliding, occasionally leading us to experience the kind of existential crisis that George’s histrionics embodied.  When our worlds collide, we too begin to sense that we might be losing our independent self, or the ability to control what people see and hear of us, control of what we might call our public identities.  We have a more difficult time calibrating our public personas to fit specific audiences and tasks.

Take for example the awkwardness and angst that arose when parents began joining Facebook and attempted to “friend” their children.  A Washington Post story on the topic from September 2008 cited protest groups formed in response with less than subtle names such as “What Happens in College Stays in College: Keep Parents Off Facebook!”  The author noted that it might seem odd that a “generation accustomed to sharing everything online” and with little or no apparent awareness of the distinction between private and public becomes apoplectic when merely two more people gain access to their already remarkably public personas.  But this misses the point.  What was at stake, of course, was control over who knew what.  The students experienced exactly what George did – their worlds collided and their anxiety reflected the increasing difficulty of controlling their public identity.

The ubiquity of one dominant social media platform makes it harder to exercise effective control over the presentation of our identities.  Mark Zuckerberg, moral philosopher that he is, rather conveniently believes,

You have one identity… The days of you having a different image for your work friends or co-workers and for the other people you know are probably coming to an end pretty quickly… Having two identities for yourself is an example of a lack of integrity.

Facebook’s near monopoly on social networking has reined in the proliferation of profiles and, in fact, studies suggest that a Facebook profile tracks fairly closely to the truth about a person.  But there is still the question of who sees that more or less truthful public approximation of our personality and how much they see.  Furthermore, should Facebook, or any social media site, be in the business of compelling people to live with integrity, particularly while profiting from the enforcement of this integrity?  More importantly, is it really integrity that is being forced upon us?  Or, to put it another way, does the maintenance of various personas necessarily entail a morally problematic lack of integrity? Is duplicity the only reason why we would withhold some aspect of our personality in certain circumstances?

Authentic and meaningful relationships typically depend upon the natural evolution of interpersonal trust and confidence.  Demanding immediate and equal transparency across the board works against the natural progression of social interaction.  Pace Mr. Zuckerberg, there are good reasons why we don’t reveal ourselves in equal measure to everyone and in all circumstances that have nothing to do with a lack of integrity.

The (Un)Naturalness of Privacy

Andrew Keen is not an Internet enthusiast, at least not since the emergence of Web 2.0.  That much has been clear since his 2007 book The Cult of the Amateur: How Today’s Internet is Killing Our Culture and his 2006 essay equating Web 2.0 with Marxism in The Weekly Standard, a publication in which such an equation is less than flattering.  More recently, Keen has taken on all things social in a Wired UK article, “Your Life Torn Open: Sharing is a Trap.”

Now, I’m all for a good critique and lampoon of the excesses of social media, really, but Keen may have allowed his disdain to get the better of him and veered into unwarranted, misanthropic excess.  From the closing paragraph:

Today’s digital social network is a trap. Today’s cult of the social, peddled by an unholy alliance of Silicon Valley entrepreneurs and communitarian idealists, is rooted in a misunderstanding of the human condition. The truth is that we aren’t naturally social beings. Instead, as Vermeer reminds us in The Woman in Blue, human happiness is really about being left alone. …. What if the digital revolution, because of its disregard for the right of individual privacy, becomes a new dark ages? And what if all that is left of individual privacy by the end of the 21st century exists in museums alongside Vermeer’s Woman in Blue? Then what?

“Human happiness is really about being left alone” and “The truth is that we aren’t naturally social beings”? Striking statements, and almost certainly wrong. It seems rather that the vast majority of human beings long for, and sometimes despair of ever finding, meaningful and enduring relationships with other human beings. That human flourishing is conditioned on the right balance of the private and the social, individuality and relationship, seems closer to the mark. And while I suppose one could be raised by wolves in legends and stories, I’d like to know how infants would survive biologically in isolation from other human beings.  On this count, better to stick with Aristotle.  The family, the clan, the tribe, the city — these are closer to the ‘natural’ units of human existence.

The most ironic aspect of these claims is Keen’s use of Vermeer’s “Woman in Blue” or, more precisely, “Woman in Blue Reading a Letter,” to illustrate them.  That she is reading a letter is germane to the point at issue here, which is the naturalness of privacy.  Contrary to Keen’s assertion of the natural primacy of privacy, it is closer to the truth to correlate privacy with literacy, particularly silent reading (which has not always been the norm), and the advent of printing.  Changing socio-economic conditions also factor into the rise of modern notions of privacy and the individual, notions formalized by Locke and Hobbes, who enshrined the atomized individual as the foundation of society, notably with founding myths that are entirely ahistorical.

In The Vineyard of the Text, Ivan Illich, citing George Steiner, suggests this mutual complicity of reading and privacy:

According to Steiner, to belong to ‘the age of the book’ meant to own the means of reading.  The book was a domestic object; it was accessible at will for re-reading.  The age presupposed private space and the recognition of the right to periods of silence, as well as the existence of echo-chambers such as journals, academies, or coffee circles.

Likewise, Walter Ong, drawing on Eric Havelock, explains that

By separating the knower from the known, writing makes possible increasingly articulate introspectivity, opening the psyche as never before not only to the external objective world quite distinct from itself but also to the interior self against whom the objective world is set.

Privacy emerges from the dynamics of literacy.  The more widespread literacy becomes, as for example with the printing press, the more widespread and normalized the modern sense of privacy becomes.  What Keen is bemoaning is the collapse of the experience of privacy wrought by print culture.  I do think there is something to mourn there, but to speak of its “naturalness” misconstrues the situation and seems to beget a rather sociopathic view of human nature.

Finally, it is also telling that Vermeer’s woman is reading a letter.  Letters, after all, are a social genre; letter writing is a form of social life.  To be sure, it is a very different form of social life than what the social web offers, but it is social.  And were we not social beings we would not, as Auden puts it, “count some days and long for certain letters.” The Woman in Blue Reading a Letter reminds us that privacy is bound to literate culture and human beings are bound to one another.

_____________________________________________________________

Updates:  To reinforce that it is a balance we are after, also consider this article in yesterday’s Boston Globe, “The Power of Lonely.” Excerpt:

But an emerging body of research is suggesting that spending time alone, if done right, can be good for us — that certain tasks and thought processes are best carried out without anyone else around, and that even the most socially motivated among us should regularly be taking time to ourselves if we want to have fully developed personalities, and be capable of focus and creative thinking.

My attention has also been drawn to an upcoming documentary for which Andrew Keen was interviewed.  PressPausePlay will be premiering at South By Southwest and addresses the issues of creativity and art in the digital world.  Here’s  a preview featuring Andrew Keen:

“It’s like, you know … the end of print disciplined speech?”

In “What Happens in Vagueness Stays in Vagueness,” Clark Whelton takes aim at what he calls “the linguistic virus that infected spoken language in the late twentieth century” — vagueness.  Here’s the opening example:

I recently watched a television program in which a woman described a baby squirrel that she had found in her yard. “And he was like, you know, ‘Helloooo, what are you looking at?’ and stuff, and I’m like, you know, ‘Can I, like, pick you up?,’ and he goes, like, ‘Brrrp brrrp brrrp,’ and I’m like, you know, ‘Whoa, that is so wow!’ ” She rambled on, speaking in self-quotations, sound effects, and other vocabulary substitutes, punctuating her sentences with facial tics and lateral eye shifts. All the while, however, she never said anything specific about her encounter with the squirrel.

In the mid-1980s, Mr. Whelton began noticing increasingly aberrant speech patterns in prospective interns for New York City mayor Edward Koch’s speechwriting staff.  “Like,” “you know,” “like, you know,” along with non-committal interrogative tones, particularly distressed Whelton.  He goes on to add,

Undergraduates … seemed to be shifting the burden of communication from speaker to listener. Ambiguity, evasion, and body language, such as air quotes—using fingers as quotation marks to indicate clichés—were transforming college English into a coded sign language in which speakers worked hard to avoid saying anything definite.

Whelton comes closest to the true nature of the situation here, but I think there is an important consideration that is missing.  I’m inclined to think that the sorts of language patterns Whelton criticizes reflect a reversion to language environments that are more oral in nature than they are literate (a situation that Walter Ong called secondary orality).

The cadences and syntax of “high,” “correct,” “proper,” etc. English are a product of writing in general and intensified by print; they are not a necessary function of spoken language itself, which is ordinarily much more chaotic.  Writing is removed from the holistic context that helps give face-to-face communication its meaning.  To compensate, writing must work hard to achieve clarity and precision, since the words themselves bear the burden of conveying the whole of the meaning.  Oral communication can tolerate vagueness in words and syntax because it can rely on intonation, volume, inflection, and other non-verbal cues to supply meaning. As an experiment, try transcribing any one of your countless verbal exchanges and note the sometimes startling difference between spoken language and written language.

Where print monopolizes communication, the patterns of written speech begin to discipline spoken language. “Vague” talk then may be characteristic of those whose speech patterns, because they have been formed in a world in which print’s monopoly has been broken, have not been so disciplined by print literacy.

Interestingly, new media is often quite “print-ish,” that is, text isolated from sound — emails, text messaging, Twitter, blogs, Facebook (although with images there) — and this has required the invention of a system of signs aimed at taming the inherent “vagueness” of written communication that is restricted in length and thus not given the freedom to compensate for the loss of non-verbal and auditory cues with precise syntax and copious language.  : )