Offloaded Memory and Its Discontents (or, Why Life Isn’t a Game of Jeopardy)

It is not surprising to learn that we are taking the time to remember less and less, given the ubiquitous presence of the Internet and the consequent ability to “Google it” when you need it, whatever “it” happens to be.  A new study in the journal Science affirms what most of us have already witnessed or experienced.  According to one report on the study:

Columbia University psychologist Betsy Sparrow and her colleagues conducted a series of experiments, in which they found that people are less likely to remember information when they are aware of its availability on online search engines …

“Our brains rely on the Internet for memory in much the same way they rely on the memory of a friend, family member or co-worker. We remember less through knowing information itself than by knowing where the information can be found,” said Sparrow.

If you sense that something of significance is lost in the transition from internal memory to prosthetic memory, you may also find that it takes a little work to pinpoint and articulate that loss.  Nicholas Carr, commenting on the same study, also has reservations, and he puts them this way:

If a fact stored externally were the same as a memory of that fact stored in our mind, then the loss of internal memory wouldn’t much matter. But external storage and biological memory are not the same thing. When we form, or “consolidate,” a personal memory, we also form associations between that memory and other memories that are unique to ourselves and also indispensable to the development of deep, conceptual knowledge. The associations, moreover, continue to change with time, as we learn more and experience more. As Emerson understood, the essence of personal memory is not the discrete facts or experiences we store in our mind but “the cohesion” which ties all those facts and experiences together. What is the self but the unique pattern of that cohesion?

I’ve articulated some questions about off-loaded memory and the formation of identity in the past few months as well, along with thoughts on the relationship between internal memory and creativity.  Here I want to draw on an observation by Owen Barfield in Saving the Appearances.  Barfield is writing about perception when he notes:

I do not perceive any thing with my sense-organs alone, but with a great part of my whole human being. Thus, I may say, loosely, that I ‘hear a thrush singing’. But in strict truth all that I ever merely ‘hear’ — all that I ever hear simply by virtue of having ears — is sound. When I ‘hear a thrush singing’, I am hearing, not with my ears alone, but with all sorts of other things like  mental habits, memory, imagination, feeling and (to the extent at least that the act of attention involves it) will.  Of a man who merely heard in the first sense, it could meaningfully be said that ‘having ears’ (i. e. not being deaf) ‘he heard not’.

Barfield reminds us that our perception of reality is never merely a function of our senses.  Our perception, which in some respects is to say our interpretation of reality into meaningful experience, is grounded in, among other things, our memory, and certainly not our offloaded memory.  Offloaded memory is not ready-to-hand for the mind to use in its remarkable work of meaning-making.  Perception in this sense is impoverished by our willingness to offload what we might otherwise have internalized.

All of this leads me to ask, What assumptions are at play that make it immediately plausible for so many to believe that we can move from internalized memory to externalized memory without remainder?  It would seem, at least, that the ground was prepared by an earlier reduction of knowledge to information or data.  Only when we view knowledge as the mere aggregation of discrete bits of data can we believe that it makes little difference whether that data is stored in the mind or in a database.

We seem to be approaching knowledge as if life were a game of Jeopardy, which is played well merely by being able to access trivia at random.  What is lost is the associational dimension of knowledge, which constructs meaning and understanding by relating one thing to another rather than merely by aggregating data.  This form of knowledge, which we might call metaphorical or analogical, allows us to experience life with the ability to “understand in light of,” to perceive through a rich store of knowledge and experience that allows us to see and make connections which richly texture and layer our experience of reality.

Augustine famously described memory as a vast storehouse in which there are treasures innumerable. It is our experience of life that is enriched by drawing on the treasures deposited into the storehouse of memory.

____________________________________________________

Update:  Also see “The Extended Mind — How Google Affects Our Memories” and Jonah Lehrer’s post on the study.

Virtual Communities, Anonymity, and the Risks of Self-Disclosure

In the beginning Cold War-era engineers created the Internet and then utopian, hippy enthusiasts rescued the Internet and gave it to the world. Regrettably, spammers, venture capitalists, marketers, and corporations entered into the garden of digital delights and a communitarian paradise devolved into a virtual mall.

This, more or less, is the storyline of Evgeny Morozov’s very brief history of the Internet, “Two Decades of the Web: A Utopia No More” in Prospect. The author of The Net Delusion: The Dark Side of Internet Freedom, in case it wasn’t clear from the two titles, is not exactly optimistic about the direction the Internet has taken. The plot of his short overview of Internet history moves from early promise to eventual decline; it is a tale of utopian hopes disciplined by unfortunate realities.

Morozov’s history takes the idea of “virtual community” as its theme and features a set of Internet “cheerleaders” – Stewart Brand, Kevin Kelly, John Perry Barlow, and the Wired crowd – as its tragic protagonists. Tragic because their high-minded, lofty ideals were undercut, in Morozov’s telling, by an accompanying naiveté that left Paradise unguarded against the corporate snakes. “These men,” according to Morozov, “emphasised the importance of community and shared experiences; they viewed humans as essentially good, influenced by rational deliberation, and tending towards co-operation. Anti-Hobbesian at heart, they viewed the state and its institutions as an obstacle to be overcome—and what better way to transcend them than via cyberspace?”

This anarchist/libertarian proclivity exposed the online community to dangers trivial and grave:

Perhaps the mismatch between digital ideals and reality can be ascribed to the naivety of the technology pundits. But the real problem was that the internet’s early visionaries never translated their aspirations for a shared cyberspace into a set of concrete principles on which online regulation could be constructed. It’s as if they wanted to build an exemplary city on the hill, but never bothered to spell out how to keep it exemplary once it started growing.

The law of entropy took over from there. The allusion to a city on a hill recalls the Puritan experiment in pious self-government that never quite managed to pass on its vision to the next generation. Pursuing the analogy, Morozov fills the role of the preachers who evolved the jeremiad – a genre of sermon identified by historian Perry Miller that denounced the community’s departure from its founding principles and called for repentance and renewal. Morozov’s commentary fits the genre neatly, and that is not at all to detract from the value of his critique.

The connection to the Puritans is worth pursuing even further, but for just a moment we have to see beyond the typical tropes with which they are associated, those that lend color to the term puritanical. The Puritans loom large in most tellings of early American history and their influence has long been both celebrated and lamented. In one respect, though, they are out of step with the subsequent evolution of American culture. The rugged individualism that came to dominate the American psyche would have been an unwelcome anomaly within the deeply communitarian ethos of the early Puritan settlers. John Winthrop’s sermon aboard the Arbella in which we find the “city on a hill” imagery is fundamentally a communitarian tract urging self-restraint, self-sacrifice, mercy, justice, and generosity for the good of the community. Shared sacrifice, shared risk, shared resources, shared lives – out of such was community forged.

With this in mind we can begin to sound out a tension within Morozov’s account. In his view, personalization and the loss of privacy are among the chief temptations that ultimately lead to the fall of the digital community. So, for example, he notes with displeasure: “The logical end of this ever-increasing personalisation is of each user having his or her own online experience. This is a far cry from the early vision of the internet as a communal space.” And further on we read that, “For many internet users, empowerment was an illusion. They may think they enjoy free access to cool services, but in reality, they are paying for that access with their privacy.”

Yet, certain constructions of privacy and an aversion to personalization sit uneasily alongside a communitarian ideal. Pressed to their extremes, privacy and depersonalization converge in anonymity, and it was, in fact, the anonymity of the early Internet that thrilled theorists with the possibilities for experimentation with identity and its construction. Unfortunately, the early digital communitarian ideal was tied to this vision of privacy/anonymity, and you are not likely to have anything like a community in any strong sense on those terms, at least not in a way that answers to the human social impulse. Social media has provided something like an experience of community precisely because it has been tied to personalization (all of its attendant problems notwithstanding).

Maybe the real problem with the early internet was its commitment to anonymity. Personalization, not regulation, may have curbed the sorts of behaviors and developments that Morozov laments. The unbridled pursuit of self-interest which is always the enemy of community, virtual or otherwise, is abetted by the lack of accountability engendered by anonymity. This is a lesson at least as old as the Ring of Gyges story told by Plato. Community, and the fully human life it enables, is on the other hand built upon the risk of self-disclosure. A point not lost on Hannah Arendt when she noted that, “To live an entirely private life means above all to be deprived of things essential to a truly human life: to be deprived of the reality that comes from being seen and heard by others.” And, she later adds, “Action without a name, a ‘who’ attached to it, is meaningless …”

Admittedly, community is an amorphous and abstract concept and the pursuit of its virtual analogue may be finally incoherent. What’s more, anonymity is in certain circumstances clearly desirable, and commodified personalization is undoubtedly problematic. Finally, there are clearly important and necessary forms of privacy which must be protected. With these qualifications noted, it remains the case that any kind of online community, if it is to serve as a mediating structure that allows for social interaction on a scale somewhere between the anonymity of total seclusion and that of mass society, needs to be built on some degree of measured self-disclosure and consistency of identity.

This entails all sorts of risks and even sacrifices, which means that what we may need is less of a jeremiad and more Winthrop-esque vision setting.

_________________________________________________________

Article first published as Virtual Communities, Anonymity, and the Risks of Self-Disclosure on Blogcritics.

The Triumph of the Social

“Social” is big, in case you missed it.

The most noticeable and  significant development in our media environment over the last decade or so has been the emergence of social elements across digital media platforms and the subsequent migration of social media into traditional media fields.  We might, to borrow a phrase, call this the triumph of the social.   Whatever we call it, this development marks a significant departure from the trend toward individualism that characterized the modern era.

Modernity, according to the standard storyline, was characterized by the individual’s liberation from the constraints of place, tradition, institutions, and, to some degree, biology.  The Renaissance, the Reformation, the Enlightenment — each is an episode in the rise of the individual.  Protestantism, democracy, capitalism — each features the individual and his soul, his rights, his property prominently.  From this angle, postmodernity is, in fact, hyper-modernity; it is not a break with the trajectory of modernity with respect to identity, it is its consummation.   The individual is liberated even from the notion of the persistent or essential self.  Identity, according to the usual suspect theorists, is constructed all the way down (except, of course, that it is not).

I can reasonably follow this story through to the early Internet age, but then something changes. Social media reasserts the social self.  This could be read as a further development of the individualist trajectory – the liberated individual is simply given a larger stage from which to pursue the project of creating their identity free of constraints.  If so, then what we might analyze is the new mode of identity construction.  For example, we might note that we now perform our identity by sharing.  The “Like” button becomes the instrument of identity construction.  The bands, television shows, websites, products, news stories, movies, clothes, cars, companies, causes, etc. that we “Like” signal to our social network who we “are”.

Whatever truth there may be to that, and there is some, there is one other way to read the rise of the social.  In a recent blog post, sociologist Peter Berger offered some reflections on the July 2011 issue of The Annals of the American Academy of Political and Social Science devoted to the topic of “Patrimonial Power in the Modern World.”  In Berger’s summary, patrimonial power “is power on the basis of kinship and other patron-client relationships. It is the most common form of political authority in traditional societies before the rise of centralized states and empires. Such authority is exercised by way of personal loyalties rather than formal rules. The tribal chief is the prototypical leader in patrimonial regimes.”  Patrimonial power functioned within traditional societies grounded in personal ties and relationships.

Modernity, on the other hand, is characterized by other forms of power and authority.  Berger continues:

The counter-type is the bureaucrat … In a patrimonial system one trusts the chief because he belongs to one’s tribe and embodies its tradition. In what Weber called a “legal-rational” system one trusts the bureaucrat because he occupies an office established by proper procedures; indeed one trusts these procedures rather than the particular individual they have placed in the office.

Bureaucracy, however, not only abstracts power, it abstracts the personal.  In a bureaucracy the person is reduced to a number or an algorithm.  So one of the ironies of modernity is that the rise of the individual was accompanied by the rise of institutions that de-faced the individual.

What’s more, the rise of the individual throughout the modern period was coupled with the simultaneous rise of modern notions of privacy.  The extreme end of the privacy spectrum is complete anonymity, and here too the individual is de-faced, left without personal connection.

Just to be clear, this move toward individualism and privacy was not all bad.  In many respects it was a healthy corrective.  But the pendulum may have swung too far, and Berger does a nice job of explaining where the problem lies:

Robert Musil, in his great novel The Man Without Qualities, recounts an incident when Ulrich, the central character, is arrested and processed in a police station. He experiences what he calls a “statistical disaggregation” of his person. He is reduced to the minimal characteristics relevant to the police investigation, while all the characteristics essential to his self-esteem are ignored. In one way or another, we experience something like this depersonalization in many situations. We are abstract objects of the juridical system in court, abstract patients in a hospital,  abstract consumers in the marketplace. Everything we cherish most about ourselves is strictly irrelevant—our intellectual achievements, our sense of humor or capacity for affection, not to mention the prerogatives of age. In such situations we instinctively reach out for “tribal” connections—for someone who knows who we are, with whom we share an ethnic or religious identity, or even someone who laughs at a joke we tell: in sum, someone who recognizes us in a personal way.

Berger goes on to recall the term he coined with Richard Neuhaus, “mediating structures,” to describe neighborhood, family, church and voluntary associations.  These mediating structures buffered the individual from the impersonal, bureaucratic power of the state, but these structures have themselves been severely compromised, leaving the individual isolated and disconnected.  This state of affairs, along with the presumption that we are indeed political, which is to say social, animals helps explain, in part at least, the triumph of the social.  Social media functions as a mediating structure, a realm in which we are addressed by name and find our individual self publicly acknowledged.

This is not the whole story, of course. Social media in its own way also reduces us to numbers or algorithms, and it cannot provide all that traditional mediating structures, at their best, are able to offer.  There are also temptations to narcissism and worse.  But, the risks notwithstanding, social media owes its success to the way it addresses a fundamental dimension of being human.

______________________________________________________

Related post:  The (Un)Naturalness of Privacy.

There Can Be Only One: Google+ Takes On Facebook

You are most likely not one of the favored few who have been invited to take Google’s new social networking platform out for a spin, and neither am I, but now we get a glimpse of what Google has been up to. When it does go live, Google+ will open a new front in the battle against Facebook, and one that appears more promising than the ill-fated Google Buzz.

The Google+ experience is in large measure reminiscent of Facebook with at least one major exception:  Circles.   Facebook’s glaring weakness is its insistence that you indiscriminately present the same persona to every one of your “friends,” a list which may include your best friend from childhood, your ex-girlfriend, your boss, your co-workers, your grandmother, and that kid that lived down the street when you were growing up.  Amusingly, Zuckerberg turns moral philosopher on this point and declares that maintaining more than one online identity signals a lack of integrity.  Google+ more sensibly assumes that not all human relationships are created equal and that our social media experience ought to acknowledge that reality.  It allows you to create Circles into which you can drag and drop names from your contact list.  Whenever you post a picture or a link or a comment, you may designate which Circles will be able to see what you have posted. In other words, it lets you effectively manage the presentation of your self to your multifaceted social media audience.  Google+ appears to have thus solved Facebook’s George Costanza problem:  colliding worlds.
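At bottom, the Circles model described above amounts to simple set arithmetic: contacts are sorted into named groups, and each post is visible only to the union of the groups it is shared with. Here is a minimal sketch of that idea in Python; the class and method names are hypothetical illustrations, not Google’s actual implementation or API.

```python
# A minimal sketch of the Circles visibility model: circles are named
# sets of contacts, and a post's audience is the union of the circles
# it is shared with. Hypothetical names, for illustration only.

class Profile:
    def __init__(self):
        self.circles = {}  # circle name -> set of contact names

    def add_to_circle(self, circle, contact):
        self.circles.setdefault(circle, set()).add(contact)

    def audience(self, *circle_names):
        """Union of contacts across the circles a post is shared with."""
        seen = set()
        for name in circle_names:
            seen |= self.circles.get(name, set())
        return seen

me = Profile()
me.add_to_circle("Friends", "childhood friend")
me.add_to_circle("Work", "boss")
me.add_to_circle("Family", "grandmother")

# A post shared only with "Friends" never reaches the boss:
print("boss" in me.audience("Friends"))  # False
print("boss" in me.audience("Work"))     # True
```

The point of the sketch is the asymmetry it enforces: worlds collide on Facebook because every post implicitly has one audience, whereas here the audience is chosen per post.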

Facebook has gestured in this direction with Groups and Friend lists, but this remains an awkward experience, perhaps because it is at odds with the logic at the core of Facebook’s DNA.  Google+, having taken note of the rumblings of discontent with Facebook’s at times cavalier attitude toward privacy,  also allows users to permanently delete their information from Google’s servers and otherwise presents a more privacy-friendly front.

Even with these features aimed at exposing Facebook’s weaknesses and recent news about chinks in Facebook’s armor, Google+ is not expected to challenge Facebook’s social media supremacy.  Inertia is the main obstacle to the success of Google+.  Many users have committed an immense amount of data to their Facebook profiles and Facebook has worked hard to integrate itself into the whole online experience of its users.  Additionally, Facebook has more or less become a memory archive for many of its users and we don’t easily part with our memory.  Most significantly, perhaps, Google+ starts from a position of relative weakness as far as social media platforms are concerned — it has few users.  Most people will, for a long time to come, more readily find those they know on Facebook.

That said, Facebook’s early success against Myspace was predicated on a certain exclusivity.  It may be that an early disadvantage — relatively few members — may present itself as an important advantage in the eyes of enough people to generate momentum for Google+.  It is also hard to tell how many would-be social media users have been kept at bay by Facebook’s shortcoming and will now venture into social media waters given the refinements offered by Google+.  Casual Facebook users may also find it relatively painless to make a move.

Hard to tell from here, as is most of the future, but I wouldn’t be too surprised if Google+ significantly eroded Facebook’s base. Despite my Highlander-esque title, the most likely outcome may be that both platforms co-exist by appealing to different sets of sensibilities, priorities, and expectations.

Internet Pleasures

Brain science is an endlessly fascinating field.  Each day, it seems,  a new neurological study is published revealing a link between this or that activity and this or that region of the brain, or that a certain neurotransmitter is related to the regulation of a certain behavior, and so on.  Yesterday’s encounter with the wonders of neurology came while listening to David Linden, professor of neuroscience at the Johns Hopkins University School of Medicine, being interviewed on NPR.  Linden’s new book, The Compass of Pleasure: How Our Brains Make Fatty Foods, Orgasm, Exercise, Marijuana, Generosity, Vodka, Learning, and Gambling Feel So Good, as the inelegant subtitle more than suggests, explores the role of the brain in the experience of pleasure.

Much of the interview focuses on a discussion of the neurology of addiction leading Linden to warn,

“Any one of us could be an addict at any time,” Linden says. “Addiction is not fundamentally a moral failing — it’s not a disease of weak-willed losers. When you look at the biology, the only model of addiction that makes sense is a disease-based model, and the only attitude towards addicts that makes sense is one of compassion.”

Initially, I was struck by two considerations after listening to the interview, both relating to the practical consequences of the science Linden discussed.  First, how oddly Aristotelian all the practical considerations come out sounding:  virtues and vices, habits, and moderation.   Second, how little difference this knowledge made for Linden in his own lived experience.  Here is the very last exchange from the interview:

NPR: Since you have studied pleasure and the pleasure circuitry of the brain, has that affected your own relationship with pleasure and the things that you seek or try not to get pleasure from?

Linden: Well, I try deeply not to let it do that.  I certainly — when I’m enjoying a glass of wine I don’t want to be thinking about dopamine levels and, for the most part, fortunately I have been able to avoid doing that. I’m blessed with not having a particularly addictive personality — although I’m a bit of a hedonist — so it hasn’t actually made too much of an impact on my own life.

This is a rather jarring note on which to wrap up the interview.  I’ve ordinarily been one to subscribe to A.E. Housman’s line, “All Human Knowledge is precious whether or not it serves the slightest human use.”  And mostly, I would still want to defend something like that claim.  Yet, there is something peculiar about our coming to know more about the biological and neurological base of human life, purportedly the real stuff on which all human life and action rests, only to find that for an ordinary, healthy adult steeped in this knowledge, it makes not much of a difference at all, and, in fact, that he consciously tries to disassociate his knowledge from his experience.  This bears more reflection, but there was one final thought, more directly related to the usual themes on this blog that I wanted to note.

Understanding the Internet’s personal and social consequences involves venturing into the territory mapped out by Linden and others in his field.  Pleasure of some sort — whether benign,  problematic, or illicit — is involved in our daily interactions with the Internet.  If there is a certain compulsiveness to our online experience, then it is because our internet experience shares in an economy of desire, pleasure, and cycles of stimulation and diminishing return that potentially lead  to addictive behavior.

We know that society tolerates certain addictive behaviors more than others, sometimes in seemingly arbitrary fashion.  Internet addiction may carry only a slight social stigma if any at all;  one is tempted rather to conclude that it carries a certain social cachet.  Whether socially acceptable or not, compulsive (or addictive, take your pick) Internet use does appear to have certifiably negative physical consequences in the brain.  A study just published in PLoS ONE suggests that heavy Internet use, particularly online gaming, leads to significant alterations in brain structure with detrimental consequences for cognitive function.  You can read more about the study here, here, and here.

Perhaps not surprisingly, the first of those three articles concludes its report with an appeal to the ancient Roman writer Petronius: “Moderation in all things, including moderation.”  I’m not sure if the writer meant to endorse Petronius’ playful, perhaps satirical tone; more likely it was intended as a straightforward prescription of moderation.