The Protest Will Be Pervasively Documented

The picture below was taken Wednesday night by Ta-Nehisi Coates, a blogger at The Atlantic. The picture depicts one slice of the Million Hoodie March for Trayvon Martin that took place in New York City. It is a parable of our age.

Image by Ta-Nehisi Coates

To begin with, as Coates put it in a previous tweet, “Crazy just a week ago this was just a few bloggers reporters and activists.” In the age of digital/social media, things move quickly. A great deal, of course, has already been written on social media and protests or even revolution, particularly with reference to the Arab Spring. But that is not what I want to focus on here.

What struck me in this image was the double-mirror-like effect of a photograph of people photographing an event. In short, we live in an age of pervasive documentation. What difference does this make? That’s the question.

What difference does it make for the marchers? What difference does it make for the present spectators? What difference does it make for those who experience (if that’s the right word) the event through its pervasive documentation? Thorough answers to those questions would probably yield book-length discussions. In the space of a blog post, I’ll offer a few tentative considerations that suggest themselves to me.

First, of course, the march becomes a performance. Naturally, all such protests and marches have always been performative; that is their nature. So perhaps it is best to say that they are performative in a different key. Or, maybe, that all such actions were in the past symbolic actions and now they are symbolic actions and something more. What is that something more, though? I’m not sure; I’m not a student of protest movements. But I would venture to say the solidity of the record yielded by pervasive documentation foregrounds the individual dimension of the action. The gaze of the camera hails the participant in their potentially distinguishable individuality.

In other words, if the march was only being witnessed and not documented, as a participant the experience would feel more collective. The self is subordinated to the crowd in its collective symbolic gesture. When the symbolic action is being pervasively documented, the self then comes to the fore because my image, my face now becomes potentially distinguishable when the documented record becomes (more) permanent and public.

This is, I should make clear, entirely speculative on my part.

Secondly, for the witnesses present at the scene, those seen documenting the event in Coates’ photograph, the act of documentation changes the experience of witnessing. The witness becomes a documenter, and the two are not equivalent. The consequence, it would seem, is that the event is objectified by the process of documentation. It is no longer an event in whose thrall I am; it is now an object to me that I seek to capture. This has the effect of stripping the symbolic action of its power, to some degree. The witness is in some sense in the event, while the documenter stands outside of it. I would also suggest that the documenting stance is in some sense a rationalist, or rationalizing, stance. It creates a different mode of experience — removed, quasi-analytic, detached.

At the same time, however, it may also yield a heightened sense of participation of a different kind. To tweet a picture, for example, or post one to Facebook adds to the totality of the event. The witness has not only witnessed, but also acted in a way that expands the reach of the event. But the action is complex because in a sense the focus is no longer necessarily on the event and its symbolic power, but rather on the witness/documenter’s presence at the event.

Perhaps we could speak of such symbolic actions before pervasive documentation as enacted (rather than performed) and such actions in an age of pervasive documentation as both enacted and performed. The former locks the participants and witnesses into a mutually constituted field of experience. The latter disintegrates the field into its individual parts, making the participants more conscious of their selves as actors in a spectacle. This is further complicated and heightened when the participants are self-documenting their own participation, which, while not captured in this particular photograph as far as I can tell, undoubtedly happened. Also in the latter case, the witnesses are likewise distanced from the event as individuals documenting and publicizing.

The parting question, then, is still: what difference does all this make? What difference does it make for the power and influence of such events?

The Internet & the Youth of Tomorrow: Highlights from the Pew Survey

The Pew Internet & American Life Project conducted “an opt-in, online survey of a diverse but non-random sample of 1,021 technology stakeholders and critics … between August 28 and October 31, 2011.” The survey presented two scenarios for the youth of 2020, asked participants to choose which they thought more likely, and then invited elaboration.

Here are the two scenarios and the responses they garnered:

Some 55% agreed with the statement:

In 2020 the brains of multitasking teens and young adults are “wired” differently from those over age 35 and overall it yields helpful results. They do not suffer notable cognitive shortcomings as they multitask and cycle quickly through personal- and work- related tasks. Rather, they are learning more and they are more adept at finding answers to deep questions, in part because they can search effectively and access collective intelligence via the internet. In sum, the changes in learning behavior and cognition among the young generally produce positive outcomes.

Some 42% agreed with the opposite statement, which posited:

In 2020, the brains of multitasking teens and young adults are “wired” differently from those over age 35 and overall it yields baleful results. They do not retain information; they spend most of their energy sharing short social messages, being entertained, and being distracted away from deep engagement with people and knowledge. They lack deep-thinking capabilities; they lack face-to-face social skills; they depend in unhealthy ways on the internet and mobile devices to function. In sum, the changes in behavior and cognition among the young are generally negative outcomes.

However, the report also noted the following:

While 55% agreed with the statement that the future for the hyperconnected will generally be positive, many who chose that view noted that it is more their hope than their best guess, and a number of people said the true outcome will be a combination of both scenarios.

In all honesty, I am somewhat surprised the results split so evenly. I would have expected the more positive scenario to perform better than it did. The most interesting aspect of the report, however, is of course the set of excerpts presented from the respondents’ elaborations. Here are a few with some interspersed commentary.

A number of respondents wrote about the skills that will be valued in the emerging information ecosystem:

  • There are concerns about new social divides. “I suspect we’re going to see an increased class division around labor and skills and attention,” said media scholar danah boyd.
  • “The essential skills will be those of rapidly searching, browsing, assessing quality, and synthesizing the vast quantities of information,” wrote Jonathan Grudin, principal researcher at Microsoft. “In contrast, the ability to read one thing and think hard about it for hours will not be of no consequence, but it will be of far less consequence for most people.”

Among the more interesting excerpts was this from Amber Case, cyberanthropologist and CEO of Geoloqi:

  • “The human brain is wired to adapt to what the environment around it requires for survival. Today and in the future it will not be as important to internalize information but to elastically be able to take multiple sources of information in, synthesize them, and make rapid decisions … Memories are becoming hyperlinks to information triggered by keywords and URLs. We are becoming ‘persistent paleontologists’ of our own external memories, as our brains are storing the keywords to get back to those memories and not the full memories themselves.”
I’m still not convinced at all by the argument against internalization. (You can read why here, here, and here.) But she is certainly correct about our “becoming ‘persistent paleontologists’ of our own external memories.” And the point was memorably put as well. We are building vast repositories of external memory and revisiting those stores in ways that are historically novel. We’ve yet to register the long-term consequences.

The notion of our adaptability to new information environments was also raised frequently:

  • Cathy Cavanaugh, an associate professor of educational technology at the University of Florida, noted, “Throughout human history, human brains have elastically responded to changes in environments, society, and technology by ‘rewiring’ themselves. This is an evolutionary advantage and a way that human brains are suited to function.”

This may be true enough, but what is missing from these sorts of statements is any discussion of which environments might be better or worse for human beings. To acknowledge that we adapt is to say nothing about whether or not we ought to adapt. Or, if one insists, Borg-like, that we must adapt or die, there is little discussion about whether this adaptation leaves us on the whole better off. In other words, we ought to be asking whether the environment we are asked to adapt to is more or less conducive to human flourishing. If it is not, then all the talk of adaptation is a thinly veiled fatalism.

Some, however, did make strong and enthusiastic claims for the beneficence of the emerging media environment:

  • “The youth of 2020 will enjoy cognitive ability far beyond our estimates today based not only on their ability to embrace ADHD as a tool but also by their ability to share immediately any information with colleagues/friends and/or family, selectively and rapidly. Technology by 2020 will enable the youth to ignore political limitations, including country borders, and especially ignore time and distance as an inhibitor to communications. There will be heads-up displays in automobiles, electronic executive assistants, and cloud-based services they can access worldwide simply by walking near a portal and engaging with the required method such as an encrypted proximity reader (surely it will not be a keyboard). With or without devices on them, they will communicate with ease, waxing philosophic and joking in the same sentence. I have already seen youths of today between 20 and 35 who show all of these abilities, all driven by and/or enabled by the internet and the services/technologies that are collectively tied to and by it.”

This was one of the more techno-utopian predictions in the survey. The notion of “embracing ADHD as a tool” is itself sufficiently jarring to catch one’s attention. One gets the gist of what the respondent is foreseeing — a society in which cognitive values have been radically re-ordered. Where sustained attention is no longer prized, attention deficit begins to seem like a boon. The claims about the irrelevance of geographic and temporal limits are particularly interesting (or disconcerting). They seemingly make a virtue of disembodied rootlessness. The youth of the future will, in this scenario, be temporally and spatially homeless, virtually dispersed. (The material environment of the future imagined here also invites comparison to the dystopian vision of the film WALL-E.)

Needless to say, not all respondents were nearly so sanguine. Most interestingly, many of the youngest respondents were among the most concerned:

  • A number of the survey respondents who are young people in the under-35 age group—the central focus of this research question—shared concerns about changes in human attention and depth of discourse among those who spend most or all of their waking hours under the influence of hyperconnectivity.

This resonates with my experience teaching as well. There’s a palpable unease among many of the most connected with the pace, structure, and psychic consequences of the always-on life. They appear to be discovering through experience what is eloquently put by Annette Liska:

  • Annette Liska, an emerging-technologies design expert, observed, “The idea that rapidity is a panacea for improved cognitive, behavioral, and social function is in direct conflict with topical movements that believe time serves as a critical ingredient in the ability to adapt, collaborate, create, gain perspective, and many other necessary (and desirable) qualities of life. Areas focusing on ‘sustainability’ make a strong case in point: slow food, traditional gardening, hands-on mechanical and artistic pursuits, environmental politics, those who eschew Facebook in favor of rich, active social networks in the ‘real’ world.”

One final excerpt:

  • Martin D. Owens, an attorney and author of Internet Gaming Law, wrote: “Just as with J.R.R. Tolkien’s ring of power, the internet grants power to the individual according to that individual’s wisdom and moral stature. Idiots are free to do idiotic things with it; the wise are free to acquire more wisdom. It was ever thus.”

In fact, the ring in Tolkien’s novels is a wholly corrupting force. The “wisdom and moral stature” of the wearer may only forestall the deleterious effects. The most wise avoided using it at all. I won’t go so far as to suggest that the same applies to the Internet, but I certainly couldn’t let Tolkien be appropriated in the service of a misguided view of technological neutrality.

To Remember, Or To Forget …

Two (and a half) articles for your consideration today:

a. “Technologically Enhanced Memory” by Evan Selinger at Slate.

b. “The Forgetting Pill Erases Painful Memories” by Jonah Lehrer at Wired. See also Lehrer’s related blog post, “Learning to Forget”.

Selinger frames his essay as a discussion of the implications of transactive memory and extended cognition; in short, the ability to offload our memory and thinking onto our environment. That our environment serves as a “memory-prompting tool” is hardly controversial. That in this way it becomes part of our thinking process or an extended mind is a little more so.

Philosopher Andy Clark, a well-known advocate of the extended mind hypothesis, asks us to consider the hypothetical case of Otto and his notebook. Otto suffers from Alzheimer’s and uses a notebook to help him remember information, for example the address of the Museum of Modern Art he wishes to visit. He consults his notebook and acts based on what he finds there in much the same way that someone without an impairment would consult their memory. In this way the notebook is incorporated into his thinking and acting and is thus an extension of his mind (although not, obviously, of his brain).

Notebooks such as Otto’s have a rather elegant history, as it turns out. During the nineteenth century, the aide-mémoire, a tiny notebook within a decorative case on a chain, became a popular (and practical) fashion accessory. Today it seems they flourish mostly on Etsy. For our part, the smartphone has become our aide-mémoire. Less elegant perhaps, but more powerful by many orders of magnitude. And it is these orders of magnitude that give Selinger pause. It is one thing to jot down a list of things to do today; it is quite another to have gigabytes of space dedicated to the storage of textual and audiovisual memories. It is quite another still to have the ability to curate those memories for public consumption.

With Timehop, a lifelogging app that performs “memory engineering,” in mind, he cites Elizabeth Lawley, who wonders: “If we go through life aware we’re leaving behind a detailed digital archive that future generations can read, might we be inclined to behave inauthentically so that our digital breadcrumbs point back to idealized versions of ourselves?”

This is a dilemma not unlike the emergence of “Facebook Eye” described by Nathan Jurgenson. “Facebook Eye” describes the tendency to experience life with a view to its re-presentation on a social media site like Facebook and the responses that re-presentation is likely to draw.

Pointing to historical antecedents, in this case the aide-mémoire, is helpful to a certain degree, but it also risks lulling us into a facile acceptance of a state of affairs that by its quantitative difference becomes also qualitatively (and consequentially) different.

In his Wired article, Lehrer explores the emerging possibility of pharmaceutical forgetting: pills that may be able to target specific and traumatic memories. It is, as Lehrer notes, a more tactical and strategic realization of the capabilities that animate the plot of Eternal Sunshine of the Spotless Mind. The premise of Lehrer’s article is that, contrary to popular thinking, the best way to handle traumatic memories is not to air them or talk them through, but rather simply to forget them. And a pill that helped patients do just that would be markedly more successful (and efficient) than the talking cure.

Lehrer pays some attention to the ethical concerns, but you’ll have to go elsewhere to consider those more deeply. Just as the appeal to historical antecedents tends to communicate “there’s nothing of consequence here,” Lehrer is fond of pointing to the natural fallibility of human memory as a precedent for pharmaceutically enabled forgetting, one that, he seems to suggest, renders the practice more or less unproblematic, or at least not seriously so.

It seems odd, however, to point to the natural and inevitable distortions and deletions of memory in defense of drugs designed to help us forget traumatic memories we seem unable to shake.

It is also worth considering what role extended cognition plays in the remembering and forgetting explored by Lehrer. Lehrer is focused almost exclusively on neurological processes. Yet memory, as the extended mind theorists (among many others) have emphasized, is more than a neurological phenomenon. It is also an embodied, artifactual, spatial, social, and technological reality.

It would not be the first such irony, but perhaps in the future we will take pills to help us dissolve the memories that our technologically enhanced memories won’t let us forget.

“Liking” and Loving: Identity on Facebook

By one of those odd twists of associative memory, John Caputo’s little book, On Religion, came to mind today. Specifically, I recalled a particular question that he posed in the opening pages.

“So the question we need to ask ourselves is the one Augustine puts to himself in the Confessions, ‘what do I love when I love God?,’ or ‘what do I love when I love You, my God?,’ as he also puts it, or, running these two Augustinian formulations together, ‘what do I love when I love my God?’”

I appreciate this formulation because it forces a certain self-critical introspection. It refuses us the comforts of thoughtlessness.

A little further on, Caputo takes the liberty of putting his words to the spirit of Augustine’s quest:

“… I am after something, driven to and fro by my restless search for something, by a deep desire, indeed by a desire beyond desire, beyond particular desires for particular things, by a desire for I-know-not-what, for something impossible. Still, even if we are lifted on the wings of such a love, the question remains, what do I love, what am I seeking?”

Then Caputo makes an important observation:

“When Augustine talks like this, we ought not to think of him as stricken by a great hole or lack or emptiness which he is seeking to fill up, but as someone overflowing with love who is seeking to know where to direct his love. He is not out to see what he can get, but out to see what he can give.”

Not too long ago I posted some thoughts on what I took to be the Augustinian notes sounded in Matt Honan’s account of his time at the Consumer Electronics Show in Las Vegas. In that post, “Hole In Our Hearts,” I employed the language Caputo cautioned against, but I’m now inclined to think that Caputo is on to something. His distinction is not merely academic.

Plummeting, perhaps, from the sublime to the … what to call it, let us just say the ordinary, this formulation somehow triggered the question, “what do I like when I ‘Like’ on Facebook?” Putting it this way suggests that what I like may not, in fact, be what I “Like.” The question pushes us to examine why it is that we do what we do in social media contexts (Facebook being here a synecdoche).

Very often what we do on social media platforms is analyzed as a performance or construction of the self. On this view, what we are doing is giving shape to our identity. What we like, if you will, is the projected identity, or better yet, the perception and affirmation of that identity by others. This, of course, does not exhaust what is done with social media, but it is a significant part of it.

There are, remembering Caputo’s distinction, two ways we might understand this. Caputo distinguished between love or desire understood as a lack seeking to be filled and love or desire understood as a surplus seeking to be expended. This distinction can be usefully mapped over the motivations driving our social media activity.

When we think about social media as a field for the construction and enactment of identities, we tend to think of it as the projection, authentic or inauthentic, of a fixed reality. Perhaps we would do well to consider the possibility that identity on social networks is not so much being performed as it is being sought, that behind the identity-work on social media platforms there is an inchoate and fluid reality seeking to take shape by expending itself.

The entanglement of our loves (or, likes) and our identity on social media has, it turns out, an antecedent in the Augustinian articulation of the human condition. Caputo went on to note that the question of what we love is also bound up with another Augustinian query:

“Augustine’s question — “what do I love when I love my God?” — persists as a life-long and irreducible question, a first, last, and constant question, which permanently pursues us down the corridors of our days and nights, giving salt and fire to our lives. That is because that question is entangled with the other persistent Augustinian question, “who am I?” …”

What we love and desire and who we are — these two are bound up irrevocably with one another.

“I have been made a question to myself,” Augustine famously declared. And so it is with all of us. The problem with our talk about the performance of identity is that it tends to tacitly assume a fixed and knowing identity engaging in the performance. The reality, as Augustine understood, is more complex and whatever it is we are doing online is tied up with that complexity.

Flânerie and the Dérive, Online and Off

Had I been really ambitious with yesterday’s post, I would have attempted to draw in the little controversy that broke out earlier this week over the practice of flânerie. You may be wondering what flânerie is; or, if you know what it is, you may be wondering why on earth it would spark controversy.

Short answers today. First, flânerie is the practice of being a flâneur. The flâneur is a figure from the city streets of nineteenth century Paris who made his way through the crowded avenues and arcades watching and observing, engaging in what we could call peripatetic social criticism. He was popularized in the work of the poet Charles Baudelaire and the literary critic Walter Benjamin.

The recent mini-controversy revolved around the practice of cyber-flânerie, playing the part of the flâneur online. The debate was kicked off by Evgeny Morozov’s “The Death of the Cyberflâneur” in the NY Times. Morozov drew a spirited response from Dana Goldstein in a blog post, “Flânerie Lives! On Facebook, Sex, and the Cybercity”.

Morozov is a champion of the open, chaotic web of the early Internet. He is a fierce critic of the neat, closed web of Google, Facebook, and apps. Here is the gist of his argument (although, as always, I encourage you to read the whole piece):

“It’s easy to see, then, why cyberflânerie seemed such an appealing notion in the early days of the Web. The idea of exploring cyberspace as virgin territory, not yet colonized by governments and corporations, was romantic; that romanticism was even reflected in the names of early browsers (“Internet Explorer,” “Netscape Navigator”).

Online communities like GeoCities and Tripod were the true digital arcades of that period, trading in the most obscure and the most peculiar, without any sort of hierarchy ranking them by popularity or commercial value. Back then eBay was weirder than most flea markets; strolling through its virtual stands was far more pleasurable than buying any of the items. For a brief moment in the mid-1990s, it did seem that the Internet might trigger an unexpected renaissance of flânerie.

….

Something similar has happened to the Internet. Transcending its original playful identity, it’s no longer a place for strolling — it’s a place for getting things done. Hardly anyone “surfs” the Web anymore. The popularity of the “app paradigm,” whereby dedicated mobile and tablet applications help us accomplish what we want without ever opening the browser or visiting the rest of the Internet, has made cyberflânerie less likely. That so much of today’s online activity revolves around shopping — for virtual presents, for virtual pets, for virtual presents for virtual pets — hasn’t helped either. Strolling through Groupon isn’t as much fun as strolling through an arcade, online or off.”

Further on, Morozov lays much of the blame on Facebook and the quest for frictionless sharing. All told, it seems to me that his critique centers on the loss of anonymity and the lack of serendipity that characterize the Facebook model of the web experience.

For her part, Goldstein, who has done graduate work on nineteenth-century flânerie, questioned whether anonymity was essential to the practice. Morozov cited Zygmunt Bauman’s line, “The art that the flâneur masters is that of seeing without being caught looking.” Goldstein, on the other hand, writes, “The historian Della Pollock, contra Morozov and Bauman, describes flânerie as ‘observing well and…being well worth observing’ in turn.”

Yet, even if we did grant Bauman’s characterization of flânerie, Goldstein still believes that much of what we do online qualifies as such:

“Seeing without being caught looking. Is there any better description for so much of what we do online? Admit it: You’re well acquainted with your significant other’s ex’s Facebook page. You’ve dived deep into the search results for the name of the person you’re dating, the job applicant you’re interviewing, the prospective tenant or roommate. On the dating site OkCupid, you can even pay for the privilege of “enhanced anonymous browsing,” in which you can see who checks out your profile, but no one can see which profiles you’ve looked at yourself. On Facebook, one of the most common spam bots promises to reveal who’s been looking at your profile. It’s so tempting! People click and the spam spreads, but it’s a trick: Facebook conceals users’ browsing histories from one another.”

I should say that I have no particular expertise in the study of the flâneur beyond a little reading of Benjamin here and there. My hunch, however, is that the tradition of flânerie is wide enough to accommodate the readings of both Morozov and Goldstein. Moreover, I suspect that perhaps a metaphor has been unhelpfully reified. After all, cyber-space (who knew we would be using “cyber” again) and physical space are only metaphorically analogous. That metaphor is suggestive and has generated insightful analysis, but it is a metaphor that can be pushed too far. Cyberflânerie and, what shall we call it, brick-and-mortar flânerie — these two are also only metaphorically analogous. Again, it is a rich and suggestive metaphor, but it will have its limits. Phenomenologically, clicking on a link is not quite the same thing as turning a corner. The way each presents itself to us and acts on us is quite different.

Additionally, to complicate matters, it might also be interesting to draw into the conversation related practices such as the dérive popularized by Guy Debord and the Situationists. That, however, will remain only a passing observation, except to note that intention is of great consequence. Why are we engaging in this practice or that? That seems to me to be the question. Stalking, flânerie, and the dérive may have structural similarities (particularly to the outside observer, the one watching the watcher as it were, but not knowing why the watcher watches), but they are distinguished decisively by their intent. Likewise, online analogs should be distinguished according to their intent.

Although, having just written that, it occurs to me that the dérive analogy does have an interesting dimension to offer. Regardless of our intentions, when we go online we do often find ourselves very far afield from whatever our initial reason for going online might have been. This is something the dérive assumes as an integral part of the practice. Also, while varieties of flânerie involve acting to see and to be seen in debatable proportions, the dérive, in analogy with psychotherapy, is less focused on the seeing and being seen altogether. Here is Debord on the dérive:

“One of the basic situationist practices is the dérive, a technique of rapid passage through varied ambiences. Dérives involve playful-constructive behavior and awareness of psychogeographical effects, and are thus quite different from the classic notions of journey or stroll.

“In a dérive one or more persons during a certain period drop their relations, their work and leisure activities, and all their other usual motives for movement and action, and let themselves be drawn by the attractions of the terrain and the encounters they find there. Chance is a less important factor in this activity than one might think: from a dérive point of view cities have psychogeographical contours, with constant currents, fixed points and vortexes that strongly discourage entry into or exit from certain zones.”

This practice seems to better fit (metaphorically still) the online experience of either the early web or its more recent iteration. The difference, it would seem, lies in the object of study. Flânerie oscillates between social criticism and performance. The dérive takes one’s own psychic state as the object of study insofar as it is revealed by the manner in which we negotiate space, online and off.