“Is the Internet Driving Us Mad?”: Attempting a Sane Response

It’s Newsweek’s turn. Perhaps feeling a bit of title envy, the magazine has shamelessly ripped off The Atlantic’s formula with a new cover story alternately titled “Is the Internet Driving Us Mad?” or “Is the Internet Making Us Crazy?”

You can probably guess what follows already. Start with a rather dramatic and telltale anecdote — in this case Jason Russell’s “reactive psychosis” in the wake of his “Kony 2012” fame — and proceed to cite a number of studies and further anecdotes painting a worrisome picture that appears somehow correlated to heavy Internet use. You’re also likely to guess that Sherry Turkle is featured prominently. For the record, I intend no slight by that last observation; I rather appreciate Turkle’s work.

Before the day is out, this article will be “Liked” and tweeted thousands of times, I’m sure. It will also be torn apart relentlessly and ridiculed. Unfortunately, the title will be responsible for both responses. A more measured title would likely have elicited a more sympathetic reading, but also less traffic.

There is much in the article that strikes an alarmist note and many of the anecdotes, while no doubt instances of real human suffering, seem to describe behaviors that are not characteristic of most people using the Internet. I’m also not one to go in for explanations that localize this or that behavior in this or that region of the brain. I’m not qualified to evaluate such conclusions, but they strike me as a tad reductionistic. That said, this does ring true:

“We may appear to be choosing to use this technology, but in fact we are being dragged to it by the potential of short-term rewards. Every ping could be a social, sexual, or professional opportunity, and we get a mini-reward, a squirt of dopamine, for answering the bell. ‘These rewards serve as jolts of energy that recharge the compulsion engine, much like the frisson a gambler receives as a new card hits the table,’ MIT media scholar Judith Donath recently told Scientific American. ‘Cumulatively, the effect is potent and hard to resist.’”

Of course, it all hinges on correlation and causation. The article itself suggests as much right at the end. “So what do we do about it?” the author, Tony Dokoupil, asks. “Some would say nothing,” he continues, “since even the best research is tangled in the timeless conundrum of what comes first.” He adds:

“Does the medium break normal people with its unrelenting presence, endless distractions, and threat of public ridicule for missteps? Or does it attract broken souls?

“But in a way, it doesn’t matter whether our digital intensity is causing mental illness, or simply encouraging it along, as long as people are suffering.”

Dokoupil concludes on a note of fateful optimism: “The Internet is still ours to shape. Our minds are in the balance.” In truth, whatever we make of the neuroscience involved, there is something to be said for the acknowledgement that we have choices to make about our relationship to the Internet.

After we cut through the hype and debates about causation, it would seem that there is a rather commonsensical way of approaching all of this.

If the article resonates with you because it seems to describe your experience, then it is probably worth taking it as a warning of sorts and an invitation to make some changes. So, for example, if you are sitting down to play video games for multiple hours without interruption and thus putting yourself in danger of developing a fatal blood clot … you may want to rethink your gaming habits.

If you are immediately reaching for your Internet device of choice before you have so much as opened your eyes in the morning, you may want to consider allowing yourself a little time before diving into the stream. Pour some coffee, have a nice breakfast. Give yourself a little time with your thoughts, or your loved ones. If you find that your thoughts are all incessantly begging you to get online and your loved ones’ faces are turning disconcertingly into Facebook icons, then perhaps you have all the more reason to restore a little balance to your Internet use.

If you take an honest inventory of your emotions and feelings, and you find that you’re anchoring a great deal of your self-worth on the replies you get to your status updates, tweets, or blog posts, then perhaps, just maybe, you might want to consider doing a little soul searching and re-prioritizing.

If “‘the computer is like electronic cocaine,’ fueling cycles of mania followed by depressive stretches” somehow resonates with your own experience, then it may be time to talk to your friends about getting a better grip on the way you’re ordering your life.

None of this is rocket science (or neuroscience). I’d like to think that most of us have a pretty good sense of when certain activities are tending toward the counterproductive and unhealthy end of the spectrum. There’s no need to get all apocalyptic about it; we can leave that to the media. Instead, take inventory of what is truly important — you know, the same old sappy but precious and indispensable stuff — and then ask yourself how your Internet use relates to them, and take action accordingly. If you find that you are unable to implement the action you know you need to take, then certainly get some help.

In Search of the Real

While advancing age is no guarantee of advancing self-knowledge, I have found that growing up a bit can be enlightening. Looking back, it now seems pretty clear to me that I have always been temperamentally Arcadian – and I’m grateful to W. H. Auden for helping me come to this self-diagnosis. In the late 1940s, Auden wrote an essay distinguishing the Arcadian and Utopian personalities. The former looks instinctively to the past for truth, goodness, and beauty; the latter searches for those same things in the unrealized future.

Along with Auden, but in much less distinguished fashion, I am an Arcadian; there is little use denying it. When I was on the cusp of adolescence, I distinctly recall lamenting with my cousin the passing of what we called the “good old days.” Believe it; it is sadly true. The “good old days” incidentally were the summer vacations we enjoyed not more than two or three years earlier. If I am not careful, I risk writing the grocery list elegiacally. I believe, in fact, that my first word was a sigh. This last is not true, alas, but it would not have been out of character.

So you can see that this presents a problem of sorts for someone who writes about technology. The temptation to criticize is ever present and often difficult to resist. With so many Utopians about, one can hardly be blamed. In truth, though, there are plenty of Arcadians about as well. The Arcadian is the critic of technology, the one whose first instinct is to mourn what is lost rather than celebrate what is gained. It is with this crowd that I instinctively run. They are my kindred spirits.

But Auden knew enough to turn his critical powers upon his own Arcadianism. As Alan Jacobs put it in his Introduction to Auden’s “The Age of Anxiety,” “Arcadianism may have contributed much to Auden’s mirror, but he knew that it had its own way of warping reflections.” And so do I, at least in my better moments.

I acknowledge my Arcadianism by way of self-disclosure leading into a discussion of Nathan Jurgenson’s provocative essay in The New Inquiry, “The IRL Fetish.” IRL here stands for “in real life,” offline experience as opposed to the online or virtual, and Jurgenson takes aim at those who fetishize offline experience. I can’t be certain whether he had Marx, Freud, or Lacan in view when he chose to describe the obsession with offline experience as a fetish. I suspect it was simply a rather suggestive term that connoted something of the irrational and esoteric. But it does seem clear that he views this obsession/fetish as woefully misguided at best, and this because it is built on an erroneous conceptualization of the relationship between the online and the offline.

The first part of Jurgenson’s piece describes the state of affairs that has given rise to the IRL Fetish. It is an incisive diagnosis written with verve. He captures the degree to which the digital has penetrated our experience with clarity and vigor. Here is a sampling:

“Hanging out with friends and family increasingly means also hanging out with their technology. While eating, defecating, or resting in our beds, we are rubbing on our glowing rectangles, seemingly lost within the infostream.” [There is more than one potentially Freudian theme running through this piece.]

“The power of ‘social’ is not just a matter of the time we’re spending checking apps, nor is it the data that for-profit media companies are gathering; it’s also that the logic of the sites has burrowed far into our consciousness.”

“Twitter lips and Instagram eyes: Social media is part of ourselves; the Facebook source code becomes our own code.”

True. True. And, true.

From here Jurgenson sums up the “predictable” response from critics: “the masses have traded real connection for the virtual,” “human friends, for Facebook friends.” Laments are sounded for “the loss of a sense of disconnection,” “boredom,” and “sensory peace.” The equally predictable solution, then, is to log off and re-engage the “real” world.

Now it does not seem to me that Jurgenson thinks this is necessarily bad counsel as far as it goes. He acknowledges that “many of us, indeed, have been quite happy to occasionally log-off …” The real problem, according to Jurgenson, the thing that is “new” in the chorus of critics, is arrogant self-righteousness. Those are my words, but I think they do justice to Jurgenson’s evaluation. “Immense self-satisfaction,” “patting ourselves on the back,” boasting, “self-congratulatory consensus,” constructing “their own personal time-outs as more special” – these are his words.

This is a point I think some of Jurgenson’s critics have overlooked. At this juncture, his complaint is targeted rather precisely, at least as I read it, at the self-righteousness implicit in certain valorizations of the offline. Now, of course, deciding who is in fact guilty of self-righteous arrogance may involve making judgment calls that more often than not necessitate access to a person’s opaque intentions, and there is, as of yet, no app for that. (Please don’t tell me if there is.) But, insofar as we are able to reasonably identify the attitudes Jurgenson takes to task, then there is nothing particularly controversial about calling them out.

In the last third of the essay, Jurgenson pivots on the following question: “How have we come to make the error of collectively mourning the loss of that which is proliferating?” Response: “In great part, the reason is that we have been taught to mistakenly view online as meaning not offline.”

At this point, I do want to register a few reservations. Let me begin with the question above and the claim that “offline experience” is proliferating. What I suspect Jurgenson means here is that awareness of offline experience and a certain posture toward offline experience is proliferating. And this does seem to be the case. Semantically, it would have to be. The notion of the offline as “real” depends on the notion of the online; it would not have emerged apart from the advent of the online. The online and the offline are mutually constitutive as concepts; as one advances, the other follows.

It remains the case, however, that “offline,” only recently constituted as a concept, describes an experience that paradoxically recedes as it comes into view. Consequently, Jurgenson’s later assertion – “There was and is no offline … it has always been a phantom.” – is only partially true. In the sense that there was no concept of the offline apart from the online and that the online, once it appears, always penetrates the offline, then yes, it is true enough. However, this does not negate the fact that while there was no concept of the offline prior to the appearance of the online, there did exist a form of life that we can retrospectively label as offline. There was, therefore, an offline (even if it wasn’t known as such) experience realized in the past against which present online/offline experience can be compared.

What the comparison reveals is that a form of consciousness, a mode of human experience is being lost. It is not unreasonable to mourn its passing, and perhaps even to resist it. It seems to me that Jurgenson would not necessarily be opposed to this sort of rear-guard action if it were carried out without an attendant self-righteousness or aura of smug superiority. But he does appear to be claiming that there is no need for such rear-guard actions because, in fact, offline experience is as prominent and vital as it ever was. Here is a representative passage:

“Nothing has contributed more to our collective appreciation for being logged off and technologically disconnected than the very technologies of connection. The ease of digital distraction has made us appreciate solitude with a new intensity. We savor being face-to-face with a small group of friends or family in one place and one time far more thanks to the digital sociality that so fluidly rearranges the rules of time and space. In short, we’ve never cherished being alone, valued introspection, and treasured information disconnection more than we do now.”

It is one thing, however, to value a kind of experience, and quite another to actually experience it. It seems to me, in fact, that one portion of Jurgenson’s argument may undercut the other. Here are his two central claims, as I understand them:

1. Offline experience is proliferating; we enjoy it more than ever before.

2. Online experience permeates offline experience; the distinction is untenable.

But if the online now permeates the offline – and I think Jurgenson is right about this – then it cannot also be the case that offline experience is proliferating. The confusion lies in failing to distinguish between “offline” as a concept that emerges only after the online appears, and “offline” as a mode of experience unrecognized as such that predates the online. Let us call the former the theoretical offline and the latter the absolute offline.

Given the validity of claim 2 above, claim 1 holds only for the theoretical offline, not the absolute offline. And it is the passing of the absolute offline that critics mourn. The theoretical offline makes for a poor substitute.

The real strength of Jurgenson’s piece lies in his description of the immense interpenetration of the digital and material (another binary that does not quite hold up, actually). According to Jurgenson, “Smartphones and their symbiotic social media give us a surfeit of options to tell the truth about who we are and what we are doing, and an audience for it all, reshaping norms around mass exhibitionism and voyeurism.” To put it this way is to mark the emergence of a ubiquitous, unavoidable self-consciousness.

I would not say as Jurgenson does at one point, “Facebook is real life.” The point, of course, is that every aspect of life is real. There is no non-being in being. Perhaps it is better to speak of the real not as the opposite of the virtual, but as that which is beyond our manipulation, what cannot be otherwise. In this sense, the pervasive self-consciousness that emerges alongside the socially keyed online is the real. It is like an incontrovertible law that cannot be broken. It is a law haunted by the loss its appearance announces, and it has no power to remedy that loss. It is a law without a gospel.

Once self-consciousness takes its place as the incontrovertibly real, it paradoxically generates a search for something other than itself, something more real. This is perhaps the source of what Jurgenson has called the IRL fetish, and in this sense it has something in common with the Marxian and Freudian fetish: it does not know what it seeks. The disconnection, the unplugging, the logging off are pursued as if they were the sought-after object. But they are not. The true object of desire is a state of pre-digital innocence that, like all states of innocence, once lost can never be recovered.

Perhaps I spoke better than I knew when I was a child, of those pleasant summers. After all, I am of that generation for which the passing from childhood into adulthood roughly coincided with the passage into the Digital Age. There is a metaphor in that observation. To pass from childhood into adulthood is to come into self-awareness, it is to leave naivety and innocence behind. The passage into the Digital Age is also a coming into a pervasive form of self-awareness that now precludes the possibility of naïve experience.

All in all, it would seem that I have stumbled into my Arcadianism yet again.

The Simple Life in the Digital Age

America has always been a land of contradictions. At the very least we could say the nation’s history has featured the sometimes creative, sometimes destructive interplay of certain tensions. At least one of these tensions can be traced right back to the earliest European settlers. In New England, Puritans established a “city on a hill,” a community ordered around the realization of a spiritual ideal. Further south came adventurers, hustlers, and entrepreneurs looking to make their fortune. God and Gold, to borrow the title of Walter R. Mead’s account of the Anglo-American contribution to the formation of the modern world, sums it up nicely. Of course, this is also a rather ancient opposition. But perhaps we could say that never before had these two strands come together in quite the same way to form the double helix of a nation’s DNA.

This tension between spirituality and materialism also overlaps with at least two other tensions that have characterized American culture from its earliest days: The first of these, the tension between communitarianism and individualism, is easy to name. The other, though readily discernible, is a little harder to capture. For now I’m going to label this pair hustle and contemplation and hope that it conveys the dynamic well enough. Think Babbitt and Thoreau.

These pairs simplify a great deal of complexity, and of course they are merely abstractions. In reality, the oppositions are interwoven and mutually dependent. But thus qualified, they nonetheless point to recurring and influential types within American culture. These types, however, have not been balanced and equal. There has always seemed to be a dominant partner in each pairing: materialism, individualism, and hustle. But it would be a mistake to underestimate the influence of spirituality, communitarianism, and contemplation. Perhaps it is best to view them as the counterpoint to the main theme of American culture, together creating the harmony of the whole.

One way of nicely summing up all that is entailed by the counterpoints is to call it the pursuit of the simple life. The phrase sounds quaint, but it worked remarkably well in the hands of historian David E. Shi. In 1985, right in the middle of the decade that was to become synonymous with crass materialism – the same year Madonna released “Material Girl” – Shi published The Simple Life: Plain Living And High Thinking In American Culture. The audacity!

Shi weaves a variegated tapestry of individuals and groups that have advocated the simple life in one form or another throughout American history. Even though he purposely leaves out the Amish, Mennonites, and similar communities, he still is left with a long and diverse list of practitioners. Altogether they represent a wide array of motives animating the quest for the simple life. These include: “a hostility toward luxury and a suspicion of riches, a reverence for nature and a preference for rural over urban ways of life and work, a desire for personal self-reliance through frugality and diligence, a nostalgia for the past and a scepticism toward the claims of modernity, conscientious rather than conspicuous consumption, and an aesthetic taste for the plain and functional.”

This net gathers together Puritans and Quakers, Jeffersonians and Transcendentalists, Agrarians and Hippies, and many more. Perhaps if Shi were to update his work he might include hipsters in the mix. In any case, he would have no shortage of contemporary trends and movements to choose from. None of them dominant, of course, but recognizable and significant counterpoints still.

If I were tasked with updating Shi’s book, for example, I would certainly include a chapter on the critics of the digital age. Not all such critics would fit neatly into the simple life tradition, but I do think a good many would – particularly those who are concerned that the pace and rhythm of digitally augmented life crowds out solitude, silence, and reflection. Think, for example, of the many “slow” movements and advocates (myself included) of digital sabbaths. They would comfortably take their place alongside many of the individuals and movements in Shi’s account who have taken the personal and social consequences of technological advance as their foil. Thoreau is only the most famous example.

Setting present day critics of digital life in the tradition identified by Shi has a few advantages. For one thing, it reminds us that the challenges posed by digital technologies, while having their particularities, are not entirely novel in character. Long before the dawn of the digital age, individuals struggled to find the right balance between their ideals for the good life and the possibilities and demands created by the emergence of new technologies.

Moreover, we may readily and fruitfully apply some of Shi’s conclusions about the simple life tradition to the contemporary criticisms of life in the digital age.

First, the simple life has always been a minority ethic. “Many Americans have not wanted to lead simple lives,” Shi observes, “and not wanting to is the best reason for not doing so.” But, in his view, this does not diminish the salutary leavening effect of the few on the culture at large.

Yet, Shi concedes, “Proponents of the simple life have frequently been overly nostalgic about the quality of life in olden times, narrowly anti-urban in outlook, and too disdainful of the benefits of prosperity and technology.” Better to embrace the wisdom of Lewis Mumford, “one of the sanest of all the simplifiers” in Shi’s estimation. According to Mumford,

“It is not enough to say, as Rousseau once did, that one has only to reverse all current practice to be right … If our new philosophy is well-grounded we shall not merely react against the ‘air-conditioned nightmare’ of our present culture; we shall also carry into the future many elements of quality that this culture actually embraces.”

Sound advice indeed.

If we are tempted to dismiss the critics for their inconsistencies, however, Shi would have us think again: “When sceptics have had their say, the fact remains that there have been many who have demonstrated that enlightened self-restraint can provide a sensible approach to living that can be fruitfully applied in any era.”

But it is important to remember that the simple life at its best, now as ever, requires a person “willing it for themselves.” Impositions of the simple life will not do. In fact, they are often counterproductive and even destructive. That said, I would add, though Shi does not make this point in his conclusion, that the simple life is perhaps best sustained within a community of practice.

Wisely, Shi also observes, “Simplicity is more aesthetic than ascetic in its approach to good living.” Consequently, it is difficult to lay down precise guidelines for the simple life, digital or otherwise. Moderation takes many forms. And so individuals must deliberately order their priorities “so as to distinguish between the necessary and superfluous, useful and wasteful, beautiful and vulgar,” but no one such ordering will be universally applicable.

Finally, Shi’s hopeful reading of the possibilities offered by the pursuit of the simple life remains resonant:

“And for those with the will to believe in the possibility of the simple life and act accordingly, the rewards can be great. Practitioners can gradually wrest control of their own lives from the manipulative demands of the marketplace and the workplace … Properly interpreted, such a modern simple life informed by its historical tradition can be both socially constructive and personally gratifying.”

Nathan Jurgenson has recently noted that criticisms of digital technologies are often built upon false dichotomies and a lack of historical perspective. In this respect they are no different from criticisms advanced by advocates of the simple life, who were also tempted by similar errors. Ultimately, this will not do. Our thinking needs to be well-informed and clear-sighted, and the historical context Shi provides certainly moves us toward that end. At the very least, it reminds us that the quest for simplicity in the digital age has its analog precursors from which we stand to learn a few things.

Networked Momentum, or Why It Can Be So Hard To Opt Out

The NY Times’ Room for Debate forum has taken up the question: “Is Facebook a Fad? Will Our Grandchildren Tweet?” Contributors included Sherry Turkle and Keith Hampton among others. Each offers a quick take of about 300 to 500 words on the question.

In his comments addressing adoption patterns of social media networks, Hampton, a professor of communications at Rutgers, makes the following observation:

“Once critical mass has been reached, not only does the value to participants increase, but the cost of not participating and of discontinuance also increases. It is costly – in that you risk social isolation – to abandon a technology used by the majority of your communication partners.”

When I read this I was immediately reminded of the very useful concept of “technological momentum” articulated by historian of technology Thomas Hughes. I’m going to pull a Jonah Lehrer here and copy and paste a brief description of “technological momentum” from a post a few months back:

“Hughes seeks to stake out a position between technological determinism on the one hand and social constructivism on the other. He finds both accounts ultimately inadequate even though each manages to grasp a part of the whole situation. As a mediating position, Hughes offers the concept of “technological momentum.” By it Hughes seeks to identify the inertia that complex technological systems develop over time. Hughes’ approach is essentially temporal. He finds that the social constructivist approach best explains the behavior of young systems and the technological determinist approach best explains the behavior of mature systems. ‘Technological momentum’ offers a more flexible model that is responsive to the evolution of systems over time.”

Hughes was mostly concerned with what we might call hardware. The power grid was one of his key examples. But Hampton has articulated a social network variation of the principle. It is not enough to talk about social media participation merely in terms of opting in or opting out. For one thing, even those who opt out aren’t really altogether “out” as the folks at Cyborgology have pointed out. For another, one ought to account for the very real costs attached to opting out. These costs constitute an inertial force that can keep users logged on. Perhaps we might call these “sticky” networks.
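The logic of a “sticky” network can be sketched with a toy threshold model. This is entirely my own illustration, not anything proposed by Hampton or Hughes, and every name and parameter below is hypothetical: suppose each user stays on a network only if at least half of their contacts remain on it. Above a critical level of adoption the network becomes self-sustaining; below it, the whole thing unravels.

```python
import random

def simulate_stickiness(n_users=1000, adoption=0.9, leave_threshold=0.5,
                        contacts_per_user=20, rounds=10, seed=42):
    """Toy model of a 'sticky' network: a user is on the network only if
    at least `leave_threshold` of their contacts are on it. Returns the
    fraction of users still participating after `rounds` re-evaluations."""
    rng = random.Random(seed)
    on_network = [rng.random() < adoption for _ in range(n_users)]
    contacts = [rng.sample(range(n_users), contacts_per_user)
                for _ in range(n_users)]
    for _ in range(rounds):
        # Everyone re-evaluates simultaneously against the current state.
        on_network = [
            sum(on_network[c] for c in contacts[i]) / contacts_per_user
            >= leave_threshold
            for i in range(n_users)
        ]
    return sum(on_network) / n_users

# Above critical mass, leaving is too costly and use persists;
# below it, the same network evaporates.
print(simulate_stickiness(adoption=0.9))  # stays near 1.0
print(simulate_stickiness(adoption=0.3))  # collapses toward 0.0
```

The momentum Hughes described shows up here as hysteresis: once critical mass is reached, the network’s survival depends less on any individual’s current preference than on the accumulated state of everyone else’s participation.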


The Self in the Age of Digital Reproduction

The title suggested itself to me before I had written a word. I picked up Walter Benjamin’s classic essay, “The Work of Art in the Age of Its Technological Reproducibility,”* and in my mind I heard, “The Self in the Age of Its Digital Reproducibility.” I then read through the essay once more with that title in mind to see if there might not be something to the implied analogy. I think there might be.

Of course, what follows is not intended as a strict interpretation and reapplication of the whole of Benjamin’s essay. Instead, it’s a rather liberal, maybe even playful, borrowing of certain contours and outlines of his argument. The borrowing is premised on the assumption that there is a loose analogy between the mechanical reproduction of visual works of art enabled by photography and film, and the reproduction of our personality across a variety of networks enabled by digital technology.

At one point in the essay, Benjamin noted, “commentators had earlier expended much fruitless ingenuity on the question of whether photography was an art – without asking the more fundamental question of whether the invention of photography had not transformed the entire character of art …” Just so. We might say commentators have presently expended much fruitless ingenuity asking whether this or that digital technology has achieved the status of this or that prior analog technology without asking the more fundamental question of whether the invention of digital technology had not transformed the entire character of the field in question. The important question is not, for instance, whether Facebook friendship is real friendship, but how social media has transformed the entire character of relationships. So in this fashion we take Benjamin as our guide, letting his criticism suggest lines of inquiry for us.

Benjamin’s essay is best remembered for his discussion of the aura that attended an original work of art before the age of mechanical reproduction. That aura, grounded in the materiality of the work of art, was displaced by the introduction of mechanical reproduction.

“What, then, is the aura?” Benjamin asks. Answer: “A strange tissue of space and time: the unique apparition of a distance, however near it may be …” And, he adds, “what withers in the age of the technological reproducibility of the work of art is the latter’s aura.”

Aura, to put it more plainly, is a concept that gathers together the authenticity and authority felt in the presence of a work of art. This authenticity and authority of the work of art fail to survive its mechanical (as opposed to manual) reproduction for two principal reasons:

“First, technological reproduction is more independent of the original than is manual reproduction. For example, in photography it can bring out aspects of the original that are accessible only to the lens … but not to the human eye; or it can use certain processes, such as enlargement or slow motion, to record images which escape natural optics altogether. This is the first reason. Second, technological reproduction can place the copy of the original in situations which the original itself cannot attain. Above all, it enables the original to meet the recipient halfway, whether in the form of a photograph or in that of a gramophone record.”

May we speak of the aura that attends a person in “the here and now,” as Benjamin puts it? I would think so. Benjamin himself suggests as much when he discusses the work of the film actor: “The situation can be characterized as follows: for the first time – and this is the effect of film – the human being is placed in a position where he must operate with his whole living person while forgoing its aura. For the aura is bound to his presence in the here and now. There is no facsimile of the aura.”

The analogy I’ve thus far only alluded to is this. Just as mechanical means of reproduction, such as photography, multiplied and distributed an original work of art, likewise do digital technologies, social media most explicitly, multiply and distribute the self. But in so doing they dissolve the aura that attends the person in the flesh and consequently elicit a quest for authenticity.

Consider again the two reasons Benjamin gave for the eclipse of the aura in the face of mechanical reproduction: the independence of the reproduction and its ability to “place the copy in situations which the original itself cannot attain.” The latter of these is most easily reapplied to the digital reproduction of the self. Our social media profiles, for instance, or Skype to take another example, place the self in (multiple, simultaneous) situations that our embodied self cannot attain. But it is the former that may prove most interesting.

Benjamin’s notion of the aura is intertwined with a certain irreducible distance that cannot be collapsed simply by drawing close. Remember his most straightforward definition of aura: “A strange tissue of space and time: the unique apparition of a distance, however near it may be …” The reason for this is that ordinary human vision, even in drawing close, retains an optical inability to penetrate past a certain point. It can only see what it can see, and a manual reproduction cannot improve on that. But a mechanical reproduction can; it can make visible what would remain invisible to the human eye. Imagine, for instance, what an extreme photographic close-up might reveal about a human face, or how high-speed photography may capture a millisecond in time that unaided human perception would blur into larger patterns of movement.

“Just as the entire mode of existence of human collectives changes over long historical periods,” Benjamin observed, “so too does their mode of perception.” The point then is this: mechanical reproduction, photographs and film, enabled new forms of perception and these new forms of perception effectively neutralized the aura of the original.

Benjamin neatly summed up this dynamic with the notion of the optical unconscious:

“And just as enlargement not merely clarifies what we see indistinctly ‘in any case,’ but brings to light entirely new structures of matter, slow motion not only reveals familiar aspects of movements, but discloses quite unknown aspects within them … Clearly, it is another nature which speaks to the camera as compared to the eye. ‘Other’ above all in the sense that a space informed by human consciousness gives way to a space informed by the unconscious … it is through the camera that we first discover the optical unconscious …”

The camera, in other words, has the ability to bring to the attention of conscious perception what would ordinarily be perceived only at an unconscious level. Benjamin was explicitly pursuing an analogy to the Freudian unconscious. If you prefer to avoid that association, perhaps the term optical non-conscious would suffice. In this way, this mode of perception might be likened to the bodily forms of intentionality discussed by Merleau-Ponty, which are not quite the products of conscious attention. In any case, the capabilities of mechanical reproduction brought to conscious attention what ordinarily escaped it.

So what is the connection to digital reproductions of the self? We might get at it by identifying what could be called the “social unconscious.” Just as photography and film disclosed a real but ordinarily invisible world, might we not also say that digital reproductions of the self materialize real but otherwise invisible relations and mental or emotional states? What else could be the meaning of the “Like” button or the ability to see a visualization of our history with a friend as chronicled on Facebook? Moreover, interactions that before the age of digital reproduction may have passed between two or three persons now materialize before many more. And while most such interactions would have soon faded into oblivion when they passed out of memory, in the age of digital reproduction they achieve greater durability as well as visibility.

But what are the consequences? Benjamin can help us here as well.

“To an ever-increasing degree, the work reproduced becomes the reproduction of a work designed for reproducibility.” In an age of digital reproduction, the self we are reproducing is increasingly constructed for maximum reproducibility. We live with an eye to the reproductions we will create, which we in turn create with an eye to their being widely reproduced (read: “shared”).

Benjamin also noted the historic tension “between two polarities within the artwork itself … These two poles are the artwork’s cult value and its exhibition value.” When art was born in the service of magic, the importance of the figures drawn lay in their presence, not necessarily their exhibition. By liberating the work of art from the context of ritual and tradition, mechanical reproduction foregrounded exhibition. In the age of digital reproduction, mere being is incomplete without also being seen. It hasn’t happened if it’s not Facebook official. The private/public distinction is reconfigured for this very reason.

For those keen on registering economic consequences, Benjamin, speaking of the actor before the camera, offers this: “The representation of human beings by means of an apparatus has made possible a highly productive use of the human being’s self-alienation.” Now apply this to the person before the apparatus of social media.

Finally, Benjamin, speaking of the human person who will be mechanically reproduced by film, writes:

“While he stands before the apparatus, he knows that in the end he is confronting the masses. It is they who will control him. Those who are not visible, not present while he executes his performance, are precisely the ones who will control it. This invisibility heightens the authority of their control.”

Apply this more widely to all who are now engaged in the work of digitally reproducing themselves, and cue the quest for authenticity.

____________________________________________________

* I’m drawing on the second version of the essay, composed in 1935 and published in Harvard UP’s The Work of Art in the Age of Its Technological Reproducibility and Other Writings on Media (2008). According to the editors, this version “represents the form in which Benjamin originally wished to see the work published.”