Digital Devices and Learning to Grow Up

Last week the NY Times ran the sort of op-ed on digital culture that the cultured despisers love to ridicule. In it, Jane Brody made a host of claims about the detrimental consequences of digital media consumption on children, especially the very young. She had the temerity, for example, to call texting the “next national epidemic.” Consider as well the following paragraphs:

“Two of my grandsons, ages 10 and 13, seem destined to suffer some of the negative effects of video-game overuse. The 10-year-old gets up half an hour earlier on school days to play computer games, and he and his brother stay plugged into their hand-held devices on the ride to and from school. ‘There’s no conversation anymore,’ said their grandfather, who often picks them up. When the family dines out, the boys use their devices before the meal arrives and as soon as they finish eating.

‘If kids are allowed to play ‘Candy Crush’ on the way to school, the car ride will be quiet, but that’s not what kids need,’ Dr. Steiner-Adair said in an interview. ‘They need time to daydream, deal with anxieties, process their thoughts and share them with parents, who can provide reassurance.’

Technology is a poor substitute for personal interaction.”

Poor lady, I thought, and a grandmother no less. She was in for the kind of thrashing from the digital sophisticates that is usually reserved for Sherry Turkle.

In truth, I didn’t catch too many reactions to the piece, but one did stand out. At The Awl, John Hermann summed up the critical responses with admirable brevity:

“But the argument presented in the first installment is also proudly unsophisticated, and doesn’t attempt to preempt obvious criticism. Lines like ‘technology is a poor substitute for personal interaction,’ and non-sequitur quotes from a grab-bag of experts, tee up the most common and effective response to fears of Screen Addiction: that what’s happening on all these screens is not, as the writer suggests, an endless braindead Candy Crush session, but a rich social experience of its own. That screen is full of friends, and its distraction is no less valuable or valid than the distraction of a room full of buddies or a playground full of fellow students. Screen Addiction is, in this view, nonsensical: you can no more be addicted to a screen than to windows, sounds, or the written word.”

But Hermann does not quite leave it at that: “This is an argument worth making, probably. But tell it to an anxious parent or an alienated grandparent and you will sense that it is inadequate.” The argument may be correct, but, Hermann explains, “Screen Addiction is a generational complaint, and generational complaints, taken individually, are rarely what they claim to be. They are fresh expressions of horrible and timeless anxieties.”

Hermann goes on to make the following poignant observations:

“The grandparent who is persuaded that screens are not destroying human interaction, but are instead new tools for enabling fresh and flawed modes of human interaction, is left facing a grimmer reality. Your grandchildren don’t look up from their phones because the experiences and friendships they enjoy there seem more interesting than what’s in front of them (you). Those experiences, from the outside, seem insultingly lame: text notifications, Emoji, selfies of other bratty little kids you’ve never met. But they’re urgent and real. What’s different is that they’re also right here, always, even when you thought you had an attentional claim. The moments of social captivity that gave parents power, or that gave grandparents precious access, are now compromised. The TV doesn’t turn off. The friends never go home. The grandkids can do the things they really want to be doing whenever they want, even while they’re sitting five feet away from grandma, alone, in a moving soundproof pod.”

To see a more celebratory presentation of these dynamics, recall the Facebook ad from 2013.

Hermann, of course, is less sanguine.

“Screen Addiction is a new way for kids to be blithe and oblivious; in this sense, it is empowering to the children, who have been terrible all along. The new grandparent’s dilemma, then, is both real and horribly modern. How, without coming out and saying it, do you tell that kid that you have things you want to say to them, or to give them, and that you’re going to die someday, and that they’re going to wish they’d gotten to know you better? Is there some kind of curiosity gap trick for adults who have become suddenly conscious of their mortality?”

“A new technology can be enriching and exciting for one group of people and create alienation for another;” Hermann concludes, “you don’t have to think the world is doomed to recognize that the present can be a little cruel.”

Well put.

I’m tempted to leave it at that, but I’m left wondering about the whole “generational complaint” business.

To say that something is a generational complaint suggests that we are dealing with old men yelling, “Get off my lawn!” It conjures up the image of hapless adults hopelessly out of sync with the brilliant exuberance of the young. It is, in other words, to dismiss whatever claim is being made. Granted, Hermann has given us a more sensitive and nuanced discussion of the matter, but even in his account too much ground is ceded to this kind of framing.

If we are dealing with a generational complaint, what exactly do we mean by that? Ostensibly that the old are lodging a predictable kind of complaint against the young, a complaint that amounts to little more than an unwillingness to comprehend the new or a desperate clinging to the familiar. Looked at this way, the framing implies that the old, by virtue of their age, are the ones out of step with reality.

But what if the generational complaint is framed rather as a function of coming into responsible adulthood? Hermann approaches this perspective when he writes, “Screen Addiction is a new way for kids to be blithe and oblivious; in this sense, it is empowering to the children, who have been terrible all along.” So when a person complains that they are being ignored by someone enthralled by their device, are they showing their age or merely demanding a basic degree of decency?

Yes, children are wont to be blithe and oblivious, often cruelly indifferent to the needs of others. Traditionally, we have sought to remedy that obliviousness and self-centeredness. Indeed, coming into adulthood more or less entails gaining some measure of control over our naturally self-centered impulses for our own good and for the sake of others. In this light, asking a child–whether age seven or thirty-seven–to lay their device aside long enough to acknowledge the presence of another human being is simply to ask them to grow up.

Others have taken a different tack in response to Brody and Hermann. Jason Kottke arrives at this conclusion:

“People on smartphones are not anti-social. They’re super-social. Phones allow people to be with the people they love the most all the time, which is the way humans probably used to be, until technology allowed for greater freedom of movement around the globe. People spending time on their phones in the presence of others aren’t necessarily rude because rudeness is a social contract about appropriate behavior and, as Hermann points out, social norms can vary widely between age groups. Playing Minecraft all day isn’t necessarily a waste of time. The real world and the virtual world each have their own strengths and weaknesses, so it’s wise to spend time in both.”

Of course. But how do we allocate the time we spend in each–that’s the question. Also, I’m not quite sure what to make of his claim about rudeness and the social contract, except that it seems to suggest that behavior stops being rude the moment you decide you don’t like the terms of the social contract that renders it so. Sorry, Grandma, I don’t recognize the social contract by which I’m supposed to acknowledge your presence and render to you a modicum of my attention and affection.

Yes, digital devices have given us the power to decide who is worthy of our attention minute by minute. Advocates of this constant connectivity–many of them, like Facebook, acting out of obvious self-interest–want us to believe this is an unmitigated good and that we should exercise this power with impunity. But–how to say this without sounding alarmist–encouraging people to habitually render other human beings unworthy of their attention seems like a poor way to build a just and equitable society.

Humanist Technology Criticism

“Who are the humanists, and why do they dislike technology so much?”

That’s what Andrew McAfee wants to know. McAfee, formerly of Harvard Business School, is now a researcher at MIT and the author, with Erik Brynjolfsson, of The Second Machine Age: Work, Progress, and Prosperity in a Time of Brilliant Technologies. At his blog, hosted by the Financial Times, McAfee expressed his curiosity about the use of the terms humanism or humanist in “critiques of technological progress.” “I’m honestly not sure what they mean in this context,” McAfee admitted.

Humanism is a rather vague and contested term with a convoluted history, so McAfee asks a fair question–even if his framing is rather slanted. I suspect that most of the critics he has in mind would take issue with the second half of McAfee’s compound query. One of the examples he cites, after all, is Jaron Lanier, who, whatever else we might say of him, can hardly be described as someone who “dislikes technology.”

That said, what response can we offer McAfee? It would be helpful to sketch a history of the network of ideas that have been linked to the family of words that include humanism, humanist, and the humanities. The journey would take us from the Greeks and the Romans, through (not excluding) the medieval period to the Renaissance and beyond. But that would be a much larger project, and I wouldn’t be your best guide. Suffice it to say that near the end of such a journey, we would come to find the idea of humanism splintered and in retreat; indeed, in some quarters, we would find it rejected and despised.

But if we forego the more detailed history of the concept, can we not, nonetheless, offer some clarifying comments regarding the more limited usage that has perplexed McAfee? Perhaps.

I’ll start with an observation made by Wilfred McClay in a 2008 essay in the Wilson Quarterly, “The Burden of the Humanities.” McClay suggested that we define the humanities as “the study of human things in human ways.”¹ If so, McClay continues, “then it follows that they function in culture as a kind of corrective or regulative mechanism, forcing upon our attention those features of our complex humanity that the given age may be neglecting or missing.” Consequently, we have a hard time defining the humanities–and, I would add, humanism–because “they have always defined themselves in opposition.”

McClay provides a brief historical sketch showing that the humanities have, at different historical junctures, defined themselves by articulating a vision of human distinctiveness in opposition to the animal, the divine, and the rational-mechanical. “What we are as humans,” McClay adds, “is, in some respects, best defined by what we are not: not gods, not angels, not devils, not machines, not merely animals.”

In McClay’s historical sketch, humanism and the humanities have lately sought to articulate an understanding of the human in opposition to the “rational-mechanical,” or, in other words, in opposition to the technological, broadly speaking. In McClay’s telling, this phase of humanist discourse emerges in early nineteenth century responses to the Enlightenment and industrialization. Here we have the beginnings of a response to McAfee’s query. The deployment of humanist discourse in the context of technology criticism is not exactly a recent development.

There may have been earlier voices of which I am unaware, but we may point to Thomas Carlyle’s 1829 essay, “Signs of the Times,” as an ur-text of the genre.² Carlyle dubbed his era the “Mechanical Age.” “Men are grown mechanical in head and heart, as well as in hand,” Carlyle complained. “Not for internal perfection,” he added, “but for external combinations and arrangements for institutions, constitutions, for Mechanism of one sort or another, do they hope and struggle.”

Talk of humanism in relation to technology also flourished in the early and mid-twentieth century. Alan Jacobs, for instance, is currently working on a book project that examines the response of a set of early 20th century Christian humanists, including W.H. Auden, Simone Weil, and Jacques Maritain, to total war and the rise of technocracy. “On some level each of these figures,” Jacobs explains, “intuited or explicitly argued that if the Allies won the war simply because of their technological superiority — and then, precisely because of that success, allowed their societies to become purely technocratic, ruled by the military-industrial complex — their victory would become largely a hollow one. Each of them sees the creative renewal of some form of Christian humanism as a necessary counterbalance to technocracy.”

In a more secular vein, Paul Goodman asked in 1969, “Can Technology Be Humane?” In his article (h/t Nicholas Carr), Goodman observed that popular attitudes toward technology had shifted in the post-war world. Science and technology could no longer claim the “unblemished and justified reputation as a wonderful adventure” they had enjoyed for the previous three centuries. “The immediate reasons for this shattering reversal of values,” in Goodman’s view, “are fairly obvious.

“Hitler’s ovens and his other experiments in eugenics, the first atom bombs and their frenzied subsequent developments, the deterioration of the physical environment and the destruction of the biosphere, the catastrophes impending over the cities because of technological failures and psychological stress, the prospect of a brainwashed and drugged 1984. Innovations yield diminishing returns in enhancing life. And instead of rejoicing, there is now widespread conviction that beautiful advances in genetics, surgery, computers, rocketry, or atomic energy will surely only increase human woe.”

For his part, Goodman advocated a more prudential and, yes, humane approach to technology. “Whether or not it draws on new scientific research,” Goodman argued, “technology is a branch of moral philosophy, not of science.” “As a moral philosopher,” Goodman continued in a remarkable passage, “a technician should be able to criticize the programs given him to implement. As a professional in a community of learned professionals, a technologist must have a different kind of training and develop a different character than we see at present among technicians and engineers. He should know something of the social sciences, law, the fine arts, and medicine, as well as relevant natural sciences.” The whole essay is well worth your time. I bring it up merely as another instance of the genre of humanistic technology criticism.

More recently, in an interview cited by McAfee, Jaron Lanier has advocated the revival of humanism in relation to the present technological milieu. “I’m trying to revive or, if you like, resuscitate, or rehabilitate the term humanism,” Lanier explained before being interrupted by a bellboy cum Kantian, who broke into the interview to say, “Humanism is humanity’s adulthood. Just thought I’d throw that in.” When he resumed, Lanier expanded on what he means by humanism:

“And pragmatically, if you don’t treat people as special, if you don’t create some sort of a special zone for humans—especially when you’re designing technology—you’ll end up dehumanising the world. You’ll turn people into some giant, stupid information system, which is what I think we’re doing. I agree that humanism is humanity’s adulthood, but only because adults learn to behave in ways that are pragmatic. We have to start thinking of humans as being these special, magical entities—we have to mystify ourselves because it’s the only way to look after ourselves given how good we’re getting at technology.”

In McAfee’s defense, this is an admittedly murky vision. I couldn’t tell you what exactly Lanier is proposing when he says that we have to “mystify ourselves.” Earlier in the interview, however, he gave an example that might help us understand his concerns. Discussing Google Translate, he observes the following: “What people don’t understand is that the translation is really just a mashup of pre-existing translations by real people. The current set up of the internet trains us to ignore the real people who did the first translations, in order to create the illusion that there is an electronic brain. This idea is terribly damaging. It does dehumanise people; it does reduce people.”

So Lanier’s complaint here seems to be that this particular configuration of technology obscures an essential human element. Furthermore, Lanier is concerned that people are reduced in this process. This is, again, a murky concept, but I take it to mean that some important element of what constitutes the human is being ignored or marginalized or suppressed. Like the humanities in McClay’s analysis, Lanier’s humanism draws our attention to “those features of our complex humanity that the given age may be neglecting or missing.”

One last example. Some years ago, historian of science George Dyson wondered if the cost of machines that think will be people who don’t. Dyson’s quip suggests the problem that Evan Selinger has dubbed the outsourcing of our humanity. We outsource our humanity when we allow an app or device to do for us what we ought to be doing for ourselves (naturally, that ought needs to be established). Selinger has developed his critique in response to a variety of apps but especially those that outsource what we may call our emotional labor.

I think it fair to include the outsourcing critique within the broader genre of humanist technology criticism because it assumes something about the nature of our humanity and finds that certain technologies are complicit in its erosion. Not surprisingly, in a tweet sharing McAfee’s post, Selinger indicated that he and Brett Frischmann had plans to co-author a book analyzing the concept of dehumanizing technology in order to bring clarity to its application. I have no doubt that Selinger and Frischmann’s work will advance the discussion.

While McAfee was puzzled by humanist discourse with regard to technology criticism, others have been overtly critical. Evgeny Morozov recently complained that most technology critics default to humanist/anti-humanist rhetoric in their critiques in order to evade more challenging questions about politics and economics. For my part, I don’t see why the two approaches cannot each contribute to a broader understanding of technology and its consequences while also informing our personal and collective responses.

Of course, while Morozov is critical of the humanizing/dehumanizing approach to technology on more or less pragmatic grounds–it is ultimately ineffective in his view–others oppose it on ideological or theoretical grounds. For these critics, humanism is part of the problem, not the solution. Technology has been all too humanistic, or anthropocentric, and has consequently wreaked havoc on the global environment. Or, they may argue that any deployment of humanism as an evaluative category also implies a policing of the boundaries of the human, with discriminatory consequences. Others will argue that it is impossible to make a hard ontological distinction among the natural, the human, and the technological. We have always been cyborgs, in their view. Still others argue that there is no compelling reason to privilege the existing configuration of what we call the human. Humanity is a work in progress, and technology will usher in a brave new post-human world.

Already, I’ve gone on longer than a blog post should, so I won’t comment on each of those objections to humanist discourse. Instead, I’ll leave you with a few considerations about what humanist technology criticism might entail. I’ll do so while acknowledging that these considerations undoubtedly imply a series of assumptions about what it means to be a human being and what constitutes human flourishing.

That said, I would suggest that a humanist critique of technology entails a preference for technology that (1) operates at a humane scale, (2) works toward humane ends, (3) allows for the fullest possible flourishing of a person’s capabilities, (4) does not obfuscate moral responsibility, and (5) acknowledges certain limitations to what we might quaintly call the human condition.

I realize these all need substantial elaboration and support–the fifth point is especially contentious–but I’ll leave it at that for now. Take that as a preliminary sketch. I’ll close, finally, with a parting observation.

A not insubstantial element within the culture that drives technological development is animated by what can only be described as a thoroughgoing disgust with the human condition, particularly its embodied nature. Whether we credit the wildest dreams of the Singularitarians, Extropians, and Post-humanists or not, their disdain, as it finds expression in a posture toward technological power, is reason enough for technology critics to strive for a humanist critique that acknowledges and celebrates the limitations inherent in our frail, yet wondrous humanity.

This gratitude and reverence for the human as it is presently constituted, in all its wild and glorious diversity, may strike some as an unpalatably religious stance to assume. And, indeed, for many of us it stems from a deeply religious understanding of the world we inhabit, a world that is, as Pope Francis recently put it, “our common home.” Perhaps, though, even the secular citizen may be troubled by what Hannah Arendt called a “rebellion against human existence as it has been given, a free gift from nowhere (secularly speaking).”

________________________

¹ Here’s a fuller expression of McClay’s definition from earlier in the essay: “The distinctive task of the humanities, unlike the natural sciences and social sciences, is to grasp human things in human terms, without converting or reducing them to something else: not to physical laws, mechanical systems, biological drives, psychological disorders, social structures, and so on. The humanities attempt to understand the human condition from the inside, as it were, treating the human person as subject as well as object, agent as well as acted-upon.”

² Shelley’s “A Defence of Poetry” might qualify.

A Technological History of Modernity

I’m writing chiefly to commend to you what Alan Jacobs has recently called his “big fat intellectual project.”

The topic that has driven his work over the last few years Jacobs describes as follows: “The ways that technocratic modernity has changed the possibilities for religious belief, and the understanding of those changes that we get from studying the literature that has been attentive to them.” He adds,

“But literature has not been merely an observer of these vast seismic tremors; it has been a participant, insofar as literature has been, for many, the chief means by which a disenchanted world can be re-enchanted — but not fully — and by which buffered selves can become porous again — but not wholly. There are powerful literary responses to technocratic modernity that serve simultaneously as case studies (what it’s like to be modern) and diagnostic (what’s to be done about being modern).”

To my mind, such a project enjoys a distinguished pedigree, at least in some important aspects. I think, for example, of Leo Marx’s classic, The Machine in the Garden: Technology and the Pastoral Ideal in America, or the manner in which Katherine Hayles weaves close readings of contemporary fiction into her explorations of digital technology. Not that he needs me to say this, but I’m certain Jacobs’ work along these lines, particularly with its emphasis on religious belief, will be valuable and timely. You should click through to find links to a handful of essays Jacobs has already written in this vein.

On his blog, Text Patterns, Jacobs has, over the last few weeks, been describing one important thread of this wider project, a technological history of modernity, which, naturally, I find especially intriguing and necessary.

The first post in which Jacobs articulates the need for a technological history of modernity began as a comment on Matthew Crawford’s The World Beyond Your Head. In it, Jacobs repeats his critique of the “ideas have consequences” model of history, one in which the ideas of philosophers drive cultural change.

Jacobs took issue with the “ideas have consequences” model of cultural change in his critique of Neo-Thomist accounts of modernity, i.e., those that pin modernity’s ills on the nominalist challenge to the so-called medieval/Thomist synthesis of faith and reason. He finds that Crawford commits a similar error in attributing the present attention economy, in large measure, to conclusions about the will and the individual arrived at by Enlightenment thinkers.

Beyond the criticisms specific to the debate about the historical consequences of nominalism and the origins of our attention economy, Jacobs articulated concerns that apply more broadly to any account of cultural change that relies too heavily on the work of philosophers and theologians while paying too little attention to the significance of the material conditions of lived experience.

Moving toward the need for a technological history of modernity, Jacobs writes, “What I call the Oppenheimer Principle — ‘When you see something that is technically sweet, you go ahead and do it and argue about what to do about it only after you’ve had your technical success’ — has worked far more powerfully to shape our world than any of our master thinkers. Indeed, those thinkers are, in ways we scarcely understand, themselves the product of the Oppenheimer Principle.”

Or, as Ken Myers, a cultural critic that Jacobs and I both hold in high esteem, often puts it: ideas may have consequences, but ideas also have antecedents. These antecedents may be described as unarticulated assumptions derived from the bodily, emotional, and, yes, cognitive consequences of society’s political, economic, and technological infrastructure. I’m not sure if Jacobs would endorse this move, but I find it helpful to talk about these assumptions by borrowing the concept of “plausibility structures” first articulated by the sociologist Peter Berger.

For Berger, plausibility structures are those chiefly social realities that render certain ideas plausible, compelling, or meaningful apart from whatever truth value they might be independently or objectively assigned. Or, as Berger has frequently quipped, the factors that make it easier to be a Baptist in Texas than it would be in India.

Again, Berger has in mind interpersonal relationships and institutional practices, but I think we may usefully frame our technological milieu similarly. In other words, to say that our technological milieu, our material culture, constitutes a set of plausibility structures is to say that we derive tacit assumptions about what is possible, what is good, what is valuable from merely going about our daily business with and through our tools. These implicit valuations and horizons of the possible are the unspoken context within which we judge and evaluate explicit ideas and propositions.

Consequently, Jacobs is quite right to insist that we understand the emergence of modernity as more than the triumph of a set of ideas about individuals, democracy, reason, progress, etc. And, as he puts it,

“Those of us who — out of theological conviction or out of some other conviction — have some serious doubts about the turn that modernity has taken have been far too neglectful of this material, economic, and technological history. We need to remedy that deficiency. And someone needs to write a really comprehensive and ambitious technological history of modernity. I don’t think I’m up to that challenge, but if no one steps up to the plate….”

All of this to say that I’m enthusiastic about the project Jacobs has presented and eager to see how it unfolds. I have a few more thoughts about it that I hope to post in the coming days–why, for example, Jacobs’ project is more appealing than Evgeny Morozov’s vision for tech criticism–but those may or may not materialize. Whatever the case, I think you’ll do well to tune in to Jacobs’ work on this as it progresses.

Et in Facebook ego

Today is the birthday of the friend whose death elicited this post two years ago. I republish it today for your consideration. 

In Nicolas Poussin’s mid-seventeenth century painting, Et in Arcadia ego, shepherds have stumbled upon an ancient tomb on which the titular words are inscribed. Understood to be the voice of death, the Latin phrase may be roughly translated, “Even in Arcadia there am I.” Because Arcadia symbolized a mythic pastoral paradise, the painting suggested the ubiquity of death. To the shepherds, the tomb was a memento mori: a reminder of death’s inevitability.

Nicolas Poussin, Et in Arcadia ego, 1637-38

Poussin was not alone among artists of the period in addressing the certainty of death. During the seventeenth and eighteenth centuries, vanitas art flourished. The designation stems from the Latin phrase vanitas vanitatum omnia vanitas, a recurring refrain throughout the biblical book of Ecclesiastes: “vanity of vanities, all is vanity,” in the King James translation. Paintings in the genre were still lifes depicting an assortment of objects which represented all that we might pursue in this life: love, power, fame, fortune, happiness. In their midst, however, one might also find a skull or an hourglass. These were symbols of death and the brevity of life. The idea, of course, was to encourage people to make the most of their living years.

Edwart Collier, 1690

For the most part, we don’t go in for this sort of thing anymore. Few people, if any, operate under the delusion that we might escape death (excepting, perhaps, the Singularity crowd), but we do a pretty good job of forgetting what we know about death. We keep death out of sight and, hence, out of mind. We’re certainly not going out of our way to remind ourselves of death’s inevitability. And, who knows, maybe that’s for the better. Maybe all of those skulls and hourglasses were morbidly unhealthy.

But while vanitas art has gone out of fashion, a new class of memento mori has emerged: the social media profile.

I’m one of those on again, off again Facebook users. Lately, I’ve been on again, and recently I noticed one of those birthday reminders Facebook places in the column where it puts all of the things Facebook would like you to click on. It was for a high school friend who I had not spoken to in over eight years. It was in that respect a very typical Facebook friendship:  the sort that probably wouldn’t exist at all were it not for Facebook. And that’s not necessarily a knock on the platform. For the most part, I appreciate being able to maintain at least minimal ties to old friends. In this case, though, it demonstrated just how weak those ties can be.

Upon clicking over to his profile, I read a few odd notes, and very quickly it became disconcertingly clear that my friend had died over a year ago. Naturally, I was taken aback and saddened. He died while I was off Facebook, and news had not reached me by any other channel. But there it was. Out of nowhere and without warning my browser was haunted by the very real presence of death. Memento mori.

Just a few days prior I logged on to Facebook and was greeted by the tragic news of a former student’s sudden passing. Because we had several mutual connections, photographs of the young man found their way into my news feed for several days. It was odd and disconcerting and terribly sad all at once. I don’t know what I think of social media mourning. It makes me uneasy, but I won’t criticize what might bring others solace. In any case, it is, like death itself, an unavoidable reality of our social media experience. Death is no digital dualist.

Facebook sometimes feels like a modern-day Arcadia. It is a carefully cultivated space in which life appears Edenic. The pictures are beautiful, the events exciting, the faces always smiling, the children always amusing, the couples always adoring. Some studies even suggest that comparing our own experience to these immaculately curated slices of life leads to envy, discontent, and unhappiness. Understandably so … if we assume that these slices of life are comprehensive representations of the lives people actually lead. Of course, they are not.

Lest we be fooled, however, there, alongside the pets and witty status updates and wedding pictures and birth announcements, we will increasingly find our virtual Arcadias haunted by the digital, disembodied presence of the dead. Our digital memento mori.

Et in Facebook ego.

Google Photos and the Ideal of Passive Pervasive Documentation

I’ve been thinking, recently, about the past and how we remember it. That this year marks the 20th anniversary of my high school graduation accounts for some of my reflective reminiscing. Flipping through my senior yearbook, I was surprised by what I didn’t remember. Seemingly memorable events alluded to by friends in their notes and more than one of the items I myself listed as “Best Memories” have altogether faded into oblivion. “I will never forget when …” is an apparently rash vow to make.

But my mind has not been entirely washed by Lethe’s waters. Memories, assorted and varied, do persist. Many of these are sustained and summoned by stuff, much of it useless, that I’ve saved for what we derisively call sentimental reasons. My wife and I are now in the business of unsentimentally trashing as much of this stuff as possible to make room for our first child. But it can be hard parting with the detritus of our lives because it is often the only tenuous link joining who we were to who we now are. It feels as if we risk losing a part of ourselves forever by throwing away that last delicate link.

“Life without memory,” Luis Buñuel tells us, “is no life at all.” “Our memory,” he adds, “is our coherence, our reason, our feeling, even our action. Without it, we are nothing.” Perhaps this accounts for why tech criticism was born in a debate about memory. In the Phaedrus, Plato’s Socrates tells a cautionary tale about the invention of writing in which writing is framed as a technology that undermines the mind’s power to remember. What we can write down, we will no longer know for ourselves–or so Socrates worried. He was, of course, right. But, as we all know, this was an incomplete assessment of writing. Writing did weaken memory in the way Plato feared, but it did much else besides. It would not be the last time critics contemplated the effects of a new technology on memory.

I’ve not written nearly as much about memory as I once did, but it continues to be an area of deep interest. That interest was recently renewed not only by personal circumstances but also by the rollout of Google Photos, a new photo storage app with cutting edge sorting and searching capabilities. According to Steven Levy, Google hopes that it will be received as a “visual equivalent to Gmail.” On the surface, this is just another digital tool designed to store and manipulate data. But the data in question is, in this case, intimately tied up with our experience and how we remember it. It is yet another tool designed to store and manipulate memory.

When Levy asked Bradley Horowitz, the Google executive in charge of Photos, what problem Google Photos solves, Horowitz replied,

“We have a proliferation of devices and storage and bandwidth, to the point where every single moment of our life can be saved and recorded. But you don’t get a second life with which to curate, review, and appreciate the first life. You almost need a second vacation to go through the pictures of the safari on your first vacation. That’s the problem we’re trying to fix — to automate the process so that users can be in the moment. We also want to bring all of the power of computer vision and machine learning to improve those photos, create derivative works, to make suggestions…to really be your assistant.”

It shouldn’t be too surprising that the solution to the problem of pervasive documentation enabled by technology is a new technology that allows you to continue documenting with even greater abandon. Like so many technological fixes to technological problems, it’s just a way of doubling down on the problem. Nor is it surprising that he also suggested this would help users “be in the moment” without a hint of irony.

But here is the most important part of the whole interview, emphasis mine:

“[…] so part of Google photos is to create a safe space for your photos and remove any stigma associated with saving everything. For instance, I use my phone to take pictures of receipts, and pictures of signs that I want to remember and things like that. These can potentially pollute my photo stream. We make it so that things like that recede into the background, so there’s no cognitive burden to actually saving everything.”

Replace saving with remembering and the potential significance of a tool like Google Photos becomes easier to apprehend. Horowitz is here confirming that users will need to upload their photos to Google’s Cloud if they want to take advantage of Google Photos’ most impressive features. He anticipates that there will be questions about privacy and security, hence the mention of safety. But the really important issue here is this business about saving everything.

I’m not entirely sure what to make of the stigma Horowitz is talking about, but the cognitive burden of “saving everything” is presumably the burden of sorting and searching. How do you find the one picture you’re looking for when you’ve saved thousands of pictures across a variety of platforms and drives? How do you begin to organize all of these pictures in any kind of meaningful way? Enter Google Photos and its uncanny ability to identify faces and group pictures into three basic categories–People, Places, and Things–as well as a variety of sub-categories such as “food,” “beach,” or “cars.” Now you don’t need that second life to curate your photos. Google does it for you. Now we may document our lives to our heart’s content without a second thought about whether or not we’ll ever go back to curate our unwieldy hoard of images.

I’ve argued elsewhere that we’ve entered an age of memory abundance, and the abundance of memories makes us indifferent to them. When memory is scarce, we treasure it and care deeply about preserving it. When we generate a surfeit of memory, our ability to care about it diminishes proportionately. We can no longer relate to how Roland Barthes treasured his mother’s photograph; we are more like Andy Warhol, obsessively recording all of his interactions and never once listening to the recordings. Plato was, after all, even closer to the mark than we realized. New technologies of memory reconfigure the affections as well as the intellect. But is it possible that Google Photos will prove this judgement premature? Has Google figured out how we may have our memory cake and eat it too?

I think not, and there’s a historical precedent that will explain why.

Ivan Illich, in his brilliant study of medieval reading and the evolution of the book, In the Vineyard of the Text, noted how emerging textual technologies reconfigured how readers related to what they read. It is a complex, multifaceted argument, and I won’t do justice to it here, but the heart of it is summed up in the title of Illich’s closing chapter, “From Book to Text.” After explaining what Illich meant by that formulation, I’m going to suggest that we consider an analogous development: from photograph to image.

Like photography, writing is, as Plato understood, a mnemonic technology. The book or codex is only one form the technology has taken, but it is arguably the most important form owing to its storage capacity and portability. Contrast the book with, for instance, a carved stone tablet or a scroll and you’ll immediately recognize the brilliance of the design. But the matter of sorting and searching remained a significant problem until the twelfth century. It is then that new features appeared to improve the book’s accessibility and user-friendliness, among them chapter titles, pagination, and the alphabetized index. Now one could access particular passages without having to read the whole work or, more to the point, without having to memorize the passages or their location in the book (illuminated manuscripts were designed to aid with the latter).

My word choice in describing the evolution of the book above was, of course, calculated to make us see the book as a technology and also to make certain parallels to the case of digital photography more obvious. But what was the end result of all of this innovation? What did Illich mean by saying that the book became a text?

Borrowing a phrase Katherine Hayles deployed to describe a much later development, I’d say that Illich is getting at one example of how information lost its body. In other words, prior to these developments it was harder to imagine the text of a book as a free-floating reality that could be easily lifted and presented in a different format. The ideas, if you will, and the material that conveyed them–the message and medium–were intimately bound together; one could hardly imagine the two existing independently. This had everything to do with the embodied dimensions of the reading experience and the scarcity of books. Because there was no easy way to dip in and out of a book to look for a particular fragment and because one would likely encounter but one copy of a particular work, the work was experienced as a whole that lived within the particular pages of the book one held in hand.

The book had then been read reverentially as a window on the world; it yielded what Illich termed monastic reading. The text was later, after the technical innovations of the twelfth century, read as a window on the mind of the author; it yielded scholastic reading. We might also characterize these as devotional reading and academic reading, respectively. Illich summed it up this way:

“The text could now be seen as something distinct from the book. It was an object that could be visualized even with closed eyes [….] The page lost the quality of soil in which words are rooted. The new text was a figment on the face of the book that lifted off into autonomous existence [….] Only its shadow appeared on the page of this or that concrete book. As a result, the book was no longer the window onto nature or god; it was no longer the transparent optical device through which a reader gains access to creatures or the transcendent.”

Illich had, a few pages earlier, put the matter more evocatively: “Modern reading, especially of the academic and professional type, is an activity performed by commuters or tourists; it is no longer that of pedestrians and pilgrims.”

I recount Illich’s argument because it illuminates the changes we are witnessing with regard to photography. Illich demonstrated two relevant principles. First, that small technical developments can have significant and lasting consequences for the experience and meaning of media. The move from analog to digital photography should naturally be granted priority of place, but subsequent developments such as those in face recognition software and automated categorization should not be underestimated. Second, that improvements in what we might today call retrieval and accessibility can generate an order of abstraction and detachment from the concrete embodiment of media. And this matters because the concrete embodiment, the book as opposed to the text, yields kinds and degrees of engagement that are unique to it.

Let me try to put the matter more directly and simultaneously apply it to the case of photography. Improving accessibility meant that readers could approach the physical book as the mere repository of mental constructs, which could be poached and gleaned at whim. Consequently, the book was something to be used to gain access to the text, which now appeared for the first time as an abstract reality; it ceased to be itself a unique and precious window on the world and its affective power was compromised.

Now, just as the book yielded to the text, so the photograph yields to the image. Imagine a 19th century woman gazing lovingly at a photograph of her son. The woman does not conceive of the photograph as one instantiation of the image of her son. Today, however, we who hardly ever hold photographs anymore can hardly help thinking in terms of images, which may be displayed on any of a number of different platforms, not to mention manipulated at whim. The image is an order of abstraction removed from the photograph, and it would be hard to imagine someone treasuring it in the same way that we might treasure an old photograph. Perhaps a thought experiment will drive this home. Try to imagine the emotional distance between the act of tearing up a photograph and deleting an image.

Now let’s come back to the problem Google Photos is intended to solve. Will automated sorting and categorization along with the ability to search succeed in making our documentation more meaningful? Moreover, will it overcome the problems associated with memory abundance? Doubtful. Instead, the tools will facilitate further abstraction and detachment. They are designed to encourage the production of even more documentary data and to further diminish our involvement in their production and storage. Consequently, we will continue to care less, not more, about particular images.

Of course, this hardly means the tools are useless or that images are meaningless. I’m certain that face recognition software, for instance, can and will be put to all sorts of uses, benign and otherwise, and that the reams of data users will feed Google Photos will only help to improve and refine the software. And it is also true that images can be made use of in ways that photographs never could. But perhaps that is the point. A photograph we might cherish; we tend to make use of images. Unlike the useless stuff around which my memories accumulate and that I struggle to throw away, images are all use-value, and we don’t think twice about deleting them when they have no use.

Finally, Google’s answer to the problem of documentation–that it takes us out of the moment, as it were–is to encourage such pervasive and continual documentation that it is no longer experienced as a stepping out of the moment at all. The goal appears to be a state of continual passive documentation in which the distinction between experience and documentation blurs until the two are indistinguishable. The problem is not so much solved as it is altogether transcended. To experience life will be to document it. In so doing we are generating a second life, a phantom life that abides in the Cloud.

And perhaps we may, without stretching the bounds of plausibility too far, reconsider that rather ethereal, heavenly metaphor–the Cloud. As we generate this phantom life, this double of ourselves constituted by data, are we thereby hoping, half-consciously, to evade or at least cope with the unremitting passage of time and, ultimately, our mortality?