Eternal Sunshine of the Spotless Digital Archive

The future is stubbornly resistant to prediction, but we try anyway. I’ve been thinking lately about memory, technologies that mediate our memories, and the future of the past.

The one glaring fact — and I think it is more or less incontrovertible — is this: Digital technology has made it possible to capture and store vast amounts of data.

Much of this data, for the average person, involves the documentation of daily life. This documentation is often photographic or audio-visual.

What difference will this make? Recently, I suggested that in an age of memory abundance, memory will be devalued. There will be too much of it and it will be out there somewhere — in a hard drive, on my phone or computer, or in the cloud. As we confidently and routinely outsource our remembering to digital devices and archives, we will grow relatively indifferent to personal memories. (Although indifferent may not be the best word — perhaps unattached.)

This too seems to me incontrovertible. It is the overlooked truth in Plato’s much-maligned critique of writing. Externalized memory is only figuratively related to internalized memory.

But I was assuming the permanence of these digital memories. What if our digital archives prove to be impermanent? What if in the coming years and decades we realize that our digital memories are gradually fading into oblivion?

Consider the following from Bruce Sterling: “Actually it’s mostly the past’s things that will outlive us. Things that have already successfully lived a long time, such as the Pyramids, are likely to stay around longer than 99.9% of our things. It might be a bit startling to realize that it’s mostly our paper that will survive us as data, while a lot of our electronics will succumb to erasure, loss, and bit rot.”

It might turn out that Snapchat is a premonition. What then?

Scenario A: Digital memory decay is a technical problem that is eventually solved; the trajectory of memory abundance and consequent indifference plays out.

Scenario B: Digital memory decay remains a persistent problem.

Scenario B1: We devote ourselves to rituals of digital memory preservation. Therapy first referred to the care of the gods. We think of it as care for the self, sometimes involving the recollection of repressed memories. Perhaps in the future these senses of the word will mutate into therapy understood as the care of our digital memories.

Scenario B2: By the time the problem of digital memory decay is recognized as a threat, we no longer care. Memory, we decide, is a burden. Mutually reinforcing decay and indifference then yield a creeping amnesia of long-term memory. Eternal sunshine indeed.

Scenario B3: We reconsider our digital dependence and reintegrate analog and internalized forms of memory into our ecology of remembrance.

Scenario C: All of this is wrong.

In truth, I can hardly imagine a serious indifference to personal memory. But then again, I’m sure those who lived in societies whose cultural forms were devoted to tribal remembrance could hardly imagine serious indifference to the memory of the tribe. They probably couldn’t imagine someone caring much about their individual history; it was likely an incoherent concept. Thinking about the future involves the thinking of that which we can’t quite imagine — or is it the imagining of that which we can’t quite think? In any case, it’s not really about the future anyway. It’s about trying to make some sense of forces now at work and trying to reckon with the long reach of the past, which, remembered or not, will continue to make itself felt in the present.

To See, or To Be Seen

When we think about the consequences of a new technology, we are prone to ask about what can be done with it. We think, in other words, of the technology as a tool which is put to this use or that. We then ask whether that use is good or bad, or possibly indifferent.

So take, for example, a relatively new product like Twitter’s Vine. Vine is an app that is to video what Twitter is to text. It invites you to record and post videos, but these videos can be no more than six seconds in length. If you’re unfamiliar with Vine, you can watch a seemingly random selection of new videos at Vinepeek.


When you first hear of Vine, and you think about evaluating it (because you happen to be in a critical frame of mind), what sorts of questions do you ask? I suspect the first question that will typically come to mind is this: What will people post with this new tool? Will the videos be touching, beautiful, surprising, revealing? Will they be trashy, abusive, pornographic, violent? Or, will they be inane, predictable, mind-numbing?

Of course, videos posted to Vine will likely be all of these things in some necessarily depressing proportion. But this is only the first and most obvious question one could ask.

Here is another possible question: How does the use of Vine shape the way one perceives experience?

There are other questions, of course, but it is this question of perception I find to be really fascinating. The use of technology leads to consequential actions out there, in the world. But the use of technology also carries important consequences in here, within me. The question of perception is especially important for two reasons. First, and most obviously, our perception is the ground of pretty much everything else we do. How we “see” things leads to certain kinds of thoughts and feelings and actions. Secondly, that by which we perceive tends to fade from view; we don’t, to take the most obvious example, see the eyes through which we see everything else.

This means that one of the most important consequences of a new technology might also be the consequence we are least likely to become aware of, and this only heightens its influence.

So how does the use of Vine shape perception? Like most documentary technology, the use of Vine will likely encourage users to “see” potential Vines in their experience just as a camera encourages users to “see” potential photographs. But what do we make of this new frame by which we are prompted to perceive? That depends, I think, on the degree to which users become self-aware of the medium, the possibilities it creates, and the constraints it imposes.

Reality is always out there; certain aspects are apparent to us, certain aspects are concealed. New technologies may reconfigure what is revealed and what is concealed to us. Slow-motion film, for instance, does not create a new reality; it alters perception and thereby reveals previously concealed dimensions of reality. (I think this is the sort of thing Walter Benjamin had in mind when he discussed the “optical unconscious.”)

Technologies of perception — and really all technologies impact perception — reveal and conceal. No one technology can reveal the whole of reality at once. If it reveals some new dimension of reality, it is because it simultaneously conceals some other dimension. A user who is self-conscious about how a new technology shapes perception might use a new medium such as Vine to imaginatively make others aware of some previously unnoticed aspect of reality.

To those who care about such things, I think that Martin Heidegger’s influence is hiding between the lines of this post. The German philosopher had a great deal to say about technology and how it affects our perception, how it becomes a part of us. He used the phrase “standing reserve” to describe how modern technology encouraged us to reduce material reality to a fund of resources just there on stand-by, “ready-to-hand,” that is, ready to be put to use by us for our purposes. The intrinsic properties of what is rendered merely standing reserve are obscured or lost altogether. We see, we perceive such things only as they are useful to us. We don’t see such things as they are; and “such things” are sometimes not “things,” but persons.

With a technology like Vine, the question may be whether it is used with a view to the world as “standing-reserve,” there merely to be exploited for our own uses (which often amount to making ourselves seen), or whether it is used as a means of revealing the world, of allowing some previously muted aspect of reality to be seen.

Of course, this question applies to much more than Vine. It is a question to ask of all technology.

The Lifestream Stops

David Gelernter, 2013:

“And today, the most important function of the internet is to deliver the latest information, to tell us what’s happening right now. That’s why so many time-based structures have emerged in the cybersphere: to satisfy the need for the newest data. Whether tweet or timeline, all are time-ordered streams designed to tell you what’s new … But what happens if we merge all those blogs, feeds, chatstreams, and so forth? By adding together every timestream on the net — including the private lifestreams that are just beginning to emerge — into a single flood of data, we get the worldstream: a way to picture the cybersphere as a whole … What people really want is to tune in to information. Since many millions of separate lifestreams will exist in the cybersphere soon, our basic software will be the stream-browser: like today’s browsers, but designed to add, subtract, and navigate streams.”

E. M. Forster, 1909:

“Who is it?” she called. Her voice was irritable, for she had been interrupted often since the music began. She knew several thousand people, in certain directions human intercourse had advanced enormously.

But when she listened into the receiver, her white face wrinkled into smiles, and she said:

“Very well. Let us talk, I will isolate myself. I do not expect anything important will happen for the next five minutes — for I can give you fully five minutes …”

She touched the isolation knob, so that no one else could speak to her. Then she touched the lighting apparatus, and the little room was plunged into darkness.

“Be quick!” she called, her irritation returning. “Be quick, Kuno; here I am in the dark wasting my time.”

[Conversation ensues and comes to an abrupt close.]

Vashti’s next move was to turn off the isolation switch, and all the accumulations of the last three minutes burst upon her. The room was filled with the noise of bells, and speaking-tubes. What was the new food like? Could she recommend it? Had she had any ideas lately? Might one tell her one’s own ideas? Would she make an engagement to visit the public nurseries at an early date? — say this day month.

To most of these questions she replied with irritation — a growing quality in that accelerated age. She said that the new food was horrible. That she could not visit the public nurseries through press of engagements. That she had no ideas of her own but had just been told one — that four stars and three in the middle were like a man: she doubted there was much in it.

When I read Gelernter’s piece and his worldstream metaphor (illustration below), I was reminded of Forster’s story and the image of Vashti, sitting in her chair, immersed in a cacophonous real-time stream of information. Of course, the one obvious difference between Gelernter’s and Forster’s conceptions of the relentless stream of information into which one plunges is the nature of the interface. In Forster’s story, “The Machine Stops,” the interface is anchored to a particular place. It is an armchair in a bare, dark room from which characters in his story rarely move. Gelernter assumes the mobile interfaces we’ve grown accustomed to over the last several years.

In Forster’s story, the great threat the Machine poses to its users is that of radical disembodiment. Bodies have atrophied, physicality is a burden, and all the ways in which the body comes to know the world have been overwhelmed by a perpetual feeding of the mind with ever more derivative “ideas.” This is a fascinating aspect of the story. Forster anticipates the insights of later philosophers such as Merleau-Ponty and Hubert Dreyfus as well as the many researchers helping us understand embodied cognition. Take this passage for example:

You know that we have lost the sense of space. We say “space is annihilated”, but we have annihilated not space, but the sense thereof. We have lost a part of ourselves. I determined to recover it, and I began by walking up and down the platform of the railway outside my room. Up and down, until I was tired, and so did recapture the meaning of “Near” and “Far”. “Near” is a place to which I can get quickly on my feet, not a place to which the train or the air-ship will take me quickly. “Far” is a place to which I cannot get quickly on my feet; the vomitory is “far”, though I could be there in thirty-eight seconds by summoning the train. Man is the measure. That was my first lesson. Man’s feet are the measure for distance, his hands are the measure for ownership, his body is the measure for all that is lovable and desirable and strong.

But how might Forster have conceived of his story if his interface had been mobile? Would his story still be a Cartesian nightmare? Or would he understand the danger to be posed to our sense of time rather than our sense of place? He might have worried not about the consequences of being anchored to one place, but about being anchored to one time — a relentless, enduring present.

Were I Forster, however, I wouldn’t change his focus on the body. For isn’t the body, along with the physicality of lived experience that the body perceives, also our most meaningful measure of time? Do not our memories etch themselves in our bodies? Does not a feel for the passing years emerge from the transformation of our bodies? The philosopher Merleau-Ponty spoke of the “time of the body.” Consider Shaun Gallagher’s exploration of Merleau-Ponty’s perspective:

“Temporality is in some way a ‘dimension of our being’ … More specifically, it is a dimension of our situated existence. Merleau-Ponty explains this along the lines of the Heideggerian analysis of being-in-the-world. It is in my everyday dealings with things that the horizon of the day gets defined: it is in ‘this moment I spend working, with, behind it, the horizon of the day that has elapsed, and in front of it, the evening and night – that I make contact with time, and learn to know its course’ …”

Gallagher goes on to cite the following passage from Merleau-Ponty:

“I do not form a mental picture of my day, it weighs upon me with all its weight, it is still there, and though I may not recall any detail of it, I have the impending power to do so, I still ‘have it in hand.’ . . . Our future is not made up exclusively of guesswork and daydreams. Ahead of what I see and perceive . . . my world is carried forward by lines of intentionality which trace out in advance at least the style of what is to come.”

Then Gallagher adds, “Thus, Merleau-Ponty suggests, I feel time on my shoulders and in my fatigued muscles; I get physically tired from my work; I see how much more I have to do. Time is measured out first of all in my embodied actions as I ‘reckon with an environment’ in which ‘I seek support in my tools, and am at my task rather than confronting it.'”

That last distinction between being at my task rather than confronting it seems particularly significant, especially as it involves the support of tools. Our sense of time, like our sense of place, is not an unchangeable given. It shifts and alters through technological mediation. Melvin Kranzberg, in the first of his six laws of technology, reminds us, “Technology is neither good nor bad; nor is it neutral.” Our technological mediation of space and time is never neutral; and while it may not be “bad” or “good” in some abstract sense, it can be more or less humane, more or less conducive to our well-being. If the future of the Internet is the worldstream, we should perhaps think twice before plunging.

[Illustration: Gelernter’s worldstream]

Borg Complex Case Files 2

UPDATE: See the Borg Complex primer here.

______________________________________

Alright, let’s review. A Borg Complex is exhibited by writers and pundits whenever you can sum up their message with the phrase: “Resistance is futile.”

The six previously identified symptoms of a Borg Complex are as follows:

1. Makes grandiose, but unsupported claims for technology

2. Uses the term Luddite a-historically and as a casual slur

3. Pays lip service to, but ultimately dismisses genuine concerns

4. Equates resistance or caution to reactionary nostalgia

5. Starkly and matter-of-factly frames the case for assimilation

6. Announces the bleak future for those who refuse to assimilate

In an effort to further refine our diagnostic instruments, I am now adding two more related symptoms to the list. The first of these arises from a couple of cases submitted by Nick Carr and is summarized as follows:

7. Expresses contemptuous disregard for the achievements of the past

Consider this claim by Tim O’Reilly highlighted by Carr:

“I don’t really give a shit if literary novels go away. … the novel as we know it today is only a 200-year-old construct. And now we’re getting new forms of entertainment, new forms of popular culture.”

Well, there you have it. This same statement serves to illustrate the second symptom I’m introducing:

8. Refers to historical parallels only to dismiss present concerns.

This symptom identifies historical precedents as a way of saying, “Look, people worried about stuff like this before and they survived, so there’s nothing to be concerned about now.” It sees all concerns through the lens of Chicken Little. The sky never falls. I would suggest that the Boy Who Cried Wolf is the better parable. The wolf sometimes shows up.

To this list of now eight symptoms, we might also add two Causes.

1. Self-interest, usually commercial (ranging from more tempered to crass)

2. Fatalistic commitment to technological determinism (in pessimistic and optimistic varieties)

Now let’s consider some more cases related to education, a field that is particularly susceptible to Borg Complex claims. In a recent column, Thomas Friedman gushed over MOOCs:

“Nothing has more potential to lift more people out of poverty — by providing them an affordable education to get a job or improve in the job they have. Nothing has more potential to unlock a billion more brains to solve the world’s biggest problems. And nothing has more potential to enable us to reimagine higher education than the massive open online course, or MOOC, platforms that are being developed by the likes of Stanford and the Massachusetts Institute of Technology and companies like Coursera and Udacity.”

These are high, indeed utopian, hopes, and in their support Friedman offered two heartwarming anecdotes of MOOC success. In doing so, he committed the error of tech-utopians (modeled on what Pascal took to be the error of Stoicism): believing that the wonderful use to which a technology can be put will be the use to which it is always put.

Friedman wrapped up his column with the words of M.I.T. president L. Rafael Reif: “There is a new world unfolding and everyone will have to adapt.”

And there it is — Borg Complex.

In a long piece in The Awl (via Nathan Jurgenson), Maria Bustillos discusses an online exchange on the subject of MOOCs between Aaron Bady and Clay Shirky. Take the time to read the whole thing if you’re curious or worried about MOOCs. But here is Shirky on the difference between him and Bady: “Aaron and I agree about most of the diagnosis; we disagree about the prognosis. He thinks these trends are reversible, and I don’t; Udacity could go away next year and the damage is already done.”

Once again — Borg Complex.

At this juncture it’s worth asking, as Jurgenson did in sending me this last case, “Is he wrong?”

Is it not the case that folks who exhibit a Borg Complex often turn out to be right? Isn’t it inevitable that their vision will play out regardless of what we say or do?

My initial response is … maybe. As I’ve said before, a Borg Complex diagnosis is neutral as to the eventual veracity of the claims. It is more about an attitude toward technology in the present that renders the claims, if they are realized, instances of self-fulfilling prophecy.

The appearance of inevitability is a trick played by our tendency to make a neat story out of the past. Historians know this better than most. What looks like inevitability to us is only a function of zooming so far out that all contingencies fade from view. But in the fine-grain texture of lived experience, we know that there are countless contingencies that impinge upon the unfolding of history. It is also a function of forgetfulness. If only we had a catalog of all that we were told was inevitable that didn’t finally transpire. We only remember successes and then we tell their story as if they had to succeed all along, and this lends claims of inevitability an undeserved plausibility.

One other consideration: It is always worth asking, Who stands to profit from tales of technological inevitability?

Consider the conclusion of a keynote address delivered at the Digital-Life-Design Conference in Munich by Max Levchin:

“So to conclude: I believe that in the next decades we will see huge number of inherently analog processes captured digitally. Opportunities to build businesses that process this data and improve lives will abound.”

The “analog processes” Levchin is talking about are basically the ordinary aspects of our human lives that have yet to be successfully turned into digital data points. Here’s Carr on the vision Levchin lays out:

“This is the nightmare world of Big Data, where the moment-by-moment behavior of human beings — analog resources — is tracked by sensors and engineered by central authorities to create optimal statistical outcomes. We might dismiss it as a warped science fiction fantasy if it weren’t also the utopian dream of the Max Levchins of the world. They have lots of money and they smell even more: ‘I believe that in the next decades we will see huge number of inherently analog processes captured digitally. Opportunities to build businesses that process this data and improve lives will abound.’ It’s the ultimate win-win: you get filthy rich by purifying the tribe.”

Commenting on the same address and on Carr’s critique, Alan Jacobs captures the Borg-ish nature of Levchin’s rhetoric:

“And Levchin makes his case for these exciting new developments in a very common way: ‘This is going to add a huge amount of new kinds of risks. But as a species, we simply must take these risks, to continue advancing, to use all available resources to their maximum.’ Levchin doesn’t really spell out what the risks are — though Nick Carr does — because he doesn’t want us to think about them. He just wants us to think about the absolutely unquestionable ‘must’ of using ‘all available resources to their maximum.’ That ‘advance’ in that direction might amount to ‘retreat’ in actual human flourishing does not cross his mind, because it cannot. Efficiency in the exploitation of ‘resources’ is his only canon.”

We “must” do this. It is inevitable that this will happen. Resistance is futile.

Except that very often it isn’t. There are choices to be made. A figure no less identified with technological determinism than Marshall McLuhan reminded us: “There is absolutely no inevitability as long as there is a willingness to contemplate what is happening.” The handwaving rhetoric that I’ve called a Borg Complex is resolutely opposed to just such contemplation, and very often for the worst of motives. By naming it my hope is that it can be more readily identified, weighed, and rejected.

Elsewhere McLuhan said, “the only alternative is to understand everything that is going on, and then neutralize it as much as possible, turn off as many buttons as you can, and frustrate them as much as you can … Anything I talk about is almost certainly to be something I’m resolutely against, and it seems to me the best way of opposing it is to understand it, and then you know where to turn off the button.”

Understanding is a form of resistance. So remember, carry on with the work of intelligent, loving resistance where discernment and wisdom deem it necessary.

_______________________________________________

This is the third in a series of Borg Complex posts; you can read previous installments here.

I’ve set up a tumblr to catalog instances of Borg Complex rhetoric here. Submissions welcome.


For Your Consideration – 10

These “For Your Consideration” posts have been few and far between of late. The pace may pick up to something close to weekly, which was my original intent. Or, it may not. We’ll see. If only writing this blog were my full-time occupation. Alas, it is not, barring the unexpected appearance of some foolishly generous patron. That said, I have found it fairly easy to post a link a day at the Facebook page I set up for The Frailest Thing. So if you’d like a more regular stream of suggested readings related to technology and society, feel free to “Like” the page (that phrase persists in sounding rather ridiculous) by clicking the icon just to the right of this text. I’ll only note that the links there will be more narrowly focused on matters technological, while those I include in these posts tend to be a bit more eclectic. So without further ado …

“Synthetic double-helix faithfully stores Shakespeare’s sonnets”:

“DNA packs information into much less space than other media. For example, CERN, the European particle-physics lab near Geneva, currently stores around 90 petabytes of data on some 100 tape drives. Goldman’s method could fit all of those data into 41 grams of DNA.”

“Noted”:

“The remedy for the problems created by information technology is more information technology.”

“In his 1689 De arte Excerpendi, the Hamburg rhetorician Vincent Placcius described a scrinium literatum, or literary cabinet, whose multiple doors held 3,000 hooks on which loose slips could be organized under various headings and transposed as necessary. Two of the cabinets were eventually built, one for Placcius’s own use and one acquired by Leibniz.”

“Google and the future of search: Amit Singhal and the Knowledge Graph”:

“‘We are maniacally focusing on the user to reduce every possible friction point between them, their thoughts and the information they want to find.’ Getting ever closer to Page’s brain implants, in effect.”

“The Pope’s Social Media Guru On @Pontifex’s First Tweet”:

“As Secretary of the Pontifical Council for Social Communications, Tighe is the Pope’s social media guy.”

“It’s almost like the equivalent of the old marketplaces where Jesus went to engage people. That’s where we have to be, with all of its ambiguities and difficulties, because that’s where the people are.”

“Why We Should Memorize Poetry”:

“My late colleague Joseph Brodsky, who died in 1996, used to appall his students by requiring them to memorize something like a thousand lines each semester. He felt he was preparing them for the future; they might need such verses later in life.”

“‘I am what I am attached to’: On Bruno Latour’s ‘Inquiry into the Modes of Existence’”:

“The Economy — the pride and joy of the Moderns and of the “hard” social sciences — illustrates this well. What a mad construction Latour shows it to be! It is Providence itself, a second Nature, a religion that presides over the distribution of all that is good and evil.”

“Speak, Memory”:

“It is startling to realize that some of our most cherished memories may never have happened—or may have happened to someone else. I suspect that many of my enthusiasms and impulses, which seem entirely my own, have arisen from others’ suggestions, which have powerfully influenced me, consciously or unconsciously, and then been forgotten.”

“SIRI RISING: The Inside Story Of Siri’s Origins — And Why She Could Overshadow The iPhone”:

“This Siri — the Siri of the past — offers a glimpse at what the Siri of the future may provide, and a blueprint for how a growing wave of artificially intelligent assistants will slot into our lives. The goal is a human-enhancing and potentially indispensable assistant that could supplement the limitations of our minds and free us from mundane and tedious tasks.”

The Internet as seen from 1969: