Destructive Scale

In Tools for Conviviality, Ivan Illich presents some of what had been concluded by 1970 through the research conducted at the Center for Intercultural Documentation (CIDOC) in Cuernavaca, Mexico, which Illich directed. The research focused on the consequences of industrial production for society; early on, it concentrated on what Illich called “educational devices.” Here is the conclusion drawn by Illich, who was no fan of the reigning mode of education:

“Alternative devices for the production and marketing of mass education are technically more feasible and ethically less tolerable than compulsory graded schools. Such new educational arrangements are now on the verge of replacing traditional school systems in rich and in poor countries. They are potentially more effective in the conditioning of job-holders and consumers in an industrial economy. They are therefore more attractive for the management of present societies, more seductive for the people, and insidiously destructive of fundamental values.”

A little further on, Illich drew a more general principle: “When an enterprise grows beyond a certain point on this scale, it first frustrates the end for which it was originally designed, and then rapidly becomes a threat to society itself. These scales must be identified and the parameters of human endeavors within which human life remains viable must be explored.”

Failure to do so will have dire consequences:

“Society can be destroyed when further growth of mass production renders the milieu hostile, when it extinguishes the free use of the natural abilities of society’s members, when it isolates people from each other and locks them into a man-made shell, when it undermines the texture of community by promoting extreme social polarization and splintering specialization, or when cancerous acceleration enforces social change at a rate that rules out legal, cultural, and political precedents as formal guidelines to present behavior. Corporate endeavors which thus threaten society cannot be tolerated. At this point it becomes irrelevant whether an enterprise is nominally owned by individuals, corporations, or the state, because no form of management can make such fundamental destruction serve a social purpose.”

This focus on scale, it seems to me, is one of Illich’s most valuable and enduring contributions to our understanding of technology and its relationship to society.

I draw attention to the paragraph’s final claim, that the form of ownership is ultimately beside the point, because I believe it speaks to an often myopic focus on “political economy” that proceeds as if the intrinsic nature of the technology or system in question were irrelevant.

Survival in Justice

Reading Ivan Illich is an intellectually and morally challenging business. Below are two excerpts from Tools for Conviviality. I offer them to you for your consideration. I cannot say that I would endorse them without reservation. Nonetheless, they confront us with the uncomfortable possibility that the cure of our technological malaise will be more radical than most of us have wanted to believe and will require more than most of us are prepared to sacrifice.

These passages also remind us that the prospect of subjecting technology to serious moral critique amounts to a great deal more than intellectual parlor games or mere tinkering with the design of our digital tools. They remind us as well that when we finally come to the roots of our most serious problems, we will find wildly different conceptions of the good life and human flourishing in conflict with one another.

First, against the ideology of growth at all costs.

Our imaginations have been industrially deformed to conceive only what can be molded into an engineered system of social habits that fit the logic of large-scale production. We have almost lost the ability to frame in fancy a world in which sound and shared reasoning sets limits to everybody’s power to interfere with anybody’s equal power to shape the world […] Men with industrially distorted minds cannot grasp the rich texture of personal accomplishments within the range of modern though limited tools. There is no room in their imaginations for the qualitative change that the acceptance of a stable-state industry would mean; a society in which members are free from most of the multiple restraints of schedules and therapies now imposed for the sake of growing tools. Much less do most of our contemporaries experience the sober joy of life in this voluntary though relative poverty which lies within our grasp.

Second, what Illich believes will be the sacrifices required to move toward a more just society.

I argue that survival in justice is possible only at the cost of those sacrifices implicit in the adoption of a convivial mode of production and the universal renunciation of unlimited progeny, affluence, and power on the part of both individuals and groups. This price cannot be extorted by some despotic Leviathan, nor elicited by social engineering. People will rediscover the value of joyful sobriety and liberating austerity only if they relearn to depend on each other rather than on energy slaves. The price for a convivial society will be paid only as the result of a political process which reflects and promotes the society-wide inversion of present industrial consciousness. This political process will find its concrete expression not in some taboo, but in a series of temporary agreements on one or the other concrete limitation of means, constantly adjusted under the pressure of conflicting insights and interests.

As I’ve suggested before, it is often the case that “we want what we cannot possibly have on the terms that we want it.”

Ivan Illich on Technology and Labor

From Ivan Illich’s Tools for Conviviality (1973), a book that emerged out of conversations at the Center for Intercultural Documentation (CIDOC) in Cuernavaca, Mexico.

For a hundred years we have tried to make machines work for men and to school men for life in their service. Now it turns out that machines do not “work” and that people cannot be schooled for a life at the service of machines. The hypothesis on which the experiment was built must now be discarded. The hypothesis was that machines can replace slaves. The evidence shows that, used for this purpose, machines enslave men. Neither a dictatorial proletariat nor a leisure mass can escape the dominion of constantly expanding industrial tools.

The crisis can be solved only if we learn to invert the present deep structure of tools; if we give people tools that guarantee their right to work with high, independent efficiency, thus simultaneously eliminating the need for either slaves or masters and enhancing each person’s range of freedom. People need new tools to work with rather than tools that “work” for them. They need technology to make the most of the energy and imagination each has, rather than more well-programmed energy slaves.

[…]

As the power of machines increases, the role of persons more and more decreases to that of mere consumers.

[…]

This world-wide crisis of world-wide institutions can lead to a new consciousness about the nature of tools and to majority action for their control. If tools are not controlled politically, they will be managed in a belated technocratic response to disaster. Freedom and dignity will continue to dissolve into an unprecedented enslavement of man to his tools.

Illich is among the older thinkers whose work on technology and society remains, I think, instructive and stimulating. These considerations seem especially relevant to our debates about automation and employment.

Illich was wide-ranging in his interests. Early in the life of this blog, I frequently cited In the Vineyard of the Text, his study of the evolution of writing technologies in the late medieval period.

How to Think About Memory and Technology

I suppose it is the case that we derive some pleasure from imagining ourselves to be part of a beleaguered but noble minority. This may explain why a techno-enthusiast finds it necessary to attack dystopian science fiction on the grounds that it is making us all fear technology, while I find that very notion ludicrous.

Likewise, Salma Noreen closes her discussion of the internet’s effect on memory with the following counsel: “Rather than worrying about what we have lost, perhaps we need to focus on what we have gained.” I find that a curious note on which to close because I tend to think that we are not sufficiently concerned about what we have lost, or what we may be losing, as we steam full speed ahead into our technological futures. But perhaps I, too, am not immune to the consolations of belonging to an imagined beleaguered community of my own.

So which is it? Are we a society of techno-skeptics with brave, intrepid techno-enthusiasts on the fringes stiffening our resolve to embrace the happy technological future that can be ours for the taking? Or are we a society of techno-enthusiasts for whom the warnings of the few techno-skeptics are nothing more than a distant echo from an ever-receding past?

I suspect the latter is closer to the truth, but you can tell me how things look from where you’re standing.

My main concern is to look more closely at Noreen’s discussion of memory, which is a topic of abiding interest to me. “What anthropologists distinguish as ‘cultures,’” Ivan Illich wrote, “the historian of mental spaces might distinguish as different ‘memories.’” And I rather think he was right. Along similar lines, George Steiner lamented in the early 1970s, “The catastrophic decline of memorization in our own modern education and adult resources is one of the crucial, though as yet little understood, symptoms of an afterculture.” We’ll come back to more of what Steiner had to say a bit further on, but first let’s consider Noreen’s article.

She mentions two studies as a foil to her eventual conclusion. The first suggests that “the internet is leading to ‘digital amnesia’, where individuals are no longer able to retain information as a result of storing information on a digital device,” and the second suggests “that relying on digital devices to remember information is impairing our own memory systems.”

“But,” Noreen counsels her readers, “before we mourn this apparent loss of memory, more recent studies suggest that we may be adapting.” And in what, exactly, does this adaptation consist? Noreen summarizes it this way: “Technology has changed the way we organise information so that we only remember details which are no longer available, and prioritise the location of information over the content itself.”

This conclusion seems to me banal, which is not to say that it is incorrect. It amounts to saying that we will not remember what we do not believe we need to remember and that, when we have outsourced our memory, we will take some care to learn how we might access it in the future.

Of course, when the Google myth dominates a society, will we believe that there is anything at all we ought to commit to memory? The Google myth in this case is the belief that every conceivable bit of knowledge we could ever possibly desire is just a Google search away.

The sort of analysis Noreen offers, which is not uncommon, is based on an assumption we should examine more closely and also leaves a critical consideration unaddressed.

The assumption is that there are no distinctions within the category of memory. All memories are assumed to be discrete facts of the sort one would need to know in order to do well on Jeopardy. But this assumption ignores the diversity of what we call memories and the diversity of functions to which memory is put. Here is how I framed the matter some years back:

All of this leads me to ask, What assumptions are at play that make it immediately plausible for so many to believe that we can move from internalized memory to externalized memory without remainder? It would seem, at least, that the ground was prepared by an earlier reduction of knowledge to information or data. Only when we view knowledge as the mere aggregation of discrete bits of data can we then believe that it makes little difference whether that data is stored in the mind or in a database.

We seem to be approaching knowledge as if life were a game of Jeopardy, which is played well merely by being able to access trivial knowledge at random. What is lost is the associational dimension of knowledge, which constructs meaning and understanding by relating one thing to another and not merely by aggregating data. This form of knowledge, which we might call metaphorical or analogical, allows us to experience life with the ability to “understand in light of,” to perceive through a rich store of knowledge and experience that allows us to see and make connections that richly texture and layer our experience of reality.

But this understanding of memory seems largely absent from the sorts of studies that are frequently cited in discussions of offloaded or outsourced memory. I’ll add another relevant consideration I’ve previously articulated: a silent equivocation slips into these discussions. The notion of memory we tend to assume is our current understanding of memory, derived by comparison to computer memory, which is essentially storage.

Having first identified a computer’s storage capacity as “memory,” a metaphor dependent upon the human capacity we call “memory,” we have now come to reverse the direction of the metaphor by understanding human “memory” in light of a computer’s storage capacity. In other words, we’ve reduced our understanding of memory to the mere storage of information. And now we read all discussions of memory in light of this reductive understanding.

As for the unaddressed critical consideration, if we grant that we must all outsource or externalize some of our memory, and that it may even be admittedly advantageous to do so, how do we make qualitative judgments about the memory that we can outsource to our benefit and the memory we should on principle internalize (if we even allow for the latter possibility)?

Here we might take a cue from the religious practices of Jews, Christians, and Muslims, who have long made the memorization of Scripture a central component of their respective forms of piety. Here’s a bit more from Steiner commenting on what can be known about early modern literacy:

Scriptural and, in a wider sense, religious literacy ran strong, particularly in Protestant lands. The Authorized Version and Luther’s Bible carried in their wake a rich tradition of symbolic, allusive, and syntactic awareness. Absorbed in childhood, the Book of Common Prayer, the Lutheran hymnal and psalmody cannot but have marked a broad compass of mental life with their exact, stylized articulateness and music of thought. Habits of communication and schooling, moreover, sprang directly from the concentration of memory. So much was learned and known by heart — a term beautifully apposite to the organic, inward presentness of meaning and spoken being within the individual spirit.

Learned by heart–a beautifully apt phrase, indeed. Interestingly, this is an aspect of religious practice that, while remaining relatively consistent across the transition from oral to literate society, appears to be succumbing to the pressures of the Google myth, at least among Protestants. If I have an app that lets me instantly access any passage of my sacred text, in any of a hundred different translations, why would I bother to memorize any of it?

The answer, of course, best and perhaps only learned by personal experience, is that there is a qualitative difference between the “organic, inward presentness of meaning” that Steiner describes and merely knowing that I know how to find a text if I were inclined to find it. But the Google myth, and the studies that examine it, seem to know nothing of that qualitative difference, or, at least, they choose to bracket it.

I should note in passing that much of what I have recently written about attention is also relevant here. Distraction is the natural state of someone who has no goal that might otherwise command or direct their attention. Likewise, forgetfulness is the natural state of someone who has no compelling reason to commit something to memory. At the heart of both states may be the liberated individual will bequeathed by modernity. Distraction and forgetfulness both seem to stem from a refusal to acknowledge an order of knowing that is outside of and independent of the solitary self. To discipline our attention and to learn something by heart is, in no small measure, to submit the self to something beyond its own whims and prerogatives.

So, then, we might say that one of the enduring consequences of new forms of externalized memory is not only that they alter the quantity of what is committed to memory but that they also reconfigure the meaning and value that we assign both to the work of remembering and to what is remembered. In this way we begin to see why Illich believed that changing memories amounted to changing cultures. This is also why we should consider that Plato’s Socrates was on to something more than critics give him credit for when he criticized writing for how it would affect memory, which was for Plato much more than merely the ability to recall discrete bits of data.

This last point brings me, finally, to an excellent discussion of these matters by John Danaher. Danaher is always clear and meticulous in his writing, and I commend his blog, Philosophical Disquisitions, to you. In this post, he explores the externalization of memory via a discussion of a helpful distinction offered by David Krakauer of the Santa Fe Institute. Here is Danaher’s summary of the distinction between two different types of cognitive artifacts, or artifacts we think with:

Complementary Cognitive Artifacts: These are artifacts that complement human intelligence in such a way that their use amplifies and improves our ability to perform cognitive tasks; once the user has mastered the physical artifact, they can use a virtual/mental equivalent to perform the same cognitive task at a similar level of skill, e.g. an abacus.

Competitive Cognitive Artifacts: These are artifacts that amplify and improve our ability to perform cognitive tasks while we have use of the artifact, but when the artifact is taken away we are no better (and possibly worse) at performing the cognitive task than we were before, e.g. a GPS navigator.

Danaher critically interacts with Krakauer’s distinction, but finds it useful. It is useful because, like Albert Borgmann’s work, it offers to us concepts and categories by which we might begin to evaluate the sorts of trade-offs we must make when deciding what technologies we will use and how.

Also of interest is Danaher’s discussion of cognitive ecology. Invoking earlier work by Donald Norman, Danaher explains that “competitive cognitive artifacts don’t just replace or undermine one cognitive task. They change the cognitive ecology, i.e. the social and physical environment in which we must perform cognitive tasks.” His critical consideration of the concept of cognitive ecology brings him around to the wonderful work Evan Selinger has been doing on the problem of technological outsourcing, work that I’ve cited here on more than a few occasions. I commend to you Danaher’s post for both its content and its method. It will be more useful to you than the vast majority of commentary you might otherwise encounter on this subject.

I’ll leave you with the following observation by the filmmaker Luis Buñuel: “Our memory is our coherence, our reason, our feeling, even our action. Without it, we are nothing.” Let us take some care and give some thought, then, to how our tools shape our remembering.

Google Photos and the Ideal of Passive Pervasive Documentation

I’ve been thinking, recently, about the past and how we remember it. That this year marks the 20th anniversary of my high school graduation accounts for some of my reflective reminiscing. Flipping through my senior yearbook, I was surprised by what I didn’t remember. Seemingly memorable events alluded to by friends in their notes and more than one of the items I myself listed as “Best Memories” have altogether faded into oblivion. “I will never forget when …” is an apparently rash vow to make.

But my mind has not been entirely washed by Lethe’s waters. Memories, assorted and varied, do persist. Many of these are sustained and summoned by stuff, much of it useless, that I’ve saved for what we derisively call sentimental reasons. My wife and I are now in the business of unsentimentally trashing as much of this stuff as possible to make room for our first child. But it can be hard parting with the detritus of our lives because it is often the only tenuous link joining who we were to who we now are. It feels as if you risk losing a part of yourself forever by throwing away that last delicate link.

“Life without memory,” Luis Buñuel tells us, “is no life at all.” “Our memory,” he adds, “is our coherence, our reason, our feeling, even our action. Without it, we are nothing.” Perhaps this accounts for why tech criticism was born in a debate about memory. In the Phaedrus, Plato’s Socrates tells a cautionary tale about the invention of writing in which writing is framed as a technology that undermines the mind’s power to remember. What we can write down, we will no longer know for ourselves–or so Socrates worried. He was, of course, right. But, as we all know, this was an incomplete assessment of writing. Writing did weaken memory in the way Plato feared, but it did much else besides. It would not be the last time critics contemplated the effects of a new technology on memory.

I’ve not written nearly as much about memory as I once did, but it continues to be an area of deep interest. That interest was recently renewed not only by personal circumstances but also by the rollout of Google Photos, a new photo storage app with cutting-edge sorting and searching capabilities. According to Steven Levy, Google hopes that it will be received as a “visual equivalent to Gmail.” On the surface, this is just another digital tool designed to store and manipulate data. But the data in question is, in this case, intimately tied up with our experience and how we remember it. It is yet another tool designed to store and manipulate memory.

When Levy asked Bradley Horowitz, the Google executive in charge of Photos, what problem Google Photos solves, Horowitz replied:

“We have a proliferation of devices and storage and bandwidth, to the point where every single moment of our life can be saved and recorded. But you don’t get a second life with which to curate, review, and appreciate the first life. You almost need a second vacation to go through the pictures of the safari on your first vacation. That’s the problem we’re trying to fix — to automate the process so that users can be in the moment. We also want to bring all of the power of computer vision and machine learning to improve those photos, create derivative works, to make suggestions…to really be your assistant.”

It shouldn’t be too surprising that the solution to the problem of pervasive documentation enabled by technology is a new technology that allows you to continue documenting with even greater abandon. Like so many technological fixes to technological problems, it’s just a way of doubling down on the problem. Nor is it surprising that he also suggested, without a hint of irony, that this would help users “be in the moment.”

But here is the most important part of the whole interview:

“[…] so part of Google photos is to create a safe space for your photos and remove any stigma associated with saving everything. For instance, I use my phone to take pictures of receipts, and pictures of signs that I want to remember and things like that. These can potentially pollute my photo stream. We make it so that things like that recede into the background, so there’s no cognitive burden to actually saving everything.”

Replace “saving” with “remembering” and the potential significance of a tool like Google Photos becomes easier to apprehend. Horowitz is here confirming that users will need to upload their photos to Google’s Cloud if they want to take advantage of Google Photos’ most impressive features. He anticipates that there will be questions about privacy and security, hence the mention of safety. But the really important issue here is this business about saving everything.

I’m not entirely sure what to make of the stigma Horowitz is talking about, but the cognitive burden of “saving everything” is presumably the burden of sorting and searching. How do you find the one picture you’re looking for when you’ve saved thousands of pictures across a variety of platforms and drives? How do you begin to organize all of these pictures in any kind of meaningful way? Enter Google Photos and its uncanny ability to identify faces and group pictures into three basic categories–People, Places, and Things–as well as a variety of sub-categories such as “food,” “beach,” or “cars.” Now you don’t need that second life to curate your photos. Google does it for you. Now we may document our lives to our heart’s content without a second thought about whether or not we’ll ever go back to curate our unwieldy hoard of images.

I’ve argued elsewhere that we’ve entered an age of memory abundance, and the abundance of memories makes us indifferent to them. When memory is scarce, we treasure it and care deeply about preserving it. When we generate a surfeit of memory, our ability to care about it diminishes proportionately. We can no longer relate to how Roland Barthes treasured his mother’s photograph; we are more like Andy Warhol, who obsessively recorded all of his interactions and never once listened to the recordings. Plato was, after all, even closer to the mark than we realized. New technologies of memory reconfigure the affections as well as the intellect. But is it possible that Google Photos will prove this judgment premature? Has Google figured out how we may have our memory cake and eat it too?

I think not, and there’s a historical precedent that will explain why.

Ivan Illich, in his brilliant study of medieval reading and the evolution of the book, In the Vineyard of the Text, noted how emerging textual technologies reconfigured how readers related to what they read. It is a complex, multifaceted argument and I won’t do justice to it here, but the heart of it is summed up in the title of Illich’s closing chapter, “From Book to Text.” After explaining what Illich meant by that formulation, I’m going to suggest that we consider an analogous development: from photograph to image.

Like photography, writing is, as Plato understood, a mnemonic technology. The book or codex is only one form the technology has taken, but it is arguably the most important form owing to its storage capacity and portability. Contrast the book to, for instance, a carved stone tablet or a scroll and you’ll immediately recognize the brilliance of the design. But the matter of sorting and searching remained a significant problem until the twelfth century. It was then that new features appeared to improve the book’s accessibility and user-friendliness, among them chapter titles, pagination, and the alphabetized index. Now one could access particular passages without having to read the whole work or, more to the point, without having to memorize the passages or their location in the book (illuminated manuscripts were designed to aid with the latter).

My word choice in describing the evolution of the book above was, of course, calculated to make us see the book as a technology and also to make certain parallels to the case of digital photography more obvious. But what was the end result of all of this innovation? What did Illich mean by saying that the book became a text?

Borrowing a phrase Katherine Hayles deployed to describe a much later development, I’d say that Illich is getting at one example of how information lost its body. In other words, prior to these developments it was harder to imagine the text of a book as a free-floating reality that could be easily lifted and presented in a different format. The ideas, if you will, and the material that conveyed them–the message and medium–were intimately bound together; one could hardly imagine the two existing independently. This had everything to do with the embodied dimensions of the reading experience and the scarcity of books. Because there was no easy way to dip in and out of a book to look for a particular fragment and because one would likely encounter but one copy of a particular work, the work was experienced as a whole that lived within the particular pages of the book one held in hand.

Until then, the book had been read reverentially as a window on the world; it yielded what Illich termed monastic reading. After the technical innovations of the twelfth century, the text was read as a window on the mind of the author; it yielded scholastic reading. We might also characterize these as devotional and academic reading, respectively. Illich summed it up this way:

“The text could now be seen as something distinct from the book. It was an object that could be visualized even with closed eyes [….] The page lost the quality of soil in which words are rooted. The new text was a figment on the face of the book that lifted off into autonomous existence [….] Only its shadow appeared on the page of this or that concrete book. As a result, the book was no longer the window onto nature or god; it was no longer the transparent optical device through which a reader gains access to creatures or the transcendent.”

Illich had, a few pages earlier, put the matter more evocatively: “Modern reading, especially of the academic and professional type, is an activity performed by commuters or tourists; it is no longer that of pedestrians and pilgrims.”

I recount Illich’s argument because it illuminates the changes we are witnessing with regard to photography. Illich demonstrated two relevant principles. First, that small technical developments can have significant and lasting consequences for the experience and meaning of media. The move from analog to digital photography should naturally be granted priority of place, but subsequent developments such as those in face recognition software and automated categorization should not be underestimated. Second, that improvements in what we might today call retrieval and accessibility can generate an order of abstraction and detachment from the concrete embodiment of media. And this matters because the concrete embodiment, the book as opposed to the text, yields kinds and degrees of engagement that are unique to it.

Let me try to put the matter more directly and simultaneously apply it to the case of photography. Improving accessibility meant that readers could approach the physical book as the mere repository of mental constructs, which could be poached and gleaned at whim. Consequently, the book was something to be used to gain access to the text, which now appeared for the first time as an abstract reality; it ceased to be itself a unique and precious window on the world and its affective power was compromised.

Now, just as the book yielded to the text, so the photograph yields to the image. Imagine a 19th-century woman gazing lovingly at a photograph of her son. The woman does not conceive of the photograph as one instantiation of the image of her son. Today, however, those of us who hardly ever hold photographs anymore can hardly help thinking in terms of images, which may be displayed on any number of different platforms, not to mention manipulated at whim. The image is an order of abstraction removed from the photograph, and it would be hard to imagine someone treasuring it in the same way that we might treasure an old photograph. Perhaps a thought experiment will drive this home. Try to imagine the emotional distance between the act of tearing up a photograph and deleting an image.

Now let’s come back to the problem Google Photos is intended to solve. Will automated sorting and categorization, along with the ability to search, succeed in making our documentation more meaningful? Moreover, will they overcome the problems associated with memory abundance? Doubtful. Instead, these tools will facilitate further abstraction and detachment. They are designed to encourage the production of even more documentary data and to further diminish our involvement in its production and storage. Consequently, we will continue to care less, not more, about particular images.

Of course, this hardly means the tools are useless or that images are meaningless. I’m certain that face recognition software, for instance, can and will be put to all sorts of uses, benign and otherwise, and that the reams of data users will feed Google Photos will only help to improve and refine the software. And it is also true that images can be made use of in ways that photographs never could. But perhaps that is the point. A photograph we might cherish; an image we tend merely to use. Unlike the useless stuff around which my memories accumulate and which I struggle to throw away, images are all use-value, and we don’t think twice about deleting them when they have no use.

Finally, Google’s answer to the problem of documentation, that it takes us out of the moment, as it were, is to encourage such pervasive and continual documentation that it is no longer experienced as a stepping out of the moment at all. The goal appears to be a state of continual passive documentation in which the distinction between experience and documentation blurs until the two are indistinguishable. The problem is not so much solved as it is altogether transcended. To experience life will be to document it. In so doing, we are generating a second life, a phantom life that abides in the Cloud.

And perhaps we may, without stretching the bounds of plausibility too far, reconsider that rather ethereal, heavenly metaphor–the Cloud. As we generate this phantom life, this double of ourselves constituted by data, are we thereby hoping, half-consciously, to evade or at least cope with the unremitting passage of time and, ultimately, our mortality?