The Inhumanity of Smart Technology

I’m allergic to hyperbole. That said, Evgeny Morozov identifies one of the most important challenges we face in the coming years:

“There are many contexts in which smart technologies are unambiguously useful and even lifesaving. Smart belts that monitor the balance of the elderly and smart carpets that detect falls seem to fall in this category. The problem with many smart technologies is that their designers, in the quest to root out the imperfections of the human condition, seldom stop to ask how much frustration, failure and regret is required for happiness and achievement to retain any meaning.

“It’s great when the things around us run smoothly, but it’s even better when they don’t do so by default. That, after all, is how we gain the space to make decisions—many of them undoubtedly wrongheaded—and, through trial and error, to mature into responsible adults, tolerant of compromise and complexity.”

Exactly right.

“Out of the crooked timber of humanity no straight thing was ever made,” Kant observed. Corollary to keep in mind: If a straight thing is made, it will be because humanity has been stripped out of it.

What is the endgame of a trajectory of innovation determined to eliminate human error, deviance, and folly? In every field of human endeavor — industry, medicine, education, governance — technological innovation reduces human involvement, thought, and action in the name of precision, efficiency, and effectiveness.

Morozov’s forthcoming book, To Save Everything, Click Here: The Folly of Technological Solutionism, targets what he has called “solutionism,” the temptation (I take it, without yet having read the book) to view the Internet as the potential solution to every conceivable problem. I’m tempted to suggest the target of Morozov’s next book: eliminationism — the progressive elimination of human thought and action wherever possible. Life will increasingly consist of automated processes, actions, and interactions that will envelop and frame the human and render the human superfluous. Worse yet, insofar as the human is ultimately the root of our inconveniences and our problems, solutionism’s ultimate trajectory must lead to eliminationism.

There are tragic associations haunting that last formulation, so let me be clear. It is not (necessarily) the elimination of human beings that I’m worried about; it is the elimination of our humanity. The fear — and why not, let’s embrace its most popular cultural icon — is that we will be rendered zombies: alive but not living, stripped of the possibility for error, risk, failure, triumph, joy, redemption, and much of what renders our lives tragically, gloriously meaningful.

Albert Borgmann had it right. We must distinguish between “trouble we reject in principle and accept in practice and trouble we accept in practice and in principle.” In the former category, Borgmann has in mind troubles on the order of car accidents and cancer. By “accepting them in practice,” he means that at the personal level we must cope with such tragedies when they strike. But these are troubles we oppose in principle, and so we seek cures for cancer and improved highway safety.

[Image: WALL-E]

Against these, Borgmann sets troubles that we also accept in practice, but that we ought to accept in principle as well. Here the examples are preparing a meal and hiking a mountain. These sorts of troubles, sometimes not without their real dangers, could be opposed in principle — never prepare meals at home, never hike — but such avoidance would also cut us off from their attendant joys and satisfactions. If we seek to remove all trouble or risk from our lives; if we always opt for convenience, efficiency, and ease; if, in other words, we aim indiscriminately at the frictionless life; then we simultaneously rob ourselves of the real satisfactions and pleasures that enhance and enrich our lives — that, in fact, make our lives fully human.

Huxley had it right, too:

“But I don’t want comfort. I want God, I want poetry, I want real danger, I want freedom, I want goodness. I want sin.”

“In fact,” said Mustapha Mond, “you’re claiming the right to be unhappy.”

“All right then,” said the Savage defiantly, “I’m claiming the right to be unhappy.”

In claiming the right to be unhappy, the Savage was claiming the right to a fully human existence. It is a right we must take increasing care to safeguard against our own fascination with the promises of technology.

The Lifestream Stops

David Gelernter, 2013:

“And today, the most important function of the internet is to deliver the latest information, to tell us what’s happening right now. That’s why so many time-based structures have emerged in the cybersphere: to satisfy the need for the newest data. Whether tweet or timeline, all are time-ordered streams designed to tell you what’s new … But what happens if we merge all those blogs, feeds, chatstreams, and so forth? By adding together every timestream on the net — including the private lifestreams that are just beginning to emerge — into a single flood of data, we get the worldstream: a way to picture the cybersphere as a whole … What people really want is to tune in to information. Since many millions of separate lifestreams will exist in the cybersphere soon, our basic software will be the stream-browser: like today’s browsers, but designed to add, subtract, and navigate streams.”
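
Gelernter describes no implementation, but the core operation he imagines (adding time-ordered streams together into a single flow) is simple to picture. Here is a minimal sketch; the sample streams and their timestamps are invented for illustration:

```python
# A minimal sketch, in the spirit of Gelernter's "worldstream": several
# time-ordered streams merged into one time-ordered flow. The sample
# data is invented; Gelernter's essay describes no implementation.

import heapq
from datetime import datetime

tweets = [
    (datetime(2013, 2, 1, 9, 0), "tweet: ..."),
    (datetime(2013, 2, 1, 9, 5), "tweet: ..."),
]
feeds = [
    (datetime(2013, 2, 1, 9, 2), "feed item: ..."),
]
chats = [
    (datetime(2013, 2, 1, 9, 1), "chat: ..."),
]

# "Adding" streams: merge them by timestamp into a single worldstream.
# heapq.merge assumes each input is already time-ordered, as streams are.
worldstream = heapq.merge(tweets, feeds, chats)

for timestamp, item in worldstream:
    print(timestamp.isoformat(), item)
```

Time order is the worldstream’s only organizing principle, a point worth holding onto as we turn to Forster.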

E. M. Forster, 1909:

“Who is it?” she called. Her voice was irritable, for she had been interrupted often since the music began. She knew several thousand people, in certain directions human intercourse had advanced enormously.

But when she listened into the receiver, her white face wrinkled into smiles, and she said:

“Very well. Let us talk, I will isolate myself. I do not expect anything important will happen for the next five minutes – for I can give you fully five minutes …”

She touched the isolation knob, so that no one else could speak to her. Then she touched the lighting apparatus, and the little room was plunged into darkness.

“Be quick!” she called, her irritation returning. “Be quick, Kuno; here I am in the dark wasting my time.”

[Conversation ensues and comes to an abrupt close.]

Vashti’s next move was to turn off the isolation switch, and all the accumulations of the last three minutes burst upon her. The room was filled with the noise of bells, and speaking-tubes. What was the new food like? Could she recommend it? Had she had any ideas lately? Might one tell her one’s own ideas? Would she make an engagement to visit the public nurseries at an early date? – say this day month.

To most of these questions she replied with irritation – a growing quality in that accelerated age. She said that the new food was horrible. That she could not visit the public nurseries through press of engagements. That she had no ideas of her own but had just been told one – that four stars and three in the middle were like a man: she doubted there was much in it.

When I read Gelernter’s piece and his worldstream metaphor (illustration below), I was reminded of Forster’s story and the image of Vashti, sitting in her chair, immersed in a cacophonous real-time stream of information. Of course, the one obvious difference between Gelernter’s and Forster’s conceptions of the relentless stream of information into which one plunges is the nature of the interface. In Forster’s story, “The Machine Stops,” the interface is anchored to a particular place: an armchair in a bare, dark room from which the story’s characters rarely move. Gelernter assumes the mobile interfaces we’ve grown accustomed to over the last several years.

In Forster’s story, the great threat the Machine poses to its users is that of radical disembodiment. Bodies have atrophied, physicality is a burden, and all the ways in which the body comes to know the world have been overwhelmed by a perpetual feeding of the mind with ever more derivative “ideas.” This is a fascinating aspect of the story. Forster anticipates the insights of later philosophers such as Merleau-Ponty and Hubert Dreyfus as well as the many researchers helping us understand embodied cognition. Take this passage for example:

You know that we have lost the sense of space. We say “space is annihilated”, but we have annihilated not space, but the sense thereof. We have lost a part of ourselves. I determined to recover it, and I began by walking up and down the platform of the railway outside my room. Up and down, until I was tired, and so did recapture the meaning of “Near” and “Far”. “Near” is a place to which I can get quickly on my feet, not a place to which the train or the air-ship will take me quickly. “Far” is a place to which I cannot get quickly on my feet; the vomitory is “far”, though I could be there in thirty-eight seconds by summoning the train. Man is the measure. That was my first lesson. Man’s feet are the measure for distance, his hands are the measure for ownership, his body is the measure for all that is lovable and desirable and strong.

But how might Forster have conceived of his story if his interface had been mobile? Would his story still be a Cartesian nightmare? Or would he understand the danger to be posed to our sense of time rather than our sense of place? He might have worried not about the consequences of being anchored to one place, but rather about being anchored to one time — a relentless, enduring present.

Were I Forster, however, I wouldn’t change his focus on the body. For isn’t the body, with the physicality of lived experience it perceives, also our most meaningful measure of time? Do not our memories etch themselves in our bodies? Does not a feel for the passing years emerge from the transformation of our bodies? The philosopher Merleau-Ponty spoke of the “time of the body.” Consider Shaun Gallagher’s exploration of Merleau-Ponty’s perspective:

“Temporality is in some way a ‘dimension of our being’ … More specifically, it is a dimension of our situated existence. Merleau-Ponty explains this along the lines of the Heideggerian analysis of being-in-the-world. It is in my everyday dealings with things that the horizon of the day gets defined: it is in ‘this moment I spend working, with, behind it, the horizon of the day that has elapsed, and in front of it, the evening and night – that I make contact with time, and learn to know its course’ …”

Gallagher goes on to cite the following passage from Merleau-Ponty:

“I do not form a mental picture of my day, it weighs upon me with all its weight, it is still there, and though I may not recall any detail of it, I have the impending power to do so, I still ‘have it in hand.’ . . . Our future is not made up exclusively of guesswork and daydreams. Ahead of what I see and perceive . . . my world is carried forward by lines of intentionality which trace out in advance at least the style of what is to come.”

Then Gallagher adds, “Thus, Merleau-Ponty suggests, I feel time on my shoulders and in my fatigued muscles; I get physically tired from my work; I see how much more I have to do. Time is measured out first of all in my embodied actions as I ‘reckon with an environment’ in which ‘I seek support in my tools, and am at my task rather than confronting it.'”

That last distinction between being at my task rather than confronting it seems particularly significant, especially as it involves the support of tools. Our sense of time, like our sense of place, is not an unchangeable given. It shifts and alters through technological mediation. Melvin Kranzberg, in the first of his six laws of technology, reminds us, “Technology is neither good nor bad; nor is it neutral.” Our technological mediation of space and time is never neutral; and while it may not be “bad” or “good” in some abstract sense, it can be more or less humane, more or less conducive to our well-being. If the future of the Internet is the worldstream, we should perhaps think twice before plunging.

[Image: Gelernter’s worldstream illustration]

Violence and Technology

There is a well-known line from Wittgenstein’s Tractatus that reads, “Whereof one cannot speak, thereof one must be silent.” There is much wisdom in this, especially when one extends its meaning beyond what Wittgenstein intended (so far as I understand what he intended). We all know very well that words often fail us when we are confronted with unbearable sorrow or unmitigated joy. In the aftermath of the horror in Newtown, Connecticut, then, what could one say? Everything else seemed trivial.

I first heard of the shooting when I logged on to Twitter to post some frivolous comment, and, of course, I did not follow through. However, I then felt the need to post something — something appropriate, something with sufficient gravitas. But I asked myself: why? Why should I feel the need to post anything? To what end? So that others may note that I responded to the tragedy with just the right measure of grace and seriousness? Or to self-righteously admonish others, implicitly of course, about their own failure to respond as I deemed appropriate?

When we become accustomed to living and thinking in public, the value of unseen action and unshared thoughts is eclipsed. “I should be silent,” a part of us may acknowledge, but then in response, a less circumspect voice within us wonders, “But how will anyone know that I am being silent? A hashtag perhaps, #silent?”

I felt just then, with particular force, the stunning degree of self-indulgence invited by social media. But then, of course, I had to reckon with the fact that the well of self-indulgence tapped by social media springs from no other source but myself.

There is only one other point I want to consider. Within my online circles, many have sought to challenge the slogan “Guns don’t kill people,” and they have done so on premises I am generally inclined to support. I have myself associated the technological-neutrality position with this slogan, and I have found it an inadequate position. Guns, like other technologies, exert a causal force independent of the particular uses to which they are put. They enter actively and with consequence into our perception and experience of the world. This, I continue to believe, is quite true.

Several months ago, in the wake of another tragic shooting, Evan Selinger wrote a well-considered piece on this very theme and I encourage you to read it: “The Philosophy of the Technology of the Gun.”

Less effectively, in my view, but thoughtfully still, PJ Rey revisited Zeynep Tufekci’s appropriation of Aristotle’s categories of causality to frame the gun as the material cause of acts of violence. The argument here is also against technological neutrality; I’m just not entirely sure that Aristotle’s categories are fully understood by Rey or Tufekci (which is not to say that I fully understand them). The material cause is not “that without which,” but “that out of which.” But then again, I put Wittgenstein’s dictum to my own uses; I suppose Aristotle too can be used suggestively, if not rigorously. Maybe.

Thus far, I’ve been sympathetic to the claims advanced, but there is latent in these considerations (but not necessarily in the thinking of these authors) an opposite error that I’ve also seen expressed explicitly and forcefully. Last night, I caught the following comment on Twitter from Prof. Lance Strate. Strate is a respected media ecologist and I have in the past appreciated his insights and commentary. I was, however, stopped short by this tweet:

I want to make all the requisite acknowledgements here. It is a tweet, after all, and the medium is not conducive to nuance. Nor is one required to say everything one thinks about a matter whenever one speaks of that matter. And, in fairness to Strate, I also want to provide a link to his fuller discussion of the situation on his blog, “On Guns and More,” much of which I would agree with.

That said, “Surely the blame is also on him,” was my initial response to this tweet. Again, I want to read generously, particularly in a medium that is given to misunderstanding. I don’t know that Strate meant to absolve the shooter of all responsibility; in fact, I have to believe such was not the case. But this comment reminded me that in our efforts to critique the neutrality-of-technology position, we need to take care lest we end up endorsing what are, in my view, more pernicious errors of judgment.

Thinking again about the manner in which a gun enters into our phenomenological experience, it is true to say that a gun wants to be shot. But this does not say everything there is to say; it doesn’t even say the most important and relevant things that could be said. Why is it, at times, not shot at all? Further, to say it wants to be shot is not yet to say what it will be shot at or why. We cannot dismiss the other forms of causality that come into play. If Aristotle is to be invoked, after all, it should be acknowledged that he privileged final causation whenever possible.

Interestingly, in his illustration of the four causes — the making of a bronze statue — Aristotle did not take the craftsman to be the best example of an efficient cause. It was instead the knowledge the craftsman possessed that best illustrated the efficient cause. If we apply this analogously to the present case, it suggests that knowledge of how to inflict violence is the efficient cause. And this reminds us, disturbingly, of what is latent in all of us.

It reminds me as well of some other well known lines, not from Wittgenstein this time, but from Solzhenitsyn: “If only there were evil people somewhere insidiously committing evil deeds, and it were necessary only to separate them from the rest of us and destroy them. But the line dividing good and evil cuts through the heart of every human being. And who is willing to destroy a piece of his own heart?”

Kyrie Eleison.

The Conscience of a Machine

Recently, Gary Marcus predicted that within the next two to three decades we would enter an era “in which it will no longer be optional for machines to have ethical systems.” Marcus invites us to imagine the following driverless-car scenario: “Your car is speeding along a bridge at fifty miles per hour when an errant school bus carrying forty innocent children crosses its path. Should your car swerve, possibly risking the life of its owner (you), in order to save the children, or keep going, putting all forty kids at risk?”

In this scenario, a variation of the trolley car problem, the computer operating the car would need to make a decision (although I suspect putting it that way is an anthropomorphism). Were a human being called upon to make such a decision, it would be considered a choice of moral consequence. Consequently, writing about Marcus’ piece, Nicholas Carr concluded, “We don’t even really know what a conscience is, but somebody’s going to have to program one nonetheless.”

Of course, there is a sense in which autonomous machines of this sort are not really ethical agents. To speak of their needing a conscience strikes me as a metaphorical use of language. The operation of their “conscience” or “ethical system” will not really resemble what has counted as moral reasoning or even moral intuition among human beings. They will do as they are programmed to do. The question is, What will they be programmed to do in such circumstances? What ethical system will animate the programming decisions? Will driverless cars be Kantians, obeying one rule invariably; or will they be Benthamites, calculating the greatest good for the greatest number?
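
The difference between those two postures can be made concrete with a toy sketch. Everything in it is invented for illustration (the Outcome record, the two policy functions, the numbers); it is a picture of how the same scenario yields different actions under different hard-coded rules, not anyone’s actual control software:

```python
# A toy rendering of Marcus's scenario resolved by two different
# hard-coded "ethical systems." All names and numbers are invented.

from dataclasses import dataclass

@dataclass
class Outcome:
    description: str
    expected_deaths: float
    endangers_occupant: bool

def kantian_policy(options):
    """Obey one rule invariably (here, an invented rule: never take an
    action that deliberately endangers the car's own occupant)."""
    permitted = [o for o in options if not o.endangers_occupant]
    return permitted[0] if permitted else options[0]

def benthamite_policy(options):
    """Calculate the greatest good for the greatest number: choose the
    option with the fewest expected deaths, whoever they may be."""
    return min(options, key=lambda o: o.expected_deaths)

options = [
    Outcome("swerve off the bridge", expected_deaths=1.0, endangers_occupant=True),
    Outcome("keep going toward the bus", expected_deaths=40.0, endangers_occupant=False),
]

print(kantian_policy(options).description)     # keep going toward the bus
print(benthamite_policy(options).description)  # swerve off the bridge
```

The point is not that either function is right; it is that some such function must be written, and written in advance.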

There is an interesting sense, though, in which an autonomous machine of the sort envisioned in these scenarios is an agent, even if we might hesitate to call it an ethical agent. What’s interesting is not that a machine may cause harm or even death. We’ve been accustomed to this for generations. But in such cases, a machine has ordinarily malfunctioned, or else some human action was at fault. In the scenarios proposed by Marcus, an action that causes harm would be the result of a properly functioning machine and not of direct human action. The machine would have decided to take an action that resulted in harm, even if it was in some sense the lesser harm. In fact, such machines might rightly be called the first machines to do harm without malfunctioning.

There is little chance that our world will not one day be widely populated by autonomous machines of the sort that will require a “conscience” or “ethical system.” Determining what moral calculus should inform such “moral machines” is problematic enough. But there is another, more subtle danger that should concern us.

Such machines seem to enter into the world of morally consequential action that until now has been occupied exclusively by human beings, but they do so without a capacity to be burdened by the weight of the tragic, to be troubled by guilt, or to be held to account in any meaningful and satisfying way. They will, in other words, lose no sleep over their decisions, whatever those may be.

We have an unfortunate tendency, under the spell of metaphor, to adapt our understanding of human experience to the characteristics of our machines. Take memory, for example. Having first decided, by analogy, to call a computer’s capacity to store information “memory,” we then reversed the direction of the metaphor and came to understand human memory by analogy to computer “memory,” i.e., as mere storage. So now we casually talk of offloading the work of memory, or of Google being a better substitute for human memory, without any thought for how human memory is related to perception, understanding, creativity, identity, and more.

I can too easily imagine a similar scenario wherein we get into the habit of calling the algorithms by which machines are programmed to make ethically significant decisions the machine’s “conscience,” and then turn around, reverse the direction of the metaphor, and come to understand human conscience by analogy to what the machine does. This would result in an impoverishment of the moral life.

Will we then begin to think of the tragic sense, guilt, pity, and the necessity of wrestling with moral decisions as bugs in our “ethical systems”? Will we envy the literally ruth-less efficiency of “moral machines”? Will we prefer the Huxleyan comfort of a diminished moral sense, or will we claim the right to be unhappy, to be troubled by a fully realized human conscience?

This is, of course, not merely a matter of making the “right” decisions. Part of what makes programming “ethical systems” troublesome is precisely our inability to arrive at a consensus about the right decision in such cases. But even if we could arrive at some sort of consensus, the risk I’m envisioning would remain. The moral weightiness of human existence does not reside solely in the moment of decision; it extends beyond that moment to a life burdened by the consequences of the action. It is precisely this “living with” our decisions that a machine conscience cannot know.

In The Tragic Sense of Life, Miguel de Unamuno relates the following anecdote: “A pedant who beheld Solon weeping for the death of a son said to him, ‘Why do you weep thus, if weeping avails nothing?’ And the sage answered him, ‘Precisely for that reason — because it does not avail.’”

Were we to conform our conscience to the “conscience” of our future machines, we would cease to shed such tears, and our humanity lies in Solon’s tears.

_______________________________________________

Also consider Evan Selinger’s excellent and relevant piece, “Would Outsourcing Morality to Technology Diminish Our Humanity?”

Social Media and the Arts of Memory

[Image: memory theater]

More than a year and a half ago, I published two posts, in what was to be a series of three, framing social networking sites, Facebook in particular, within the arts-of-memory tradition. The third post, for a variety of now-forgotten reasons, never appeared. Recent discussion regarding Facebook and memory prompted me to complete what had been left unfinished. Below you will find the complete essay, including the lightly edited text of those two earlier posts along with the concluding section.


Early in The Art of Memory, Frances Yates pauses to envision a “forgotten social habit” of antiquity. She invites us to wonder, “Who is that man moving slowly in the lonely building, stopping at intervals with an intent face?” That man, Yates tells us, is a “rhetoric student forming a set of memory loci.” The student would have been creating the architecture of a mental space into which he would then place vivid images imaginatively designed to recollect the themes or words of a prepared oration. While delivering the oration, the rhetor would navigate the mental space, coming upon each carefully placed image, which triggered his memory accordingly. This work of designing a mental space and populating it with striking images followed the prescriptions of the techniques of artificial memory widely practiced in classical antiquity.
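
The technique lends itself to a modern analogy: the loci form a fixed, ordered structure, and the images are entries deposited at those loci. Here is a minimal sketch of that structure; the class, its names, and the sample images are all invented for illustration:

```python
# A minimal sketch of the classical method of loci as a data structure.
# An ordered sequence of places, each holding a vivid image, yields the
# points of an oration back in order when traversed. All names invented.

class MemoryPalace:
    def __init__(self, loci):
        # loci: the fixed architecture, e.g. rooms visited in a set order
        self.loci = loci
        self.images = {}  # locus -> striking image standing for a theme

    def place(self, locus, image):
        """Deposit a vivid image at a locus in the mental building."""
        self.images[locus] = image

    def recall(self):
        """Walk the building in its fixed order; each image in turn
        triggers the theme it was designed to recollect."""
        return [self.images[locus] for locus in self.loci if locus in self.images]

palace = MemoryPalace(["entrance", "atrium", "library"])
palace.place("entrance", "a blazing torch (exordium: the urgency of the case)")
palace.place("atrium", "scales of bronze (the argument from justice)")
palace.place("library", "a laurel wreath (peroration: the promise of honor)")
print(palace.recall())
```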

What if, however, we updated Yates’ scene by setting it in the present? The scene would be instantly recognizable as long as we equipped our imagined person with a cell phone. The stopping at intervals and the intent face would correspond to any of the multiple uses to which an Internet-enabled smartphone may be put: reading or sending a text message, downloading songs, taking or sending pictures and video, updating social media profiles, or finding directions with GPS, to name but a few. What is striking is how often these activities would, like that of the ancient rhetor, involve the work of memory. Much of what cell phones are used for has very little to do with making a phone call, after all; they are more likely to be used to access the Internet, send a text message, take a picture, or film a video. Given these capabilities, cell phones have become prosthetic memory devices; to lose one would be to induce a state of partial amnesia. Or it may be better to say that it might induce a fear of future amnesia, since our ability to approach the present as a field of potential memories would be undermined.

Social networking sites (SNS) are of special interest because of how they explicitly trade in memory. This leads Joanne Garde-Hansen to ask in “MyMemories?: Personal Digital Archive Fever and Facebook,” “If memory and identity can be seen as coterminous and SNSs involve literally digitising ones’ self into being then what is at stake when memories and identities are practiced on Facebook?”

She goes on to add,

“It goes without saying that the allure of the site is in its drawing together in one place memory practices: creating photograph albums, sharing photographs, messaging, joining groups and alumni memberships, making ‘friends’ and staying in touch with family.”

It would be fair to acknowledge that SNS such as Facebook traffic in more than the allure of memory practices. Nonetheless, the production, maintenance, and retrieval of memory are integral to the practices deployed on social networks.

Following Jacques Derrida, Garde-Hansen considers Facebook as an instance of the archiving archive. Thus, she points out, the architecture of an SNS such as Facebook is not neutral with respect to the memories it archives. As Derrida observed,

“… the technical structure of the archiving archive also determines the structure of the archivable content even in its very coming into existence and in its relationship to the future.  The archivization produces as much as it records the event.”

Garde-Hansen also draws SNS into the tension between database and narrative addressed by Lev Manovich in The Language of New Media. In her view, the most significant aspect of Manovich’s analysis of new media for SNS is the comparison he draws between the visual organization of digital media interfaces and spatial montage. “Manovich’s emphasis upon spatial narrative is,” according to Garde-Hansen, “extremely pertinent to thinking through the emergence of SNSs and how these sites remediate personal and collective memory.” Framed in this way, memory as spatial montage challenges “the rise and dominance of history,” the “power of the written word” to order the past temporally, and the “twentieth century’s emphasis upon the single, unadulterated image (think cinema).”

Derrida’s insight suggests that the sorts of memories we are able to archive with social media may already be directing our interactions in the present. (For an insightful discussion of this point anchored in an analysis of faux-vintage photography, see Nathan Jurgenson’s “The Faux-Vintage Photo.”) Drawing in Manovich’s database/narrative opposition further suggests that the visual/spatial mode of ordering memories on SNS potentially shifts how meaning is derived from memory and, consequently, how we understand the self.

Returning to the scene suggested by Yates, we may also consider SNS such as Facebook as instances of new artificial memory spaces constructed to supplement and augment natural memory. In the artificial memory tradition we already see memory rendered spatially and visually in a manner that anticipates the representation and organization of memory on SNS. Situating SNS within the long history of spatial and visual memory also affords us the opportunity to consider such sites in the context of a complex and rich tradition of philosophical reflection on the nature of memory.

What emerges is a history of memory practices that alternates between a Platonic focus on memory as the presence of an absence and an Aristotelian emphasis on memory as the record of the past. Several thematic threads weave this story together, including the opposition of internal memory to memory supported by inscription, the connection between memory and imagination, memory as the index of desire, the related tensions between space and time and between databases and narratives, and the relationship of memory to identity. Yet for all the complexity those themes introduce, we will begin with a story.

The Origin of the Arts of Memory

Spatiality, images, and death have long been woven together in the complex history of remembering. Each appears prominently in the founding myth of what Yates called the “art of memory” as it is recounted by Cicero in his De oratore. According to the story, the poet Simonides of Ceos was contracted by Scopas, a Thessalian nobleman, to compose a poem in his honor. To the nobleman’s chagrin, Simonides devoted half of his oration to the praise of the gods Castor and Pollux. Feeling himself cheated out of half of the honor, Scopas brusquely paid Simonides only half the agreed-upon fee and told him to seek the rest from the twin gods. Not long afterward that same evening, Simonides was summoned from the banqueting table by news that two young men were calling for him at the door. Simonides sought the two callers but found no one. While he was out of the house, however, the roof caved in, killing all of those gathered around the table, including Scopas. As Yates puts it, “The invisible callers, Castor and Pollux, had handsomely paid for their share in the panegyric by drawing Simonides away from the banquet just before the crash.”

The bodies of the victims were so disfigured by the manner of death that they were rendered unidentifiable even by family and friends. Simonides, however, found that he was able to recall where each person had been seated around the table, and in this way he identified each body. This led Simonides to the realization that place and image were the keys to memory, and in this case, also a means of preserving identity through the calamity of death. In Cicero’s words,

“[Simonides] inferred that persons desiring to train [their memory] must select places and form mental images of the things they wish to remember and store those images in the places, so that the order of the places will preserve the order of the things, and the images of the things will denote the things themselves, and we shall employ the places and images respectively as a wax writing-tablet and the letters written on it.”

Cicero is one of three classical sources on the principles of artificial memory that evolved in the ancient world as a component of rhetorical training. The other two sources are Quintilian’s Institutio oratoria and the anonymous Ad Herennium. It is through the Ad Herennium, mistakenly attributed to Cicero, that the art of memory migrates into medieval culture, where it is eventually assimilated into the field of ethics. Cicero’s allusion to the wax writing-tablet, however, reminds us that discussion of memory in the ancient world was not limited to the rhetorical schools. Memory as a block of wax upon which we make impressions is a metaphor attributed to Socrates in Plato’s Theaetetus, where it appears as a gift of Mnemosyne, the mother of the Muses:

“Imagine, then, for the sake of argument, that our minds contain a block of wax, which in this or that individual may be larger or smaller, and composed of wax that is comparatively pure or muddy, and harder in some, softer in others, and sometimes of just the right consistency.

“Let us call it the gift of the Muses’ mother, Memory, and say that whenever we wish to remember something we see or hear or conceive in our own minds, we hold this wax under the perceptions or ideas and imprint them on it as we might stamp the impressions of a seal ring. Whatever is so imprinted we remember and know so long as the image remains; whatever is rubbed out or has not succeeded in leaving an impression we have forgotten and do not know.”

[Image: Plato and Aristotle in Raphael’s “School of Athens”]

The Platonic understanding of memory was grounded in an epistemology which located the ability to apprehend truth in an act of recollection.  Plato believed that the highest forms of knowledge were not derived from sense experience, but were first apprehended by the soul in a pre-existent state and remain imprinted deep in a person’s memory.  Truth consists in matching the sensible experience of physical reality to the imprint of eternal Forms or Ideas whose images or imprints reside in memory.  Consequently, the chief aim of education, and the highest calling of memory, is the remembering of these Ideas and this aim is principally attained through “dialectical enquiry,” a process, modeled by Plato’s dialogs, by which a student may arrive at a remembering of the Ideas.

At this point, we should notice that the anteriority, or “pastness,” of the knowledge in question is, strictly speaking, incidental. What is important is the presence of the absent Idea or Form to be contemplated. It is to evoke the presence of this absence that memory is deployed. It is the presence of eternal Ideas that secures the apprehension of truth, goodness, or beauty in the present. Locating the memory within the span of time past does not bear upon its value, which rests in its being possessed as a model against which to measure experience.

By contrast, the principal aspect to note about Aristotle’s understanding of memory is that he distinguishes it from the imagination by noting its reference to historical time. While Plato’s focus on the image and its presence becomes “an obstacle to recognizing the specificity of the properly temporalizing function of memory,” according to Paul Ricoeur, in Aristotle’s “proud declaration” that “all memory is of the past” we find acknowledgement of that specificity. In Aristotle’s account, memory is also, as it were, naturalized. Along with his emphasis on memory being “of the past,” it is the past of sense experience that, for Aristotle, stocks memory. Gone is Plato’s theory of the preexistence of the soul and memory as the deposit of knowledge of the Forms glimpsed in that preexistent state.

Ricoeur, in Memory, History, Forgetting, begins his consideration of the heritage of Greek reflections on memory with the following observation:

“Socratic philosophy bequeathed to us two rival and complementary topoi on this subject, one Platonic, the other Aristotelian.  The first, centered on the theme of the eikōn [image], speaks of the present representation of an absent thing; it argues implicitly for enclosing the problematic of memory within that of imagination.  The second, centered on the theme of the representation of a thing formerly perceived, acquired, or learned, argues for including the problematic of the image within that of remembering.”

As he goes on to note, from these two framings of the problematic of memory “we can never completely extricate ourselves.”

Reflecting for just a moment on the nature of our own memories, it is not difficult to see why this might be the case. If we remember our mother, for example, we may do so either by contemplating some idealized image of her in our mind’s eye or else by recollecting a moment from our shared past. In both cases we may be said to be remembering our mother, but the memories differ along the Platonic/Aristotelian divide suggested by Ricoeur. In the former case I remember her in a way that seeks her presence without reference to time past; in the latter, I remember her in a way that situates her chronologically in the past.

What we find as we pursue the artificial memory tradition through the medieval period into the Renaissance is a persistent movement of memory away from narrative and toward presence. In its various manifestations it becomes an art seeking the presence of the divine, of virtue, or of esoteric knowledge mediated through images in a space that can be apprehended at a glance.

“One of the most striking manifestations of the Renaissance use of the art,” Yates explains, “is the Memory Theater of Giulio Camillo … based (so he believes) on archetypes of reality on which depend secondary images covering the whole realm of nature and of man.” Camillo, mostly forgotten in our age, was in his time a celebrity who attracted the attention of notable contemporaries, including the King of France and Erasmus. It is, in fact, from the hand of Erasmus’ secretary that we have the most complete record of Camillo’s memory theater.

This manifestation of the artificial memory, however, also included Hermetic and Cabalistic elements. Yates, in fact, describes Camillo’s lifetime work as an effort to blend the Hermetic-Cabalist tradition founded by Pico della Mirandola with the classical art of memory. Camillo also links these esoteric traditions to the recently revived currents of Neoplatonism, so that his memory theater would “represent the eternal order of truth.”

Camillo’s theater was a physical object large enough for two people to enter in order to see the drawn image of a theater used to organize the whole range of human knowledge. Gone from Camillo’s theater is the space that implied a certain temporality, or at least sequentiality, as it was navigated. Now the space itself is a fixed image that is immediately and simultaneously present to the user. Temporality is overthrown by presence. Again Ricoeur: “The revenge of the Platonic, and especially Neoplatonic, reminiscence over the Aristotelian psychology of memory and recollection is complete, but at the price of the transformation of reasoned speculation into mystagogy.”

Memory, Databases, Narrative

Where memory is conceived in Aristotelian fashion as principally defined by its reference to a past in time, it shares a natural affinity with narrative.  “Narrative is,” as Katherine Hayles has put it, “a temporal technology.” Where memory is divorced from temporality in Platonic fashion, narrative seems to fade from view.  To represent memories visually is to render them present, but it may also mean abstracting them from their place in the narratives we tell about our lives.  The artificial memory techniques, detached from temporality as they were, may be understood on the model of the database that, in Manovich’s analysis, is the “natural enemy of narrative.”

The architecture of the mental space in the artificial memory tradition functions similarly to an interface that allows for multiple arrangements of the images, which are so many icons representing various types of data points. Like a database, what the artificial memory facilitated was principally random access and retrieval. As Manovich goes on to note, “a database can support narrative, but there is nothing in the logic of the medium itself that would foster its generation.” Likewise with the artificial memory: it could support a narrative, but it does not necessarily generate one.
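
Manovich’s point can be pictured in miniature. In the sketch below (the places and images are invented, carried over from the memory-palace sketch above), the structure itself affords retrieval in any order; a narrative sequence has to be imposed on it from outside:

```python
# A toy illustration of the database/narrative distinction as applied
# to the artificial memory. The places and images are invented examples.

memories = {  # a database of loci: keys afford random access
    "entrance": "a blazing torch",
    "atrium": "scales of bronze",
    "library": "a laurel wreath",
}

# Random access and retrieval: any memory, directly, in any order.
print(memories["library"])

# A narrative requires an ordering the database itself does not supply;
# nothing in the structure fosters this sequence over any other.
story_order = ["entrance", "atrium", "library"]
print(" and then ".join(memories[place] for place in story_order))
```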

In his brief discussion of the artificial memory, Ricoeur forcefully makes a similar point:  “spatialization obliterates temporalization … the notion of place has chased away the mark of the past which had characterized memory since Aristotle’s De memoria et reminiscentia.  Memory no longer consists in recalling the past but in actualizing what has been learned and stored away in mental space.”

In response to Manovich, Hayles sought to articulate a more symbiotic relationship between narratives and databases:  “No longer singular, narratives remain the necessary others to database’s ontology, the perspectives that invest the formal logic of database operations with human meanings and that gesture toward the unknown hovering beyond the brink of what can be classified and enumerated.” Hayles offers an instructive reframing of the opposition between narrative and database.  However, drawing artificial memory into the database/narrative polarity suggests that memories stored in databases may not be seeking narratives at all.  While meaning can be secured by the deployment of narratives that bring order to the disorder of lived experience, the desire for meaning is not the only desire that memory answers to. Memory also desires presence.

Coming back to social networking sites, we might say, then, that personal memories of the sort archived by Facebook yield presence rather than the sort of meaning that arises from narratives. Not unlike the mental constructs of the arts of memory, Facebook operates on the model of the database and the spatial montage. Like the artificial memory, it may sustain a narrative, but it does not necessarily generate one. Facebook backgrounds the narrative coherence of memory grounded in past experience in favor of the immediacy of spatially and visually mediated presence. We do well, however, not to frame this opposition too starkly. The introduction of Facebook’s Timeline interface may be seen as a tilt in the direction of Aristotelian memory and narrative. Nonetheless, memory, when anchored to spatially arranged databases of images, more readily answers to the desire for the immediacy of presence than to the temporally inflected logic of narrative.

It may be helpful to consider Facebook alongside the diary, another form of externalized memory. The material form of the diary encourages a narrative posture toward experience. Writing already requires the narrativizing of one’s experience, and the diary’s form elicits the sequential ordering of these narratives and suggests an anticipation of closure. Visual databases of memory, whether mental, analog, or digital, may invite commentary and explication, but not a linear narrative of one’s experience.

Conclusion

Returning to the story of Simonides, we remember that the artificial memory tradition had its roots in the trauma of death. In the face of great loss we may very well seek the consolations of a narrative that attempts to meaningfully frame our experience. But we may also seek to recover the presence of what is now absent.

In “A Photograph,” the poet Czeslaw Milosz observed:

“Few tasks more difficult
Than to write a treatise
On a man who looks
At an old photograph.

Why he does it
Is incomprehensible
And his feelings
Cannot be explained.

Seemingly it’s simple:
She was his love.
But here precisely
Questions begin.”

Then, toward the end of the poem, Milosz writes,

“And, inconceivable,
He addressed her,
Perfectly certain
That she hears him:

‘O maiden of the Lord,
Promised to me,
With whom I was to have
At least twelve children,

‘Obtain for me the grace
Of your strong faith.
We living are too weak
Without your assistance.

You are for me now
The mystery of time
i.e., of a person
Changing and the same,

Who runs in the garden
Fragrant after the rain
With a ribbon in your hair
And lives in the beyond.

You see how I try
To reach with words
What matters most
And how I fail.

Though perhaps this moment
When you are so close
Is precisely your help
And an act of forgiveness.’”

There is something about the image, the photograph — remember Barthes’ mother — that evokes the consolations of memory as presence. Life consists of countless separations and losses; in such instances we may also seek similar consolation. We know we will be parted and we know the present is ephemeral, so we build our archives of memories to later avail ourselves of the presence they offer.

According to Ivan Illich, “What anthropologists distinguish as ‘cultures’ the historian of mental spaces might distinguish as different ‘memories.’” What historians of media have classified as oral, literate, print, and electronic cultures were distinguished largely along the lines of what could be remembered, how, and by whom. It would be presumptuous to say that the same will be true of digital culture, but it would be surprising if such were not the case. We do well, then, to consider how digital media shape our remembering, both what they allow us to remember and what they encourage us to remember.