How to Think About Memory and Technology

I suppose it is the case that we derive some pleasure from imagining ourselves to be part of a beleaguered but noble minority. This may explain why a techno-enthusiast finds it necessary to attack dystopian science fiction on the grounds that it is making us all fear technology, a notion I find ludicrous.

Likewise, Salma Noreen closes her discussion of the internet’s effect on memory with the following counsel: “Rather than worrying about what we have lost, perhaps we need to focus on what we have gained.” I find that a curious note on which to close because I tend to think that we are not sufficiently concerned about what we have lost or what we may be losing as we steam full speed ahead into our technological futures. But perhaps I also am not immune to the consolations of belonging to an imagined beleaguered community of my own.

So which is it? Are we a society of techno-skeptics with brave, intrepid techno-enthusiasts on the fringes stiffening our resolve to embrace the happy technological future that can be ours for the taking? Or are we a society of techno-enthusiasts for whom the warnings of the few techno-skeptics are nothing more than a distant echo from an ever-receding past?

I suspect the latter is closer to the truth, but you can tell me how things look from where you’re standing.

My main concern is to look more closely at Noreen’s discussion of memory, which is a topic of abiding interest to me. “What anthropologists distinguish as ‘cultures,’” Ivan Illich wrote, “the historian of mental spaces might distinguish as different ‘memories.’” And I rather think he was right. Along similar lines, and in the early 1970s, George Steiner lamented, “The catastrophic decline of memorization in our own modern education and adult resources is one of the crucial, though as yet little understood, symptoms of an afterculture.” We’ll come back to more of what Steiner had to say a bit further on, but first let’s consider Noreen’s article.

She mentions two studies as a foil to her eventual conclusion. The first suggests that “the internet is leading to ‘digital amnesia’, where individuals are no longer able to retain information as a result of storing information on a digital device,” and the second “that relying on digital devices to remember information is impairing our own memory systems.”

“But,” Noreen counsels her readers, “before we mourn this apparent loss of memory, more recent studies suggest that we may be adapting.” And in what, exactly, does this adaptation consist? Noreen summarizes it this way: “Technology has changed the way we organise information so that we only remember details which are no longer available, and prioritise the location of information over the content itself.”

This conclusion seems to me banal, which is not to say that it is incorrect. It amounts to saying that we will not remember what we do not believe we need to remember and that, when we have outsourced our memory, we will take some care to learn how we might access it in the future.

Of course, when the Google myth dominates a society, will we believe that there is anything at all that we ought to commit to memory? The Google myth in this case is the belief that every conceivable bit of knowledge that we could ever possibly desire is just a Google search away.

The sort of analysis Noreen offers, which is not uncommon, is based on an assumption we should examine more closely and also leaves a critical consideration unaddressed.

The assumption is that there are no distinctions within the category of memory. All memories are assumed to be discrete facts of the sort one would need to know in order to do well on Jeopardy. But this assumption ignores the diversity of what we call memories and the diversity of functions to which memory is put. Here is how I framed the matter some years back:

All of this leads me to ask, What assumptions are at play that make it immediately plausible for so many to believe that we can move from internalized memory to externalized memory without remainder? It would seem, at least, that the ground was prepared by an earlier reduction of knowledge to information or data. Only when we view knowledge as the mere aggregation of discrete bits of data can we then believe that it makes little difference whether that data is stored in the mind or in a database.

We seem to be approaching knowledge as if life were a game of Jeopardy which is played well by merely being able to access trivial knowledge at random. What is lost is the associational dimension of knowledge which constructs meaning and understanding by relating one thing to another and not merely by aggregating data. This form of knowledge, which we might call metaphorical or analogical, allows us to experience life with the ability to “understand in light of,” to perceive through a rich store of knowledge and experience that allows us to see and make connections that richly texture and layer our experience of reality.

But this understanding of memory seems largely absent from the sorts of studies that are frequently cited in discussions of offloaded or outsourced memory. I’ll add another relevant consideration I’ve previously articulated: a silent equivocation slips into these discussions. The notion of memory we tend to assume is our current understanding of memory, derived by comparison to computer memory, which is essentially storage.

Having first identified a computer’s storage capacity as “memory,” a metaphor dependent upon the human capacity we call “memory,” we have now come to reverse the direction of the metaphor by understanding human “memory” in light of a computer’s storage capacity. In other words, we’ve reduced our understanding of memory to the mere storage of information. And now we read all discussions of memory in light of this reductive understanding.

As for the unaddressed critical consideration, if we grant that we must all outsource or externalize some of our memory, and that it may even be admittedly advantageous to do so, how do we make qualitative judgments about the memory that we can outsource to our benefit and the memory we should on principle internalize (if we even allow for the latter possibility)?

Here we might take a cue from the religious practices of Jews, Christians, and Muslims, who have long made the memorization of Scripture a central component of their respective forms of piety. Here’s a bit more from Steiner commenting on what can be known about early modern literacy:

Scriptural and, in a wider sense, religious literacy ran strong, particularly in Protestant lands. The Authorized Version and Luther’s Bible carried in their wake a rich tradition of symbolic, allusive, and syntactic awareness. Absorbed in childhood, the Book of Common Prayer, the Lutheran hymnal and psalmody cannot but have marked a broad compass of mental life with their exact, stylized articulateness and music of thought. Habits of communication and schooling, moreover, sprang directly from the concentration of memory. So much was learned and known by heart — a term beautifully apposite to the organic, inward presentness of meaning and spoken being within the individual spirit.

Learned by heart–a beautifully apt phrase, indeed. Interestingly, this is an aspect of religious practice that, while remaining relatively consistent across the transition from oral to literate society, appears to be succumbing to the pressures of the Google myth, at least among Protestants. If I have an app that lets me instantly access any passage of my sacred text, in any of a hundred different translations, why would I bother to memorize any of it?

The answer, of course, best and perhaps only learned by personal experience, is that there is a qualitative difference between the “organic, inward presentness of meaning” that Steiner describes and merely knowing that I know how to find a text if I were inclined to find it. But the Google myth, and the studies that examine it, seem to know nothing of that qualitative difference, or, at least, they choose to bracket it.

I should note in passing that much of what I have recently written about attention is also relevant here. Distraction is the natural state of someone who has no goal that might otherwise command or direct their attention. Likewise, forgetfulness is the natural state of someone who has no compelling reason to commit something to memory. At the heart of both states may be the liberated individual will yielded by modernity. Distraction and forgetfulness seem both to stem from a refusal to acknowledge an order of knowing that is outside of and independent of the solitary self. To discipline our attention and to learn something by heart is, in no small measure, to submit the self to something beyond its own whims and prerogatives.

So, then, we might say that one of the enduring consequences of new forms of externalized memory is not only that they alter the quantity of what is committed to memory but that they also reconfigure the meaning and value that we assign to both the work of remembering and to what is remembered. In this way we begin to see why Illich believed that changing memories amounted to changing cultures. This is also why we should consider that Plato’s Socrates was on to something more than critics give him credit for when he criticized writing for how it would affect memory, which was for Plato much more than merely the ability to recall discrete bits of data.

This last point brings me, finally, to an excellent discussion of these matters by John Danaher. Danaher is always clear and meticulous in his writing and I commend his blog, Philosophical Disquisitions, to you. In this post, he explores the externalization of memory via a discussion of a helpful distinction offered by David Krakauer of the Santa Fe Institute. Here is Danaher’s summary of the distinction between two different types of cognitive artifacts, or artifacts we think with:

Complementary Cognitive Artifacts: These are artifacts that complement human intelligence in such a way that their use amplifies and improves our ability to perform cognitive tasks and once the user has mastered the physical artifact they can use a virtual/mental equivalent to perform the same cognitive task at a similar level of skill, e.g. an abacus.

Competitive Cognitive Artifacts: These are artifacts that amplify and improve our abilities to perform cognitive tasks when we have use of the artifact but when we take away the artifact we are no better (and possibly worse) at performing the cognitive task than we were before.

Danaher critically interacts with Krakauer’s distinction, but finds it useful. It is useful because, like Albert Borgmann’s work, it offers to us concepts and categories by which we might begin to evaluate the sorts of trade-offs we must make when deciding what technologies we will use and how.

Also of interest is Danaher’s discussion of cognitive ecology. Invoking earlier work by Donald Norman, Danaher explains that “competitive cognitive artifacts don’t just replace or undermine one cognitive task. They change the cognitive ecology, i.e. the social and physical environment in which we must perform cognitive tasks.” His critical consideration of the concept of cognitive ecology brings him around to the wonderful work Evan Selinger has been doing on the problem of technological outsourcing, work that I’ve cited here on more than a few occasions. I commend to you Danaher’s post for both its content and its method. It will be more useful to you than the vast majority of commentary you might otherwise encounter on this subject.

I’ll leave you with the following observation by the filmmaker Luis Buñuel: “Our memory is our coherence, our reason, our feeling, even our action. Without it, we are nothing.” Let us take some care and give some thought, then, to how our tools shape our remembering.


Google Photos and the Ideal of Passive Pervasive Documentation

I’ve been thinking, recently, about the past and how we remember it. That this year marks the 20th anniversary of my high school graduation accounts for some of my reflective reminiscing. Flipping through my senior yearbook, I was surprised by what I didn’t remember. Seemingly memorable events alluded to by friends in their notes and more than one of the items I myself listed as “Best Memories” have altogether faded into oblivion. “I will never forget when …” is an apparently rash vow to make.

But my mind has not been entirely washed by Lethe’s waters. Memories, assorted and varied, do persist. Many of these are sustained and summoned by stuff, much of it useless, that I’ve saved for what we derisively call sentimental reasons. My wife and I are now in the business of unsentimentally trashing as much of this stuff as possible to make room for our first child. But it can be hard parting with the detritus of our lives because it is often the only tenuous link joining who we were to who we now are. It feels as if we risk losing a part of ourselves forever by throwing away that last delicate link.

“Life without memory,” Luis Buñuel tells us, “is no life at all.” “Our memory,” he adds, “is our coherence, our reason, our feeling, even our action. Without it, we are nothing.” Perhaps this accounts for why tech criticism was born in a debate about memory. In the Phaedrus, Plato’s Socrates tells a cautionary tale about the invention of writing in which writing is framed as a technology that undermines the mind’s power to remember. What we can write down, we will no longer know for ourselves–or so Socrates worried. He was, of course, right. But, as we all know, this was an incomplete assessment of writing. Writing did weaken memory in the way Plato feared, but it did much else besides. It would not be the last time critics contemplated the effects of a new technology on memory.

I’ve not written nearly as much about memory as I once did, but it continues to be an area of deep interest. That interest was recently renewed not only by personal circumstances but also by the rollout of Google Photos, a new photo storage app with cutting edge sorting and searching capabilities. According to Steven Levy, Google hopes that it will be received as a “visual equivalent to Gmail.” On the surface, this is just another digital tool designed to store and manipulate data. But the data in question is, in this case, intimately tied up with our experience and how we remember it. It is yet another tool designed to store and manipulate memory.

When Levy asked Bradley Horowitz, the Google executive in charge of Photos, what problem Google Photos solves, Horowitz replied,

“We have a proliferation of devices and storage and bandwidth, to the point where every single moment of our life can be saved and recorded. But you don’t get a second life with which to curate, review, and appreciate the first life. You almost need a second vacation to go through the pictures of the safari on your first vacation. That’s the problem we’re trying to fix — to automate the process so that users can be in the moment. We also want to bring all of the power of computer vision and machine learning to improve those photos, create derivative works, to make suggestions…to really be your assistant.”

It shouldn’t be too surprising that the solution to the problem of pervasive documentation enabled by technology is a new technology that allows you to continue documenting with even greater abandon. Like so many technological fixes to technological problems, it’s just a way of doubling down on the problem. Nor is it surprising that he also suggested this would help users “be in the moment” without a hint of irony.

But here is the most important part of the whole interview, emphasis mine:

“[…] so part of Google photos is to create a safe space for your photos and remove any stigma associated with saving everything. For instance, I use my phone to take pictures of receipts, and pictures of signs that I want to remember and things like that. These can potentially pollute my photo stream. We make it so that things like that recede into the background, so there’s no cognitive burden to actually saving everything.”

Replace saving with remembering and the potential significance of a tool like Google Photos becomes easier to apprehend. Horowitz is here confirming that users will need to upload their photos to Google’s Cloud if they want to take advantage of Google Photos’ most impressive features. He anticipates that there will be questions about privacy and security, hence the mention of safety. But the really important issue here is this business about saving everything.

I’m not entirely sure what to make of the stigma Horowitz is talking about, but the cognitive burden of “saving everything” is presumably the burden of sorting and searching. How do you find the one picture you’re looking for when you’ve saved thousands of pictures across a variety of platforms and drives? How do you begin to organize all of these pictures in any kind of meaningful way? Enter Google Photos and its uncanny ability to identify faces and group pictures into three basic categories–People, Places, and Things–as well as a variety of sub-categories such as “food,” “beach,” or “cars.” Now you don’t need that second life to curate your photos. Google does it for you. Now we may document our lives to our heart’s content without a second thought about whether or not we’ll ever go back to curate our unwieldy hoard of images.

I’ve argued elsewhere that we’ve entered an age of memory abundance, and the abundance of memories makes us indifferent to them. When memory is scarce, we treasure it and care deeply about preserving it. When we generate a surfeit of memory, our ability to care about it diminishes proportionately. We can no longer relate to how Roland Barthes treasured his mother’s photograph; we are more like Andy Warhol, obsessively recording all of his interactions and never once listening to the recordings. Plato was, after all, even closer to the mark than we realized. New technologies of memory reconfigure the affections as well as the intellect. But is it possible that Google Photos will prove this judgement premature? Has Google figured out how we may have our memory cake and eat it too?

I think not, and there’s a historical precedent that will explain why.

Ivan Illich, in his brilliant study of medieval reading and the evolution of the book, In the Vineyard of the Text, noted how emerging textual technologies reconfigured how readers related to what they read. It is a complex, multifaceted argument and I won’t do justice to it here, but the heart of it is summed up in the title of Illich’s closing chapter, “From Book to Text.” After explaining what Illich meant by that formulation, I’m going to suggest that we consider an analogous development: from photograph to image.

Like photography, writing is, as Plato understood, a mnemonic technology. The book or codex is only one form the technology has taken, but it is arguably the most important form owing to its storage capacity and portability. Contrast the book to, for instance, a carved stone tablet or a scroll and you’ll immediately recognize the brilliance of the design. But the matter of sorting and searching remained a significant problem until the twelfth century. It is then that new features appeared to improve the book’s accessibility and user-friendliness, among them chapter titles, pagination, and the alphabetized index. Now one could access particular passages without having to read the whole work or, more to the point, without having to memorize the passages or their location in the book (illuminated manuscripts were designed to aid with the latter).

My word choice in describing the evolution of the book above was, of course, calculated to make us see the book as a technology and also to make certain parallels to the case of digital photography more obvious. But what was the end result of all of this innovation? What did Illich mean by saying that the book became a text?

Borrowing a phrase Katherine Hayles deployed to describe a much later development, I’d say that Illich is getting at one example of how information lost its body. In other words, prior to these developments it was harder to imagine the text of a book as a free-floating reality that could be easily lifted and presented in a different format. The ideas, if you will, and the material that conveyed them–the message and medium–were intimately bound together; one could hardly imagine the two existing independently. This had everything to do with the embodied dimensions of the reading experience and the scarcity of books. Because there was no easy way to dip in and out of a book to look for a particular fragment and because one would likely encounter but one copy of a particular work, the work was experienced as a whole that lived within the particular pages of the book one held in hand.

The book had then been read reverentially as a window on the world; it yielded what Illich termed monastic reading. The text was later, after the technical innovations of the twelfth century, read as a window on the mind of the author; it yielded scholastic reading. We might also characterize these as devotional reading and academic reading, respectively. Illich summed it up this way:

“The text could now be seen as something distinct from the book. It was an object that could be visualized even with closed eyes [….] The page lost the quality of soil in which words are rooted. The new text was a figment on the face of the book that lifted off into autonomous existence [….] Only its shadow appeared on the page of this or that concrete book. As a result, the book was no longer the window onto nature or god; it was no longer the transparent optical device through which a reader gains access to creatures or the transcendent.”

Illich had, a few pages earlier, put the matter more evocatively: “Modern reading, especially of the academic and professional type, is an activity performed by commuters or tourists; it is no longer that of pedestrians and pilgrims.”

I recount Illich’s argument because it illuminates the changes we are witnessing with regard to photography. Illich demonstrated two relevant principles. First, that small technical developments can have significant and lasting consequences for the experience and meaning of media. The move from analog to digital photography should naturally be granted priority of place, but subsequent developments such as those in face recognition software and automated categorization should not be underestimated. Second, that improvements in what we might today call retrieval and accessibility can generate an order of abstraction and detachment from the concrete embodiment of media. And this matters because the concrete embodiment, the book as opposed to the text, yields kinds and degrees of engagement that are unique to it.

Let me try to put the matter more directly and simultaneously apply it to the case of photography. Improving accessibility meant that readers could approach the physical book as the mere repository of mental constructs, which could be poached and gleaned at whim. Consequently, the book was something to be used to gain access to the text, which now appeared for the first time as an abstract reality; it ceased to be itself a unique and precious window on the world and its affective power was compromised.

Now, just as the book yielded to the text, so the photograph yields to the image. Imagine a 19th-century woman gazing lovingly at a photograph of her son. The woman does not conceive of the photograph as one instantiation of the image of her son. Today, however, we who hardly ever hold photographs anymore can hardly help thinking in terms of images, which may be displayed on any number of different platforms, not to mention manipulated at whim. The image is an order of abstraction removed from the photograph, and it would be hard to imagine someone treasuring it in the same way that we might treasure an old photograph. Perhaps a thought experiment will drive this home: try to imagine the emotional distance between the act of tearing up a photograph and that of deleting an image.

Now let’s come back to the problem Google Photos is intended to solve. Will automated sorting and categorization along with the ability to search succeed in making our documentation more meaningful? Moreover, will it overcome the problems associated with memory abundance? Doubtful. Instead, the tools will facilitate further abstraction and detachment. They are designed to encourage the production of even more documentary data and to further diminish our involvement in their production and storage. Consequently, we will continue to care less not more about particular images.

Of course, this hardly means the tools are useless or that images are meaningless. I’m certain that face recognition software, for instance, can and will be put to all sorts of uses, benign and otherwise, and that the reams of data users feed Google Photos will only help to improve and refine the software. And it is also true that images can be made use of in ways that photographs never could. But perhaps that is the point. A photograph we might cherish; we tend to make use of images. Unlike the useless stuff around which my memories accumulate and that I struggle to throw away, images are all use-value, and we don’t think twice about deleting them when they have no use.

Finally, Google’s answer to the problem of documentation, that it takes us out of the moment as it were, is to encourage such pervasive and continual documentation that it is no longer experienced as a stepping out of the moment at all. The goal appears to be a state of continual passive documentation in which the distinction between experience and documentation blurs until the two are indistinguishable. The problem is not so much solved as it is altogether transcended. To experience life will be to document it. In so doing we are generating a second life, a phantom life that abides in the Cloud.

And perhaps we may, without stretching the bounds of plausibility too far, reconsider that rather ethereal, heavenly metaphor–the Cloud. As we generate this phantom life, this double of ourselves constituted by data, are we thereby hoping, half-consciously, to evade or at least cope with the unremitting passage of time and, ultimately, our mortality?

For Your Consideration

In the recent past, I might have been tempted to write a blog post about these. As things stand, I’ll merely point you to them.

“Valley of God” in the Financial Times — Faith in Silicon Valley, with “in” to be taken in both senses. File under religion of technology.

“He stopped going to church. Instead, he went to the computer – “there was this thing called Google” – and started researching theories of evolution to recast his understanding of the world. After the terrorist attacks of September 11 2001, he discovered the potential to organise political activists on the internet. And when he got sick again, he credited the internet with saving his life. He replaced his faith in the Christian God of his childhood with faith in technology.”

“Making the Land Our Own” in American Scientist — A review of American Georgics: Writings on Farming, Culture, and the Land. Opening:

“Forty-eight years ago in his groundbreaking book, The Machine in the Garden: Technology and the Pastoral Ideal in America, historian Leo Marx cited Thomas Jefferson to illuminate the tension between farming and industry that has characterized land use in the United States for more than two centuries.”

“Computer Literacy and the Cybernetic Dream” — Short lecture by Ivan Illich delivered in 1987. Interesting throughout.

“With great pains she has trained her inner Descartes and her inner Pascal to watch each other: to balance mind and body, spirit and flesh, logic and feeling.”

“When I think of the glazing which the screen brings out in the eyes of its user, my entrails rebel when somebody says that screen and eye are ‘facing’ each other.”

Marcel Jousse: Forgotten Pioneer of Media Studies

Marcel Jousse was a pioneering scholar of gesture and orality. He was a younger contemporary and student of Marcel Mauss. During the inter-war years, he published a series of seminal studies on orality and gesture that garnered widespread recognition. The publication of his first book in 1925, The Rhythmic and Mnemotechnical Oral Style of the Verbo-motors, caused an immediate sensation and earned him a series of prestigious posts in Paris, including a stint at the Sorbonne. However, shortly after his death in 1961, Jousse’s work fell into relative obscurity. Because his work is only recently finding its way into English translation, thanks largely to the efforts of Edgard Richard Sienaert, he is little known in the English-speaking world. (To get a feel for how little known, take a look at his Wikipedia page.) But his work did not escape notice altogether. It features prominently in Walter Ong’s Orality and Literacy.

Ong advanced a simple, yet profound thesis: “writing restructures consciousness.” As Ong traced the antecedents of his thesis, which was largely the synthesis of a substantial body of existing work, he acknowledged a debt to Jousse’s distinction, based on his rural upbringing and extensive field work in the Middle East, between “oral composition” and “written composition.” Further on, Ong succinctly summarized Jousse’s larger theoretical framework:

“Protracted orally based thought, even when not in formal verse, tends to be highly rhythmic, for rhythm aids recall, even physiologically. Jousse has shown the intimate linkage between rhythmic oral patterns, the breathing process, gesture, and the bilateral symmetry of the human body in ancient Aramaic and Hellenic targums …”

Ong also deployed Jousse’s formulation, verbomotor, to designate cultures that “retain enough oral residue to remain significantly word-attentive in a person-interactive context (the oral type of context) rather than object-attentive.” It may not be entirely unreasonable to suggest that Ong’s work is in large part an elaboration of Jousse’s research. And, while I haven’t done the research to confirm this, I’m willing to bet that somewhere along the line Jousse also played a part in the thought of Marshall McLuhan.

Not unlike McLuhan’s, Jousse’s method and writing were controversial, and in some respects ahead of their time. Here is Sienaert’s description of his first book, which was at the time termed “The Jousse Bomb” (I’m not making that up):

“The Oral Style is a most unusual book. Jousse had read some five thousand books from a bewildering variety of disciplines. From these, he selected five hundred pertinent to his topic, and from them he chose extracts which reflected in some way his observations, which he linked by his own bracketed words, sentences and paragraphs. He thus recycled old materials, building a new house from old bricks, following his own research injunction: The aim of research is to quest for and discover fresh insights and under­standing. But how can we discover something fresh and new when it appears as if all has already been discovered? By the incessant, meticulous and de­tailed scrutiny of the Old.”

Ivan Illich also drew on Jousse in his study of medieval cultures of reading, In the Vineyard of the Text. Illich was particularly impressed by Jousse’s work on psychomotor reading techniques employed in Jewish, Christian, and Islamic settings. Memorization in these contexts was construed as a fully embodied rather than strictly mental activity. Illich noted that the content of sacred texts was memorized “through careful attention paid to the psychomotor nerve impulses which accompany the sentences being learned.” In Koranic and Jewish schools, students read aloud as they swayed and rocked back and forth and in this way were able to later “re-evoke” the text through the activation of those same body movements. In this analysis, Illich is explicitly drawing on research conducted by Jousse:

“Marcel Jousse has studied these psychomotor techniques of fixing a spoken sequence in the flesh. He has shown that for many people, remembrance means the triggering of a well-established sequence of muscular patterns to which the utterances are tied. When the child is rocked during a cradle song, when the reapers bow to the rhythm of a harvest song, when the rabbi shakes his head while he prays or searches for the right answer, or when the proverb comes to mind only upon tapping for a while — according to Jousse, these are just a few examples of a widespread linkage of utterance and gesture. Each culture has given its own form to this bilateral, dissymmetric complementarity by which sayings are graven right and left, forward and backward into trunk and limbs, rather than just into the ear and the eye.”

Ong’s and Illich’s concerns overlap with, but do not encompass, the scope of Jousse’s ambitious anthropological project. Jousse developed a cosmological, mimetic theory of human communication. The universe, according to Jousse, impresses itself upon human beings. In fact, it impresses itself on all objects and organisms; the whole of reality is acting and acted upon. Human beings, however, not only receive this impression; they also act out the impression they have received, and this acting out is originally gestural. Sienaert summarizes:

“Man thus first relates to the world which imposes upon him the play of actual experiences. But this is not a passive process: on reception of reality, man is also animated by an energy that is released and that makes him react in the form of gestures.”

Moreover, human beings are uniquely capable not only of responding in their gestures to the impressions of reality, but also of re-playing or re-presenting those impressions. In other words, they can remember; they have memories. And before the advent of language, these memories were carried in the body. The transition from gestural to spoken language marks, in Jousse’s view, the transition from anthropology to ethnology. Generic humanity is particularized through the conventional language into which it is socialized.

Yet, even after this transition, the gestural foundations of communication and response to the universe remain embedded in the human being. These underlying structuring principles reveal themselves in what Jousse termed “the oral style.” The oral style is encapsulated in three laws summarized as follows by Sienaert:

1. Le rythmo-mimisme: the law of rhythmo-mimicry. Man is a mimic, he receives, registers, plays, and replays his actual experiences; as movement is possible in sequence only, mimicry is necessarily linked with rhythm.

2. Le bilatéralisme: the law of bilateralism. Man can only express himself in accordance with his physical structure which is bilateral—left and right, up and down, back and forth—and like his global and manual expression, his verbal expression will tend to be bilateral, to balance symmetrically, following a physical and physiological need for equilibrium …

3. Le formulisme: the law of formulism. The biological tendency towards the stereotyping of gestures creates habit, which ensures immediate, easy and sure replay; it is a facilitating psycho-physiological device as it organizes the intussusceptions and the mnesic replay in automatisms—acquired devices necessary to a firm basis for action …

In formulating these laws, based on his study of oral cultures, Jousse came strikingly close to the most prominent contours of the phenomenological account of the body’s role in human perception developed independently in the tradition of thought running from Husserl to Merleau-Ponty. These laws, in other words, may be understood to govern not only verbal expression, but also embodied experience as a whole.

Oral Social Literacy, Past and Present

Ivan Illich’s In the Vineyard of the Text is an exploration of the evolution of reading and the book in Western Europe during the 12th century. It is focused on Hugh of St. Victor and his well-known work titled the Didascalicon, essentially the first guide to the art of reading.

Illich notes, “Before Hugh’s generation, the book is a record of the author’s speech or dictation. After Hugh, increasingly it becomes a repertory of the author’s thought, a screen onto which one projects still unvoiced intentions.”

Illich early on acknowledges his debt to Walter Ong who synthesized a great deal of research on the cultural consequences of the shift from orality to literacy (and later to what Ong called the secondary orality of electronic media).

What is sometimes lost in this schema is the persistence of orality after the emergence of literacy. And not only in the sense that oral cultures existed alongside literate ones, but also in the persistence of orality within literate societies.

A full 1,500 years after literacy was effectively internalized into Western society (which is not the same thing as saying that all of those living in Western society were literate), reading remained a fundamentally oral activity. The quotation from Illich above is drawn from a chapter titled “Recorded Speech to Record of Thought.” That title nicely captures the degree to which, prior to the period Illich examined, writing was understood as a record of oral communication rather than as its own distinct medium.

Here is Illich again on the orality (and corporeality) of literacy through the late medieval period:

“In a tradition of one and a half millennia, the sounding pages are echoed by the resonance of the moving lips and tongue. The reader’s ears pay attention, and strain to catch what the reader’s mouth gives forth. In this manner the sequence of letters translates directly into body movements and patterns nerve impulses. The lines are a sound track picked up by the mouth and voiced by the reader for his own ear. By reading, the page is literally embodied, incorporated.”

And here again on the oral and social nature of reading:

“The monastic reader — chanter or mumbler — picks the words from the lines and creates a public social auditory ambience. All those who, with the reader, are immersed in this hearing milieu are equals before the sound … Fifty years after Hugh, typically, this was no longer true. The technical activity of deciphering no longer creates an auditory and, therefore, a social space. The reader then flips through pages. His eyes mirror the two-dimensional page. Soon he will conceive of his own mind in analogy with a manuscript. Reading will become an individualistic activity, intercourse between a self and a page.”

As I read these passages again today, I was reminded of an essay that appeared not too long ago in the Wall Street Journal. In “Is This the Future of Punctuation?”, Henry Hitchings, the author of The Language Wars: A History of Proper English, makes the following observation about newly proposed punctuation marks such as the interrobang:

“Such marks are symptoms of an increasing tendency to punctuate for rhetorical rather than grammatical effect. Instead of presenting syntactical and logical relationships, punctuation reproduces the patterns of speech.”

The emergence of telephony, radio, and television marked the re-emergence of orality following the era of print literacy’s dominance. Ong called this secondary orality. Having appeared after literacy, it was not identical to primary orality, but it nonetheless represented a reemergence of orality and its habits which would now compete with literacy on the cultural stage.

Within the last twenty years, however, a funny thing has happened on the way to the world of secondary orality. Writing or text has reasserted itself. Text messaging, emails, online reading, e-reading, etc. — all of these together mean that most of us are deciphering a lot of text each day. Even our television screens, depending on what we are watching, may be chock full of text, scrolling or otherwise.

But this reemergence of text is marked by orality, as the observation by Hitchings suggests. Can we call this secondary literacy? Is it still useful to speak in terms of literacy and orality?

It has always seemed to me that the orality/literacy distinction got at important historical developments in communication and consciousness. The bare dichotomy glossed over a good deal, and it was always in need of qualification, but it was serviceable nonetheless. Secondary orality also pointed to important developments. But now, text having reasserted itself, what we appear to have is a thoroughly blended media environment.

Its chief characteristic is neither its orality nor its literacy. Rather, it is the preponderance of both together — overlapping, interpenetrating, jostling, complementing, conflating.

Interestingly, there has also been a reemergence of social literacy, but it is not tied to the oral as it was in the circumstances described by Illich above. Rather than an orally constituted, literate social anchored to physical presence, we have a diffuse, literacy-based, image-inflected social often untethered from physical presence.

A social space was, then, constituted by the oral performance of a written text before gathered presences. We have, today, spaces constituted as social by silent reading and the presence of absences.

File all of this under “thinking aloud.” (Except, of course, that it wasn’t!)