In Search of the Real

While advancing age is no guarantee of advancing self-knowledge, I have found that growing up a bit can be enlightening. Looking back, it now seems pretty clear to me that I have always been temperamentally Arcadian – and I’m grateful to W. H. Auden for helping me come to this self-diagnosis. In the late 1940s, Auden wrote an essay distinguishing the Arcadian and Utopian personalities. The former looks instinctively to the past for truth, goodness, and beauty; the latter searches for those same things in the unrealized future.

Along with Auden, but in much less distinguished fashion, I am an Arcadian; there is little use denying it. When I was on the cusp of adolescence, I distinctly recall lamenting with my cousin the passing of what we called the “good old days.” Believe it; it is sadly true. The “good old days,” incidentally, were the summer vacations we enjoyed not more than two or three years earlier. If I am not careful, I risk writing the grocery list elegiacally. I believe, in fact, that my first word was a sigh. This last is not true, alas, but it would not have been out of character.

So you can see that this presents a problem of sorts for someone who writes about technology. The temptation to criticize is ever present and often difficult to resist. With so many Utopians about, one can hardly be blamed. In truth, though, there are plenty of Arcadians about as well. The Arcadian is the critic of technology, the one whose first instinct is to mourn what is lost rather than celebrate what is gained. It is with this crowd that I instinctively run. They are my kindred spirits.

But Auden knew enough to turn his critical powers upon his own Arcadianism. As Alan Jacobs put it in his Introduction to Auden’s “The Age of Anxiety,” “Arcadianism may have contributed much to Auden’s mirror, but he knew that it had its own way of warping reflections.” And so do I, at least in my better moments.

I acknowledge my Arcadianism by way of self-disclosure leading into a discussion of Nathan Jurgenson’s provocative essay in The New Inquiry, “The IRL Fetish.” IRL here stands for “in real life,” offline experience as opposed to the online or virtual, and Jurgenson takes aim at those who fetishize offline experience. I can’t be certain whether he had Marx, Freud, or Lacan in view when he chose to describe the obsession with offline experience as a fetish. I suspect it was simply a rather suggestive term that connoted something of the irrational and esoteric. But it does seem clear that he views this obsession/fetish as woefully misguided at best, and this because it is built on an erroneous conceptualization of the relationship between the online and the offline.

The first part of Jurgenson’s piece describes the state of affairs that has given rise to the IRL Fetish. It is an incisive diagnosis written with verve. With clarity and vigor, he captures the degree to which the digital has penetrated our experience. Here is a sampling:

“Hanging out with friends and family increasingly means also hanging out with their technology. While eating, defecating, or resting in our beds, we are rubbing on our glowing rectangles, seemingly lost within the infostream.” [There is more than one potentially Freudian theme running through this piece.]

“The power of ‘social’ is not just a matter of the time we’re spending checking apps, nor is it the data that for-profit media companies are gathering; it’s also that the logic of the sites has burrowed far into our consciousness.”

“Twitter lips and Instagram eyes: Social media is part of ourselves; the Facebook source code becomes our own code.”

True. True. And, true.

From here Jurgenson sums up the “predictable” response from critics: “the masses have traded real connection for the virtual,” “human friends, for Facebook friends.” Laments are sounded for “the loss of a sense of disconnection,” “boredom,” and “sensory peace.” The equally predictable solution, then, is to log off and re-engage the “real” world.

Now it does not seem to me that Jurgenson thinks this is necessarily bad counsel as far as it goes. He acknowledges that “many of us, indeed, have been quite happy to occasionally log-off …” The real problem, according to Jurgenson, the thing that is “new” in the chorus of critics, is arrogant self-righteousness. Those are my words, but I think they do justice to Jurgenson’s evaluation. “Immense self-satisfaction,” “patting ourselves on the back,” boasting, “self-congratulatory consensus,” constructing “their own personal time-outs as more special” – these are his words.

This is a point I think some of Jurgenson’s critics have overlooked. At this juncture, his complaint is targeted rather precisely, at least as I read it, at the self-righteousness implicit in certain valorizations of the offline. Now, of course, deciding who is in fact guilty of self-righteous arrogance may involve making judgment calls that more often than not necessitate access to a person’s opaque intentions, and there is, as of yet, no app for that. (Please don’t tell me if there is.) But insofar as we are able to reasonably identify the attitudes Jurgenson takes to task, there is nothing particularly controversial about calling them out.

In the last third of the essay, Jurgenson pivots on the following question: “How have we come to make the error of collectively mourning the loss of that which is proliferating?” Response: “In great part, the reason is that we have been taught to mistakenly view online as meaning not offline.”

At this point, I do want to register a few reservations. Let me begin with the question above and the claim that “offline experience” is proliferating. What I suspect Jurgenson means here is that awareness of offline experience, and a certain posture toward it, is proliferating. And this does seem to be the case. Semantically, it would have to be. The notion of the offline as “real” depends on the notion of the online; it would not have emerged apart from the advent of the online. The online and the offline are mutually constitutive as concepts; as one advances, the other follows.

It remains the case, however, that “offline,” only recently constituted as a concept, describes an experience that paradoxically recedes as it comes into view. Consequently, Jurgenson’s later assertion – “There was and is no offline … it has always been a phantom.” – is only partially true. In the sense that there was no concept of the offline apart from the online and that the online, once it appears, always penetrates the offline, then yes, it is true enough. However, this does not negate the fact that while there was no concept of the offline prior to the appearance of the online, there did exist a form of life that we can retrospectively label as offline. There was, therefore, an offline experience (even if it wasn’t known as such) realized in the past against which present online/offline experience can be compared.

What the comparison reveals is that a form of consciousness, a mode of human experience is being lost. It is not unreasonable to mourn its passing, and perhaps even to resist it. It seems to me that Jurgenson would not necessarily be opposed to this sort of rear-guard action if it were carried out without an attendant self-righteousness or aura of smug superiority. But he does appear to be claiming that there is no need for such rear-guard actions because, in fact, offline experience is as prominent and vital as it ever was. Here is a representative passage:

“Nothing has contributed more to our collective appreciation for being logged off and technologically disconnected than the very technologies of connection. The ease of digital distraction has made us appreciate solitude with a new intensity. We savor being face-to-face with a small group of friends or family in one place and one time far more thanks to the digital sociality that so fluidly rearranges the rules of time and space. In short, we’ve never cherished being alone, valued introspection, and treasured information disconnection more than we do now.”

It is one thing, however, to value a kind of experience, and quite another to actually experience it. It seems to me, in fact, that one portion of Jurgenson’s argument may undercut the other. Here are his two central claims, as I understand them:

1. Offline experience is proliferating; we enjoy it more than ever before.

2. Online experience permeates offline experience; the distinction is untenable.

But if the online now permeates the offline – and I think Jurgenson is right about this – then it cannot also be the case that offline experience is proliferating. The confusion lies in failing to distinguish between “offline” as a concept that emerges only after the online appears, and “offline” as a mode of experience unrecognized as such that predates the online. Let us call the former the theoretical offline and the latter the absolute offline.

Given the validity of claim 2 above, claim 1 holds only for the theoretical offline, not the absolute offline. And it is the passing of the absolute offline that critics mourn. The theoretical offline makes for a poor substitute.

The real strength of Jurgenson’s piece lies in his description of the immense interpenetration of the digital and material (another binary that does not quite hold up, actually). According to Jurgenson, “Smartphones and their symbiotic social media give us a surfeit of options to tell the truth about who we are and what we are doing, and an audience for it all, reshaping norms around mass exhibitionism and voyeurism.” To put it this way is to mark the emergence of a ubiquitous, unavoidable self-consciousness.

I would not say, as Jurgenson does at one point, that “Facebook is real life.” The point, of course, is that every aspect of life is real. There is no non-being in being. Perhaps it is better to speak of the real not as the opposite of the virtual, but as that which is beyond our manipulation, what cannot be otherwise. In this sense, the pervasive self-consciousness that emerges alongside the socially keyed online is the real. It is like an incontrovertible law that cannot be broken. It is a law haunted by the loss its appearance announces, and it has no power to remedy that loss. It is a law without a gospel.

Once self-consciousness takes its place as the incontrovertibly real, it paradoxically generates a search for something other than itself, something more real. This is perhaps the source of what Jurgenson has called the IRL fetish, and in this sense it has something in common with the Marxian and Freudian fetish: it does not know what it seeks. The disconnection, the unplugging, the logging off are pursued as if they were the sought-after object. But they are not. The true object of desire is a state of pre-digital innocence that, like all states of innocence, once lost can never be recovered.

Perhaps I spoke better than I knew when, as a child, I lamented the passing of those pleasant summers. After all, I am of that generation for which the passing from childhood into adulthood roughly coincided with the passage into the Digital Age. There is a metaphor in that observation. To pass from childhood into adulthood is to come into self-awareness; it is to leave naivety and innocence behind. The passage into the Digital Age is also a coming into a pervasive form of self-awareness that now precludes the possibility of naïve experience.

All in all, it would seem that I have stumbled into my Arcadianism yet again.

Arts of Memory 2.0: The Sherlock Update

I’m a little behind the times, I know, but I just finished watching the second episode in the second season of Sherlock, “The Hounds of Baskerville.” If you’ve watched the episode, you’ll remember a scene in which Sherlock asks to be left alone so that he may enter his “Mind Palace.” (If you’ve not been watching Sherlock, you really ought to.) The “Mind Palace” in question turns out to be what has traditionally been called a memory palace or memory theater. It is the mental construct at the heart of the ancient ars memoriae, or art of memory.

Longtime (and long-suffering) readers will remember a handful of posts discussing this ancient art and also likening Facebook to something like a materialized memory palace (here, here, and here). To sum up:

“… the basic idea is that one constructs an imagined space in the mind (similar to the work of the Architect in the film Inception, only you’re awake) and then populates the space with images that stand in for certain ideas, people, words, or whatever else you want to remember.  The theory is that we remember images and places better than we do abstract ideas or concepts.”

And here is the story of origins:

“… the founding myth of what Frances Yates has called the “art of memory” as recounted by Cicero in his De oratore. According to the story, the poet Simonides of Ceos was contracted by Scopas, a Thessalian nobleman, to compose a poem in his honor.  To the nobleman’s chagrin, Simonides devoted half of his oration to the praise of the gods Castor and Pollux.  Feeling himself cheated out of half of the honor, Scopas brusquely paid Simonides only half the agreed upon fee and told him to seek the rest from the twin gods.  Not long afterward that same evening, Simonides was summoned from the banqueting table by news that two young men were calling for him at the door.  Simonides sought the two callers, but found no one.  While he was out of the house, however, the roof caved in killing all of those gathered around the table including Scopas. As Yates puts it, “The invisible callers, Castor and Pollux, had handsomely paid for their share in the panegyric by drawing Simonides away from the banquet just before the crash.”

The bodies of the victims were so disfigured by the manner of death that they were rendered unidentifiable even by family and friends.  Simonides, however, found that he was able to recall where each person was seated around the table and in this way he identified each body.  This led Simonides to the realization that place and image were the keys to memory, and in this case, also a means of preserving identity through the calamity of death.”

The most interesting thing about the manner in which Sherlock presents the memory palace is that it has been conceived on the model of something like a touchscreen interface. You can watch the clip below to see what I mean. In explaining to a third party what Holmes is doing, Watson describes something like a traditional memory palace (this is not in the video clip). But what we see Holmes doing is quite different. Rather than mentally walking through an architectural space, he swipes at images (visualized for the audience) organized into something like an alphabetic multi-media database.

Surprisingly, though, this stripped-down structure does have a precedent in the medieval practice of the arts of memory. Ivan Illich describes what the 12th-century scholar Hugh of St. Victor required of his pupils:

“… Hugh asks his pupils to acquire an imaginary inner space … and tells them how to proceed in its construction. He asks the pupil to imagine a sequence of whole numbers, to step on the originating point of their run and let the row reach the horizon. Once these highways are well impressed upon the fantasy of the child, the exercise consists in mentally ‘visiting’ these numbers at random. In his imagination the student is to dart back and forth to each of the spots he has marked by a roman numeral.”

This flat and bare schematic was the foundation for more elaborate, three-dimensional memory palaces to be built later.

The update to the memory theater is certainly not out of keeping with the spirit of the tradition, which always looked to familiar spaces as a model. What more familiar space can we conceive of these days than the architecture of our databases? Thought experiment: Visualize your Facebook page. Can you do it? Can you scroll through it? Can you mentally click and visualize new pages? Can you scroll through your friends? Might you even be able to mentally scroll through your pictures? Well, there you have it; you have a memory palace and you didn’t even know it.
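For those who like the analogy made concrete, here is a minimal sketch in Python, purely my own illustration rather than anything drawn from the episode or the ancient handbooks, of a memory palace modeled as a keyed data structure; every name in it is hypothetical.

from collections import OrderedDict

class MemoryPalace:
    """A sequence of loci, each holding a vivid image that encodes an idea."""

    def __init__(self):
        # An ordered mapping preserves the route of the mental walk.
        self.loci = OrderedDict()

    def place(self, locus, image, idea):
        # Station a memorable image at a locus to stand for an idea.
        self.loci[locus] = {"image": image, "idea": idea}

    def walk(self):
        # Traverse the loci in order: the classical walk through the palace.
        for locus, entry in self.loci.items():
            yield locus, entry["image"], entry["idea"]

    def visit(self, locus):
        # Jump straight to a single locus: random access, the way Hugh's
        # pupils darted among their numbered spots, or the way Sherlock
        # swipes directly to an entry in his database-like palace.
        return self.loci.get(locus)

palace = MemoryPalace()
palace.place("front door", "a crowing rooster", "wake at dawn")
palace.place("hallway", "a giant brass key", "lock the office")

for locus, image, idea in palace.walk():
    print(f"At the {locus}: see {image} -> remember '{idea}'")

The only point of the sketch is that the two retrieval styles, the ordered walk and the random-access lookup, correspond to the traditional palace and to Sherlock’s database-like update, respectively.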

The Simple Life in the Digital Age

America has always been a land of contradictions. At the very least we could say the nation’s history has featured the sometimes creative, sometimes destructive interplay of certain tensions. At least one of these tensions can be traced right back to the earliest European settlers. In New England, Puritans established a “city on a hill,” a community ordered around the realization of a spiritual ideal.  Further south came adventurers, hustlers, and entrepreneurs looking to make their fortune. God and gold, to borrow the title of Walter R. Mead’s account of the Anglo-American contribution to the formation of the modern world, sums it up nicely.  Of course, this is also a rather ancient opposition. But perhaps we could say that never before had these two strands come together in quite the same way to form the double helix of a nation’s DNA.

This tension between spirituality and materialism also overlaps with at least two other tensions that have characterized American culture from its earliest days: The first of these, the tension between communitarianism and individualism, is easy to name. The other, though readily discernible, is a little harder to capture. For now I’m going to label this pair hustle and contemplation and hope that it conveys the dynamic well enough. Think Babbitt and Thoreau.

These pairs simplify a great deal of complexity, and of course they are merely abstractions. In reality, the oppositions are interwoven and mutually dependent. But thus qualified, they nonetheless point to recurring and influential types within American culture. These types, however, have not been balanced and equal. There has always seemed to be a dominant partner in each pairing: materialism, individualism, and hustle. But it would be a mistake to underestimate the influence of spirituality, communitarianism, and contemplation. Perhaps it is best to view them as the counterpoint to the main theme of American culture, together creating the harmony of the whole.

One way of nicely summing up all that is entailed by the counterpoints is to call it the pursuit of the simple life. The phrase sounds quaint, but it worked remarkably well in the hands of historian David E. Shi. In 1985, right in the middle of the decade that was to become synonymous with crass materialism – the same year Madonna released “Material Girl” – Shi published The Simple Life: Plain Living And High Thinking In American Culture. The audacity!

Shi weaves a variegated tapestry of individuals and groups that have advocated the simple life in one form or another throughout American history. Even though he purposely leaves out the Amish, Mennonites, and similar communities, he still is left with a long and diverse list of practitioners. Altogether they represent a wide array of motives animating the quest for the simple life. These include: “a hostility toward luxury and a suspicion of riches, a reverence for nature and a preference for rural over urban ways of life and work, a desire for personal self-reliance through frugality and diligence, a nostalgia for the past and a scepticism toward the claims of modernity, conscientious rather than conspicuous consumption, and an aesthetic taste for the plain and functional.”

This net gathers together Puritans and Quakers, Jeffersonians and Transcendentalists, Agrarians and Hippies, and many more. Perhaps if Shi were to update his work he might include hipsters in the mix. In any case, he would have no shortage of contemporary trends and movements to choose from. None of them dominant, of course, but recognizable and significant counterpoints still.

If I were tasked with updating Shi’s book, for example, I would certainly include a chapter on the critics of the digital age. Not all such critics would fit neatly into the simple life tradition, but I do think a good many would – particularly those who are concerned that the pace and rhythm of digitally augmented life crowds out solitude, silence, and reflection. Think, for example, of the many “slow” movements and advocates (myself included) of digital sabbaths. They would comfortably take their place alongside many of the individuals and movements in Shi’s account who have taken the personal and social consequences of technological advance as their foil. Thoreau is only the most famous example.

Setting present day critics of digital life in the tradition identified by Shi has a few advantages. For one thing, it reminds us that the challenges posed by digital technologies, while having their particularities, are not entirely novel in character. Long before the dawn of the digital age, individuals struggled to find the right balance between their ideals for the good life and the possibilities and demands created by the emergence of new technologies.

Moreover, we may readily and fruitfully apply some of Shi’s conclusions about the simple life tradition to the contemporary criticisms of life in the digital age.

First, the simple life has always been a minority ethic. “Many Americans have not wanted to lead simple lives,” Shi observes, “and not wanting to is the best reason for not doing so.” But, in his view, this does not diminish the salutary leavening effect of the few on the culture at large.

Yet, Shi concedes, “Proponents of the simple life have frequently been overly nostalgic about the quality of life in olden times, narrowly anti-urban in outlook, and too disdainful of the benefits of prosperity and technology.” Better to embrace the wisdom of Lewis Mumford, “one of the sanest of all the simplifiers” in Shi’s estimation. According to Mumford,

“It is not enough to say, as Rousseau once did, that one has only to reverse all current practice to be right … If our new philosophy is well-grounded we shall not merely react against the ‘air-conditioned nightmare’ of our present culture; we shall also carry into the future many elements of quality that this culture actually embraces.”

Sound advice indeed.

If we are tempted to dismiss the critics for their inconsistencies, however, Shi would have us think again: “When sceptics have had their say, the fact remains that there have been many who have demonstrated that enlightened self-restraint can provide a sensible approach to living that can be fruitfully applied in any era.”

But it is important to remember that the simple life at its best, now as ever, requires a person “willing it for themselves.” Impositions of the simple life will not do. In fact, they are often counterproductive and even destructive. That said, I would add, though Shi does not make this point in his conclusion, that the simple life is perhaps best sustained within a community of practice.

Wisely, Shi also observes, “Simplicity is more aesthetic than ascetic in its approach to good living.” Consequently, it is difficult to lay down precise guidelines for the simple life, digital or otherwise. Moderation takes many forms. And so individuals must deliberately order their priorities “so as to distinguish between the necessary and superfluous, useful and wasteful, beautiful and vulgar,” but no one such ordering will be universally applicable.

Finally, Shi’s hopeful reading of the possibilities offered by the pursuit of the simple life remains resonant:

“And for those with the will to believe in the possibility of the simple life and act accordingly, the rewards can be great. Practitioners can gradually wrest control of their own lives from the manipulative demands of the marketplace and the workplace … Properly interpreted, such a modern simple life informed by its historical tradition can be both socially constructive and personally gratifying.”

Nathan Jurgenson has recently noted that criticisms of digital technologies are often built upon false dichotomies and a lack of historical perspective. In this respect they are no different from the criticisms advanced by earlier advocates of the simple life, who were tempted by similar errors. Ultimately, this will not do. Our thinking needs to be well-informed and clear-sighted, and the historical context Shi provides certainly moves us toward that end. At the very least, it reminds us that the quest for simplicity in the digital age has its analog precursors, from which we stand to learn a few things.

Robotic Zeitgeist

Robotics and AI are in the air. A sampling:

“Bot with boyish personality wins biggest Turing test”: “Eugene Goostman, a chatbot with the personality of a 13-year-old boy, won the biggest Turing test ever staged, on 23 June, the 100th anniversary of the birth of Alan Turing.”

“Time To Apply The First Law Of Robotics To Our Smartphones”: “We imagined that robots would be designed so that they could never hurt a human being. These robots have no such commitments. These robots hurt us every day.”

“Robot Hand Beats You at Rock, Paper, Scissors 100% Of The Time”: “This robot hand will play a game of rock, paper, scissors with you. Sounds like fun, right? Not so much, because this particular robot wins every. Single. Time.”

Next, two on the same story coming out of Google’s research division:

“I See Cats”: “Google researchers connected 16,000 computer cores together into a huge neural net (like the network of neurons in your brain) and then used a software program to ask what it (the neural net) “saw” in a pool of 1 million pictures downloaded randomly from the internet.”

“The Triumph of Artificial Intelligence! 16,000 Processors Can Identify a Cat in a YouTube Video Sometimes”: “Perhaps this is not precisely what Turing had in mind.”

Much of this talk about AI has coincided with what would have been Turing’s 100th birthday. Most of it has celebrated the brilliant mathematician and lamented the tragic nature of his life and death. This next piece, however, takes a critical look at the course of AI (or better, the ideology of AI) since Turing:

“The Trouble with the Turing Test”: “But these are not our only alternatives; there is a third way, the way of agnosticism, which means accepting the fact that we have not yet achieved artificial intelligence, and have no idea if we ever will.”

And on a slightly different, post-humanist note (via Evan Selinger):

The International Journal of Machine Consciousness has devoted an entire issue to “Mind Uploading.”

There you go; enough to keep you thinking today.

Technology and Perception: That By Which We See Remains Unseen

“Looking along the beam, and looking at the beam are very different experiences.”
— C. S. Lewis

I wrote recently about the manner in which ubiquitous realities tend to fade from view. They are, paradoxically, too pervasive to be noticed. And I suggested (although, of course, this was nothing like an original observation) that it is these very realities, hiding in front of our noses, as the cliché has it, which most profoundly shape our experience. I made note of this phenomenon in order to say that very often these ever-present, unnoticed realities are technological realities.

I want to return to these thoughts and, with a little help from Maurice Merleau-Ponty, unpack at least one of the ways in which certain technologies fade from view while simultaneously shaping our perception. In doing so I’ll also draw on a helpful article by Philip Brey, “Technology and Embodiment in Ihde and Merleau-Ponty.”

The purpose of Brey’s article is to supplement and shore up certain categories developed by the philosopher of technology, Don Ihde. To do so, Brey traces certain illustrations used by Ihde back to their source in Merleau-Ponty’s Phenomenology of Perception.

Ihde sought to create a taxonomy categorizing a limited set of ways humans interact with technology, and among his categories was one he termed “embodiment relations.” Ihde defined embodiment relations as those in which a technology mediates an individual’s perception of the world, and he gave a series of examples including glasses, telescopes, hearing aids, and a blind man’s cane. An interesting feature of each of these technologies is that they “withdraw” from view when their use becomes habitual. Ihde lists other examples, however, which Brey finds problematic as exemplars of the category. These include the hammer and a feathered hat.

(The example of the feathered hat is drawn from Merleau-Ponty. As a lady wearing a feathered hat makes her way about, she interacts with her surroundings in light of this feature, which amounts to an extension of her body.)

In both cases, Brey believes the example is less about perception (although it can be involved) and more about action. Consequently, Brey offers some further distinctions to better get at the kinds of relations Ihde was attempting to classify. He begins by dividing embodiment relations into relations that mediate perception and those that mediate motor skills.

Brey goes on to make further distinctions among the kinds of embodiment relations that mediate motor skills. Some of these involve navigational skills and tend to be of the sort that “enlarge” one’s body. The feathered hat fits into this category, as do other items, such as a worn backpack, that require us to incorporate the object into our body schema in such a way that we pre-consciously navigate as if the object were part of our body. Then there are embodiment relations that mediate motor skills in interaction with the environment. The hammer fits into this category. These objects become part of our body schema in order to extend our action in the world.

These clarifications and distinctions are helpful, and Brey is right to distinguish between embodiment relations geared toward perception and those geared toward action. But he is also right to point out that even those tools that are geared toward action involve perception to some degree. While a hammer is used primarily to mediate action, it also mediates perception. For example, a hammer strike reveals something about the surface struck.

Yet Brey believes that in this class of embodiment relations the perceptual function is “subordinate” to the motor function. This is probably a sound conclusion, but it does not seem to take into account a more subtle way in which perception comes into play. Elsewhere, I’ve written about the manner in which technology in-hand affects our perception of the world not only by offering sensory feedback, but also by shaping our interpretive acts of perception, our seeing-as. I won’t rehash that argument here; instead I want to focus on the withdrawing character of technologies through which we perceive.

The sorts of tools that mediate perception ordinarily do so while they themselves recede from view. Summarizing Ihde’s discussion of embodiment relations, Brey offers the following description of the phenomenon:

“In embodiment relations, the embodied technology does not, or hardly, become itself an object of perception. Rather, it ‘withdraws’ and serves as a (partially) transparent means through which one perceives one’s environment, thus engendering a partial symbiosis of oneself and it.”

Consider the eye as a paradigmatic example. We see all things through it, but we never see it (unless, of course, in a mirror). This is a function of what Michael Polanyi has called the “from-to” character of perception. Our intentionality is directed from our body outward to the world. “The bodily processes hide,” Mark Johnson explains, “in order to make possible our fluid, automatic experiencing of the world.”

The technologies that we take into an embodied relation do not ordinarily achieve quite so complete a withdrawal, but they do ordinarily fade from our awareness as objects in themselves. Contact lenses, for example, or the blind man’s cane. In fact, almost any tool of which we become expert users tends to withdraw as an object in its own right in order to facilitate our perception or our action.

In a short essay titled “Meditation in a Toolshed,” C. S. Lewis offers an excellent illustration of this dynamic. Granted, he was offering an illustration of a different phenomenon, but I think it fits here as well. Lewis described entering a dark toolshed and seeing before him a shaft of light coming in through a crack above the door. At that moment Lewis “was seeing the beam, not seeing things by it.” But then he stepped into the beam:

“Instantly the whole previous picture vanished. I saw no toolshed, and (above all) no beam. Instead I saw, framed in the irregular cranny at the top of the door, green leaves moving on the branches of a tree outside and beyond that, 90 odd million miles away, the sun. Looking along the beam, and looking at the beam are very different experiences.”

Notice his emphasis on the manner in which the beam itself disappears from view when one sees through it. That through which we perceive ceases to be an object that we perceive. Returning to where we began, then, we might say that one manner in which a technology becomes too pervasive to be noticed is by becoming that by which we perceive the world or some aspect of it.

It is easiest to recognize the dynamic at work in objects that are specifically designed to enhance our physical senses — eyeglasses, for example. But the principle may be expanded further (even if the mechanics shift) to include other less obvious ways we perceive through technology. The whole of Marshall McLuhan’s work, in fact, could be seen as an attempt to understand how all technology is media technology that alters perception. In other words, all technology mediates reality in some fashion, but the mediating function withdraws from view because it is that through which we perceive the content. It is the beam of light into which we step to perceive some other thing and, as with the beam, it remains unseen even while it enables and shapes our seeing.

