Evaluating the Promise of Technological Outsourcing

“It is crucial for a resilient democracy that we better understand how these powerful, ubiquitous websites are changing the way we think, interact and behave.” The websites in question are chiefly Google and Facebook. The admonition to better understand their impact on our thinking and civic deliberations comes from an article in The Guardian by Evan Selinger and Brett Frischmann, “Why it’s dangerous to outsource our critical thinking to computers.”

Selinger and Frischmann are the authors of one of the forthcoming books I am most eagerly anticipating, Being Human in the 21st Century, to be published by Cambridge University Press. I’ve frequently cited Selinger’s outsourcing critique of digital technology (e.g., here and here), which the authors will be expanding and deepening in this study. In short, Selinger has explored how a variety of apps and devices outsource labor that is essential or fundamental to our humanity. It’s an approach that immediately resonated with me, primed as I had been for it by Albert Borgmann’s work. (You can read about Borgmann in the latter link above and here.)

In this case, the crux of Selinger and Frischmann’s critique can be found in these two key paragraphs:

Facebook is now trying to solve a problem it helped create. Yet instead of using its vast resources to promote media literacy, or encouraging users to think critically and identify potential problems with what they read and share, Facebook is relying on developing algorithmic solutions that can rate the trustworthiness of content.

This approach could have detrimental, long-term social consequences. The scale and power with which Facebook operates means the site would effectively be training users to outsource their judgment to a computerised alternative. And it gives even less opportunity to encourage the kind of 21st-century digital skills – such as reflective judgment about how technology is shaping our beliefs and relationships – that we now see to be perilously lacking.

Their concern, then, is that we may be encouraged to outsource an essential skill to a device or application that promises to do the work for us. In this case, the skill we are tempted to outsource is a critical component of a healthy citizenry. As they put it, “Democracies don’t simply depend on well-informed citizens – they require citizens to be capable of exerting thoughtful, independent judgment.”

As I’m sure Selinger and Frischmann would agree, this outsourcing dynamic is one of the dominant features of the emerging techno-social landscape, and we should work hard to understand its consequences.

As some of you may remember, I’m fond of questions. They are excellent tools for thinking, including thinking about the ethical implications of technology. “Questioning is the piety of thought,” Heidegger once claimed in a famous essay about technology. With that in mind I’ll work my way to a few questions we can ask of outsourcing technologies.

My approach will take its point of departure from Marshall McLuhan’s Laws of Media, sometimes called the Four Effects or McLuhan’s tetrad. These four effects were offered by McLuhan as a complement to Aristotle’s Four Causes, and they were presented as a paradigm by which we might evaluate the consequences of both intellectual and material things, ideas and tools.

The four effects were Retrieval, Reversal, Obsolescence, and Enhancement. Here are the questions McLuhan and his son, Eric McLuhan, offered to unpack these four effects:

A. “What recurrence or RETRIEVAL of earlier actions and services is brought into play simultaneously by the new form? What older, previously obsolesced ground is brought back and inheres in the new form?”

B. “When pushed to the limits of its potential, the new form will tend to reverse what had been its original characteristics. What is the REVERSAL potential of the new form?”

C. “If some aspect of a situation is enlarged or enhanced, simultaneously the old condition or un-enhanced situation is displaced thereby. What is pushed aside or OBSOLESCED by the new ‘organ’?”

D. “What does the artefact ENHANCE or intensify or make possible or accelerate? This can be asked concerning a wastebasket, a painting, a steamroller, or a zipper, as well as about a proposition in Euclid or a law of physics. It can be asked about any word or phrase in any language.”

These are all useful questions, but for our purposes the focus will be on the third effect, Obsolescence. It’s in this class of effects that I think we can locate what Selinger calls digital outsourcing. I began by introducing all four, however, so that we wouldn’t be tempted to think that displacement or outsourcing is the only dynamic to which we should give our attention.

When McLuhan invites us to ask what a new technology renders obsolete, we may immediately imagine older technologies that are set aside in favor of the new. Following Borgmann, however, we can also frame the question as a matter of human labor or involvement. In other words, it is not only about older tools that we set aside but also about human faculties, skills, and subjective engagement with the world – these, too, can be displaced or outsourced by new tools. The point, of course, is not to avoid every form of technological displacement; this would be impossible and undesirable. Rather, what we need is a better way of thinking about and evaluating these displacements so that we might, when possible, make wise choices about our use of technology.

So we can begin to elaborate McLuhan’s third effect with this question:

1. What kind of labor does the tool/device/app displace? 

This question yields at least five possible responses:

a. Physical labor, the work of the body
b. Cognitive labor, the work of the mind
c. Emotional labor, the work of the heart
d. Ethical labor, the work of the conscience
e. Volitional labor, the work of the will

The schema implied by these five categories is, of course, like all such schemas, too neat. Take it as a heuristic device.

Other questions follow that help clarify the stakes. After all, what we’re after is not only a taxonomy but also a framework for evaluation.

2. What is the specific end or goal at which the displaced labor is aimed?

In other words, what am I trying to accomplish by using the technology in question? But the explicit objective I set out to achieve may not be the only effect worth considering; there are implicit effects as well. Some of these implicit effects may be subjective and others may be social; in either case they are not always evident and may, in fact, be difficult to perceive. For example, in using GPS, navigating from Point A to Point B is the explicit objective. However, the use of GPS may also impact my subjective experience of place, and this may carry political implications. So we should also consider a corollary question:

2a. Are there implicit effects associated with the displaced labor?

Consider the work of learning: If the work of learning is ultimately subordinate to becoming a certain kind of person, then it matters very much how we go about learning. This is because the manner in which we go about acquiring knowledge constitutes a kind of practice that, over the long haul, shapes our character and disposition in non-trivial ways. Acquiring knowledge through apprenticeship, for example, shapes people in one way, acquiring knowledge through extensive print reading in another, and through web-based learning in still another. The practice which constitutes our learning, if we are to learn by it, will instill certain habits, virtues, and, potentially, vices; it will shape the kind of person we are becoming.

3. Is the labor we are displacing essential or accidental to the achievement of that goal?

As I’ve written before, when we think of ethical and emotional labor, it’s hard to separate the labor itself from the good that is sought or the end that is pursued. For example, someone who pays another person to perform acts of charity on their behalf has undermined part of what might make such acts virtuous. An objective outcome may have been achieved, but at the expense of the subjective experience that would constitute the action as ethically virtuous.

A related question arises when we remember the implicit effects we discussed above:

3a. Is the labor essential or accidental to the implicit effects associated with it?

4. What skills are sustained by the labor being displaced? 

4a. Are these skills valuable for their own sake and/or transferable to other domains?

These two questions seem more straightforward, so I will say less about them. The key point is essentially the one made by Selinger and Frischmann in the article with which we began: the kind of critical thinking that democracies require of their citizens should be actively cultivated. Outsourcing that work to an algorithm may, in fact, weaken the very skill it seeks to support.

These questions should help us think more clearly about the promise of technological outsourcing. They may also help us to think more clearly about what we have been doing all along. After all, new technologies often cast old experiences in new light. Even when we are wary or critical of the technologies in question, we may still find that their presence illuminates aspects of our experience by inviting us to think about what we had previously taken for granted.

Presidential Debates and Social Media, or Neil Postman Was Right

I’ve chosen to take in the debates on Twitter. I’ve done so mostly in the interest of exploring what difference it might make to follow the debates on social media rather than on television.

Of course, the first thing to know is that the first televised debate, the famous 1960 Kennedy/Nixon debate, is something of a canonical case study in media studies. Most of you, I suspect, have heard at some point about how polls conducted after the debate found that those who listened on the radio were inclined to think that Nixon had gotten the better of Kennedy while those who watched the debate on television were inclined to think that Kennedy had won the day.

As it turns out, this is something like a political urban legend. At the very least, it is fair to say that the facts of the case are somewhat more complicated. Media scholar W. Joseph Campbell of American University, leaning heavily on a 1987 article by David L. Vancil and Sue D. Pendell, has shown that the evidence for viewer-listener disagreement is surprisingly scant and suspect. What little empirical evidence did point to a disparity between viewers and listeners depended on less than rigorous methodology.

Campbell, who’s written a book on media myths, is mostly interested in debunking the idea that viewer-listener disagreement was responsible for the outcome of the election. His point, well-taken, is simply that the truth of the matter is more complicated. With this we can, of course, agree. It would be a mistake, however, to write off the consequences over time of the shift in popular media. We may, for instance, take the first Clinton/Trump debate and contrast it to the Kennedy/Nixon debate and also to the famous Lincoln/Douglas debates. It would be hard to maintain that nothing has changed. But what is the cause of that change?

Does the evolution of media technology alone account for it? Probably not, if only because in the realm of human affairs we are unlikely to ever encounter singular causes. The emergence of new media itself, for instance, requires explanation, which would lead us to consider economic, scientific, and political factors. However, it would be impossible to discount how new media shape, if nothing else, the conditions under which political discourse evolves.

Not surprisingly, I turned to the late Neil Postman for some further insight. Indeed, I’ve taken of late to suggesting that the hashtag for 2016, should we want one, ought to be #NeilPostmanWasRight. This was a sentiment that I initially encountered in a fine post by Adam Elkus on the Internet culture wars. During the course of his analysis, Elkus wrote, “And at this point you accept that Neil Postman was right and that you were wrong.”

I confess that I rather agreed with Postman all along, and on another occasion I might take the time to write about how well Postman’s writing about technology holds up. Here, I’ll only cite this statement of his argument in Amusing Ourselves to Death:

“My argument is limited to saying that a major new medium changes the structure of discourse; it does so by encouraging certain uses of the intellect, by favoring certain definitions of intelligence and wisdom, and by demanding a certain kind of content—in a phrase, by creating new forms of truth-telling.”

This is the argument Postman presents in a chapter aptly titled “Media as Epistemology.” Postman went on to add, admirably, that “I am no relativist in this matter, and that I believe the epistemology created by television not only is inferior to a print-based epistemology but is dangerous and absurdist.”

Let us make a couple of supporting observations in passing, neither of which is original or particularly profound. First, what is it that we remember about the televised debates prior to the age of social media? Do any of us, old enough to remember, recall anything other than an adroitly delivered one-liner? You may already know exactly which ones I have in mind. Go ahead, before reading any further: call to mind your top three debate memories, and tell me whether at least one of them is not among the three that follow.

Reagan, when asked about his age, joking that he would not make an issue out of his opponent’s youth and inexperience.

Sen. Bentsen reminding Dan Quayle that he is no Jack Kennedy.

Admiral Stockdale, seemingly lost on stage, wondering, “Who am I? Why am I here?”

So how did we do? Did we have at least one of those in common? Here’s my point: what was memorable, and what counted as “winning” or “losing” a debate in the age of television, had precious little to do with the substance of an argument. It had everything to do with style and image. Again, I claim no great insight in saying as much. In fact, this is, I presume, conventional wisdom by now.

(By the way, Postman gets all the more credit if your favorite presidential debate memories involved an SNL cast member, say Dana Carvey, for example.)

Consider as well an example fresh from the first Clinton/Trump debate: Chuck Todd’s post-debate suggestion that Clinton had been “over-prepared.”

You tell me what “over-prepared” could possibly mean. Moreover, you tell me whether that is a charge you can even begin to imagine being leveled against Lincoln or Douglas or, for that matter, Nixon or Kennedy.

Let’s let Marshall McLuhan take a shot at explaining what Mr. Todd might possibly have meant.

I know, you’re not going to watch the whole thing. Who’s got the time? [#NeilPostmanWasRight] But if you did, you would hear McLuhan explaining why the 1976 Carter/Ford debate was an “atrocious misuse of the TV medium” and “the most stupid arrangement of any debate in the history of debating.” Chiefly, the content and the medium were mismatched. The style of debating both candidates embodied was ill-suited to what television prized: something approaching casual ease, warmth, and informality. Being unable to achieve that style meant “losing” the debate regardless of how well you knew your stuff. As McLuhan tells Tom Brokaw, “You’re assuming that what these people say is important. All that matters is that they hold that audience on their image.”

Incidentally, writing in Slate about this clip in 2011, David Haglund observed, “What seems most incredible to me about this cultural artifact is that there was ever a time when The Today Show would spend ten uninterrupted minutes talking about the presidential debates with a media theorist.” [#NeilPostmanWasRight]

So where does this leave us? Does social media, like television, present us with what Postman calls a new epistemology? Perhaps. We keep hearing a lot of talk about post-factual politics. If that describes our political climate, and I have little reason to doubt as much, then we did not suddenly land here after the advent of social media or the Internet. Facts, or simply the truth, have been fighting a rear-guard action for some time now.

I will make one passing observation, though, about the dynamics of following a debate on Twitter. While the entertainment on offer in the era of television was the thrill of hearing the perfect zinger, social media encourages each of us to become part of the action. Reading tweet after tweet of running commentary on the debate, from left, right, and center, I was struck by the near unanimity of tone: either snark or righteous indignation. Or, better, the near unanimity of apparent intent. No one, it seems to me, was trying to persuade anybody of anything. Insofar as I could discern a motive, I might suggest, on the one hand, something like catharsis, a satisfying expunging of emotions; on the other, the desire to land the zinger ourselves, to compose that perfect tweet that would suddenly go viral and garner thousands of retweets. I saw more than a few cross my timeline – some from accounts with thousands and thousands of followers, others from accounts with a meager few hundred – and I felt it was not unlike watching someone hit the jackpot in the slot machine next to me. Just enough incentive to keep me playing.

A citizen may have attended a Lincoln/Douglas debate to be informed and also, in part, to be entertained. The consumer of the television era tuned in to a debate ostensibly to be informed, but in reality to be entertained. The prosumer of the digital age aspires to do the entertaining.

#NeilPostmanWasRight

Perspectives on Privacy and Human Flourishing

I’ve not been able to track down the source, but somewhere Marshall McLuhan wrote, “Publication is a self-invasion of privacy. The more the data banks record about each one of us, the less we exist.”

The unfolding NSA scandal has brought privacy front and center. A great deal is being written right now about the ideal of privacy, the threats facing it from government activities, and how it might best be defended. Conor Friedersdorf, for instance, worries that our government has built “all the infrastructure a tyrant would need.” At this juncture, the concerns seem to me neither exaggerated nor conspiratorial.

Interestingly, there also seems to be a current of opinion that fails to see what all the fuss is about. Part of this current stems from the idea that if you’ve got nothing to hide, there’s nothing to worry about. There’s an excerpt from Daniel J. Solove’s 2011 book on just this line of reasoning in the Chronicle of Higher Ed that is worth reading (link via Alan Jacobs).

Others are simply willing to trade privacy for security. In a short, suggestive post on creative ambiguity with regard to privacy and government surveillance, Tyler Cowen concedes, “People may even be fine with that level of spying, if they think it means fewer successful terror attacks.” “But,” he immediately adds, “if they acquiesce to the previous level of spying too openly, the level of spying on them will get worse. Which they do not want.”

Maybe.

I wonder whether we are not witnessing the long foretold end of western modernity’s ideal of privacy. That sort of claim always comes off as a bit hyperbolic, but it’s not altogether misguided. If we grant that the notion of individual privacy as we’ve known it is not a naturally given value but rather a historically situated concept, then it’s worth considering both what factors gave rise to the concept and how changing sociological conditions might undermine its plausibility.

Media ecologists have been addressing these questions for quite a while. They’ve argued that privacy, as we understand (understood?) it, emerged as a consequence of the kind of reading facilitated by print. Privacy, in their view, is the concern of a certain type of individual consciousness that arises as a by-product of the interiority fostered by reading. Print, in these accounts, is sometimes credited with an unwieldy set of effects which include the emergence of Protestantism, modern democracy, the Enlightenment, and the modern idea of the individual. That print literacy is the sole cause of these developments is almost certainly not the case; that it is implicated in each is almost certainly true.

This was the view, for example, advanced by Walter Ong in Orality and Literacy. “[W]riting makes possible increasingly articulate introspectivity,” Ong explains, “opening the psyche as never before not only to the external objective world quite distinct from itself but also to the interior self against whom the objective world is set.” Further on he wrote,

Print was also a major factor in the development of the sense of personal privacy that marks modern society. It produced books smaller and more portable than those common in a manuscript culture, setting the stage psychologically for solo reading in a quiet corner, and eventually for completely silent reading. In manuscript culture and hence in early print culture, reading had tended to be a social activity, one person reading to others in a group. As Steiner … has suggested, private reading demands a home spacious enough to provide for individual isolation and quiet.

This last point draws architecture into the discussion, as Aaron Bady noted in his 2011 essay for MIT Technology Review, “World Without Walls”:

Brandeis and Warren were concerned with the kind of privacy that could be afforded by walls: even where no actual walls protected activities from being seen or heard, the idea of walls informed the legal concept of a reasonable expectation of privacy. It still does … But contemporary threats to privacy increasingly come from a kind of information flow for which the paradigm of walls is not merely insufficient but beside the point.

This argument was also made by Marshall McLuhan who, like his student Ong, linked it to the “coming of the book.” For his part, Ong concluded that “print encouraged human beings to think of their own interior conscious and unconscious resources as more and more thing-like, impersonal and religiously neutral. Print encouraged the mind to sense that its possessions were held in some sort of inert mental space.” Presumably, then, the accompanying assumption is that this thing-like, inert mental space is something to be guarded and shielded from intrusion.

While it is a letter, not a book, that she reads, Vermeer’s Woman in Blue has always seemed to me a fitting visual illustration of this media ecological perspective on the idea of privacy. The question all of this raises is obvious: What does the decline of the age of print entail for the idea of privacy? What happens when we enter what McLuhan called the “electric age” and Ong called the age of “secondary orality,” or what we might now call the “digital age”?

McLuhan and Ong seemed to think that the notion of privacy would be radically reconfigured, if not abandoned altogether. One could easily read the rise of social media as further evidence in defense of their conclusion. The public/private divide has been endlessly blurred. Sharing and disclosure are expected, so much so that those who do not acquiesce to the regime of voluntary and pervasive self-disclosure raise suspicions and may be judged sociopathic.

Perhaps, then, privacy is a habit of thought we may have fallen out of. This possibility was explored in an extreme fashion by Josh Harris, the dot-com era Internet pioneer who subjected himself, and willing others, to unblinking surveillance. The experiment in prophetic sociology was documented by director Ondi Timoner in the film We Live in Public.

The film is offered as a cautionary tale. Harris suffered an emotional and mental breakdown as a consequence of his experimental life. On the film’s website, Timoner added this about Harris’s girlfriend, who had enthusiastically signed up for the project: “She just couldn’t be intimate in public. And I think that’s one of the important lessons in life; the Internet, as wonderful as it is, is not an intimate medium. It’s just not. If you want to keep something intimate and if you want to keep something sacred, you probably shouldn’t post it.”

This caught my attention because it introduced the idea of intimacy rather than, or in addition to, that of privacy. As Solove argued in the piece mentioned above, we eliminate the rich complexity of all that is gathered under the idea of privacy when we reduce it to secrecy or the ability to conceal socially marginalized behaviors. Privacy, as Timoner suggests, can also be understood as the pre-condition of intimacy – and intimacy, to be clear, means more than mere sexual intimacy.

The reduction of intimacy to sexuality recalls the popular misreading of the Fall narrative in the Hebrew Bible. The description of the Edenic paradise concludes – unexpectedly, until familiarity has taught you to expect it – with the narrator’s passing observation that the primordial pair were naked and unashamed. A comment on sexual innocence, perhaps, but much more, I think. It spoke to a radical and fearless transparency born of pure guilelessness. The innocence was total and so, then, was the openness and intimacy.

Of course, the point of the story is to set up the next tragic scene, in which innocence is lost and the immediate instinct is to cover their nakedness. Total transparency is now experienced as total vulnerability, and this is the world in which we live. Intimacy of every kind is no longer a given. It must emerge alongside hard-earned trust, heroic acts of forgiveness, and self-sacrificing love. And perhaps with this realization we run up against the challenge of our digital self-publicity and the risks posed by perpetual surveillance. The space for a full-fledged flourishing of the human person is being both surrendered and withdrawn. The voluntarily and involuntarily public self is a self that operates under conditions which undermine the possibility of its own well-being.

But, this is also why I believe Bady is on to something when he writes, “Privacy has a surprising resilience: always being killed, it never quite dies.” It is why I’m not convinced that we could entirely reduce all that is entailed in the notion of privacy to a function of print literacy. If something that answers to the name of privacy is a condition of our human flourishing in our decidedly un-Edenic condition, then one hopes we will not relinquish it entirely to either the imperatives of digital culture or the machinations of the state. It is, admittedly, a tempered hope.

Technology and Perception: That By Which We See Remains Unseen

“Looking along the beam, and looking at the beam are very different experiences.”
— C. S. Lewis

I wrote recently about the manner in which ubiquitous realities tend to fade from view. They are, paradoxically, too pervasive to be noticed. And I suggested (although, of course, this was nothing like an original observation) that it is these very realities, hiding in front of our noses as the cliché has it, which most profoundly shape our experience. I made note of this phenomenon in order to say that very often these ever-present, unnoticed realities are technological realities.

I want to return to these thoughts and, with a little help from Maurice Merleau-Ponty, unpack at least one of the ways in which certain technologies fade from view while simultaneously shaping our perception. In doing so I’ll also draw on a helpful article by Philip Brey, “Technology and Embodiment in Ihde and Merleau-Ponty.”

The purpose of Brey’s article is to supplement and shore up certain categories developed by the philosopher of technology, Don Ihde. To do so, Brey traces certain illustrations used by Ihde back to their source in Merleau-Ponty’s Phenomenology of Perception.

Ihde sought to create a taxonomy that categorized a limited set of ways humans interacted with technology, and among his categories was one he termed “embodiment relations.” Ihde defined embodiment relations as those in which a technology mediates an individual’s perception of the world and gives a series of examples including glasses, telescopes, hearing aids, and a blind man’s cane. An interesting feature of each of these technologies is that they “withdraw” from view when their use becomes habitual. Ihde lists other examples, however, which Brey finds problematic as exemplars of the category. These include the hammer and a feathered hat.

(The example of the feathered hat is drawn from Merleau-Ponty. As a lady wearing a feathered hat makes her way about, she interacts with her surroundings in light of this feature that amounts to an extension of her body.)

In both cases, Brey believes the example is less about perception (although it can be involved) and more about action. Consequently, Brey offers some further distinctions to better get at the kinds of relations Ihde was attempting to classify. He begins by dividing embodiment relations into relations that mediate perception and those that mediate motor skills.

Brey goes on to make further distinctions among the kinds of embodiment relations that mediate motor skills. Some of these involve navigational skills and tend to be of the sort that “enlarge” one’s body. The feathered hat fits into this category as do other items such as a worn backpack that require the user to incorporate the object into one’s body schema in such a way that we pre-consciously navigate as if the object were a part of our body. Then there are embodiment relations which mediate motor skills in interaction with the environment. The hammer fits into this category. These objects become part of our body schema in order to extend our action in the world.

These clarifications and distinctions are helpful, and Brey is right to distinguish between embodiment relations geared toward perception and those geared toward action. But he is also right to point out that even those tools that are geared toward action involve perception to some degree. While a hammer is used primarily to mediate action, it also mediates perception. For example, a hammer strike reveals something about the surface struck.

Yet Brey believes that in this class of embodiment relations the perceptual function is “subordinate” to the motor function. This is probably a sound conclusion, but it does not seem to take into account a more subtle way in which perception comes into play. Elsewhere, I’ve written about the manner in which technology in-hand affects our perception of the world not only by offering sensory feedback, but also by shaping our interpretive acts of perception, our seeing-as. I won’t rehash that argument here; instead I want to focus on the withdrawing character of technologies through which we perceive.

The sorts of tools that mediate perception ordinarily do so while they themselves recede from view. Summarizing Ihde’s discussion of embodiment relations, Brey offers the following description of the phenomenon:

“In embodiment relations, the embodied technology does not, or hardly, become itself an object of perception. Rather, it ‘withdraws’ and serves as a (partially) transparent means through which one perceives one’s environment, thus engendering a partial symbiosis of oneself and it.”

Consider the eye as a paradigmatic example. We see all things through it, but we never see it (unless, of course, in a mirror). This is a function of what Michael Polanyi has called the “from-to” character of perception. Our intentionality is directed from our body outward to the world. “The bodily processes hide,” Mark Johnson explains, “in order to make possible our fluid, automatic experiencing of the world.”

The technologies that we take into an embodied relation do not ordinarily achieve quite so complete a withdrawal, but they do ordinarily fade from our awareness as objects in themselves. Contact lenses, for example, or the blind man’s cane. In fact, almost any tool of which we become expert users tends to withdraw as an object in its own right in order to facilitate our perception or our action.

In a short essay titled “Meditation in a Toolshed,” C. S. Lewis offers an excellent illustration of this dynamic. Granted, he was offering an illustration of a different phenomenon, but I think it fits here as well. Lewis described entering a dark toolshed and seeing before him a shaft of light coming in through a crack above the door. At that moment Lewis “was seeing the beam, not seeing things by it.” But then he stepped into the beam:

“Instantly the whole previous picture vanished. I saw no toolshed, and (above all) no beam. Instead I saw, framed in the irregular cranny at the top of the door, green leaves moving on the branches of a tree outside and beyond that, 90 odd million miles away, the sun. Looking along the beam, and looking at the beam are very different experiences.”

Notice his emphasis on the manner in which the beam itself disappears from view when one sees through it. That through which we perceive ceases to be an object that we perceive. Returning to where we began, then, we might say that one manner in which a technology becomes too pervasive to be noticed is by becoming that by which we perceive the world or some aspect of it.

It is easiest to recognize the dynamic at work in objects that are specifically designed to enhance our physical senses — eyeglasses, for example. But the principle may be expanded further (even if the mechanics shift) to include other less obvious ways we perceive through technology. The whole of Marshall McLuhan’s work, in fact, could be seen as an attempt to understand how all technology is media technology that alters perception. In other words, all technology mediates reality in some fashion, but the mediating function withdraws from view because it is that through which we perceive the content. It is the beam of light into which we step to perceive some other thing and, as with the beam, it remains unseen even while it enables and shapes our seeing.


Nathaniel Hawthorne Anticipates McLuhan and de Chardin

Those familiar with Marshall McLuhan will remember his view, and it was not his alone, that our technologies are fundamentally extensions of ourselves. And in McLuhan’s view, electric technologies were extensions of our nervous system. So, for example, in Understanding Media, McLuhan writes:

“With the arrival of electric technology, man extended, or set outside himself, a live model of the central nervous system itself.” (65)

“When information moves at the speed of signals in the central nervous system, man is confronted with the obsolescence of all earlier forms of acceleration, such as road and rail. What emerges is a total field of inclusive awareness.” (143)

“It is a principal aspect of the electric age that it establishes a global network that has much of the character of our central nervous system. Our central nervous system is not merely an electric network, but it constitutes a single unified field of experience.” (460-461)

Those familiar with McLuhan will also know not only that McLuhan was a Roman Catholic (recent essay on that score here), but that he was influenced by the thought of a relatively fringe Catholic paleontologist and theologian/philosopher, Teilhard de Chardin, who, in The Future of Man, spoke of technology creating “a nervous system for humanity … a single, organized, unbroken membrane over the earth … a stupendous thinking machine.”

As it turns out, McLuhan and de Chardin were trading in a metaphor/analogy that had even older roots. At the outset of his fascinating (if you’re into this sort of stuff) study Electrifying America, David E. Nye cites the following passage from Nathaniel Hawthorne’s The House of the Seven Gables, in which the character Clifford exclaims,

“Then there is electricity, the demon, the angel, the mighty physical power, the all-pervading intelligence!” … “Is it a fact — or have I dreamt it — that, by means of electricity, the world of matter has become a great nerve, vibrating thousands of miles in a breathless point of time? Rather, the round globe is a vast head, a brain, instinct with intelligence! Or, shall we say, it is itself a thought, nothing but a thought, and no longer the substance which we deemed it!”

Hawthorne’s novel, in case you’re wondering, dates from 1851.