Macro-trends in Technology and Culture

Who would choose cell phones and Twitter over toilets and running water? Well, according to Kevin Kelly, certain rural Chinese farmers. In a recent essay exploring the possibilities of a post-productive economy, Kelly told of the remote villages he visited in which locals owned cell phones but lived in houses without even the most rudimentary forms of plumbing. It is a choice, Kelly notes, deeply influenced by tradition and culture. Kelly’s point is not unassailable, but it is a fair reminder that technology is a culturally mediated phenomenon.

There are, generally speaking, two schools of thought on the relationship between technology and culture. Those tending toward some variety of technological determinism would argue that technology drives culture. Those who tend toward a social constructivist view of technology would argue the opposite. Ultimately, any theory of technology must account for the strengths and weaknesses of both of these tendencies. In fact, the framing of the relationship is probably problematic anyway since there are important ways in which technology is always cultural and culture is always technological.

For the purposes of this post, I’d like to lean toward the social constructivist perspective. No technology appears in a vacuum. Its origins, evolution, adoption, deployment, and diffusion are all culturally conditioned. Moreover, the meaning of any technology is always culturally determined; it is never simply given in the form of the technology itself. Historians of technology have reminded us of this reality in numerous fascinating studies — studies of the telephone, for example, and the airplane, the electric grid, household technologies, and much else besides. When a new technology appears, it is interpreted and deployed within an already existing grid of desires, possibilities, necessities, values, symbols, expectations, and constraints. That a technology may re-order this grid in time does not negate the fact that it must first be received by it. The relationship is reciprocal.

If this is true, then it seems to me that we should situate our technologies not only within the immediate historical and social context of their genesis, but also within broader and more expansive historical trajectories. Is our use of computer technology, for example, still inflected by Baconian aspirations? What role do Cartesian dualisms play in shaping our relationship with the world through our technologies? To what degree does Christian eschatology inform technological utopianism? These seem to be important questions, the answers to which might usefully inform our understanding of the place of technology in contemporary society. Of course, these particular questions pertain especially to the West. I suspect another set of questions would apply to non-Western societies and still further questions would be raised within the context of globalization. But again, the basic premise is simply this: a given technology’s social context is not necessarily bounded by its immediate temporal horizons. We ought to be taking the long view as well.

But the rhythms of technological change (and the logic of the tech industry) would seem to discourage us from taking the long view, or at least the long view backwards in time. The pace of technological change over the last two hundred years or so has kept us busy trying to navigate the present, and its trajectory, real and ideal, casts our vision forward toward imagined futures. But what if, to borrow Faulkner’s quip, the technological past is not dead, not even past?

I’m wondering, for instance, about the large motive forces that have driven technological innovation in the West, such as the restoration of Edenic conditions or the quest for rational mastery over the natural world leading to the realization of Utopia. These early modern and Enlightenment motive forces directed and steered the evolution of technology in the West for centuries, and I do not doubt that they continue to exert their influence still. Yet, over the last century and a half Western society has undergone a series of profound transformations. How have these shaped the evolution of technology? (The inverse question is certainly valid as well.) This is, I suppose, another way of asking about the consequences of post-modernity (which I distinguish from postmodernism) for the history of technology.

In a provocative and compelling post a few months ago, Nick Carr drew an analogy between the course of technological innovation and Maslow’s hierarchy of needs:

“The focus, or emphasis, of innovation moves up through five stages, propelled by shifts in the needs we seek to fulfill. In the beginning come Technologies of Survival (think fire), then Technologies of Social Organization (think cathedral), then Technologies of Prosperity (think steam engine), then Technologies of Leisure (think TV), and finally Technologies of the Self (think Facebook, or Prozac).”

I continue to find this insightful, and I think the angle I’m taking here dovetails with Carr’s analysis. The technologies of Prosperity and Leisure correspond roughly to the technologies of modernity. Technologies of the Self correspond roughly to the technologies of post-modernity. Gone is our faith in les grands récits that underwrote a variety of utopian visions and steered the evolution of technology. We live in an age of diminished expectations; we long for the fulfillment of human desires writ small. Self-fulfillment is our aim.

This is, incidentally, a trajectory that is nicely illustrated by Lydia DePillis’ suggestion that the massive Consumer Electronics Show “is what a World’s Fair might look like if brands were more important than countries.” The contrast between the world’s fairs and the CES is telling. The world’s fairs, especially those that preceded the 1939 New York fair, were quite obviously animated by thoroughly modern ideologies. They were, as President McKinley put it, “timekeepers of progress,” and one might as well capitalize Progress. On the other hand, whatever we think of the Consumer Electronics Show, it is animated by quite different and more modest spirits. The City of Tomorrow was displaced by the entertainment center of tomorrow before giving way to the augmented self of tomorrow.

Why did technological innovation take this path? Was it something in the nature of technology itself? Or was it rather a consequence of larger sea changes in the character of society? Maybe a little of both, but probably more of the latter. It’s possible, of course, that this macro-perspective on the co-evolution of culture and technology can obscure important details and result in misleading generalizations, but if those risks can be mitigated, it may also unveil important trends and qualities that would be invisible to a more narrowly focused analysis.

‘The Connecting Is the Thinking’: Memory and Creativity

Last summer Nicholas Carr published The Shallows: What the Internet Is Doing to Our Brains, a book-length extension of his 2008 Atlantic essay, “Is Google Making Us Stupid?” The book received a good bit of attention and was reviewed seemingly everywhere in the ensuing weeks. We noted a few of those reviews here and here. Coming fashionably late to the show, Jim Holt has written a lengthy review in the London Review of Books titled “Smarter, Happier, More Productive.” Perhaps a little bit of distance is helpful.

Holt’s review ends up being one of the better summaries of Carr’s book that I have read, if only because Holt details more of the argument than most reviewers do. In the end, he tends to think that Carr is stretching the evidence and overstating his case on two fronts, intelligence and happiness. On a final point, however, creativity and its relation to memory, he is less sanguine.

Holt cites two well-known writers who are optimistic about off-loading their memories to the Internet:

This raises a prospect that has exhilarated many of the digerati. Perhaps the internet can serve not merely as a supplement to memory, but as a replacement for it. ‘I’ve almost given up making an effort to remember anything,’ says Clive Thompson, a writer for Wired, ‘because I can instantly retrieve the information online.’ David Brooks, a New York Times columnist, writes: ‘I had thought that the magic of the information age was that it allowed us to know more, but then I realised the magic of the information age is that it allows us to know less. It provides us with external cognitive servants – silicon memory systems, collaborative online filters, consumer preference algorithms and networked knowledge. We can burden these servants and liberate ourselves.’

But as Holt notes, “The idea that machine might supplant Mnemosyne is abhorrent to Carr, and he devotes the most interesting portions of his book to combatting it.”  Why not outsource our memory?

Carr responds with a bit of rhetorical bluster. ‘The web’s connections are not our connections,’ he writes. ‘When we outsource our memory to a machine, we also outsource a very important part of our intellect and even our identity.’ Then he quotes William James, who in 1892 in a lecture on memory declared: ‘The connecting is the thinking.’ And James was onto something: the role of memory in thinking, and in creativity.

Holt goes on to supplement Carr’s discussion with an anecdote about the polymathic French mathematician Henri Poincaré. What makes Poincaré’s case instructive is that “his breakthroughs tended to come in moments of sudden illumination.”

Poincaré had been struggling for some weeks with a deep issue in pure mathematics when he was obliged, in his capacity as mine inspector, to make a geological excursion. ‘The changes of travel made me forget my mathematical work,’ he recounted.

“Having reached Coutances, we entered an omnibus to go some place or other. At the moment I put my foot on the step the idea came to me, without anything in my former thoughts seeming to have paved the way for it, that the transformations I had used to define the Fuchsian functions were identical with those of non-Euclidean geometry. I did not verify the idea; I should not have had time, as, upon taking my seat in the omnibus, I went on with a conversation already commenced, but I felt a perfect certainty. On my return to Caen, for conscience’s sake, I verified the result at my leisure.”

How to account for the full-blown epiphany that struck Poincaré in the instant that his foot touched the step of the bus? His own conjecture was that it had arisen from unconscious activity in his memory.

This leads Holt to suggest, following Poincaré, that bursts of creativity and insight arise from the unconscious work of memory, and that herein lies the difference between internalized and externalized memory. We may be able to retrieve at will whatever piece of information we are looking for with a quick Google search, but that hardly approaches the power of the human mind to work creatively and imaginatively with its own stores of memory. Holt concludes:

It is the connection between memory and creativity, perhaps, which should make us most wary of the web. ‘As our use of the web makes it harder for us to lock information into our biological memory, we’re forced to rely more and more on the net’s capacious and easily searchable artificial memory,’ Carr observes. But conscious manipulation of externally stored information is not enough to yield the deepest of creative breakthroughs: this is what the example of Poincaré suggests. Human memory, unlike machine memory, is dynamic. Through some process we only crudely understand – Poincaré himself saw it as the collision and locking together of ideas into stable combinations – novel patterns are unconsciously detected, novel analogies discovered. And this is the process that Google, by seducing us into using it as a memory prosthesis, threatens to subvert.

And this leads me to one additional observation. As I’ve mentioned before, it is customary in these discussions to refer back to Plato’s Phaedrus, in which Socrates warns that writing, as an externalization of memory, will actually diminish human memory. Holt mentions the passage in his review, and Carr does as well. When the dialogue is trotted out, it is usually as a “straw man” meant to prove that concerns about new technologies are silly and misguided. But it seems to me that a silent equivocation slips into these discussions: the notion of memory we tend to assume is our current understanding of memory, which is increasingly defined by comparison to computer memory, that is, to mere storage.

It seems to me that having first identified a computer’s storage capacity as “memory,” a metaphor that depends on the human capacity we call “memory,” we have now reversed the direction of the metaphor, understanding human “memory” in light of a computer’s storage capacity. In other words, we’ve reduced our understanding of memory to the mere storage of information, and we now read all discussions of memory in light of this reductive understanding.

Given this reductive view of memory, it seems silly for Socrates (and, by extension, Plato) to worry about the externalization of memory: whether it is stored inside or outside, what difference does it make as long as we can access it? And, in fact, access becomes the problem that attends all externalized memories, from the book to the Internet. But what if memory is not mere storage? Few seem to extend their analysis to account for the metaphysical role that memory of the world of forms played in Plato’s account of the human person and true knowledge. We may not take Plato’s metaphysics at face value, but we can’t really understand his concerns about memory without understanding their larger intellectual context.

Holt helps us to see the impoverishment of our understanding of memory from another, less metaphysically freighted, perspective. The Poincaré anecdote in its own way also challenges the reduction of memory to mere storage, linking it instead with the complex workings of creativity and insight. Others have similarly linked memory to identity, to wisdom, and even, in St. Augustine’s account, to our understanding of the divine. Whether one veers into the theological or not, the reduction of memory to the mere storage of data should strike us as an inadequate account of memory and its significance, and it should cause us to rethink our readiness to offload it.

Update: A post from Carr on the issue of memory, including a relevant excerpt from The Shallows, “Killing Mnemosyne.”

McLuhan’s Catholicism

Just passing along a link to Nick Carr’s brief review in The New Republic of Douglas Coupland’s new biography of Marshall McLuhan, Marshall McLuhan: You Know Nothing of My Work! In the review, Carr makes the following observation:

Neither his fans nor his foes saw him clearly. The central fact of McLuhan’s life, as Coupland makes clear, was his conversion, at the age of twenty-five, to Catholicism, and his subsequent devotion to the religion’s rituals and tenets. Though he never discussed it, his faith forms the moral and intellectual backdrop to all his mature work. What lay in store, McLuhan believed, was the timelessness of eternity. The earthly conceptions of past, present, and future were, by comparison, of little consequence. His role as a thinker was not to celebrate or denigrate the world but simply to understand it, to recognize the patterns that would unlock history’s secrets and thus provide hints of God’s design. His job was not dissimilar, as he saw it, from that of the artist.

Below is a clip of the exchange between McLuhan and Norman Mailer that Carr references in his review:

One of my favorite YouTube videos is a clip from a Canadian television show in 1968 featuring a debate between Norman Mailer and Marshall McLuhan. The two men, both heroes of the ’60s, could hardly be more different. Leaning forward in his chair, Mailer is pugnacious, animated, engaged. McLuhan, abstracted and smiling wanly, seems to be on autopilot. He speaks in canned riddles. “The planet is no longer nature,” he declares, to Mailer’s uncomprehending stare; “it’s now the content of an art work.”

After watching the clip, I’ve got to agree with Carr: ten minutes well spent.

________________________________________________

*See also Marx, Freud, and … McLuhan.

Drowning in the Shallow End

As George Lakoff and Mark Johnson pointed out in Metaphors We Live By, we do a lot of our thinking and understanding through metaphors that structure our thoughts and concepts. So pervasive are these metaphors that in most cases we don’t even realize we are using them. Recently, metaphors of shallowness and depth have caught my attention.

Many of the fears expressed by critics of the Internet and the digital world revolve around a loss of depth.  We are, in their view, gaining an immense amount of breadth or surface area, but it is coming at the expense of depth and by extension rendering us rather shallow.  For example, consider this passage from a brief statement playwright Richard Foreman contributed to Edge:

… today, I see within us all (myself included) the replacement of complex inner density with a new kind of self-evolving under the pressure of information overload and the technology of the “instantly available”. A new self that needs to contain less and less of an inner repertory of dense cultural inheritance—as we all become “pancake people”—spread wide and thin as we connect with that vast network of information accessed by the mere touch of a button.

The notion of “pancake people” is a variation on the shallow/deep metaphor: a good deal of surface area, not much depth. I first came across Foreman’s analogy in the conclusion of Nicholas Carr’s much discussed piece in The Atlantic, “Is Google Making Us Stupid?” Carr’s piece generated not only a great deal of discussion but also a book, published this year, exploring the effects of the Internet on the brain. The book surveys a variety of recent studies suggesting that heavy Internet use inhibits our capacity for sustained attention and our ability to think deeply. The title of Carr’s book? The Shallows.

What is interesting about metaphors such as deep/shallow is that we do appear to have a rather intuitive sense of what they communicate. I suspect we all have some notion of what it means to say that someone or some idea is not very deep, or what is meant when someone says they are just skimming the surface of a topic. But the nature of metaphors is such that they both hide and reveal. They help us understand a concept by comparing it to some other, perhaps more familiar, idea; yet the two things are never identical, and so while something is illuminated, something else may be hidden. Also, the taken-for-granted status of some metaphors, shallowness/depth for instance, may lull us into thinking that we understand something when we really don’t, in the same way that St. Augustine remarked that he knew what “time” was until he was asked to define it.

What exactly does it mean to say that an idea is shallow or deep? Can we describe what we mean without resorting to metaphor? It is not that I am against metaphors; one can’t really be against metaphorical language without losing language as we know it altogether. It may be that we cannot get at some ideas at all without metaphor. My point, rather, is to try to think … well, more deeply about the consequences of our digital world. Having noticed that key criticisms frequently involve this idea of a loss of depth, we had better be sure we know what is meant. Very often discussions and debates go nowhere because the participants are using terms equivocally or without a precise sense of how the other side is using them. A little sorting out of our terms, perhaps especially our metaphors, may go a long way toward advancing the conversation. (Incidentally, that last phrase is also a metaphor.)

Here is one last instance of the metaphor that doesn’t arise out of the recent debates about the Internet, and yet appears to be quite applicable.  The following is taken from Hannah Arendt’s 1958 work, The Human Condition:

A life spent entirely in public, in the presence of others, becomes, as we would say, shallow.  While it retains visibility, it loses the quality of rising into sight from some darker ground which must remain hidden if it is not to lose its depth in a very real, non-subjective sense.

Arendt’s comments arise from a technical and complex discussion of what she identifies as the private, public, and social realms of human life. And while she was rather prescient in certain areas, she could not have imagined the rise of the Internet and social media. Her comments, however, seem very much in line with Jaron Lanier’s observation that “you have to be somebody before you can share yourself.” In our rush to publicize our selves and our thoughts, we are losing the hidden and private space in which depth and substance are cultivated.

Although employing other metaphors to do so, Richard Foreman also offered a sense of what he understood to be the contrast to the “pancake people”:

I come from a tradition of Western culture in which the ideal (my ideal) was the complex, dense and “cathedral-like” structure of the highly educated and articulate personality—a man or woman who carried inside themselves a personally constructed and unique version of the entire heritage of the West.

This is not necessarily about recovering some Romantic notion of the essential self, but it is about a certain degree of complexity and solidity (metaphors again, I know). In any case, it strikes me as an ideal worth preserving. Foreman and Carr (and perhaps Arendt, were she around) seem uncertain that it is an ideal that can survive the digital age. At the very least, they are pointing to some of its challenges. Given that the digital age is not going away, it is left to us, if we value the ideal, to think about how complexity, depth, and density can be preserved. And the first thing we may have to do is bring some conceptual clarity to our metaphors.

“Questionable Classrooms”

It’s been a while since Nicholas Carr has made an appearance, so here is Carr’s recent interview with The Chronicle of Higher Education. Some highlights below.

On technology and teaching:

Q. Some professors are interested in integrating social technology—blogs, wikis, Twitter—into their teaching. Are you suggesting that is a misguided approach?

A. I’m suggesting that it would be wrong to assume that that path is always the best path. I’m certainly not suggesting that we take a Luddite view of technology and think it’s all bad. But I do think that the assumption that the more media, the more messaging, the more social networking you can bring in will lead to better educational outcomes is not only dubious but in many cases is probably just wrong. It has to be a very balanced approach. Educators need to familiarize themselves with the research and see that in fact one of the most debilitating things you can do to students is distract them.

On recovering one’s attention span:

Q. If the Internet is making us so distracted, how did you manage to write a 224-page book and read all the dense academic studies that much of it is based on?

A. It was hard. The reason I started writing it was because I noticed in myself this increasing inability to pay attention to stuff, whether it was reading or anything else. When I started to write the book, I found it very difficult to sit and write for a couple of hours on end or to sit down with a dense academic paper. One thing that happened at that time is I moved from outside of Boston, a really highly connected place, to renting a house in the mountains of Colorado. And I didn’t have any cellphone service. I had a very slow Internet connection. I dropped off of Facebook. I dropped out of Twitter. I basically stopped blogging for a while. And I fairly dramatically cut back on checking e-mail. After I got over the initial period of panic that I was missing out on information, my abilities to concentrate did seem to strengthen again. I felt in a weird way intellectually or mentally calmer. And I could sit down and write or read with a great deal of attentiveness for quite a long time.

And on “smart classrooms” in colleges:

Q. Colleges refer to a screen-equipped space as a “smart classroom.” What would you call it?

A. I would call it a classroom that in certain circumstances would be beneficial and in others would actually undermine the mission of the class itself. I would maybe call it a questionable classroom.