Directive from the Borg: Love All Technology, Now!

I don’t know about you, but when I look around, it seems to me that we live in what may be conservatively labeled a technology-friendly social environment. If that seems like a reasonable estimation of the situation to you, then, it would appear, you and I are out of touch with reality. Or, at least, this is what certain people in the tech world would have us believe. To hear some of them talk, the technology sector is a beleaguered minority fending off bands of powerful critics, and Silicon Valley is an island of thoughtful, benign ingenuity valiantly holding off hordes of Luddite barbarians trying to usher in a new dark age.

Consider this tweet from venture capitalist Marc Andreessen.

Don’t click on that link quite yet. First, let me explain the rhetorical context. Andreessen’s riposte is aimed at two groups at once. On the one hand, he is taking a swipe at those who, like Peter Thiel, worry that we are stuck in a period of technological stagnation; on the other, at critics of technology. The implicit twofold message is simple: concerns about stagnation are misguided and technology is amazing. In fact, “never question progress or technology” is probably a better way of rendering it, but more on that in a moment.

Andreessen has really taken to Twitter. The New Yorker recently published a long profile of Andreessen, which noted that he “tweets a hundred and ten times a day, inundating his three hundred and ten thousand followers with aphorisms and statistics and tweetstorm jeremiads.” It continues,

Andreessen says that he loves Twitter because “reporters are obsessed with it. It’s like a tube and I have loudspeakers installed in every reporting cubicle around the world.” He believes that if you say it often enough and insistently enough it will come—a glorious revenge. He told me, “We have this theory of nerd nation, of forty or fifty million people all over the world who believe that other nerds have more in common with them than the people in their own country. So you get to choose what tribe or band or group you’re a part of.” The nation-states of Twitter will map the world.

Not surprisingly, Andreessen’s Twitter followers tend to be interested in technology and the culture of Silicon Valley. For this reason, I’ve found that glancing at the replies his tweets garner offers an interesting, if at times somewhat disconcerting, snapshot of attitudes about technology, at least within a certain segment of the population. For instance, if you clicked on the tweet above and skimmed the replies it has received, you would assume the linked article was nothing more than a Luddite screed about the evils of technology.

Instead, what you will find is Tom Chatfield interviewing Nick Carr about his latest book. It’s a good interview, too, well worth a few minutes of your time. Carr is, of course, a favorite whipping boy for this crowd, although I’ve yet to see any evidence that they’ve read a word Carr has written.

Here’s a sampling of some of Carr’s more outlandish and incendiary remarks:

• “the question isn’t, ‘should we automate these sophisticated tasks?’, it’s ‘how should we use automation, how should we use the computer to complement human expertise’”

• “I’m not saying that there is no role for labour-saving technology; I’m saying that we can do this wisely, or we can do it rashly; we can do it in a way that understands the value of human experience and human fulfilment, or in a way that simply understands value as the capability of computers.”

• “I hope that, as individuals and as a society, we maintain a certain awareness of what is going on, and a certain curiosity about it, so that we can make decisions that are in our best long-term interest rather than always defaulting to convenience and speed and precision and efficiency.”

• “And in the end I do think that our latest technologies, if we demand more of them, can do what technologies and tools have done through human history, which is to make the world a more interesting place for us, and to make us better people.”

Crazy talk, isn’t it? That guy, what an unhinged, Luddite fear-monger.

Carr has the temerity to suggest that we think about what we are doing, and Andreessen translates this as a complaint that technology is “ruining life as we know it.”

Here’s what this amounts to: you have no choice but to love technology. Forget measured criticism or indifference. No. Instead, you must love everything about it. Love every consequence of every new technology. Love it adamantly and ardently. Express this love proudly and repeatedly: “The world is now more awesome than ever because of technology, and it will only get more awesome each and every day.” Repeat. Repeat. Repeat.

This is pretty much it, right? You tell me.

Classic Borg Complex, of course. But wait, there’s more.

Here’s a piece from the New York Times’ Style Magazine that crossed my path yesterday: “In Defense of Technology.” You read that correctly. In defense of technology. Because, you know, technology really needs defending these days. Obviously.

It gets better. Here’s the quick summary below the title: “As products and services advance, plenty of nostalgists believe that certain elements of humanity have been lost. One contrarian argues that being attached to one’s iPhone is a godsend.”

“One contrarian.”

“One.”

Read that piece, then contemplate the 70th of Alan Jacobs’ 79 theses on technology: “The always-connected forget the pleasures of disconnection, then become impervious to them.” Here are the highlights, in my view, of this defense of technology:

• “I now feel — and this is a revelation — that my past was an interesting and quite fallow period spent waiting for the Internet.”

• “I didn’t know it when I was young, but maybe we were just waiting for more stuff and ways to save time.”

• “I’ve come fully round to time-saving apps. I’ve become addicted to the luxury of clicking through for just about everything I need.”

• “Getting better is getting better. Improvement is improving.”

• “Don’t tell me the spiritual life is over. In many ways it’s only just begun.”

• “What has been lost? Nothing.”

Nothing. Got that? Nothing. So quit complaining. Love it all. Now.

Silencing the Heretics: How the Faithful Respond to Criticism of Technology

I started to write a post about a few unhinged reactions to an essay published by Nicholas Carr in this weekend’s WSJ, “Automation Makes Us Dumb.” Then I realized that I had already written that post back in 2010. I’m republishing “A God that Limps” below, with slight revisions, and adding a discussion of the reactions to Carr.

Our technologies are like our children: we react with reflexive and sometimes intense defensiveness if either is criticized. Several years ago, while teaching at a small private high school, I forwarded an article to my colleagues that raised some questions about the efficacy of computers in education. This was a mistake. The article appeared in a respectable journal, was judicious in its tone, and cautious in its conclusions. I didn’t think then, nor do I now, that it was at all controversial. In fact, I imagined that given the setting it would be of at least passing interest. However, within a handful of minutes (minutes!)—hardly enough time to skim, much less read, the article—I was receiving rather pointed, even angry replies.

I was mystified, and not a little amused, by the responses. Mostly, though, I began to think about why this measured and cautious article evoked such a passionate response. Around the same time I stumbled upon Wendell Berry’s essay titled, somewhat provocatively, “Why I Am Not Going to Buy a Computer.” More arresting than the essay itself, however, were the letters that came in to Harper’s. These letters, which now typically appear alongside the essay whenever it is anthologized, were caustic and condescending. In response, Berry wrote,

The foregoing letters surprised me with the intensity of the feelings they expressed. According to the writers’ testimony, there is nothing wrong with their computers; they are utterly satisfied with them and all that they stand for. My correspondents are certain that I am wrong and that I am, moreover, on the losing side, a side already relegated to the dustbin of history. And yet they grow huffy and condescending over my tiny dissent. What are they so anxious about?

Precisely my question. Whence the hostility, defensiveness, agitation, and indignant, self-righteous anxiety?

I’m typing these words on a laptop, and they will appear on a blog that exists on the Internet.  Clearly I am not, strictly speaking, a Luddite. (Although, in light of Thomas Pynchon’s analysis of the Luddite as Badass, there may be a certain appeal.) Yet, I do believe an uncritical embrace of technology may prove fateful, if not Faustian.

The stakes are high. We can hardly exaggerate the revolutionary character of certain technologies throughout history:  the wheel, writing, the gun, the printing press, the steam engine, the automobile, the radio, the television, the Internet. And that is a very partial list. Katherine Hayles has gone so far as to suggest that, as a species, we have “codeveloped with technologies; indeed, it is no exaggeration,” she writes in Electronic Literature, “to say modern humans literally would not have come into existence without technology.”

We are, perhaps because of the pace of technological innovation, quite conscious of the place and power of technology in our society and in our own lives. We joke about our technological addictions, but it is sometimes a rather nervous punchline. It makes sense to ask questions. Technology, it has been said, is a god that limps. It dazzles and performs wonders, but it can frustrate and wreak havoc. Good sense seems to suggest that we avoid, as Thoreau put it, becoming tools of our tools. This doesn’t entail burning the machine; it may only require a little moderation. At a minimum, it means creating, as far as we are able, a critical distance from our toys and tools, and that requires searching criticism.

And we are back where we began. We appear to be allergic to just that kind of searching criticism. So here is my question again:  Why do we react so defensively when we hear someone criticize our technologies?

And so ended my earlier post. Now consider a handful of responses to Carr’s article, “Automation Makes Us Dumb.” Better yet, read the article, if you haven’t already, and then come back for the responses.

Let’s start with a couple of tweets by Joshua Gans, a professor of management at the University of Toronto.

Then there was this from entrepreneur Marc Andreessen:

Even better are some of the replies attached to Andreessen’s tweet. I’ll transcribe a few of those here for your amusement.

“Why does he want to be stuck doing repetitive mind-numbing tasks?”

“‘These automatic jobs are horrible!’ ‘Stop killing these horrible jobs with automation!'” [Sarcasm implied.]

“by his reasoning the steam engine makes us weaklings, yet we’ve seen the opposite. so maybe the best intel is ahead”

“Let’s forget him, he’s done so much damage to our industry, he is just interested in profiting from his provocations”

“Nick clearly hasn’t understood the true essence of being ‘human’. Tech is an ‘enabler’ and aids to assist in that process.”

“This op-ed is just a Luddite screed dressed in drag. It follows the dystopian view of ‘Wall-E’.”

There you have it. I’ll let you tally up the logical fallacies.

Honestly, I’m stunned by the degree of apparently willful ignorance exhibited by these comments. The best I can say for them is that they are based on a glance at the title of Carr’s article and nothing more. It would be much more worrisome if these individuals had actually read the article and still managed to make these comments that betray no awareness of what Carr actually wrote.

More than once, Carr makes clear that he is not opposed to automation in principle. The last several paragraphs of the article describe how we might go forward with automation in a way that avoids some serious pitfalls. In other words, Carr is saying, “Automate, but do it wisely.” What a Luddite!

When I wrote in 2010, I had not yet formulated the idea of a Borg Complex, but this inability to rationally or calmly abide any criticism of technology is surely pure, undistilled Borg Complex, complete with Luddite slurs!

I’ll continue to insist that we are in desperate need of serious thinking about the powers that we are gaining through our technologies. It seems, however, that there is a class of people who are hell-bent on shutting down any and all criticism of technology. If the criticism is misguided or unsubstantiated, then it should be refuted. Dismissing criticism while giving absolutely no evidence of having understood it, on the other hand, helps no one at all.

I come back to David Noble’s description of the religion of technology often, but only because of how useful it is as a way of understanding techno-scientific culture. When technology is a religion, when we embrace it with blind faith, when we anchor our hope in it, when we love it as ourselves–then any criticism of technology will be understood as either heresy or sacrilege. And that seems to be a pretty good way of characterizing the responses to tech criticism I’ve been discussing: the impassioned reactions of the faithful to sacrilegious heresy.

More on Mechanization, Automation, and Animation

As I follow the train of thought that took the dream of a smart home as a point of departure, I’ve come to a fork in the tracks. Down one path, I’ll continue thinking about the distinctions among Mechanization, Automation, and Animation. Down the other, I’ll pursue the technological enchantment thesis that arose incidentally in my mind as a way of either explaining or imaginatively characterizing the evolution of technology along those three stages.

Separating these two tracks is a pragmatic move. It’s easier for me at this juncture to consider them separately, particularly to weigh the merits of the latter. It may be that the two tracks will later converge, or it may be that one or both are dead ends. We’ll see. Right now I’ll get back to the three stages.

In his comment on my last post, Evan Selinger noted that my schema was Borgmannesque in its approach, and indeed it was. If you’ve been reading along for a while, you know that I think highly of Albert Borgmann’s work. I’ve drawn on it a time or two of late. Borgmann looked for a pattern that might characterize the development of technology, and he came up with what he called the device paradigm. Succinctly put, the device paradigm describes the tendency of machines to become simultaneously more commodious and more opaque, or, to put it another way, easier to use and harder to understand.

In my last post, I used heating as an example to walk through the distinctions among mechanization, automation, and animation. Borgmann also uses heating to illustrate the device paradigm: lighting and sustaining a fire is one thing, flipping a switch to turn on the furnace is another. Food and music also serve as recurring illustrations for Borgmann. Preparing a meal from scratch is one thing, popping a TV dinner in the microwave is another. Playing the piano is one thing, listening to an iPod is another. In each case a device made access to the end product–heat, food, music–easier, instantaneous, safer, more efficient. In each case, though, the workings of the device beneath the commodious surface became more complex and opaque. (Note that in the case of food preparation, both the microwave and the TV dinner are devices.) Ease of use also came at the expense of physical engagement, which, in Borgmann’s view, results in an impoverishment of experience and a rearrangement of the social world.

Keep that dynamic in mind as we move forward. The device paradigm does a good job, I think, of helping us think about the transition to mechanization and from mechanization to automation and animation, chiefly by asking us to consider what we’re sacrificing in exchange for the commodiousness offered to us.

Ultimately, we want to avoid the impulse to automate for automation’s sake. As Nick Carr, whose forthcoming book, The Glass Cage: Automation and Us, will be an excellent guide in these matters, recently put it, “What should be automated is not what can be automated but what should be automated.”

That principle came at the end of a short post reflecting on comments made by Google’s “Android guru,” Sundar Pichai. Pichai offered a glimpse at how Google envisions the future when he described how useful it would be if your car could sense that your child was now inside and automatically changed the music playlists accordingly. Here’s part of Carr’s response:

“With this offhand example, Pichai gives voice to Silicon Valley’s reigning assumption, which can be boiled down to this: Anything that can be automated should be automated. If it’s possible to program a computer to do something a person can do, then the computer should do it. That way, the person will be ‘freed up’ to do something ‘more valuable.’ Completely absent from this view is any sense of what it actually means to be a human being. Pichai doesn’t seem able to comprehend that the essence, and the joy, of parenting may actually lie in all the small, trivial gestures that parents make on behalf of or in concert with their kids — like picking out a song to play in the car. Intimacy is redefined as inefficiency.”

But how do we come to know what should be automated? I’m not sure there’s a short answer to that question, but it’s safe to say that we’re going to need to think carefully about what we do and why we do it. Again, this is why I think Hannah Arendt was ahead of her time when she undertook the intellectual project that resulted in The Human Condition and the unfinished The Life of the Mind. In the first she set out to understand our doing and in the second, our thinking. And all of this in light of the challenges presented by emerging technological systems.

One of the upshots of new technologies, if we accept the challenge, is that they lead us to look again at what we might have otherwise taken for granted or failed to notice altogether. New communication technologies encourage us to think again about the nature of human communication. New medical technologies encourage us to think again about the nature of health. New transportation technologies encourage us to think again about the nature of place. And so on.

I had originally used the word “forced” where I settled for the word “encourage” above. I changed the wording because, in fact, new technologies don’t force us to think again about the realms of life they impact. It is quite easy, too easy perhaps, not to think at all, simply to embrace and adopt the new technology without considering its consequences. Or, what amounts to the same thing, it is just as easy to reject new technologies out of hand because they are new. In neither case would we be thinking at all. If we accept the challenge to think again about the world as new technologies cast aspects of it in a new light, we might even begin to see this development as a great gift, one that leads us to value, appreciate, and even love what before went unnoticed.

Returning to the animation schema, we might make a start at thinking by simply asking ourselves what exactly is displaced at each transition. When it comes to mechanization, it seems fairly straightforward. Mechanization, as I’m defining it, ordinarily displaces physical labor.

Capturing what exactly is displaced when it comes to automation is a bit more challenging. In part, this is because the distinctions I’m making between mechanization and automation on the one hand and automation and animation on the other are admittedly fuzzy. In fact, all three are often simply grouped together under the category of automation. This is a simpler move, but I’m concerned that we might not get a good grasp of the complex ways in which technologies interact with human action if we don’t parse things a bit more finely.

So let’s start by suggesting that automation, the stage at which machines operate without the need for constant human input and direction, displaces attention. When something is automated, I can pay much less attention to it, or perhaps, no attention at all. We might also say that automation displaces will or volition. When a process is automated, I don’t have to will its action.

Finally, animation–the stage at which machines not only act without direct human intervention, but also “learn” and begin to “make decisions” for themselves–displaces agency and judgment.

By noting what is displaced we can then ask whether the displaced element was an essential or inessential aspect of the good or end sought by the means, and so we might begin to arrive at some more humane conclusions about what ought to be automated.

I’ll leave things there for now, but more will be forthcoming. Right now I’ll leave you with a couple of questions I’ll be thinking about.

First, Borgmann distinguished between things and devices (see here or here). Once we move from automation to animation, do we need a new category?

Also, coming back to Arendt, she laid out two sets of three categories that overlap in interesting ways with the three stages as I’m thinking of them. In her discussion of human doing, she identifies labor, work, and action. In her discussion of human thinking, she identifies thought, will, and judgment. How can her theorizing of these categories help us understand what’s at stake in the drive to automate and animate?

Macro-trends in Technology and Culture

Who would choose cell phones and Twitter over toilets and running water? Well, according to Kevin Kelly, certain rural Chinese farmers. In a recent essay exploring the possibilities of a post-productive economy, Kelly told of the remote villages he visited in which locals owned cell phones but lived in houses without even the most rudimentary forms of plumbing. It is a choice, Kelly notes, deeply influenced by tradition and culture. Kelly’s point may not be quite unassailable, but it is a fair reminder that technology is a culturally mediated phenomenon.

There are, generally speaking, two schools of thought on the relationship between technology and culture. Those tending toward some variety of technological determinism would argue that technology drives culture. Those who tend toward a social constructivist view of technology would argue the opposite. Ultimately, any theory of technology must account for the strengths and weaknesses of both of these tendencies. In fact, the framing of the relationship is probably problematic anyway since there are important ways in which technology is always cultural and culture is always technological.

For the purposes of this post, I’d like to lean toward the social constructivist perspective. No technology appears in a vacuum. Its origins, evolution, adoption, deployment, and diffusion are all culturally conditioned. Moreover, the meaning of any technology is always culturally determined; it is never simply given in the form of the technology itself. Historians of technology have reminded us of this reality in numerous fascinating studies — studies of the telephone, for example, and the airplane, the electric grid, household technologies, and much else besides. When a new technology appears, it is interpreted and deployed within an already existing grid of desires, possibilities, necessities, values, symbols, expectations, and constraints. That a technology may re-order this grid in time does not negate the fact that it must first be received by it. The relationship is reciprocal.

If this is true, then it seems to me that we should situate our technologies not only within the immediate historical and social context of their genesis, but also within broader and more expansive historical trajectories. Is our use of computer technology, for example, still inflected by Baconian aspirations? What role do Cartesian dualisms play in shaping our relationship with the world through our technologies? To what degree does Christian eschatology inform technological utopianism? These seem to be important questions, the answers to which might usefully inform our understanding of the place of technology in contemporary society. Of course, these particular questions pertain especially to the West. I suspect another set of questions would apply to non-Western societies and still further questions would be raised within the context of globalization. But again, the basic premise is simply this: a given technology’s social context is not necessarily bounded by its immediate temporal horizons. We ought to be taking the long view as well.

But the rhythms of technological change (and the logic of the tech industry) would seem to discourage us from taking the long view, or at least the long view backwards in time. The pace of technological change over the last two hundred years or so has kept us busy trying to navigate the present, and its trajectory, real and ideal, casts our vision forward in the direction of imagined futures. But what if, as Faulkner quipped, the past, with regard to technology, is not dead or even past?

I’m wondering, for instance, about these large motive forces that have driven technological innovation in the West, such as the restoration of Edenic conditions or the quest for rational mastery over the natural world leading to the realization of Utopia. These early modern and Enlightenment motive forces directed and steered the evolution of technology in the West for centuries, and I do not doubt that they continue to exert their influence still. Yet, over the last century and a half, Western society has undergone a series of profound transformations. How have these shaped the evolution of technology? (The inverse question is certainly valid as well.) This is, I suppose, another way of asking about the consequences of post-modernity (which I distinguish from postmodernism) for the history of technology.

In a provocative and compelling post a few months ago, Nick Carr drew an analogy between the course of technological innovation and Maslow’s hierarchy of needs:

“The focus, or emphasis, of innovation moves up through five stages, propelled by shifts in the needs we seek to fulfill. In the beginning come Technologies of Survival (think fire), then Technologies of Social Organization (think cathedral), then Technologies of Prosperity (think steam engine), then technologies of leisure (think TV), and finally Technologies of the Self (think Facebook, or Prozac).”

I continue to find this insightful, and I think the angle I’m taking here dovetails with Carr’s analysis. The Technologies of Prosperity and Leisure correspond roughly to the technologies of modernity. Technologies of the Self correspond roughly to the technologies of post-modernity. Gone is our faith in les grands récits that underwrote a variety of utopian visions and steered the evolution of technology. We live in an age of diminished expectations; we long for the fulfillment of human desires writ small. Self-fulfillment is our aim.

This is, incidentally, a trajectory that is nicely illustrated by Lydia DePillis’ suggestion that the massive Consumer Electronics Show “is what a World’s Fair might look like if brands were more important than countries.” The contrast between the world’s fairs and the CES is telling. The world’s fairs, especially those that preceded the 1939 New York fair, were quite obviously animated by thoroughly modern ideologies. They were, as President McKinley put it, “timekeepers of progress,” and one might as well capitalize Progress. On the other hand, whatever we think of the Consumer Electronics Show, it is animated by quite different and more modest spirits. The City of Tomorrow was displaced by the entertainment center of tomorrow before giving way to the augmented self of tomorrow.

Why did technological innovation take this path? Was it something in the nature of technology itself? Or was it rather a consequence of larger sea changes in the character of society? Maybe a little of both, but probably more of the latter. It’s possible, of course, that this macro-perspective on the co-evolution of culture and technology can obscure important details and result in misleading generalizations, but if those risks can be mitigated, it may also unveil important trends and qualities that would be invisible to more narrowly focused analysis.

‘The Connecting Is the Thinking’: Memory and Creativity

Last summer Nicholas Carr published The Shallows: What the Internet Is Doing to Our Brains, a book-length extension of his 2008 Atlantic essay, “Is Google Making Us Stupid?” The book received a good bit of attention and was reviewed, in the ensuing weeks, seemingly everywhere. We noted a few of those reviews here and here. Coming in fashionably late to the show, Jim Holt has written a lengthy review in the London Review of Books titled “Smarter, Happier, More Productive.” Perhaps a little bit of distance is helpful.

Holt’s review ends up being one of the better summaries of Carr’s book that I have read, if only because Holt details more of the argument than most reviews do. In the end, he tends to think that Carr is stretching the evidence and overstating his case on two fronts: intelligence and happiness. He is less sanguine, however, on one last point: creativity, and its relation to memory.

Holt cites two well-known writers who are optimistic about off-loading their memories to the Internet:

This raises a prospect that has exhilarated many of the digerati. Perhaps the internet can serve not merely as a supplement to memory, but as a replacement for it. ‘I’ve almost given up making an effort to remember anything,’ says Clive Thompson, a writer for Wired, ‘because I can instantly retrieve the information online.’ David Brooks, a New York Times columnist, writes: ‘I had thought that the magic of the information age was that it allowed us to know more, but then I realised the magic of the information age is that it allows us to know less. It provides us with external cognitive servants – silicon memory systems, collaborative online filters, consumer preference algorithms and networked knowledge. We can burden these servants and liberate ourselves.’

But as Holt notes, “The idea that machine might supplant Mnemosyne is abhorrent to Carr, and he devotes the most interesting portions of his book to combatting it.”  Why not outsource our memory?

Carr responds with a bit of rhetorical bluster. ‘The web’s connections are not our connections,’ he writes. ‘When we outsource our memory to a machine, we also outsource a very important part of our intellect and even our identity.’ Then he quotes William James, who in 1892 in a lecture on memory declared: ‘The connecting is the thinking.’ And James was onto something: the role of memory in thinking, and in creativity.

Holt goes on to supplement Carr’s discussion with an anecdote about the polymathic French mathematician Henri Poincaré. What makes Poincaré’s case instructive is that “his breakthroughs tended to come in moments of sudden illumination.”

Poincaré had been struggling for some weeks with a deep issue in pure mathematics when he was obliged, in his capacity as mine inspector, to make a geological excursion. ‘The changes of travel made me forget my mathematical work,’ he recounted.

“Having reached Coutances, we entered an omnibus to go some place or other. At the moment I put my foot on the step the idea came to me, without anything in my former thoughts seeming to have paved the way for it, that the transformations I had used to define the Fuchsian functions were identical with those of non-Euclidean geometry. I did not verify the idea; I should not have had time, as, upon taking my seat in the omnibus, I went on with a conversation already commenced, but I felt a perfect certainty. On my return to Caen, for conscience’s sake, I verified the result at my leisure.”

How to account for the full-blown epiphany that struck Poincaré in the instant that his foot touched the step of the bus? His own conjecture was that it had arisen from unconscious activity in his memory.

This leads Holt to suggest, following Poincaré, that bursts of creativity and insight arise from the unconscious work of memory, and that this is the difference between internalized and externalized memory. We may be able to retrieve at will whatever random piece of information we are looking for with a quick Google search, but that seems not to approach the power of the human mind to work creatively and imaginatively with its stores of memory. Holt concludes:

It is the connection between memory and creativity, perhaps, which should make us most wary of the web. ‘As our use of the web makes it harder for us to lock information into our biological memory, we’re forced to rely more and more on the net’s capacious and easily searchable artificial memory,’ Carr observes. But conscious manipulation of externally stored information is not enough to yield the deepest of creative breakthroughs: this is what the example of Poincaré suggests. Human memory, unlike machine memory, is dynamic. Through some process we only crudely understand – Poincaré himself saw it as the collision and locking together of ideas into stable combinations – novel patterns are unconsciously detected, novel analogies discovered. And this is the process that Google, by seducing us into using it as a memory prosthesis, threatens to subvert.

And this leads me to make one additional observation. As I’ve mentioned before, it is customary in these discussions to refer back to Plato’s Phaedrus, in which Socrates warns that writing, as an externalization of memory, will actually lead to the diminishing of human memory. Holt mentions the passage in his review, and Carr mentions it as well. When the dialogue is trotted out, it is usually as a “straw man” meant to prove that concerns about new technologies are silly and misguided. But it seems to me that a silent equivocation slips into these discussions: the notion of memory we tend to assume is our current understanding, one increasingly defined by comparison to computer memory, which is essentially storage.

It seems to me that, having first identified a computer’s storage capacity as “memory,” a metaphor dependent upon the human capacity we call “memory,” we have now come to reverse the direction of the metaphor by understanding human “memory” in light of a computer’s storage capacity. In other words, we’ve reduced our understanding of memory to the mere storage of information. And now we read all discussions of memory in light of this reductive understanding.

Given this reductive view of memory, it seems silly for Socrates (and by extension, Plato) to worry about the externalization of memory: whether it is stored inside or outside, what difference does it make as long as we can access it? And, in fact, access becomes the problem that attends all externalized memories, from the book to the Internet. But what if memory is not mere storage? Few seem to extend their analysis to account for the metaphysical role the memory of the world of forms played within Plato’s account of the human person and true knowledge. We may not take Plato’s metaphysics at face value, but we can’t really understand his concerns about memory without understanding their larger intellectual context.

Holt helps us to see the impoverishment of our understanding of memory from another, less metaphysically freighted, perspective. The Poincaré anecdote in its own way also challenges the reduction of memory to mere storage, linking it with the complex workings of creativity and insight. Others have similarly linked memory to identity, wisdom, and even, in St. Augustine’s account, our understanding of the divine. Whether one veers into the theological or not, the reduction of memory to the mere storage of data should strike us as an inadequate account of memory and its significance, and it should cause us to rethink our readiness to offload it.

Update: A post from Carr on the issue of memory, including a relevant excerpt from The Shallows: “Killing Mnemosyne.”