Drowning in the Shallow End

As George Lakoff and Mark Johnson pointed out in Metaphors We Live By, we do a lot of our thinking and understanding through metaphors that structure our thoughts and concepts.  So pervasive are these metaphors that in most cases we don’t even realize we are using metaphors at all.  Recently, metaphors related to shallowness and depth have caught my attention.

Many of the fears expressed by critics of the Internet and the digital world revolve around a loss of depth.  We are, in their view, gaining an immense amount of breadth or surface area, but it is coming at the expense of depth and by extension rendering us rather shallow.  For example, consider this passage from a brief statement playwright Richard Foreman contributed to Edge:

… today, I see within us all (myself included) the replacement of complex inner density with a new kind of self-evolving under the pressure of information overload and the technology of the “instantly available”. A new self that needs to contain less and less of an inner repertory of dense cultural inheritance—as we all become “pancake people”—spread wide and thin as we connect with that vast network of information accessed by the mere touch of a button.

The notion of “pancake people” is a variation on the shallow/deep metaphor — a good deal of surface area, not much depth.  I first came across Foreman’s analogy in the conclusion of Nicholas Carr’s much discussed piece in The Atlantic, “Is Google Making Us Stupid?” Carr’s piece generated not only a lot of discussion, but also a book, published this year, exploring the effects of the Internet on the brain.  The book draws on a variety of recent studies suggesting that significant Internet use inhibits our capacity for sustained attention and our ability to think deeply.  The title of Carr’s book?  The Shallows.

What is interesting about metaphors such as deep/shallow is that we do appear to have a rather intuitive sense of what they are communicating.  I suspect we all have some notion of what it means to say that someone or some idea is not very deep, or what is meant when someone says that they are just skimming the surface of a topic.  But the nature of metaphors is such that they both hide and reveal.  They help us understand a concept by comparing it to some other, perhaps more familiar idea, but the two things are never identical, and so while something is illuminated, something may be hidden.  Also, the taken-for-granted status of some metaphors, shallowness/depth for instance, may lull us into thinking that we understand something when we really don’t, in the same way, for example, that St. Augustine remarked that he knew what “time” was until he was asked to define it.

What exactly is it to say that an idea is shallow or deep?  Can we describe what we mean without resorting to metaphor? It is not that I am against metaphors; one can’t really be against metaphorical language without losing language as we know it altogether.  It may be that we cannot get at some ideas at all without metaphor.  My point rather is to try to think … well, more deeply about the consequences of our digital world.  Having noticed that key criticisms frequently involve this idea of a loss of depth, it seems that we had better be sure we know what is meant.  Very often discussions and debates don’t seem to get anywhere because the participants are using terms equivocally or without a precise sense of how they are being used by the other side.  A little sorting out of our terms, perhaps especially our metaphors, may go a long way toward advancing the conversation.  (Incidentally, that last phrase is also a metaphor.)

Here is one last instance of the metaphor that doesn’t arise out of the recent debates about the Internet, and yet appears to be quite applicable.  The following is taken from Hannah Arendt’s 1958 work, The Human Condition:

A life spent entirely in public, in the presence of others, becomes, as we would say, shallow.  While it retains visibility, it loses the quality of rising into sight from some darker ground which must remain hidden if it is not to lose its depth in a very real, non-subjective sense.

Arendt’s comments arise from a technical and complex discussion of what she identifies as the private, public, and social realms of human life.  And while she was rather prescient in certain areas, she could not have imagined the rise of the Internet and social media.  However, these comments seem to be very much in line with Jaron Lanier’s observation that “you have to be somebody before you can share yourself.” In our rush to publicize our selves and our thoughts, we are losing the hidden and private space in which we cultivate depth and substance.

Although employing other metaphors to do so, Richard Foreman also offered a sense of what he understood to be the contrast to the “pancake people”:

I come from a tradition of Western culture in which the ideal (my ideal) was the complex, dense and “cathedral-like” structure of the highly educated and articulate personality—a man or woman who carried inside themselves a personally constructed and unique version of the entire heritage of the West.

This is not necessarily about the recovery of some Romantic notion of the essential self, but it is about a certain degree of complexity and solidity (metaphors again, I know).  In any case, it strikes me as an ideal worth preserving.  Foreman and Carr (and perhaps Arendt if she were around) seem uncertain that it is an ideal that can survive in the digital age.  At the very least, they are pointing to some of the challenges.  Given that the digital age is not going away, it is left to us, if we value the ideal, to think of how complexity, depth, and density can be preserved.  And the first thing we may have to do is bring some conceptual clarity to our metaphors.

It’s Not a Game, It’s an Experience — Mark Cuban Channels Don Draper

In the first season finale of the AMC series Mad Men, Don Draper famously pitches an ad campaign for Kodak’s new slide projector in which he suggests, with appropriately melodramatic music in the background,

This is not a spaceship, it’s a time machine … It goes backwards and forwards, and it takes us to a place where we ache to go again … It’s not called ‘The Wheel.’ It’s called ‘The Carousel.’ It lets us travel around and around and back home again.

The scene was effectively parodied by SNL shortly thereafter.  You can see the scene below and watch the parody at Hulu.

Over the top perhaps, but it did convey Madison Avenue’s awareness that selling a product involves connecting with something deeper than utility or effectiveness.  Think what you will of the Dallas Mavericks’ sometimes controversial owner Mark Cuban, he has perceptively argued, along similar lines, that those in the professional sports business are not selling games, they are selling experiences.  In a recent blog post he writes,

We in the sports business don’t sell the game, we sell unique, emotional experiences. We are not in the business of selling basketball. We are in the business of selling fun and unique experiences. I say it to our people at the Mavs at all time, I want a Mavs game to be more like a great wedding than anything else.

Ultimately, his post ends up being about technologies that insert themselves into the experience and thus detract from that experience.

… I hate the trend of handheld video at games. I can’t think of a bigger mistake.  The last thing I want is someone looking down at their phone to see a replay.

This is not unlike Jaron Lanier’s wondering whether we are really there at all when we tweet, blog, or update our status throughout an event or gathering.  Cuban and Lanier, in their own ways, are both arguing for our full presence in our own experience.  Cuban concludes his post with the following observation:

The fan experience is about looking up, not looking down. If you let them look down, they might as well stay at home, the screen is always going to be better there.

This is good advice for life in general.  Look up from the screen.  Who knows, in looking one another in the eyes again, we might begin to recover the habits of respect and civility that are now so sorely missed.

Warning: A Liberal Education Leads to Independent Thinking

File this one under “Unintended Consequences.”

In the 1950s, at the height of the Cold War, Bell Telephone Company of Pennsylvania put its most promising young managers through a rigorous 10-month training program in the Humanities with the help of the University of Pennsylvania.  During that time they participated in lectures and seminars, read voraciously, visited museums, attended the symphony, and toured Philadelphia, New York and Washington.  To top it off, many of the leading intellectuals of the time were brought in to lecture these privileged few and discuss their books.  Among the luminaries were poet W. H. Auden and sociologist David Riesman, whose 1950 book, The Lonely Crowd, was a classic study of the set to which these men belonged.

The idea behind the program was simple.  Managers with only a technical background were competent at their present jobs, but they were not sufficiently well-rounded for the responsibilities of upper management.  As sociologist E. Digby Baltzell put it, “A well-trained man knows how to answer questions, they reasoned; an educated man knows what questions are worth asking.”  Already in the early 20th century “information overload” was deemed a serious problem for managers, but by the early 1950s it was believed that computers were going to solve the problem. (I know.  That in itself is worth elaboration, but it will have to wait for another post.)  The automation associated with computers, however, ushered in a new problem — the danger that the manager would become a thoughtless, unoriginal, technically competent conformist.  Writing in 1961, Walter Buckingham warned against the possibility that automation would lead not only to a “standardization of products,” but also to a “standardization of thinking.”

But there were other worries as well.  It was feared that the Soviet Union was pulling away in the sheer numbers of scientists and engineers, creating a talent gap between the USSR and America.  As a way of undercutting this advantage, many looked to the Humanities and a liberal education.  According to Thomas Woody, writing in 1950, “Liberal education was an education for free men, competent to fit them for freedom.”  Thus a humanistic education became not only a tool to better prepare business executives for the complexity of their jobs, it was a weapon against Communism.

In one sense, the program was a success.  The young men were reading more, their intellectual curiosity was heightened, and they were more open-minded and able to see an argument from both sides.  There was one problem, however.  The Bell students were now less willing to be a cog in the corporate machinery.  Their priorities were reordered around family and community.  According to one participant, “Now things are different.  I still want to get along in the company, but I now realize that I owe something to myself, my family, and my community.”  Another put it this way,

Before this course, I was like a straw floating with the current down the stream.  The stream was the Bell Telephone Company.  I don’t think I will ever be like that straw again.

Consequently, the program began to appear as a threat to the company.  One other strike against the program: a survey revealed that after passing through the program, participants were likely to become more tolerant of socialism and less certain that a free democracy depended upon free business enterprise.  By 1960, the program was disbanded.

This is a fascinating story about the power of an education in the humanities to enlarge the mind and fit one for freedom.  But it is also a reminder that in an age of conformity, thinking for oneself is not always welcomed even if it is paid lip service.  After all, remember how well things turned out for Socrates.

__________________

A note about sources:  I first read about the Institute for Humanistic Studies for Executives in an op-ed piece by Wes Davis in the NY Times.  The story fascinated me and I subsequently found an article on the program written in the journal The Historian in 1998 by Mark D. Bowles titled, “The Organization Man Goes To College:  AT&T’s Experiment in Humanistic Education, 1953-1960.”  Quotes in the post are drawn from Bowles’ article.

Life Amid Ruins

In Status Anxiety — his part philosophically-minded self-help book, part social history — Alain de Botton describes two fashions that were popular in the art world during the 17th and 18th centuries, respectively.  The first, vanitas art, took its name from the biblical book of Ecclesiastes in which it is written, “Vanity of vanities, all is vanity.”  Vanitas art, which flourished especially in the Netherlands and also in Paris, was, as the biblical citation implies, concerned with life’s fleeting nature.  As de Botton describes them,

Each still-life featured a table or sideboard on which was arranged a contrasting muddle of objects.  There might be flowers, coins, a guitar or mandolin, chess pieces, a book of verse, a laurel wreath or wine bottle: symbols of frivolity and temporal glory.  And somewhere among these would be set the two great symbols of death and the brevity of life:  a skull and an hourglass.

A bit morbid we might think, but as de Botton explains,

The purpose of such works was not to send their viewers into a depression over the vanity of all things; rather, it was to embolden them to find fault with particular aspects of their own experience, while at the same time attending more closely to the virtues of love, goodness, sincerity, humility and kindness.

Okay, still a bit morbid you might be thinking, but fascinating nonetheless.  Here is the first of two examples provided in Status Anxiety:

Philippe de Champaigne, circa 1671

Here is the second example:

Simon Renard de Saint-Andre, circa 1662

And here are a few others from among the numerous examples one can find online:

Edwart Collier, 1640
Pieter Boel, 1663
Adam Bernaert, circa 1665
Edwart Collier, 1690

Less morbid and more nostalgic, the second art fashion de Botton examines is the 18th- and 19th-century fascination with ruins.  This fascination was no doubt inspired in part by the unearthing of Pompeii’s sister city, Herculaneum, in 1738.  The most intriguing subset of these paintings of ancient ruins, however, were those paintings that imagined not the past in ruins, but the future.  “A number of artists,” according to de Botton, “have similarly delighted in depicting their own civilization in a tattered future form, as a warning to, and reprisal against, the pompous guardians of the age.”  Consider these the antecedents of the classic Hollywood trope in which some famous city and its monuments lie in ruins — think Planet of the Apes and the Statue of Liberty.

Status Anxiety provides three examples of these future ruins.  The first depicts the Louvre in ruins:

Hubert Robert, Imaginary View of the Grande Gallerie of the Louvre in Ruins, 1796

The second depicts the ruins of the Bank of London:

Joseph Gandy, View of the Rotunda of the Bank of England in Ruins, 1798

And the third, from a later period, depicts the city of London in ruins being sketched by a man from New Zealand, “the country that in Doré’s day symbolized the future,” in much the same way that Englishmen on their Grand Tours would sketch the ruins of Athens or Rome.

Gustave Doré, The New Zealander, 1871

Finally, both of these art fashions suggested to my mind Nicolas Poussin’s Et in Arcadia ego from 1637-1638:

Nicolas Poussin, Et in Arcadia ego, 1637-38

Here, shepherds stumble upon some ancient tomb in which they read the inscription, Et in Arcadia ego.  There has been some debate about the precise way the phrase should be taken.  It may be read as the voice of death personified saying “even in Arcadia I exist,” or it may mean “the person buried in this tomb lived in Arcadia.”  In either case the moral is clear.  Death comes for the living.  It is a memento mori, a reminder of death (note the appearance of that phrase in the last piece of vanitas art above).

Admittedly, these are not the most uplifting of reflections.  However, de Botton’s point and the point of the artists who painted these works strikes me as sound:  we make a better go of the present if we live with the kind of perspective engendered by these works of art.  Our tendency to ignore our mortality and our refusal to acknowledge the limitations of a single human life may be animating much of our discontent and alienation.  Perhaps.  Certainly there is some wisdom here we may tap into.  This is pure conjecture, of course, but I wonder how many, having contemplated Gandy’s painting, would have found the phrase “too big to fail” plausible?  Might we not, with a renewed sense of our mortality, reorder some of our priorities, bringing into sharper focus the more meaningful elements of life?

It is also interesting to consider that not only do we have few contemporary equivalents of the kind of art work we’ve considered, but neither do we have any actual ruins in our midst.  America seems uniquely suited to be a country without a sense of the past.  Not only are we an infant society by historical standards, but even the ancient inhabitants of our lands, unlike those further south, left no monumental architecture, no tokens of antiquity.  Those of us who live in suburbia may go days without casting our eyes on anything older than twenty years.  We have been a forward-looking society whose symbolic currency has not been — could not have been — ruins of the past, but rather the Frontier with its accent on the future and what may be.

I would not call into question the whole of this cultural sensibility, but perhaps we could have used just a few ruins.

“Are you really there?” — How not to become spectators of our lives

If you try to keep up with the ongoing debate regarding the Internet and the way it is shaping our world and our minds, you will inevitably come across the work of Jaron Lanier.  When you do, stop and take note.  Lanier qualifies as an Internet pessimist in Adam Thierer’s breakdown of The Great Debate over Technology’s Impact on Society, but he is an insightful pessimist with a long history in the tech industry.  Unlike other, often insightful, critics such as the late Neil Postman and Nicholas Carr, Lanier speaks with an insider’s perspective.  We noted his most recent book, You Are Not a Gadget: A Manifesto, not long ago.

Earlier this week, I ran across a short piece Lanier contributed to The Chronicle of Higher Education in response to the question, “What will be the defining idea of the coming decade, and why?” Lanier’s response, cheerfully titled “The End of Human Specialness,” was one of a number of responses solicited by The Chronicle from leading scholars and illustrators.  In his piece, Lanier recalls addressing the “common practice of students blogging, networking, or tweeting while listening to a speaker” and telling his audience at the time,

The most important reason to stop multitasking so much isn’t to make me feel respected, but to make you exist. If you listen first, and write later, then whatever you write will have had time to filter through your brain, and you’ll be in what you say. This is what makes you exist. If you are only a reflector of information, are you really there?

We have all experienced it; we know exactly what Lanier is talking about.  We’ve seen it happen, we’ve had it happen to us, and — let’s be honest — we have probably also been the offending party.  Typically this topic elicits a rant against the incivility and lack of respect such actions communicate to those who are on the receiving end, and that is not unjustified.  What struck me about Lanier’s framing of the issue, however, was the emphasis on the person engaged in the habitual multitasking and not on the affront to the one whose presence is being ignored.

We are virtually dispersed people.  Our bodies are in one place, but our attention is in a dozen other places and, thus, nowhere at all.  This is not entirely new; there are antecedents.  Long before smart phones enabled a steady flow of distraction and allowed us to carry on multiple interactions simultaneously, we wandered away into the daydreams our imagination conjured up for us.  My sense, however, is that such retreats into our consciousness are a different sort of thing than our media-enabled evacuations of the place and moment we inhabit.  For one thing, they were not nearly so frequent and intrusive. We might also argue that when we daydream our attention is in fact quite focused in one place, the place of our dream.  We are somewhere rather than nowhere.

Whatever we think of the antecedents, however, it is clear that many of us are finding it increasingly difficult to be fully present in our own experience.  Perhaps part of what is going on is captured by the old adage about the man with a hammer to whom everything looks like a nail.  My most vivid experience with this dynamic came years ago with my first digital camera.  To the person with a digital camera (and enough memory), I discovered, everything looks like a picture and you can’t help but take it.  I have wonderful pictures of Italy, but very few memories.  And so we may extrapolate:  to the person with a Twitter account, everything is a tweet waiting to be condensed into 140 characters.  To the person with a video recorder on their phone, everything is a moment to be documented.  To the person with an iPhone … well, pick the App.

In an article written by professor Barry Mauer, I recently learned about Andy Warhol’s obsessive documentation of his own experience through photographs, audiotape, videotape, and film.  In his biography of Warhol, Victor Bockris writes,

Indeed, Andy’s desire to record everything around him had become a mania.  As John Perrault, the art critic, wrote in a profile of Warhol in Vogue:  “His portable tape recorder, housed in a black briefcase, is his latest self-protection device.  The microphone is pointed at anyone who approaches, turning the situation into a theater work.  He records hours of tape every day but just files the reels away and never listens to them.”

Warhol’s behavior would, I suspect, seem less problematic today.  Here too he was perhaps simply ahead of his time.  Given much more efficient tools, we are also obsessively documenting our lives.  But what most people do tends to be viewed as normal.  It is interesting, though, that Perrault referred to Warhol’s tape recorder as a “self-protection device.”  It called to mind R. R. Reno’s analysis of the pose of ironic detachment so characteristic of our society:

We enjoy an irony that does not seek resolution because it supports our desire to be invulnerable observers rather than participants at risk. We are spectators of our lives, free from the strain of drama and the uncertainty of a story in which our souls are at stake.

“Spectators of our lives.”  The phrase is arresting, and the prospect is unsettling.  But it is hardly necessary or inevitable.  If the cost of re-engaging our own lives, of becoming participants at risk in the unfolding drama of our own story, is a few fewer photos that we may end up deleting anyway, one less Facebook update from our phone, or one text left unread for a short while, then that is a price well worth paying.  We will be better for it, and those others, in whose presence we daily live, will be as well.