Reflections on “Growing Up Digital”

A few days ago the NY Times ran a piece by Matt Richtel called “Growing Up Digital” which remains at the moment the most emailed, most blogged, and most commented article on their site.  The piece does not necessarily break any new ground, but nicely summarizes some concerns that are on the minds of parents, teachers, and anyone who is just a bit unsettled by the emerging shape of the digital mode of being in the world.   This will be the first of probably two posts featuring excerpts from the Times story accompanied by a few elaborations beginning with . . .

Students have always faced distractions and time-wasters. But computers and cellphones, and the constant stream of stimuli they offer, pose a profound new challenge to focusing and learning.

An often overlooked or dismissed point.  Many people seem to take some comfort from saying, “this sort of thing has always been around” or “kids have always had distractions” and the like.  But while placing phenomena on a spectrum is sometimes helpful for the sake of understanding and perspective, it often masks real transformations.  A sufficient difference in quantity can amount to a difference in quality.  A hurricane is not just a stronger breeze.  On the color spectrum it may be hard to pinpoint where the transition takes place, but at some point you are no longer looking at orange, but at blue.  Differences in scale have put us in new territory.

Researchers say the lure of these technologies, while it affects adults too, is particularly powerful for young people. The risk, they say, is that developing brains can become more easily habituated than adult brains to constantly switching tasks — and less able to sustain attention.

Adults writing on this topic who find that they have entered the digital world and believe themselves to have retained their print-literate skills often fail to recognize the difference it might make to be a digital native rather than a digital immigrant.  Adults above the age of 35 or so were brought up with a non-digital skill set associated with print (although television had already been altering the skill-scape).  Those who can’t remember not having a smartphone or 24/7 access to the Internet are in a very different situation.  They have the digital skill set, but never picked up more than remnants of the print skill set.  They are not in the same position as the older generation, who naively look at the situation and say, “Well, I can do both, so they should be able to also … no problem here.”

But even as some parents and educators express unease about students’ digital diets, they are intensifying efforts to use technology in the classroom, seeing it as a way to connect with students and give them essential skills. Across the country, schools are equipping themselves with computers, Internet access and mobile devices so they can teach on the students’ technological territory.

Done uncritically and reactively, this amounts to digging your own grave (please note the qualifiers at the start of the sentence before becoming angry and dismissive).  To borrow and re-appropriate a line from Postman, it is not unlike “some turn-of-the-century blacksmith who not only is singing the praises of the automobile but who also believes that his business will be enhanced by it.”

The principal, David Reilly . . .  is determined to engage these 21st-century students. He has asked teachers to build Web sites to communicate with students, introduced popular classes on using digital tools to record music, secured funding for iPads to teach Mandarin and obtained $3 million in grants for a multimedia center.

Engaging 21st-century students is the goal; however, the question remains:  To what end?  Our collective cultural mind seems divided on this point without knowing it.  If we want to engage students with the goal of cultivating the mindset, skills, and sensibilities associated with print, then we’d better think twice about a bait-and-switch approach.  The tools of engagement will undermine the goal of engagement.  However, if we want to instill the skills and sensibilities that we might loosely label digital literacy (or, following Gregory Ulmer, electracy), then the tools and the goals will be in sync.

The hope of many, including myself on my more optimistic days, is that 21st-century education at its best will be able to impart both skill sets — traditional and digital literacy.  On my more pessimistic days, I’m not so sure this is going to work.  In any case, the two are not the same, and the tools for each tend to work against the ends of the other.  More on this later.

Several recent studies show that young people tend to use home computers for entertainment, not learning, and that this can hurt school performance, particularly in low-income families. Jacob L. Vigdor, an economics professor at Duke University who led some of the research, said that when adults were not supervising computer use, children “are left to their own devices, and the impetus isn’t to do homework but play around.”

Really?  I could have saved them the grant money.  He goes on to note that even when homework is being done it is usually accompanied by continuous text messaging and sporadic Internet use.  Whatever homework is done under those conditions is probably of little or no value.  Mind you, depending on the assignment, the homework might have been of little or no value anyway, but that is another matter.

At Woodside, as elsewhere, students’ use of technology is not uniform. Mr. Reilly, the principal, says their choices tend to reflect their personalities. Social butterflies tend to be heavy texters and Facebook users. Students who are less social might escape into games, while drifters or those prone to procrastination, like Vishal, might surf the Web or watch videos . . . .  “The technology amplifies whoever you are,” Mr. Reilly says.

Interesting and important point that isn’t noted frequently enough.  Every personality type is a complex mix of strengths and weaknesses.  What is being amplified by the technology? The examples given in the article are not exactly encouraging:

For some, the amplification is intense. Allison Miller, 14, sends and receives 27,000 texts in a month, her fingers clicking at a blistering pace as she carries on as many as seven text conversations at a time . . .

Some shyer students do not socialize through technology — they recede into it. Ramon Ochoa-Lopez, 14, an introvert, plays six hours of video games on weekdays and more on weekends . . . Escaping into games can also salve teenagers’ age-old desire for some control in their chaotic lives. “It’s a way for me to separate myself,” Ramon says. “If there’s an argument between my mom and one of my brothers, I’ll just go to my room and start playing video games and escape.”

I’m going to wrap up this first post on the article by suggesting that parents often miss the point on this issue, but students can be quite introspective about the really significant dynamic.

Parent missing the point:

“If you’re not on top of technology, you’re not going to be on top of the world.”

Insightful students who know what is really going on:

“Video games don’t make the hole; they fill it.”

“Facebook is amazing because it feels like you’re doing something and you’re not doing anything. It’s the absence of doing something, but you feel gratified anyway.”

Follow-up post:  “Second Thoughts on ‘Growing Up Digital’”

The Language(s) of Digital Media Platforms

What follows is a thought experiment.  Comments/criticisms welcome.

In an influential 2001 book, The Language of New Media, theorist Lev Manovich presented his “attempt at both a record and a theory of the present” with regard to digital media.  He explains that his “aim” is to “describe and understand the logic driving the development of the language of new media.”  But he is quick to add,

I am not claiming that there is a single language of new media.  I use “language” as an umbrella term to refer to a number of various conventions used by designers of new media objects to organize data and structure the user’s experience.

The final product is an engaging and provocative study.  For the moment, however, I want to reflect on the notion of a “language” of digital media — it’s a suggestive metaphor.  Early in the book, Manovich explains his rationale for the term,

I do not want to suggest that we need to return to the structuralist phase of semiotics in understanding new media.  However, given that most studies of new media and cyberculture focus on their sociological, economic, and political dimensions, it was important for me to use the word language to signal the different focus of this work:  the emergent conventions, recurrent design patterns, and key forms of new media.

Manovich states explicitly that he is not claiming that there is a single, monolithic language of new media.  At a recent conference, media anthropologist John Postill made a similar point.  We do not have, he suggested,

a totalising epochal ‘logic’ but rather ever more differentiated Internet ‘technologies, practices, contexts’ ([Miller and Slater] 2000: 3). The evidence provided in the reviewed texts strongly suggests that the Internet – and indeed the world – is becoming ever more plural and that no universal ‘logic of practice’ … is gaining ascendancy at the expense of all other logics.

I take his “logics” to be roughly parallel to Manovich’s “language,” although Postill is focusing on the practices that emerge from digital media, less so on the internal logic of the given platform.  The two, however, are surely interrelated.  So while we do not have a single language of digital media, we may still speak of languages or logics of particular platforms or interfaces.

Now, in an associative leap, I want to connect this with the recent conversations surrounding Guy Deutscher’s Through the Language Glass: Why the World Looks Different in Other Languages.  Judging from reviews and interviews (I have not yet read the book), Deutscher has written a fascinating study.  More specifically, it is his defense of linguist Roman Jakobson’s maxim concerning the difference languages make that I want to think with here.  According to Jakobson, “Languages differ essentially in what they must convey and not in what they may convey.”  In other words, languages do not necessarily constrain a native speaker’s ability to think or comprehend certain concepts, but they do force their speakers to make certain things explicit.  In Deutscher’s words,

Languages differ in what types of information they force the speakers to mention when they describe the world. (For example, some languages require you to be more specific about gender than English does, while English requires you to be more specific about tense than some other languages. Some require you to be more specific about color differences, and so on.) And it turns out that if your language routinely obliges you to express certain information whenever you open your mouth, it forces you to pay attention to certain types of information and to certain aspects of experience that speakers of other languages may not need to be so attentive to. These habits of speech can then create habits of mind that go beyond mere speech, and affect things like memory, attention, association, even practical skills like orientation.

Now what if we press the “language of digital media platforms/interfaces” metaphor and ask whether the Jakobson principle holds?  My initial thought is that something like the inverse of Jakobson’s principle ends up being more useful.  I could be wrong here (this is just an initial reflection), but what seems most interesting about a particular platform is its specific limitations and how the user is constrained to work (often imaginatively) within those constraints.  Consider as an example Twitter’s 140-character limit or the limited symbols available for text messages.  Facebook allows greater flexibility and more media options for communication, but it is still limited.  Second Life has its own logic or language with its own particular possibilities and limitations.  And so on.

These limits are, of course, inevitable.  Every medium has its limits; nothing new there.  Yet it is worth asking what these limits are, because there is always an implicit risk in becoming habituated to communication within a given medium and internalizing its limitations.  Both Manovich and Deutscher allude to this possibility.  In the excerpt above, Deutscher suggests that “These habits of speech can then create habits of mind that go beyond mere speech, and affect things like memory, attention, association, even practical skills like orientation.”  For his part, Manovich, considering the way the “language” of new media objectifies the mind’s operations, concludes,

. . . we are asked to follow pre-programmed, objectively existing associations.  Put differently, in what can be read as an updated version of French philosopher Louis Althusser’s concept of “interpellation,” we are asked to mistake the structure of somebody else’s mind for our own . . . . The cultural technologies of an industrial society — cinema and fashion — asked us to identify with someone else’s bodily image.  Interactive media ask us to identify with someone else’s mental structure.  If the cinema viewer, male and female, lusted after and tried to emulate the body of the movie star, the computer user is asked to follow the mental trajectory of the new media designer.

So to sum up:

Digital media platforms exhibit something like a particular language or logic.

Borrowing and tweaking Jakobson’s maxim, “Languages of digital media platforms differ essentially in what they cannot (or, encourage us not to) convey and not in what they may convey.”

For consideration:  What assumptions and limitations are internalized by the habitual use of particular digital media platforms?  What communicative structures could we be internalizing and what are their limitations?  Do we then import these limitations into other areas of our thinking and communication in the world?

Comments welcome.

When Words and Action Part Company

I’ve not been one to jump on the Malcolm Gladwell bandwagon; I can’t quite get past the disconcerting hair.  That said, his recent piece in The New Yorker, “Small Change:  Why the revolution will not be tweeted,” makes a compelling case for the limits of social media when it comes to generating social action.

Gladwell frames his piece as a study in contrasts.  He begins by recounting the evolution of the 1960 sit-in movement that began when four freshmen from North Carolina A & T sat down and ordered coffee at the lunch counter of the local Woolworth’s and refused to move when the waitress insisted, “We don’t serve Negroes here.”  Within days the protest grew and spread across state lines and tensions mounted.

Some seventy thousand students eventually took part. Thousands were arrested and untold thousands more radicalized. These events in the early sixties became a civil-rights war that engulfed the South for the rest of the decade—and it happened without e-mail, texting, Facebook, or Twitter.

Almost reflexively now, the devotees of social media power will trot out the Twitter-enabled 2009 Iranian protests as an example of what social media can do.  Gladwell, anticipating as much, quotes Mark Pfeifle, a former national-security adviser, who believes that, “Without Twitter the people of Iran would not have felt empowered and confident to stand up for freedom and democracy.”  Pfeifle went so far as to call for Twitter’s nomination for the Nobel Peace Prize.  One is inclined to believe that is a bit of a stretch, and Gladwell explains why:

In the Iranian case … the people tweeting about the demonstrations were almost all in the West. “It is time to get Twitter’s role in the events in Iran right,” Golnaz Esfandiari wrote, this past summer, in Foreign Policy. “Simply put: There was no Twitter Revolution inside Iran.” The cadre of prominent bloggers, like Andrew Sullivan, who championed the role of social media in Iran, Esfandiari continued, misunderstood the situation. “Western journalists who couldn’t reach—or didn’t bother reaching?—people on the ground in Iran simply scrolled through the English-language tweets posted with tag #iranelection,” she wrote. “Through it all, no one seemed to wonder why people trying to coordinate protests in Iran would be writing in any language other than Farsi.”

You can read Esfandiari’s Foreign Policy article, “Misreading Tehran:  The Twitter Devolution,” online.  Gladwell argues that social media are unable to promote significant and lasting social change because they foster weak rather than strong-tie relationships.  Promoting and achieving social change very often means coming up against entrenched cultural norms and standards that will not easily give way.  And as we know from the civil rights movement, the resistance is often violent.  As Gladwell reminds us,

. . . Within days of arriving in Mississippi, three [Freedom Summer Project] volunteers—Michael Schwerner, James Chaney, and Andrew Goodman—were kidnapped and killed, and, during the rest of the summer, thirty-seven black churches were set on fire and dozens of safe houses were bombed; volunteers were beaten, shot at, arrested, and trailed by pickup trucks full of armed men. A quarter of those in the program dropped out. Activism that challenges the status quo—that attacks deeply rooted problems—is not for the faint of heart.

A subsequent study of the Freedom Summer participants was conducted by Doug McAdam:

“All of the applicants—participants and withdrawals alike—emerge as highly committed, articulate supporters of the goals and values of the summer program,” he concluded. What mattered more was an applicant’s degree of personal connection to the civil-rights movement . . . . [P]articipants were far more likely than dropouts to have close friends who were also going to Mississippi. High-risk activism, McAdam concluded, is a “strong-tie” phenomenon.

Gladwell also goes on to explain why hierarchy, another feature typically absent from social media activism, is indispensable to successful movements while taking some shots along the way at Clay Shirky’s much more optimistic view of social media outlined in Here Comes Everybody: The Power of Organizing Without Organizations.

Not surprisingly, Gladwell’s piece has been making the rounds online the past few days. In response to Gladwell, Jonah Lehrer posted “Weak Ties, Twitter and the Revolution” on his blog The Frontal Cortex.  Lehrer begins by granting, “These are all worthwhile and important points, and a necessary correction to the (over)hyping of Twitter and Facebook.”  But he believes Gladwell has erred in the other direction.  Basing his comments on Mark Granovetter’s 1973 paper, “The Strength of Weak Ties,” Lehrer concludes:

. . . I would quibble with Gladwell’s wholesale rejection of weak ties as a means of building a social movement. (I have some issues with Shirky, too.) It turns out that such distant relationships aren’t just useful for getting jobs or spreading trends or sharing information. According to Granovetter, they might also help us fight back against the Man, or at least the redevelopment agency.

Read the whole post to get the full argument and definitely read Lehrer’s excellent review of Shirky’s book linked in the quotation above.  Essentially Lehrer is offering a kind of middle ground between Shirky and Gladwell.  Since I tend toward mediating positions myself, I think he makes a valid point; but I do lean toward Gladwell’s end of the spectrum nonetheless.

Here, however, is one more angle on the issue:  perhaps the factors working against the potential of social media are not only inherent in the form itself, but also a condition of society that predates the arrival of digital media by generations.  In The Human Condition, Hannah Arendt argued that power, the kind of power to transform society that Gladwell has in view,

. . . is actualized only where word and deed have not parted company, where words are  not empty and deeds not brutal, where words are not used to veil intentions but to disclose realities, and deeds are not used to violate and destroy but to establish relations and create new realities.

Arendt made that claim in the late 1950s, and she argued that even then words and deeds had been drifting apart for some time.  I suspect that since then the chasm has yawned ever wider and that social media participate in and reinforce that disjunction.  It would be unfair, however, to single out social media, since the problem extends to most forms of public discourse, of which social media are but one example.

In The Disenchantment of Secular Discourse, Steven D. Smith argues that

It is hardly an exaggeration to say that the very point of ‘public reason’ is to keep the public discourse shallow – to keep it from drowning in the perilous depths of questions about ‘the nature of the universe,’ or ‘the end and object of life,’ or other tenets of our comprehensive doctrines.

If Smith is right — you can read Stanley Fish’s review in the NY Times to get more of a feel for his argument — social media already operate within a context in which the habits of public discourse have undermined our ability to take words seriously.  To put it another way, the assumptions shaping our public discourse encourage the divorce of words and deeds by stripping our language of its appeal to the deeper moral and metaphysical resources necessary to compel social action.  We tend to get stuck in the analysis and pseudo-debate without ever getting to action. As Fish puts it:

While secular discourse, in the form of statistical analyses, controlled experiments and rational decision-trees, can yield banks of data that can then be subdivided and refined in more ways than we can count, it cannot tell us what that data means or what to do with it . . . . Once the world is no longer assumed to be informed by some presiding meaning or spirit (associated either with a theology or an undoubted philosophical first principle) . . . there is no way, says Smith, to look at it and answer normative questions, questions like “what are we supposed to do?” and “at the behest of who or what are we to do it?”

Combine this with Kierkegaard’s 19th-century observations about the press, which now appear all the more applicable to the digital world.  Consider the following summary of Kierkegaard’s fears offered by Hubert Dreyfus in his little book On the Internet:

. . . the new massive distribution of desituated information was making every sort of information immediately available to anyone, thereby producing a desituated, detached spectator.  Thus, the new power of the press to disseminate information to everyone in a nation led its readers to transcend their local, personal involvement . . . . Kierkegaard saw that the public sphere was destined to become a detached world in which everyone had an opinion about and commented on all public matters without needing any first-hand experience and without having or wanting any responsibility.

Kierkegaard suggested the following motto for the press:

Here men are demoralized in the shortest possible time on the largest possible scale, at the cheapest possible price.

I’ll let you decide whether or not that motto may be applied even more aptly to existing media conditions.  In any case, the situation Kierkegaard believed was created by the daily print press in his own day is at least a more likely possibility today.  A globally connected communications environment geared toward creating a constant, instantaneous, and indiscriminate flow of information, together with the assumptions of public discourse described by Smith, numbs us into docile indifference — an indifference social media may be powerless to overthrow, particularly when the stakes are high.  We are offered instead the illusion of action and involvement, the sense of participation in the debate.  But there is no meaningful debate, and by next week the issue, whatever the issue is, will still be there, and we’ll be busy discussing the next thing.  Meanwhile action walks further down a lonely path, long since parted from words.

Drowning in the Shallow End

As George Lakoff and Mark Johnson pointed out in Metaphors We Live By, we do a lot of our thinking and understanding through metaphors that structure our thoughts and concepts.  So pervasive are these metaphors that in most cases we don’t even realize we are using metaphors at all.  Recently, metaphors related to shallowness and depth have caught my attention.

Many of the fears expressed by critics of the Internet and the digital world revolve around a loss of depth.  We are, in their view, gaining an immense amount of breadth or surface area, but it is coming at the expense of depth and by extension rendering us rather shallow.  For example, consider this passage from a brief statement playwright Richard Foreman contributed to Edge:

… today, I see within us all (myself included) the replacement of complex inner density with a new kind of self-evolving under the pressure of information overload and the technology of the “instantly available”. A new self that needs to contain less and less of an inner repertory of dense cultural inheritance—as we all become “pancake people”—spread wide and thin as we connect with that vast network of information accessed by the mere touch of a button.

The notion of “pancake people” is a variation on the shallow/deep metaphor — a good deal of surface area, not much depth.  I first came across Foreman’s analogy in the conclusion of Nicholas Carr’s much discussed piece in The Atlantic, “Is Google Making Us Stupid?”  Carr’s piece generated not only a great deal of discussion, but also a book, published this year, exploring the effects of the Internet on the brain.  In it, Carr surveyed a variety of recent studies suggesting that significant Internet use inhibits our capacity for sustained attention and our ability to think deeply.  The title of Carr’s book?  The Shallows.

What is interesting about metaphors such as deep/shallow is that we do appear to have a rather intuitive sense of what they are communicating.  I suspect we all have some notion of what it means to say that someone or some idea is not very deep, or what is meant when someone says that they are just skimming the surface of a topic.  But the nature of metaphors is such that they both hide and reveal.  They help us understand a concept by comparing it to some other, perhaps more familiar idea, but the two things are never identical, and so while something is illuminated, something else may be hidden.  Also, the taken-for-granted status of some metaphors, shallowness/depth for instance, may lull us into thinking that we understand something when we really don’t, in the same way, for example, that St. Augustine remarked that he knew what “time” was until he was asked to define it.

What exactly is it to say that an idea is shallow or deep?  Can we describe what we mean without resorting to metaphor?  It is not that I am against metaphors; one can’t really be against metaphorical language without losing language as we know it altogether.  It may be that we cannot get at some ideas at all without metaphor.  My point rather is to try to think … well, more deeply about the consequences of our digital world.  Having noticed that key criticisms frequently involve this idea of a loss of depth, it seems we had better be sure we know what is meant.  Very often discussions and debates don’t seem to get anywhere because the participants are using terms equivocally or without a precise sense of how they are being used by the other side.  A little sorting out of our terms, perhaps especially our metaphors, may go a long way toward advancing the conversation.  (Incidentally, that last phrase is also a metaphor.)

Here is one last instance of the metaphor that doesn’t arise out of the recent debates about the Internet, and yet appears to be quite applicable.  The following is taken from Hannah Arendt’s 1958 work, The Human Condition:

A life spent entirely in public, in the presence of others, becomes, as we would say, shallow.  While it retains visibility, it loses the quality of rising into sight from some darker ground which must remain hidden if it is not to lose its depth in a very real, non-subjective sense.

Arendt’s comments arise from a technical and complex discussion of what she identifies as the private, public, and social realms of human life.  And while she was rather prescient in certain areas, she could not have imagined the rise of the Internet and social media.  However, these comments seem to be very much in line with Jaron Lanier’s observation, that “you have to be somebody before you can share yourself.” In our rush to publicize our selves and our thoughts, we are losing the hidden and private space in which we cultivate depth and substance.

Although employing other metaphors to do so, Richard Foreman also offered a sense of what he understood to be the contrast to the “pancake people”:

I come from a tradition of Western culture in which the ideal (my ideal) was the complex, dense and “cathedral-like” structure of the highly educated and articulate personality—a man or woman who carried inside themselves a personally constructed and unique version of the entire heritage of the West.

This is not necessarily about the recovery of some Romantic notion of the essential self, but it is about a certain degree of complexity and solidity (metaphors again, I know).  In any case, it strikes me as an ideal worth preserving.  Foreman and Carr (and perhaps Arendt, if she were around) seem uncertain that it is an ideal that can survive in the digital age.  At the very least, they are pointing to some of the challenges.  Given that the digital age is not going away, it is left to us, if we value the ideal, to think about how complexity, depth, and density can be preserved.  And the first thing we may have to do is bring some conceptual clarity to our metaphors.

It’s Not a Game, It’s an Experience — Mark Cuban Channels Don Draper

In the first season finale of the AMC series Mad Men, Don Draper famously pitches an ad campaign for Kodak’s new slide projector in which he suggests, with appropriately melodramatic music in the background,

This is not a spaceship, it’s a time machine … It goes backwards and forwards, and it takes us to a place where we ache to go again … It’s not called ‘The Wheel.’ It’s called ‘The Carousel.’ It lets us travel around and around and back home again.

The scene was effectively parodied by SNL shortly thereafter; both the original scene and the parody are easy to find online.

Over the top perhaps, but it did convey Madison Avenue’s awareness that selling a product involves connecting with something deeper than utility or effectiveness.  Think what you will of the Dallas Mavericks’ sometimes controversial owner Mark Cuban, he has perceptively argued, along similar lines, that those in the professional sports business are not selling games, they are selling experiences.  In a recent blog post he writes,

We in the sports business don’t sell the game, we sell unique, emotional experiences. We are not in the business of selling basketball. We are in the business of selling fun and unique experiences. I say it to our people at the Mavs all the time, I want a Mavs game to be more like a great wedding than anything else.

Ultimately, his post ends up being about technologies that insert themselves into the experience and thus detract from that experience.

… I hate the trend of handheld video at games. I can’t think of a bigger mistake.  The last thing I want is someone looking down at their phone to see a replay.

This is not unlike Jaron Lanier’s wondering whether we are really there at all when we tweet, blog, or update our status throughout an event or gathering.  Cuban and Lanier, in their own ways, are both arguing for our full presence in our own experience.  Cuban concludes his post with the following observation:

The fan experience is about looking up, not looking down. If you let them look down, they might as well stay at home, the screen is always going to be better there.

This is good advice for life in general.  Look up from the screen.  Who knows, in looking one another in the eyes again, we might begin to recover the habits of respect and civility that are now so sorely missed.