“You know, like when you realize you left your phone at home …”

The discipline of anthropology cut its teeth on the study of cultures that were deemed “primitive” and exotic by the standards of nineteenth-century Western, industrialized society. North American and European nations were themselves undergoing tremendous transformations wrought by the advent of groundbreaking new technologies — the steam engine, railroad, and telegraph, to name just three. These three alone dramatically reordered the realms of industry, transportation, and communication. Altogether they had the effect of ratcheting up the perceived pace of cultural evolution. Meanwhile, the anthropologists studied societies in which change, when it could be perceived, appeared to proceed at a glacial pace. Age-old ritual and tradition structured the practice of everyday life, and a widely known body of stories ordered belief and behavior.

“All that is solid melts into air, and all that is holy is profaned …” — so wrote Marx and Engels in 1848. The line evocatively captures the erosive consequences of modernity. The structures of traditional society then recently made an object of study by the anthropologists were simultaneously passing out of existence in the “modern” world.

I draw this contrast to point out that our own experience of rapid and disorienting change has a history. However out of sorts we may feel as we pass through what may be justly called the digital revolution, it probably does not quite compare with the sense of displacement engendered by the technological revolutions of the late nineteenth and early twentieth centuries. I still tend to think that the passage from no electricity to near-ubiquitous electrification is more transformative than the passage from no Internet to ubiquitous Internet. (But I could be persuaded otherwise.)

So when, in “You’ve Already Forgotten Yesterday’s Internet,” Philip Bump notes that the Internet is “a stunningly effective accelerant” that has rendered knowledge a “blur,” he is identifying the present position and velocity of a trajectory set in motion long ago. Of course, if Bump is right, and I think he is certainly in the ballpark so far as his diagnosis is concerned, then this history is irrelevant since no one really remembers it anyway, at least not for long.

Bump begins his brief post by making a joke out of the suggestion that he was going to talk about Herodotus. Who talks about Herodotus? Who even knows who Herodotus was? The joke may ultimately be on us, but Bump is right. The stories that populated the Western imagination for centuries have been largely forgotten. Indeed, as Bump suggests, we can barely keep the last several months in mind, much less the distant past:

“The web creates new shared points of reference every hour, every minute. The growth is exponential, staggering. Online conversation has made reference to things before World War II exotic — and World War II only makes the cut because of Hitler.

Yesterday morning, an advisor to Mitt Romney made a comment about the Etch-A-Sketch. By mid-afternoon, both of his rivals spoke before audiences with an Etch-A-Sketch in hand. The Democratic National Committee had an ad on the topic the same day. The point of reference was born, spread — and became trite — within hours.”

Bump’s piece is itself over a week old, and I’m probably committing some sort of blogging sin by commenting on it at this point. But I’ll risk offending the digital gods of time and forgetting because he’s neatly captured the feel of Internet culture. This brings us back, though, to the origins of anthropology and the very idea of culture. Whatever we might mean by culture now, it has very little to do with the structures of traditional, “solid” societies that first filled the term with meaning. Our culture, however we might define it, is no longer characterized by the persistence of the past into the present.

I should clarify: our culture is no longer characterized by the acknowledged, normative persistence of the past into the present. By this clarification I’m trying to distinguish between the sense in which the past persists whether we know it or like it, and the sense in which the past persists because it is intentionally brought to bear on the present. The persistence of the past in the former sense is, as far as I can tell, an unavoidable feature of our being time-bound creatures. The latter, however, is a contingent condition that obtained in pre-modern societies to a greater degree, but no longer characterizes modern (or post-modern, if you prefer) society to the same extent.

Notably, our culture no longer trades on a stock of shared stories about the past. Instead (beware, massive generalizations ahead), we have moved into a cultural economy of shared experience. Actually, that’s not quite right either. It’s not so much shared experience as it is a shared existential sensibility — affect.

I am reminded of David Foster Wallace’s comments on what literature can do:

“There’s maybe thirteen things, of which who even knows which ones we can talk about.  But one of them has to do with the sense of, the sense of capturing, capturing what the world feels like to us, in the sort of way that I think that a reader can tell ‘Another sensibility like mine exists.’  Something else feels this way to someone else.  So that the reader feels less lonely.”

Wallace goes on to describe the work of avant-garde or experimental literature as “the stuff that’s about what it feels like to live.  Instead of being a relief from what it feels like to live.”

When the objective content of culture, the stories for example, is marginalized for whatever myriad reasons, there still remains the existential level of lived experience, which then becomes the object of analysis and comment. Talk about “what it feels like to be alive” now does the work shared stories accomplished in older cultural configurations. We’re all meta now because our focus has shifted to our own experience.

Consider the JFK assassination as a point of transition. It may have been the first event that prompted people to ask and talk about where they were when it transpired. The story becomes about where I was when I heard the news. This is an indicator of a profound cultural shift. The event itself fades into the background as the personal experience of the event moves forward. The objectivity of the event becomes less important than the subjective experience. Perhaps this exemplifies a general societal trend. We may not exchange classical or biblical allusions in our everyday talk, but we can trade accounts of our anxiety and nostalgia that will ring broadly true to others.

We don’t all know the same stories, but we know what it feels like to be alive in a time when information washes over us indiscriminately. The story we share is now about how we can’t believe this or that event is already a year past. It feels as if it were just yesterday, or it feels as if it were much longer ago. In either case, what we feel is that we don’t have a grip on the passage of time or the events carried on the flood. Or we share stories about the anxiety that gripped us when we realized we had left our phone at home. This story resonates. That experience becomes our new form of allusion. It is not an allusion to literature or history; it is an allusion to shared existential angst.

The Slippery Slope Is Greased By Risk Aversion

In a short post, Walter Russell Mead links to five stories under the heading of “Big Brother Goes Hi-Tech.” Click over for links to stories about face scanners in Japan, phone-monitoring in Iran, “infrared antiriot cameras” in China, and computer chips embedded in the uniforms of school children in Brazil.

The stories from Japan, China, and Iran may be the most significant (and disconcerting) in terms of the reach of the technologies in question, but it was the item from Brazil that caught my attention, perhaps because it involved school children, and we are generally more troubled by problematic developments that directly impact children. The story to which Mead linked appeared in the NY Times and it amounts to little more than a brief note. Here is the whole of it:

“Grade-school students in a northeastern Brazilian city are using uniforms embedded with computer chips that alert parents if they are cutting classes, the city’s education secretary, Coriolano Moraes, said Thursday. Twenty-thousand students in 25 of Vitória da Conquista’s 213 public schools started using T-shirts with chips this week, Mr. Moraes said. By 2013, all of the city’s 43,000 public school students will be using them, he added. The chips send a text message to the cellphones of parents when their children enter the school or alert the parents if their children have not arrived 20 minutes after classes have begun. The city government invested $670,000 in the project, Mr. Moraes said.”

So what to make of this? It differs from the technologies being deployed in China, Japan, and Iran in that it is being implemented in the light of day.

[Curious side note: I misspelled “implemented” in the sentence above and it was auto-corrected to read “implanted”. Perhaps the computers know something we don’t!]

On the face of it, there is nothing secretive about this program, and I would be surprised if there was not some kind of opt-out provision for parents. Also, from this short notice it is unclear whether the augmented T-shirts can be tracked or whether they simply interact with a sensor on school premises and are inactive outside of school grounds. If the technology could be used to track a child’s location outside of school, it would be more problematic.

Or perhaps it might be more attractive. The same impulse that would sanction these anti-truancy T-shirts, taken further along the path of its own logic, would also seem to sanction technology that tracks a child’s location at all times. It is all about safety and security; at least that is how it would be presented and justified. It would be the ultimate safeguard against kidnapping. It would also solve, or greatly mitigate, the problem of children wandering off on their own and finding themselves lost. Of course, clothing can be removed from one’s person, which to my mind opens up all kinds of flaws with the Brazilian program. How long, really, will it take clever teenagers to figure out all sorts of ways to circumvent this technology?

Recalling the auto-correct hint, then, it would seem that the answer to this technology’s obvious design flaw would be to embed the chips subcutaneously. We already do it with our pets. Wouldn’t it be far more tragic to lose a child than to lose a pet?

Now, seriously, how outlandish does this sound at this techno-social juncture we find ourselves in? Think about it. Is it just me, or does it not seem as if we are past the point where we would be shocked by the possibility of implanted chips? I’m sure there is a wide spectrum of opinion on such matters, but the enthusiasts are not exactly on the fringes.

Consider the dynamic that Thomas de Zengotita has labeled “Justin’s Helmet Principle.” Sure, Justin looks ridiculous riding down the street with his training wheels on, more pads than a lineman, and a helmet that makes him look like Marvin the Martian, but do I want the burden of not decking Justin out in this baroque assemblage of safety equipment, only to have him fall and seriously injure himself? No, probably not. So on goes the safety crap.

Did we sense that there was something a little off when we started sending our first graders off to school with cell phones, just a fleeting moment of incongruity perhaps? Maybe. Did we dare risk not giving them the cell phone and having them get lost, or worse, without a way of getting help? Nope. So there goes Johnny with the cell phone.

And in the future we might add: did we think it disconcerting when we first started implanting chips in our children? Definitely. But did we want to risk having them be kidnapped or lost and not be able to find them? No, of course not.

The slippery slope is greased by the oil of risk aversion.

Thoughts?

The Internet & the Youth of Tomorrow: Highlights from the Pew Survey

The Pew Internet & American Life Project conducted “an opt-in, online survey of a diverse but non-random sample of 1,021 technology stakeholders and critics … between August 28 and October 31, 2011.” The survey presented two scenarios for the youth of 2020, asked participants to choose which they thought more likely, and then invited elaboration.

Here are the two scenarios and the responses they garnered:

Some 55% agreed with the statement:

In 2020 the brains of multitasking teens and young adults are “wired” differently from those over age 35 and overall it yields helpful results. They do not suffer notable cognitive shortcomings as they multitask and cycle quickly through personal- and work- related tasks. Rather, they are learning more and they are more adept at finding answers to deep questions, in part because they can search effectively and access collective intelligence via the internet. In sum, the changes in learning behavior and cognition among the young generally produce positive outcomes.

Some 42% agreed with the opposite statement, which posited:

In 2020, the brains of multitasking teens and young adults are “wired” differently from those over age 35 and overall it yields baleful results. They do not retain information; they spend most of their energy sharing short social messages, being entertained, and being distracted away from deep engagement with people and knowledge. They lack deep-thinking capabilities; they lack face-to-face social skills; they depend in unhealthy ways on the internet and mobile devices to function. In sum, the changes in behavior and cognition among the young are generally negative outcomes.

However the report also noted the following:

While 55% agreed with the statement that the future for the hyperconnected will generally be positive, many who chose that view noted that it is more their hope than their best guess, and a number of people said the true outcome will be a combination of both scenarios.

In all honesty, I am somewhat surprised the results split so evenly. I would have expected the more positive scenario to perform better than it did. The most interesting aspect of the report, however, is of course the set of excerpts presented from the respondents’ elaborations. Here are a few with some interspersed commentary.

A number of respondents wrote about the skills that will be valued in the emerging information ecosystem:

  • There are concerns about new social divides. “I suspect we’re going to see an increased class division around labor and skills and attention,” said media scholar danah boyd.
  • “The essential skills will be those of rapidly searching, browsing, assessing quality, and synthesizing the vast quantities of information,” wrote Jonathan Grudin, principal researcher at Microsoft. “In contrast, the ability to read one thing and think hard about it for hours will not be of no consequence, but it will be of far less consequence for most people.”

Among the more interesting excerpts was this from Amber Case, cyberanthropologist and CEO of Geoloqi:

  • “The human brain is wired to adapt to what the environment around it requires for survival. Today and in the future it will not be as important to internalize information but to elastically be able to take multiple sources of information in, synthesize them, and make rapid decisions … Memories are becoming hyperlinks to information triggered by keywords and URLs. We are becoming ‘persistent paleontologists’ of our own external memories, as our brains are storing the keywords to get back to those memories and not the full memories themselves.”

I’m still not convinced at all by the argument against internalization. (You can read why here, here, and here.) But she is certainly correct about our “becoming ‘persistent paleontologists’ of our own external memory.” And the point was memorably put as well. We are building vast repositories of external memory and revisiting those stores in ways that are historically novel. We’ve yet to register the long-term consequences.

The notion of our adaptability to new information environments was also raised frequently:

  • Cathy Cavanaugh, an associate professor of educational technology at the University of Florida, noted, “Throughout human history, human brains have elastically responded to changes in environments, society, and technology by ‘rewiring’ themselves. This is an evolutionary advantage and a way that human brains are suited to function.”

This may be true enough, but what is missing from these sorts of statements is any discussion of which environments might be better or worse for human beings. To acknowledge that we adapt is to say nothing about whether or not we ought to adapt. Or, if one insists, Borg-like, that we must adapt or die, there is little discussion about whether this adaptation leaves us on the whole better off. In other words, we ought to be asking whether the environment we are asked to adapt to is more or less conducive to human flourishing. If it is not, then all the talk of adaptation is a thinly veiled fatalism.

Some, however, did make strong and enthusiastic claims for the beneficence of the emerging media environment:

  • “The youth of 2020 will enjoy cognitive ability far beyond our estimates today based not only on their ability to embrace ADHD as a tool but also by their ability to share immediately any information with colleagues/friends and/or family, selectively and rapidly. Technology by 2020 will enable the youth to ignore political limitations, including country borders, and especially ignore time and distance as an inhibitor to communications. There will be heads-up displays in automobiles, electronic executive assistants, and cloud-based services they can access worldwide simply by walking near a portal and engaging with the required method such as an encrypted proximity reader (surely it will not be a keyboard). With or without devices on them, they will communicate with ease, waxing philosophic and joking in the same sentence. I have already seen youths of today between 20 and 35 who show all of these abilities, all driven by and/or enabled by the internet and the services/technologies that are collectively tied to and by it.”

This was one of the more techno-utopian predictions in the survey. The notion of “embracing ADHD as a tool” is itself sufficiently jarring to catch one’s attention. One gets the gist of what the respondent is foreseeing — a society in which cognitive values have been radically re-ordered. Where sustained attention is no longer prized, attention deficit begins to seem like a boon. The claims about the irrelevance of geographic and temporal limits are particularly interesting (or disconcerting). They seemingly make a virtue of disembodied rootlessness. The youth of the future will, in this scenario, be temporally and spatially homeless, virtually dispersed. (The material environment of the future imagined here also invites comparison to the dystopian vision of the film Wall-E.)

Needless to say, not all respondents were nearly so sanguine. Most interestingly, many of the youngest respondents were among the most concerned:

  • A number of the survey respondents who are young people in the under-35 age group—the central focus of this research question—shared concerns about changes in human attention and depth of discourse among those who spend most or all of their waking hours under the influence of hyperconnectivity.

This resonates with my experience teaching as well. There’s a palpable unease among many of the most connected with the pace, structure, and psychic consequences of the always-on life. They appear to be discovering through experience what is eloquently put by Annette Liska:

  • Annette Liska, an emerging-technologies design expert, observed, “The idea that rapidity is a panacea for improved cognitive, behavioral, and social function is in direct conflict with topical movements that believe time serves as a critical ingredient in the ability to adapt, collaborate, create, gain perspective, and many other necessary (and desirable) qualities of life. Areas focusing on ‘sustainability’ make a strong case in point: slow food, traditional gardening, hands-on mechanical and artistic pursuits, environmental politics, those who eschew Facebook in favor of rich, active social networks in the ‘real’ world.”

One final excerpt:

  • Martin D. Owens, an attorney and author of Internet Gaming Law, wrote, “Just as with J.R.R. Tolkien’s ring of power, the internet grants power to the individual according to that individual’s wisdom and moral stature. Idiots are free to do idiotic things with it; the wise are free to acquire more wisdom. It was ever thus.”

In fact, the ring in Tolkien’s novels is a wholly corrupting force. The “wisdom and moral stature” of the wearer may only forestall its deleterious effects. The wisest avoided using it at all. I won’t go so far as to suggest that the same applies to the Internet, but I certainly couldn’t let Tolkien be appropriated in the service of a misguided view of technological neutrality.

“Many books are read but some books are lived”

Just a quick post to pass along a link to a wonderful essay that appeared recently in The New Republic. Leon Wieseltier’s “Voluminous” is a smart, evocative reflection on the meaning of books and a personal library that is Benjamin-esque in its effect. Here are a couple of excerpts.  Do click through to read the rest. I trust you will find it worth your time.

“Many books are read but some books are lived, so that words and ideas lose their ethereality and become experiences, turning points in an insufficiently clarified existence, and thereby acquire the almost mystical (but also fallible) intimacy of memory.”

And…

“My books are not dead weight, they are live weight—matter infused by spirit, every one of them, even the silliest. They do not block the horizon; they draw it. They free me from the prison of contemporaneity: one should not live only in one’s own time. A wall of books is a wall of windows.”

This is one of those pieces that resonates deeply with me for how well it puts words to my own sensibilities (even if I might not strike quite so adversarial a tone toward digital media). I hope you’ll enjoy it.

Many thanks to the reader who took the time to email me the link!

Ray Bradbury Goes to Disneyland: Automatons, Animatronics, and Robots

In October of 1965, Ray Bradbury wrote “The Machine-Tooled Happyland,” his reflections on Disneyland. He begins by recalling his delight as a child with all things Disney and then his dismay at an essay in The Nation that equated Disneyland with Las Vegas. Here is Bradbury:

“Vegas’s real people are brute robots, machine-tooled bums.

Disneyland’s robots are, on the other hand, people, loving, caring and eternally good.”

Here’s more:

“Snobbery now could cripple our intellectual development. After I had heard too many people sneer at Disney and his audio-animatronic Abraham Lincoln in the Illinois exhibit at the New York World’s Fair, I went to the Disney robot factory in Glendale. I watched the finishing touches being put on a second computerized, electric- and air-pressure-driven humanoid that will “live” at Disneyland from this summer on. I saw this new effigy of Mr. Lincoln sit, stand, shift his arms, turn his wrists, twitch his fingers, put his hands behind his back, turn his head, look at me, blink and prepare to speak. In those few moments I was filled with an awe I have rarely felt in my life.

Only a few hundred years ago all this would have been considered blasphemous, I thought. To create man is not man’s business, but God’s, it would have been said. Disney and every technician with him would have been bundled and burned at the stake in 1600.”

Regarding that last thought, perhaps a bit of hyperbole. But there is something to it. Consider this recent fascinating Wilson Quarterly essay by Max Byrd, “Man as Machine.” Byrd discusses the popularity of automatons in Europe, particularly France, during the 18th and 19th centuries:

“Automates of various kinds have been around since antiquity, as toys or curiosities. But in the middle of the 19th century, in one of the odder artistic enthusiasms the French are famously prone to, a positive mania for automates like the dulcimer player swept the country. People flocked to see them in galleries, museums, touring exhibitions. Watchmakers and craftsmen competed to make more and still more impossibly complex clockwork figures, animals and dolls that would dance, caper, perform simple household tasks—in one case, even write a line or two with pen and ink. The magician Robert Houdin built them for his act. Philosophers and journalists applauded them as symbols of the mechanical genius of the age. Like so many such fads, however, the Golden Age of Automates lasted only a short time. By about 1890 it had yielded the stage to even newer technologies: Edison’s phonograph and the Lumière brothers’ amazing cinematograph.”

Back to Bradbury, the whole piece is a touching appreciation of Walt Disney (the man) and the possibilities of animatronics for the teaching of history:

“Emerging from the robot museums of tomorrow, your future student will say: I know, I believe in the history of the Egyptians, for this day I helped lay the cornerstone of the Great Pyramid.

Or, I believe Plato actually existed, for this afternoon under a laurel tree in a lovely country place I heard him discourse with friends, argue by the quiet hour; the building stones of a great Republic fell from his mouth.”

Read the whole thing. Together with Byrd’s piece it offers interesting background to questions such as those posed recently in an excerpt of Patrick Lin’s Robot Ethics in Slate: “The Big Robot Questions”. Read that too.