“It’s like, you know … the end of print disciplined speech?”

In “What Happens in Vagueness Stays in Vagueness,” Clark Whelton takes aim at what he calls “the linguistic virus that infected spoken language in the late twentieth century” — vagueness.  Here’s the opening example:

I recently watched a television program in which a woman described a baby squirrel that she had found in her yard. “And he was like, you know, ‘Helloooo, what are you looking at?’ and stuff, and I’m like, you know, ‘Can I, like, pick you up?,’ and he goes, like, ‘Brrrp brrrp brrrp,’ and I’m like, you know, ‘Whoa, that is so wow!’ ” She rambled on, speaking in self-quotations, sound effects, and other vocabulary substitutes, punctuating her sentences with facial tics and lateral eye shifts. All the while, however, she never said anything specific about her encounter with the squirrel.

In the mid-1980s, Mr. Whelton began noticing increasingly aberrant speech patterns among prospective interns for New York City mayor Edward Koch’s speechwriting staff.  “Like,” “you know,” “like, you know,” along with non-committal interrogative tones, particularly distressed Whelton.  He goes on to add,

Undergraduates … seemed to be shifting the burden of communication from speaker to listener. Ambiguity, evasion, and body language, such as air quotes—using fingers as quotation marks to indicate clichés—were transforming college English into a coded sign language in which speakers worked hard to avoid saying anything definite.

Whelton comes closest here to the true nature of the situation, but I think an important consideration is missing.  I’m inclined to think that the sorts of language patterns Whelton criticizes reflect a reversion to language environments that are more oral than literate in nature (a situation that Walter Ong called secondary orality).

The cadences and syntax of “high,” “correct,” “proper,” etc. English are a product of writing in general, intensified by print; they are not a necessary feature of spoken language itself, which is ordinarily much more chaotic.  Writing is removed from the holistic context that helps give face-to-face communication its meaning.  To compensate, writing must work hard to achieve clarity and precision, since the words themselves bear the burden of conveying the whole of the meaning.  Oral communication can tolerate vagueness in words and syntax because it can rely on intonation, volume, inflection, and other non-verbal cues to supply meaning.  As an experiment, try transcribing any one of your countless verbal exchanges and note the sometimes startling difference between spoken language and written language.

Where print monopolizes communication, the patterns of written speech begin to discipline spoken language. “Vague” talk then may be characteristic of those whose speech patterns, because they have been formed in a world in which print’s monopoly has been broken, have not been so disciplined by print literacy.

Interestingly, new media is often quite “print-ish,” that is, text isolated from sound — emails, text messaging, Twitter, blogs, Facebook (though images appear there as well) — and this has required the invention of a system of signs aimed at taming the inherent “vagueness” of written communication that is restricted in length, and thus not given the freedom to compensate for the loss of non-verbal and auditory cues with precise syntax and copious language.  : )

Memory, Knowledge, Identity, Technology

Memory, knowledge, identity, technology — these are intimately bound together and it would be difficult to disentangle one from the others.  What is it to know something if not to remember it?  Beyond the biological facts of my existence, what constitutes my identity more significantly than my memory?  What could I remember without technologies including writing, books, pictures, videos, and more?  Or to put it in a more practical way, what degree of panic might ensue if your Facebook profile were suddenly and irrevocably deleted?  Or if your smart phone were to pass into the hands of another?  Or if you lost your flash drive?  Pushing the clock back just a little, we might have similarly asked about the loss of a diary or photo albums.

The connection among these four, particularly memory and technology, is established as early as the Platonic dialogs, most famously the Phaedrus, in which Socrates criticizes writing for its harmful effects on internal memory and knowledge.  What we store in written texts (or hard drives, or “the cloud”) we do not remember ourselves and thus do not truly know.  The form of this debate recurs throughout the subsequent history of technology, all the way to the present debates over the relative merits of computers and the Internet for learning and education.  And in these debates it is almost de rigueur to begin by citing Plato’s Phaedrus, either to reinstate or to dismiss the Socratic critique.  Neil Postman began his book, Technopoly: The Surrender of Culture to Technology, with reference to the Phaedrus, and the Phaedrus appears as well in Nicholas Carr’s now (in)famous Atlantic essay, “Is Google Making Us Stupid?”

The rejoinder comes quickly though:  Surely Socrates failed to appreciate the gains in knowledge that writing would make possible.  And if I offload information to external memory, that simply frees my mind for more significant tasks.  There is, of course, an implicit denigration of mere memory in this rebuttal to Socrates.

Yet some tension, some uneasiness remains.  Otherwise the critique would not keep resurfacing, and it wouldn’t elicit such strong pushback when it does.  In other words, the critique seems to strike a nerve, a sensitive one at that, and when we consider the intimate interrelationship of memory with our ideas about knowledge and education, and with the formation and maintenance of our identities, that is not surprising at all.  A few posts down I cited Illich’s claim that

What anthropologists distinguish as ‘cultures’ the historian of mental spaces might distinguish as different ‘memories.’  The way to recall, to remember, has a history which is, to some degree, distinct from the history of the substance that is remembered.

I’m wondering now whether it might also be true that a history of personal identity or of individuality could be told through a history of memory and its external supports.  Might we be able to argue that individualism is a function of technologies of memory that allow a person to create and sustain his own history apart from that of the larger society?

In any case, memory has captured my attention, and fascinating questions are following hard after it.  What is memory anyway?  What is it to remember a name, a look, a person, a fact, a feeling, where something is, how to do something, or simply to do something?  What do we remember when we remember?  How do we remember?  Why do we remember?  And, of course, how have the answers to all of these questions evolved along with the development of technology from the written word to the external hard drive?

On that last note, I wonder if our choice to call a computer’s capacity to store data “memory” has not in turn affected how we think of our own memory.  I’m especially thinking of a flash drive that we hold in hand and equate with stored memory.  In this device I keep my pictures, my documents, my videos, my memories — memory, or a certain conception of it, is objectified, reified.  Is memory merely mental storage?  Or has this metaphor atrophied our understanding of memory?

Of course, metaphors for memory are nothing new.  I’m beginning to explore some of these ideas with Paul Ricoeur’s Memory, History, Forgetting, and Ricoeur reminds us that in another Platonic dialog, the Theaetetus, Socrates offers the block of wax in our souls as a metaphor for our memory.  And Socrates suggests, “We may look at it, then, as a gift of Mnemosyne [Memory], the mother of the Muses.” I’ll keep you posted as the Muses urge.

 

Social Media: Good for Groups, Bad for Individuals?

Remember those IBM “You Make the Call” spots during NFL games that used to show you a controversial play and then ask you to make the call before revealing what was in fact the right call?

Well, here’s a variation:

A recent Pew survey has been widely taken to suggest that Internet use doesn’t kill healthy social life after all.

A recent Stanford study suggests to some that social media, Facebook in particular, is making us sad.

You make the call!

Admittedly, this is not exactly an either/or situation; it may even be both/and.  I just felt like alluding to the vintage commercial.  (Follow that last link and you’ll also see some vintage Tandy and IBM computers.)

Here’s a little more from each.  Regarding the Pew survey:

Pew found that 80 percent of Internet users participate in groups, as compared with 56 percent of non-Internet users.

Twitter users were the most social. 85 percent of them were involved in group activity offline, followed by 82 percent of social networking users. The results from the survey identify the use of social media and online activities as helpful in the process of disseminating information and engaging group members.

“The virtual world is no longer a separate world from our daily life and lots of Americans are involved in more kinds of groups,” said Rainie.

From the Slate story about the Stanford study:

Facebook is “like being in a play. You make a character,” one teenager tells MIT professor Sherry Turkle in her new book on technology, Alone Together. Turkle writes about the exhaustion felt by teenagers as they constantly tweak their Facebook profiles for maximum cool. She calls this “presentation anxiety,” and suggests that the site’s element of constant performance makes people feel alienated from themselves. (The book’s broader theory is that technology, despite its promises of social connectivity, actually makes us lonelier by preventing true intimacy.)

With that excerpt I’m killing two birds with one stone by pointing you to Sherry Turkle’s most recent work, which has drawn considerable attention over the last month or so. See Jonah Lehrer’s review here and, in a lighter vein, watch Turkle on The Colbert Report here.

Also of interest in the Slate article is the differentiation between male and female use of Facebook:

Facebook oneupsmanship may have particular implications for women. As Meghan O’Rourke has noted here in Slate, women’s happiness has been at an all-time low in recent years. O’Rourke and two University of Pennsylvania economists who have studied the male-female happiness gap argue that women’s collective discontent may be due to too much choice and second-guessing–unforeseen fallout, they speculate, of the way our roles have evolved over the last half-century. As the economists put it, “The increased opportunity to succeed in many dimensions may have led to an increased likelihood in believing that one’s life is not measuring up.”

If you’re already inclined to compare your own decisions to those of other women and to find yours wanting, believing that others are happier with their choices than they actually are is likely to increase your own sense of inadequacy. And women may be particularly susceptible to the Facebook illusion. For one thing, the site is inhabited by more women than men, and women users tend to be more active on the site, as Forbes has reported. According to a recent study out of the University of Texas at Austin, while men are more likely to use the site to share items related to the news or current events, women tend to use it to engage in personal communication (posting photos, sharing content “related to friends and family”). This may make it especially hard for women to avoid comparisons that make them miserable. (Last fall, for example, the Washington Post ran a piece about the difficulties of infertile women in shielding themselves from the Facebook crowings of pregnant friends.)

Regarding the Pew survey, I’m wondering if it says as much as its proponents take it to say.  I’m not sure it says much about the quality of the social interaction involved; more significantly, dividing the population between Internet users and non-Internet users seems less than helpful and may give us nothing deeper than mere correlation.

Regarding the Slate story, note the strong pushback in the comment section from the woman who benefited from Facebook during a time of deep depression.  Generalizations will never be without exceptions, of course, and it may be more helpful to think of social media as exacerbating rather than causing certain dispositions or emotional states.

Obama Talks With A Computer

[Correction:  Mr. Scocca informs me via email that the dialog in his piece was an actual transcript of a session with Eliza.  So nothing “mock” or “contrived” about it.  All the more interesting, read on.]

Over at Slate, Tom Scocca has staged a mock dialogue with Eliza to good, even if somewhat contrived, effect.  Eliza, in case you’re wondering, is a computer program designed in the 1960s to simulate human interaction based on certain scripts (such as DOCTOR, which simulates a particular style of psychotherapy).
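Eliza’s script-driven behavior is simple enough to sketch in a few lines.  What follows is a minimal, hypothetical illustration of the technique (not the original program or its DOCTOR script): match a keyword pattern, reflect the speaker’s own words back as a question, and fall back on a stock phrase when nothing matches.

```python
import re

# A minimal, hypothetical ELIZA-style responder -- an illustrative sketch,
# not the original 1960s program or its DOCTOR script.
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

RULES = [
    (re.compile(r"i am having (.+)", re.I),
     "How long have you been having {0}?"),
    (re.compile(r"it is because (.+)", re.I),
     "Do any other reasons not come to mind?"),
]

def reflect(fragment):
    # Swap first-person words for second-person so the echo reads naturally.
    return " ".join(REFLECTIONS.get(w.lower(), w) for w in fragment.split())

def respond(line):
    # Try each scripted pattern; echo the captured words back as a question.
    for pattern, template in RULES:
        match = pattern.search(line)
        if match:
            return template.format(reflect(match.group(1)).rstrip(".!"))
    return "Please go on."  # the default when no script rule matches

print(respond("I am having a hard time telling the bots from the humans."))
# → How long have you been having a hard time telling the bots from the humans?
```

Notice that the fallback line is exactly the kind of contentless prompt (“Please go on.”) that carries much of the conversation; there is no understanding anywhere, only pattern and echo.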

About halfway through the dialogue we get these few lines, which more or less encapsulate Scocca’s main concern:

TS: On Twitter, I am having a hard time telling the bots from the humans.

Eliza: How long have you been having a hard time telling the bots from the humans?

TS: Weeks and weeks now.

Eliza: Please go on.

TS: It is not because the bots are fluent or facile.

Eliza: Do any other reasons not come to mind?

TS: It is because so much of the human conversation has come down to bot level.

And a few lines later he adds,

TS: That’s not really someone who likes my blog post. It’s a pharma ad-bot that presumably grabs anything with “Obama” in it.

Eliza: Why do you say your blog post its a pharma adbot that presumably grabs anything with obama in it?

TS: Well, of course, by now I have been trained to put “Obama” into the headlines myself, for SEO purposes.

Eliza: Can you elaborate on that?

TS: My actual transactions with human readers are so machine-mediated, it’s necessary for me to keep machine-logic in mind while I’m writing.

I’m taking these observations as rather useful illustrations of how the language (or, logic) of a digital media platform shapes our communication to fit within its own limitations.  Borrowing linguist Roman Jakobson’s maxim regarding languages, I suggested a few posts down that, “Languages of digital media platforms differ essentially in what they cannot (or, encourage us not to) convey and not in what they may convey.”  In other words, we shape our communication to fit the constraints of the medium.  The follow-up question then becomes, “Do we adapt to these limitations and carry them over into other fields of discourse?”  Scocca provocatively suggests that if a computer ends up passing the Turing Test, it will not be because of an advance in computer language capability, but because of a retrogression in the way humans use language.

Keep in mind that you don’t have to be a professional writer working for a popular web magazine to experience machine-mediated communication.  In fact, my guess is that a great deal, perhaps the majority, of our interaction with other people is routinely machine-mediated, and in this sense we are already living in a post-human age.

The mock dialog also suggests yet another adaptation of Jakobson’s principle, this time focused on the economic conditions at play within a digital media platform.  Tracking more closely with Jakobson’s original formulation, this adaptation might go something like this:  the languages of digital media platforms differ essentially in what their economic environment dictates they must convey.  In Scocca’s case, he has been trained to mention Obama for the purposes of search engine optimization, and this, of course, to drive traffic to his blog, because traffic generates advertising revenue.  Not only do the constraints of the platform shape the content of communication; the logic of the wider economic system disciplines the writing as well.

None of this is, strictly speaking, necessary.  It is quite possible to communicate creatively, even aesthetically, within the constraints of a given digital media platform.  Any medium imposes certain constraints; what we do within those constraints remains the question.  Some media, it is true, impose more stringent constraints on human communication than others; the telegraph, for example, comes to mind.  But the wonder of human creativity is that it finds ways of flourishing within constraints; within limitations we manage to be ingenious, creative, humorous, artistic, etc.  Artistry, humor, creativity, and all the rest wouldn’t even be possible without certain constraints to work with and against.

Yet aspiring to robust, playful, aesthetic, and meaningful communication is the path of greater resistance.  It is easier to fall into thoughtless and artless patterns of communication that uncritically bow to the constraints of a medium, thus reducing and inhibiting the possibilities of human expression.  Without any studies or statistics to prove the point, it seems that the path of least resistance is our default for digital communication.  A little intentionality and subversiveness, however, may help us flourish as fully human beings in our computer-mediated, post-human times.

Besides, it would be much more interesting if a computer passed the Turing Test without any concessions on our part.

Oh, and sorry for the title, just trying to optimize my search engine results.

Second Thoughts on “Growing Up Digital”

A few days ago I posted some reflections on Matt Richtel’s NY Times article, “Growing Up Digital,” and committed to posting some further thoughts.  So here they are.  But first, some clarification.  I closed the last post with the following:

Parent missing the point:

“If you’re not on top of technology, you’re not going to be on top of the world.”

Insightful students who know what is really going on:

“Video games don’t make the hole; they fill it.”

“Facebook is amazing because it feels like you’re doing something and you’re not doing anything. It’s the absence of doing something, but you feel gratified anyway.”


Two things always strike me when I hear parents talking about their kids and technology.  The first is a palpable anxiety about their kids getting left behind in a world of rapidly changing technology.  But this is a misplaced fear, or rather, it is a fear particular to the digital immigrant, not the digital native.  Part of the skill set that comes with having grown up digital is a certain facility with new technologies.  It comes “naturally.”  Try to remember the last time you witnessed someone under the age of 30 reading an instruction manual.  Exactly.

The second is the reduction of technology to a means of achieving financial security. I take this to be what the parent quoted above meant by “being on top of the world” (or else they’ve watched Titanic one time too many).  But students recognize that there is something deeper going on.  Their ubiquitous technologies are nothing short of accessories to their humanity.  The intensity of the withdrawal symptoms experienced when these tools are for some reason taken away or disconnected suggests that without them those who have grown up digital have little idea of how to be in the world.  Or rather, it is as if the world is no longer the one they know and are comfortable inhabiting. You might as well be cutting off their oxygen.  Reducing the significance of technology to some silly “you’ll need these skills to get a good job” pep talk does not come close to doing justice to the place these tools have in students’ lives.

On to the new stuff:

Students say that their parents, worried about the distractions, try to police computer time, but that monitoring the use of cellphones is difficult. Parents may also want to be able to call their children at any time, so taking the phone away is not always an option . . .

He says he sometimes wishes that his parents would force him to quit playing and study, because he finds it hard to quit when given the choice.

Two things here.  This is an instance of what Thomas de Zengotita has labeled “Justin’s Helmet Principle.” Sure, Justin looks ridiculous riding down the street with his training wheels on, more pads than a lineman, and a helmet that makes him look like Marvin the Martian, but do I want the burden of not decking Justin out in this baroque assemblage of safety equipment, only to have him fall and seriously injure himself?  No, probably not.  So on goes the safety crap.  Did we sense that there was something a little off when we started sending our first graders off to school with cell phones, just a fleeting moment of incongruity perhaps?  Maybe.  Did we dare risk not giving them the cell phone and have them get lost, or worse, without a way of getting help?  Nope.  So there goes Johnny with the cell phone.

Then there’s this matter about not being able to quit, even wishing parents would impose limits.  Your instinct may be to say, “Get over it, find the off button, and get to work.”  Right, cut off the oxygen and tell them to breathe.  Easier said than done.  I’m not interested in eliminating personal responsibility, nor do I believe that these tools are by themselves the cause of the problem as if they were conscious agents.  But . . . embodied creatures that we are, our mind is not simply an organ of disembodied, spontaneous will.  This is to say that our will is intertwined with the action of our body in such a way that habituated action shapes our disposition and ability to make choices.  We shape our will by repeated and then habitual practices.  This is not new information — Aristotle knew this in his own way — although it is being reinforced by recent cognitive scientific research.

Sam Crocker, Vishal’s closest friend, who has straight A’s but lower SAT scores than he would like, blames the Internet’s distractions for his inability to finish either of his two summer reading books.

“I know I can read a book, but then I’m up and checking Facebook,” he says . . . He concludes: “My attention span is getting worse.”

The relationship between Internet use and attention span is a big issue, so I’ll simply point you to a recent interview with Linda Stone on Henry Jenkins’ blog and an important essay on the issue by N. Katherine Hayles.  Something is going on with our brains and our attention; it seems fair to say that much.  What exactly, and why, may not yet be entirely clear.  But we should remember, as Hayles points out, that deep attention is probably not the biological default.  More likely it was a learned behavior associated with the advent of literacy.  A different form or style of attention is likely emerging along with our immersion in digital media environments.  Richtel cites a couple of studies exploring this development:

The researchers looked at how the use of these media affected the boys’ brainwave patterns while sleeping and their ability to remember their homework in the subsequent days. They found that playing video games led to markedly lower sleep quality than watching TV, and also led to a “significant decline” in the boys’ ability to remember vocabulary words. The findings were published in the journal Pediatrics . . .

In that vein, recent imaging studies of people have found that major cross sections of the brain become surprisingly active during downtime. These brain studies suggest to researchers that periods of rest are critical in allowing the brain to synthesize information, make connections between ideas and even develop the sense of self.

Researchers say these studies have particular implications for young people, whose brains have more trouble focusing and setting priorities . . . . Like Dr. Rich, he says he believes that young, developing brains are becoming habituated to distraction and to switching tasks, not to focus.

Back to the optimistic principal:

Mr. Reilly says that the audio class provides solid vocational training and can get students interested in other subjects.

“Today mixing music, tomorrow sound waves and physics,” he says. And he thinks the key is that they love not just the music but getting their hands on the technology. “We’re meeting them on their turf.”

Mr. Reilly hopes that the two can meet — that computers can be combined with education to better engage students and can give them technical skills without compromising deep analytical thought.

As I indicated last time, this is the hope.  Sometimes I share it.  In my own teaching, I’ve sought to avoid the introduction of technology for technology’s sake, but I have also experimented with class blogs, Wikis, multi-media presentations, Facebook related projects, etc.  Results have been decidedly . . . mixed.

More often than not, I tend to think that immersion in our digital media environment may very well erode (or more dramatically, cannibalize) the skills and dispositions associated with print so that it cannot be merely a matter of adding one skill set to the other.

. . . in Vishal’s case, computers and schoolwork seem more and more to be mutually exclusive.

This is not the final word, of course.  We still have to ask: what difference does this make?  The answer to that question will be relative to the ends we are interested in pursuing, to the vision of the good life and human flourishing that animates us.  In other words, to borrow Keith Thomas’ words, “We cannot determine the purpose of the universities without first asking, ‘What is the purpose of life?’”

Likewise, I would suggest that we cannot determine the purpose of technology in education without first asking, “What is the purpose of life?”