Technology in the Classroom: How To Do It Right

Matt Richtel raises a series of crucial questions regarding technology in the classroom in his NY Times piece this weekend, “In Classroom of Future, Stagnant Scores.” More precisely, Richtel questions the seemingly uncritical push to deploy technology in the classroom regardless of its costs and ambivalent results. In fact, he hits almost every topic of concern I would think to mention if I were writing a similar article, including the influence of the one group that stands to prosper unambiguously from the implementation of technology in the classroom: those who make and sell the technology.

If you’re interested in technology and education, I encourage you to read the article. Here are a few excerpts. You’ll notice, I think, that even the endorsements come off as rather suspect:

“This is such a dynamic class,” Ms. Furman says of her 21st-century classroom. “I really hope it works.” Hope and enthusiasm are soaring here. But not test scores.

“The data is pretty weak. It’s very difficult when we’re pressed to come up with convincing data,” said Tom Vander Ark, the former executive director for education at the Bill and Melinda Gates Foundation and an investor in educational technology companies. When it comes to showing results, he said, “We better put up or shut up.”

Critics counter that, absent clear proof, schools are being motivated by a blind faith in technology and an overemphasis on digital skills — like using PowerPoint and multimedia tools — at the expense of math, reading and writing fundamentals. They say the technology advocates have it backward when they press to upgrade first and ask questions later.

“My gut is telling me we’ve had growth,” said David K. Schauer, the superintendent here. “But we have to have some measure that is valid, and we don’t have that.” It gives him pause. “We’ve jumped on bandwagons for different eras without knowing fully what we’re doing. This might just be the new bandwagon,” he said. “I hope not.”

“Rather than being a cure-all or silver bullet, one-to-one laptop programs may simply amplify what’s already occurring — for better or worse,” wrote Bryan Goodwin … Good teachers, he said, can make good use of computers, while bad teachers won’t, and they and their students could wind up becoming distracted by the technology.

“In places where we’ve had a large implementing of technology and scores are flat, I see that as great,” she said. “Test scores are the same, but look at all the other things students are doing: learning to use the Internet to research, learning to organize their work, learning to use professional writing tools, learning to collaborate with others.”

“Even if he doesn’t get it right, it’s getting him to think quicker,” says the teacher, Ms. Asta. She leans down next to him: “Six plus one is seven. Click here.”

Clearly, the push for technology is to the benefit of one group: technology companies.

I’m not suggesting technology in the classroom is useless, although it can sometimes be worse than useless. What we ought to take issue with is the blind embrace of technology for technology’s sake. Technologies such as laptops and Smartboards are deployed as if their mere presence in the classroom augmented the educational value of what goes on inside. This is not education; it is superstition: the tool becomes a talisman. Or, worse yet, under the assumption that the medium is essentially neutral, old tasks are assigned on new technologies to little, or possibly counterproductive, effect. This is naive at best.

What students need to learn with regard to technology is not how to use countless (often inane) tools like PowerPoint. Rather, technology in education should be introduced in order to teach students how to engage with technology critically and intelligently, and, ultimately, toward human ends.

Teach the history of technology, teach the sociology of new media, teach media theory, teach philosophy and ethics of technology, teach students how to program and code.

Help students become meta-critically savvy about their tools.

Teach students to become critics of texts and applications — Twelfth Night and Twitter.

Use technology to transcend the curricular divide between the sciences and the humanities.

Teach them not only the possibilities opened up by new technologies, but also their limitations.

Do not parade new technologies before students like so many idols sent to deliver us from our darkness, idols to which we must unfailingly acquiesce and which we must mollify.

Don’t merely teach students to do what a new tool enables them to do; help them to question whether we ought to do what the tool enables, or whether we might do with the tool something other than what its makers intended.

The problem with all of this, of course, is that we must first teach the teachers and administrators.

Making Sense Out of Life: Early Modern and Digital Reading Practices

When I wrote the About page for this blog I cited an article by Alan Jacobs from several years ago in which he likened blogs to commonplace books. Commonplace books, especially popular during the sixteenth century when printing first began to yield an avalanche of relatively affordable books, served as a means of ordering and making sense out of the massive amounts of information confronting early modern readers. As is frequently noted, the dismay and disorientation they experienced is not altogether unlike the angst that sometimes accompanies our recent and ongoing digital explosion of available information. And so, taking a cue from Jacobs, I intended for this blog to be something akin to a commonplace book.

As it turned out, the analogy was mostly suggestive. Much that I write here does not quite fit the commonplace genre. Nonetheless, something of the spirit, if not the law, persists. The commonplace genre would find a nearer kin in Tumblr than in traditional blogs.

In a 2000 essay reprinted in The Case for Books (2009), historian of the book Robert Darnton also reflects on commonplace books and the scholarly attention they attracted. The attention was not misplaced.  Commonplace books offered a window into the reading practices and mental landscape of their users; and for an era in which they were widely kept, they could offer a glimpse at the mental landscape of whole segments of society as well.  In the spirit of the commonplace book, here are some excerpts from Darnton’s essay with a few reflections.

Describing the practice of commonplacing:

“It involved a special way of taking in the printed word. Unlike modern readers, who follow the flow of a narrative from beginning to end (unless they are digital natives and click through texts on machines), early modern Englishmen read in fits and starts and jumped from book to book.  They broke texts into fragments and assembled them into new patterns by transcribing them in different sections of their notebooks.

Then they reread the copies and rearranged the patterns while adding more excerpts. Reading and writing were therefore inseparable activities. They belonged to a continuous effort to make sense of things, for the world was full of signs: you could read your way through it; and by keeping an account of your readings, you made a book of your own, one stamped with your own personality.”

What is only a parenthetical aside in Darnton’s opening paragraphs was for me a key insight. Darnton’s description of commonplacing could easily be applied to the forms of reading practiced with digital texts, all the way down to the personalization. What is missing, of course, and this is no small thing, is the public or social dimension.

On what commonplace books reveal:

“By selecting and arranging snippets from a limitless stock of literature, early modern Englishmen gave free play to a semi-conscious process of ordering experience. The elective affinities that bound their selections into patterns reveal an epistemology at work below the surface.”

That last sentence could easily function as a research paradigm for the analysis of social media. Map the “elective affinities” of what Facebook or Twitter or Google+ users link and post, and the emergent patterns will be suggestive of underlying epistemologies. Here again, though, the social dimension complicates the matter considerably. The “elective affinities” on display in social networking sites are performative in a way that private commonplacing was not, thus injecting a layer of distorting self-reflexivity. But, then, that performative dimension is interesting on its own terms.
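To make the research paradigm a little more concrete, here is a toy sketch of what mapping such affinities might look like in practice. Everything in it is hypothetical, the usernames and link labels are entirely made up, and the co-occurrence count is only the crudest possible proxy for an “elective affinity”:

```python
from collections import defaultdict
from itertools import combinations

# Hypothetical sample data: which items each user has shared or posted.
shares = {
    "ana":   {"essay-on-mcluhan", "commonplace-books", "media-theory"},
    "ben":   {"essay-on-mcluhan", "media-theory", "test-scores"},
    "clara": {"test-scores", "ed-tech-funding"},
}

# Count how often each pair of items is shared by the same user --
# a crude proxy for the affinities binding selections into patterns.
affinity = defaultdict(int)
for links in shares.values():
    for a, b in combinations(sorted(links), 2):
        affinity[(a, b)] += 1

# Pairs shared by more than one user hint at an underlying pattern.
patterns = [pair for pair, n in affinity.items() if n > 1]
print(patterns)
```

A real study would of course need far richer data and methods, but even this sketch shows the shape of the move: from individual selections, to recurring pairings, to an inferred pattern beneath the surface.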

Commonplacing as reading for action:

“But they read in the same way — segmentally, by concentrating on small chunks of text and jumping from book to book, rather than sequentially, as readers did a century later, when the rise of the novel encouraged the habit of perusing books from cover to cover. Segmental reading compelled its practitioners to read actively, to exercise critical judgment, and to impose their own pattern on their reading matter. It was also adapted to ‘reading for action,’ an appropriate mode for men like Drake, Harvey, [etc.] and other contemporaries, who consulted books in order to get their bearings in perilous times, not to pursue knowledge for its own sake or to amuse themselves.”

Again the resemblance between early modern reading practices as described by Darnton and digital reading practices is uncanny. The rise of sustained, linear reading is often attributed to the appearance of printing. Darnton, however, would have us connect sustained, cover-to-cover reading with the later rise of the novel. In this case, the age of the novel stands as an interlude between early modern and digital forms of reading which are more similar to one another than either is to reading as practiced in the age of the novel.

The idea of “reading for action” is also compelling as it suggests the agonistic character of both early modern English politics and early 21st century American politics. I suspect that a good deal of online reading today is done in the spirit of loading a gun. At least this is often the ethos of the political blogosphere.

Nonetheless, Darnton would have us see that this form of reading, at least in its early modern manifestation, had its merits in what it required from the reader as an active agent.

Finally, on reading and the attempt to make sense out of experience:

“… we may pay closer attention to reading as an element in what used to be called the history of mentalities — that is, world views and ways of thinking. All the keepers of commonplace books, from Drake to Madan, read their way through life, picking up fragments of experience and fitting them into patterns. The underlying affinities that held those patterns together represented an attempt to get a grip on life, to make sense of it, not by elaborating theories but by imposing form on matter.”

Early modern Britons and those of us who are living through the digital revolution (an admittedly overplayed phrase) share a certain harried and anxious disposition. It was, after all, the early modern poet John Donne who wrote of his age, “’Tis all in pieces, all coherence gone.” Early moderns deployed the commonplace book as a means of collecting some of the pieces and putting them together once more. If we follow the analogy, and this is always a precarious move, it would suggest that the impulses at work in contemporary digital commonplacing practices — which have not only written information, but lived experience as the field from which fragments are culled — are deeply conservative. They would amount to an effort to impose order on the chaotic flux of life.

McLuhan: 100

The medium is the message … five words, plump and alliterative though they may be, are wildly inadequate … he was born in Edmonton, Alberta on July 21, 1911 … He speaks in canned riddles … Speech as organized stutter is based on time. What does speech do to space? … “Clear prose indicates the absence of thought” … Watching McLuhan, you can’t quite decide whether he was a genius or just had a screw loose … he gave us language that made “media” into a thing …

It feels wistful to imagine a time when people didn’t go about their daily routine with the assumption that at any moment another massive media technology will be dumped on us by some geek in California … “I’m going to be a computer when I grow up” …

“What if he is right?” … “Instead of the book as a fixed package of repeatable and uniform character suited to the market with pricing, the book is increasingly taking on the character of a service … and the book as an information service is tailor-made and custom-built” … First of all – and I’m sorry to have to repeat this disclaimer – I’m not advocating anything … “The next medium, whatever it is – it may be the extension of consciousness – will include television as its content, not as its environment” …

an alchemical mix of his vast historical and literary knowledge, his bombastic personality and a range of behaviors we might now place on the very mild end of the autistic spectrum … McLuhan’s mind was probably situated at the mild end of the autism spectrum. He also suffered from a couple of major cerebral traumas …

First, that McLuhan never made arguments, only assertions … a fixture of culture both nerd and pop, which are increasingly the same thing. He is the patron saint of Wired … what mattered was merely the fact that you were watching TV. The act of analysing the content of TV – or of other mediums – is either sentimental or it’s beside the point … Annie Hall … the fastest brain of anyone I have ever met, and I never knew whether what he was saying was profound or garbage … He wanted his words to knock readers out of their intellectual comfort zones, to get them to entertain the possibility that their accepted patterns of perception might need reordering … McLuhan was an information canary …

“He writes by paradox — that makes him hard to read (or hard on the reader),” wrote McLuhan … he loved Chesterton’s rhetorical flourishes, imbibed his playfulness, turned his impulse to try out new combinations of ideas into the hallmark of the McLuhan method … He became a daily Mass-goer …

There is absolutely no inevitability … what will be the psychic fallout of these technologies on our inner lives? … Like Marx and Freud, he was an intellectual agitator, a conceptual mind expander, the yeast in the dough …  James Joyce and Ezra Pound especially … The web. The web, with its feeds and flows and rivers and streams … That kaleidoscopic, almost psychedelic style … In that Playboy interview … a celebrity-seeking charlatan …

lost all hope “that the world might become a better place with new technology” …  people who classify McLuhan as a techno-utopian aren’t simply making stuff up … Resenting a new technology will not halt its progress … Many people seem to think that if you talk about something recent, you’re in favor of it … And so eschatological hope appears as nothing more than an early manifestation of cyber-utopianism … Look at what these media are doing to our souls … “Once we have surrendered our senses and nervous systems to the private manipulation of those who would try to benefit by taking a lease on our eyes and ears and nerves, we don’t really have any rights left” …

Your question reflects the usual panic of people confronted with unexplored technologies. I’m not saying such panic isn’t justified … merely that such reactions are useless and distracting … “Man the food-gatherer reappears incongruously as information-gatherer” … But an understanding of media’s effects constitutes a civil defense against media fallout … someone who didn’t just have strong ideas but who invented a whole new way of talking … all a teacher can ever do is get people to think …

outlived his fame … he died in a state of wordlessness …

That’s what McLuhan did.

_______________________________________________________________________

In case it is not apparent, only a very few of these words are mine.  Sources:

Webs and whirligigs:  Marshall McLuhan in his time and ours by Megan Garber
Why McLuhan’s chilling vision still matters today by Douglas Coupland
McLuhan at 100 and McLuhan on the Cloud by Nicholas Carr
Why Bother With Marshall McLuhan by Alan Jacobs
Divine Inspiration by Jeet Heer
Marshall McLuhan:  Escape into Understanding by W. Terrence Gordon
McLuhan, Chesterton, and the Pursuit of Joy
McLuhan as Teacher by Walter Ong

Anonymity is Authenticity?

Chris Poole is among the 21 New Media Innovators recently profiled by New York Magazine.  They bestowed upon him the title of “Meme Generator” and provided this short bio:

Chris Poole (handle: “moot”) founded the anonymous message board 4Chan when he was just 15. It’s grown into the breeding ground for some of the web’s most pervasive memes, as well as some of its more ominous movements. In the last two years, Poole has raised more than $3 million in venture funding for a new image-centric site called Canvas, which is similar to but separate from 4chan, and he’s become an advocate for web privacy. At this year’s South by Southwest conference, for instance, he had this to say: “Zuckerberg’s totally wrong on anonymity being total cowardice. Anonymity is authenticity.”

It was that last line that caught my attention.  I’ve lately been wrestling with the relative virtues and vices of anonymity and personalization.  A while ago I argued that Zuckerberg was indeed wrong about identity, and self-servingly so.  I wasn’t interested in defending anonymity in that case, but rather the more natural, gradual, and contingent self-disclosure that characterizes ordinary human relationships.

In the past several days I’ve returned to the theme, arguing first that Google+, through its Circles, attempts to address Facebook’s “all friends are equal” model.  Then I suggested that the trend toward personalization and away from the anonymity of the early web better (if imperfectly) fit our social impulse and operated to some degree as what sociologists call a mediating structure.  I followed that up with a post commenting on Morozov’s lament for the loss of the early Internet’s communal character which, I suggested, sat uncomfortably with Morozov’s privileging of privacy over personalization.  Human communities don’t ordinarily function on those terms.  Oddly, I argued in the comment thread of that same post for the desirability of anonymity in some instances.

So now I come across Poole’s claim — “anonymity is authenticity” — and feel primed to comment.  My initial response is this:  If anonymity is authenticity, then it is a Pyrrhic authenticity.

There is a certain plausibility to Poole’s claim: it suggests that we are most ourselves when we know that we will not be made to answer for what we are doing or saying; that the public self is a restrained and inhibited, and thus not authentic, version of the true self.  It tracks with the point of the Ring of Gyges story in Plato’s Republic.  The ring made its owner invisible and, so the argument went, revealed the true character of the owner (or, better, the superficiality of virtue).

But if it is a plausible account of the human condition, it is also an incomplete one.  It is a Pyrrhic authenticity because it eliminates the possibility of appearing authentically before others who acknowledge our presence.  It thus suggests that we are most ourselves at the point at which it does not really matter what we are.  Now one may adopt a Simon & Garfunkel, “I am a rock, I am an island” attitude at this juncture and insist that one does not need the acknowledgement and recognition, to say nothing of the love and care, that come from human relationships; if so, then this post is probably not the place to argue otherwise.  I’m going to count on the fact that most of us will rather resonate with Hannah Arendt when she writes, “To live an entirely private life means above all to be deprived of things essential to a truly human life: to be deprived of the reality that comes from being seen and heard by others.”

What good is it to finally be myself, if I am myself alone? Granted, there may be some extraordinary circumstances in which remaining authentically oneself requires a great solitude. Ordinarily, however, we answer both to the need to cultivate the inner self with a measure of independence, and to the need to find fulfillment in meaningful relationships with and among others.

Even I feel that over the course of these posts I have been trying to hit a moving target.  This reflects the complexity of lived human experience.  It is a complexity that is hard to account for online when platforms and interfaces seek to reduce that complexity to either the indiscriminately social or the misanthropically private. We are complex creatures and the great danger is that we will end up reducing our complexity to fit the constraints of life mediated through one platform or another.

The Fog of Life: “Google,” Memory, and Thought

Last week a study suggesting that the ability to Google information is making it less likely that we commit information to memory garnered a decent amount of attention and discussion, including a few of my own thoughts in my last post.  In addition to writing a post on the topic, I did something I almost never do (for the sake of my sanity): I followed the comment threads on a few websites that had posted articles on the story.  That was an instructive experience and has led to a few observations, comments, and questions, which I’ll list briefly.

  • Google functions as a synecdoche for the Internet in a way that no other company does. So when questions like “Is Google Making Us Stupid?” or “Is Google Ruining Our Memory?” are posed, what is really meant is more like “Is the Internet Making Us Stupid?”, etc.
  • Of course, “Google” is not an autonomous agent, but it has generated and made plausible a certain rhetoric that rather imprudently dismisses the need to remember.
  • People still get agitated by claims that the Internet is either bad or good for you.  Stories are framed in this way in the media, and discussion assumes this binary form.  Not much by way of nuance.
  • On the specific question of memory in relation to this study and the subsequent discussion, it is never quite clear what sort of memory is in view, although it appears that memory for facts or some variation of that is what most people are assuming in their comments.
  • The computer model of the brain is alive and well in people’s imagination.  How else could we explain the recurring claim that by offloading our memory to “Google” we are “freeing up space” in our memory so that our “processing” runs more efficiently?
  • Does anyone really believe that we, members of present society, are generally in danger of reaching the limits of our capacity for memory?
  • There is a concern that tying up memory on the retention of “trivial” facts will hamper our ability to perform higher order tasks such as critical and creative thinking.
  • “Trivial” is relative.  Phone numbers are often given as an example, but while knowing some obscure detail about the human cardio-vascular system might be “trivial” to me, it wouldn’t be so to a cardiovascular surgeon in the midst of an operation.
  • Why are we opposing two forms of knowledge or “intelligence” anyway?  Aren’t most of the people who are able to think critically and creatively about a topic or discipline the same people who have attained a mastery of the details of that same topic or discipline? Isn’t remembering the foundation of knowing, or are not the two at least intimately related?
  • Realizing that total recall of all pertinent facts in most cases is too high a bar, wouldn’t it at least be helpful not to rhetorically oppose facts to thinking?
  • The denigration of memory for facts seems — be warned, this is impressionistic — aligned with a slide toward an overarching cloud of vagueness settling over our experience.  Not simply the vagueness, by comparison with print-disciplined speech, that accompanies a return to orality, but a vagueness, distractedness, or inattentiveness about immediate experience in general.
  • Will we know nothing in particular because we know where to find everything in general?

On that last note, consider Elizabeth Spires’ poem, “A Memory of the Future,” published in The Atlantic, and make of it what you will:

I will say tree, not pine tree.
I will say flower, not forsythia.
I will see birds, many birds,
flying in four directions.

Then rock and cloud will be
lost. Spring will be lost.
And, most terribly,
your name will be lost.

I will revel in a world
no longer particular.
A world made vague,
as if by fog. But not fog.

Vaguely aware,
I will wander at will.
I will wade deeper
into wide water.

You’ll see me, there,
out by the horizon,
an old gray thing,
who finally knows

gray is the most beautiful color.