Don’t Offload Your Memory Quite Yet: Cognitive Science, Memory, and Education

Google may or may not be making us stupid, but it does appear to render human memory obsolete. By now most of us have probably heard someone suggest that with Google functioning as our reliable and ubiquitous prosthetic memory, it is no longer necessary to waste time or mental effort memorizing facts. This is usually taken to be a positive development, since our brains are assumed to be freed from the menial work of remembering and thus available for the more serious work of creative and critical thinking.

This sounds plausible enough and gains a certain credence from our own experience. Some time ago we began noticing that we no longer know anyone’s phone number. We’re doing well if we can remember our own. Ever since cell phones started storing numbers, we stopped remembering them. And for the most part we’re no worse for it, except, of course, for those instances when we need someone’s number but can’t access our phones for whatever reason. But those situations tend to be few and far between, and on the whole we have little reason to begin memorizing all the numbers in our directories.

We should, however, think twice about extending that line of reasoning much beyond phone numbers. In his 2009 book, Why Don’t Students Like School?, Daniel Willingham, a cognitive scientist at the University of Virginia, challenges this popular assumption and argues instead for the importance of storing factual knowledge in long-term memory.

Willingham notes that disparaging the need to memorize facts has a long and distinguished history in American education, one that predates the emergence of the Internet and of Google as a verb. He cites, for example, J. D. Everett, who warned in 1873:

There is a great danger in the present day lest science-teaching should degenerate into the accumulation of disconnected facts and unexplained formulae, which burden the memory without cultivating the understanding.

The modern-day version of this concern leads to the notion that “instead of learning facts, it’s better to practice critical thinking, to have students work at evaluating all the information available on the Internet rather than trying to commit some small part of it to memory.” Contrary to this fashionable assumption, Willingham argues that the “very processes that teachers care about most — critical thinking processes such as reasoning and problem solving — are intimately intertwined with factual knowledge that is stored in long-term memory (not just found in the environment).”

The problem is that we tend to conceive of thinking on the model of a computer, abstracting processes from data. We treat “critical thinking” as a process that can be taught independently of any specific data or information. On the contrary, according to Willingham, the findings of cognitive science suggest that “[c]ritical thinking processes are tied to background knowledge” and that “we must ensure that students acquire background knowledge parallel with practicing critical thinking skills.”

Willingham offers three main points in support of his claim, listed here with a few explanatory excerpts:

1. Background knowledge stored in long-term memory is essential to reading comprehension

a. it provides vocabulary

b. it allows you to bridge logical gaps that writers leave

c. it allows chunking, which increases room in working memory and thereby makes it easier to tie ideas together

d. it guides the interpretation of ambiguous sentences

2. Background knowledge is necessary for cognitive skills

a. “Memory is the cognitive process of first resort. When faced with a problem, you will first search for a solution in memory …”

b. Chunking, facilitated by available knowledge in long-term memory, enhances reasoning as well as reading comprehension.

c. “Much of what experts tell us they do in the course of thinking about their field requires background knowledge, even if it’s not described that way.”

3. Factual knowledge improves your memory

a. “when you have background knowledge your mind connects the material you’re reading with what you already know about the topic, even if you’re not aware that it’s happening”

b. “having factual knowledge in long-term memory makes it easier to acquire still more factual knowledge”

Among the implications for the classroom that Willingham explores, the following are worth mentioning:

1. “Shallow knowledge is better than no knowledge”: You can’t have deep knowledge about everything, but shallow knowledge of some areas, while not ideal, is better than nothing. It’s at least something to build on and work with.

2. “Do whatever you can to get kids to read”: In Willingham’s view, nothing builds a wide knowledge base better than reading, and lots of it.

3. “Knowledge must be meaningful”: Memorizing random lists of disconnected facts is not only harder, but ultimately less helpful.

That last point acknowledges that mere rote memorization and incessant drilling can be fruitless and counterproductive. Of course, that much should be obvious. What is no longer obvious, and what Willingham compellingly demonstrates, is that storing up factual knowledge in long-term memory is not the enemy of thought, but rather its necessary precondition.

It would appear that the rhetoric of offloaded memory ignores the findings of cognitive science and leads to ineffective educational practice.

__________________________________________________________

Related posts: Offloaded Memory and Its Discontents and The Fog of Life.

The Internet, the Body, and Unconscious Dimensions of Thought

Thinking What We Are Doing

Part One of Three (projected).

Writing near the midpoint of the last century, Hannah Arendt worried that we were losing the ability “to think and speak about the things which nevertheless we are able to do.” The advances of science were such that representing what we knew about the world could be done only in the language of mathematics, and efforts to represent this knowledge in a publicly meaningful and accessible manner would become increasingly difficult, if not altogether impossible.  Under such circumstances speech and thought would part company and political life, premised as it is on the possibility of meaningful speech, would be undone.  Consequently, “it would be as though our brain, which constitutes the physical, material condition of our thoughts, were unable to follow what we do, so that from now on we would indeed need artificial machines to do our thinking and speaking.”

Arendt was nearly prescient. She clearly believed this to be a dystopian scenario that would result in the enslavement of humanity, not so much to our machines as to one narrow constituent element of our humanity – our “know-how,” that is, our ability to make tools. What Arendt did not imagine was the possibility that digitally, and thus artificially, augmented human thought might avert the very enslavement she foresaw.

On the eve of the 21st century, similar concerns were articulated by Paul Virilio, who believed that our technologies, particularly the Internet, created a situation in which a total and integral accident was possible – an accident unlike anything we have heretofore experienced and one that we could not, as of yet, imagine. Virilio termed this possibility the general accident. Like Arendt, Virilio believed that the emerging shape of our technological society threatened the possibility of politics; and if politics failed, Virilio claimed, the general accident would be inevitable. Again like Arendt, Virilio seems unable to imagine that the way forward may lie through, not against, technology, particularly the Internet.

If the concerns expressed by both Arendt and Virilio continue to resonate, it is because the structure of the challenge they articulated remains intact.  The pace of technological development outstrips our ability to think through its attendant social and ethical implications; moreover, the political sphere appears so captivated by the ensuing spectacle that it is ensnared by the very problems we call upon it to solve.  We are confronted, then, with a technologically induced failure of thought and politics, along the lines anticipated by Arendt and Virilio.

Gregory Ulmer is likewise concerned about the challenges presented to our thinking and our politics by technology, specifically the Internet; but Ulmer is more sanguine about the possibility of inventing new forms of thought adequate to our circumstances.  Electracy, according to Ulmer, will be to the digital age what literacy has been to the age of print: an apparatus of thought and practice directed toward the perennial question:  “why do things go wrong?”

Ulmer further elaborates the function of electracy in reference to subjectivity:

If the literate apparatus produced subjectivation in the mode of individual selves organized collectively in democratic nation-states, electracy seems to allow the possibility of a group subjectivation with a self-conscious interface between individual and collective . . .

Ulmer begins Electronic Monuments with a discussion of Paul Virilio’s general accident because, in Ulmer’s view, Virilio has “most forcefully” articulated concerns about “the Internet as the potential source of a general accident.” Unlike Virilio, however, Ulmer believes the best response to the threat of the general accident lies not in opposition to the Internet, but in the possibilities created by the Internet.

In The Human Condition, Arendt set for society a very straightforward goal:  “What I propose, therefore, is very simple:  it is nothing more than to think what we are doing.” While Arendt goes on to help the reader understand “what we are doing,” the matter of thinking what we are doing remains an elusive task.

Ulmer attributes our inability to think what we are doing to the blindness that plagues us, both individually and collectively, and he draws on a combination of Greek tragedy and psychoanalysis to frame and theorize this blindness.  Reflecting on Greek tragedy, an “oral-literate hybrid” bridging oral and literate forms of problem recognition, Ulmer explains, “The aspect of tragedy of most interest in our context is (in Greek) ATH (até in lowercase), which means ‘blindness’ or ‘foolishness’ in an individual, and ‘calamity’ or ‘disaster’ in a collectivity.”

The sources of ATH, according to Ulmer, are “those circumstances already in place and into which we are thrown at birth, providing the default moods enforcing in us the institutional construction of identity.” Marshall McLuhan captures a similar point in characteristically pithy fashion when he observes that “Environments are invisible. Their groundrules, pervasive structure and overall patterns elude easy perception.”

In the concluding chapter of Electronic Monuments, Ulmer further clarifies the concept of ATH with reference to Jacques Lacan’s exposition of Antigone:  “Lacan is interested in ATH as showing that exterior that is at the heart of me, the intersubjective nature of human identity.” Ulmer also refers to the intersubjective nature of human identity in describing the Internet as a “prosthesis of the unconscious (intersubjective) mind.” On more than one occasion, Ulmer identifies this metaphor – the Internet as prosthesis of the unconscious – as one of the key assumptions informing his development of the apparatus of electracy.

Taking Ulmer’s discussions of ATH, intersubjectivity, and the unconscious together, the following picture emerges:  For Ulmer the unconscious is not necessarily a realm of repressed trauma or libidinal desire, but rather is shorthand for the countless, unarticulated ways in which subjectivity is constructed by the social world it inhabits.  From one angle, Ulmer has given Freud, not a semiotic spin as Lacan had done, but a sociological spin.  The unconscious names the group subject – the exteriority at the heart of me.

The Internet is a prosthesis of this unconscious in the sense that it is a virtually limitless digital repository of all the features of the social world that have imprinted themselves on the subject. On YouTube, to take one example, a viewer can locate the still vaguely remembered toy commercial from their childhood, and then follow links to a multitude of other, more thoroughly forgotten commercials, theme songs, and cartoons that, once seen, are remembered, and whose significance can be startling. Like T. S. Eliot’s “unknown, unremembered gate” in “Little Gidding,” the Internet operating as a prosthesis of the unconscious allows the user to “arrive where we started/And know the place for the first time.”

This collective element of group subjectivity, until it is made accessible through the practices of electracy Ulmer develops, functions as a blind spot (ATH).  It is a source of judgment and action that remains hidden from conscious thought analogously to the traditional psychoanalytic unconscious.  This blindness, therefore, presents a powerful obstacle to Arendt’s plea, that we think what we are doing.  Ulmer’s project, then, may be understood as an attempt to employ the Internet in an effort to make conscious thought aware of the way in which it has been constructed by the social.

Weekend Reading, 9/9/2011

Social media and online identity, mind control, self-control, and figuring out what an education is for. All of that is on tap this week. Enjoy. Feel free to comment on the links or just to let me know what sort of pieces you’d most like to see in these weekly round-ups. Have a great weekend.

“Rethinking Privacy and Publicity on Social Media,” Part 1 and Part 2, by Nathan Jurgenson at Cyborgology: Engaging posts on the creative dance between what is revealed and simultaneously concealed on social networks. Jurgenson’s dissertation research on self-documentation and social media yields compelling insights and analysis; you can keep up with his work at Cyborgology.

“Brainwave Controllers” from The Economist’s Technology Quarterly: “The idea of moving objects with the power of the mind has fascinated mankind for millennia.” Overview of non-invasive brain-computer interface technology and its various uses, current and potential.

“Focusing on Focus” and “The Will Power Circuit” by Jonah Lehrer at The Frontal Cortex: A mini-theme within this week’s selection: the science of willpower. Standard Lehrer pieces: they describe a series of interesting neurological experiments and what those experiments tell us about focus and self-control.

“The Sugary Secret of Self-Control” by Steven Pinker in the NY Times: The theme continues in Pinker’s review of Roy Baumeister and John Tierney’s Willpower: Rediscovering the Greatest Human Strength, a look at the science of self-control and how it can be trained.

“Who Are You and What Are You Doing Here” by Mark Edmundson in the Oxford American: Longish essay aimed at incoming college freshmen. Edmundson addresses some of the problems with higher ed, but unlike many who do so, he does not resort to carelessly disparaging either faculty or students. Also, it seemed to me, he manages to encourage without being didactic or preachy. Draws on Freud and Emerson.

Technology in the Classroom: How To Do It Right

Matt Richtel raises a series of crucial questions regarding technology in the classroom in his NY Times piece this weekend, “In Classroom of Future, Stagnant Scores.” More precisely, Richtel raises questions about the seemingly uncritical push to deploy technology in the classroom regardless of its costs and its ambivalent results. In fact, he hits almost every topic of concern I would think to mention if I were writing a similar article, including the influence of the one group that stands to prosper unambiguously from the implementation of technology in the classroom: those who make and sell the technology.

If you’re interested in technology and education, I encourage you to read the article. Here are a few excerpts. You’ll notice, I think, that even the endorsements come off as rather suspect:

“This is such a dynamic class,” Ms. Furman says of her 21st-century classroom. “I really hope it works.” Hope and enthusiasm are soaring here. But not test scores.

“The data is pretty weak. It’s very difficult when we’re pressed to come up with convincing data,” said Tom Vander Ark, the former executive director for education at the Bill and Melinda Gates Foundation and an investor in educational technology companies. When it comes to showing results, he said, “We better put up or shut up.”

Critics counter that, absent clear proof, schools are being motivated by a blind faith in technology and an overemphasis on digital skills — like using PowerPoint and multimedia tools — at the expense of math, reading and writing fundamentals. They say the technology advocates have it backward when they press to upgrade first and ask questions later.

“My gut is telling me we’ve had growth,” said David K. Schauer, the superintendent here. “But we have to have some measure that is valid, and we don’t have that.” It gives him pause. “We’ve jumped on bandwagons for different eras without knowing fully what we’re doing. This might just be the new bandwagon,” he said. “I hope not.”

“Rather than being a cure-all or silver bullet, one-to-one laptop programs may simply amplify what’s already occurring — for better or worse,” wrote Bryan Goodwin … Good teachers, he said, can make good use of computers, while bad teachers won’t, and they and their students could wind up becoming distracted by the technology.

“In places where we’ve had a large implementing of technology and scores are flat, I see that as great,” she said. “Test scores are the same, but look at all the other things students are doing: learning to use the Internet to research, learning to organize their work, learning to use professional writing tools, learning to collaborate with others.”

“Even if he doesn’t get it right, it’s getting him to think quicker,” says the teacher, Ms. Asta. She leans down next to him: “Six plus one is seven. Click here.”

Clearly, the push for technology is to the benefit of one group: technology companies.

I’m not suggesting technology in the classroom is useless, although it can sometimes be worse than useless. What we ought to take issue with is the blind embrace of technology for technology’s sake. Technologies such as laptops and Smartboards are deployed as if their mere presence in the classroom augmented the educational value of what goes on inside. This is not education; it is superstition: the tool becomes a talisman. Or, worse yet, under the assumption that the medium is essentially neutral, old tasks are assigned on new technologies to little, or possibly counterproductive, effect. This is naive at best.

What students need to learn with regard to technology is not how to use countless (often inane) tools like PowerPoint. Rather, technology should be introduced into education in order to teach students how to engage with it critically and intelligently, and, ultimately, to turn it toward human ends.

Teach the history of technology, teach the sociology of new media, teach media theory, teach philosophy and ethics of technology, teach students how to program and code.

Help students become meta-critically savvy about their tools.

Teach students to become critics of texts and applications — Twelfth Night and Twitter.

Use technology to transcend the curricular divide between the sciences and the humanities.

Teach them not only the possibilities opened up by new technologies, but also their limitations.

Do not parade new technologies before students like so many idols sent to deliver us from our darkness, idols that must be unfailingly appeased and mollified.

Don’t merely teach students to do what a new tool enables them to do; help them to question whether we ought to do what the tool enables, or whether we ought to do with the tool something other than what its makers intended.

The problem with all of this, of course, is that we must first teach the teachers and administrators.

Weekend Reading, 9/2/2011

As you may have noticed, posting has been light this week, and by light I mean non-existent. The fall semester has commenced and I’m already swamped. I’ll try to keep up the posting, but in the meantime here are some items to keep you busy. Three weeks in a row!

Cornell University’s Chatbots on YouTube: This is just interesting. A Cornell University researcher has two chatbots talk to each other, and they have an intriguing conversation. I’ll let you decide what to make of it. (Update: I forgot to include a link to Kevin Kelly’s exchange with the programmers and his observations on his blog.)

Collaborative Learning for the Digital Age by Cathy N. Davidson in The Chronicle: In defense of online, technologically mediated education. Some good points, but I’m not quite convinced by the tenor of the whole. Would love to hear your thoughts.

When Cursive Cried Wolf by Elissa Lerner at The Book Bench: On the reemergence of handwriting as a creative niche and its benefits.

The Haimish Line by David Brooks in The NY Times: Wisdom regarding the simple, happy life with a Yiddish twist.

A Walk to Remember to Remember by Jesse Miller at Full Stop: This is a lovely reflection on the virtues of walking in a digital age. If you’re only going to read one of these, make it this one.