What I Might Have Written About If I Had More Time

Some blogs have a regular post each week (sometimes cleverly titled, unlike mine) in which they list a bunch of links that they think readers might enjoy visiting.  Without being regular about it, I have in the past put up similar posts, and now may be a good time to revisit the format.  So here are some items that, had I more time, might have generated something more than a passing reference, but, as it stands …

  • At the New York Times, Maureen Dowd considers the Catholic Church’s efforts to minister through the Internet in “Forgive Me, Father, For I Have Linked”.  Here’s her clever (depending on your mood) rendition of the Lord’s Prayer:

Our Father, who art in pixels,
linked be Thy name,
Thy Web site come, Thy Net be done,
on Explorer as it is on Firefox.
Give us this day our daily app,
and forgive us our spam,
as we forgive those
who spam against us,
and lead us not into aggregation,
but deliver us from e-vil. Amen.

  • In The New Yorker, Adam Gopnik has a long review essay of 2010 books that addressed the impact of the Internet on our thinking, including, of course, Nick Carr’s The Shallows and Clay Shirky’s Cognitive Surplus.

Two pieces that will either excite you or depress you depending on your disposition:

And, finally, on publishing revolutions old and new:

Memory, Knowledge, Identity, Technology

Memory, knowledge, identity, technology — these are intimately bound together and it would be difficult to disentangle one from the others.  What is it to know something if not to remember it?  Beyond the biological facts of my existence, what constitutes my identity more significantly than my memory?  What could I remember without technologies including writing, books, pictures, videos, and more?  Or to put it in a more practical way, what degree of panic might ensue if your Facebook profile were suddenly and irrevocably deleted?  Or if your smart phone were to pass into the hands of another?  Or if you lost your flash drive?  Pushing the clock back just a little, we might have similarly asked about the loss of a diary or photo albums.

The connection among these four, particularly memory and technology, is established as early as the Platonic dialogs, most famously the Phaedrus, in which Socrates criticizes writing for its harmful effects on internal memory and knowledge.  What we store in written texts (or hard drives, or “the cloud”) we do not remember ourselves and thus do not truly know.  The form of this debate recurs throughout the subsequent history of technology all the way to the present debates over the relative merits of computers and the Internet for learning and education.  And in these debates it is almost de rigueur to begin by citing Plato’s Phaedrus, either to reinstate or to dismiss the Socratic critique.  Neil Postman began his book, Technopoly: The Surrender of Culture to Technology, with reference to Phaedrus, and Phaedrus appears as well in Nicholas Carr’s now (in)famous Atlantic essay, “Is Google Making Us Stupid?”.

The rejoinder comes quickly, though:  Surely Socrates failed to appreciate the gains in knowledge that writing would make possible.  And if I offload information to external memory, this simply frees my mind for more significant tasks.  There is, of course, an implicit denigration of mere memory in this rebuttal to Socrates.

Yet some tension, some uneasiness remains.  Otherwise the critique would not keep resurfacing, and it would not elicit such strong pushback when it does.  In other words, the critique seems to strike a nerve, a sensitive one at that, and when we again consider the intimate interrelationship of memory with our ideas about knowledge and education, and with the formation and maintenance of our identities, this is not surprising at all.  A few posts down I cited Illich’s claim that

What anthropologists distinguish as ‘cultures’ the historian of mental spaces might distinguish as different ‘memories.’  The way to recall, to remember, has a history which is, to some degree, distinct from the history of the substance that is remembered.

I’m wondering now whether it might also be true that a history of personal identity or of individuality could be told through a history of memory and its external supports.  Might we be able to argue that individualism is a function of technologies of memory that allow a person to create and sustain his own history apart from that of the larger society?

In any case, memory has captured my attention and fascinating questions are following hard upon it.  What is memory anyway?  What is it to remember a name, a look, a person, a fact, a feeling, where something is, how to do something, or simply to do something?  What do we remember when we remember?  How do we remember?  Why do we remember?  And, of course, how have the answers to all of these questions evolved along with the development of technology from the written word to the external hard drive?

On that last note, I wonder if our choice to call a computer’s capacity to store data “memory” has not in turn affected how we think of our own memory.  I’m especially thinking of a flash drive that we hold in hand and equate with stored memory.  In this device I keep my pictures, my documents, my videos, my memories — memory, or a certain conception of it, is objectified, reified.  Is memory merely mental storage?  Or has this metaphor atrophied our understanding of memory?

Of course, metaphors for memory are nothing new.  I’m beginning to explore some of these ideas with Paul Ricoeur’s Memory, History, Forgetting, and Ricoeur reminds us that in another Platonic dialog, the Theaetetus, Socrates offers the block of wax in our souls as a metaphor for our memory.  And Socrates suggests, “We may look at it, then, as a gift of Mnemosyne [Memory], the mother of the Muses.” I’ll keep you posted as the Muses urge.

 

Social Media: Good for Groups, Bad for Individuals?

Remember those IBM “You Make the Call” spots during NFL games that used to show you a controversial play and then ask you to make the call before revealing what was in fact the right call?

Well, here’s a variation:

A recent Pew survey has been widely taken to suggest that Internet use doesn’t kill healthy social life after all.

A recent Stanford study suggests to some that social media, Facebook in particular, is making us sad.

You make the call!

Admittedly, this is not exactly an either/or situation; it may even be both/and.  I just felt like alluding to the vintage commercial.  (Follow that last link and you’ll also see some vintage Tandy and IBM computers.)

Here’s a little more from each.  Regarding the Pew survey:

Pew found that 80 percent of Internet users participate in groups, as compared with 56 percent of non-Internet users.

Twitter users were the most social. 85 percent of them were involved in group activity offline, followed by 82 percent of social networking users. The results from the survey identify the use of social media and online activities as helpful in the process of disseminating information and engaging group members.

“The virtual world is no longer a separate world from our daily life and lots of Americans are involved in more kinds of groups,” said Rainie.

From the Slate story about the Stanford study:

Facebook is “like being in a play. You make a character,” one teenager tells MIT professor Sherry Turkle in her new book on technology, Alone Together. Turkle writes about the exhaustion felt by teenagers as they constantly tweak their Facebook profiles for maximum cool. She calls this “presentation anxiety,” and suggests that the site’s element of constant performance makes people feel alienated from themselves. (The book’s broader theory is that technology, despite its promises of social connectivity, actually makes us lonelier by preventing true intimacy.)

With that excerpt I’m killing two birds with one stone by pointing you to Sherry Turkle’s most recent work, which has drawn considerable attention over the last month or so. See Jonah Lehrer’s review here and, in a lighter vein, watch Turkle on The Colbert Report here.

Also of interest in the Slate article is the differentiation between male and female use of Facebook:

Facebook oneupsmanship may have particular implications for women. As Meghan O’Rourke has noted here in Slate, women’s happiness has been at an all-time low in recent years. O’Rourke and two University of Pennsylvania economists who have studied the male-female happiness gap argue that women’s collective discontent may be due to too much choice and second-guessing–unforeseen fallout, they speculate, of the way our roles have evolved over the last half-century. As the economists put it, “The increased opportunity to succeed in many dimensions may have led to an increased likelihood in believing that one’s life is not measuring up.”

If you’re already inclined to compare your own decisions to those of other women and to find yours wanting, believing that others are happier with their choices than they actually are is likely to increase your own sense of inadequacy. And women may be particularly susceptible to the Facebook illusion. For one thing, the site is inhabited by more women than men, and women users tend to be more active on the site, as Forbes has reported. According to a recent study out of the University of Texas at Austin, while men are more likely to use the site to share items related to the news or current events, women tend to use it to engage in personal communication (posting photos, sharing content “related to friends and family”). This may make it especially hard for women to avoid comparisons that make them miserable. (Last fall, for example, the Washington Post ran a piece about the difficulties of infertile women in shielding themselves from the Facebook crowings of pregnant friends.)

Regarding the Pew survey, I’m wondering if it says as much as its proponents take it to say.  I’m not sure it necessarily says much about the quality of the social interaction involved, but more significantly, dividing the population between Internet users and non-Internet users seems less than helpful and may give us nothing deeper than mere correlation.

Regarding the Slate story, note the strong pushback in the comment section from the woman who benefited from Facebook during a time of deep depression.  Generalizations will never be without exceptions, of course, and it may be more helpful to think of social media as exacerbating rather than causing certain dispositions or emotional states.

Obama Talks With A Computer

[Correction:  Mr. Scocca informs me via email that the dialog in his piece was an actual transcript of a session with Eliza.  So nothing “mock” or “contrived” about it.  All the more interesting, read on.]

Over at Slate, Tom Scocca has staged a mock dialogue with Eliza to good, even if somewhat contrived, effect.  Eliza, in case you’re wondering, is a computer program designed in the 1960s to simulate human conversation based on certain scripts (such as DOCTOR, which simulates a particular style of psychotherapy).
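For the curious, the script mechanism behind Eliza is simple enough to sketch in a few lines.  What follows is a toy illustration, not Weizenbaum’s actual DOCTOR script; the particular rules, patterns, and names are my own invention, chosen to mimic the transcript quoted below.  The program matches the input against a list of patterns, reflects first-person words into second person, and fills a canned response template:

```python
import re

# Toy ELIZA-style rules (illustrative only, not the historical DOCTOR script).
# Each rule pairs a regex with a response template; matched fragments are
# "reflected" (I -> you, my -> your) before being echoed back.
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

RULES = [
    (re.compile(r"i am having a hard time (.*?)\.?$", re.I),
     "How long have you been having a hard time {0}?"),
    (re.compile(r"it is because (.*?)\.?$", re.I),
     "Do any other reasons not come to mind?"),
    (re.compile(r".*"),  # fallback rule keeps the session moving
     "Please go on."),
]

def reflect(fragment: str) -> str:
    """Swap first- and second-person words so the echo reads naturally."""
    return " ".join(REFLECTIONS.get(w, w) for w in fragment.lower().split())

def respond(statement: str) -> str:
    """Return the response for the first rule whose pattern matches."""
    for pattern, template in RULES:
        match = pattern.match(statement)
        if match:
            return template.format(*(reflect(g) for g in match.groups()))
    return "Please go on."
```

With these rules, `respond("I am having a hard time telling the bots from the humans")` reproduces the therapist’s move in the transcript, while anything unmatched falls through to the noncommittal “Please go on.”  The trick, such as it is, is that no understanding is involved at any point, which is precisely what gives Scocca’s exchange its bite.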

About halfway through the dialogue we get these few lines, which more or less encapsulate Scocca’s main concern:

TS: On Twitter, I am having a hard time telling the bots from the humans.

Eliza: How long have you been having a hard time telling the bots from the humans?

TS: Weeks and weeks now.

Eliza: Please go on.

TS: It is not because the bots are fluent or facile.

Eliza: Do any other reasons not come to mind?

TS: It is because so much of the human conversation has come down to bot level.

And a few lines later he adds,

TS: That’s not really someone who likes my blog post. It’s a pharma ad-bot that presumably grabs anything with “Obama” in it.

Eliza: Why do you say your blog post its a pharma adbot that presumably grabs anything with obama in it?

TS: Well, of course, by now I have been trained to put “Obama” into the headlines myself, for SEO purposes.

Eliza: Can you elaborate on that?

TS: My actual transactions with human readers are so machine-mediated, it’s necessary for me to keep machine-logic in mind while I’m writing.

I’m taking these observations as rather useful illustrations of how the language (or logic) of a digital media platform shapes our communication to fit within its own limitations.  Borrowing linguist Roman Jakobson’s maxim regarding languages, I suggested a few posts down that, “Languages of digital media platforms differ essentially in what they cannot (or, encourage us not to) convey and not in what they may convey.”  In other words, we shape our communication to fit the constraints of the medium.  The follow-up question then becomes, “do we adapt to these limitations and carry them over into other fields of discourse?”  Scocca provocatively suggests that if a computer ends up passing the Turing Test, it will not be because of an advance in computer language capability, but because of a retrogression in the way humans use language.

Keep in mind that you don’t have to be a professional writer working for a popular web magazine to experience machine-mediated communication.  In fact, my guess is that a great deal, perhaps the majority, of our interaction with other people is routinely machine-mediated, and in this sense we are already living in a post-human age.

The dialog also suggests yet another adaptation of Jakobson’s principle, this time focused on the economic conditions at play within a digital media platform.  Tracking more closely with Jakobson’s original formulation, this adaptation might go something like this:  the languages of digital media platforms differ essentially in what their economic environment dictates they must convey.  In Scocca’s case, he has been trained to mention Obama for the purposes of search engine optimization, and this, of course, to drive traffic to his blog, because traffic generates advertising revenue.  Not only do the constraints of the platform shape the content of communication, the logic of the wider economic system disciplines the writing as well.

None of this is, strictly speaking, necessary.  It is quite possible to communicate creatively, even aesthetically, within the constraints of a given digital media platform.  Any medium imposes certain constraints; what we do within those constraints remains the question.  Some media, it is true, impose more stringent constraints on human communication than others; the telegraph, for example, comes to mind.  But the wonder of human creativity is that it finds ways of flourishing within constraints; within limitations we manage to be ingenious, creative, humorous, artistic, etc.  Artistry, humor, creativity and all the rest wouldn’t even be possible without certain constraints to work with and against.

Yet aspiring to robust, playful, aesthetic, and meaningful communication is the path of greater resistance.  It is easier to fall into thoughtless and artless patterns of communication that uncritically bow to the constraints of a medium thus reducing and inhibiting the possibilities of human expression.  Without any studies or statistics to prove the point, it seems that the path of least resistance is our default for digital communication.  A little intentionality and subversiveness, however, may help us flourish as fully human beings in our computer-mediated, post-human times.

Besides, it would be much more interesting if a computer passed the Turing Test without any concessions on our part.

Oh, and sorry for the title, just trying to optimize my search engine results.

Breaking the Spell

In the not too distant past there was a series of Visa Check Card commercials that presented some fantastical and whimsical shopping environment in which transactions were processed efficiently, uninterruptedly, and happily thanks to the quick, simple swipe of the check card.  Inevitably someone would pull out cash or attempt to use a check, and the whole smooth and cheerful operation would grind to a halt, and displeasure would darken the faces of all involved.  For example:

Cynic that I tend to be, I read the whole campaign as a rather transparent allegory of our absorption into inhuman patterns of mindless, mechanized, and commodified existence.  But let’s lay aside that gloominess for the moment; it is nearly Christmas time, after all, and why draw unnecessary attention to the banality of our crass … okay, no, I’m done really.

But one other, less snarky observation: These commercials did a nice job of illustrating the circuit of mind, body, machine, and world that we are all enmeshed in.  This circuit typically runs so smoothly that we hardly notice it at all. In fact, we often tend to lose sight of how deeply integrated into our experience of reality our tools have become and how these tools mediate reality for us.  The emergence of ubiquitous wireless access to the Internet promises (or threatens, depending on your perspective) to extend and amplify this mediation exponentially.  To put it in a slightly different way, our tools become the interface through which we access reality.  Putting it that way also illustrates how our tools even begin to provide the metaphors by which we interpret reality.

Katherine Hayles draws attention to this circuit when, discussing the significance of embodiment, she writes,

When changes in [embodied] practices take place, they are often linked with new technologies that affect how people use their bodies and experience space and time.  Formed by technology at the same time that it creates technology, embodiment mediates between technology and discourse by creating new experiential frameworks that serve as boundary markers for the creation of corresponding discursive systems.

Translation:  New technologies produce new ways of using and experiencing our bodies in the world.  With our bodies we make technology and this technology then shapes how we understand our bodies and this interaction generates new ways of talking and thinking about the world.

But as in the commercial, this often unnoticed circuit through which we experience the world is sometimes disrupted by some error in the code or glitch in the system.  We often experience such disruptions as annoyances of varying degrees.  But because our tools are an often unnoticed link in the circuit encompassing world, body, and mind, disruptions emanating from our tools can also elicit flashes of illumination by disrupting habituated patterns of thought and action.  Hayles again, this time writing about one of the properties of electronic literature:

. . . unpredictable breaks occur that disrupt the smooth functioning of thought, action, and result, making us abruptly aware that our agency is increasingly enmeshed within complex networks extending beyond our ken and operating through codes that are, for the most part, invisible and inaccessible.

Thinking again about Arthur C. Clarke’s Third Law, “any sufficiently advanced technology is indistinguishable from magic,” we might say that disruptions and errors break the spell.  And depending upon your estimation of the enchantment, this may be a very good thing indeed, at least from time to time.