The Language(s) of Digital Media Platforms

What follows is a thought experiment.  Comments/criticisms welcome.

In an influential 2001 book, The Language of New Media, theorist Lev Manovich presented his “attempt at both a record and a theory of the present” with regard to digital media.  He explains that his “aim” is to “describe and understand the logic driving the development of the language of new media.”  But he is quick to add,

I am not claiming that there is a single language of new media.  I use “language” as an umbrella term to refer to a number of various conventions used by designers of new media objects to organize data and structure the user’s experience.

The final product is an engaging and provocative study.  For the moment, however, I want to reflect on the notion of a “language” of digital media — it’s a suggestive metaphor.  Early in the book, Manovich explains his rationale for the term,

I do not want to suggest that we need to return to the structuralist phase of semiotics in understanding new media.  However, given that most studies of new media and cyberculture focus on their sociological, economic, and political dimensions, it was important for me to use the word language to signal the different focus of this work:  the emergent conventions, recurrent design patterns, and key forms of new media.

Manovich states explicitly that he is not claiming that there is a single, monolithic language of new media.  At a recent conference, media anthropologist John Postill made a similar point.  We do not have, he suggested,

a totalising epochal ‘logic’ but rather ever more differentiated Internet ‘technologies, practices, contexts’ ([Miller and Slater] 2000: 3). The evidence provided in the reviewed texts strongly suggests that the Internet – and indeed the world – is becoming ever more plural and that no universal ‘logic of practice’ … is gaining ascendancy at the expense of all other logics.

I take his “logics” to be roughly parallel to Manovich’s “language,” although Postill is focusing on the practices that emerge from digital media, less so on the internal logic of a given platform.  The two, however, are surely interrelated.  So while we do not have a single language of digital media, we may still speak of languages or logics of particular platforms or interfaces.  Now in an associative leap, I want to connect this with the recent conversations surrounding Guy Deutscher’s Through the Language Glass: Why the World Looks Different in Other Languages.  Judging from reviews and interviews (I have not yet read the book), Deutscher has written a fascinating study.  More specifically though, it is his defense of linguist Roman Jakobson’s maxim concerning the difference languages make that I want to think with here.  According to Jakobson, “Languages differ essentially in what they must convey and not in what they may convey.”  In other words, languages do not necessarily constrain a native speaker’s ability to think or comprehend certain concepts, but they do force their speakers to make certain things explicit.  In Deutscher’s words,

Languages differ in what types of information they force the speakers to mention when they describe the world. (For example, some languages require you to be more specific about gender than English does, while English requires you to be more specific about tense than some other languages. Some require you to be more specific about color differences, and so on.) And it turns out that if your language routinely obliges you to express certain information whenever you open your mouth, it forces you to pay attention to certain types of information and to certain aspects of experience that speakers of other languages may not need to be so attentive to. These habits of speech can then create habits of mind that go beyond mere speech, and affect things like memory, attention, association, even practical skills like orientation.

Now what if we press the language of digital media platforms/interfaces metaphor and ask if the Jakobson principle holds?  My initial thought is that something like the inverse of Jakobson’s principle ends up being more useful.  I could be wrong here, as this is just an initial reflection, but what seems most interesting about a particular platform is its specific limitations and how the user is constrained to work (often imaginatively) within those constraints.  Consider as an example Twitter’s 140-character limit or the limited symbols available for text messages.  Facebook allows greater flexibility and more media options for communication, but it is still limited.  Second Life has its own logic or language with its own particular possibilities and limitations.  And so on.

These limits are, of course, inevitable.  Every medium has its limits; nothing new there.  Yet it is worth asking what these limits are because there is always an implicit risk in becoming habituated to communication within a given medium and internalizing these limitations.  Both Manovich and Deutscher allude to this possibility.  In the excerpt above, Deutscher suggests that “these habits of speech can then create habits of mind that go beyond mere speech, and affect things like memory, attention, association, even practical skills like orientation.”  For his part, Manovich, considering the way the “language” of new media objectifies the mind’s operations, concludes,

. . . we are asked to follow pre-programmed, objectively existing associations.  Put differently, in what can be read as an updated version of French philosopher Louis Althusser’s concept of “interpellation,” we are asked to mistake the structure of somebody else’s mind for our own . . . . The cultural technologies of an industrial society — cinema and fashion — asked us to identify with someone else’s bodily image.  Interactive media ask us to identify with someone else’s mental structure.  If the cinema viewer, male and female, lusted after and tried to emulate the body of the movie star, the computer user is asked to follow the mental trajectory of the new media designer.

So to sum up:

Digital media platforms exhibit something like a particular language or logic.

Borrowing and tweaking Jakobson’s maxim: “Languages of digital media platforms differ essentially in what they cannot (or encourage us not to) convey and not in what they may convey.”

For consideration:  What assumptions and limitations are internalized by the habitual use of particular digital media platforms?  What communicative structures could we be internalizing and what are their limitations?  Do we then import these limitations into other areas of our thinking and communication in the world?

Comments welcome.

Clever Blog Post Title

Reading about Jon Stewart’s recent D.C. rally, I noticed a picture of a participant wearing a “V for Vendetta” mask and carrying a sign.  It wasn’t the mask but the words on the sign that caught my eye.  The sign read, “ANGRY SIGN.”

I wouldn’t have paid much attention to it had I not recently crossed paths with a guy wearing a T-shirt that read:  “Words on a T-shirt.”  Both of these then reminded me of the very entertaining “Academy Award Winning Trailer” — a self-referential spoof of movie trailers embedded at the end of this post.  The “trailer” opens with a character saying, “A toast — establishing me as the wealthy, successful protagonist.”  And on it goes in that vein.  The comments section on the clip’s YouTube page likewise yielded a self-referential parody of typical YouTube comments.  Example:  “Quoting what I think is the funniest part of the video and adding ‘LOL’ at the end of the comment.”

There is a long tradition of self-reference in the art world (Magritte’s pipe is a classic example), but now it seems the self-referentiality that was a mark of the avant-garde is an established feature of popular culture.  Call it vulgar self-referentiality if you like.  Our sophistication, in fact, seems to be measured by the degree of our reflexivity and self-awareness — “I know, that I know, that I know, etc.” — which elides with a mood of ironic detachment.  Earnestness in this environment becomes a vice.  We are in a sense “mediating” or re-presenting our own selves through this layer of reflexivity, and we’re pretty sure everyone else is too.

On the surface perhaps, there is a certain affinity with the Socratic injunction, “Know thyself.”  But this is meta-Socrates: “Knowingly know thyself.”  At issue for some is whether there is a subject there to know that would localize and contain the knowing, or whether, in the absence of a subject, all there is to know is a process of knowing, knowing itself.

Leaning on Thomas de Zengotita’s Mediated: How the Media Shapes Your World and the Way You Live in It and Hannah Arendt’s The Human Condition, here’s a grossly oversimplified thumbnail genealogy of our heightened self-consciousness.  On the heels of the Copernican revolution, we lost confidence in the ability of our senses to apprehend the world truthfully (my eyes tell me the world is stationary and the sun moves; turns out my eyes lied), but following Descartes we sought certainty in our inner subjective processes — “I think, therefore I am” and all that.  Philosophically, then, our attention turned to our own thinking — from the object out there to the subject in here.

Sociologically, modernity has been marked by an erosion of cultural givens; very little attains a taken-for-granted status relative to the pre-modern and early modern world.  In contrast to the medieval peasant, consider how much of your life is not pre-determined by cultural necessity.  Modernity, then, is marked by the expansion of choice, and choice ultimately foregrounds the act of choosing, which yields an attendant lack of certainty in the choice; I am always aware that I could have chosen otherwise.  In other words, identity grounded in the objective (reified) structures of traditional society is superseded by an identity that is the aggregate of the choices we make — choice stands in for fate, nature, providence, and all the rest.  Eventually an awareness of this process throws even the notion of the self into question; I could have chosen otherwise, thus I could be otherwise.  The self, as the story goes, is decentered.  And whether or not that is really the case, it certainly seems to be what we feel to be the case.

So the self-referentialism that marked the avant-garde and the theories of social constructivism that were controversial a generation ago are by now old news.  They are widely accepted by people under 35, most of whom picked it all up not by visiting the art houses or reading Derrida and company, but by living with and through the material conditions of their society.

First the camera, then the sound recorder, the video recorder, and finally the digitization and consequent democratization of the means of production and distribution (cheap cameras/Youtube, etc.) created a society in which we all know that we know that we are enacting multiple roles and that no action yields a window to the self, if by self we mean some essential, unchanging core of identity.  Foucault’s surveillance society has produced a generation of method actors. In de Zengotita’s words,

Whatever the particulars, to the extent that you are mediated, your personality becomes an extensive and adaptable tool kit of postures . . . You become an elaborate apparatus of evolving shtick that you deploy improvisationally as circumstances warrant.  Because it is all so habitual, and because you are so busy, you can almost forget the underlying reflexivity.

Almost.

There seems to be a tragically hip quality to this kind of hyper-reflexivity, but it is also like looking into a mirror image of a mirror — we get mired in an infinite regress of our own consciousness.  We risk self-referential paralysis, individually and culturally, and we experience a perpetual crisis of identity.  My sense is that a good deal of our cynicism and apathy is also tied into these dynamics.  Not sure where we go from here, but this seems to be where we are.

Or maybe this is all just me.

“The Things We Make, Make Us”

A while ago I posted about a rather elaborate Droid X commercial which featured a man’s arm morphing into a mechanical, cyborg arm from which the Droid phone then emerges.  This commercial struck me as a useful and vivid illustration of an important theoretical metaphor, deployed by Marshall McLuhan among others, to help us understand the way we relate to our technologies:  technology as prosthesis.

This weekend I came across another commercial that once again captured, intentionally or otherwise, an important element of our relationship with technology.  This time it was a commercial (which you can watch below) for the 2011 Jeep Grand Cherokee.  The commercial situates the new Grand Cherokee within the mythic narrative of American technology.  “The things that make us Americans,” we are told in the opening lines, “are the things we make.”  In 61 seconds flat we are presented with a series of iconic American technologies:  the railroad, the airplane, the steel mill, the cotton gin, the Colt .45, the skyscraper, the telegraph, the light bulb, and, naturally, the classic Jeep depicted in World War II era footage.  As if to throw in the mythical/American kitchen sink, at one point the image of a baseball hitting a catcher’s mitt is flashed on the screen.

(Never mind that a rather dark tale could be woven out of the history of the development and deployment of many of these technologies and that their sacred quality was lost on some rather consequential Americans including Nathaniel Hawthorne and Herman Melville.  For more on the place of technology in American history and culture take a look at David Nye’s American Technological Sublime and Leo Marx’s The Machine in the Garden: Technology and the Pastoral Ideal in America.)

In any case, the commercial closes with the line, “The things we make, make us.”  I suspect that this is an instance of someone speaking better than they know.  My guess is that the intended meaning is something like, “Making and inventing is part of what it means to be an American.”  We build amazing machines, that’s who we are.  But there is a deeper sense in which it is true that “the things we make, make us.”  We are conditioned (although, I would want to argue, not wholly determined) creatures.  That is to say that we are to some degree a product of our environment and that environment is shaped in important respects by the tools and technologies encompassed by it.

Nothing new here; this is a point made in one way or another by a number of observers and critics.  For example, consider the argument advanced by Walter Ong in Orality and Literacy.  The technology of writing, Ong contends, transforms oral societies and the way their members experience the world.  Ong and others have explored the similar significance of printing to the emergence of modern society and modern consciousness.  Lewis Mumford famously suggested that it is to the invention of the clock that we owe the rise of the modern world and the particular disposition toward time that characterizes it.  Historians and social critics have also explored the impact of the steam engine, the car, the telephone, the radio, the television, and, most recently, the Internet on humans and their world.  Needless to say, we are who we are in part because of the tools that we have made and that now are in turn making us.  And as I’ve noted before, Katherine Hayles (and she is not alone) goes so far as to suggest that as a species we have “codeveloped with technologies; indeed, it is no exaggeration,” she writes in Electronic Literature, “to say modern humans literally would not have come into existence without technology.”

Now this may be a bit more than what Jeep had in mind, but thanks to their commercial we are reminded of an interesting and important facet of the human condition.

Medium Matters

“The medium is the message.”  Or so Marshall McLuhan would have it.  The idea behind the catchy line is simple:  the medium is at least as significant as the content of a message, if not more so.  In Understanding Media, McLuhan puts it this way:

Our conventional response to all media, namely that it is how they are used that counts, is the numb stance of the technological idiot.  For the “content” of a medium is like the juicy piece of meat carried by the burglar to distract the watchdog of the mind.  (UM, 18)

Or, in case that wasn’t straightforward enough,

The content or message of any particular medium has about as much importance as the stenciling on the casing of an atomic bomb. (The Essential McLuhan, 238)

This has remained one of media studies’ guiding principles.  However, earlier this week, in a post titled “Content Matters,” Jonah Lehrer offers the following comments on an article in the journal Neuron:

One of the recurring themes in the article is that it’s very difficult to generalize about “technology” in the abstract. We squander a lot of oxygen and ink worrying about the effects of “television” and the “internet,” but the data quickly demonstrates that these broad categories are mostly meaningless. When it comes to changing the brain, content is king. Here are the scientists:

In the same way that there is no single effect of “eating food,” there is also no single effect of “watching television” or “playing video games.” Different foods contain different chemical components and thus lead to different physiological effects; different kinds of media have different content, task requirements, and attentional demands and thus lead to different behavioral effects.

You can read the study, “Children, Wired: For Better or for Worse,” online.  The article makes the case that different content presented by the same medium will impact children in different ways.  So, for example, children who watch Sesame Street test better for literacy than do children who watch Teletubbies.  The report also concluded that while media intended to be educational, such as Baby Einstein videos, can sometimes have detrimental consequences, media intended for entertainment, such as action video games, can sometimes yield positive educational outcomes.  On that note, Lehrer quoted the following excerpt:

A burgeoning literature indicates that playing action video games is associated with a number of enhancements in vision, attention, cognition, and motor control. For instance, action video game experience heightens the ability to view small details in cluttered scenes and to perceive dim signals, such as would be present when driving in fog (Green and Bavelier, 2007; Li et al., 2009). Avid players display enhanced top-down control of attention and choose among different options more rapidly (Hubert-Wallander et al., 2010; Dye et al., 2009a). They also exhibit better visual short-term memory (Boot et al., 2008; Green and Bavelier, 2006), and can more flexibly switch from one task to another (Boot et al., 2008; Colzato et al., 2010; Karle et al., 2010).

Now perhaps I’m being something of a curmudgeon, but it seems to me that, a heightened ability to drive in the fog notwithstanding, most of this amounts to saying that people who play video games get better at the skills needed to play video games.  All in all, I think we might prefer that people learn to make certain kinds of decisions more deliberately, rather than more rapidly.  In any case, the article goes on to conclude that more research is needed and that researchers are just now beginning to get their footing in the field.

The point Lehrer seizes on, that content matters, is true enough.  I don’t know too many people who would argue that all content on any given medium is necessarily equal.  However, this is not to say that content is all that matters.  The studies cited by the article focus on different content within the same medium, but what of the difference between those who use a medium and those who don’t, regardless of the content?  In other words, is there more of a difference between those who grow up watching television and those who don’t than there is between those who watch two different kinds of television programs?  Unless I missed something, the article (and the studies it cites) does not really address that issue.

By way of contrast, in “How to Raise Boys That Read,” Thomas Spence cites a study that seems to get at that question:

Dr. Robert Weis, a psychology professor at Denison University, confirmed this suspicion in a randomized controlled trial of the effect of video games on academic ability. Boys with video games at home, he found, spend more time playing them than reading, and their academic performance suffers substantially. Hard to believe, isn’t it, but Science has spoken.

The secret to raising boys who read, I submit, is pretty simple—keep electronic media, especially video games and recreational Internet, under control (that is to say, almost completely absent). Then fill your shelves with good books.

Ignore the unfortunate “Science has spoken” bit — I’m not sure what the capitalization is supposed to suggest anyway — and notice that this study is considering not differences in content within a medium (which is not insignificant), but differences between media.

To use a taxonomy coined by Joshua Meyrowitz, the first study focuses on media as conduits or vessels that merely transmit information.  On this model the vessel is less important than the content being transmitted.  There is certainly a place for this kind of analysis, but there is usually more going on.  Meyrowitz encourages us to look at media not only as conduits, but as environments that have significant consequences beyond the particular effects of the content.  As Meyrowitz puts it,

Of course media content is important, especially in the short term. Political, economic, and religious elites have always attempted to maintain control by shaping the content of media . . . But content questions alone, while important, do not foster sufficient understanding of the underlying changes in social structures encouraged or enabled by new forms of communication.

Content matters, but so does the medium (arguably more so).

“The storm is what we call progress”

Via Alan Jacobs at Text Patterns, I read the following excerpt from Arikia Millikan’s short piece “I Am a Cyborg and I Want My Google Implant Already” on The Atlantic’s web site:

By the time I finished elementary school, writing letters to communicate across great distances was an archaic practice. When I graduated middle school, pirating music on Napster was the norm; to purchase was a fool’s errand. At the beginning of high school, it still may have been standard practice to manually look up the answer to a burning question (or simply be content without knowing the answer). Internet connection speeds and search algorithms improved steadily over the next four years such that when I graduated in the class of 2004, having to wait longer than a minute to retrieve an answer was an unbearable annoyance and only happened on road trips or nature walks. The summer before my freshman year of college was the year the Facebook was released to a select 15 universities, and almost every single relationship formed in the subsequent four years was prefaced by a flood of intimate personal information.

Now, I am always connected to the Web. The rare exceptions to the rule cause excruciating anxiety. I work online. I play online. I have sex online. I sleep with my smartphone at the foot of my bed and wake up every few hours to check my email in my sleep (something I like to call dreamailing).

But it’s not enough connectivity. I crave an existence where batteries never die, wireless connections never fail, and the time between asking a question and having the answer is approximately zero. If I could be jacked in at every waking hour of the day, I would, and I think a lot of my peers would do the same. So Hal, please hurry up with that Google implant. We’re getting antsy.

Well, hard to beat honesty, I suppose.  I did find it slightly ironic that the Google executive interviewed for this piece is named Hal.

Jacobs aptly titled his post “The saddest thing I have read in some time,” and he added simply, “There’s a name for this condition: Stockholm Syndrome.”  Well put, of course.

Perhaps it was reading that piece that prepared me to read Walter Benjamin’s IX Thesis on the Philosophy of History later on that day with a certain melancholy resonance:

A Klee painting named “Angelus Novus” shows an angel looking as though he is about to move away from something he is fixedly contemplating.  His eyes are staring, his mouth is open, his wings are spread.  This is how one pictures the angel of history.  His face is turned toward the past.  Where we perceive a chain of events, he sees one single catastrophe which keeps piling wreckage upon wreckage and hurls it in front of his feet.  The angel would like to stay, awaken the dead, and make whole what has been smashed.  But a storm is blowing from Paradise; it has got caught in his wings with such violence that the angel can no longer close them.  This storm irresistibly propels him into the future to which his back is turned, while the pile of debris before him grows skyward.  The storm is what we call progress.

In any case, I tend to agree with Jacobs — it was rather sad.