This past Friday I took the first of my comprehensive exams. I think it went well, well enough anyway. If my committee agrees, I’ll have two more to go, which I’ll be taking within the next three months or so.
The first exam was over selections from my program’s core list of readings. The list features a number of authors that I’ve mentioned before, including Walter Ong, Lev Manovich, Jerome McGann, N. Katherine Hayles, and Gregory Ulmer. There were also a number of “classic” theorists: Benjamin, Barthes, Foucault, Baudrillard.
Additionally, there were a few titles that were new to me or that I had never gotten around to reading. I thought it would be worthwhile to briefly note a few of these.
The most important title that I encountered was Bruno Latour’s Reassembling the Social: An Introduction to Actor-Network-Theory. It’s a shame that it has taken me so long to finally read something from Latour. I heartily recommend it. I’ll be giving it another read soon and may have something to say about it in the future, but for now I’ll simply note that I found it quite enlightening.
Before I say anything else, let me say this: I get where this is coming from, really, I do.
This is a recent xkcd comic that you may have seen in the past day or two. It makes a valid point, sort of. The point that I’d say is worth taking from this comic is this: both unbridled enthusiasm and apocalyptic fear of any new technology are probably misguided. Some people need to hear that.
That said, this comic is similar in spirit to an earlier xkcd offering that Alan Jacobs rightfully picked apart with some searching questions. Although it is more pronounced in the earlier piece, both exhibit a certain cavalier “nothing-to-see-here-let’s-move-it-along” attitude with regard to technological change. In fact, one might be tempted to conclude that according to the xkcd philosophy of technology, technology changes nothing at all. Or, worse yet, that with technology it is always que sera, sera and we do well to stop worrying and enjoy the ride wherever it may lead.
In truth, I might’ve let the whole thing go without comment were it not for that last entry on the chart. It’s a variation of a recurring, usually vacuous rhetorical move that latches on to any form of continuity in order to dismiss concerns, criticism, etc. Since we have always already been X, this new form of X is inconsequential. It is as if reality were an undifferentiated, uncaused monistic affair in which the consequences of every apparent change are always contained and neutralized by its antecedents. But, of course, the fact that human beings have always died does not suggest to anyone in their right mind that we should never investigate the causes of particular deaths so as to prevent them when it is reasonable and possible to do so.
Similarly, pointing out that human beings have always used technology is perhaps the least interesting observation one could make about the relationship between human beings and any given technology. Continuity of this sort is the ground against which figures of discontinuity appear, and it is the figures that are of interest. Alienation may be a fact of life (or maybe that is simply the story that moderns tell themselves to bear it), but it has been so to greater and lesser extents and for a host of different reasons. Pointing out that we have always been alienated is, consequently, the least interesting and the least helpful thing one could say about the matter.
[Note: This post first appeared on Medium in July. At the time, I mentioned it on this blog and provided a link. I'm now republishing the post here in its entirety.]
Sometimes a cigar is just a cigar, Freud reportedly quipped, and sometimes technology is just a tool. But sometimes it becomes something more. Sometimes technology takes on symbolic, or even religious significance.
In 1900, Paris welcomed the new century by hosting the Exposition Universelle. It was, like other expositions and world’s fairs before it, a showcase of both cultural achievement and technological innovation. One of the most popular exhibits at the Exposition was the Palace of Electricity, which displayed a series of massive dynamos powering all the other exhibition halls.

Among the millions of visitors who came through the Palace of Electricity was an American, the historian and cultural critic Henry Adams, who published a memorable account of his experience. Adams was awestruck by the whirling dynamos and, perhaps because he had recently visited the cathedral at Chartres, he drew an evocative comparison between the dynamo and the power of the Virgin in medieval society. Speaking in the third person, Adams wrote,
As he grew accustomed to the great gallery of machines, he began to feel the forty-foot dynamos as a moral force, much as the early Christians felt the Cross. The planet itself seemed less impressive, in its old-fashioned, deliberate, annual or daily revolution, than this huge wheel, revolving within arm’s length at some vertiginous speed, and barely murmuring — scarcely humming an audible warning to stand a hair’s-breadth further for respect of power — while it would not wake the baby lying close against its frame. Before the end, one began to pray to it; inherited instinct taught the natural expression of man before silent and infinite force.
Writing in the early 1970s, Harvey Cox revisited Adams’ meditation on the Virgin and the Dynamo and concluded that Adams saw “what so many commentators on technology since then have missed. He saw that the dynamo … was not only a forty-foot tool man could use to help him on his way, it was also a forty-foot-high symbol of where he wanted to go.”
Cox looked around American society in the early 70s, and wondered how Adams would read the symbolic value of the automobile, the jet plane, the hydrogen bomb, or a space capsule. These too had become symbols of the age and they invited a semiology of the “symbolism of technology.”
“Technological artifacts become symbols,” Cox wrote, “when they are ‘iconized,’ when they release emotions incommensurate with their mere utility, when they arouse hopes and fears only indirectly related to their use, when they begin to provide elements for the mapping of cognitive experience.”
Take the airplane, for example. In his classic study, The Winged Gospel: America’s Romance with Aviation, Joseph Corn summarized a remarkable article that appeared in 1916 about the future of flight. In it, the author predicted trans-oceanic flights by 1930 and, by 1970, the emergence of “traffic rules of the air” necessitated by the heavy volume of airplane traffic. Then the timeline leaps forward to the year 3000. At this point “superhumans” would’ve evolved and they would “live in the upper stratas of the atmosphere and never come down to earth at all.” By the year 10,000, “two distinct types of human beings” would have appeared: “Alti-man” and “ground man.” Alti-man would live his entire life in the sky and would have no body; he would be an “ethereal” being that would “swim” in the sky as we swim in the ocean.
As Corn put it, “Alti-man was nothing if not a god. He epitomized the winged gospel’s greatest hope: mere mortals, mounted on self-made mechanical wings, might fly free of all earthly constraints and become angelic and divine.”
This may all sound tremendously hokey to our ears, but Corn’s book is full of only slightly less implausible aspirations that attached themselves to the airplane throughout the early and mid-twentieth century. And it wasn’t just the airplane. In American Technological Sublime, historian David Nye chronicled the near-religious reverence and ritual that gathered around the railroad, the first skyscrapers, the electrified cityscape, the Hoover Dam, atomic bombs, and Saturn V rockets.
Taking an even broader historical perspective, the late David Noble argued that the modern technological project has always been shot through with religious and quasi-spiritual aspirations. He traced what he called the “religion of technology” from the late medieval era, through the pioneering early modern scientists, to artificial intelligence and genetic engineering.
The symbolism of technology, however, does not always crystallize society’s hopes and ambitions. To borrow Cox’s phrasing, it does not always embody where we want to go. Sometimes it is a symbol of fears and anxieties. In The Machine in the Garden, for instance, Leo Marx meticulously detailed how the locomotive became a symbol that collected the fears and anxieties generated by the industrial revolution in nineteenth-century America.
As late as 1901, long after the railroad had become an ordinary aspect of American life, novelist Frank Norris described it in The Octopus as “a terror of steam and steel,” a “symbol of vast power, huge, terrible, flinging the echo of its thunder over all the reaches of the valley,” and a “leviathan, with tentacles of steel clutching into the soil, the soulless Force, the iron-hearted Power, the monster, the Colossus, the Octopus.”
The sublime experience accompanying the atomic bomb also inspired fear and trepidation. This response was most famously put into words by Robert Oppenheimer when, after the detonation of the first atomic bomb, he quoted the Bhagavad Gita: “I am become Death, the destroyer of worlds.”
This duality is not surprising given what we know about religious symbols generally. Drawing on sociologist Emile Durkheim, Cox noted that sacred symbols “are characterized by a high degree of power and of ambiguity. They arouse dread and gratitude, terror and rapture. The more central and powerful a symbol is for a culture the more vivid the ambiguity becomes.” The symbolism of technology shares this interplay between power and ambiguity. Our most powerful technologies both promise salvation and threaten destruction.
So what are the symbolic technologies of our time? The recent farewell tour by the space shuttle fleet evoked something approaching Nye’s technological sublime, and so too did Curiosity’s successful Mars landing. Neither of these, however, seems to rise to the level of technological symbolism described by Cox. They are momentarily awe-inspiring, but they are not quite symbols. The Singularity movement certainly does contain strong strands of Noble’s “religion of technology,” and it explicitly promises one of humanity’s long-sought dreams: immortality. But the movement’s ambitions do not easily coalesce around one particular technology or artifact that could collect its force into a symbol.
Here’s my candidate: Google Glass.
I can’t think of another recent technology whose roll-out has occasioned such a strong and visceral backlash. You need only scroll through a few months’ worth of posts at Stop the Cyborgs to get a feel for how all manner of fears and anxieties have gathered around Glass. Here are some recent post titles:
Google Won’t Allow Face Recognition on Glass Yet
Überveillance | Think of it as big brother on the inside looking out
Consent is not enough: Privacy Self-Management and the Consent Dilemmas
Stigmatised | Face recognition as human branding
The World of Flawed Data and Killer Algorithms is Upon Us…
Google Glass; Making Life Efficient Through Privacy Invasion
Glass has appeared at a moment already fraught with anxiety about privacy, and that was the case even before recent revelations about the extent and ubiquity of NSA surveillance. In other words, just when the fear of being watched, monitored, or otherwise documented has swelled, along comes a new technology in the shape of glasses, our most recognizable ocular technology, that aptly serves as an iconic representation of those fears. If our phones and browsers are windows into our lives, Glass threatens to make our gaze and the gaze of the watchers one and the same.
But remember the dual nature of potent symbols: we have other fears to which Glass may present itself as a remedy. We fear missing out on what transpires online, and Glass promises to bring the Internet right in front of our eyes so we will never have to miss anything again. We fear experiences may pass by without our documenting them, and Glass promises the power to document our experience pervasively. If we fear being watched, Glass at least allows us to feel as if we can join the watchers. And behind these particular fears are more primal, longstanding fears: the fear of loneliness and isolation, the fear of death, the fear of insecurity and vulnerability. Glass answers to these as well.
Interestingly, the website I cited earlier was not called “Stop Google Glass”; it was called “Stop the Cyborgs.” Perhaps Google Glass is the icon the Singularity project has been looking for. Glass is not quite an implant, but something about its proximity to the body or about how it promises to fade from view and become the interface through which our consciousness experiences reality … something about it suggests the blurring of the line between human and machine. Perhaps that is the greatest fear and highest aspiration of our age. The fears of those who would preserve humanity as they know it, and the aspirations of those who are prepared, as they see it, to transcend humanity are embodied in Glass.
Long before he visited the Exposition Universelle, Henry Adams wrote to his brother:
You may think all this nonsense, but I tell you these are great times. Man has mounted science and is now run away with. I firmly believe that before many centuries more, science will be the master of man. The engines he will have invented will be beyond his strength to control. Some day science may have the existence of mankind in its power, and the human race commit suicide by blowing up the world.
We might think all that nonsense, but it wasn’t that long ago that fears of a nuclear winter gripped our collective imagination. More recently, other technological scenarios have fueled our popular cultural nightmares: biogenetically cultivated global epidemics, robot apocalypses, or climate catastrophes. In each case, the things we have made “become Death, destroyer of worlds.” With Glass, the fear is not that we will blow up the world or unleash a global catastrophe. It is that we will simply innovate the humanity out of ourselves. Remembering how the story turned out, we might put the temptation this way: If we will place Glass before our eyes, they will be opened, and we will become as gods.
Of course, reading the symbolism of technology is not quite like reading palms or tea leaves. The symbols necessitate no particular future in themselves. But they are cultural indicators and as such they reveal something about us, and that is valuable enough.
I’m ordinarily reluctant to complain. This is partly a function of personality and partly a matter of conviction. I’m reticent by nature, and I tend to think that most complaining is petty, self-serving, unhelpful, and tiresome.
That said, I’ve found myself complaining recently. I’m thinking of two separate incidents in the last week or so. In one exchange, I wrote to a friend that I was “Well enough, in that stretched-so-thin-people-can-probably-see-through-me kind of way.” In another conversation, I admitted that what annoyed me about my present situation, the situation that I’ve found myself in for the past few years, was that I was attempting to do so many things simultaneously I could do none of them well.
I teach in a couple of different settings, I’m trying to make my way through a graduate program, I’ve got a writing project that’s taken me much too long to complete, and I’d like to be a halfway decent husband. I could list other demands, but you get the idea. And, of course, those of you with children are reading this and saying, “Just you wait.” And that’s the thing: most people “feel my pain.” What I’m describing seems to be what it feels like to be alive for most people I know.
I was reminded of Isaiah Berlin’s famous discussion of the fox and the hedgehog. Expounding on an ancient Greek saying — “the fox knows many things, the hedgehog knows one big thing” — Berlin went on to characterize thinkers as either foxes or hedgehogs. Plato, Hegel, Nietzsche, for example, were hedgehogs; they had one big idea by which they interpreted the whole of experience. Aristotle, Montaigne, and Goethe were foxes; they were more attuned to the multifarious particularities of experience.
Berlin had intellectual styles in mind, but, if I may re-apply the proverb to the realm of action in everyday life, I find myself wanting to be a hedgehog. I want to do one thing and do it well. Instead, I find myself having to be a fox.
The more I thought about it, the more it seemed to me that this was, in fact, a pretty good way of thinking about the character of contemporary life and competing responses to the dynamics of digital culture.
Clearly, there are forces at play that predate the advent of digital technologies. In fact, part of the unsettled, constantly shifting quality of life I’m getting at is what sociologist Zygmunt Bauman has called “liquid modernity.” The solid structures of pre-modern, and even early modern, society have in our late-modern (postmodern, if you prefer) times given way to flux, uncertainty, and instability. (If you survey the titles of Bauman’s books over the last decade or so, you’ll quickly notice that Bauman has something of the hedgehog in him.)
The pace, structure, and dynamism of digital communication technologies have augmented these trends and brought their demands to bear on an ever larger portion of lived experience. In other words, multi-tasking, continuous partial attention, our skimming way of thinking, the horcrux-y character of our digital devices, the distraction/attention debates — all of this can be summed up by saying that we are living in a time where foxes are more likely to flourish than hedgehogs. Or, put more temperately, we are living in a time where foxes are more likely to feel at home than hedgehogs. This is great for foxes, of course, and may they prosper.
But what if you’re a hedgehog?
You cope and make do, of course. I don’t, after all, mean to complain.
It’s been a while since I’ve had occasion to point out a Borg Complex case, but the folks at Google have seen fit to help me remedy that situation.
At MIT’s EmTech conference last Thursday, the head of the display division at Google-X, Mary Lou Jepsen, gave us a few gems.
Speaking of Google Glass and its successors, Jepsen explained, “It’s basically a way of amplifying you. I’ve thought for many years that a laptop is an extension of my mind. Why not have it closer to my mind, and on me all the time?”
Why not, indeed.
In any case, her division is hard at work. They are “maybe sleeping three hours a night to bring the technology forward.”
“It’s coming,” she added. “I don’t think it’s stoppable.” Then why, I ask, lose so much sleep over it? One really ought not wear oneself ragged over something that’s bound to come to pass inevitably.
But, as per Mr. Brin’s directives, she wasn’t saying much about what exactly was coming. Whatever the next iteration of wearable computing looks like, Jepsen tells us “you become addicted to the speed of it, and it lets you do more fast and easily.”
Concerns? Never you mind. Remember Mr. Schmidt’s comforting assurances: “Our goal is to make the world better. We’ll take the criticism along the way, but criticisms are inevitably from people who are afraid of change or who have not figured out that there will be an adaptation of society to it.”
Silly fearful critics. Don’t they know resistance is futile, society will be assimi … er … will adapt.
You have likely already heard two things about the space epic Gravity. You have heard that it is a visually stunning, anxiety-inducing thriller that immediately absorbs you into its world and does not release you until the credits roll. That is largely correct. You have also heard that Gravity is not really about space. It is really about the inner struggles of the main character, Dr. Ryan Stone, played by Sandra Bullock. This is also true enough. But what exactly is Gravity trying to tell us about this inner human struggle being played out against sublimely rendered vistas of earth and space?
As Matt Thomas astutely noted, Gravity trades in both the natural sublime and the technological sublime. The first of these is a common enough notion: it is the sense of awe, wonder, and fear that certain natural realities can inspire in us. Gravity gives us plenty of opportunities to experience the natural sublime as our gaze alternates from the meticulously wrought surface of the earth to the starry, cavernous darkness of the space which envelops it.
The technological sublime is a concept developed by the historian David Nye to describe the analogous feelings of awe, wonder, and fear that we experience in the presence of certain man-made realities. Nye documented a series of human technologies that inspired this kind of response when they were first developed. These included the Hoover Dam, skyscrapers, the electrified city-scape, the atomic bomb, and the Saturn V rockets. In Gravity, depictions of the space shuttle, the International Space Station, and the Hubble telescope — even after they have been pummeled and shredded by space debris — also manage to evoke this experience of the technological sublime.
Against this double sublime, Gravity unfolds its plot of disaster and survival. [Yes, spoilers ahead.] Within minutes, Dr. Stone and Matt Kowalski (played by George Clooney) find themselves adrift after a field of space debris strikes the shuttle and kills the rest of the crew. Kowalski is preternaturally calm in the face of this unthinkable catastrophe. After he recovers Dr. Stone, the pair begin making their way to the ISS in hopes of using the station’s Soyuz capsule to return to earth. Had that initial plan worked, of course, it would have been a very short film.
Upon arriving at the ISS, Kowalski is lost in an act of heroic self-sacrifice, and the capsule turns out to be too damaged to survive re-entry. Before he is lost, Kowalski lays out a plan of action for Dr. Stone. She is to take the battered capsule to the Chinese space station and then use their emergency capsule for the journey home. Stone manages to follow this plan, but one near catastrophe after another ensues, sustaining a fever pitch of suspense or, as some critics have noted, threatening to steer the film into melodrama.
It is in the midst of one of these crises that Stone is tempted to give up altogether. She finds that the Soyuz capsule does not have enough fuel to get to the Chinese station, and so she opts to turn off the capsule’s life support systems and float off into that other dark abyss. This scene is pivotal. We have already learned that Stone lost a daughter in a painfully random playground accident. Ever since, she has lost herself in her work and driven aimlessly at night to assuage her sorrow. She is alone in space now, but she realizes that she is alone on earth as well; no one would mourn her loss.
As she begins to doze into unconsciousness, however, Kowalski reappears, chatty and calm as always. He acknowledges that there is something appealing about the escape she is about to make, but he encourages her to reconsider and suggests a strategy that she had not yet considered. And then he disappears. We realize that she has been hallucinating, but she rallies nonetheless and determines not to give up on the hope of return quite yet.
If Kant and Nye help us to describe the sublime scenery against which Stone’s struggle is set, I’d like to suggest that Wendell Berry and G.K. Chesterton can help us make sense of the struggle itself.
Stone must learn to see life on earth — with both its heartache and tragedy, its joys and delights — as a gift, and it is through a kind of death that her perception begins to be realigned. Wendell Berry has beautifully captured this dynamic in his reflections on a wonderfully poignant scene in Shakespeare’s King Lear involving the blinded Earl of Gloucester and his son, Edgar. Gloucester is in despair and seeks to take his life. The life he thought himself the master of has unraveled, and he has compounded his troubles by falsely accusing Edgar and driving him into exile. But Edgar, disguised as a beggar, has returned to his father to lead him out of despair so that the old man may die in the proper human position, “Twixt two extremes of passion, joy and grief …”
As Berry puts it, “Edgar does not want his father to give up on life. To give up on life is to pass beyond the possibility of change or redemption.” So when Gloucester asks to be taken to the cliffs of Dover so that by a leap he may end his life, his disguised son only pretends to do so. The stage directions then indicate that Gloucester, “Falls forward and swoons.” When he awakes, his son now pretends to be a man who has seen Gloucester survive the great fall. Gloucester is dismayed. “Away, and let me die,” he says. But Edgar narrates what he has “seen” and proclaims, “Thy life’s a miracle. Speak yet again.”
It is that line, that realization, that brings Gloucester back from despair. Having passed through a kind of fictive death, he has been brought once more to see his life as a gift. And so it was with Stone. Kowalski plays Edgar to her Gloucester, and having flirted with death, she is recalled to life. No longer does she long for the escape into death that the dark harshness of space represents in this film. She now determines to find her place again on earth, with all of its attendant sorrows and joys.
So much for Berry; what of Chesterton? Chesterton famously came to faith through an experience of profound gratitude for the sheer gratuity of being. While she ponders the possibility of death and laments that there will be no one to mourn her, Stone also says that there will be no one to pray for her. She confesses that she cannot pray for herself. She was never taught. When, finally, she has reentered the earth’s atmosphere and her capsule splashes down in a murky lake, Stone must make one more fight for her life. The capsule is inundated and she must swim out for air, but she is forced to struggle with her space suit, which threatens to sink her. Once she has fought free of this last obstacle, she makes her way to a muddy shore and, cheek to clay, she exhales the words, “Thank you.” She has, I take it, learned to pray.
This film about space ends on earth as Stone struggles to her feet under the pull of gravity. But by the composure of her posture and the joy of her expression, we are encouraged to conclude that now, finally, Stone is not only on earth, but she is also at home on earth. She no longer seeks an escape. She is prepared to live with both grief and joy. She knows too that for all of the sublime splendor of space, it is her life that is the most profound miracle for which the instinctive response can only be gratitude.
In the past month, I’ve made exactly two new posts. A mere ten of any consequence since May. This is no way to run a blog, I realize. I’m sure you’ve not been losing any sleep over my relative silence, but I thought I’d check in just to let everyone know that I’m still alive and this site hasn’t gone dark.
The truth is that posting will probably remain light. The school year is back in full swing, and my teaching responsibilities keep me pretty busy. On top of that, I’m getting back to work on my doctoral program after a year’s respite. The next hoop I’ll be jumping through will be comprehensive exams, which I hope to knock out between this semester and next. Following that I’ll be entering the dread dissertation stage, which may or may not drag on indefinitely. Hopefully not. I’ll be focusing my research on what I’ve called, in a more popular vein on this site, a Borg Complex. Just in case you were wondering. It turns out to be a good time to be working on the Borg Complex. It’s on display quite a bit lately, especially in discussions about MOOCs and the future of education, robotics and automation, Google Glass, drones, and surveillance technology. For an example of the last of these, see my comment on this thread.
Given all of this, I’ve been less active online than I have been in the past. In fact, I began to wonder where those who participate more actively online find the time. I’ve not looked at the existing research on this, but I’m curious about the demographics of Twitter in particular. I tend to follow academics and folks who are in the tech writing business. Naturally, these folks tend to spend a lot of time sitting in front of a screen as part of their daily activities. It is their work in some regards to be active online. It’s easier for them to participate online regularly throughout the day. This was the case for me when I was a full time grad student for a semester or two and only working a minimal number of hours otherwise. It struck me, then, that Twitter, even if you work hard to avoid creating a filter bubble, is still a kind of socio-economic bubble by default, at least when it comes to those who participate actively since they will tend to be those whose work and family circumstances allow a certain degree of temporal freedom. I may be completely off about that, as I’m just extrapolating from my own experience. All I know is that my particular schedule leaves very little time for social media participation.
Time constraints alone, however, have not accounted entirely for my relative silence of late. You may remember a post from mid-July in which I laid out 11 practical steps I was taking to achieve a relatively healthy and productive relationship with “the Internet.” I’ve been fairly good about sticking to those guidelines and, as a kind of side effect, I’ve found myself a little less eager to write blog posts, post links on tumblr, or participate on Twitter.
In fact — and yes, this is strictly anecdotal and subjective — I have found that the better I stuck to these 11 guidelines that I set up for myself, the better I’ve felt generally. I found there to be a noticeable difference between the feel of a day in which I stuck to the guidelines, particularly after I’d done so for two or three days running, and the feel of a day in which, for whatever reason, I didn’t. And, in my estimation, the difference was a positive one, as you may have already assumed.
One last consideration: I’ve also made a decision to focus what time I do have for writing on projects for other sites and journals.
All of this to say that, while I’m not abandoning this site, the posting will likely remain light. Of course, I’ve said that in the past only to then find myself suddenly posting more frequently. Who knows.
Whatever the case, thanks as always for dropping by. I hope all is well in your little corner of the world, wherever that happens to be.
How will Google Glass transform professional football? Oakland Raiders punter Chris Kluwe is on the case. He is the NFL’s first Google Glass Explorer, one of a cadre of early adopters hand-picked by Google based on their responses to the prompt “If I had Glass …”
Kluwe has had limited experience with Glass so far, mainly using Glass to record drills, but it’s been enough to give Kluwe a lot of ideas about how Glass could be deployed in the future. Alex Konrad of Forbes interviewed Kluwe and described part of what the punter has envisioned so far:
In Kluwe’s future NFL, players will wear clear visors that can project to them the next play to run as they are getting back into position from the last one. Quarterbacks can get a flashing color when a receiver is very open or see which area is about to become a good place to look. Running backs could be alerted that a new path to run has just opened up.
Here’s the striking thing about this entirely plausible development. For years, video games have been striving to capture the look and feel of the game as it’s played on the field. What Kluwe has described is a reversal of roles in which now the game as it is played on the field strives to capture the look and feel of playing a video game.
The closest analogy to the experience of the world through Google Glass may be the experience of playing a first-person video game.
This little insight carries wide-ranging implications that are not limited to the experience of professional athletes. A generation that has grown up playing first-person shooters and role-playing video games is on the verge of receiving a tool that will make the experience of everyday life feel more like the experience of playing a game. This brings an entirely new meaning to gamification, and it raises all sorts of intriguing, serious, and possibly disturbing possibilities.
As early as 1981, the philosopher Jean Baudrillard claimed that images and simulations, which had traditionally copied reality, were now beginning to precede and determine reality. Recalling a famous story by Jorge Luis Borges in which an empire commissions a map that is to be a faithful 1:1 representation of its territory, Baudrillard believed that now the map preceded the territory. Glass is poised to create yet another realization of Baudrillard’s critique, except that now it is the game that will precede the real-world experience.
UPDATE: Nick Carr adds the following observation in the comments below, “You might argue that this reversal is already well under way in warfare. Video war games originally sought to replicate the look and feel of actual warfare, but now, as more warfare becomes automated via drones, robots, etc., the military is borrowing its interface technologies from the gaming world. War is becoming more gamelike.”
It’s fair to say that when I write about the Internet or digital devices, my tone tends toward the cautionary, and that’s probably understating the case. But, as my wife would be quick to confirm, I don’t always practice what I preach.
I wanted to do something about this, so I created a list of digital disciplines (obviously couldn’t resist the alliteration) that I’ll be trying to stick to in a serious, but not quite puritanical fashion.
Of course, I don’t think these digital disciplines will be universally applicable. You may find them entirely implausible given your own circumstances, or you may find them altogether unnecessary. All I’m claiming for them is this: given my priorities and my circumstances, I’ve found it helpful to articulate and implement these disciplines in order to achieve what I would characterize as a healthy relationship to Internet culture.
[Aside: I'm using the awkward expression "Internet culture" as shorthand for the whole range of diverse artifacts and practices that accumulate around the Internet and the devices we use to access it. I realize that the very idea of "the Internet" is contested¹, but trying to delineate it here in a rigorous "academic" manner would be even more tedious than this aside.]
Before getting to the digital disciplines, though, let me first lay out some basic underlying assumptions. You’ll probably find some of these debatable, but at least you’ll know where I’m coming from.
Time is a limited resource, and I would rather treat it as a gift than as an enemy.
While I have no interest in denying the authenticity, much less the reality, of online experience, I do privilege face-to-face experience (or “fully embodied experience,” which is not to say that online experience is disembodied), all things being equal.
Relatedly, we are not less than our bodies; so how our bodies, not just our minds, interact with the Internet and Internet-enabled devices matters.
While it may be difficult to articulate a precise theoretical distinction between online and offline experience, the terms attempt to get at real distinctions with practical consequences.
Trying to “keep up” online is a joyless, Sisyphean undertaking that is best abandoned in principle.
“All of humanity’s problems stem from man’s inability to sit quietly in a room alone.” (Blaise Pascal)
I don’t go in for the whole trans-/post-human/cyborg thing. As Douglas Rushkoff recently put it, “I’m on team human here. Call that egotistical, but it’s the only team I know.”
“A life spent entirely in public, in the presence of others, becomes, as we would say, shallow. While it retains visibility, it loses the quality of rising into sight from some darker ground which must remain hidden if it is not to lose its depth in a very real, non-subjective sense.” (Hannah Arendt)
Make of those what you will. Here, finally, is the list. Remember, I am the primary audience for this advice.
1. Don’t wake up with the Internet. Have breakfast, walk the dog, read a book, whatever … do something before getting online. Think of it as a way of preparing – physically, mentally, emotionally, morally, etc. – for all that follows.
2. Don’t remain ambiently connected to your email account. Close the email tab/app. Check in two or three times a day for a fixed period of time. The same holds for FB, Twitter, etc.
3. Sit on a link for a few hours or even a day before sharing. If it’s not worth sharing then, it probably wasn’t worth sharing in the first place. Don’t add to the noise.
4. Don’t take meals with the Internet. Log off, leave devices behind, and enjoy your meal as an opportunity to recoup, physically and mentally. If you’re inside all day, take your lunch outside. Enjoy the company of others, or take the chance to sit in silence for a few minutes.
6. Do one thing – one whole, complete thing – at a time whenever it’s reasonable to do so. If writing an email, write it all at once. If reading an article, read it straight through. If a task can’t be completed in one sitting, at least work on it for a reasonable amount of time without interruption. Resist, in other words, the allure of the multitasking myth. It’s the siren song of our age, and it will shipwreck your mind.
7. Clear the RSS feed at the end of each day. If it didn’t get read, life will go on. This is a hard one for me; I want to read it all, stay on top of things, etc. If I don’t clear the feed, though, I end up with a pile of information that eventually snowballs to unmanageable proportions anyway. What’s more, deleting potentially interesting, unread items each day functions as a happy, cathartic gesture of liberation.
8. Turn off all notifications that threaten to interrupt or distract. Mentally, we tend to respond to these with Pavlovian alacrity. Emotionally, they are not unlike our own little versions of Gatsby’s green light. In either case, it’s a ruinous habit.²
9. Turn devices off when spending time with others. Also, shut the laptop when speaking to another person. This may seem quaint or reactionary or nostalgic or antiquated or judgmental or curmudgeonly. I see it as a way of remaining minimally civil and decent, whether or not I’m accorded the same civility and decency in return. If you must attend to a call or text, politely indicate as much and do so. Better that than surreptitiously attending to your device while still attempting to give off the impression of attentiveness. That’s a meaningless charade, and everyone involved knows it.
10. Log off of social media sites after visiting them. The extra step of logging in makes it slightly less tempting to click over when a craving for distraction strikes. Don’t underestimate the effectiveness of these little digital speed-bumps.
A few years ago, Umberto Eco said, “We like lists because we don’t want to die.” Perhaps that’s a bit too melodramatic for this particular list. Certainly, I’ve attached no death-defying hopes to it. But I do think following through on these digital disciplines will help me make better use of this life and take more pleasure in it.
If you’ve got your own similar list of digital disciplines, share them in the comments below. If they’re useful to you, chances are others would find them useful too.
Long before the Internet and the smartphone, Henry David Thoreau – who, incidentally, was born this day, 1817 – decided to go off the grid. Granted, there was no “grid” at the time in the modern sense and he didn’t go very far in any case, but you get the point. Overwhelmed and not a little disgusted with the accoutrements of civilization, Thoreau set out to live a simple life in accord with nature.
In the first chapter of Walden, Thoreau explained:
The very simplicity and nakedness of man’s life in the primitive ages imply this advantage, at least, that they left him still but a sojourner in nature. When he was refreshed with food and sleep, he contemplated his journey again. He dwelt, as it were, in a tent in this world, and was either threading the valleys, or crossing the plains, or climbing the mountain-tops. But lo! men have become the tools of their tools.
Thoreau is the patron saint of all of those who have ever thought about quitting the Internet and lobbing smartphones into a pond, if only we could find one. This means most of us have probably lit a candle to Thoreau at some point in our relentlessly augmented lives.
Within the last few days, stories about the Walden-esque Camp Grounded, sponsored by a group called Digital Detox, have been popping up. Already, you’ve guessed that Camp Grounded was an opportunity to spend some time off the proverbial grid: no cellphones, no tablets, no computers, and no watches.
Writing about the event at GigaOm, Mathew Ingram offered what has by now become a predictable three-step response to these sorts of events or programs, or to most any critique of the Internet or digital devices.
Step 1: Illustrate how similar concerns have always existed. People have felt this way before, etc.*
Step 2: Deconstruct the line between offline and online activity. Online activities are no less “real” or “genuine” than offline activities …**
Step 3: Locate the “real” problem somewhere else altogether. It’s not the Internet, it’s _______________.***
Pivoting on the case of Paul Miller, a writer for The Verge who spent a year sans Internet only to discover that there are also analog forms of wasting time, Ingram concluded,
Is it good to disconnect from time to time? Of course it is. And there’s no question that the pace of modern life has accelerated over the past decade, with so many sources of real-time activity that we feel compelled to participate in, either because our friends and family are there or because our jobs require it. But disconnecting from all of those things isn’t going to magically transform us into better people somehow — all it will do is reveal us as we really are.
But here’s the thing. While there are limits to the malleability of our character and personality, who “we really are” is a work in progress. We are now who we have been becoming. And who we are becoming is, in part, a product of habits and dispositions that arise out of our daily actions and practices, including our digitally mediated activities.
The problem with the three step strategy outlined above is that it doesn’t really erase the difference between online and offline experience. While it waves a rhetorical hand to that effect, it nevertheless retains the distinction but dismisses the significance of online experience and digital devices.
But online experience and digital devices, precisely because they are “real,” matter. Ingram is right to say that “disconnecting … isn’t going to magically transform us into better people somehow.” But for some people, under certain conditions, it may in fact be an important step in that direction.
Long before the Internet or even Walden, Aristotle taught that the mean between two extremes was the ideal to aim for. So, between the extremes of gluttony and ascetic deprivation, there was the ideal use of food for both pleasure and nutrition. This ideal use stood between two extremes, but it wasn’t simply a matter of splitting the difference. According to Aristotle, the mean was relative: it depended on each person’s particular circumstances.
But the point wasn’t simply to hit some artificial middle ground for its own sake — it was to learn how to use things without being used by them. Or, as Thoreau put it, to learn how not to become the tools of our tools.
That’s what we’re looking for in our relationship to the Internet and our digital devices: the sense that we are using them toward good, healthy, and reasonable ends. Instead, many people feel as if they are being used by their devices. The solution is neither reactionary abstinence nor thoughtless indulgence. What’s more, there’s no one answer that is universally applicable. But for some people at some times, taking extended periods of time away from the Internet may be an important step toward using it rather than being used by it. The three-step rhetorical strategy used to dismiss those who raise questions about our digital practices won’t help at all.
*Sometimes these historical analogies are misleading, sometimes they are illuminating. Never are they by themselves cause to dismiss present concerns. That would be sort of like saying that people have always been struck by diseases, so we shouldn’t really be too worried about it.
**A lot of pixels have already been spent on this one. E.g., here and here.
***I’ve actually written something similar, but the key is not to lose sight of how devices do play a role in the phenomenon.