Using, Rather Than Being Used by Our Devices

Long before the Internet and the smartphone, Henry David Thoreau – who, incidentally, was born on this day in 1817 – decided to go off the grid. Granted, there was no “grid” at the time in the modern sense and he didn’t go very far in any case, but you get the point. Overwhelmed and not a little disgusted with the accoutrements of civilization, Thoreau set out to live a simple life in accord with nature.

In the first chapter of Walden, Thoreau explained:

The very simplicity and nakedness of man’s life in the primitive ages imply this advantage, at least, that they left him still but a sojourner in nature. When he was refreshed with food and sleep, he contemplated his journey again. He dwelt, as it were, in a tent in this world, and was either threading the valleys, or crossing the plains, or climbing the mountain-tops. But lo! men have become the tools of their tools.

Thoreau is the patron saint of all those who have ever thought about quitting the Internet and lobbing their smartphones into a pond, if only they could find one. This means most of us have probably lit a candle to Thoreau at some point in our relentlessly augmented lives.

Within the last few days, stories about the Walden-esque Camp Grounded, sponsored by a group called Digital Detox, have been popping up. Already, you’ve guessed that Camp Grounded was an opportunity to spend some time off the proverbial grid: no cellphones, no tablets, no computers, and no watches.

Writing about the event at GigaOM, Mathew Ingram offered what has by now become a predictable three-step response to these sorts of events or programs, or to most any critique of the Internet or digital devices.

Step 1: Illustrate how similar concerns have always existed. People have felt this way before, etc.*

Step 2: Deconstruct the line between offline and online activity. Online activities are no less “real” or “genuine” than offline activities …**

Step 3: Locate the “real” problem somewhere else altogether. It’s not the Internet, it’s _______________.***

Pivoting on the case of Paul Miller, a writer for The Verge who spent a year sans Internet only to discover that there are also analog forms of wasting time, Ingram concluded,

Is it good to disconnect from time to time? Of course it is. And there’s no question that the pace of modern life has accelerated over the past decade, with so many sources of real-time activity that we feel compelled to participate in, either because our friends and family are there or because our jobs require it. But disconnecting from all of those things isn’t going to magically transform us into better people somehow — all it will do is reveal us as we really are.

But here’s the thing. While there are limits to the malleability of our character and personality, who “we really are” is a work in progress. We are now who we have been becoming. And who we are becoming is, in part, a product of habits and dispositions that arise out of our daily actions and practices, including our digitally mediated activities.

The problem with the three-step strategy outlined above is that it doesn’t really erase the difference between online and offline experience. While it waves a rhetorical hand to that effect, it nevertheless retains the distinction but dismisses the significance of online experience and digital devices.

But online experience and digital devices, precisely because they are “real,” matter. Ingram is right to say that “disconnecting … isn’t going to magically transform us into better people somehow.” But for some people, under certain conditions, it may in fact be an important step in that direction.

Long before the Internet or even Walden, Aristotle taught that the mean between two extremes was the ideal to aim for. So, between the excess of gluttony and the deficiency of ascetic deprivation, there was the ideal use of food for both pleasure and nutrition. This ideal use stood between the two extremes, but it wasn’t simply a matter of splitting the difference. According to Aristotle, the mean was relative: it depended on each person’s particular circumstances.

But the point wasn’t simply to hit some artificial middle ground for its own sake — it was to learn how to use things without being used by them. Or, as Thoreau put it, to learn how not to become the tool of our tools.

That’s what we’re looking for in our relationship to the Internet and our digital devices: the sense that we are using them toward good, healthy, and reasonable ends. Instead, many people feel as if they are being used by their devices. The solution is neither reactionary abstinence, nor thoughtless indulgence. What’s more, there’s no one answer for it that is universally applicable. But for some people at some times, taking extended periods of time away from the Internet may be an important step toward using, rather than being used by it. The three-step rhetorical strategy used to dismiss those who raise questions about our digital practices won’t help at all.

_______________________________________

*Sometimes these historical analogies are misleading, sometimes they are illuminating. Never are they by themselves cause to dismiss present concerns. That would be sort of like saying that people have always been struck by disease, so we shouldn’t really be too worried about it.
**A lot of pixels have already been spent on this one. E.g., here and here.
***I’ve actually written something similar, but the key is not to lose sight of how devices do play a role in the phenomenon.

Relay Failures

Years ago I heard someone say that many arguments would be averted if only we would use the word merely more often. Case in point: I titled my last post “Don’t Be a Relay in the Network.” My point could have been more appropriately stated, “Don’t Be Merely a Relay in the Network.”

The difference is not insignificant. Being a relay in a network is not necessarily problematic. We receive and pass along information, digital and otherwise, all of the time. Problems arise when we function merely as a relay, or, more precisely, when we function merely as a passive relay.

My concern stems from the habits I felt taking shape as a result of my own online reading practices. I found myself reading not for the enjoyment or value of reading, but simply to have read. I owe this formulation to Alan Jacobs, who, in The Pleasures of Reading in an Age of Distraction, observes that we sometimes read simply to be able to claim that we have read something.

Jacobs had in mind the lure of the prestige attached to being known as the sort of person who has read War and Peace or In Search of Lost Time or whatever happens to be trendy at the moment. There’s that, to be sure, but I have in mind the desire to have read that translates into seeing my RSS feed at zero. It is reading motivated by the pressure of keeping up with the digital news feed or by the fear of missing out.

When this sort of reading is coupled with the desire, variously motivated, to share what has been read, then it becomes reading to have shared. And it’s not just reading, of course. All forms of online content are subject to this dynamic. When it drives our experience with online content, we are acting merely as relays in a network. There’s something of Eliot’s “The Hollow Men” in this: we’re shaped by the network, but we have no form of our own. Stuff passes through us, but we remain hollow.

Thoughtless passivity is one of the problems that attends being a mere relay in a network. The pattern of habitual receiving and sharing within the temporal horizons of digital culture tends to preclude the possibility of internalizing the information so that it is appropriated as genuine personal knowledge. Apart from this sort of internalization, which is, in part, a product of time and method, what we are left with is a vague, generic awareness that we have at some point come into contact with some information. “Oh, I think I read about that a few days ago” or “I saw a link to that on Twitter” or “Didn’t someone post that on Facebook not too long ago?”

I would go so far as to suggest that the apathy or inaction in contemporary culture that many lament is partially a function of this kind of ambient awareness that does not quite sink in and become personal knowledge. Involvement and action are a product of personal knowledge. The ambient awareness that comes from functioning as mere relays of information lacks the power to motivate, inspire, outrage, etc.

The other danger is what we could call the conditioned passivity of being merely a relay: the subtle temptation to pass along information that will be well-received by your audience. This is not unlike the “filter bubble” problem, in which our personalized streams of digital information enclose us within an echo chamber that mirrors and reinforces our prejudices and blind spots. The risk of being passively conditioned when acting as a mere relay arises from the temptation to share and disseminate what will resonate or play well with the audience we’ve fashioned for ourselves. The temptation, in other words, is to tacitly bend to the shape of our bubbles or to tune our online voice so as to achieve maximum echo.

None of this necessarily follows from reading online or sharing information through social media; but it is a temptation and it is worth resisting.

Don’t Be a Relay in the Network

Back when the Machine was the dominant technological symbol, a metaphor arose to articulate the fear that individual significance was being sacrificed to large-scale, impersonal social forces: it was the fear of becoming “a cog in the machine.”

The metaphor is in need of an update.

This train of thought (speaking of archaic metaphors) began when I read the following paragraph from Leon Wieseltier’s recent commencement address at Brandeis University:

In the digital universe, knowledge is reduced to the status of information. Who will any longer remember that knowledge is to information as art is to kitsch – that information is the most inferior kind of knowledge, because it is the most external? A great Jewish thinker of the early Middle Ages wondered why God, if He wanted us to know the truth about everything, did not simply tell us the truth about everything. His wise answer was that if we were merely told what we need to know, we would not, strictly speaking, know it. Knowledge can be acquired only over time and only by method.

It was that last phrase that stayed with me: knowledge can be acquired only over time and only by method. I was already in fundamental agreement with Wieseltier’s distinction between information and knowledge, and his prescription of time and method as the path toward knowledge also seemed just about right.

It also seemed quite different from what ordinarily characterized my daily encounter with digital information. For the most part, I’m doing well if I keep on top of all that comes my way each day through a variety of digital channels and then pass along – via this blog, Twitter, FB, or now Tumblr – items that I think are, or ought to be, of interest to the respective audiences on each of those platforms. Blog, reblog. Like, share. Tweet, retweet. Etc., etc., etc.

Read, then discard or pass along. Repeat. That’s my default method. It’s not, I suspect, what Wieseltier had in mind. There is, given the sheer volume of information one takes in, a veneer of learnedness to these habits. But there is, in fact, very little thought involved, or judgment. Time under these circumstances is not experienced as the pre-condition of knowledge; it is rather the enemy of relevance. The meme-cycle, like the news-cycle, is unforgivingly brief. And method – understood as the deliberate, sustained, and, yes, methodical pursuit of deep understanding of a given topic – is likewise out of step with the rhythms of digital information.

Of course, there is nothing about digital technology that demands or necessitates this kind of relationship to information or knowledge. But while it is not demanded or necessitated, it is facilitated and encouraged. It is always easier to attune oneself to the dominant rhythms than it is to serve as the counterpoint. And what the dominant rhythm of digital culture encourages is not that we be cogs in the machine, but rather relays in the network.

We are relays in a massive network of digital information. Information comes to me and I send it out to you and you pass it along to someone else, and so on, day in and day out, moment by moment. In certain circles it might even be put this way: we are neurons within a global mind. But, of course, there is no global mind in any meaningful sense that we should care about. It is a clever, fictive metaphor bandied about by pseudo-mystical techno-utopians.

The minds that matter are yours and mine, and their health requires that we resist the imperatives of digital culture and re-inject time and method into our encounters with information. It begins, I think, with a simple “No” to the impulse to quickly skim-read and either share or discard. Maybe even prior to this, we must also renounce the tacit pressure to keep up with it all (as if that were possible anyway) and the fear of missing out. And this should be followed by a willingness to invest deep attentiveness, further research, and even contemplation over time in those matters that call for it. Needless to say, not all information justifies this sort of cognitive investment. But all of us should be able to transition from the nearly passive reception and transmission of information to genuine knowledge when it is warranted.

At their best, digital technologies offer tremendous resources to the life of the mind, but only if we cultivate the discipline to use these technologies against their own grain.

Et in Facebook ego

In Nicolas Poussin’s mid-seventeenth-century painting, Et in Arcadia ego, shepherds have stumbled upon an ancient tomb on which the titular words are inscribed. Understood to be the voice of death, the Latin phrase may be roughly translated, “Even in Arcadia there am I.” Because Arcadia had come to symbolize a mythic pastoral paradise, the painting suggested the ubiquity of death. To the shepherds, the tomb was a memento mori: a reminder of the inescapability of death.

Nicolas Poussin, Et in Arcadia ego, 1637-38

Poussin was not alone among artists of the period in addressing the certainty of death. During the seventeenth and eighteenth centuries, vanitas art flourished. The designation stems from the Latin phrase vanitas vanitatum omnia vanitas, a recurring refrain throughout the biblical book of Ecclesiastes: “vanity of vanities, all is vanity.” Paintings in the genre were still lifes depicting an assortment of objects representing all that we might pursue in this life. In their midst, however, one would also find a skull and an hourglass: symbols of death and the brevity of life. The idea, of course, was to encourage people to make the most of their living years.

Edwaert Collier, 1690

For the most part, we don’t go in for this sort of thing anymore. Few people, if any, operate under the delusion that we might escape death (excepting the Singularity folks, I guess), but we do a pretty good job of forgetting what we know about death. We keep death out of sight and, hence, out of mind. We’re certainly not going out of our way to remind ourselves of death’s inevitability. And, who knows, maybe that’s for the better. Maybe all of those skulls and hourglasses were morbidly unhealthy. I honestly don’t know.

But while vanitas art has gone out of fashion, a new class of memento mori has emerged: the social media profile.

I’m one of those on-again, off-again Facebook users. Lately, I’ve been on again, and recently I noticed one of those birthday reminders Facebook places in the left-hand column where it puts all of the things Facebook would like you to do and click on. It was for a high school friend whom I had not spoken to in over eight years. It was in that respect a very typical Facebook friendship: the sort that probably wouldn’t exist at all any longer were it not for Facebook. And that’s not necessarily a knock on the platform. I appreciate being able to maintain at least a minimal connection with people I’d once been quite close to. In this case, though, it demonstrated just how weak those ties can be.

Upon clicking over to their profile, I read a few odd birthday notes, and very quickly it became obvious that my high school friend had died over a year ago. It was a shock, of course. It had happened while I was off of Facebook, and news had not reached me by any other channel. But there it was. Out of nowhere and without warning, my browser was haunted by the very real presence of death. Memento mori.

Just a few days prior I logged on to Facebook and was greeted by the tragic news that a former student had unexpectedly passed away. Because we had several mutual connections, photographs of the young man found their way into my news feed for several days. It was odd and disconcerting and terribly sad all at once. I don’t know what I think of social media mourning. It makes me uneasy, but I won’t criticize what might bring others solace. In any case, it is, like death itself, an unavoidable reality of our social media experience. Death is no digital dualist.

Facebook sometimes feels like a modern-day Arcadia. It is a carefully cultivated space in which life appears Edenic. The pictures are beautiful, the events exciting, the faces always smiling, the children always amusing, the couples always adoring. Certain studies even suggest that comparing our own experience to these immaculately curated slices of life leads to envy, discontent, and unhappiness. Naturally … if we assume that these slices of life are comprehensive representations of the lives people lead. Of course, they are not.

But there, alongside the pets and witty status updates and wedding pictures and birth announcements, we will increasingly find our social media platforms haunted by the digital, disembodied presence of the dead.

In that dreary opening chapter of The Scarlet Letter, Hawthorne wrote, “The founders of a new colony, whatever Utopia of human virtue and happiness they might originally project, have invariably recognized it among their earliest practical necessities to allot a portion of the virgin soil as a cemetery, and another portion as the site of a prison.”

And so it is with our digital utopias, our virtual Arcadias.

Et in Facebook ego.

Thinking and Its Rhetorical Enemies

In one short post, Alan Jacobs targeted several Borg Complex symptoms. The post was triggered by his frustration with an xkcd comic which simply strung together a series of concerns about technological developments expressed in the late 19th and early 20th century. The implicit message was abundantly clear: “Wasn’t that silly and misguided? Of course it was. Now stop your complaining about technology today.”

Jacobs raised four salient points in response:

1. “Why do we just assume that their concerns were senseless?”

2. While we may endorse the trade-offs new technologies entail, “it would be ridiculous to say that no trade has been made.”

3. “Moreover, even if people were wrong to fear certain technologies in the past, that says absolutely nothing about whether people who fear certain other technologies today are right or wrong.”

4. This sort of thing presents “an easy excuse not to think about things that need to be thought about.”

Exactly right on all counts.

In partial response to Jacobs’ question, I’d suggest that when living memory of a lost state of affairs also perishes, so too does the existential force of the loss and its plausibility. What we know is that life went on – here we are, after all – and that seems to be the only bright line of consequence. All that is established by this, of course, is that we eventually acclimated to the new state of affairs. That we eventually get used to a state of affairs tells us nothing about its quality or desirability, nor about that of the state of affairs it displaced. To assume that it does is a future-tense extension of the naturalistic fallacy: simply because something comes to be the case, it does not follow that it ought to be the case.

The second point above recalls Neil Postman’s discussion of (yes, you guessed it) Phaedrus, Plato’s famous dialog in which Socrates tells the story of Thamus and Theuth. The god Theuth presents Thamus, king of Egypt, with a number of inventions including writing. Theuth is understandably excited about his creations, but Thamus is less sanguine. He warns that writing, among other things, will destroy memory. Learning to cite this story and dismiss it scornfully must be the first thing they teach you in tech-punditry school. But, as Jacobs points out, Thamus was not wrong. Here is Postman’s take:

“[Thamus’ error] is not in his claim that writing will damage memory and create false wisdom. It is demonstrable that writing has had such an effect. Thamus’ error is in his believing that writing will be a burden to society and nothing but a burden. For all his wisdom, he fails to imagine what writing’s benefits might be, which, as we know, have been considerable.”

“Every technology,” Postman goes on to say, “is both a burden and a blessing; not either-or, but this-and-that.” Those who see only blessing Postman labels “zealous Theuths, one-eyed prophets who see only what new technologies can do and are incapable of imagining what they will undo.” Postman grants, of course, that there are also one-eyed prophets who speak only of the burdens of technology. It is best, then, to open both eyes.

Jacobs’ third point reminds us that the one-eyed prophets of technological blessing, those who dismiss the silly fears of previous generations, take Chicken Little as their normative story: the sky never, ever falls. As I’ve written before, the tale of the boy who cried wolf serves better. Even if earlier alarms proved false, it does not follow that the wolf never comes.

Finally, it is the fourth point that bears reiterating most emphatically. We need to think more, not less. It is that simple. There are many problems with Borg Complex rhetoric; that it undermines thinking and judgment may be the most disturbing and damaging.