The Road Makes All the Difference

“There is a subtle but fundamental difference between finding direction … and ambiently knowing direction.” This was Evgeny Morozov’s recent encapsulation of Tristan Gooley’s The Natural Navigator. According to Gooley, natural navigation, that is, navigation by the signs nature yields, is a dying art, and its passing constitutes a genuine loss.

Even if you’ve not read The Natural Navigator, or Morozov’s review, your thoughts have probably already wandered toward the GPS devices we rely on to help us find direction. It’s a commonplace to note how GPS has changed our relationship to travel and place. In “GPS and the End of the Road,” Ari Schulman writes of the GPS user: “he checks first with the device to find out where he is, and only second with the place in front of him to find out what here is.” One may even be tempted to conclude that GPS has done to our awareness of place what cell phones did to our recall of phone numbers.

(Of course, this is not a brief against GPS, much less against maps and compasses and street signs. These have their place. And there are a number of other qualifications that could be offered, but I’ll trust to your generosity as readers and assume that these are understood.)

From the perspective of natural navigation, however, GPS is just one of many technologies designed to help us find our way that simultaneously undermine the possibility that we might also come to know our way or, we might add, our place. Navigational devices, after all, enter into our phenomenological experience of place and do so in a way that is not without consequence.

When I plug an address into a GPS device, I expect one thing: an efficient and unambiguous set of directions to get me from where I am to where I want to go. My attention is distributed between the device and the features of the place. The journey is eclipsed; the places in between become merely space traversed. In this respect, GPS becomes a sign of an age that struggles to consider anything but the accomplishment of ends, with little regard for the means by which those ends are accomplished. We seem to forget, in other words, that there are goods attending the particular path by which a goal is pursued that are independent of the accomplishment of that goal.

It’s a frame of mind demanded, or to put it in less tech-deterministic terms, encouraged by myths of technological progress. If I am to enthusiastically embrace every new technology, I must first come to believe that means or media matter little so long as the end is accomplished. A letter is a call is an email is a text message. Consider this late-nineteenth-century French cartoon imagining the year 2000 (via Explore):

From our vantage point, of course, this seems silly … but only in execution, not intent. Our more sophisticated dream replaces the odd contraption with the Google chip. Both err in believing that education is reducible to the transfer of data. The means are inconsequential and interchangeable, the end is all that matters, and it is a vastly diminished end at that.

Natural navigation and GPS may both get us where we want to go, but how they accomplish this end makes all the difference. With the former, we come to know the place as place and are drawn into more intimate relationship with it. We become more attentive to the particulars of our environment. We might even find that a certain affection insinuates itself into our experience of the place. Affective aspects of our being are awakened in response. When a new technology promises to deliver greater efficiency by eliminating some heretofore essential element of human involvement, I would bet that it also effects an analogous alienation. In such cases, then, we have been carelessly trading away rich and rewarding aspects of human experience.

Sometimes it is the road itself that makes all the difference.

What Would Thoreau Do?

Yesterday, July 12th, was Henry David Thoreau’s 195th birthday, or 195th anniversary of his birth, or however that is best put when the person in question is no longer alive. In any case, Thoreau is best remembered for two things. The first is his experiment in living simply and in greater communion with nature in a cabin on the outskirts of Concord, Massachusetts. The cabin was situated on Walden Pond and Thoreau’s reflections on his “experiment” were later published as Walden.

Thoreau is also remembered for making a better pencil. It seems that Thoreau is actually not generally remembered for this, but it is nonetheless true. His family owned a pencil factory at which Thoreau worked on and off throughout his life. Thanks to his study of German pencil making techniques, Thoreau helped design the best American pencil of its day. Apparently, in the early 19th century, there remained significant technical challenges to the making of a durable pencil, mostly having to do with the sturdiness of the graphite shaft and fitting it into a casing. Among Thoreau’s many accomplishments was the development of a process of manufacturing the pencil that solved these engineering problems.

I thought of Thoreau yesterday not only because it was the anniversary of his birth, but also because I had come across an article titled, “Tweets From the Trail: Technology Can Enhance Your Wilderness Experiences” (h/t to Nathan Jurgenson). The author, novelist Walter Kirn of Montana, had the temerity to suggest that maybe there is something to be gained by bringing your technology out into nature with you, rather than venturing into nature in order to escape technology. As you might imagine, many of Kirn’s Montana nature-enthusiast friends were less than pleased.

Now, we should note that these distinctions we make — nature/technology, for example — are a bit complicated. To illustrate, here is the opening of a recent, relevant post by Nick Carr:

A couple of cavemen are walking through the woods. One sighs happily and says to the other, “I’m telling you, there’s nothing like being out in nature.” The other pauses and says, “What’s nature?”

It’s 1972. A pair of lovers go camping in a wilderness area in a national park. They’re sitting by a campfire, taking in the evening breezes. “Honey,” says the woman, “I have to confess I really love being offline.” The guy looks at her and says, “What’s offline?”

You see the point. Our idea of “nature” owes something to the advance of technology just as our idea of “offline” presupposes the emergence of the online. But back to Kirn’s article. He discovered that his writing flourished when he set up a work station on an old wooden telephone wire spool under the big, blue Montana sky with badgers and gophers scampering all about. Subsequently he made a habit of screening movies on his iPad in “natural” settings such as the seaside or the shores of a river. Finally, he confesses to the manner in which being out in the wilderness inspires fits of creativity that he feels compelled to tweet and post. And here is his eloquent conclusion:

“To sever our experience of wilderness from our use of technology now seems to me an unnatural act, shortsighted and unimaginative. No one appreciates a ringing cell phone while they float on a muddy river through western badlands or stand in the saddle between two massive mountain ranges, but short of such rude interruptions of heavenly moments, technology has a mysterious way, at times, of providing the perfect contrast, the happy counterpoint to scenes and experiences and settings that are easy to take for granted or grow numb to. Along with harmony, contrast is one of the two great rules of art. It wakes the senses, jars the tired mind, breaks up routines that threaten to grow mechanical. If you don’t believe me, try it. Travel to that secluded spot you keep returning to, the one where you go to leave the world behind, and turn on some music, play a movie, capture a passing thought and send it onward, out of the forest, out into society, and then wait, while the wind blows and the treetops sway and the clouds pile up a mile above your head, for someone, some faraway stranger, to reply. Even when we’re alone, we’re not alone, this proves, and in the deepest heart of every wilderness lurks a miracle, often, the human mind.”

I can’t help but wonder: what would Thoreau think? I can’t pretend to know Thoreau well enough to answer that question. I suspect that present-day technophiles would suggest that Thoreau ought to approve; after all, he took his pencil to Walden, and that was a technology. Well, yes, but he didn’t string a telegraph wire to the cabin.

I wouldn’t discount the dynamic Kirn describes, particularly since it is measured (let’s do without the ringing cell phone) and it still recognizes the contrast. The juxtaposition of unlike things can be creatively stimulating, and if that is what you are after, then Kirn’s formula may indeed yield something for you.

But what if your aims are different? What if you’re seeking only to listen and not to speak? What if your goal is not to be inspired toward yet another act of self-expression? We may carry technology with us into nature, in fact, we may carry it within us. But this does not mean that we ought always to answer to its prerogatives. Nor does it mean that we should always assume the posture toward reality that technology enables and the frame of mind that it encourages. And, of course, different technologies enable and encourage differently. It is the difference between the pencil and the telegraph and the smartphone.

I am not against human civilization (which is a silly thing to have to say), and the human mind, as Kirn puts it, is a “miracle” indeed. But the miracle of the human mind lies not only in its ability to create and to build and to express itself and impose its own symbolic order on the world. The miracle lies also in its ability to listen and to receive and to contemplate and to be itself re-ordered; to be taken in by the world as well as to take the world in. Perceiving the value of such a stance draws us into an awareness of the various ethical or philosophical frames that inform our evaluations. I cannot sort all of those out, but I can acknowledge that for a wide array of people the point would not be to speak, but to be spoken to. Or perhaps, even to find that we are not addressed at all.

An even greater array of people would likely agree that our posture toward this world ought to be more than merely instrumental. Human civilization must advance, but it does so best when it abandons Promethean aspirations and acknowledges its finitude along with its power.

I suppose all of this is a way of saying that beauty resides not only in what we make and say, but also in what we find and encounter. But shouldn’t this found beauty be shared? Maybe. But perhaps not before it has done its work on us. Perhaps not before we have allowed it to speak to us and to transform us. The space in which beauty can do its work is precious, and it would seem that the logic of our technologies would have us collapse that space in the service of sharing, commodification, self-expression, capturing, publicizing, and the like.

I don’t want to speak for Thoreau, but I would venture to guess that he might have us preserve that precious space where beauty has its way.

“Is the Internet Driving Us Mad?”: Attempting a Sane Response

It’s Newsweek’s turn. Perhaps feeling a bit of title envy, the magazine has shamelessly ripped off The Atlantic’s formula with a new cover story alternatively titled “Is the Internet Driving Us Mad?” or “Is the Internet Making Us Crazy?”

You can probably guess what follows already. Start with a rather dramatic and telltale anecdote — in this case Jason Russell’s “reactive psychosis” in the wake of his “Kony 2012” fame — and proceed to cite a number of studies and further anecdotes painting a worrisome picture that appears somehow correlated to heavy Internet use. You’re also likely to guess that Sherry Turkle is featured prominently. For the record, I intend no slight by that last observation; I rather appreciate Turkle’s work.

Before the day is out, this article will be “Liked” and tweeted thousands of times, I’m sure. It will also be torn apart relentlessly and ridiculed. Unfortunately, the title will be responsible for both responses. A more measured title would likely have elicited a more sympathetic reading, but also less traffic.

There is much in the article that strikes an alarmist note and many of the anecdotes, while no doubt instances of real human suffering, seem to describe behaviors that are not characteristic of most people using the Internet. I’m also not one to go in for explanations that localize this or that behavior in this or that region of the brain. I’m not qualified to evaluate such conclusions, but they strike me as a tad reductionistic. That said, this does ring true:

“We may appear to be choosing to use this technology, but in fact we are being dragged to it by the potential of short-term rewards. Every ping could be social, sexual, or professional opportunity, and we get a mini-reward, a squirt of dopamine, for answering the bell. ‘These rewards serve as jolts of energy that recharge the compulsion engine, much like the frisson a gambler receives as a new card hits the table,’ MIT media scholar Judith Donath recently told Scientific American. ‘Cumulatively, the effect is potent and hard to resist.’”

Of course, it all hinges on correlation and causation. The article itself suggests as much right at the end. “So what do we do about it?” the author, Tony Dokoupil, asks. “Some would say nothing,” he continues, “since even the best research is tangled in the timeless conundrum of what comes first.” He adds:

“Does the medium break normal people with its unrelenting presence, endless distractions, and threat of public ridicule for missteps? Or does it attract broken souls?

“But in a way, it doesn’t matter whether our digital intensity is causing mental illness, or simply encouraging it along, as long as people are suffering.”

Dokoupil concludes on a note of fateful optimism: “The Internet is still ours to shape. Our minds are in the balance.” In truth, whatever we make of the neuroscience involved, there is something to be said for the acknowledgement that we have choices to make about our relationship to the Internet.

After we cut through the hype and debates about causation, it would seem that there is a rather commonsensical way of approaching all of this.

If the article resonates with you because it seems to describe your experience, then it is probably worth taking it as a warning of sorts and an invitation to make some changes. So, for example, if you are sitting down to play video games for multiple hours without interruption and thus putting yourself in danger of developing a fatal blood clot … you may want to rethink your gaming habits.

If you are immediately reaching for your Internet device of choice before you have so much as opened your eyes in the morning, you may want to consider allowing yourself a little time before diving into the stream. Pour some coffee, have a nice breakfast. Give yourself a little time with your thoughts, or your loved ones. If you find that your thoughts are all incessantly begging you to get online and your loved ones’ faces are turning disconcertingly into Facebook icons, then perhaps you have all the more reason to restore a little balance to your Internet use.

If you take an honest inventory of your emotions and feelings, and you find that you’re anchoring a great deal of your self-worth on the replies you get to your status updates, tweets, or blog posts, then perhaps, just maybe, you might want to consider doing a little soul searching and re-prioritizing.

If “‘the computer is like electronic cocaine,’ fueling cycles of mania followed by depressive stretches” somehow resonates with your own experience, then it may be time to talk to your friends about getting a better grip on the way you’re ordering your life.

None of this is rocket science (or neuroscience). I’d like to think that most of us have a pretty good sense of when certain activities are tending toward the counterproductive and unhealthy end of the spectrum. There’s no need to get all apocalyptic about it; we can leave that to the media. Instead, take inventory of what is truly important — you know, the same old sappy but precious and indispensable stuff — and then ask yourself how your Internet use relates to these and take action accordingly. If you find that you are unable to implement the action you know you need to take, then certainly get some help.

In Search of the Real

While advancing age is no guarantee of advancing self-knowledge, I have found that growing up a bit can be enlightening. Looking back, it now seems pretty clear to me that I have always been temperamentally Arcadian – and I’m grateful to W. H. Auden for helping me come to this self-diagnosis. In the late 1940s, Auden wrote an essay distinguishing the Arcadian and Utopian personalities. The former looks instinctively to the past for truth, goodness, and beauty; the latter searches for those same things in the unrealized future.

Along with Auden, but in much less distinguished fashion, I am an Arcadian; there is little use denying it. When I was on the cusp of adolescence, I distinctly recall lamenting with my cousin the passing of what we called the “good old days.” Believe it; it is sadly true. The “good old days” incidentally were the summer vacations we enjoyed not more than two or three years earlier. If I am not careful, I risk writing the grocery list elegiacally. I believe, in fact, that my first word was a sigh. This last is not true, alas, but it would not have been out of character.

So you can see that this presents a problem of sorts for someone who writes about technology. The temptation to criticize is ever present and often difficult to resist. With so many Utopians about, one can hardly be blamed. In truth, though, there are plenty of Arcadians about as well. The Arcadian is the critic of technology, the one whose first instinct is to mourn what is lost rather than celebrate what is gained. It is with this crowd that I instinctively run. They are my kindred spirits.

But Auden knew enough to turn his critical powers upon his own Arcadianism. As Alan Jacobs put it in his Introduction to Auden’s “The Age of Anxiety,” “Arcadianism may have contributed much to Auden’s mirror, but he knew that it had its own way of warping reflections.” And so do I, at least in my better moments.

I acknowledge my Arcadianism by way of self-disclosure leading into a discussion of Nathan Jurgenson’s provocative essay in The New Inquiry, “The IRL Fetish.” IRL here stands for “in real life,” offline experience as opposed to the online or virtual, and Jurgenson takes aim at those who fetishize offline experience. I can’t be certain if he had Marx, Freud, or Lacan in view when he chose to describe the obsession with offline experience as a fetish. I suspect it was simply a rather suggestive term that connoted something of the irrational and esoteric. But it does seem clear that he views this obsession/fetish as woefully misguided at best and this because it is built on an erroneous conceptualization of the relationship between the online and the offline.

The first part of Jurgenson’s piece describes the state of affairs that has given rise to the IRL Fetish. It is an incisive diagnosis written with verve. He captures the degree to which the digital has penetrated our experience with clarity and vigor. Here is a sampling:

“Hanging out with friends and family increasingly means also hanging out with their technology. While eating, defecating, or resting in our beds, we are rubbing on our glowing rectangles, seemingly lost within the infostream.” [There is more than one potentially Freudian theme running through this piece.]

“The power of ‘social’ is not just a matter of the time we’re spending checking apps, nor is it the data that for-profit media companies are gathering; it’s also that the logic of the sites has burrowed far into our consciousness.”

“Twitter lips and Instagram eyes: Social media is part of ourselves; the Facebook source code becomes our own code.”

True. True. And, true.

From here Jurgenson sums up the “predictable” response from critics: “the masses have traded real connection for the virtual,” “human friends, for Facebook friends.” Laments are sounded for “the loss of a sense of disconnection,” “boredom,” and “sensory peace.” The equally predictable solution, then, is to log off and re-engage the “real” world.

Now it does not seem to me that Jurgenson thinks this is necessarily bad counsel as far as it goes. He acknowledges that “many of us, indeed, have been quite happy to occasionally log-off …” The real problem, according to Jurgenson, the thing that is “new” in the chorus of critics’ voices, is arrogant self-righteousness. Those are my words, but I think they do justice to Jurgenson’s evaluation. “Immense self-satisfaction,” “patting ourselves on the back,” boasting, “self-congratulatory consensus,” constructing “their own personal time-outs as more special” – these are his words.

This is a point I think some of Jurgenson’s critics have overlooked. At this juncture, his complaint is targeted rather precisely, at least as I read it, at the self-righteousness implicit in certain valorizations of the offline. Now, of course, deciding who is in fact guilty of self-righteous arrogance may involve making judgment calls that more often than not necessitate access to a person’s opaque intentions, and there is, as of yet, no app for that. (Please don’t tell me if there is.) But, insofar as we are able to reasonably identify the attitudes Jurgenson takes to task, then there is nothing particularly controversial about calling them out.

In the last third of the essay, Jurgenson pivots on the following question: “How have we come to make the error of collectively mourning the loss of that which is proliferating?” Response: “In great part, the reason is that we have been taught to mistakenly view online as meaning not offline.”

At this point, I do want to register a few reservations. Let me begin with the question above and the claim that “offline experience” is proliferating. What I suspect Jurgenson means here is that awareness of offline experience and a certain posture toward offline experience is proliferating. And this does seem to be the case. Semantically, it would have to be. The notion of the offline as “real” depends on the notion of the online; it would not have emerged apart from the advent of the online. The online and the offline are mutually constitutive as concepts; as one advances, the other follows.

It remains the case, however, that “offline,” only recently constituted as a concept, describes an experience that paradoxically recedes as it comes into view. Consequently, Jurgenson’s later assertion – “There was and is no offline … it has always been a phantom.” – is only partially true. In the sense that there was no concept of the offline apart from the online and that the online, once it appears, always penetrates the offline, then yes, it is true enough. However, this does not negate the fact that while there was no concept of the offline prior to the appearance of the online, there did exist a form of life that we can retrospectively label as offline. There was, therefore, an offline (even if it wasn’t known as such) experience realized in the past against which present online/offline experience can be compared.

What the comparison reveals is that a form of consciousness, a mode of human experience is being lost. It is not unreasonable to mourn its passing, and perhaps even to resist it. It seems to me that Jurgenson would not necessarily be opposed to this sort of rear-guard action if it were carried out without an attendant self-righteousness or aura of smug superiority. But he does appear to be claiming that there is no need for such rear-guard actions because, in fact, offline experience is as prominent and vital as it ever was. Here is a representative passage:

“Nothing has contributed more to our collective appreciation for being logged off and technologically disconnected than the very technologies of connection. The ease of digital distraction has made us appreciate solitude with a new intensity. We savor being face-to-face with a small group of friends or family in one place and one time far more thanks to the digital sociality that so fluidly rearranges the rules of time and space. In short, we’ve never cherished being alone, valued introspection, and treasured information disconnection more than we do now.”

It is one thing, however, to value a kind of experience, and quite another to actually experience it. It seems to me, in fact, that one portion of Jurgenson’s argument may undercut the other. Here are his two central claims, as I understand them:

1. Offline experience is proliferating; we enjoy it more than ever before.

2. Online experience permeates offline experience; the distinction is untenable.

But if the online now permeates the offline – and I think Jurgenson is right about this – then it cannot also be the case that offline experience is proliferating. The confusion lies in failing to distinguish between “offline” as a concept that emerges only after the online appears, and “offline” as a mode of experience unrecognized as such that predates the online. Let us call the former the theoretical offline and the latter the absolute offline.

Given the validity of claim 2 above, then claim 1 only holds for the theoretical offline not the absolute offline. And it is the passing of the absolute offline that critics mourn. The theoretical offline makes for a poor substitute.

The real strength of Jurgenson’s piece lies in his description of the immense interpenetration of the digital and material (another binary that does not quite hold up, actually). According to Jurgenson, “Smartphones and their symbiotic social media give us a surfeit of options to tell the truth about who we are and what we are doing, and an audience for it all, reshaping norms around mass exhibitionism and voyeurism.” To put it this way is to mark the emergence of a ubiquitous, unavoidable self-consciousness.

I would not say as Jurgenson does at one point, “Facebook is real life.” The point, of course, is that every aspect of life is real. There is no non-being in being. Perhaps it is better to speak of the real not as the opposite of the virtual, but as that which is beyond our manipulation, what cannot be otherwise. In this sense, the pervasive self-consciousness that emerges alongside the socially keyed online is the real. It is like an incontrovertible law that cannot be broken. It is a law haunted by the loss its appearance announces, and it has no power to remedy that loss. It is a law without a gospel.

Once self-consciousness takes its place as the incontrovertibly real, it paradoxically generates a search for something other than itself, something more real. This is perhaps the source of what Jurgenson has called the IRL fetish, and in this sense it has something in common with the Marxian and Freudian fetish: it does not know what it seeks. The disconnection, the unplugging, the logging off are pursued as if they were the sought-after object. But they are not. The true object of desire is a state of pre-digital innocence that, like all states of innocence, once lost can never be recovered.

Perhaps I spoke better than I knew when I was a child, of those pleasant summers. After all, I am of that generation for which the passing from childhood into adulthood roughly coincided with the passage into the Digital Age. There is a metaphor in that observation. To pass from childhood into adulthood is to come into self-awareness, it is to leave naivety and innocence behind. The passage into the Digital Age is also a coming into a pervasive form of self-awareness that now precludes the possibility of naïve experience.

All in all, it would seem that I have stumbled into my Arcadianism yet again.

The Simple Life in the Digital Age

America has always been a land of contradictions. At the very least we could say the nation’s history has featured the sometimes creative, sometimes destructive interplay of certain tensions. At least one of these tensions can be traced right back to the earliest European settlers. In New England, Puritans established a “city on a hill,” a community ordered around the realization of a spiritual ideal. Further south came adventurers, hustlers, and entrepreneurs looking to make their fortune. God and Gold, to borrow the title of Walter R. Mead’s account of the Anglo-American contribution to the formation of the modern world, sums it up nicely. Of course, this is also a rather ancient opposition. But perhaps we could say that never before had these two strands come together in quite the same way to form the double helix of a nation’s DNA.

This tension between spirituality and materialism also overlaps with at least two other tensions that have characterized American culture from its earliest days: The first of these, the tension between communitarianism and individualism, is easy to name. The other, though readily discernible, is a little harder to capture. For now I’m going to label this pair hustle and contemplation and hope that it conveys the dynamic well enough. Think Babbitt and Thoreau.

These pairs simplify a great deal of complexity, and of course they are merely abstractions. In reality, the oppositions are interwoven and mutually dependent. But thus qualified, they nonetheless point to recurring and influential types within American culture. These types, however, have not been balanced and equal. There has always seemed to be a dominant partner in each pairing: materialism, individualism, and hustle. But it would be a mistake to underestimate the influence of spirituality, communitarianism, and contemplation. Perhaps it is best to view them as the counterpoint to the main theme of American culture, together creating the harmony of the whole.

One way of nicely summing up all that is entailed by the counterpoints is to call it the pursuit of the simple life. The phrase sounds quaint, but it worked remarkably well in the hands of historian David E. Shi. In 1985, right in the middle of the decade that was to become synonymous with crass materialism – the same year Madonna released “Material Girl” – Shi published The Simple Life: Plain Living and High Thinking in American Culture. The audacity!

Shi weaves a variegated tapestry of individuals and groups that have advocated the simple life in one form or another throughout American history. Even though he purposely leaves out the Amish, Mennonites, and similar communities, he still is left with a long and diverse list of practitioners. Altogether they represent a wide array of motives animating the quest for the simple life. These include: “a hostility toward luxury and a suspicion of riches, a reverence for nature and a preference for rural over urban ways of life and work, a desire for personal self-reliance through frugality and diligence, a nostalgia for the past and a scepticism toward the claims of modernity, conscientious rather than conspicuous consumption, and an aesthetic taste for the plain and functional.”

This net gathers together Puritans and Quakers, Jeffersonians and Transcendentalists, Agrarians and Hippies, and many more. Perhaps if Shi were to update his work he might include hipsters in the mix. In any case, he would have no shortage of contemporary trends and movements to choose from. None of them dominant, of course, but recognizable and significant counterpoints still.

If I were tasked with updating Shi’s book, for example, I would certainly include a chapter on the critics of the digital age. Not all such critics would fit neatly into the simple life tradition, but I do think a good many would – particularly those who are concerned that the pace and rhythm of digitally augmented life crowds out solitude, silence, and reflection. Think, for example, of the many “slow” movements and advocates (myself included) of digital sabbaths. They would comfortably take their place alongside many of the individuals and movements in Shi’s account who have taken the personal and social consequences of technological advance as their foil. Thoreau is only the most famous example.

Setting present-day critics of digital life in the tradition identified by Shi has a few advantages. For one thing, it reminds us that the challenges posed by digital technologies, while having their particularities, are not entirely novel in character. Long before the dawn of the digital age, individuals struggled to find the right balance between their ideals for the good life and the possibilities and demands created by the emergence of new technologies.

Moreover, we may readily and fruitfully apply some of Shi’s conclusions about the simple life tradition to the contemporary criticisms of life in the digital age.

First, the simple life has always been a minority ethic. “Many Americans have not wanted to lead simple lives,” Shi observes, “and not wanting to is the best reason for not doing so.” But, in his view, this does not diminish the salutary leavening effect of the few on the culture at large.

Yet, Shi concedes, “Proponents of the simple life have frequently been overly nostalgic about the quality of life in olden times, narrowly anti-urban in outlook, and too disdainful of the benefits of prosperity and technology.” Better to embrace the wisdom of Lewis Mumford, “one of the sanest of all the simplifiers” in Shi’s estimation. According to Mumford,

“It is not enough to say, as Rousseau once did, that one has only to reverse all current practice to be right … If our new philosophy is well-grounded we shall not merely react against the ‘air-conditioned nightmare’ of our present culture; we shall also carry into the future many elements of quality that this culture actually embraces.”

Sound advice indeed.

If we are tempted to dismiss the critics for their inconsistencies, however, Shi would have us think again: “When sceptics have had their say, the fact remains that there have been many who have demonstrated that enlightened self-restraint can provide a sensible approach to living that can be fruitfully applied in any era.”

But it is important to remember that the simple life at its best, now as ever, requires a person “willing it for themselves.” Impositions of the simple life will not do. In fact, they are often counterproductive and even destructive. That said, I would add, though Shi does not make this point in his conclusion, that the simple life is perhaps best sustained within a community of practice.

Wisely, Shi also observes, “Simplicity is more aesthetic than ascetic in its approach to good living.” Consequently, it is difficult to lay down precise guidelines for the simple life, digital or otherwise. Moderation takes many forms. And so individuals must deliberately order their priorities “so as to distinguish between the necessary and superfluous, useful and wasteful, beautiful and vulgar,” but no one such ordering will be universally applicable.

Finally, Shi’s hopeful reading of the possibilities offered by the pursuit of the simple life remains resonant:

“And for those with the will to believe in the possibility of the simple life and act accordingly, the rewards can be great. Practitioners can gradually wrest control of their own lives from the manipulative demands of the marketplace and the workplace … Properly interpreted, such a modern simple life informed by its historical tradition can be both socially constructive and personally gratifying.”

Nathan Jurgenson has recently noted that criticisms of digital technologies are often built upon false dichotomies and a lack of historical perspective. In this respect they are no different from criticisms advanced by advocates of the simple life, who were tempted by similar errors. Ultimately, this will not do. Our thinking needs to be well-informed and clear-sighted, and the historical context Shi provides certainly moves us toward that end. At the very least, it reminds us that the quest for simplicity in the digital age had its analog precursors, from which we stand to learn a few things.