What Could Go Right?

Critic and humorist Joe Queenan took aim at the Internet of Things in this weekend’s Wall Street Journal. It’s a mildly entertaining consideration of what could go wrong when our appliances, devices, and online accounts are all networked together. For example:

“If the wireless subwoofers are linked to the voice-activated oven, which is linked to the Lexus, which is linked to the PC’s external drive, then hackers in Moscow could easily break in through your kid’s PlayStation and clean out your 401(k). The same is true if the snowblower is linked to the smoke detector, which is linked to the laptop, which is linked to your cash-strapped grandma’s bank account. A castle is only as strong as its weakest portcullis.”

He goes on to imagine hackers reprogramming your smart refrigerator to order “thousands of gallons of banana-flavored soy milk every week,” or your music library to play only “Il Divo, Il Divo, Il Divo, 24 hours a day.” Queenan gives readers a few more of these humorously intoned, marginally plausible scenarios that, with a light touch, point to some of the ways the Internet of Things could go wrong.

In any case, after reading Queenan’s playful lampoon of the Internet of Things, it occurred to me that more often than not our worries about new technology center on the question, “What could go wrong?” In fact, we often ask that sarcastically to suggest that some new technology is obviously fraught with risk. For instance: Geoengineering. Global-scale interventions in the delicate, imperfectly understood workings of the earth’s climate with potentially massive and irreversible consequences … what could go wrong?

Of course, this is a perfectly reasonable question to ask. We ask it, and engineers and technologists respond by assuring us that safety measures are in place, contingencies have been accounted for, precautions have been taken, etc. Or, alternatively, that the risks of doing nothing are greater than the risks of proceeding with some technological project. In other words, asking what could go wrong tends to lock us in the technocratic frame of mind. It invites cost/benefit analysis, rational planning, technological fixes to technological problems, all mixed through and through with sprinklings or heaps of hubris.

Very often, despite some initial failures and, one hopes, not-too-tragic accidents, the kinks do get worked out, disasters are averted (or mostly so), and the new technology stabilizes. The voices of critics who worried about what could go wrong suddenly sound a lot like a chorus of boys crying wolf. Enthusiasts wipe the sweat from their brows, take a deep breath, and confidently proclaim, “I told you so.”

All well and good. There’s only one problem. Maybe asking “What could go wrong?” is a short-sighted way of thinking about new technologies. Maybe we should also be asking, “What could go right?”

What if this new technology worked just as advertised? What if it became a barely-noticed feature of our technological landscape? What if it was seamlessly integrated into our social life? What if it delivered on its promise?

Accidents and disasters get our attention; their possibility makes us anxious. The more spectacular the promise of a new technology, the more nervous we might be about what could go wrong. But if we are focused exclusively on the accident, we lose sight of the fact that the most consequential technologies are usually those that end up working. They are the ones that reorder our lives, reframe our experience, restructure our social lives, recalibrate our sense of time and place. Etc.

In his recent review of Jordan Ellenberg’s How Not to Be Wrong: The Power of Mathematical Thinking (a title with a mildly hubristic ring, to be sure), Peter Pesic opens with an anecdote about problem solving during World War II. Given the trade-offs involved in placing extra armor on fighter planes and bombers–increased weight, decreased range–where should military airplanes be reinforced? Noticing that returning planes had more bullet holes in the fuselage than in the engine, some suggested reinforcing the fuselage. There was one, seemingly obvious, problem with this line of thinking. As the mathematician Abraham Wald noted, this solution ignored the planes that didn’t make it back, most likely because they had been shot in the engine.

This little anecdote–from what seems like a fascinating book, by the way–reminds us that where you look sometimes makes all the difference. A truism, certainly, but no less true because of it. If in thinking about new technologies (or those old ones, which are no less consequential for having lost the radiance of novelty) we look only at the potential accident, then we may miss what matters most.
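As an aside, Wald’s point is easy to make concrete with a toy simulation. What follows is only a sketch with invented numbers, not anything drawn from Ellenberg’s book: assume each plane takes a single hit, either to the engine or to the fuselage, and that engine hits are far more likely to bring a plane down.

```python
import random

# Toy model of survivorship bias, after Wald's insight.
# The hit zones and loss rates below are invented for illustration.
N = 100_000
P_DOWN = {"engine": 0.8, "fuselage": 0.1}  # assumed chance a hit downs the plane

returned = {"engine": 0, "fuselage": 0}
lost = {"engine": 0, "fuselage": 0}

for _ in range(N):
    zone = random.choice(["engine", "fuselage"])  # each plane takes one hit
    if random.random() < P_DOWN[zone]:
        lost[zone] += 1        # shot down; nobody inspects this plane
    else:
        returned[zone] += 1    # lands safely; its holes get counted

print("hits counted on returning planes:", returned)
print("hits on planes that never came back:", lost)
```

Run it and fuselage hits dominate among the survivors by more than four to one, even though the engine is the deadlier place to be hit; counting only the planes that make it home inverts the true vulnerability.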

As more than a few critics have noted over the years, our thinking about technology is often already compromised by a technocratic frame of mind. We are, in such cases, already evaluating technology on its own terms. What we need, then, is to recover ways of thinking that don’t already assume technological standards. Admittedly, this can be a challenging project. It requires breaking long-ingrained habits of thought–habits of thought which are all the more difficult to escape because they take on the cast of common sense. My point here is to suggest that one step in that direction is to let go of the assumption that any well-working, smoothly operating technology is ipso facto a good and unproblematic technology.

Our Little Apocalypses

An incoming link to my synopsis of Melvin Kranzberg’s Six Laws of Technology alerted me to a short post on Quartz about a new book by an author named Michael Harris. The book, The End of Absence: Reclaiming What We’ve Lost in a World of Constant Connection, explores the tradeoffs induced by the advent of the Internet. Having not read the book, I obviously can’t say much about it, but I was intrigued by one angle Harris takes that comes across in the Quartz piece.

Harris’s book is focused on the generation, a fuzzy category to be sure, that came of age just before the Internet exploded onto the scene in the early 90s. Here’s Harris:

“If you were born before 1985, then you know what life is like both with the internet and without. You are making the pilgrimage from Before to After.”

“If we’re the last people in history to know life before the internet, we are also the only ones who will ever speak, as it were, both languages. We are the only fluent translators of Before and After.”

It would be interesting to read what Harris does with this framing. In any case, it’s something I’ve thought about often. This is my fifteenth year teaching. Over the years I’ve noticed, with each new class, how the world that I knew as a child and as a young adult recedes further and further into the murky past. As you might guess, digital technology has been one of the most telling indicators.

Except for a brief flirtation with Prodigy on an MS-DOS machine with a monochrome screen, the Internet did not come into my life until I was a freshman in college. I’m one of those people Harris is writing about, one of the Last Generation to know life before the Internet. Putting it that way threatens to steer us into a rather unseemly romanticism, and, knowing that I’m temperamentally drawn to dying lights, I want to make sure I don’t give way to it. That said, it does seem to me that those who’ve known the Before and After, as Harris puts it, are in a unique position to evaluate the changes. Experience, after all, is irreducible and incommunicable.

One of the recurring rhetorical tropes that I’ve listed as a Borg Complex symptom goes like this: note that every new technology elicits criticism and evokes fear, observe that society always survives the so-called moral panic or techno-panic, and then conclude, QED, that such critiques and fears, including those presently being expressed, are always misguided and overblown. It’s a pattern of thought I’ve complained about more than once. In fact, it features as the tenth of my unsolicited points of advice to tech writers.

Now while it is true, as Adam Thierer has noted here, that we should try to understand how societies and individuals have come to cope with or otherwise integrate new technologies, it is not the case that such negotiated settlements are always unalloyed goods for society or for individuals. Moreover, this line of argument is compelling only to the degree that living memory of what has been displaced has been lost. I may know at an intellectual level what has been lost, because I read about it in a book for example, but it is another thing altogether to have felt that loss. We move on, in other words, because we forget the losses, or, more to the point, because we never knew or experienced the losses for ourselves–they were always someone else’s problem.

To be very clear and to avoid the pedantic, sanctimonious reply–although, in all honesty, I’ve gotten so little of that on this blog that I’ve come to think that a magical filter of civility vets all those who come by–let me affirm that yes, of course, I certainly would’ve made many trade-offs along the way, too. To recognize costs and losses does not mean that you always refuse to incur them, it simply means that you might incur them in something other than a naive, triumphalist spirit.

Around this time last year, an excerpt from Jonathan Franzen’s then-forthcoming edited work on Karl Kraus was published in the Guardian; it was panned, frequently and forcefully, and deservedly so in some respects. But the conclusion of the essay struck me then as being on to something.

“Maybe … apocalypse is, paradoxically, always individual, always personal,” Franzen wrote,

“I have a brief tenure on earth, bracketed by infinities of nothingness, and during the first part of this tenure I form an attachment to a particular set of human values that are shaped inevitably by my social circumstances. If I’d been born in 1159, when the world was steadier, I might well have felt, at fifty-three, that the next generation would share my values and appreciate the same things I appreciated; no apocalypse pending.”

But, of course, he wasn’t. He was born in the modern world, like all of us, and this has meant change, unrelenting change. Here is where the Austrian writer Karl Kraus, whose life straddled the turn of the twentieth century, comes in: “Kraus was the first great instance of a writer fully experiencing how modernity, whose essence is the accelerating rate of change, in itself creates the conditions for personal apocalypse.” Perhaps. I’m tempted to quibble with this claim. The words of John Donne, “’Tis all in pieces, all coherence gone,” come to mind. Yet, even if Franzen is not quite right about the historical details, I think he’s given honest voice to a common experience of modernity:

“The experience of each succeeding generation is so different from that of the previous one that there will always be people to whom it seems that the key values have been lost and there can be no more posterity. As long as modernity lasts, all days will feel to someone like the last days of humanity. Kraus’s rage and his sense of doom and apocalypse may be the antithesis of the upbeat rhetoric of Progress, but like that rhetoric, they remain an unchanging modality of modernity.”

This is, perhaps, a bit melodramatic, and it is certainly not all that could be said on the matter, or all that should be said. But Franzen is telling us something about what it feels like to be alive these days. It’s true, Franzen is not the best public face for those who are marginalized and swept aside by the tides of technological change, tides which do not lift all boats, tides which may, in fact, sink a great many. But there are such people, and we do well to temper our enthusiasm long enough to enter, so far as it is possible, into their experience. In fact, precisely because we do not have a common culture to fall back on, we must work extraordinarily hard to understand one another.

Franzen is still working on the assumption that these little personal apocalypses are a generational phenomenon. I’d argue that he’s underestimated the situation. The rate of change may be such that the apocalypses are now intra-generational. It is not simply that my world is not my parents’ world; it is that my world now is not what my world was a decade ago. We are all exiles now, displaced from a world we cannot reach because it fades away just as its contours begin to materialize. This explains why, as I wrote earlier this year, nostalgia is not so much a desire for a place or a time as it is a desire for some lost version of ourselves. We are like Margaret who, in Hopkins’ poem, laments the passing of the seasons, Margaret to whom the poet’s voice says kindly, “It is Margaret you mourn for.”

Although I do believe that certain kinds of change ought to be resisted–I’d be a fool not to–none of what I’ve been trying to get at in this post is about resisting change in itself. Rather, I think all I’ve been trying to say is this: we must learn to take account of how differently we experience the changing world so that we might best help one another as we live through the change that must come. That is all.

Preserving the Person in the Emerging Kingdom of Technological Force

What does Iceland look like through Google Glass? Turns out it looks kind of like Iceland. Consider this stunning set of photographs showcasing a tool built by Silica Labs which allows users to post images from Glass directly onto their WordPress blog. If you click over to see the images, you’ll notice two things. First, you’ll see that Iceland is beautiful, something you may already have known. Second, you’ll see that pictures taken with Glass look, well, just like pictures not taken with Glass.

There’s one exception to that second observation. When the user’s hands appear in the frame, the POV perspective becomes evident. Apart from that, these great pictures look just like every other set of great pictures. This isn’t a knock on the tool developed by Silica Labs, by the way. I’m not really interested in that particular app. I’m interested in the appeal of Glass and how users understand their experience with Glass, and these pictures, not markedly different from what you could produce without Glass, suggested a thesis: perhaps the appeal of Glass has less to do with what it enables you to do than with the way you feel when you’re doing it. And, as it turns out, there is a recurring theme in how many early adopters described their experience of Glass that seems to support this thesis.

As Glass started making its first public appearances, reviewers focused on user experience; and their criticism typically centered on the look of Glass, which was consistently described as geeky, nerdy, pretentious, or silly. Clearly, Glass had an image problem. But soon the conversation turned to the experience of those in the vicinity of a Glass user. Mark Hurst was one of the first to redirect our attention in this direction: “The most important Google Glass experience is not the user experience – it’s the experience of everyone else.” Hurst was especially troubled by the ease with which Glass can document others and the effects this would have on the conduct of public life.

Google was sensitive to these concerns, and it quickly assured the public that the power of Glass to record others surreptitiously had been greatly exaggerated. A light would indicate when Glass was activated so others would know if they were being recorded and the command to record would be audible. Of course, it didn’t take long to circumvent these efforts to mitigate Glass’s creep factor. Without much regard for Google’s directives, hackers created apps that allowed users to take pictures merely by winking. Worse yet, an app that equipped Glass with face-recognition capabilities soon followed.

Writing after the deployment of these hacks, David Pogue echoed Hurst’s earlier concerns: “the biggest obstacle [facing Glass] is the smugness of people who wear Glass—and the deep discomfort of everyone who doesn’t.” After laying out his tech-geek bona fides, even Nick Bilton confessed his unease around people wearing Glass: “I felt like a mere mortal among an entirely different class of super-connected humans.” The defining push back against this feeling Glass engenders in others came from Adrian Chen who proclaimed unequivocally, “By donning Google Glass, you, the Google Glass user, are volunteering to be a foot soldier in Google’s asshole army.”

Hurst was on to something. He was right to direct attention to the experience of those in the vicinity of a Glass user (or Glassholes, as they have been affectionately called by some). But it’s worth pivoting back to the experience of the Glass user. Set aside ergonomics, graphic interfaces, and design questions for a moment, and consider what users report feeling when they use Google Glass.

Let’s start with Evernote CEO Phil Libin. In a Huffington Post interview late in 2012, he claimed that “in as little as three years” it would seem “barbaric” not to use Google Glass. That certainly has a consciously hyperbolic ring to it, but it’s the follow-up comment that’s telling: “People think it looks kind of dorky right now but the experience is so powerful that you feel stupid as soon as you take the glasses off…”

“The experience is so powerful” – there it is. Glass lets you check the Internet, visualize information in some interesting ways, send messages, take pictures, and shoot video. I’m sure I’m missing something, but none of those are in themselves groundbreaking or revolutionary. Clearly, though, there’s something about having all of this represented for the user as part of their perceptual apparatus that conveys a peculiar sense of empowerment.

Libin was not the only one to report this feeling of power. Robert Scoble declared, “I will never live another day without wearing Google Glass or something like it. They have instantly become part of my life.” “The human body has a lot of limitations,” software developer Monica Wilkinson explained, “I see [Glass] as a way to enhance our bodies.” Writing about his Glass experience on The Verge, Joshua Topolsky was emphatic: “I won’t lie, it’s amazingly powerful (and more than a little scary) to be able to just start recording video or snapping pictures with a couple of flicks of your finger or simple voice commands.” A little further on he added, “In the city, Glass makes you feel more powerful, better equipped, and definitely less diverted.” Then there’s Chris Barrett, who captured the first arrest on Glass. Barrett witnessed a fight and came in close to film the action. He acknowledged that if he had not been wearing Glass, he would not have approached the scene of the scuffle. Finally, there’s all that is implicit in the way Sergey Brin, Google’s co-founder, characterized the smartphone as he was introducing Glass: “It’s kind of emasculating.” Glass, we are to infer, addresses this emasculation by giving the user a sense of power. Pogue put it most succinctly: Glass puts its wearers in “a position of control.”

It is possible to make too much of these statements. Others have found that using Glass makes them feel self-conscious in public and awkward in interactions with others. But Glass has revealed to a few intrepid souls something of its potential power, and, if they’re to be trusted, the feeling has been intoxicating. But why is this?

Perhaps this is because the prosthetic effect is especially seamless, so that it feels as if you yourself are doing the things Glass enables rather than using a tool to accomplish them. When a tool works really well, it doesn’t feel like you’re using a tool; it feels like you are acting through the tool. Glass seems to take it a step further. You are not just acting through Glass; you are simply acting. You, by your gestures or voice commands, are doing these things. Even the way audio is received from Glass contributes to the effect. Here’s how Gary Shteyngart described the way it feels to hear using Glass’s bone transducer: “The result is eerie, as if someone is whispering directly into a hole bored into your cranium, but also deeply futuristic.” That sounds to me as if you are hearing audio in the way that we might imagine “hearing” telepathy.

In other words, there is an alluring immediacy to the experience of interacting with the world through Google Glass. This seamlessness, this way that Glass has of feeling like a profound empowerment, recalls nothing so much as the link between magic and technology so aptly captured in Arthur C. Clarke’s famous third law: “Any sufficiently advanced technology is indistinguishable from magic.” Clarke’s pithy law recalls a fundamental, and historical, connection between magic and technology: they are both about power. As Lewis Mumford put it in Technics and Civilization, “magic was the bridge that united fantasy with technology: the dream of power with the engines of fulfillment.” Or consider how C. S. Lewis formulated the relationship: “For magic and applied science alike the problem is how to subdue reality to the wishes of men: the solution is a technique.” Sociologist Richard Stivers has concluded, “Without magic, technology would have no fatal sway over us.”

So it turns out that the appeal of Glass, for all of its futuristic cyborg pretensions, may be anchored in an ancient desire that has long animated the technological project: the desire for the experience of power. And, privacy concerns aside, this may be the best reason to be wary of the device. Those who crave the feel of power—or who, having tasted it, become too enamored of it—tend not to be the sort of people with whom you want to share a society.

It is also worth noting what we might call a pervasive cultural preparation for the coming of Glass. In another context, I’ve claimed that the closest analogy to the experience of the world through Google Glass may be the experience of playing a first-person video game. To a generation that has grown up playing first-person shooters and role-playing video games, Glass promises to make the experience of everyday life feel more like the experience of playing a game. In a comment on my initial observations, Nick Carr added, “You might argue that this reversal is already well under way in warfare. Video war games originally sought to replicate the look and feel of actual warfare, but now, as more warfare becomes automated via drones, robots, etc., the military is borrowing its interface technologies from the gaming world. War is becoming more gamelike.”

If you can’t quite get past the notion that Google Glass is nothing more than a white-tech-boy-fantasy, consider that this is Glass 1.0. Wearable tech is barely out of the toddler stage. Project this technology just a little further down the line–when it is less obtrusive, more seamless in its operation–and it may appear instead that Libin, Scoble, and Topolsky have seen the future clearly, and it works addictively. Consider as well how some future version of Glass may combine with Leap Motion-style technology to fully deploy the technology-as-magic aesthetic, or the potential of Glass to interact with the much-touted Internet of Things. Wave your hand, speak your command, and things happen; the world obeys.

But letting this stand as a critique of Glass risks missing a deeper point. Technology and power are inseparable. Not all technologies empower in the same way, but all technologies empower in some way. And we should be particularly careful about technologies that grant power in social contexts. Power tends to objectify, and we could do without further inducements to render others as objects in our field of action.

In her wise and moving essay on the Iliad, Simone Weil characterized power’s manifestation in human affairs, what she calls force, as “the ability to turn a human being into a thing while he is still alive.” Power or force, then, is the ability to objectify. Deadly force, Weil observes, literally turns a person into a thing, a corpse. All less lethal deployments of force are derivative of this ultimate power to render a person a thing.

It is telling that the most vocal, and sometimes violent, opposition to Glass has come in response to its ability to document others, possibly without their awareness, much less consent. To be documented in such a way is to be objectified, and the intuitive discomfort others have felt in the presence of those wearing Glass is a reflection of an innate resistance to the force that would render us an object. In his excellent write-up of Glass late last year, Clive Thompson noted that while from his perspective he was wearing a computer that granted quick, easy access to information, “To everyone else, I was just a guy with a camera on his head.” “Cameras are everywhere in public,” Thompson observes, “but one fixed to your face sends a more menacing signal: I have the power to record you at a moment’s notice, it seems to declare — and maybe I already am.”

Later on in her reflections on the Iliad, Weil observed, “The man who is the possessor of force seems to walk through a non-resistant element; in the human substance that surrounds him nothing has the power to interpose, between the impulse and the act, the tiny interval that is reflection.” Curiously, Google researcher and wearable-computing pioneer Thad Starner has written, “Wearables empower the user by reducing the time between their intention to do a task and their ability to perform that task.”

Starner, I’m certain, has only the best of intentions. In the same piece he writes compellingly about the potential of Glass to empower individuals who suffer from a variety of physical impairments. But I also believe that he may have spoken more than he knew. The collapse of the space between intention or desire on the one hand and action or realization on the other may be the most basic reality constituting the promise and allure of technology. We should be mindful, though, of all that such a collapse may entail. Following Weil, we might consider, at least, that the space between impulse and act is also the space for reflection, and, further, the space in which we might appear to one another as fully human persons rather than objects to be manipulated or layers of data to be mined.

The Stories We Tell About Technology

Michael Solana wants to put an end to dystopian science-fiction. Enough already. No more bleak, post-apocalyptic stories; certainly no more of these stories in which technology is somehow to blame for the disaster. Why? Because, as the title of his Wired opinion piece puts it, “It’s Making Us All Fear Technology.”

This is as good a time as any to drop the word flabbergasted–I’m genuinely flabbergasted. Granted, there’s a good chance Solana didn’t pick his title, but, in this case, it pretty much sums up the substance of his view. Solana really seems to believe that our cultural imagination is driven by Luddite fears. He really seems to believe that stories which present technology as a positive force in human affairs can be … ready for this … “subversive” and “daring.”

Like Alan Jacobs, I find myself wondering what world Solana inhabits:

“I have to say, it’s pretty cool to get a report from such a peculiar land. Where you and I live, of course, technology companies are among the largest and most powerful in the world, our media are utterly saturated with the prophetic utterances of their high priests, and people continually seek high-tech solutions to every imaginable problem, from obesity to road rage to poor reading scores in our schools. So, you know, comparative anthropology FTW.”

Indeed.

Interestingly, I found myself wondering much the same thing when I read Pascal-Emmanuel Gobry’s post, “Peter Thiel and the Cathedral” (more thoughts on which are eventually forthcoming). Gobry and Thiel, in the talk that inspired Gobry’s post, both lament what they seem to regard as an oppressive pessimism regarding technology and innovation that supposedly dominates our cultural imagination. To listen to Thiel, you would think that Silicon Valley was an island of hope and pragmatism in an ocean of fear and doubt. Gobry is particularly concerned with the prevalent pessimism about technology that he observes among Christians. Maybe French Catholics have a proclivity toward Luddism, I don’t know; but on this side of the Atlantic I see no great difference between Christians and non-Christians with regard to their attitudes toward technology. On the whole, they are generally enthusiastically positive.

In fact, I think media scholar Henry Jenkins is much closer to the mark, regarding communication technology at least, when he writes,

“Evangelical Christians have been key innovators in their use of emerging media technologies, tapping every available channel in their effort to spread the Gospel around the world. I often tell students that the history of new media has been shaped again and again by four key innovative groups – evangelists, pornographers, advertisers, and politicians, each of whom is constantly looking for new ways to interface with their public.”

I don’t want to be too snarky about this, but, honestly, I’m not entirely sure where you have to stand to get this kind of perspective on society. True, there have always been skeptics–Thoreaus, Postmans, Elluls, Tofflers–but, historically, they have been the counterpoint, not the main theme. They have always been marginal figures, and they have never managed to stem the dominant tide of techno-enthusiasm. (Granted, the case may be different in Europe, where, for example, nuclear energy has been in retreat since the Fukushima disaster in 2011.) Perhaps we simply prefer to see ourselves, regardless of the actual state of affairs, as an embattled minority. And perhaps I’m guilty of this, too.

In any case, the only evidence that Solana submits in defense of his claim that “people are more frightened of the future than they have ever been” is a decidedly non-scientific survey of attitudes toward artificial intelligence. You can follow the link to read the details, but basically the survey offered three choices, with the responses breaking down as follows:

1. Yes, I find the idea of intelligent machines frightening – 16.7%
2. No, I don’t find intelligent machines frightening – 27.1%
3. I’m not afraid of intelligent machines, I’m afraid of how humans will use the technology – 56.3%

Set aside any methodological issues; the results as reported simply don’t support Solana’s assertion that “the average American is overwhelmingly afraid of artificial intelligence.” Given the phrasing of the third choice, selecting it hardly suggests irrational fear. In fact, it may just reflect a modicum of common sense. By contrast, consider this recent Pew Research survey which found that “When asked for their general views on technology’s long-term impact on life in the future, technological optimists outnumber pessimists by two-to-one.”

Now, all of this said, Solana’s underlying assumption is worth considering. Human beings are the sorts of creatures that make sense of their world by telling stories about it. Stories have the power to shape how we imagine the place of technology in society. Our attitudes toward technology often flow from a larger story we buy into about technology and society.

It’s worth asking ourselves what story frames our thinking about technology. Broadly speaking, there are utopian stories and dystopian stories about technology. Utopian stories tell of a future that gets better and better as a result of techno-scientific progress. Dystopian stories present technology as a source of disaster or societal disintegration. I’d suggest that there are also tragic stories, and that these are not the same as dystopian tales. The paradigmatic techno-tragedy is Mary Shelley’s Frankenstein. It is a classically tragic tale that recognizes the allure and power of the techno-scientific project, while also grappling with its inherent dangers, dangers which are ultimately a function of the human condition.

Of course, these stories need not be fictional. They might also be stories we tell about our national history or about individuals, such as inventors or entrepreneurs. Moreover, what we ultimately want to consider is not any one story, or even a set of stories, but the net effect or cumulative “story” that becomes part of our tacit understanding of the world.

While it is easy to think of popular stories that frame technology as a source of trouble, it seems to me that the still-dominant narrative frames technology as a source of hope. When Solana writes, in the pages of Wired, mind you, that “Artificial intelligence, longevity therapy, biotechnology, nuclear energy — it is in our power to create a brilliant world, but we must tell ourselves a story where our tools empower us to do it,” it seems to me that he is preaching to one massive choir.

With more time, I’d argue that our story about technology is not just a story. “Technology” is another name for the dominant myth of our time. This myth gives shape to our imagination. It sets the boundaries of the possible. It conditions our moral judgment. It is the arbiter of truth and the source of our hope. Is there a little anxiety wrapped up in all of this, even among the true believers? Sure. But as scholars of religion have long observed, what we hold sacred tends to invoke both wonder and fear.

So, what’s the view from where you stand? What do you perceive as the dominant cultural attitude toward technology? Have the Luddites really won the day? Have I somehow missed this startling development?

Unplugged

I’m back. In fact, I’ve been back for more than a week now from several days spent in western North Carolina. It’s beautiful country out there, and, where I was staying, it was beautiful country without cell phone signal or Internet connection. It was a week-long digital sabbath, or, if you prefer, a week-long digital detox. It was a good week. I didn’t find myself. I didn’t discover the meaning of life. I had no epiphanies, and I didn’t necessarily feel more connected to nature. But it was a good week.

I know that reflection pieces on technology sabbaths, digital detoxes, unplugging, and disconnecting are a dime a dozen. Slightly less common are pieces critical of the disconnectionists, as Nathan Jurgenson has called them, but these aren’t hard to come by either. Others, like Evgeny Morozov, have contributed more nuanced evaluations. Not only has the topic been widely covered, but if you’re reading this blog, I’d guess that you’re likely to be more or less sympathetic to these practices, even if you harbor some reservations about how they are sometimes presented and implemented. All of that to say, I’ve hesitated to add yet another piece on the experience of disconnection, especially since I’d be (mostly) preaching to the choir. But … I’m going to try your patience and offer just a few thoughts for your consideration.

First, I think the week worked well because its purpose wasn’t to disconnect from the Internet or digital devices; being disconnected was simply a consequence of where I happened to be. I suspect that when one explicitly sets out to disconnect, the psychology of the experience works against you. You’re disconnecting in order to be disconnected because you assume or hope it will yield some beneficial consequences. The potential problem with this scenario is that “being connected” is still framing, and to some degree defining, your experience. When you’re disconnected, you’re likely to be thinking about your experience in terms of not being connected. Call it the disconnection paradox.

This might mean, for example, that you’re overly aware of what you’re missing out on, thus distracted from what you hoped to achieve by disconnecting. It might also lead to framing your experience negatively in terms of what you didn’t do–which isn’t ultimately very helpful–rather than positively in terms of what you accomplished. In the worst cases, it might also lead to little more than self-congratulatory or self-loathing status updates.

In my recent case, I didn’t set out to be disconnected. In fact, I was rather disappointed that I’d be unable to continue writing about some of the themes I’d been recently addressing. So while I was carrying on with my disconnected week, I didn’t think at all about being connected or disconnected; it was simply a matter of fact. And, upon reflection, I think this worked in my favor.

This observation does raise a practical problem, however. How can one disconnect, if so desired, while avoiding the disconnection paradox? Two things come to mind. As Morozov pointed out in his piece on the practice of disconnection, there’s little point in disconnecting if it amounts to coming up for breath before plunging back into the digital flood. Ultimately, then, the idea is to so order our digital practices that enforced periods of disconnection are unnecessary.

But what if, for whatever reason, this is not a realistic goal? At this point we run up against the limits of individual action and need to think about how to effect structural and institutional changes. Alongside those long-term projects, I’d suggest that making the practice of disconnection regular and habitual will eventually overcome the disconnection paradox.

Second consideration, obvious though it may be: it matters what you do with the time that you gain. For my part, I was more physically active than I would be during the course of an ordinary week, much more so. I walked, often; I swam; and I did a good bit of paddling too. Not all of this activity was pleasurable as it transpired. Some of it was exhausting. I was often tired and sore. But I welcomed all of it because it relieved the accumulated stress and tension that I tend to carry around on my back, shoulders, neck, and jaw, much of it a product of sitting in front of a computer or with a book for extended periods of time. It was a good week because at the end of it, my body felt as good as it had in a long time, even if it was a bit battered and ragged.

The feeling reminded me of what Patrick Leigh Fermor wrote about his stay in a monastery in the 1950s, a kind of modernity detox. Initially, he was agitated; then he was overwhelmed for a few days by the desire to sleep. Finally, he emerged “full of energy and limpid freshness.” Here is how he described the experience in A Time to Keep Silence:

“The explanation is simple enough: the desire for talk, movements and nervous expression that I had transported from Paris found, in this silent place, no response or foil, evoked no single echo; after miserably gesticulating for a while in a vacuum, it languished and finally died for lack of any stimulus or nourishment. Then the tremendous accumulation of tiredness, which must be the common property of all our contemporaries, broke loose and swamped everything. No demands, once I had emerged from that flood of sleep, were made upon my nervous energy: there were no automatic drains, such as conversation at meals, small talk, catching trains, or the hundred anxious trivialities that poison everyday life. Even the major causes of guilt and anxiety had slid away into some distant limbo and not only failed to emerge in the small hours as tormentors but appeared to have lost their dragonish validity.”

“[T]he tremendous accumulation of tiredness, which must be the common property of all our contemporaries”–indeed, and to that we might add the tremendous accumulation of stress and anxiety. The Internet, always-on connectivity, and digital devices have not of themselves caused the tiredness, stress, and anxiety, but they haven’t helped either. In certain cases they’ve aggravated the problem. And, I’d suggest, they have done so regardless of what, specifically, we have been doing. Rather, the aggravation is in part a function of how our bodies engage with these tools. Whether we spend a day in front of a computer perusing cat videos, playing Minecraft, writing a research paper, or preparing financial reports makes little difference to our bodies. It is in each case a sedentary day, and such days are, as we all know, less than ideal for our bodies. And, because so much of our well-being depends on our bodies, the consequences extend to the whole of our being.

I know countless critics since the dawn of industrial society have lamented the loss of regular physical activity, particularly activity that unfolded in “nature.” Long before the Internet, such complaints were raised about the factory and the cubicle. It is also true that many of these calls for robust physical activity have been laden with misguided assumptions about the nature of masculinity and worse. But none of this changes the stubborn, intractable fact that we are embodied creatures and the concrete physicality of our nature is subject to certain limits and thrives under certain conditions and not others.

One further point about my experience: some of it was moderately risky. Not extreme-sports-risky or risky bordering on foolish, you understand. More like “watch where you step, there might be a rattlesnake” risky (I avoided one by two feet or so) or “take care not to slip off the narrow trail, that’s a 300-foot drop” risky (I took no such falls, happily). I’m not sure what I can claim for all of this, but I would be tempted to make a Merleau-Ponty-esque argument about the sort of engagement with our surroundings that navigating risk requires of us. I’d modestly suggest, on a strictly anecdotal basis, that there is something mentally and physically salubrious about safely navigating the experience of risk. While we’re at it, plug in the “troubles” (read: sometimes risky, often demanding activities) that philosopher Albert Borgmann encourages us to accept in principle.

Of course, it must immediately be added that this is a first-world problem par excellence. Around the globe there are people who have no choice but to constantly navigate all sorts of risks to their well-being, and not of the moderate variety either. It must then seem perverse to suggest that some of us might need to occasionally elect to encounter risk, but only carefully so. Indeed, but such might nonetheless be the case. Certainly, it is also true that all of us are at risk every day when walking a city street, or driving a car, or flying in a plane, and so on. My only rejoinder is again to lean on my experience and suggest that the sort of physical activity I engaged in had the unexpected effect of calling on and honing aspects of my body and mind that are not ordinarily called into service by my typical day-to-day experience, and this was a good thing. The accustomed risks we thoughtlessly take, crossing a street say, do not call forth the same mental and bodily resources precisely because they are a routinized part of our experience.

A final thought. Advocating disconnection sometimes raises charges of elitism–Sherry Turkle strolling down Cape Cod beaches and whatnot. I more or less get where this is coming from, I think. Disconnection is often construed as a luxury experience. Who gets to placidly stroll the beaches of Cape Cod anyway? And, indeed, it is an unfortunate feature of modernity’s unfolding that what we eliminate from our lives, often to make room for one technology or another, we then end up compensating for with another technology, because we suddenly realize that what we eliminated might have been useful and health-giving.

It was Neil Postman, I believe, who observed that, having eliminated walking by the adoption of the automobile and the design of our public spaces, we then invented a machine on which we could simulate walking in order to maintain a minimal level of fitness. Postman’s chief focus, if I remember the passage correctly, was to point out the prima facie absurdity of the case, but I would add an economic consideration: in this pattern of technological displacement and replacement, the replacement is always a commodity. No one previously paid to walk, but the treadmill and the gym membership are bought and sold. So it is now with disconnection: it is often packaged as a commodified experience that must be bought, and the costs of disconnection (monetary and otherwise) are for some too high to bear. This is unfortunate if not simply tragic.

But it seems to me that the answer is not to dismiss the practice of disconnecting as such or efforts to engage more robustly with the wider world. If these practices are, even in small measure, steps toward human flourishing, then our task is to figure out how we can make them as widely available as possible.