Facebook Doesn’t Care About Your Children

Facebook is coming for your children.

Is that framing too stark? Maybe it’s not stark enough.

Facebook recently introduced Messenger Kids, a version of their Messenger app designed for six to twelve year olds. Antigone Davis, Facebook’s Public Policy Director and Global Head of Safety, wrote a blog post introducing Messenger Kids and assuring parents the app is safe for kids.

“We created an advisory board of experts,” Davis informs us. “With them, we are considering important questions like: Is there a ‘right age’ to introduce kids to the digital world? Is technology good for kids, or is it having adverse effects on their social skills and health? And perhaps most pressing of all: do we know the long-term effects of screen time?”

The very next line of Davis’s post reads, “Today we’re rolling out our US preview of Messenger Kids.”

Translation: We hired a bunch of people to ask important questions. We have no idea what the answers may be, but we built this app anyway.

Davis doesn’t even attempt to fudge an answer to those questions. She raises them and never comes back to them again. In fact, she explicitly acknowledges “we know there are still a lot of unanswered questions about the impact of specific technologies on children’s development.” But you know, whatever.

Naturally, we’re presented with statistics about the rates at which children under 13 use the Internet, Internet-enabled devices, and social media. It’s an argument from presumed inevitability: kids are going to be online whether you like it or not, so they might as well use our product. More about this in a moment.

We’re also told that parents are anxious about their kids’ safety online. Chiefly, this amounts to concerns about privacy or online predators. Valid concerns, of course, and Facebook promises to give parents control over their kids’ online activity. However, safety, in this sense, is not the only concern we should have. A perfectly safe technology may nonetheless have detrimental consequences for our intellectual, moral, and emotional well-being and for the well-being of society when the technology’s effects are widely dispersed.

Finally, we’re given five principles Facebook and its advisory board developed in order to guide the development of their suite of products for children. These are largely meaningless sentences composed of platitudes and buzzwords.

Let’s not forget that this is the same company that “offered advertisers the opportunity to target 6.4 million younger users, some only 14 years old, during moments of psychological vulnerability, such as when they felt ‘worthless,’ ‘insecure,’ ‘stressed,’ ‘defeated,’ ‘anxious,’ and like a ‘failure.'”

Facebook doesn’t care about your children. Facebook cares about your children’s data. As Wired reported, “The company will collect the content of children’s messages, photos they send, what features they use on the app, and information about the device they use.”

There are no ads on Messenger Kids, the company is quick to point out. “For now,” I’m tempted to add. Barriers of this sort tend to erode over time. Moreover, even if the barrier holds, an end game remains.

“If they are weaned on Google and Facebook,” Jeffrey Chester, executive director of the Center for Digital Democracy, warns, “you have socialized them to use your service when they become an adult. On the one hand it’s diabolical and on the other hand it’s how corporations work.”

Facebook’s interest in producing an app for children appears to be a part of a larger trend. “Tech companies have made a much more aggressive push into targeting younger users,” the same Wired article noted, “a strategy that began in earnest in 2015 when Google launched YouTube Kids, which includes advertising.”

In truth, I think this is about more than just Facebook. It’s about thinking more carefully about how technology shapes our children and their experience. It is about refusing the rhetoric of inevitability and assuming responsibility.

Look, what if there is no safe way for seven-year-olds to use social media or even the Internet and Internet-enabled devices? I realize this may sound like a head-in-the-sand overreaction, and maybe it is, but perhaps it’s worth contemplating the question.

I also realize I’m treading on sensitive ground here, and I want to proceed with care. The last thing over-worked, under-supported parents need is something more to feel guilty about. Let’s forget the guilt. We’re all trying to do our best. Let’s just think together about this stuff.

As adults, we’ve barely got a handle on the digital world. We know devices and apps and platforms are designed to capture and hold attention in a manner that is intellectually and emotionally unhealthy. We know that these design choices are not made with the user’s best interest in mind. We are only now beginning to recognize the personal and social costs of our uncritical embrace of constant connectivity and social media. How eager should we be to usher our children into this reality?

The reality is upon them whether we like it or not, someone might counter. Maybe, but I don’t quite buy it. Even if it is, the degree to which this is the case will certainly vary based in large part upon the choices parents make and their resolve.

Part of our problem is that we think too narrowly about technology, almost always in terms of functionality and safety. With regard to children, this amounts to safeguarding against offensive content, against exploitation, and against would-be predators. Again, these are valid concerns, but they do not exhaust the range of questions we should be asking about how children relate to digital media and devices.

To be clear, this is not only about preventing “bad things” from happening. It is also a question of the good we want to pursue.

Our disordered relationship with technology is often a product of treating technology as an end rather than a means. Our default setting is to uncritically adopt and ask questions later, if at all. We need, instead, to clearly discern the ends we want to pursue and evaluate technology accordingly, especially when it comes to our children, because in this, as in so much else, they depend on us.

Some time ago, I put together a list of 41 questions to guide our thinking about the ethical dimensions of technology. These questions are a useful way of examining not only the technology we use but also the technology to which we introduce our children.

What ideals inform the choices we make when we raise children? What sort of person do we hope they will become? What habits do we desire for them to cultivate? How do we want them to experience time and place? How do we hope they will perceive themselves? These are just a few of the questions we should be asking.

Your answers to these questions may not be mine or your neighbor’s, of course. The point is not that we should share these ideals, but that we recognize that the realization of these ideals, whatever they may be for you and for me, will depend, in greater measure than most of us realize, on the tools we put in our children’s hands. All that I’m advocating is that we think hard about this and proceed with great care and great courage. Great care because the stakes are high; great courage because merely by our determination to think critically about these matters we will be setting ourselves against powerful and pervasive forces.


Technology in the Classroom

I want to briefly draw your attention to a series of related posts about technology in the classroom, beginning with Clay Shirky’s recent post explaining his decision to have students put their wired digital devices away during class. Let me say that again: Clay Shirky has decided to ban laptops from his classroom. Clay Shirky. Shirky has long been one of the Internet’s leading advocates and cheerleaders, so this seems to be a pretty telling indication of the scope of the problem.

I particularly appreciated the way Shirky focused on what we might call the ecosystem of the classroom. The problem is not simply that connected devices distract the students who use them and hamper their ability to learn:

“Anyone distracted in class doesn’t just lose out on the content of the discussion, they create a sense of permission that opting out is OK, and, worse, a haze of second-hand distraction for their peers. In an environment like this, students need support for the better angels of their nature (or at least the more intellectual angels), and they need defenses against the powerful short-term incentives to put off complex, frustrating tasks. That support and those defenses don’t just happen, and they are not limited to the individual’s choices. They are provided by social structure, and that structure is disproportionately provided by the professor, especially during the first weeks of class.”

I came across Shirky’s post via Nick Carr, who also considers a handful of studies that appear to support the decision to create a relatively low-tech classroom environment. I recommend you click through to read the whole thing.

If you’re thinking that this is a rather retrograde, reactionary move to make, then I’d suggest taking a quick look at Alan Jacobs’s brief comments on the matter.

You might also want to ask yourself why the late Steve Jobs; Chris Anderson, the former editor at Wired and CEO of a robotics company; Evan Williams, the founder of Blogger, Twitter, and Medium; and a host of other tech-industry heavyweights have imposed seemingly draconian rules for how their own children relate to digital devices and the Internet. Here’s Anderson: “My kids accuse me and my wife of being fascists and overly concerned about tech, and they say that none of their friends have the same rules.”

Perhaps they are on to something, albeit in a “do-as-I-say-not-as-I-do” sort of way. Nick Bilton has the story here.

__________________________

Okay, and now a quick administrative note. Rather than create a separate entry for this, I thought it best just to raise the matter at the tail end of this shorter post. Depending on how you ordinarily get to this site, you may have noticed that the feed for this blog now only gives you a snippet view and asks you to click through to read the whole.

I initially made this change for rather self-serving reasons related to the architecture of WordPress, and it was also going to be a temporary change. However, I realized that this change resolved a couple of frustrations I’d had for a while.

The first of these centered on my mildly obsessive nature when it comes to editing and revising. Invariably, regardless of what care I took before publishing, posts would go out with at least one or two typos, inelegant phrases, etc. When I catch them later, I fix them, but those who get their posts via email never see the corrections. If you have to click over to read the whole, however, you will always see the latest, cleanest version. Relatedly, I sometimes find it preferable to update a post with some related information or new links rather than create a new post (e.g.). Email subscribers would be unlikely ever to see those updates unless they clicked through to the site for the most updated version of the post.

Consequently, I’m considering keeping the snippet feed. I do realize, though, that this might be mildly annoying, involving as it does an extra click or two. So, my question to you is this: do you care? I have a small but dedicated readership, and I’d hate to make a change that might ultimately discourage you from continuing to read. If you have any thoughts on the matter, feel free to share in the comments below or via email.

Also, I’ve been quite negligent about replying to comments of late. When I get a chance to devote some time to this blog, which is not often, I’m opting to write instead. I really appreciate the comments, though, and I’ll do my best to interact as time allows.

Waiting for Socrates … So We Can Kill Him Again and Post the Video on YouTube

It will come as no surprise, I’m sure, if I tell you that the wells of online discourse are poisoned. It will come as no surprise because critics have complained about the tone of online discourse for as long as people have interacted with one another online. In fact, we more or less take the toxic, volatile nature of online discourse for granted. “Don’t read the comments” is about as routine a piece of advice as “look both ways before crossing the street.” And, of course, it is also true that complaints about the coarsening of public discourse in general have been around for a lot longer than the Internet and digital media.

That said, I’ve been intrigued, heartened actually, by a recent round of posts bemoaning the state of online rhetoric from some of the most thoughtful people whose work I follow. Here is Freddie deBoer lamenting the rhetoric of the left, and here is Matthew Anderson noting much of the same on the right. Here is Alan Jacobs on why he’s stepping away from Twitter. Follow any of those links and you’ll find another series of links to thoughtful, articulate writers all telling us, more or less, that they’ve had enough. This piece urges civility and it suggests, hopefully (naively?), that the “Internet” will learn soon enough to police itself, but the evidence it cites along the way seems rather to undermine such hopefulness. I won’t bother to point you to some of the worst of what I’ve regrettably encountered online in recent weeks.

Why is this the case? Why, as David Sessions recently put it, is the state of the Internet awful?

Like everyone else, I have scattered thoughts about this. For one thing, the nature of the medium seems to encourage rancor, incivility, misunderstanding, and worse. Anonymity has something to do with this, and so does the abstraction of the body from the context of communication.

Along the same media-ecological lines, Walter Ong noted that oral discourse tends to be agonistic and literate discourse tends to be irenic. Online discourse tends to be conducted in writing, which might seem to challenge Ong’s characterization. But just as television and radio constituted what Ong called secondary orality, so might we say that social media is a form of secondary literacy, blurring the distinctions between orality and literacy. It is text based, but, like oral discourse, it brings people into a context of relative communicative immediacy. That is to say that through social media people are responding to one another in public and in short order, more as they would in a face-to-face encounter, for example, than in private letters exchanged over the course of months.

In theory, writing affords us the temporal space to be more thoughtful and precise in expressing our ideas, but, in practice, the expectations of immediacy in digital contexts collapse that space. So we lose the strengths of each medium: we get none of the meaning-making cues of face-to-face communication and none of the time for reflection that written communication ordinarily grants. The media context, then, ends up rife with misunderstanding and agonism; it encourages performative pugilism.

Also, as the moral philosopher Alasdair MacIntyre pointed out some time ago, we no longer operate with a set of broadly shared assumptions about what is good and what shape a good life should take. Our ethical reasoning tends not to be built on the same foundation. Because we are reasoning from incompatible moral premises, the conclusions reached by two opposing parties tend to be interpreted as sheer stupidity or moral obtuseness. In other words, because our arguments, proceeding as they do from such disparate moral frameworks, fail to convince and persuade, we begin to assume that those who will not yield to our moral vision must thus be fools or worse. Moreover, we conclude, fools and miscreants cannot be argued with; they can only be shamed, shouted down, or otherwise silenced.

Digital dualism is also to blame. Some people seem to operate under the assumption that they are not really racists, misogynists, anti-Semites, etc.–they just play one on Twitter. It really is much too late in the game to play that tired card.

Perhaps, too, we’ve conflated truth and identity in such a way that we cannot conceive of a challenge to our views as anything other than a challenge to our humanity. Conversely, it seems that in some highly-charged contexts being wrong can cost you the basic respect one might be owed as a fellow human being.

Finally, the Internet is awful because, frankly, people are awful. We all are; at least we all can be under the right circumstances. As Solzhenitsyn put it, “If only there were evil people somewhere insidiously committing evil deeds, and it were necessary only to separate them from the rest of us and destroy them. But the line dividing good and evil cuts through the heart of every human being.”

To that list, I want to offer just one more consideration: a little knowledge is a dangerous thing, and there are few things the Internet does better than giving everyone a little knowledge. A little knowledge is a dangerous thing because it is just enough to give us the illusion of mastery and a sense of authority. This illusion, encouraged by the myth of having all the world’s information at our fingertips, has encouraged us to believe that by skimming an article here or reading the summary of a book there we thus become experts who may now liberally pontificate about the most complex and divisive issues with unbounded moral and intellectual authority. This is the worst kind of insufferable foolishness, that which mistakes itself for wisdom without a hint of irony.

Real knowledge, on the other hand, is constantly aware of all that it does not know. The more you learn, the more you realize how much you don’t know, and the more hesitant you’ll be to speak as if you’ve got everything figured out. Getting past that threshold of “a little knowledge” tends to breed humility and create the conditions that make genuine dialogue possible. But that threshold will never be crossed if all we ever do is skim the surface of reality, and this seems to be the mode of engagement encouraged by the information ecosystem sustained by digital media.

We’re in need of another Socrates who will teach us once again that the way of wisdom starts with a deep awareness of our own ignorance. Of course, we’d kill him too, after a good skewering on Twitter, and probably without the dignity of hemlock. A posthumous skewering would follow, naturally, after the video of his death got passed around on Reddit and YouTube.

I don’t want to leave things on that cheery note, but the fact is that I don’t have a grand scheme for making online discourse civil, informed, and thoughtful. I’m pretty sure, though, that things will not simply work themselves out for the better without deliberate and sustained effort. Consider how W.H. Auden framed the difference between traditional cultures and modernity:

“The old pre-industrial community and culture are gone and cannot be brought back. Nor is it desirable that they should be. They were too unjust, too squalid, and too custom-bound. Virtues which were once nursed unconsciously by the forces of nature must now be recovered and fostered by a deliberate effort of the will and the intelligence. In the future, societies will not grow of themselves. They will be either made consciously or decay.”

For better or worse, or more likely both, this is where we find ourselves–either we deploy deliberate effort of will and intelligence or face perpetual decay. Who knows, maybe the best we can do is to form and maintain enclaves of civility and thoughtfulness amid the rancor, communities of discourse where meaningful conversation can be cultivated. These would probably remain small communities, but their success would be no small thing.

__________________________________

Update: After publishing, I read Nick Carr’s post on the revival of blogs and the decline of Big Internet. “So, yeah, I’m down with this retro movement,” Carr writes, “Bring back personal blogs. Bring back RSS. Bring back the fun. Screw Big Internet.” I thought that was good news in light of my closing paragraph.

And, just in case you need more by way of diagnosis, there’s this: “A Second Look At The Giant Garbage Pile That Is Online Media, 2014.”

Our Little Apocalypses

An incoming link to my synopsis of Melvin Kranzberg’s Six Laws of Technology alerted me to a short post on Quartz about a new book by an author named Michael Harris. The book, The End of Absence: Reclaiming What We’ve Lost in a World of Constant Connection, explores the tradeoffs induced by the advent of the Internet. Having not read the book, I obviously can’t say much about it, but I was intrigued by one angle Harris takes that comes across in the Quartz piece.

Harris’s book is focused on the generation, a fuzzy category to be sure, that came of age just before the Internet exploded onto the scene in the early 90s. Here’s Harris:

“If you were born before 1985, then you know what life is like both with the internet and without. You are making the pilgrimage from Before to After.”

“If we’re the last people in history to know life before the internet, we are also the only ones who will ever speak, as it were, both languages. We are the only fluent translators of Before and After.”

It would be interesting to read what Harris does with this framing. In any case, it’s something I’ve thought about often. This is my fifteenth year teaching. Over the years I’ve noticed, with each new class, how the world that I knew as a child and as a young adult recedes further and further into the murky past. As you might guess, digital technology has been one of the most telling indicators.

Except for a brief flirtation with Prodigy on an MS-DOS machine with a monochrome screen, the Internet did not come into my life until I was a freshman in college. I’m one of those people Harris is writing about, one of the Last Generation to know life before the Internet. Putting it that way threatens to steer us into a rather unseemly romanticism, and, knowing that I’m temperamentally drawn to dying lights, I want to make sure I don’t give way to it. That said, it does seem to me that those who’ve known the Before and After, as Harris puts it, are in a unique position to evaluate the changes. Experience, after all, is irreducible and incommunicable.

One of the recurring rhetorical tropes that I’ve listed as a Borg Complex symptom runs as follows: note that every new technology elicits criticism and evokes fear, observe that society always survives the so-called moral panic or techno-panic, and conclude, QED, that those critiques and fears, including those being presently expressed, are always misguided and overblown. It’s a pattern of thought I’ve complained about more than once. In fact, it features as the tenth of my unsolicited points of advice to tech writers.

Now while it is true, as Adam Thierer has noted here, that we should try to understand how societies and individuals have come to cope with or otherwise integrate new technologies, it is not the case that such negotiated settlements are always unalloyed goods for society or for individuals. But this line of argument is compelling to the degree that living memory of what has been displaced has been lost. I may know at an intellectual level what has been lost, because I read about it in a book for example, but it is another thing altogether to have felt that loss. We move on, in other words, because we forget the losses, or, more to the point, because we never knew or experienced the losses for ourselves–they were always someone else’s problem.

To be very clear and to avoid the pedantic, sanctimonious reply–although, in all honesty, I’ve gotten so little of that on this blog that I’ve come to think that a magical filter of civility vets all those who come by–let me affirm that yes, of course, I certainly would’ve made many trade-offs along the way, too. To recognize costs and losses does not mean that you always refuse to incur them, it simply means that you might incur them in something other than a naive, triumphalist spirit.

Around this time last year, an excerpt from Jonathan Franzen’s then-forthcoming edited work on Karl Kraus was published in the Guardian; it was panned, frequently and forcefully, and deservedly so in some respects. But the conclusion of the essay struck me then as being on to something.

“Maybe … apocalypse is, paradoxically, always individual, always personal,” Franzen wrote,

“I have a brief tenure on earth, bracketed by infinities of nothingness, and during the first part of this tenure I form an attachment to a particular set of human values that are shaped inevitably by my social circumstances. If I’d been born in 1159, when the world was steadier, I might well have felt, at fifty-three, that the next generation would share my values and appreciate the same things I appreciated; no apocalypse pending.”

But, of course, he wasn’t. He was born in the modern world, like all of us, and this has meant change, unrelenting change. Here is where the Austrian writer Karl Kraus, whose life straddled the turn of the twentieth century, comes in: “Kraus was the first great instance of a writer fully experiencing how modernity, whose essence is the accelerating rate of change, in itself creates the conditions for personal apocalypse.” Perhaps. I’m tempted to quibble with this claim. The words of John Donne, “Tis all in pieces, all coherence gone,” come to mind. Yet, even if Franzen is not quite right about the historical details, I think he’s given honest voice to a common experience of modernity:

“The experience of each succeeding generation is so different from that of the previous one that there will always be people to whom it seems that the key values have been lost and there can be no more posterity. As long as modernity lasts, all days will feel to someone like the last days of humanity. Kraus’s rage and his sense of doom and apocalypse may be the antithesis of the upbeat rhetoric of Progress, but like that rhetoric, they remain an unchanging modality of modernity.”

This is, perhaps, a bit melodramatic, and it is certainly not all that could be said on the matter, or all that should be said. But Franzen is telling us something about what it feels like to be alive these days. It’s true, Franzen is not the best public face for those who are marginalized and swept aside by the tides of technological change, tides which do not lift all boats, tides which may, in fact, sink a great many. But there are such people, and we do well to temper our enthusiasm long enough to enter, so far as it is possible, into their experience. In fact, precisely because we do not have a common culture to fall back on, we must work extraordinarily hard to understand one another.

Franzen is still working on the assumption that these little personal apocalypses are a generational phenomenon. I’d argue that he’s underestimated the situation. The rate of change may be such that the apocalypses are now intra-generational. It is not simply that my world is not my parents’ world; it is that my world now is not what my world was a decade ago. We are all exiles now, displaced from a world we cannot reach because it fades away just as its contours begin to materialize. This explains why, as I wrote earlier this year, nostalgia is not so much a desire for a place or a time as it is a desire for some lost version of ourselves. We are like Margaret who, in Hopkins’s poem, laments the passing of the seasons, Margaret to whom the poet’s voice says kindly, “It is Margaret you mourn for.”

Although I do believe that certain kinds of change ought to be resisted–I’d be a fool not to–none of what I’ve been trying to get at in this post is about resisting change in itself. Rather, I think all I’ve been trying to say is this: we must learn to take account of how differently we experience the changing world so that we might best help one another as we live through the change that must come. That is all.

Unplugged

I’m back. In fact, I’ve been back for more than a week now. I’ve been back from several days spent in western North Carolina. It’s beautiful country out there, and, where I was staying, it was beautiful country without cell phone signal or Internet connection. It was a week-long digital sabbath, or, if you prefer, a week-long digital detox. It was a good week. I didn’t find myself. I didn’t discover the meaning of life. I had no epiphanies, and I didn’t necessarily feel more connected to nature. But it was a good week.

I know that reflection pieces on technology sabbaths, digital detoxes, unplugging, and disconnecting are a dime a dozen. Slightly less common are pieces critical of the disconnectionists, as Nathan Jurgenson has called them, but these aren’t hard to come by either. Others, like Evgeny Morozov, have contributed more nuanced evaluations. Not only has the topic been widely covered, but if you’re reading this blog I’d guess that you’re likely to be more or less sympathetic to these practices, even if you harbor some reservations about how they are sometimes presented and implemented. All of that to say, I’ve hesitated to add yet another piece on the experience of disconnection, especially since I’d be (mostly) preaching to the choir. But … I’m going to try your patience and offer just a few thoughts for your consideration.

First, I think the week worked well because its purpose wasn’t to disconnect from the Internet or digital devices; being disconnected was simply a consequence of where I happened to be. I suspect that when one explicitly sets out to disconnect, the psychology of the experience works against you. You’re disconnecting in order to be disconnected because you assume or hope it will yield some beneficial consequences. The potential problem with this scenario is that “being connected” is still framing, and to some degree defining, your experience. When you’re disconnected, you’re likely to be thinking about your experience in terms of not being connected. Call it the disconnection paradox.

This might mean, for example, that you’re overly aware of what you’re missing out on, thus distracted from what you hoped to achieve by disconnecting. It might also lead to framing your experience negatively in terms of what you didn’t do–which isn’t ultimately very helpful–rather than positively in terms of what you accomplished. In the worst cases, it might also lead to little more than self-congratulatory or self-loathing status updates.

In my recent case, I didn’t set out to be disconnected. In fact, I was rather disappointed that I’d be unable to continue writing about some of the themes I’d been recently addressing. So while I was carrying on with my disconnected week, I didn’t think at all about being connected or disconnected; it was simply a matter of fact. And, upon reflection, I think this worked in my favor.

This observation does raise a practical problem, however. How can one disconnect, if so desired, while avoiding the disconnection paradox? Two things come to mind. As Morozov pointed out in his piece on the practice of disconnection, there’s little point in disconnecting if it amounts to coming up for breath before plunging back into the digital flood. Ultimately, then, the idea is to so order our digital practices that enforced periods of disconnection are unnecessary.

But what if, for whatever reason, this is not a realistic goal? At this point we run up against the limits of individual action and need to think about how to effect structural and institutional changes. Alongside those long-term projects, I’d suggest that making the practice of disconnection regular and habitual will eventually overcome the disconnection paradox.

Second consideration, obvious though it may be: it matters what you do with the time that you gain. For my part, I was more physically active than I would be during the course of an ordinary week, much more so. I walked, often; I swam; and I did a good bit of paddling too. Not all of this activity was pleasurable as it transpired. Some of it was exhausting. I was often tired and sore. But I welcomed all of it because it relieved the accumulated stress and tension that I tend to carry around on my back, shoulders, neck, and jaw, much of it a product of sitting in front of a computer or with a book for extended periods of time. It was a good week because at the end of it, my body felt as good as it had in a long time, even if it was a bit battered and ragged.

The feeling reminded me of what Patrick Leigh Fermor wrote about his stay in a monastery in the early 1950s, a kind of modernity detox. Initially, he was agitated; then he was overwhelmed for a few days by the desire to sleep. Finally, he emerged “full of energy and limpid freshness.” Here is how he described the experience in A Time to Keep Silence:

“The explanation is simple enough: the desire for talk, movements and nervous expression that I had transported from Paris found, in this silent place, no response or foil, evoked no single echo; after miserably gesticulating for a while in a vacuum, it languished and finally died for lack of any stimulus or nourishment. Then the tremendous accumulation of tiredness, which must be the common property of all our contemporaries, broke loose and swamped everything. No demands, once I had emerged from that flood of sleep, were made upon my nervous energy: there were no automatic drains, such as conversation at meals, small talk, catching trains, or the hundred anxious trivialities that poison everyday life. Even the major causes of guilt and anxiety had slid away into some distant limbo and not only failed to emerge in the small hours as tormentors but appeared to have lost their dragonish validity.”

“[T]he tremendous accumulation of tiredness, which must be the common property of all our contemporaries”–indeed, and to that we might add the tremendous accumulation of stress and anxiety. The Internet, always-on connectivity, and digital devices have not of themselves caused the tiredness, stress, and anxiety, but they haven’t helped either. In certain cases they’ve aggravated the problem. And, I’d suggest, they have done so regardless of what, specifically, we have been doing. Rather, the aggravation is in part a function of how our bodies engage with these tools. Whether we spend a day in front of a computer perusing cat videos, playing Minecraft, writing a research paper, or preparing financial reports makes little difference to our bodies. It is in each case a sedentary day, and sedentary days are, as we all know, less than ideal for our bodies. And, because so much of our well-being depends on our bodies, the consequences extend to the whole of our being.

I know countless critics since the dawn of industrial society have lamented the loss of regular physical activity, particularly activity that unfolded in “nature.” Long before the Internet, such complaints were raised about the factory and the cubicle. It is also true that many of these calls for robust physical activity have been laden with misguided assumptions about the nature of masculinity and worse. But none of this changes the stubborn, intractable fact that we are embodied creatures and the concrete physicality of our nature is subject to certain limits and thrives under certain conditions and not others.

One further point about my experience: some of it was moderately risky. Not extreme-sports risky or risky bordering on foolish, you understand. More like “watch where you step, there might be a rattlesnake” risky (I avoided one by two feet or so) or “take care not to slip off the narrow trail, that’s a 300-foot drop” risky (I took no such falls, happily). I’m not sure what I can claim for all of this, but I would be tempted to make a Merleau-Ponty-esque argument about the sort of engagement with our surroundings that navigating risk requires of us. I’d modestly suggest, on a strictly anecdotal basis, that there is something mentally and physically salubrious about safely navigating the experience of risk. While we’re at it, plug in the “troubles” (read: sometimes risky, often demanding activities) that philosopher Albert Borgmann encourages us to accept in principle.

Of course, it must immediately be added that this is a first-world problem par excellence. Around the globe there are people who have no choice but to constantly navigate all sorts of risks to their well-being, and not of the moderate variety either. It must then seem perverse to suggest that some of us might need to occasionally elect to encounter risk, but only carefully so. Indeed, but such might nonetheless be the case. Certainly, it is also true that all of us are at risk every day when walking a city street, or driving a car, or flying in a plane, and so on. My only rejoinder is again to lean on my experience and suggest that the sort of physical activity I engaged in had the unexpected effect of calling on and honing aspects of my body and mind that are not ordinarily called into service by my typical day-to-day experience, and this was a good thing. The accustomed risks we thoughtlessly take, crossing a street say, precisely because they are a routinized part of our experience do not call forth the same mental and bodily resources.

A final thought. Advocating disconnection sometimes raises the charge of elitism–Sherry Turkle strolling down Cape Cod beaches and whatnot. I more or less get where this is coming from, I think. Disconnection is often construed as a luxury experience. Who gets to placidly stroll the beaches of Cape Cod anyway? And, indeed, it is an unfortunate feature of modernity’s unfolding that what we eliminate from our lives, often to make room for one technology or another, we then end up compensating for with another technology because we suddenly realize that what we eliminated might have been useful and health-giving.

It was Neil Postman, I believe, who observed that having eliminated walking by the adoption of the automobile and the design of our public spaces, we then invented a machine on which we could simulate walking in order to maintain a minimal level of fitness. Postman’s chief focus, if I remember the passage correctly, was to point out the prima facie absurdity of the case, but I would add an economic consideration: in this pattern of technological displacement and replacement, the replacement is always a commodity. No one previously paid to walk, but the treadmill and the gym membership are bought and sold. So it is now with disconnection: it is often packaged as a commodified experience that must be bought, and the costs of disconnection (monetary and otherwise) are for some too high to bear. This is unfortunate if not simply tragic.

But it seems to me that the answer is not to dismiss the practice of disconnecting as such or efforts to engage more robustly with the wider world. If these practices are, even in small measure, steps toward human flourishing, then our task is to figure out how we can make them as widely available as possible.