Distracted from What?

Via Alan Jacobs, I came across an essay by Frank Furedi titled, “Age of Distraction: Why the idea digital devices are destroying our concentration and memory is a myth.”

One would expect that an essay so titled would go on to demonstrate by carefully reasoned arguments and by the deployment of relevant evidence that the claim in question was indeed a myth. One would be disappointed.

“If all the recent reports of memory loss and diminishing attention spans are to be believed,” Furedi notes sardonically, “it is unlikely that you will get to the end of this essay.” Perhaps he was banking on it and hoping you would simply take him at his word.

Furedi opens by acknowledging that every time he meets with educators, he is invariably confronted with the lament that we live in an age of distraction. In paragraph after paragraph, he then references study after study that appears to confirm the educators’ fears, followed by a spate of recent books that raise similar concerns about our use of digital devices. It’s a curious rhetorical strategy. The attentive reader may be forgiven for thinking that the title was an error perpetrated by a careless editor. More likely, of course, we are simply being set up: surely Furedi will now show us why all of this evidence and each of these critics should be dismissed.

He does not. Rather, he merely asserts that new technology is “unlikely” to be the cause of our disordered attention:

Whatever one makes of the current claims about the effects of our supposed Age of Distraction, it should be evident that their cause is unlikely to be the workings of new technology. The experience of the past indicates that most of the troubles attributed to the internet and digital technology have served as topics of concern in previous centuries. Contributions on the current challenges facing readers recycle an age-old mantra that there is too much choice, too much information and too much change. It is far more likely that our current predicament is not the availability of powerful and exciting new technologies of communication, but an uncertainty about what to communicate.

Jacobs, who has written a little gem of a book bearing the title The Pleasures of Reading in an Age of Distraction, rightly skewered Furedi:

Furedi does not offer any evidence for what he believes is the “far more likely” explanation for “our current predicament.” He just says that his view is more likely. He does not explain what he thinks “our current predicament” is. He disbelieves the studies suggesting that human concentration and memory are affected by the use of digital devices, but he does not say why he disbelieves them: he offers no reasons for doubting their conclusions. He notes that rhetorically similar comments have been made about other technologies in the past, but does not inquire whether those earlier comments were right or wrong, nor does he explain how critiques of some past technologies are relevant to the assessment of other technologies today. He has written a good many words here without showing any curiosity about the truth, and without providing evidence to support a single one of his claims. Perhaps he was too distracted to do the job properly.

Insofar as there is any kind of argument in Furedi’s essay, it is this: “The truth is, 21st century society may fear distraction and that our attention span is diminishing, but so did our ancestors.” So what? Even if we were to establish that prior concerns were indeed meaningfully analogous, what exactly would this prove?

I’ve commented on this non-argument a time or two because it occurs so frequently in discussions about technology, frequently enough to qualify as a Borg Complex symptom and as the second bit of unsolicited advice I’ve offered to tech writers: “Do not cite apparent historical parallels to contemporary concerns about technology as if they invalidated those concerns. That people before us experienced similar problems does not mean that they magically cease being problems today.”

It is not, to be clear, that the historical parallels are irrelevant or useless. It is only that the precise nature of their relevance is not obvious. Some work needs to be done in order to establish their relevance in the first place, to show that the parallels are not merely apparent. Then, more than a little work needs to be done in order to establish the significance of that relevance. One cannot simply posit the historical parallels and leave it at that, as if their significance were plain to see.

Laying all of this to one side, however, Furedi did raise a question that we would do well to consider: “The question that is rarely posed by advocates of the distraction thesis is: what are people distracted from?”

Very often, of course, the answer is the deep reading of challenging texts. That said, Furedi’s question brings to mind an oft-cited observation by Hannah Arendt regarding the promise of automation:

It is a society of laborers which is about to be liberated from the fetters of labor, and this society does no longer know of those other higher and more meaningful activities for the sake of which this freedom would deserve to be won . . . . What we are confronted with is the prospect of a society of laborers without labor, that is, without the only activity left to them.  Surely, nothing could be worse.

If we were suddenly able to achieve the kind of attentiveness whose loss or lack critics, past and present, have lamented, would we know how to direct it or what to do with it? Upon what objects would we lavish this hard-won resource?

Today we generally value leisure for leisure’s sake, and it is a diminished notion of leisure at that. In another age, as Arendt knew, leisure was not an end in itself but rather the state that allowed the fortunate few to pursue some higher, noble purpose. Not incidentally, that purpose, or, better, that constellation of purposes, might best be understood as varieties of the vita contemplativa–the contemplative life in its classical, Christian, or Eastern manifestations–which would have been the raison d’être of both leisure and attention.

Here is the question I’m left with at this point: Is our distractedness not only an effect of our technological environment but also a consequence of the absence of a normative telos that might give our attention something at which to aim? Or, to put it another way, is distractedness the natural state of the liberated will that refuses to be captured by goods external to itself? If so, then it appears that distractedness may be understood as the natural state of the aimless soul, free to attend to whatever it will but with no compelling reason to attend for very long to anything in particular.


Follow up here: Attention and the Moral Life.

The Consolations of a Technologically Re-enchanted World

Navneet Alang writes about digital culture with a rare combination of insight and eloquence. In a characteristically humane meditation on the perennial longings expressed by our use of social media and digital devices, Alang recounts a brief exchange he found himself having with Alexa, the AI assistant that accompanies Amazon Echo.

Alang had asked Alexa about the weather while he was traveling in an unfamiliar city. Alexa alerted him to the forecasted rain, and, without knowing exactly why, Alang thanked the device. “No problem,” Alexa replied.

It was Alang’s subsequent reflection on that exchange that I found especially interesting:

In retrospect, I had what was a very strange reaction: a little jolt of pleasure. Perhaps it was because I had mostly spent those two weeks alone, but Alexa’s response was close enough to the outline of human communication to elicit a feeling of relief in me. For a moment, I felt a little less lonely.

From there, Alang considers apps which allow users to anonymously publish their secrets to the world or to the void–who can tell–and little-used social media sites on which users compose surprisingly revealing messages seemingly directed at no one in particular. A reminder that, as Elizabeth Stoker Bruenig has noted, “Confession, once rooted in religious practice, has assumed a secular importance that can be difficult to describe.”

Part of what makes the effort to understand technology so fascinating and challenging is that we are not, finally, trying to understand discrete artifacts or even expansive systems; what we are really trying to understand is the human condition, alternately and sometimes simultaneously expressed, constituted, and frustrated by our use of all that we call technology.

As Alang notes near the end of his essay, “what digital technologies do best, to our benefit and detriment, is to act as a canvas for our desires.” And, in his discussion, social media and confessional apps express “a wish to be seen, to be heard, to be apprehended as nothing less than who we imagine ourselves to be.” In the most striking paragraph of the piece, Alang expands on this point:

“Perhaps, then, that Instagram shot or confessional tweet isn’t always meant to evoke some mythical, pretend version of ourselves, but instead seeks to invoke the imagined perfect audience—the non-existent people who will see us exactly as we want to be seen. We are not curating an ideal self, but rather, an ideal Other, a fantasy in which our struggle to become ourselves is met with the utmost empathy.”

This strikes me as being rather near the mark. We might also consider the possibility that we seek this ideal Other precisely so that we might receive back from it a more coherent version of ourselves. The empathetic Other who comes to know me may then tell me what I need to know about myself. A trajectory begins to come into focus, one that takes in both the confessional booth and the therapist’s office. Perhaps this presses the point too far; I don’t know. It is, in any case, the promise implicit in the rhetoric of Big Data: that it is the Other that knows us better than we know ourselves. If, to borrow St. Augustine’s formulation, we have become a question to ourselves, then the purveyors of Big Data proffer to us the answer.

It also strikes me that the yearning Alang describes, in another era, would have been understood chiefly as a deeply religious longing. We may see it as fantasy, or, as C.S. Lewis once put it, we may see it as “the truest index of our real situation.”

Interestingly, the paragraph from which that line is taken may bring us back to where we started: with Alang deriving a “little jolt of pleasure” from his exchange with Alexa. Here is the rest of it:

“Apparently, then, our lifelong nostalgia, our longing to be reunited with something in the universe from which we now feel cut off, to be on the inside of some door which we have always seen from the outside, is no mere neurotic fancy, but the truest index of our real situation.”

For some time now, I’ve entertained the idea that the combination of technologies that promises to animate our mute and unresponsive material environment–think Internet of Things, autonomous machines, augmented reality, AI–entices us with a re-enchanted world: the human-built world, technologically enchanted. Which is to say, a material world that flatters us by appearing to be responsive to our wishes and desires, even speaking to us when spoken to–in short, noticing us and thereby marginally assuaging the loneliness for which our social media posts are just another sort of therapy.

Fin

Had you asked me at any point during the last sixteen years about my plans, I would have mentioned something about earning a PhD. It is a strange thing to walk away from a goal that has, for so long, structured the shape of one’s thinking about the future. But that is exactly what I have done.

I’m tempted to wax introspective about this, but I think I’ll resist. Frankly, I’m not entirely sure that I would know quite what to say at this point anyway. It was, as you might imagine, a difficult decision–among the most difficult I’ve had to make–and, under the circumstances, I have as much peace about it as I could reasonably expect to have.

So there you have it.

Many of you have been reading this blog for a while and may remember that it began alongside my coursework nearly seven years ago. I’ve appreciated the support and encouragement many of you have given along the way, so it seemed only natural to let you all know.

As for the future, I’ll continue thinking and writing about technology as time allows, on this site and elsewhere. I’ll be pitching essays and reviews here and there. This site is getting a bit of a makeover in line with these newer goals. And, as time and resources allow, I’ll be working to make something of CSET. I’ll keep you apprised on both fronts.

Cheers!

Maybe the Kids Aren’t Alright

Consider the following statements regarding the place of digital media in the lives of a cohort of thirteen-year-olds:

“One teenager, Fesse, was usually late – partly because he played Xbox till late into the night ….”

“We witnessed a fair number of struggles to make the technology work, or sometimes to engage pupils with digital media in the classroom.”

“Homework was often accompanied by Facebook, partly as a distraction and partly for summoning help from friends. Some became quickly absorbed in computer games.”

“Adam [played] with people from the online multi-player game in which he could adopt an identity he felt was truly himself.”

“Megan worked on creating her private online space in Tumblr – hours passing by unnoticed.”

“Each found themselves drawn, to varying degrees, into their parents’ efforts to gather as a family, at supper, through shared hobbies, looking after pets, or simply chatting in front of the television – albeit each with phones or tablets at the ready – before peeling off in separate directions.”

“Digital devices and the uses they put them to have become teenagers’ way of asserting their agency – a shield from bossy parents or annoying younger siblings or seemingly critical teachers, a means to connect with sympathetic friends or catching up with ongoing peer ‘drama.'”

Okay, now what would be your initial thoughts about the state of affairs described by these statements? Generally speaking, presented with these observations about the lives of thirteen-year-olds, I’d think we might be forgiven a bit of concern. Sure, some of this describes the generally recognizable behavior of “teenagers” writ large, and nothing here necessarily suggests a life-or-death matter, but it nonetheless seems to me that we might wish things were a touch different in some respects. At least, we might want a little more information about how these factors play out over the long run.

But the author framed these statements with interpretive comments of this sort:

“… the more we know about teenagers’ lives the clearer it becomes that young people are no more interested in being constantly plugged in than are the adults around them.”

“As adults and parents, we might spend less time worrying about what they get up to as teenagers and more time with them, discussing the challenges that lie ahead for them as adults in an increasingly connected world.”

Couple that with the opening paragraph, which begins thus: “With each generation the public consciousness conjures up a new fear for our youth ….” There is no quicker way to signal that you are not at all concerned about something than by leading with “each generation, blah, blah, blah.”

When I first read this piece, I felt a certain dissonance, and I couldn’t quite figure out its source. After thinking about it a bit more, I realized that the dissonance arose from the incongruity between the cheery, “the kids are alright” tone of the article and what the article actually reported.

(I might add that part of my unease also regards methodology. Why would we think that the students were any more transparent with this adult researcher in their midst than they were with the teachers whose halting attempts to connect with them via digital media they hold in apparent contempt? Mind you, this may very well be addressed in a perfectly adequate manner by the author in the book that this article introduces.)

Let me be clear: I’m not calling for what is conventionally and dismissively referred to as a “moral panic.” But I don’t think our only options are “everything is going to hell” and “we live in a digital paradise, quit complaining.” And what is reported in this article suggests to me that we should not be altogether unconcerned about how digital media floods every aspect of our lives and the lives of our children.

To the author’s point that “the more we know about teenagers’ lives the clearer it becomes that young people are no more interested in being constantly plugged in than are the adults around them,” I reply, that’s a damnably low bar and, thus, little comfort.

And when the author preaches, “As adults and parents, we might spend less time worrying about what they get up to as teenagers and more time with them, discussing the challenges that lie ahead for them as adults in an increasingly connected world,” I reply, that’s exactly what many adults and parents are trying to do, but many of them feel as if they are fighting a losing battle against the very thing you don’t want them to worry about.

One more thought: we are deeply invested in the comforting notion that “the kids are alright,” aren’t we? I’m not saying that they are not, or that they will not be, alright. I’m just not sure. Maybe some will be and some won’t. Some of the very stories the website links alongside the article in question suggest that there are at least some troubling dimensions to the place of digital media in the lives of teens. I’ve spent the better part of the last fifteen years teaching teens in multiple contexts. In my experience, with a much larger data set, mind you, there are indeed reasons to be hopeful, but there are also reasons to be concerned. But never mind that; we really want to believe that they will be just fine regardless.

That desire to believe the “kids are alright” couples all too well with the desire to hold our technology innocent of all wrong. My technological habits are no different, maybe they’re worse, so if the kids are alright then so am I. Perhaps the deeper desire underlying these tendencies is the desire to hold ourselves blameless and deflect responsibility for our own actions. If the “kids are alright” no matter what we do or how badly we screw up, then I’ve got nothing to worry about as an adult and a parent. And if the technologies that I’ve allowed to colonize my life and theirs are never, ever to blame, then I can indulge in them to my heart’s content without so much as a twinge of compunction. I get a pass either way, and who doesn’t want that? But maybe the kids are not altogether alright, and maybe it is not altogether their fault but ours.

Finally, one last thought occurred to me. Do we even know what it would mean to be alright anymore? Sometimes I think all we’re aiming at is something like a never-ending and exhausting management of perpetual chaos. Maybe we’ve forgotten how our lives might be alternatively ordered. Maybe our social and cultural context inhibits us from pursuing a better ordered life. Perhaps out of resignation, perhaps for lack of imagination, perhaps because we lack the will, we dare not ask what might be the root causes of our disorders. If we did, we might find that some cherished and unquestioned value, like our own obsession with unbridled individual autonomy, might be complicit. Easier to go on telling ourselves that everything will be alright.

Links and News

Zygmunt Bauman, the esteemed Polish sociologist, is now 90 years old. His advanced age has done nothing to slow his prodigious productivity. During a recent visit to Spain, he gave an interview that was published in El País. I encourage you to read the whole thing; it is not very long. There was one paragraph in particular that caught my attention:

The question of identity has changed from being something you are born with to a task: you have to create your own community. But communities aren’t created, and you either have one or you don’t. What the social networks can create is a substitute. The difference between a community and a network is that you belong to a community, but a network belongs to you. You feel in control. You can add friends if you wish, you can delete them if you wish. You are in control of the important people to whom you relate. People feel a little better as a result, because loneliness, abandonment, is the great fear in our individualist age. But it’s so easy to add or remove friends on the internet that people fail to learn the real social skills, which you need when you go to the street, when you go to your workplace, where you find lots of people who you need to enter into sensible interaction with. Pope Francis, who is a great man, gave his first interview after being elected to Eugenio Scalfari, an Italian journalist who is also a self-proclaimed atheist. It was a sign: real dialogue isn’t about talking to people who believe the same things as you. Social media don’t teach us to dialogue because it is so easy to avoid controversy… But most people use social media not to unite, not to open their horizons wider, but on the contrary, to cut themselves a comfort zone where the only sounds they hear are the echoes of their own voice, where the only things they see are the reflections of their own face. Social media are very useful, they provide pleasure, but they are a trap.

I’m not sure that the very broad strokes with which he characterized social media are entirely fair. A bit more nuance is probably called for. That quibble aside, the bit about networks and communities strikes me as quite well put and worthy of our consideration. One way of characterizing the history of modernity is precisely as the progressive liberation of the individual from the bounds of traditional communities. This liberation has not been without its costs.

I’ll pass along another interview, this one in Dezeen with Maarten Hajer, chief curator of the International Architecture Biennale Rotterdam 2016. I found it notable chiefly for Hajer’s determined opposition to what I’ve called a Borg Complex:

Speaking at an opening event for the biennale, Hajer called for architects and designers to stop treating the advent of smart technologies as inevitable, and to question whether they will solve any problems at all.

“People with lots of media force pretend to know exactly what the future will look like, as if there is no choice,” he said. “I’m of course thinking about self-driving vehicles inevitably coming our way [….]

Discussions about the future of cities are at risk of being “mesmerised” by technology, he added.

“We think about big data coming towards us, 3D printing demoting us, or the implication of robots in the sphere of health, as if they are inevitabilities. My call is for us to think about what we want from those technological advances.”

Quite right.

I should add, too, that I’ve very recently started posting with some regularity at CSET’s blog. Click over and follow along.