Digital Media and Our Experience of Time

Early in the life of this site, which is to say about eight years ago, I commented briefly on a story about five scientists who embarked on a rafting trip down the San Juan River in southern Utah in an effort to understand how “heavy use of digital devices” was affecting us. The trip was the brainchild of a professor of psychology at the University of Utah, David Strayer, and Matt Richtel wrote about it for the Times.

I remember this story chiefly because it introduced me to what Strayer called “third day syndrome,” the tendency, after about three days of being “unplugged,” to find oneself more at ease, more calm, more focused, and more rested. Over the past few years, Strayer’s observation has been reinforced by my own experience and by similar unsolicited accounts I’ve heard from several others.

As I noted back in 2010, the rafting trip led one of the scientists to wonder out loud: “If we can find out that people are walking around fatigued and not realizing their cognitive potential … What can we do to get us back to our full potential?”

I’m sure the idea that we are walking around fatigued will strike most as entirely plausible. That we’re not realizing our full cognitive potential, well, yes, that resonates pretty well, too. But that’s not what mostly concerns me at the moment.

What mostly concerns me has more to do with what I’d call the internalized pace at which we experience the world. I’m not sure that’s the most elegant formulation for what I’m trying to get at. I have in mind something like our inner experience of time, but that’s not exactly right either. It’s more like the speed at which we feel ourselves moving across the temporal dimension.

Perhaps the best metaphor I can think of is that of walking on those long motorized walkways we might find at an airport, for example. If you’re not in a hurry, maybe you stand while the walkway carries you along. If you are in a hurry or don’t like standing, you walk, briskly perhaps. Then you step off at the end of the walkway and find yourself awkwardly hurtling forward for a few steps before you resume a more standard gait.

So that’s our experience of time, no? Within ordinary time, modernity has built the walkways, and we jump on and hurry ourselves along. Then, for whatever reason, we get dumped out into ordinary time, and our first experience is to find ourselves disoriented and somehow still propelled forward by some internalized temporal inertia.

I feel this most acutely when I take my two toddlers out to the park, usually in the late afternoons after a day of work. I delight in the time; it is a joy and not at all a chore. Yet I frequently find something inside me rushing me along. I’ve entered into ordinary time along with two others who’ve never known any other kind of time, but I’ve been dumped into it after running along the walkway all day, day in and day out, for weeks and months and years. On the worst days, it takes a significant effort of the will to overcome the inner temporal inertia, and that only for a few moments at a time.

This state of affairs is not entirely novel. As Hartmut Rosa notes in Social Acceleration, Georg Simmel in 1897 had already remarked on how “one often hears about the ‘pace of life,’ that it varies in different historical epochs, in the regions of the contemporary world, even in the same country and among individuals of the same social circle.”

Then I recall, too, Patrick Leigh Fermor’s experience of boarding at a monastery, which he relates in A Time to Keep Silence:

“The most remarkable preliminary symptoms were the variations of my need of sleep. After initial spells of insomnia, nightmare and falling asleep by day, I found that my capacity for sleep was becoming more and more remarkable: till the hours I spent in or on my bed vastly outnumbered the hours I spent awake; and my sleep was so profound that I might have been under the influence of some hypnotic drug. For two days, meals and the offices in the church — Mass, Vespers and Compline — were almost my only lucid moments. Then began an extraordinary transformation: this extreme lassitude dwindled to nothing; night shrank to five hours of light, dreamless and perfect sleep, followed by awakenings full of energy and limpid freshness. The explanation is simple enough: the desire for talk, movements and nervous expression that I had transported from Paris found, in this silent place, no response or foil, evoked no single echo; after miserably gesticulating for a while in a vacuum, it languished and finally died for lack of any stimulus or nourishment. Then the tremendous accumulation of tiredness, which must be the common property of all our contemporaries, broke loose and swamped everything. No demands, once I had emerged from that flood of sleep, were made upon my nervous energy: there were no automatic drains, such as conversation at meals, small talk, catching trains, or the hundred anxious trivialities that poison everyday life. Even the major causes of guilt and anxiety had slid away into some distant limbo and not only failed to emerge in the small hours as tormentors but appeared to have lost their dragonish validity.”

I read this, in part, as the description of someone who was re-entering ordinary time, someone whose own internal pacing was being resynchronized with ordinary time.

So I don’t think digital media has created this phenomenon, but I do think it has been a powerful accelerating agent. How one experiences time is a complex matter, and I’m not Henri Bergson, but one way to account for the experience of time that I’m trying to describe here is to consider the frequency with which we encounter certain kinds of stimuli, the sort of stimuli that assault our attention and redirect it, the kinds of stimuli digital media is most adept at delivering. I suppose the normal English word for what I’ve just awkwardly described is distraction. Having become accustomed to a certain high-frequency rate of distraction, our inner temporal drive runs at a commensurate speed. The temporal inertia I’ve been trying to describe, then, may also be characterized as a withdrawal symptom once we’re deprived of the stimuli or their rate dramatically decreases. The total effect is both cognitive and affective: the product of distraction is distractedness, but also agitation and anxiety.

Along these same lines, then, we might say that our experience of time is also structured by desire. Of course, we knew this from the time we were children: the more eagerly you await something, the slower time seems to pass. Deprived of stimuli, we crave them and so grow impatient. We find ourselves in a “frenetic standstill,” to borrow a phrase from Paul Virilio. In this state, we are unable to attend to others or to the world as we should, so the temporal disorder is revealed to have moral as well as cognitive and affective dimensions.

It’s worth mentioning, too, how digital tools theoretically allow us to plan our days and months in fine-grained detail but have also allowed us to forgo planning. Constant accessibility means that we don’t have to structure our days or weeks ahead of time. We can fiddle with plans right up to the last second, and frequently do. The fact that the speed of commerce and communication has dramatically increased also means that I have less reason to plan ahead and am more likely to make just-in-time purchases and just-in-time arrangements. Consequently, our experience of time amounts to the experience of frequently repeated mini-dashes to beat a deadline, and there are so many deadlines.

“From the perspective of everyday life in modern societies,” Hartmut Rosa writes, “as everyone knows from their own experience, time appears as fundamentally paradoxical insofar as it is saved in ever greater quantities through the ever more refined deployment of modern technology and organizational planning in almost all everyday practices, while it does not lose its scarce character at all.” Rosa cites two researchers who conclude, “American society is starving,” not for food, of course, “but for the ultimate scarcity of the postmodern world, time.” “Starving for time,” they add, “does not result in death, but rather, as ancient Athenian philosophers observed, in never beginning to live.”


If you’ve appreciated what you’ve read, consider supporting (Patreon) or tipping (Paypal) the writer.

Jacques Ellul Was Not Impressed With “Technical Humanism”

Forgive the rather long excerpt, but I think it’s worth reading. This is Jacques Ellul in The Technological Society discussing the attempt to humanize technology/technique.

It is useful to consider Ellul’s perspective in light of the recent revival of a certain kind of tech humanism that seeks to mitigate what it takes to be the dehumanizing effects of digital technologies. This passage is also worth considering in light of the proliferation of devices and apps designed to gather data about everything from our steps to our sleep habits in order to help us optimize, maximize, manage, or otherwise finely calibrate ourselves. The calibration becomes necessary because the rhythms and patterns of the modern world, owing at least in part to its technological development, have indeed become unlivable. So now the same techno-economic forces present themselves as the solution to the problems they have generated.

The claims of the human being have thus come to assert themselves to a greater and greater degree in the development of techniques; this is known as “humanizing the techniques.” Man is not supposed to be merely a technical object, but a participant in a complicated movement. His fatigue, pleasures, nerves, and opinions are carefully taken into account. Attention is paid to his reactions to orders, his phobias, and his earnings. All this fills the uneasy with hope. From the moment man is taken seriously, it seems to them that they are witnessing the creation of a technical humanism.

If we seek the real reason, we hear over and over again that there is “something out of line” in the technical system, an insupportable state of affairs for a technician. A remedy must be found. What is out of line? According to the usual superficial analysis, it is man that is amiss. The technician thereupon tackles the problem as he would any other. He has a method which has hitherto enabled him to solve all difficulties, and he uses it here too. But he considers man only as an object of technique and only to the degree that man interferes with the proper function of the technique. Technique reveals its essential efficiency in discerning that man has a sentimental and moral life which can have great influence on his material behavior and in proposing to do something about such factors on the basis of its own ends. These factors are, for technique, human and subjective; but if means can be found to act upon them, to rationalize them and bring them into line, they need not be a technical drawback. Of course, man as such does not count.

When the technical problem is well in hand, the professional humanists look at the situation and dub it “humanist.” This procedure suits the literati, moralists, and philosophers who are concerned about the human situation. What is more natural than for philosophers to say: “See how we are concerned with Man?”; and for their literary admirers to echo: “At last, a humanism which is not confined to playing with ideas but which penetrates the facts!” Unfortunately, it is a historical fact that this shouting of humanism always comes after the technicians have intervened; for a true humanism, it ought to have occurred before. This is nothing more than the traditional psychological maneuver called rationalizing.

Since 1947 we have witnessed the same humanist rationalizing with respect to the earth itself. In the United States, for example, methods of large-scale agriculture had been savagely applied. The humanists became alarmed by this violation of the sacred soil, this lack of respect for nature; but the technical people troubled themselves not at all until a steady decline in agricultural productivity became apparent. Technical research discovered that the earth contains certain trace elements which become exhausted when the soil is mistreated. This discovery, made by Sir Albert Howard in his thorough investigation of Indian agriculture, led to the conclusion that animal and vegetable (“organic”) fertilizers were superior to any and all artificial fertilizers, and that it is essential not to exhaust the earth’s reserves. Up to now no one has succeeded in finding a way of replacing trace elements artificially. The technicians have recommended more care in the use of fertilizers and moderation in the utilization of machinery; in short, “respect” for the soil. And all nature lovers rejoice. But was any real respect for the earth involved here? Clearly not. The important thing was agricultural yield.

It might be objected: “Who cares what the real causes were if the result is respect for man or for nature? If technical excess brings us to wisdom, let us by all means develop techniques. If man must be effectively protected by a technique that understands him, we may at least rest assured that he will be better protected than he ever was by all his philosophies.” This is hocus-pocus. Today’s technique may respect man because it is in its interest and part of its normal course of development to do so. But we have no certainty that this will be so in the future. We could have a measure of certainty only if technique, by necessity and for deep and lasting reasons, subordinated its power in principle to the interests of man. Otherwise, a complete reversal is always possible. Tomorrow it might be in technique’s interest to exploit man brutally, to mutilate and suppress him. We have, as of now, no guarantee whatsoever that this is not the road it will take. On the contrary, we see all around us at least as many signs of increasing contempt for man as of respect for him. Technique mixes the one with the other indiscriminately. The only law it follows is that of its own autonomous development. To me, therefore, it seems impossible to speak of a technical humanism.

The Political Paradoxes of Gene Editing Technology

By now you’ve probably heard about the breakthrough in gene editing that was announced on November 25th: the birth of twin girls in China whose genetic code was altered to disable a gene, CCR5, in an effort to make the girls resistant to HIV. The genetic alteration was accomplished by the Chinese scientist He Jiankui using the technique known as CRISPR.

Antonio Regalado broke the story and has continued to cover the aftermath at MIT Technology Review: “Chinese scientists are creating CRISPR babies” and “CRISPR inventor Feng Zhang calls for moratorium on gene-edited babies.” Ed Yong has two good pieces at The Atlantic as well: “A Reckless and Needless Use of Gene Editing on Human Embryos” and “The CRISPR Baby Scandal Gets Worse by the Day.”

The public response, such as it is, has been generally negative. Concerns have rightly focused on the consequences for the long-term health of the twin girls, on the impact this “reckless” venture will have on public acceptance of gene editing moving forward, and on the apparent secrecy with which the work was conducted.

These concerns, it should be noted, appear to be mostly procedural rather than substantive. And they are not quite the whole story either. News about the birth of these two girls came on the eve of the Second International Summit on Human Genome Editing. On the first night of the conference, Antonio Regalado, who was in attendance, tweeted the following:

holy cow Harvard Medical School dean George Daley is making the case, big time, and eloquently, FOR editing embryos, at #geneeditsummit

he is says technically we are *ready* for RESPONSIBLE clinic use.

He went on to add, “he’s basically saying, stop debating ethics, start talking about the pathway forward.” The whole series of tweets is worth considering. At one point, Regalado notes, “They are talking about these babies like they are lab rats.”

Two things seem clear at this point. First, we have crossed a threshold and there is likely no going back. Second, among those with meaningful power in these matters, there is little interest in much else besides moving forward.

Among the 15 troubling aspects of He Jiankui’s work detailed by Ed Yong, we find the following:

8. He acted in contravention of global consensus.
9. He acted in contravention of his own stated ethical views.
10. He sought ethical advice and ignored it.
12. He has doubled down.
15. This could easily happen again.

One is reminded of what Alan Jacobs, in another context, dubbed the Oppenheimer Principle.

I call it the Oppenheimer Principle, because when the physicist Robert Oppenheimer was having his security clearance re-examined during the McCarthy era, he commented, in response to a question about his motives, “When you see something that is technically sweet, you go ahead and do it and argue about what to do about it only after you’ve had your technical success. That is the way it was with the atomic bomb” ….

Those who look forward to a future of increasing technological manipulation of human beings, and of other biological organisms, always imagine themselves as the Controllers, not the controlled; they always identify with the position of power. And so they forget evolutionary history, they forget biology, they forget the disasters that can come from following the Oppenheimer Principle — they forget everything that might serve to remind them of constraints on the power they have … or fondly imagine they have.

Jacobs’ discussion here recalls Lewis’ analysis of the Conditioners in The Abolition of Man, where he observed that “What we call Man’s power is, in reality, a power possessed by some men which they may, or may not, allow other men to profit by” and, similarly, “what we call Man’s power over Nature turns out to be a power exercised by some men over other men with Nature as its instrument.” This was especially true, in Lewis’ view, when the human person became the last frontier of the Nature that was to be conquered.

I’m reminded as well of Langdon Winner’s question for philosophers and ethicists of technology. After they have done their work, however admirably, “there remains the embarrassing question: Who in the world are we talking to? Where is the community in which our wisdom will be welcome?”

“It is time to ask,” Winner writes, “what is the identity and character of the moral communities that will make the crucial, world-altering judgments and take appropriate action as a result?” Or, to put it another way, “How can and should democratic citizenry participate in decision making about technology?”

As I’ve suggested before, there is no “we” there. We appear to be stuck with an unfortunate paradox: as the scale and power of technology increases, technology simultaneously becomes more of a political problem and less susceptible to political processes.

It also seems to be the case that an issue such as genetic engineering lies beyond the threshold of how politics has been designed to work in western liberal societies. It is a matter of profound moral consequence involving fundamental questions about the meaning of human life. In other words, the sorts of questions ostensibly bracketed by the liberal democratic order are the very questions raised by this technology.


If you’ve appreciated what you’ve read, consider supporting (Patreon) or tipping (Paypal) the writer.

The Allegory of the Cave for the Digital Age

You remember Plato’s famous allegory of the cave. Plato invites us to imagine a cave in which prisoners have been shackled to a wall, unable to move or turn their heads. On the wall before them are shadows cast as the light of a fire shines on people walking along a path above and behind the prisoners. Plato asks us to consider whether these prisoners would not take the shadows for reality, and then whether our situation is not quite similar to theirs.

So far as Plato is concerned, sense experience cannot reveal to us what is, in fact, real. What is real, what is true, what is good exists outside the material realm and is accessible chiefly by the operations of the mind acting independently of sense experience.

In the allegory, Plato imagines that one of the prisoners has managed to free himself. He turns around and sees the fire that has been casting the light and the people whose shadows he had mistaken for reality. It is a painful experience, of course, chiefly because the fire dazzles the eyes. Plato also tells us that this same prisoner manages, with much difficulty, to climb out of the cave in order to see the light of the sun and behold the sun itself. That experience is analogous to the experience of one who, through the exercise of reason and contemplation, has attained to behold the highest order of being, the form of the good.

The allegory of the cave, odd as it might strike us, memorably exemplifies one of Plato’s enduring contributions to what we might think of as the Western imagination. It posits the existence of two worlds, as it were: one material and one immaterial, the former accessible to the senses, the latter not. In Plato’s account, it is the philosopher who takes it upon himself, like the man who has escaped the cave, to discover the truth of things.

I thought of the allegory of the cave as I read about an online service I wrote about a few days ago, Predictim, which promises to vet potential babysitters by analyzing their social media feeds. The service is representative of a class of tools and services that claim to leverage the combined power of data, AI, and/or algorithms in order to arrive at otherwise inaccessible knowledge, insight, or certainty. They claim, in other words, to lead us out of the cave to see the truth of things, to grant us access to what is real beyond appearances.

Only in the new allegory, it is not the philosopher who is able to ascend out of the cave; it is the data analyst or, more precisely, the tools of data gathering and analysis that the data analyst himself may not fully understand. Indeed, the knowledge gained is very often knowledge without understanding. This is, of course, an element of the residual but transmuted Platonism: the knowledge is inaccessible to the senses not because it lies in an immaterial realm, unless one conceives of information or abstracted data as basically immaterial, but because it requires the encompassing of a body of data so expansive that no one mind can comprehend it.

Ad for Microsoft Band c. 2014

Arendt noted that the two-tiered Platonic structure was subject to a series of inversions in the 19th century, most notably at the hands of Kierkegaard, Marx, and Nietzsche. But, as she points out, they cannot altogether escape Plato because they are working within the tensions he generated. It might be better to imagine, then, that the two-tiered world was simply immanentized: the dual structure was merely brought into this world. Rather than conceiving of an immaterial realm that transcends the material realm, we can conceive of the material realm itself divided into two spheres. The first is generally accessible to our senses, but the second is not.

As Arendt herself observes, modernity was characterized in part by this shattering of confidence in our perceptive capacities. The earth was not the center of the universe, for example, as our senses suggested, and it turned out there were whole worlds, telescopic and microscopic, which were inaccessible to our unaided perceptive apparatus. Indeed, Arendt and others have suggested that in the modern world we claim to know only what we can make.

We might say that the border of the two-tiered world now runs through the self. Data and some magic algorithmic sauce are now the key that unlocks the truth about the self. It’s knowledge that others seek about us, and it is knowledge we also seek about ourselves. My main interest here has not been to question the validity of such knowledge, although that’s a necessary exercise, but to note the assumptions that make the promises of data analytics plausible. One of those assumptions seems to be the belief that the truth, about the self in this case, lies in a realm that is inaccessible to our ordinary means of perception but accessible by other, esoteric means.

Regarding the validity of the knowledge of the self to be gained, I would suggest the only self we’ll discover is the self we’ve become as we have sought the self in our data. In this respect, then, the Platonic structure is modernized: it is not ultimately about beholding and knowing what is there but about knowing what we are fabricating. The instruments constitute the reality they purport to reveal, both by my awareness of them and by their delimiting the legible aspects of the self. The only self I can know on these terms is the self that I am in the process of creating through my tools. In some non-trivial way, it would seem that the work of climbing out of the cave is simultaneously the work of creating a new cave in which we will dwell.

______________________

See also “Data Science as Machinic Neoplatonism” by Dan McQuillan.


If you’ve appreciated what you’ve read, consider supporting (Patreon) or tipping (Paypal) the writer.


Messages In Bottles

[Caveat lector: this is a pitch, to be read in your best modulated NPR host voice.]

_______________

“The Republic of Newsletters and the Isles of Blogging, my friend. That’s what’s left. Messages in bottles from hermit caves by the sea.”

That’s the English writer Warren Ellis. I’m not sure where he wrote that, but I caught it in a tweet from Austin Kleon right before I sat down to compose this post.

Ellis’ sentiment resonated with me. As most of you reading this know, it describes rather well where I feel most at home in the writing world: here on this site, which is in its ninth year, and in the newsletter I began publishing earlier this year. And, admittedly, it does sometimes feel just like sending messages in bottles from a hermit cave by the sea. I’m never quite sure what exactly I’m accomplishing with regard to the kinds of problems I tend to write about, but I keep writing.

Sometimes I think of the title of the Flannery O’Connor short story, “The Life You Save May Be Your Own,” and it occurs to me that this is pretty much what I’m doing here: saving my own life in some way I don’t altogether understand by continuing to think in public, as it were. If this thinking in public ends up being helpful to someone else as well, all the better.

I think I’ve got something to say, of course; it would be disingenuous to suggest otherwise. But, as I see it, that’s chiefly a function of the intellectual and moral resources I channel. Whatever is valuable in the point of view I take is entirely derivative of this eclectic set of resources. The best I know to do is to bring the light of those who have illuminated my understanding to bear on the present darkness and, perhaps, combine those lights in a way that amplifies their power.

As I’ve mentioned here before, I’ve found myself with a brief window in which to make some sort of sustainable living out of the independent scholar/nontraditional academic/freelancer gig, while also working to build CSET from the ground up (exciting things coming next year, by the way). If you think something of value is going on here and that it might be worth supporting, I’ve got a link for you right here. Consider this my combined Black Friday/Small Business Saturday/Cyber Monday appeal. (I’ll spare you the “for the cost of just one tall macchiato a day” spiel.) Alternatively, there will be a link at the end of each post and in the sidebar should you occasionally feel the urge to leave a tip.

Cheers!

_______________

That’s it, we both made it, barely. Drop the NPR voice. The pitch is over. Won’t do that again for a long time. Thanks for reading. Regularly scheduled programming will resume now.