Just Delete It

Facebook’s had an awful year. Frankly, it would be tedious to recount the details. If you are so inclined, you can read Wired’s summary of “The 21 (and Counting) Biggest Facebook Scandals of 2018.”

The latest round of bad news for Facebook came when the Times reported that the platform granted over 150 companies special access to user data, apparently circumventing stated privacy guidelines and giving the lie once again to their purported desire to give users complete control over their data.

I’m not sure whether these stories even register anymore with those outside the tech journalism/tech criticism/privacy scholarship world. I do see, however, through my small Twitter-sized window into that world, that this story has occasioned another round of debate about the efficacy of deleting Facebook.

More than one tech journalist has re-tweeted a link to Siva Vaidhyanathan’s Times op-ed from March of this year urging us not to delete Facebook but, rather, to do something about it. To which claim my inner monologue immediately rejoins, quitting is doing something. Moreover, how difficult is it to imagine quitting Facebook and doing something about it in the way that these kinds of arguments suggest? You know, quit and call your representative or whatever.

The underlying idea here seems to be that there is no future world where Facebook doesn’t exist. We must stay on because leaving is a luxury and we would be abandoning all of those who do not have such a luxury. This assumes that the platform can sustain good faith efforts to speak and defend the truth, etc. That seems, at best, wildly optimistic.

Alternatively, the assumption may also be that what Facebook provides is, generally speaking, a good thing, but, unfortunately, this “good” service is provided by a “bad” company. The point, then, is to somehow preserve the service while making the company better by means to which companies tend to be more responsive (regulation, lawsuits, etc.).

I’ve written a bit about Facebook over the years—too much, frankly. At just this moment, I’m rather annoyed that I’ve given Facebook as much attention as I have. This summer I wrote a review of Vaidhyanathan’s fine book on Facebook for The New Atlantis, and I commented here on the #DeleteFacebook debate from earlier this year.

In the review, I argued that we should consider the possibility that the service Facebook provides, even if it were delivered by a “good” company, would still be an individually and politically debilitating reality. In the post, I tried to make the case that whatever other action is taken with regards to Facebook, the choice of some to delete their accounts should not be derided or discouraged.

Maybe the lesson we are to take from the last two years is not simply that surveillance capitalism is bad news but also that the kind of ubiquitous connectivity upon which it is built is also bad news. This, it seems, is somehow unthinkable to us. To some damning degree, we seem to agree with Zuckerberg’s ideology of connection, most stridently articulated in the infamous Bosworth memo (also this year!). We’ve bought into the idea that digitally connecting people is somehow an unalloyed and innocent good.

This recalls Alan Jacobs’ point from a couple of years back: “So there is a relationship between distraction and addiction, but we are not addicted to devices. As Brooke’s Snapchat story demonstrates, we are addicted to one another, to the affirmation of our value—our very being—that comes from other human beings. We are addicted to being validated by our peers.” (Although, as we now understand more clearly, the design of the devices and platforms is not irrelevant either.)

I would also suggest that the very stickiness of the platform, the very way it leads us to generate nuanced and dubious arguments as a rationale for remaining, should by itself impel us to cut our ties.

So, look, just delete it. At the very least, let’s not give anyone grief for doing so. They are doing something. And I’m not convinced that what they’re doing isn’t, in fact, the most efficacious action we can take. They are willing to believe what we should all consider more seriously, that we can make do just fine in a world without Facebook.



A Withering Light

I read a tweet the other day from a writer, a very good writer, introducing a thread with links to their work throughout the year. The tweet was all irony and self-deprecation, and I thought to myself this is how the world ends.

More precisely, this is how a world keeps from coming into being. I don’t know when or where I first encountered the line from Gramsci about how the old is dying and the new cannot be born, but it has remained with me ever since as one of those fragments that somehow manages to illuminate the present. I think of it often.

The old is dying. The new cannot be born. The morbid symptoms are all around us. The new cannot be born because there is no darkness within which it can take shape and grow. As Arendt observed, “Everything that lives, not vegetative life alone, emerges from darkness and, however strong its natural tendency to thrust itself into the light, it nevertheless needs the security of darkness to grow at all.”

There is no darkness within which a world can come into being. All is bathed in the light of reflexive self-consciousness. “If you tell people how they can sublimate,” the psychologist Harry Stack Sullivan once observed, “they can’t sublimate.” “Life needs the protection of nonawareness,” Guardini writes. “There is no path to take us outside of ourselves.” “Our action is constantly interrupted by reflection on it,” he adds. “Thus all our life bears the distinctive character of what is interrupted, broken. It does not have the great line that is sure of itself, the confident movement deriving from the self.”

But of what are we conscious when our self-consciousness is mediated by the internet? The knowledge we gain is not knowledge of the self, which has always been elusive. “I have become a question to myself,” St. Augustine declares. It is rather knowledge of the self playing itself to an audience of indeterminate size and presence. Or, to put it another way, we do not become aware of ourselves, we become aware of the relation of the self to itself. Our piety does not consist, as Socrates and Euthyphro contemplated, of a therapy of the gods but rather of a therapy of the self, and this piety is sustained by some of our most sophisticated tools.

Paradoxically, the self is, in some important respect, a gift of the other before whom it appears; it is received rather than made. We come to know ourselves as we become aware of who we are not, through our attentiveness to others. Moreover, insofar as the self is constituted by its desires, it is, like our desires, an intersubjective reality. When the other before whom we appear is the amorphous audience mediated to us by tools designed to induce the self into its own self-management, then the self is positioned in such a way that it cannot receive itself as gift.

The trajectory is long—writing heightens consciousness, Ong observed—but the internet has brought us through a critical threshold. It has been, among much else, a machine for the generation of obsessive self-documentation and self-awareness at both a global and personal scale. It brings a withering light to bear upon our inner lives and suffuses all our public acts with doubt, half-heartedness, and hesitation. How can it be otherwise? The self cannot sustain itself. Or else it fuels our public acts with a petty or violent rage, as if the only way we could prove to ourselves that we are real is by the measure of the suffering we inflict upon ourselves or others (this is the lesson of the Underground Man). Or, as Yeats put it, “the best lack all conviction while the worst are full of passionate intensity.” He speaks as well, interestingly enough, of belated birth, of the rough beast slouching toward Bethlehem to be born.



Digital Media and Our Experience of Time

Early in the life of this site, which is to say about eight years ago, I commented briefly on a story about five scientists who embarked on a rafting trip down the San Juan River in southern Utah in an effort to understand how “heavy use of digital devices” was affecting us. The trip was the brainchild of a professor of psychology at the University of Utah, David Strayer, and Matt Richtel wrote about it for the Times.

I remember this story chiefly because it introduced me to what Strayer called “third day syndrome,” the tendency, after about three days of being “unplugged,” to find oneself more at ease, more calm, more focused, and more rested. Over the past few years, Strayer’s observation has been reinforced by my own experience and by similar unsolicited accounts I’ve heard from several others.

As I noted back in 2010, the rafting trip led one of the scientists to wonder out loud: “If we can find out that people are walking around fatigued and not realizing their cognitive potential … What can we do to get us back to our full potential?”

I’m sure the idea that we are walking around fatigued will strike most as entirely plausible. That we’re not realizing our full cognitive potential, well, yes, that resonates pretty well, too. But, that’s not what mostly concerns me at the moment.

What mostly concerns me has more to do with what I’d call the internalized pace at which we experience the world. I’m not sure that’s the most elegant formulation for what I’m trying to get at. I have in mind something like our inner experience of time, but that’s not exactly right either. It’s more like the speed at which we feel ourselves moving across the temporal dimension.

Perhaps the best metaphor I can think of is that of walking on those long motorized walkways we might find at an airport, for example. If you’re not in a hurry, maybe you stand while the walkway carries you along. If you are in a hurry or don’t like standing, you walk, briskly perhaps. Then you step off at the end of the walkway and find yourself awkwardly hurtling forward for a few steps before you resume a more standard gait.

So that’s our experience of time, no? Within ordinary time, modernity has built the walkways, and we jump on and hurry ourselves along. Then, for whatever reason, we get dumped out into ordinary time, and our first experience is to find ourselves disoriented and somehow still feeling ourselves propelled forward by some internalized temporal inertia.

I feel this most acutely when I take my two toddlers out to the park, usually in the late afternoons after a day of work. I delight in the time; it is a joy and not at all a chore. Yet I frequently find something inside me rushing me along. I’ve entered into ordinary time along with two others who’ve never known any other kind of time, but I’ve been dumped into it after running along the walkway all day, day in and day out, for weeks and months and years. On the worst days, it takes a significant effort of the will to overcome the inner temporal inertia, and that only for a few moments at a time.

This state of affairs is not entirely novel. As Hartmut Rosa notes in Social Acceleration, Georg Simmel in 1897 had already remarked on how “one often hears about the ‘pace of life,’ that it varies in different historical epochs, in the regions of the contemporary world, even in the same country and among individuals of the same social circle.”

Then I recall, too, Patrick Leigh Fermor’s experience boarding at a monastery in the late 1950s. Fermor relates the experience in A Time to Keep Silence:

“The most remarkable preliminary symptoms were the variations of my need of sleep. After initial spells of insomnia, nightmare and falling asleep by day, I found that my capacity for sleep was becoming more and more remarkable: till the hours I spent in or on my bed vastly outnumbered the hours I spent awake; and my sleep was so profound that I might have been under the influence of some hypnotic drug. For two days, meals and the offices in the church — Mass, Vespers and Compline — were almost my only lucid moments. Then began an extraordinary transformation: this extreme lassitude dwindled to nothing; night shrank to five hours of light, dreamless and perfect sleep, followed by awakenings full of energy and limpid freshness. The explanation is simple enough: the desire for talk, movements and nervous expression that I had transported from Paris found, in this silent place, no response or foil, evoked no single echo; after miserably gesticulating for a while in a vacuum, it languished and finally died for lack of any stimulus or nourishment. Then the tremendous accumulation of tiredness, which must be the common property of all our contemporaries, broke loose and swamped everything. No demands, once I had emerged from that flood of sleep, were made upon my nervous energy: there were no automatic drains, such as conversation at meals, small talk, catching trains, or the hundred anxious trivialities that poison everyday life. Even the major causes of guilt and anxiety had slid away into some distant limbo and not only failed to emerge in the small hours as tormentors but appeared to have lost their dragonish validity.”

I read this, in part, as the description of someone who was re-entering ordinary time, someone whose own internal pacing was being resynchronized with ordinary time.

So I don’t think digital media has created this phenomenon, but I do think it has been a powerful accelerating agent. How one experiences time is a complex matter and I’m not Henri Bergson, but one way to account for the experience of time that I’m trying to describe here is to consider the frequency with which we encounter certain kinds of stimuli, the sort of stimuli that assault our attention and redirect it, the kinds of stimuli digital media is most adept at delivering. I suppose the normal English word for what I’ve just awkwardly described is distraction. Having become accustomed to a certain high frequency of distraction, our inner temporal drive runs at a commensurate speed. The temporal inertia I’ve been trying to describe, then, may also be characterized as a withdrawal symptom once we’re deprived of the stimuli or their rate dramatically decreases. The total effect is both cognitive and affective: the product of distraction is distractedness but also agitation and anxiety.

Along these same lines, then, we might say that our experience of time is also structured by desire. Of course, we knew this from the time we were children: the more eagerly you await something, the slower time seems to pass. Deprived of stimuli, we crave them and so grow impatient. We find ourselves in a “frenetic standstill,” to borrow a phrase from Paul Virilio. In this state, we are unable to attend to others or to the world as we should, so the temporal disorder is revealed to have moral as well as cognitive and affective dimensions.

It’s worth mentioning, too, how digital tools theoretically allow us to plan our days and months in fine-grained detail but how they have also allowed us to forgo planning. Constant accessibility means that we don’t have to structure our days or weeks ahead of time. We can fiddle with plans right up to the last second, and frequently do so. The fact that the speed of commerce and communication has dramatically increased also means that I have less reason to plan ahead and I am more likely to make just-in-time purchases and just-in-time arrangements. Consequently, our experience of time amounts to the experience of frequently repeated mini-dashes to beat a deadline, and there are so many deadlines.

“From the perspective of everyday life in modern societies,” Hartmut Rosa writes, “as everyone knows from their own experience, time appears as fundamentally paradoxical insofar as it is saved in ever greater quantities through the ever more refined deployment of modern technology and organizational planning in almost all everyday practices, while it does not lose its scarce character at all.” Rosa cites two researchers who conclude, “American Society is starving,” not for food, of course, “but for the ultimate scarcity of the postmodern world, time.” “Starving for time,” they add, “does not result in death, but rather, as ancient Athenian philosophers observed, in never beginning to live.”



Jacques Ellul Was Not Impressed With “Technical Humanism”

Forgive the rather long excerpt, but I think it’s worth reading. This is Jacques Ellul in The Technological Society discussing the attempt to humanize technology/technique.

It is useful to consider Ellul’s perspective in light of the recent revival of a certain kind of tech humanism that seeks to mitigate what it takes to be the dehumanizing effects of digital technologies. This passage is also worth considering in light of the proliferation of devices and apps designed to gather data about everything from our steps to our sleep habits in order to help us optimize, maximize, manage, or otherwise finely calibrate ourselves. The calibration becomes necessary because the rhythms and patterns of the modern world, owing at least in part to its technological development, have indeed become unlivable. So now the same techno-economic forces present themselves as the solution to the problems they have generated.

The claims of the human being have thus come to assert themselves to a greater and greater degree in the development of techniques; this is known as “humanizing the techniques.” Man is not supposed to be merely a technical object, but a participant in a complicated movement. His fatigue, pleasures, nerves, and opinions are carefully taken into account. Attention is paid to his reactions to orders, his phobias, and his earnings. All this fills the uneasy with hope. From the moment man is taken seriously, it seems to them that they are witnessing the creation of a technical humanism.

If we seek the real reason, we hear over and over again that there is “something out of line” in the technical system, an insupportable state of affairs for a technician. A remedy must be found. What is out of line? According to the usual superficial analysis, it is man that is amiss. The technician thereupon tackles the problem as he would any other. He has a method which has hitherto enabled him to solve all difficulties, and he uses it here too. But he considers man only as an object of technique and only to the degree that man interferes with the proper function of the technique. Technique reveals its essential efficiency in discerning that man has a sentimental and moral life which can have great influence on his material behavior and in proposing to do something about such factors on the basis of its own ends. These factors are, for technique, human and subjective; but if means can be found to act upon them, to rationalize them and bring them into line, they need not be a technical drawback. Of course, man as such does not count.

When the technical problem is well in hand, the professional humanists look at the situation and dub it “humanist.” This procedure suits the literati, moralists, and philosophers who are concerned about the human situation. What is more natural than for philosophers to say: “See how we are concerned with Man?”; and for their literary admirers to echo: “At last, a humanism which is not confined to playing with ideas but which penetrates the facts!” Unfortunately, it is a historical fact that this shouting of humanism always comes after the technicians have intervened; for a true humanism, it ought to have occurred before. This is nothing more than the traditional psychological maneuver called rationalizing.

Since 1947 we have witnessed the same humanist rationalizing with respect to the earth itself. In the United States, for example, methods of large-scale agriculture had been savagely applied. The humanists became alarmed by this violation of the sacred soil, this lack of respect for nature; but the technical people troubled themselves not at all until a steady decline in agricultural productivity became apparent. Technical research discovered that the earth contains certain trace elements which become exhausted when the soil is mistreated. This discovery, made by Sir Albert Howard in his thorough investigation of Indian agriculture, led to the conclusion that animal and vegetable (“organic”) fertilizers were superior to any and all artificial fertilizers, and that it is essential not to exhaust the earth’s reserves. Up to now no one has succeeded in finding a way of replacing trace elements artificially. The technicians have recommended more care in the use of fertilizers and moderation in the utilization of machinery; in short, “respect” for the soil. And all nature lovers rejoice. But was any real respect for the earth involved here? Clearly not. The important thing was agricultural yield.

It might be objected: “Who cares what the real causes were if the result is respect for man or for nature? If technical excess brings us to wisdom, let us by all means develop techniques. If man must be effectively protected by a technique that understands him, we may at least rest assured that he will be better protected than he ever was by all his philosophies.” This is hocus-pocus. Today’s technique may respect man because it is in its interest and part of its normal course of development to do so. But we have no certainty that this will be so in the future. We could have a measure of certainty only if technique, by necessity and for deep and lasting reasons, subordinated its power in principle to the interests of man. Otherwise, a complete reversal is always possible. Tomorrow it might be in technique’s interest to exploit man brutally, to mutilate and suppress him. We have, as of now, no guarantee whatsoever that this is not the road it will take. On the contrary, we see all around us at least as many signs of increasing contempt for man as of respect for him. Technique mixes the one with the other indiscriminately. The only law it follows is that of its own autonomous development. To me, therefore, it seems impossible to speak of a technical humanism.

The Political Paradoxes of Gene Editing Technology

By now you’ve probably heard about the breakthrough in gene editing that was announced on November 25th: the birth of twin girls in China whose genetic code was altered to eliminate a gene, CCR5, in an effort to make the girls resistant to HIV. The genetic alteration was accomplished by the Chinese scientist He Jiankui using the technique known as CRISPR.

Antonio Regalado broke the story and has continued to cover the aftermath at MIT Technology Review: “Chinese scientists are creating CRISPR babies,” “CRISPR inventor Feng Zhang calls for moratorium on gene-edited babies.” Ed Yong has two good pieces at The Atlantic as well: “A Reckless and Needless Use of Gene Editing on Human Embryos” and “The CRISPR Baby Scandal Gets Worse by the Day.”

The public response, such as it is, has been generally negative. Concerns have rightly focused on consequences for the long-term health of the twin girls but also on the impact that this “reckless” venture would have on public acceptance of gene editing moving forward and on the apparent secrecy with which this work was conducted.

These concerns, it should be noted, appear to be mostly procedural rather than substantive. And they are not quite the whole story either. News about the birth of these two girls came on the eve of the Second International Summit on Human Genome Editing. On the first night of the conference, Antonio Regalado, who was in attendance, tweeted the following:

holy cow Harvard Medical School dean George Daley is making the case, big time, and eloquently, FOR editing embryos, at #geneeditsummit

he is says technically we are *ready* for RESPONSIBLE clinic use.

He went on to add, “he’s basically saying, stop debating ethics, start talking about the pathway forward.” The whole series of tweets is worth considering. At one point, Regalado notes, “They are talking about these babies like they are lab rats.”

Two things seem clear at this point. First, we have crossed a threshold and there is likely no going back. Second, among those with meaningful power in these matters, there is little interest in much else besides moving forward.

Among the 15 troubling aspects of He Jiankui’s work detailed by Ed Yong, we find the following:

8. He acted in contravention of global consensus.
9. He acted in contravention of his own stated ethical views.
10. He sought ethical advice and ignored it.
12. He has doubled down.
15. This could easily happen again.

One is reminded of what Alan Jacobs, in another context, dubbed the Oppenheimer Principle.

I call it the Oppenheimer Principle, because when the physicist Robert Oppenheimer was having his security clearance re-examined during the McCarthy era, he commented, in response to a question about his motives, “When you see something that is technically sweet, you go ahead and do it and argue about what to do about it only after you’ve had your technical success. That is the way it was with the atomic bomb” ….

Those who look forward to a future of increasing technological manipulation of human beings, and of other biological organisms, always imagine themselves as the Controllers, not the controlled; they always identify with the position of power. And so they forget evolutionary history, they forget biology, they forget the disasters that can come from following the Oppenheimer Principle — they forget everything that might serve to remind them of constraints on the power they have … or fondly imagine they have.

Jacobs’ discussion here recalls Lewis’ analysis of the Conditioners in The Abolition of Man, where he observed that “What we call Man’s power is, in reality, a power possessed by some men which they may, or may not, allow other men to profit by” and, similarly, “what we call Man’s power over Nature turns out to be a power exercised by some men over other men with Nature as its instrument.” This was especially true, in Lewis’ view, when the human person became the last frontier of the Nature that was to be conquered.

I’m reminded as well of Langdon Winner’s question for philosophers and ethicists of technology. After they have done their work, however admirably, “there remains the embarrassing question: Who in the world are we talking to? Where is the community in which our wisdom will be welcome?”

“It is time to ask,” Winner writes, “what is the identity and character of the moral communities that will make the crucial, world-altering judgments and take appropriate action as a result?” Or, to put it another way, “How can and should democratic citizenry participate in decision making about technology?”

As I’ve suggested before, there is no “we” there. We appear to be stuck with an unfortunate paradox: as the scale and power of technology increases, technology simultaneously becomes more of a political problem and less susceptible to political processes.

It also seems to be the case that an issue such as genetic engineering lies beyond the threshold of how politics has been designed to work in western liberal societies. It is a matter of profound moral consequence involving fundamental questions about the meaning of human life. In other words, the sorts of questions ostensibly bracketed by the liberal democratic order are the very questions raised by this technology.

