Attention and the Moral Life

I’ve continued to think about a question raised by Frank Furedi in an otherwise lackluster essay about distraction and digital devices. Furedi set out to debunk the claim that digital devices are undermining our attention and our memory. I don’t think he succeeded, but he left us with a question worth considering: “The question that is rarely posed by advocates of the distraction thesis is: what are people distracted from?”

In an earlier post, I suggested that this question can be usefully set alongside a mid-20th century observation by Hannah Arendt. Considering the advent of automation, Arendt feared “the prospect of a society of laborers without labor, that is, without the only activity left to them.” “Surely, nothing could be worse,” she added.

The connection might not have been as clear as I imagined it, so let me explain. Arendt believed that labor is the “only activity left” to the laborer because the glorification of labor in modern society had eclipsed the older ends and goods to which labor had been subordinated and for the sake of which we might have sought freedom from labor.

To put it as directly as I can, Arendt believed that if we indeed found ourselves liberated from the need to labor, we would not know what to do with ourselves. We would not know what to do with ourselves because, in the modern world, laboring had become the ordering principle of our lives.

Recalling Arendt’s fear, I wondered whether we were not in a similar situation with regard to attention. If we were able to successfully challenge the regime of digital distraction, to what would we give the attention that we would have fought so hard to achieve? Would we be like the laborers in Arendt’s analysis, finally free but without anything to do with our freedom? I wondered, as well, whether it was not harder to combat distraction, should we be so inclined, precisely because we had no telos for the sake of which we might undertake the struggle.

Interestingly, then, while the link between Arendt’s comments about labor and the question about the purpose of attention was initially only suggestive, I soon realized the two were more closely connected. They were connected by the idea of leisure.

We tend to think of leisure merely as an occasional break from work. That is not, however, how leisure was understood in either classical or medieval culture. Josef Pieper, a Catholic philosopher and theologian, was thinking about the cultural ascendancy of labor, or work, and the eclipse of leisure around the same time that Arendt was articulating her fears of a society of laborers without labor. In many respects, their analyses overlap. (I should note, though, that Arendt distinguishes between labor and work in a way that Pieper does not. Work for Pieper is roughly analogous to labor in Arendt’s taxonomy.)

For her part, Arendt believed nothing could be worse than liberating laborers from labor at this stage in our cultural evolution, and this is why:

“The modern age has carried with it a theoretical glorification of labor and has resulted in a factual transformation of the whole of society into a laboring society.  The fulfillment of the wish, therefore, like the fulfillment of wishes in fairy tales, comes at a moment when it can only be self-defeating. It is a society of laborers which is about to be liberated from the fetters of labor, and this society does no longer know of those other higher and more meaningful activities for the sake of which this freedom would deserve to be won. Within this society, which is egalitarian because this is labor’s way of making men live together, there is no class left, no aristocracy of either a political or spiritual nature from which a restoration of the other capacities of man could start anew.”

To say that there is “no aristocracy of either a political or spiritual nature” is another way of saying that there is no leisured class in the older sense of the word. This older ideal of leisure did not entail freedom from labor for the sake of endless poolside lounging while sipping Coronas. It was freedom from labor for the sake of intellectual, political, moral, or spiritual aims, the achievement of which may very well require arduous discipline. We might say that it was freedom from the work of the body that made it possible for someone to take up the work of the soul or the mind. Thus Pieper can claim that leisure is “a condition of the soul.” But, we should also note, it was not necessarily a solitary endeavor, or, better, it was not an endeavor that had only the good of the individual in mind. It often involved service to the political or spiritual community.

Pieper further defines leisure as “a form of that stillness that is the necessary preparation for accepting reality; only the person who is still can hear, and whoever is not still cannot hear.” He makes clear, though, that the stillness he has in mind “is not mere soundlessness or a dead muteness; it means, rather, that the soul’s power, as real, of responding to the real – a co-respondence, eternally established in nature – has not yet descended into words.” Thus, leisure “is the disposition of receptive understanding, of contemplative beholding, and immersion – in the real.”

Pieper also claims that leisure “is only possible on the assumption that man is not only in harmony with himself, whereas idleness is rooted in the denial of this harmony, but also that he is in agreement with the world and its meaning. Leisure lives on affirmation.” The passing comment on idleness is especially useful to us.

In our view, leisure and idleness are nearly indistinguishable. But on the older view, idleness is not leisure; indeed, it is the enemy of leisure. Idleness, on the older view, may even take the shape of frenzied activity undertaken for the sake of, yes, distracting us from the absence of harmony or agreement with ourselves and the world.

We are now inevitably within the orbit of Blaise Pascal’s analysis of the restlessness of the human condition. Because we are not at peace with ourselves or our world, we crave distraction or what he called diversions. “What people want,” Pascal insists, “is not the easy peaceful life that allows us to think of our unhappy condition, nor the dangers of war, nor the burdens of office, but the agitation that takes our mind off it and diverts us.” “Nothing could be more wretched,” Pascal added, “than to be intolerably depressed as soon as one is reduced to introspection with no means of diversion.”

The novelist Walker Percy, a younger contemporary of both Arendt and Pieper, described what he called the “diverted self” as follows: “In a free and affluent society, the self is free to divert itself endlessly from itself. It works in order to enjoy the diversions that the fruit of one’s labor can purchase.” For the diverted self, Percy concluded, “The pursuit of happiness becomes the pursuit of diversion.”

If leisure is a condition of the soul, as Pieper would have it, then might we also say the same of distraction? Discrete instances of being distracted, of failing to meaningfully direct our attention, would then be symptoms of a deeper disorder. Our digital devices, in this framing of distraction, are both a material cause and an effect. The absence of digital devices would not cure us of the underlying distractedness or aimlessness, but their presence preys upon, exacerbates, and amplifies this inner distractedness.

It is hard, at this point, for me not to feel that I have been speaking in another language or at least another dialect, one whose cadences and lexical peculiarities are foreign to our own idiom and, consequently, to our way of making sense of our experience. Leisure, idleness, contemplative beholding, spiritual and political aristocracies–all of this calls to mind Alasdair MacIntyre’s observation that we use such words in much the same way that a post-apocalyptic society, picking up the scattered pieces of the modern scientific enterprise, would use “neutrino,” “mass,” and “specific gravity”: not entirely without meaning, perhaps, but certainly not as scientists. The language I’ve employed, likewise, is the language of an older moral vision, a moral vision that we have lost.

I’m not suggesting that we ought to seek to recover the fullness of the language or the world that gave it meaning. That would not be possible, of course. But what if we, nonetheless, desired to bring a measure of order to the condition of distraction that we might experience as an affliction? What if we sought some telos to direct and sustain our attention, to at least buffer us from the forces of distraction?

If such is the case, I commend to you Simone Weil’s reflections on attention and will. Believing that the skill of paying attention cultivated in one domain was transferable to another, Weil went so far as to claim that the cultivation of attention was the real goal of education: “Although people seem to be unaware of it today, the development of the faculty of attention forms the real object and almost the sole interest of studies.”

It was Weil who wrote, “Attention is the rarest and purest form of generosity.” A beautiful sentiment grounded in a deeply moral understanding of attention. Attention, for Weil, was not merely an intellectual asset, what we require for the sake of reading long, dense novels. Rather, for Weil, attention appears to be something foundational to the moral life:

“There is something in our soul that loathes true attention much more violently than flesh loathes fatigue. That something is much closer to evil than flesh is. That is why, every time we truly give our attention, we destroy some evil in ourselves.”

Ultimately, Weil understood attention to be a critical component of the religious life as well. “Attention, taken to its highest degree,” Weil wrote, “is the same thing as prayer. It presupposes faith and love.” “If we turn our mind toward the good,” she added, “it is impossible that little by little the whole soul will not be attracted thereto in spite of itself.” And this is because, in her view, “We have to try to cure our faults by attention and not by will.”

So here we have, if we wanted it, something to animate our desire to discipline the distracted self, something at which to direct our attention. Weil’s counsel was echoed closer to our own time by David Foster Wallace, who also located the goal of education in the cultivation of attention.

“Learning how to think really means learning how to exercise some control over how and what you think,” Wallace explained in his now famous commencement address at Kenyon College. “It means being conscious and aware enough to choose what you pay attention to and to choose how you construct meaning from experience.”

“The really important kind of freedom,” Wallace added, “involves attention and awareness and discipline, and being able truly to care about other people and to sacrifice for them over and over in myriad petty, unsexy ways every day. That is real freedom. That is being educated, and understanding how to think.” Each day the truth of this claim impresses itself more and more deeply upon my mind and heart.

Finally, and briefly, we should be wary of imagining the work of cultivating attention as merely a matter of learning how to consciously choose what we will attend to at any given moment. That is part of it to be sure, but Weil and Pieper both knew that attention also involved an openness to what is, a capacity to experience the world as gift. Cultivating our attention in this sense is not a matter of focusing upon an object of attention for our own reasons, however noble those may be. It is also a matter of setting to one side our projects and aspirations that we might be surprised by what is there. “We do not obtain the most precious gifts by going in search of them,” Weil wrote, “but by waiting for them.” In this way, we prepare for “some dim dazzling trick of grace,” to borrow a felicitous phrase from Walker Percy, that may illumine our minds and enliven our hearts.

It is these considerations, then, that I would offer in response to Furedi’s question, What are we distracted from?

Resisting the Habits of the Algorithmic Mind

Algorithms, we are told, “rule our world.” They are ubiquitous. They lurk in the shadows, shaping our lives without our consent. They may revoke your driver’s license, determine whether you get your next job, or cause the stock market to crash. More worrisome still, they can also be the arbiters of lethal violence. No wonder one scholar has dubbed 2015 “the year we get creeped out by algorithms.” While some worry about the power of algorithms, others think we are in danger of overstating their significance or misunderstanding their nature. Some have even complained that we are treating algorithms like gods whose fickle, inscrutable wills control our destinies.

Clearly, it’s important that we grapple with the power of algorithms, real and imagined, but where do we start? It might help to disambiguate a few related concepts that tend to get lumped together when the word algorithm (or the phrase “Big Data”) functions more as a master metaphor than a concrete noun. I would suggest that we distinguish at least three realities: data, algorithms, and devices. Through the use of our devices we generate massive amounts of data, which would be useless were it not for analytical tools, algorithms prominent among them. It may be useful to consider each of these separately; at the very least, we should be mindful of the distinctions.

We should also pay some attention to the language we use to identify and understand algorithms. As Ian Bogost has forcefully argued, we should certainly avoid implicitly deifying algorithms by how we talk about them. But even some of our more mundane metaphors are not without their own difficulties. In a series of posts at The Infernal Machine, Kevin Hamilton considers the implications of the popular “black box” metaphor and how it encourages us to think about and respond to algorithms.

The black box metaphor tries to get at the opacity of algorithmic processes. Inputs are transformed into outputs, but most of us have no idea how the transformation was effected. More concretely, you may have been denied a loan or job based on the determinations of a program running an algorithm, but how exactly that determination was made remains a mystery.
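
To make the metaphor a bit more concrete, here is a minimal sketch in Python of the situation being described. Every function name, weight, and threshold below is hypothetical, invented purely for illustration and modeled on no real lender or platform: the applicant sees only the inputs they supply and the verdict that comes back, while the weights, the threshold, and the scoring rule that effect the transformation remain out of view.

```python
# A minimal, purely illustrative sketch of the "black box" situation described above.
# All names, weights, and thresholds are hypothetical. The point is only that, from
# the applicant's side, inputs go in and a verdict comes out, while the rule that
# effects the transformation stays hidden.

def score_applicant(income_k: float, debt_k: float, years_employed: int) -> float:
    """Opaque scoring rule: the person being scored never sees these weights."""
    return 1.0 * income_k - 2.0 * debt_k + 3.0 * years_employed

def loan_decision(income_k: float, debt_k: float, years_employed: int) -> str:
    # The threshold, like the weights, is invisible to the applicant.
    return "approved" if score_applicant(income_k, debt_k, years_employed) >= 50 else "denied"

# What the applicant experiences: their data goes in, a determination comes out.
print(loan_decision(income_k=48, debt_k=12, years_employed=3))  # -> "denied", with no explanation
```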

In his discussion of the black box metaphor, Hamilton invites us to consider the following scenario:

“Let’s imagine a Facebook user who is not yet aware of the algorithm at work in her social media platform. The process by which her content appears in others’ feeds, or by which others’ material appears in her own, is opaque to her. Approaching that process as a black box, might well situate our naive user as akin to the Taylorist laborer of the pre-computer, pre-war era. Prior to awareness, she blindly accepts input and provides output in the manufacture of Facebook’s product. Upon learning of the algorithm, she experiences the platform’s process as newly mediated. Like the post-war user, she now imagines herself outside the system, or strives to be so. She tweaks settings, probes to see what she has missed, alters activity to test effectiveness. She grasps at a newly-found potential to stand outside this system, to command it. We have a tendency to declare this a discovery of agency—a revelation even.”

But how effective is this new way of approaching her engagement with Facebook, now informed by the black box metaphor? Hamilton thinks “this grasp toward agency is also the beginning of a new system.” “Tweaking to account for black-boxed algorithmic processes,” Hamilton suggests, “could become a new form of labor, one that might then inevitably find description by some as its own black box, and one to escape.” Ultimately, Hamilton concludes, “most of us are stuck in an ‘opt-in or opt-out’ scenario that never goes anywhere.”

If I read him correctly, Hamilton is describing an escalating, never-ending battle to achieve a variety of desired outcomes in relation to the algorithmic system, all of which involve securing some kind of independence from the system, which we now understand as something standing apart and against us. One of those outcomes may be understood as the state Evan Selinger and Woodrow Hartzog have called obscurity, “the idea that when information is hard to obtain or understand, it is, to some degree, safe.” “Obscurity,” in their view, “is a protective state that can further a number of goals, such as autonomy, self-fulfillment, socialization, and relative freedom from the abuse of power.”

Another desired outcome that fuels resistance to black box algorithms involves what we might sum up as the quest for authenticity. Whatever relative success algorithms achieve in predicting our likes and dislikes, our actions, our desires–such successes are often experienced as an affront to our individuality and autonomy. Ironically, the resulting battle against the algorithm often secures its relative victory by fostering what Frank Pasquale has called the algorithmic self, which constantly modulates itself in response to the algorithms it encounters.

More recently, Quinn Norton expressed similar concerns from a slightly different angle: “Your internet experience isn’t the main result of algorithms built on surveillance data; you are. Humans are beautifully plastic, endlessly adaptable, and over time advertisers can use that fact to make you into whatever they were hired to make you be.”

Algorithms and the Banality of Evil

These concerns about privacy or obscurity on the one hand and agency or authenticity on the other are far from insignificant. Moving forward, though, I will propose another approach to the challenges posed by algorithmic culture, and I’ll do so with a little help from Joseph Conrad and Hannah Arendt.

In Conrad’s Heart of Darkness, as the narrator, Marlow, makes his way down the western coast of Africa toward the mouth of the Congo River in the service of a Belgian trading company, he spots a warship anchored not far from shore: “There wasn’t even a shed there,” he remembers, “and she was shelling the bush.”

“In the empty immensity of earth, sky, and water,” he goes on, “there she was, incomprehensible, firing into a continent …. and nothing happened. Nothing could happen.” “There was a touch of insanity in the proceeding,” he concluded. This curious and disturbing sight is the first of three such cases encountered by Marlow in quick succession.

Not long after he arrived at the Company’s station, Marlow heard a loud horn and then saw natives scurry away just before witnessing an explosion on the mountainside: “No change appeared on the face of the rock. They were building a railway. The cliff was not in the way of anything; but this objectless blasting was all the work that was going on.”

These two instances of seemingly absurd, arbitrary action are followed by a third. Walking along the station’s grounds, Marlow “avoided a vast artificial hole somebody had been digging on the slope, the purpose of which I found it impossible to divine.” As they say: two is a coincidence; three’s a pattern.

Nestled among these cases of mindless, meaningless action, we encounter as well another kind of related thoughtlessness. The seemingly aimless shelling he witnessed at sea, Marlow is assured, targeted an unseen camp of natives. Registering the incongruity, Marlow exclaims, “he called them enemies!” Later, Marlow recalls the shelling off the coastline when he observed the natives scampering clear of each blast on the mountainside: “but these men could by no stretch of the imagination be called enemies. They were called criminals, and the outraged law, like the bursting shells, had come to them, an insoluble mystery from the sea.”

Taken together these incidents convey a principle: thoughtlessness couples with ideology to abet violent oppression. We’ll come back to that principle in a moment, but, before doing so, consider two more passages from the novel. Just before that third case of mindless action, Marlow reflected on the peculiar nature of the evil he was encountering:

“I’ve seen the devil of violence, and the devil of greed, and the devil of hot desire; but, by all the stars! these were strong, lusty, red-eyed devils, that swayed and drove men–men, I tell you. But as I stood on this hillside, I foresaw that in the blinding sunshine of that land I would become acquainted with a flabby, pretending, weak-eyed devil of rapacious and pitiless folly.”

Finally, although more illustrations could be adduced, after an exchange with an insipid, chatty company functionary, who is also an acolyte of Mr. Kurtz, Marlow had this to say: “I let him run on, the papier-mâché Mephistopheles, and it seemed to me that if I tried I could poke my forefinger through him, and would find nothing inside but a little loose dirt, maybe.”

That sentence, to my mind, most readily explains why T.S. Eliot chose as an epigraph for his 1925 poem, “The Hollow Men,” a line from Heart of Darkness: “Mistah Kurtz – he dead.” This is likely an idiosyncratic reading, so take it with the requisite grain of salt, but I take Conrad’s papier-mâché Mephistopheles to be of a piece with Eliot’s hollow men, who having died are remembered

“Not as lost
Violent souls, but only
As the hollow men
The stuffed men.”

For his part, Conrad understood that these hollow men, these flabby devils, were still capable of immense mischief. Within the world as it is administered by the Company, there is a great deal of doing but very little thinking or understanding. Under these circumstances, men are characterized by a thoroughgoing superficiality that renders them willing, if not altogether motivated, participants in the Company’s depredations. Conrad, in fact, seems to have intuited the peculiar dangers posed by bureaucratic anomie and anticipated something like what Hannah Arendt later sought to capture in her (in)famous formulation, “the banality of evil.”

If you are familiar with the concept of the banality of evil, you know that Arendt conceived of it as a way of characterizing the kind of evil embodied by Adolf Eichmann, a leading architect of the Holocaust, and you may now be wondering if I’m preparing to argue that algorithms will somehow facilitate another mass extermination of human beings.

Not exactly. I am circumspectly suggesting that the habits of the algorithmic mind are not altogether unlike the habits of the bureaucratic mind. (Adam Elkus makes a similar correlation here, but I think I’m aiming at a slightly different target.) Both are characterized by an unthinking automaticity, a narrowness of focus, and a refusal of responsibility that yields the superficiality or hollowness Conrad, Eliot, and Arendt all seem to be describing, each in their own way. And this superficiality or hollowness is too easily filled with mischief and cruelty.

While Eichmann in Jerusalem is mostly remembered for that one phrase (and also for the controversy the book engendered), “the banality of evil” appears, by my count, only once in the book. Arendt later regretted using the phrase, and it has been widely misunderstood. Nonetheless, I think there is some value to it, or at least to the condition that it sought to elucidate. Happily, Arendt returned to the theme in a later, unfinished work, The Life of the Mind.

Eichmann’s trial continued to haunt Arendt. In the Introduction, Arendt explained that the impetus for the lectures that would become The Life of the Mind stemmed from the Eichmann trial. She admits that in referring to the banality of evil she “held no thesis or doctrine,” but she now returns to the nature of evil embodied by Eichmann in a renewed attempt to understand it: “The deeds were monstrous, but the doer … was quite ordinary, commonplace, and neither demonic nor monstrous.” She might have added: “… if I tried I could poke my forefinger through him, and would find nothing inside but a little loose dirt, maybe.”

There was only one “notable characteristic” that stood out to Arendt: “it was not stupidity but thoughtlessness.” Arendt’s close friend, Mary McCarthy, felt that this word choice was unfortunate. “Inability to think” rather than thoughtlessness, McCarthy believed, was closer to the sense of the German word Gedankenlosigkeit.

Later in the Introduction, Arendt insisted “absence of thought is not stupidity; it can be found in highly intelligent people, and a wicked heart is not its cause; it is probably the other way round, that wickedness may be caused by absence of thought.”

Arendt explained that it was this “absence of thinking–which is so ordinary an experience in our everyday life, where we have hardly the time, let alone the inclination, to stop and think–that awakened my interest.” And it posed a series of related questions that Arendt sought to address:

“Is evil-doing (the sins of omission, as well as the sins of commission) possible in default of not just ‘base motives’ (as the law calls them) but of any motives whatever, of any particular prompting of interest or volition?”

“Might the problem of good and evil, our faculty for telling right from wrong, be connected with our faculty of thought?”

All told, Arendt arrived at this final formulation of the question that drove her inquiry: “Could the activity of thinking as such, the habit of examining whatever happens to come to pass or to attract attention, regardless of results and specific content, could this activity be among the conditions that make men abstain from evil-doing or even actually ‘condition’ them against it?”

It is with these questions in mind–questions, mind you, not answers–that I want to return to the subject with which we began, algorithms.

Outsourcing the Life of the Mind

Considered for a moment apart from data collection and the devices that enable it, algorithms are principally problem-solving tools. They solve problems that ordinarily require cognitive labor–thought, decision making, judgment. It is these very activities–thinking, willing, and judging–that structure Arendt’s work in The Life of the Mind. So, to borrow the language that Evan Selinger has deployed so effectively in his critique of contemporary technology, we might say that algorithms outsource the life of the mind. And, if Arendt is right, this outsourcing of the life of the mind is morally consequential.

The outsourcing problem is at the root of much of our unease with contemporary technology. Machines have always done things for us, and they are increasingly doing things for us and without us. Increasingly, the human element is displaced in favor of faster, more efficient, more durable, cheaper technology. And, increasingly, the displaced human element is the thinking, willing, judging mind. Of course, the party of the concerned is most likely the minority party. Advocates and enthusiasts rejoice at the marginalization or eradication of human labor in its physical, mental, emotional, and moral manifestations. They believe that the elimination of all of this labor will yield freedom, prosperity, and a golden age of leisure. Critics, meanwhile, and I count myself among them, struggle to articulate a compelling and reasonable critique of this scramble to outsource various dimensions of the human experience.

But perhaps we have ignored another dimension of the problem, one that the outsourcing critique itself might, possibly, encourage. Consider this: to say that algorithms are displacing the life of the mind is to unwittingly endorse a terribly impoverished account of the life of the mind. For instance, if I were to argue that the ability to “Google” whatever bit of information we happen to need when we need it leads to an unfortunate “outsourcing” of our memory, it may be that I am already giving up the game because I am implicitly granting that a real equivalence exists between all that is entailed by human memory and the ability to digitally store and access information. A moment’s reflection, of course, will reveal that human remembering involves considerably more than the mere retrieval of discrete bits of data. The outsourcing critique, then, valuable as it is, must also challenge the assumption that the outsourcing occurs without remainder.

Viewed in this light, the problem with outsourcing the life of the mind is that it encourages an impoverished conception of what constitutes the life of the mind in the first place. Outsourcing, then, threatens our ability to think not only because some of our “thinking” will be done for us; it will do so because, if we are not careful, we will be habituated into conceiving of the life of the mind on the model of the problem-solving algorithm. We would thereby surrender the kind of thinking that Arendt sought to describe and defend, thinking that might “condition” us against the varieties of evil that transpire in environments of pervasive thoughtlessness.

In our responses to the concerns raised by algorithmic culture, we tend to ask, What can we do? Perhaps, this is already to miss the point by conceiving of the matter as a problem to be solved by something like a technical solution. Perhaps the most important and powerful response is not an action we take but rather an increased devotion to the life of the mind. The phrase sounds quaint, or, worse, elitist. As Arendt meant it, it was neither. Indeed, Arendt was convinced that if thinking was somehow essential to moral action, it must be accessible to all: “If […] the ability to tell right from wrong should turn out to have anything to do with the ability to think, then we must be able to ‘demand’ its exercise from every sane person, no matter how erudite or ignorant, intelligent or stupid, he may happen to be.”

And how might we pursue the life of the mind? Perhaps the first, modest step in that direction is simply the cultivation of times and spaces for thinking, and perhaps also resisting the urge to check if there is an app for that.


The Ageless and the Useless

In The Religion of the Future, Roberto Unger, a professor of law at Harvard, identifies humanity’s three “irreparable flaws”: mortality, groundlessness, and insatiability. We are plagued by death. We are fundamentally ignorant about our origins and our place in the grand scheme of things. We are made perpetually restless by desires that cannot finally be satisfied. This is the human condition. In his view, all of the world’s major religions have tried to address these three irreparable flaws, and they have all failed. It is now time, he proposes, to envision a new religion that will be adequate to the challenges of the 21st century. His own proposal is a rather vague program of becoming more god-like while eschewing certain god-like qualities, such as immortality, omniscience, and perfectibility. It strikes me as less than actionable.

There is, however, another religious option taking shape. In a wide-ranging Edge interview with Daniel Kahneman about the unfolding future, historian Yuval Noah Harari concluded with the following observation:

“In terms of history, the events in Middle East, of ISIS and all of that, is just a speed bump on history’s highway. The Middle East is not very important. Silicon Valley is much more important. It’s the world of the 21st century … I’m not speaking only about technology. In terms of ideas, in terms of religions, the most interesting place today in the world is Silicon Valley, not the Middle East. This is where people like Ray Kurzweil, are creating new religions. These are the religions that will take over the world, not the ones coming out of Syria and Iraq and Nigeria.”

This is hardly an original claim, although it’s not clear that Harari recognizes this. Indeed, just a few months ago I commented on another Edge conversation in which Jaron Lanier took aim at the “layer of religious thinking” being added “to what otherwise should be a technical field.” Lanier was talking about the field of AI. He went on to complain about a “core of technically proficient, digitally-minded people” who “reject traditional religions and superstitions,” but then “re-create versions of those old religious superstitions!” “In the technical world,” he added, “these superstitions are just as confusing and just as damaging as before, and in similar ways.”

This emerging Silicon Valley religion, which is just the latest iteration of the religion of technology, is devoted to addressing one of the three irreparable flaws identified by Unger: our mortality. From this angle it becomes apparent that there are two schools within this religious tradition. The first of these seeks immortality through the digitization of consciousness so that it may be uploaded and preserved forever. Decoupled from corruptible bodies, our essential self lives on in the cloud–a metaphor that now appears in a new light. We may call this the gnostic strain of the Silicon Valley religion.

The second school grounds its slightly more plausible hopes for immortality in the prospect of making the body imperishable through biogenetic and cyborg enhancements. It is this prospect that Harari takes to be a serious possibility:

“Yes, the attitude now towards disease and old age and death is that they are basically technical problems. It is a huge revolution in human thinking. Throughout history, old age and death were always treated as metaphysical problems, as something that the gods decreed, as something fundamental to what defines humans, what defines the human condition and reality ….

People never die because the Angel of Death comes, they die because their heart stops pumping, or because an artery is clogged, or because cancerous cells are spreading in the liver or somewhere. These are all technical problems, and in essence, they should have some technical solution. And this way of thinking is now becoming very dominant in scientific circles, and also among the ultra-rich who have come to understand that, wait a minute, something is happening here. For the first time in history, if I’m rich enough, maybe I don’t have to die.”

Harari expands on that last line a little further on:

“Death is optional. And if you think about it from the viewpoint of the poor, it looks terrible, because throughout history, death was the great equalizer. The big consolation of the poor throughout history was that okay, these rich people, they have it good, but they’re going to die just like me. But think about the world, say, in 50 years, 100 years, where the poor people continue to die, but the rich people, in addition to all the other things they get, also get an exemption from death. That’s going to bring a lot of anger.”

Kahneman pressed Harari on this point. Won’t the medical technology that yields radical life extension trickle down to the masses? In response, Harari draws on a second prominent theme that runs throughout the conversation: superfluous humans.

“But in the 21st century, there is a good chance that most humans will lose, they are losing, their military and economic value. This is true for the military, it’s done, it’s over …. And once most people are no longer really necessary, for the military and for the economy, the idea that you will continue to have mass medicine is not so certain.”

There is a lot to consider in these few paragraphs, but here are what I take to be the three salient points: the problem solving approach to death, the coming radical inequality, and the problem of “useless people.”

Harari is admirably frank about his status as a historian and the nature of the predictions he is making. He acknowledges that he is neither a technologist nor a physician and that he is merely extrapolating possible futures from observable trends. That said, I think Harari’s discussion is compelling not only because of the elegance of his synthesis, but also because it steers clear of the more improbable possibilities–he does not think that AI will become conscious, for instance. It also helps that he is chastened by a historian’s understanding of the contingency of human affairs.

He is almost certainly right about the transformation of death into a technical problem. Adumbrations of this attitude are present at the very beginnings of modern science. Francis Bacon, the great English promoter of modern science, wrote in his History of Life and Death, “Whatever can be repaired gradually without destroying the original whole is, like the vestal fire, potentially eternal.” Elsewhere, he gave as the goal of the pursuit of knowledge “a discovery of all operations and possibilities of operations from immortality (if it were possible) to the meanest mechanical practice.”

In the 1950s, Hannah Arendt anticipated these concerns as well when, in the Prologue to The Human Condition, she wrote about the “hope to extend man’s life-span far beyond the hundred-year limit.” “This future man,” she added,

“whom scientists tell us they will produce in no more than a hundred years seems to be possessed by a rebellion against human existence as it has been given, a free gift from nowhere (secularly speaking), which he wishes to exchange, as it were, for something he has made himself. There is no reason to doubt our abilities to accomplish such an exchange, just as there is no reason to doubt our present ability to destroy all organic life on earth.”

Approaching death as a technical problem will surely yield some tangible benefits even if it fails to deliver immortality or even radical life extension. But what will be the costs? Even if it fails to yield a “solution,” turning death into a technical problem will have profound social, psychological, and moral consequences. How will it affect the conduct of my life? How will this approach help us face death when it finally comes? As Harari himself puts it, “My guess, which is only a guess, is that the people who live today, and who count on the ability to live forever, or to overcome death in 50 years, 60 years, are going to be hugely disappointed. It’s one thing to accept that I’m going to die. It’s another thing to think that you can cheat death and then die eventually. It’s much harder.”

Strikingly, Arendt also commented on “the advent of automation, which in a few decades probably will empty the factories and liberate mankind from its oldest and most natural burden, the burden of laboring and the bondage to necessity.” If this appears to us as an unmitigated blessing, Arendt would have us think otherwise:

“The modern age has carried with it a theoretical glorification of labor and has resulted in a factual transformation of the whole of society into a laboring society. The fulfillment of the wish, therefore, like the fulfillment of wishes in fairy tales, comes at a moment when it can only be self-defeating. It is a society of laborers which is about to be liberated from the fetters of labor, and this society does no longer know of those other higher and more meaningful activities for the sake of which this freedom would deserve to be won . . . What we are confronted with is the prospect of a society of laborers without labor, that is, without the only activity left to them. Surely, nothing could be worse.”

So we are back to useless people. Interestingly, Harari locates this possibility in a long trend toward specialization that has been unfolding for some time:

“And when you look at it more and more, for most of the tasks that humans are needed for, what is required is just intelligence, and a very particular type of intelligence, because we are undergoing, for thousands of years, a process of specialization, which makes it easier to replace us.”

Intelligence as opposed to consciousness. Harari makes the point that the two have been paired throughout human history. Increasingly, we are able to create intelligence apart from consciousness. The intelligence is very limited; it may be able to do one thing extremely well but utterly fail at other, seemingly simple tasks. But specialization, or the division of labor, has opened the door for the replacement of human or consciousness-based intelligence with machine intelligence. In other words, the mechanization of human action prepares the way for the replacement of human actors.

Some may object by noting that similar predictions have been made before and have not materialized. I think Harari’s rejoinder is spot on:

“And again, I don’t want to give a prediction, 20 years, 50 years, 100 years, but what you do see is it’s a bit like the boy who cried wolf, that, yes, you cry wolf once, twice, three times, and maybe people say yes, 50 years ago, they already predicted that computers will replace humans, and it didn’t happen. But the thing is that with every generation, it is becoming closer, and predictions such as these fuel the process.”

I’ve noted before that utopians often take the moral of Chicken Little for their interpretive paradigm: the sky never falls. Better, I think, as Harari also suggests, to consider the wisdom of the story of the boy who cried wolf.

I would add here that the plausibility of these predictions is only part of what makes them interesting or disconcerting, depending on your perspective. Even if these predictions turn out to be far off the mark, they are instructive as symptoms. As Dale Carrico has put it, the best response to futurist rhetoric may be “to consider what these nonsense predictions symptomize in the way of present fears and desires and to consider what present constituencies stand to benefit from the threats and promises these predictions imply.”

Moreover, to the degree that these predictions are extrapolations from present trends, they may reveal something to us about these existing tendencies. Along these lines, I think the very idea of “useless people” tells us something of interest about the existing trend to outsource a wide range of human actions to machines and apps. This outsourcing presents itself as a great boon, of course, but it finally raises a question: What exactly are we being liberated for?

It’s a point I’ve raised before in connection to the so-called programmable world of the Internet of Things:

For some people at least, the idea seems to be that when we are freed from these mundane and tedious activities, we will be free to finally tap the real potential of our humanity. It’s as if there were some abstract plane of human existence that no one had yet achieved because we were fettered by our need to be directly engaged with the material world. I suppose that makes this a kind of gnostic fantasy. When we no longer have to tend to the world, we can focus on … what exactly?

Put the possibility of even marginally extended life-spans together with the reductio ad absurdum of digital outsourcing, and we can render an even more pointed version of Arendt’s warning about a society of laborers without labor. We are being promised the extension of human life precisely when we have lost any compelling account of what exactly we should do with our lives.

As for what to do about the problem of useless people, or the permanently unemployed, Harari is less than sanguine:

“I don’t have a solution, and the biggest question maybe in economics and politics of the coming decades will be what to do with all these useless people. I don’t think we have an economic model for that. My best guess, which is just a guess, is that food will not be a problem. With that kind of technology, you will be able to produce food to feed everybody. The problem is more boredom, and what to do with people, and how will they find some sense of meaning in life when they are basically meaningless, worthless.

My best guess at present is a combination of drugs and computer games as a solution for most … it’s already happening. Under different titles, different headings, you see more and more people spending more and more time, or solving their inner problems with drugs and computer games, both legal drugs and illegal drugs. But this is just a wild guess.”

Of course, as Harari states repeatedly, all of this is conjecture. Certainly, the future need not unfold this way. Arendt, after commenting on the desire to break free of the human condition by the deployment of our technical know-how, added,

“The question is only whether we wish to use our new scientific and technical knowledge in this direction, and this question cannot be decided by scientific means; it is a political question of the first order and therefore can hardly be left to the decision of professional scientists or professional politicians.”

Or, as Marshall McLuhan put it, “There is absolutely no inevitability as long as there is a willingness to contemplate what is happening.”

A Thought About Thinking

Several posts in the last few months have touched on the idea of thinking, mostly with reference to the work of Hannah Arendt. “Thinking what we are doing” was a recurring theme in her writing, and it could very easily serve as a slogan, along with the line from McLuhan below the blog’s title, for what I am trying to do here.

Thinking, though, is one of those things that we do naturally, or so we believe, and it is therefore one of those things for which we have a hard time imagining an alternative mode. Let me try putting that another way. The more “natural” a fact about the world seems to us, the harder it is for us to imagine that it could be otherwise. What’s more, thinking about our own thinking is a bit like trying to jump over our own shadow, although, in the end, it is not impossible in the same way.

We all think, if by “thinking” we simply mean our stream of consciousness, our unending internal monologue. But having thoughts does not necessarily amount to thinking. That’s neither a terribly profound observation nor a controversial one. But what, then, does constitute thinking?

Here’s one line of thought in partial response. It’s tempting to associate thinking with “problem solving.” Thinking in these cases takes as its point of departure some problem that needs to be solved. Our thinking then sets out to understand the problem, perhaps by identifying its causes, before proceeding to propose solutions, solutions which usually involve the weighing of pros and cons.

This is the sort of thinking that we tend to prize, and for obvious reasons. When there are problems, we want solutions. We might call this sort of thinking technocratic thinking, or thinking on the model of engineering. By calling it this I don’t intend to disparage it. We need this sort of thinking, no doubt. But if this is the only sort of thinking we do, then we’ve impoverished the category.

But what’s the alternative?

The technocratic mode of thinking makes the assumption that all problems have solutions and all questions have answers. Or, what’s worse, that the only problems worth thinking about are those we can solve and the only questions worth asking are those that we can definitively answer. The corollary temptation is that we begin to look at life merely as a series of problems in search of a solution. We might call this the engineered life.

All of this further assumes that thinking itself is not inherently valuable; it is valuable only as a means to an end: in this case, either the solution or the answer.

We need, instead, to insist on the value of thinking as an end in itself. We might make a start by distinguishing between questions we answer and questions we live with–that is, questions we may never fully answer, but whose contemplation enriches our lives. We may further distinguish between problems we solve and problems we simply inhabit as a condition of being human.

This needs to be further elaborated, but I’ll leave that to your own thinking. I’ll also leave you with another line that has meant a lot to me over the years. It’s taken from a poem by Wendell Berry:

“We live the given life, not the planned.”

Technology and “The Human Condition”

If you’re a regular reader, you know that increasingly my attention has been turning toward the work of Hannah Arendt. My interest in Arendt’s work, particularly as it speaks to technology, was sparked a few years ago when I began reading The Human Condition. Below are some comments, prepared for another context, discussing Arendt’s Prologue to that book. 

________________________

In the Prologue to The Human Condition, Arendt wrote, “What I propose in the following is a reconsideration of the human condition from the vantage point of our newest experiences and our most recent fears.” In her framing, these newest experiences and most recent fears were born out of technological developments that had come about within Arendt’s own lifetime, particularly those that had transpired in the two decades that preceded the writing of The Human Condition. Among the more notable of these developments were the successful harnessing of atomic power and the launching, just one year prior to the publication of Arendt’s book, of the first manmade object into earth’s orbit. These two developments powerfully signaled the end of one age of human history and the opening of another. Positioned in this liminal space, Arendt explained that her purpose was “to trace back modern world alienation, its twofold flight from the earth into the universe and from the world into the self, to its origins, in order to arrive at an understanding of the nature of society as it had developed and presented itself at the very moment when it was overcome by the advent of a new and yet unknown age.”

It is striking how similar Arendt’s concerns are to our own experiences and fears nearly sixty years later. Arendt, for instance, wrote about the advent of automation, which threatened to “empty the factories and liberate mankind from its oldest and most natural burden, the burden of laboring” just at the point when human beings had lost sight of the “higher and more meaningful activities for the sake of which this freedom would deserve to be won.” In our own day, we are told “robots will—and must—take our jobs.” [Arendt, by the way, wasn’t the only one worried about automation.]

Similarly, Arendt spoke forebodingly of scientific aspirations that are today associated with advocates of Transhumanism. These aspirations include the prospect of radical human enhancement, the creation of artificial life, and the achievement of super-longevity. “This future man, whom scientists tell us they will produce in no more than a hundred years,” Arendt suggests, “seems to be possessed by a rebellion against human existence as it has been given, a free gift from nowhere (secularly speaking), which he wishes to exchange, as it were, for something he has made himself.” We should not doubt the capability of scientists to make good on this claim, Arendt tells us, “just as there is no reason to doubt our present ability to destroy all organic life on earth.” Sixty years later, the Transhumanist vision moves from the fringes of public discussion to the mainstream, and we still retain the power to destroy all organic life on earth, although this is not much discussed any longer.

It was in the context of such fears and such experiences that Arendt wrote, “What I propose, therefore, is very simple: it is nothing more than to think what we are doing.” The simplicity of the proposal, of course, masks the astounding complexity against which the task must unfold. Even in her own day, Arendt feared that we could not rise to the challenge. “[I]t could be that we, who are earth-bound creatures and have begun to act as though we were dwellers of the universe, will forever be unable to understand, that is, to think and speak about the things which nevertheless we are able to do.” A similar concern had been registered by the poet W.H. Auden, who, in 1945, wrote of the modern mind,

“Though instruments at Its command
Make wish and counterwish come true,
It clearly cannot understand
What It can clearly do.”

For her part, Arendt continued, “it would be as though our brain, which constitutes the physical, material condition of our thoughts, were unable to follow what we do, so that from now on we would indeed need artificial machines to do our thinking and speaking.” With that Arendt spoke more than she knew; she anticipated the computer age. But Arendt did not look warmly upon the prospect of thinking and speaking supported by artificial machines. She reckoned the prospect a form of slavery, not to the machines but to our “know-how,” a form of knowledge which Arendt opposed to thought. (Arendt would go on to expand her thinking about thought in an unfinished and posthumously published work, The Life of the Mind.)

Moreover, Arendt contended that the question of technology is also a political question; it is, in other words, a question of how human beings live and act together. It is, consequently, a matter of meaningful speech. In Arendt’s view, and it is hard to imagine the case being otherwise, politics is premised on the ability of human beings to “talk with and make sense to each other and to themselves.” These considerations raise the further question of action. Even if we were able to think what we were doing with regard to technology, would it be possible to act meaningfully on the deliberations of such thought? What is the relationship, in other words, not only of technology to thought but of technology to the character of political communities? Finally, returning to the question of machine-assisted thinking, would such thought be politically consequential given that politics depends on meaningful speech?

Already, in 1958, Arendt perceived that the advances of scientific knowledge were secured in the rarefied language of advanced mathematics, a language that was not susceptible to translation into the more ordinary forms of human speech. Today, some forms of machine-assisted thinking, particularly those collected under the concept of Big Data, promise knowledge without understanding. Such knowledge may be useful, but it may also prove difficult to incorporate into the deliberative discourse of political communities.

In a few pages, then, Arendt managed to present a series of concerns and questions that remain vital today. Can we think what we are doing, particularly with the Promethean powers of modern technology? Can our technology help us with such thinking? Can we act in politically meaningful ways on the basis of such thought?