Finding A Place For Thought

Yesterday, I wrote briefly about how difficult it can be to find a place for thought when our attention, in both its mental and emotional dimensions, is set aimlessly adrift on the currents of digital media. Digital media amounts to an environment that is not merely inhospitable to thought but overtly hostile to it.

Many within the tech industry are coming to a belated sense of responsibility for this world they helped fashion. A recent article in the Guardian tells their story. They include Justin Rosenstein, who helped design the “Like” button for Facebook but now realizes that it is common “for humans to develop things with the best of intentions and for them to have unintended, negative consequences,” and James Williams, who worked on analytics for Google but who experienced an epiphany “when he noticed he was surrounded by technology that was inhibiting him from concentrating on the things he wanted to focus on.”

Better late than never, one might say; or perhaps it is too late. As per usual, there is a bit of ancient wisdom that speaks to the situation. In this case, the story of Pandora’s Box comes to mind. Nonetheless, when so many in the industry seem bent on evading responsibility for the consequences of their work, it is mildly refreshing to read about some who are at least willing to own those consequences and are even striving, somehow, to make amends.

It is telling, though, that, as the article observes, “These refuseniks are rarely founders or chief executives, who have little incentive to deviate from the mantra that their companies are making the world a better place. Instead, they tend to have worked a rung or two down the corporate ladder: designers, engineers and product managers who, like Rosenstein, several years ago put in place the building blocks of a digital world from which they are now trying to disentangle themselves.”

Tristan Harris, formerly at Google, has been especially pointed in his criticism of the tech industry’s penchant for addictive design. Perhaps the most instructive part of Harris’s story is how he experienced a promotion to an ethics position within Google as, in effect, a marginalization and silencing.

(It is also edifying to consider the steady drumbeat of stories about how tech executives stringently monitor and limit their own children’s access to devices and the Internet, and about why they send their children to expensive low-tech schools.)

Informed as my own thinking has been by the work of Hannah Arendt, I see this hostility to thought as a serious threat to our society. Arendt believed that thinking was somehow intimately related to our moral judgment and that an inability to think was a gateway to grave evils. Of course, it was a particular kind of thinking that Arendt had in mind–thinking, one might say, for thinking’s sake, or thinking devoid of instrumentality.

Writing in Aeon recently, Jennifer Stitt drew on Arendt to argue for the importance of solitude for thought and thought for conscience and conscience for politics. As Stitt notes, Arendt believed that “living together with others begins with living together with oneself.” Here is Stitt’s concluding paragraph:

But, Arendt reminds us, if we lose our capacity for solitude, our ability to be alone with ourselves, then we lose our very ability to think. We risk getting caught up in the crowd. We risk being ‘swept away’, as she put it, ‘by what everybody else does and believes in’ – no longer able, in the cage of thoughtless conformity, to distinguish ‘right from wrong, beautiful from ugly’. Solitude is not only a state of mind essential to the development of an individual’s consciousness – and conscience – but also a practice that prepares one for participation in social and political life.

Solitude, then, is at least one practice that can help create a place for thought.

Paradoxically, in a connected world it is challenging to find either solitude or companionship. If we submit to a regime of constant connectivity, we end up with hybrid versions of both, neither of which yields its full satisfactions.

Additionally, as someone who works one and a half jobs and is also raising a toddler and an infant, I understand how hard it can be to find anything approaching solitude. In a real sense it is a luxury, but it is a necessary luxury, and if the world won’t offer it freely, then we must fight for it as best we can.

There was one thing left in Pandora’s Box after all the evils had flown irreversibly into the world: hope.

A Lost World

Human beings have two ways, generally speaking, of going about the business of living with one another: through speech or violence. One of the comforting stories we tell each other about the modern world is that we have, for the most part, set violence aside. Indeed, one of modernity’s founding myths is that it arose as a rational alternative to the inevitable violence of a religious and unenlightened world. The truth of the matter is more complicated, of course. In any case, we would do well to recall that it was popularly believed at the turn of the twentieth century that Western civilization had seen the end of large-scale conflict among nations.

Setting to one side the historical validity of modernity’s myths, let us at least acknowledge that a social order grounded in the power of speech is a precarious one. Speech can be powerful, but it is also fragile. It requires hospitable structures and institutions that are able to sustain the possibility of intelligibility, meaning, and action–all of which are necessary in order for a political order premised on debate and deliberation to exist and flourish. This is why emerging technologies of the word–writing, the printing press, television, the Internet–always adumbrate profound political and cultural transformations.

A crisis of the word can all too easily become a political crisis. This insight, which we might associate with George Orwell, is, in fact, ancient.

Consider the following: “To fit in with the change of events, words, too, had to change their usual meanings. What used to be described as a thoughtless act of aggression was now regarded as the courage one would expect to find in a party member,” so wrote not Orwell but Thucydides, late in the fifth century BC. He goes on as follows:

… to think of the future and wait was merely another way of saying one was a coward; any idea of moderation was just an attempt to disguise one’s unmanly character; ability to understand a question from all sides meant that one was totally unfitted for action. Fanatical enthusiasm was the mark of a real man, and to plot against an enemy behind his back was perfectly legitimate self-defense. Anyone who held violent opinions could always be trusted, and anyone who objected to them became a suspect. To plot successfully was a sign of intelligence, but it was still cleverer to see that a plot was hatching. If one attempted to provide against having to do either, one was disrupting the unity of the party and acting out of fear of the opposition. In short, it was equally praiseworthy to get one’s blow in first against someone who was going to do wrong, and to denounce someone who had no intention of doing any wrong at all. Family relations were a weaker tie than party membership, since party members were more ready to go to any extreme for any reason whatever. These parties were not formed to enjoy the benefit of the established laws, but to acquire power by overthrowing the existing regime; and the members of these parties felt confidence in each other not because of any fellowship in a religious communion, but because they were partners in crime.

I came across a portion of this paragraph on two separate occasions during the past week or two, first in a tweet and then again while reading Alasdair MacIntyre’s A Short History of Ethics.

The passage, taken from Thucydides’ The History of the Peloponnesian War, speaks with arresting power to our present state of affairs. We should note, however, that what Thucydides is describing is not primarily a situation of pervasive deceitfulness, one in which people knowingly betray the ordinary and commonly accepted meaning of a word. Rather, it is a situation in which moral evaluations themselves have shifted. It is not that some people now lied and called an act of thoughtless aggression a courageous act. It is that what had before been commonly judged to be an act of thoughtless aggression was now judged by some to be a courageous act. In other words, it would appear that in very short order, moral judgments and the moral vocabulary in which they were expressed shifted dramatically.

It brings to mind Hannah Arendt’s frequent observation about how quickly the self-evidence of long-standing moral principles was overturned in Nazi Germany: “… it was as though morality suddenly stood revealed in the original meaning of the word, as a set of mores, customs and manners, which could be exchanged for another set with hardly more trouble than it would take to change the table manners of an individual or a people.”

It is shortsighted, at this juncture, to ask how we can find agreement or even compromise. We do not now even know how to disagree well; nothing like an argument in the traditional sense is taking place. It is an open question whether anyone can even be said to be speaking intelligibly to anyone who does not already fully agree with their positions and premises. The common world that is both the condition of speech and its gift to us is withering away. A rift has opened up in our political culture that will not be mended until we figure out how to reconstruct the conditions under which speech can once again become meaningful. Until then, I fear, the worst is still before us.

Dark Times

“I borrow the term [“dark times”] from Brecht’s famous poem ‘To Posterity,’ which mentions the disorder and the hunger, the massacres and the slaughterers, the outrage over injustice and the despair ‘when there was only wrong and no outrage,’ the legitimate hatred that makes you ugly nevertheless, the well-founded wrath that makes the voice grow hoarse. All this was real enough as it took place in public; there was nothing secret or mysterious about it. And still, it was by no means visible to all, nor was it at all easy to perceive it; for until the very moment when catastrophe overtook everything and everybody, it was covered up not by realities but by the highly efficient talk and double-talk of nearly all official representatives who, without interruption and in many ingenious variations, explained away unpleasant facts and justified concerns. When we think of dark times and of people living and moving in them, we have to take this camouflage, emanating from and spread by ‘the establishment’ – or ‘the system,’ as it was then called – also into account. If it is the function of the public realm to throw light on the affairs of men by providing a space of appearances in which they can show in deed and word, for better and worse, who they are and what they can do, then darkness has come when this light is extinguished by ‘credibility gaps’ and ‘invisible government,’ by speech that does not disclose what is but sweeps it under the carpet, by exhortations, moral and otherwise, that, under the pretext of upholding old truths, degrade all truth to meaningless triviality.

Nothing of this is new.”

(Hannah Arendt, Preface to Men in Dark Times)

The Miracle That Saves the World

“Hannah Arendt is preeminently the theorist of beginnings,” according to Margaret Canovan in her Introduction to Arendt’s The Human Condition. “Reflections on the human capacity to start something new pervade her thinking,” she adds.

I’ve been thinking about this theme in Arendt’s work, particularly as the old year faded and the new one approached. Arendt spoke of birth and death, natality and mortality, as the “most general condition of human existence.” Whereas most Western philosophy had taken its point of departure from the fact of our mortality, Arendt made a point of emphasizing natality, the possibility of new beginnings.

“The most heartening message of The Human Condition,” Canovan writes,

is its reminder of human natality and the miracle of beginning. In sharp contrast to Heidegger’s stress on our mortality, Arendt argues that faith and hope in human affairs come from the fact that new people are continually coming into the world, each of them unique, each capable of new initiatives that may interrupt or divert the chains of events set in motion by previous actions.

This is, indeed, a heartening message. One that we need to take to heart in these our own darkening days. Below are three key paragraphs in which Arendt develops her understanding of the importance of natality in human affairs.

First, on the centrality of natality to political activity:

[T]he new beginning inherent in birth can make itself felt in the world only because the newcomer possesses the capacity of beginning something anew, that is, of acting. In this sense of initiative, an element of action, and therefore of natality, is inherent in all human activities. Moreover, since action is the political activity par excellence, natality, and not mortality, may be the central category of political, as distinguished from metaphysical, thought.

Natality was a theme that predated the writing of The Human Condition. Here is the closing paragraph of The Origins of Totalitarianism, written a few years earlier and arguably her best-known work after Eichmann in Jerusalem.

“But there remains also the truth that every end in history also contains a new beginning; this beginning is the promise, the only ‘message’ which the end can ever produce. Beginning, before it becomes a historical event, is the supreme capacity of man; politically, it is identical with man’s freedom. Initium ut esset homo creatus est–‘that a beginning be made man was created,’ said Augustine. This beginning is guaranteed by each new birth; it is indeed every man.”

In a well-known passage from The Human Condition, Arendt refers to the fact of natality as the “miracle that saves the world.” By the word world, Arendt does not mean the Earth but rather what we could call, borrowing a phrase from historian Thomas Hughes, the human-built world, what she glosses as “the realm of human affairs.” Here is the whole passage:

The miracle that saves the world, the realm of human affairs, from its normal, “natural” ruin is ultimately the fact of natality, in which the faculty of action is ontologically rooted. It is, in other words, the birth of new men and the new beginning, the action they are capable of by virtue of being born. Only the full experience of this capacity can bestow upon human affairs faith and hope, those two essential characteristics of human existence which Greek antiquity ignored altogether, discounting the keeping of faith as a very uncommon and not too important virtue and counting hope among the evils of illusion in Pandora’s box. It is this faith in and hope for the world that found perhaps its most glorious and most succinct expression in the few words with which the Gospels announced their “glad tidings”: “A child has been born unto us.”

Arendt well understood, however, that not all new beginnings would be good or just or desirable.

If without action and speech, without the articulation of natality, we would be doomed to swing forever in the ever-recurring cycle of becoming, then without the faculty to undo what we have done and to control at least partially the processes we have let loose, we would be the victims of an automatic necessity bearing all the marks of the inexorable laws which, according to the natural sciences before our time, were supposed to constitute the outstanding characteristic of natural processes.

In fact, Arendt attributes “the frailty of human institutions and laws and, generally, of all matters pertaining to men’s living together” to the “human condition of natality.” However, Arendt believed there were two capacities that channeled and constrained the power of action, the unpredictable force of natality: forgiveness and promise keeping. More on that, perhaps, in a later post.

Attention and the Moral Life

I’ve continued to think about a question raised by Frank Furedi in an otherwise lackluster essay about distraction and digital devices. Furedi set out to debunk the claim that digital devices are undermining our attention and our memory. I don’t think he succeeded, but he left us with a question worth considering: “The question that is rarely posed by advocates of the distraction thesis is: what are people distracted from?”

In an earlier post, I suggested that this question can be usefully set alongside a mid-20th century observation by Hannah Arendt. Considering the advent of automation, Arendt feared “the prospect of a society of laborers without labor, that is, without the only activity left to them.” “Surely, nothing could be worse,” she added.

The connection might not have been as clear as I imagined it, so let me explain. Arendt believed that labor is the “only activity left” to the laborer because the glorification of labor in modern society had eclipsed the older ends and goods to which labor had been subordinated and for the sake of which we might have sought freedom from labor.

To put it as directly as I can, Arendt believed that if we indeed found ourselves liberated from the need to labor, we would not know what to do with ourselves. We would not know what to do with ourselves because, in the modern world, laboring had become the ordering principle of our lives.

Recalling Arendt’s fear, I wondered whether we were not in a similar situation with regard to attention. If we were able to successfully challenge the regime of digital distraction, to what would we give the attention that we would have fought so hard to achieve? Would we be like the laborers in Arendt’s analysis, finally free but without anything to do with our freedom? I wondered, as well, if it were not harder to combat distraction, if we were inclined to do so, precisely because we had no telos for the sake of which we might undertake the struggle.

Interestingly, then, while the link between Arendt’s comments about labor and the question about the purpose of attention was initially only suggestive, I soon realized the two were more closely connected. They were connected by the idea of leisure.

We tend to think of leisure merely as an occasional break from work. That is not, however, how leisure was understood in either classical or medieval culture. Josef Pieper, a Catholic philosopher and theologian, was thinking about the cultural ascendancy of labor or work and the eclipse of leisure around the same time that Arendt was articulating her fears of a society of laborers without labor. In many respects, their analyses overlap. (I should note, though, that Arendt distinguishes between labor and work in a way that Pieper does not. Work for Pieper is roughly analogous to labor in Arendt’s taxonomy.)

For her part, Arendt believed nothing could be worse than liberating laborers from labor at this stage in our cultural evolution, and this is why:

“The modern age has carried with it a theoretical glorification of labor and has resulted in a factual transformation of the whole of society into a laboring society.  The fulfillment of the wish, therefore, like the fulfillment of wishes in fairy tales, comes at a moment when it can only be self-defeating. It is a society of laborers which is about to be liberated from the fetters of labor, and this society does no longer know of those other higher and more meaningful activities for the sake of which this freedom would deserve to be won. Within this society, which is egalitarian because this is labor’s way of making men live together, there is no class left, no aristocracy of either a political or spiritual nature from which a restoration of the other capacities of man could start anew.”

To say that there is “no aristocracy of either a political or spiritual nature” is another way of saying that there is no leisured class in the older sense of the word. This older ideal of leisure did not entail freedom from labor for the sake of endless poolside lounging while sipping Coronas. It was freedom from labor for the sake of intellectual, political, moral, or spiritual aims, the achievement of which may very well require arduous discipline. We might say that it was freedom from the work of the body that made it possible for someone to take up the work of the soul or the mind. Thus Pieper can claim that leisure is “a condition of the soul.” But, we should also note, it was not necessarily a solitary endeavor, or, better, it was not an endeavor that had only the good of the individual in mind. It often involved service to the political or spiritual community.

Pieper further defines leisure as “a form of that stillness that is the necessary preparation for accepting reality; only the person who is still can hear, and whoever is not still cannot hear.” He makes clear, though, that the stillness he has in mind “is not mere soundlessness or a dead muteness; it means, rather, that the soul’s power, as real, of responding to the real – a co-respondence, eternally established in nature – has not yet descended into words.” Thus, leisure “is the disposition of receptive understanding, of contemplative beholding, and immersion – in the real.”

Pieper also claims that leisure “is only possible on the assumption that man is not only in harmony with himself, whereas idleness is rooted in the denial of this harmony, but also that he is in agreement with the world and its meaning. Leisure lives on affirmation.” The passing comment on idleness is especially useful to us.

In our view, leisure and idleness are nearly indistinguishable. But on the older view, idleness is not leisure; indeed, it is the enemy of leisure. Idleness, on the older view, may even take the shape of frenzied activity undertaken for the sake of, yes, distracting us from the absence of harmony or agreement with ourselves and the world.

We are now inevitably within the orbit of Blaise Pascal’s analysis of the restlessness of the human condition. Because we are not at peace with ourselves or our world, we crave distraction or what he called diversions. “What people want,” Pascal insists, “is not the easy peaceful life that allows us to think of our unhappy condition, nor the dangers of war, nor the burdens of office, but the agitation that takes our mind off it and diverts us.” “Nothing could be more wretched,” Pascal added, “than to be intolerably depressed as soon as one is reduced to introspection with no means of diversion.”

The novelist Walker Percy, a younger contemporary of both Arendt and Pieper, described what he called the “diverted self” as follows: “In a free and affluent society, the self is free to divert itself endlessly from itself. It works in order to enjoy the diversions that the fruit of one’s labor can purchase.” For the diverted self, Percy concluded, “The pursuit of happiness becomes the pursuit of diversion.”

If leisure is a condition of the soul, as Pieper would have it, then might we also say the same of distraction? Discrete instances of being distracted, of failing to meaningfully direct our attention, would then be symptoms of a deeper disorder. Our digital devices, in this framing of distraction, are both a material cause and an effect. The absence of digital devices would not cure us of the underlying distractedness or aimlessness, but their presence preys upon, exacerbates, and amplifies this inner distractedness.

It is hard, at this point, for me not to feel that I have been speaking in another language or at least another dialect, one whose cadences and lexical peculiarities are foreign to our own idiom and, consequently, to our way of making sense of our experience. Leisure, idleness, contemplative beholding, spiritual and political aristocracies–all of this recalls to mind Alasdair MacIntyre’s observation that we use such words in much the same way that a post-apocalyptic society, picking up the scattered pieces of the modern scientific enterprise, would use “neutrino,” “mass,” and “specific gravity”: not entirely without meaning, perhaps, but certainly not as scientists would. The language I’ve employed, likewise, is the language of an older moral vision, a moral vision that we have lost.

I’m not suggesting that we ought to seek to recover the fullness of the language or the world that gave it meaning. That would not be possible, of course. But what if we, nonetheless, desired to bring a measure of order to the condition of distraction that we might experience as an affliction? What if we sought some telos to direct and sustain our attention, to at least buffer us from the forces of distraction?

If such is the case, I commend to you Simone Weil’s reflections on attention and will. Believing that the skill of paying attention cultivated in one domain was transferable to another, Weil went so far as to claim that the cultivation of attention was the real goal of education: “Although people seem to be unaware of it today, the development of the faculty of attention forms the real object and almost the sole interest of studies.”

It was Weil who wrote, “Attention is the rarest and purest form of generosity.” A beautiful sentiment grounded in a deeply moral understanding of attention. Attention, for Weil, was not merely an intellectual asset, what we require for the sake of reading long, dense novels. Rather, for Weil, attention appears to be something foundational to the moral life:

“There is something in our soul that loathes true attention much more violently than flesh loathes fatigue. That something is much closer to evil than flesh is. That is why, every time we truly give our attention, we destroy some evil in ourselves.”

Ultimately, Weil understood attention to be a critical component of the religious life as well. “Attention, taken to its highest degree,” Weil wrote, “is the same thing as prayer. It presupposes faith and love.” “If we turn our mind toward the good,” she added, “it is impossible that little by little the whole soul will not be attracted thereto in spite of itself.” And this is because, in her view, “We have to try to cure our faults by attention and not by will.”

So here we have, if we wanted it, something to animate our desire to discipline the distracted self, something at which to direct our attention. Weil’s counsel was echoed closer to our own time by David Foster Wallace, who also located the goal of education in the cultivation of attention.

“Learning how to think really means learning how to exercise some control over how and what you think,” Wallace explained in his now famous commencement address at Kenyon College. “It means being conscious and aware enough to choose what you pay attention to and to choose how you construct meaning from experience.”

“The really important kind of freedom,” Wallace added, “involves attention and awareness and discipline, and being able truly to care about other people and to sacrifice for them over and over in myriad petty, unsexy ways every day. That is real freedom. That is being educated, and understanding how to think.” Each day the truth of this claim impresses itself more and more deeply upon my mind and heart.

Finally, and briefly, we should be wary of imagining the work of cultivating attention as merely a matter of learning how to consciously choose what we will attend to at any given moment. That is part of it to be sure, but Weil and Pieper both knew that attention also involved an openness to what is, a capacity to experience the world as gift. Cultivating our attention in this sense is not a matter of focusing upon an object of attention for our own reasons, however noble those may be. It is also a matter of setting to one side our projects and aspirations that we might be surprised by what is there. “We do not obtain the most precious gifts by going in search of them,” Weil wrote, “but by waiting for them.” In this way, we prepare for “some dim dazzling trick of grace,” to borrow a felicitous phrase from Walker Percy, that may illumine our minds and enliven our hearts.

It is these considerations, then, that I would offer in response to Furedi’s question, What are we distracted from?