Superfluous People, the Ideology of Silicon Valley, and The Origins of Totalitarianism

There’s a passage from Arendt’s The Origins of Totalitarianism that has been cited frequently in recent months, and with good reason. It speaks to the idea that we are experiencing an epistemic crisis with disastrous cultural and political consequences:

The ideal subject of totalitarian rule is not the convinced Nazi or the convinced Communist, but people for whom the distinction between fact and fiction (i.e., the reality of experience) and the distinction between true and false (i.e., the standards of thought) no longer exist.

Jay Rosen recently tweeted that this was, for him, the quote of the year for 2017, and one can see why.

I would, however, suggest that there is another passage from the closing chapters of The Origins of Totalitarianism, or rather cluster of passages, that we might also consider. These passages speak to a different danger: the creation of superfluous people.

“There is only one thing,” Arendt concludes, “that seems discernible: we may say that radical evil has emerged in connection with a system in which all men have become equally superfluous.”

“Totalitarianism strives not toward despotic rule over men,” Arendt furthermore claims, “but toward a system in which men are superfluous.” She immediately adds, “Total power can be achieved and safeguarded only in a world of conditioned reflexes, of marionettes without the slightest trace of spontaneity.”

Superfluity, as Arendt uses the term, suggests some combination of thoughtless automatism, interchangeability, and expendability. A person is superfluous when they operate within a system in a completely predictable way and can, as a consequence, be easily replaced. Individuality is worse than meaningless in this context; it is a threat to the system and must be eradicated.

So just as the “ideal subject” of a totalitarian state is someone who has been overwhelmed by epistemic nihilism, Arendt describes the “model ‘citizen'” as the human person bereft of spontaneity: “Pavlov’s dog, the human specimen reduced to the most elementary reactions, the bundle of reactions that can always be liquidated and replaced by other bundles of reactions that behave in exactly the same way, is the model ‘citizen’ of a totalitarian state.”

Arendt adds that “such a citizen can be produced only imperfectly outside of the camps.” In the camps, the “world of the dying,” as Arendt calls them, “men are taught they are superfluous through a way of life in which punishment is meted out without connection to crime, in which exploitation is practiced without profit, and where work is performed without product”; the camp is “a place where senselessness is daily produced anew.”

It may be obvious how Arendt’s claim regarding the inability to distinguish between truth and falsehood, fact and fiction speaks to our present moment, but what does her discussion of superfluous people and concentration camps have to do with us?

First, I should make clear that I do not expect to see death camps anytime soon. That said, it seems that there are a number of developments which together tend toward rendering people superfluous. For example: the operant conditioning to which we submit on social media, the pursuit of ever more sophisticated forms of automation, and the drive to outsource more and more aspects of our humanity to digital tools.

“If we take totalitarian aspirations seriously and refuse to be misled by the common-sense assertion that they are utopian and unrealizable,” Arendt insisted, “it develops that the society of the dying established in the camps is the only form of society in which it is possible to dominate man entirely.”

I would suggest that having discovered another form of society in which it is possible to dominate people entirely may be the dark genius of our age, a Huxleyan spin on an earlier Orwellian threat. I would also suggest that this achievement has traded on the expression of individuality rather than its suppression.

For example, social media appears to encourage the expression of individuality. In reality, it is a Skinner box: we are being programmed, and our so-called individuality is irrelevant ephemera so far as the network is concerned. In other words, people, insofar as they are considered as individuals, are, in fact, superfluous.

Regarding automation, it is impossible for me to tell, from my vantage point and given my lack of expertise, what the scale of its impact on employment will be. But it seems clear that there is cause for concern (unless you happen to live in Sweden). I have no reason to doubt that what jobs can be automated will be automated at the expense of workers, workers who will be rendered superfluous. What new jobs arise will be of the micro-gig economy or tend-the-machine sort. Work, that is to say, in which people qua individuals are superfluous.

As for the outsourcing of our cognitive, emotional, and ethical labor and our obsessive self-tracking and self-monitoring, it amounts to being sealed in a tomb of our revealed preferences (to borrow Rob Horning’s memorable line). Once more, spontaneous desire, serendipity, much of what Arendt classified as natality, the capacity to make a beginning at the heart of our individuality—all of it is surrendered to the urge for an equilibrium of programmed predictability.


“Over and above the senselessness of totalitarian society,” Arendt went on to observe, “is enthroned the ridiculous supersense of its ideological superstition.” As she analyzes the ideologies that supported the senselessness of totalitarian societies, discomforting similarities to strands of the Silicon Valley ideology emerge. Most notably, it seems to me, they share a blind adherence to a supposed Law driving human affairs, a Law adherence to which frees a person from ordinary moral responsibility, raises the person above the unenlightened masses, and, indeed, generates a barely veiled misanthropy.

Consider the following analysis:

Totalitarian lawfulness, defying legality and pretending to establish the direct reign of justice on earth executes the law of History [as understood by Communism] or of Nature [as understood by Nazism] without translating it into standards of right and wrong for individual behavior. It applies the law directly to mankind without bothering with the behavior of men. The law of Nature or the law of History, if properly executed, is expected to produce mankind as its end product; and this expectation lies behind the claim to global rule of all totalitarian governments.

Now substitute the “law of Technology” for the law of History and the law of Nature. Tell me if it does not work just as well. This law can be variously framed, but it amounts to some kind of self-serving, poorly conceived technological determinism built upon some ostensible fact like Moore’s Law, and it dictates that humanity as it exists must be left behind in order to accommodate this deep law ordering the flow of time.

“What totalitarian ideologies therefore aim at is not the transformation of the outside world or the revolutionizing of society, but the transformation of human nature itself,” Arendt recognized. And so it is with the transhumanist strains of the ideology of Silicon Valley.

As I write these words, an excerpt from Emily Chang’s forthcoming Brotopia: Breaking Up the Boys Club of Silicon Valley, published in Vanity Fair, is being shared widely on social media. It examines the “exclusive, drug-fueled, sex-laced parties” that some of the most powerful men in Silicon Valley regularly attend. The scandal is not in the sexual license. Indeed, that they believe their behavior to be somehow bravely unconventional and pioneering would be laughable were it not for its human toll. What is actually disturbing is how this behavior is an outworking of ideology and how this ideology generates so much more than drug-addled parties.

“[T]hey speak proudly about how they’re overturning traditions and paradigms in their private lives, just as they do in the technology world they rule,” Chang writes. “Their behavior at these high-end parties is an extension of the progressiveness and open-mindedness—the audacity, if you will—that make founders think they can change the world. And they believe that their entitlement to disrupt doesn’t stop at technology; it extends to society as well.”

“If this were just confined to personal lives it would be one thing,” Chang acknowledges. “But what happens at these sex parties—and in open relationships—unfortunately, doesn’t stay there. The freewheeling sex lives pursued by men in tech—from the elite down to the rank and file—have consequences for how business gets done in Silicon Valley.”

“When they look in the mirror,” Chang concludes, “they see individuals setting a new paradigm of behavior by pushing the boundaries of social mores and values.”

If you’re on the vanguard of the new humanity, social mores and values are for losers.

Arendt also gives us a useful way of framing the obsession with disruption.

“In the interpretation of totalitarianism, all laws have become laws of movement,” Arendt claims. That is to say that stability is the enemy of the execution of the law of History or of Nature or, I would add, of Technology: “Neither nature nor history is any longer the stabilizing source of authority for the actions of mortal men; they are movements themselves.”

Upsetting social norms, disrupting institutions, destabilizing legal conventions, all of it is a way of freeing up the inevitable unfolding of the law of Technology. Never mind that what is actually being freed up, of course, is the movement of wealth. The point is that the ideology gives cover for whatever depredations are executed in its name. It engenders, as Arendt argues elsewhere, a pernicious species of thoughtlessness that abets all manner of moral outrages.

“Terror,” she explained, “is the realization of the law of movement; its chief aim is to make it possible for the force of nature or of history to race freely through mankind, unhindered by any spontaneous human action. As such, terror seeks to ‘stabilize’ men in order to liberate the forces of nature or history.”

Here again I would argue that we are witnessing a Huxleyan variant of this earlier Orwellian dynamic. Consider once more the cumulative effect of the many manifestations of the networks of surveillance, monitoring, operant conditioning, automation, routinization, and programmed predictability in which we are enmeshed. Their effect is not enhanced freedom, individuality, spontaneity, thoughtfulness, or joy. Their effect is, in fact, to stabilize us into routine and predictable patterns of behavior and consumption. Humanity is stabilized so that the law of Technology can run its course.

Under these circumstances, Arendt goes on to add, “Guilt or innocence become senseless notions; ‘guilty’ is he who stands in the way of the natural or historical process which has passed judgment over ‘inferior races,’ over individuals ‘unfit to live,’ over ‘dying classes and decadent peoples.'” All the recent calls for the tech industry to reform, then, may very well fall not necessarily on deaf ears but on uncomprehending or indifferent ears tuned only to greater ideological “truths.”


“Totalitarian solutions may well survive the fall of totalitarian regimes in the form of strong temptations which will come up whenever it seems impossible to alleviate political, social, or economic misery in a manner worthy of man.”

Perhaps that, too, is an apt passage for our times.

In two other observations Arendt makes in the closing pages of Origins, we may gather enough light to hold off the darkness. She writes of loneliness as the “common ground for terror, the essence of totalitarian government, and for ideology and logicality, the preparation of its executioners and victims.” This loneliness is “closely connected with uprootedness and superfluousness which have been the curse of the modern masses since the beginning of the industrial revolution […].”

“Ideologies are never interested in the miracle of being,” she also observes.

Perhaps, then, we might think of the cultivation of wonder and friendship as inoculating measures, a way of sustaining the light.

The Meaning of Luddism

In his recent book about the future of technology, Tim O’Reilly, sometimes called the Oracle of Silicon Valley, faults the Luddites for a failure of imagination. According to O’Reilly, they did not imagine

… that their descendants would have more clothing than the kings and queens of Europe, that ordinary people would eat the fruits of summer in the depths of winter. They couldn’t imagine that we’d tunnel through mountains and under the sea, that we’d fly through the air, crossing continents in hours, that we’d build cities in the desert with buildings a half mile high, that we’d stand on the moon and put spacecraft in orbit around distant planets….

Of course, O’Reilly doesn’t care about the Luddites in their historical particularity, as actual human beings who lived and suffered. The Luddites are merely a placeholder for an idea: that opponents of technological “progress” are ridiculous, misguided, and doomed. Never mind that the Luddites were not opposed to new technology, only to the disempowering and inequitable deployment of new technology.

In a fine critical review of O’Reilly’s book, Molly Sauter offers this bracing rejoinder to the contemporary application of this logic:

If you’ve lost your job, and can’t find another one, or were never able to find steady full time employment in the first place between automation, outsourcing, and strings of financial meltdowns, Tim O’Reilly wants you to know you shouldn’t be mad. If you’ve been driven into the exploitative arms of the gig economy because the jobs you have been able to find don’t pay a living wage, Tim O’Reilly wants you to know this is a great opportunity. If ever you find yourself being evicted from an apartment you can’t afford because Airbnb has fatally distorted the rental economy in your city, wondering how you’ll pay for the health care you need and the food you need and the student loans you carry with your miscellaneous collection of gigs and jobs and plasma donations, feeling like you’re part of a generational sacrifice zone, Tim O’Reilly wants you to know that it will be worth it, someday, for someone, a long time from now, somewhere in the future.

This is exactly right. There is a certain moral tone-deafness to O’Reilly’s rhetoric. He imagines that a family faced with destitution would bear up happily if only they knew that their suffering was a necessary step toward a future of technological marvels. Your family may not be able to put food on the table, but, not to worry, somewhere down the line, a man will walk on the moon.

In fact, it would seem as if O’Reilly would fault them not only for failing to stoically bear their role as the stepping stones of progress but for not celebrating while they were being trampled on.

There is a cold, calculating utilitarianism at work here. Consequently, the enduring meaning of the Luddites may best be captured in Ursula K. Le Guin’s short story, “The Ones Who Walk Away from Omelas.” The people of Omelas are prosperous and happy beyond our wildest dreams, but, when they come of age, they are each let in on a secret: the city’s happiness depends on the suffering of one lone child who is kept in perpetual squalor and isolation. Upon discovering this fact about their glittering city, most overcome their initial horror and settle back into the enjoyments the city provides. There are a few, however, who walk away. They forsake their happiness because they can no longer live with the knowledge of the price at which it is purchased.

“The place they go towards is a place even less imaginable to most of us than the city of happiness,” the narrator concludes. “I cannot describe it at all. It is possible it does not exist. But they seem to know where they are going, the ones who walk away from Omelas.”

The point is a simple one: the story of technological progress is often told at the expense of those who have no share in that progress or whose prosperity and well-being were sacrificed for its sake. This is true of individuals, institutions, communities, whole peoples, and vast swaths of the non-human world.

Here, then, is the meaning of Luddism: the Luddites are a sign to us of the often hidden costs of our prosperity. Perhaps this is why they are the objects of our willful misunderstanding and ridicule. Better to heap scorn upon the dead than reckon with our own failures.

In truth then, the failure of imagination is ours, not theirs. It is we who have not been able to imagine a more just society in which technological progress is directed toward human flourishing and its costs, such as they must be, are more equitably distributed.


The blog Librarian Shipwreck has published a number of thoughtful posts on Luddism, its history and contemporary significance. They are collected here. I encourage you not only to read these posts but also to follow the blog.

To the Man With a Machete, or How Technology Mediates Perception

The most powerful and pervasive myth about technology is that a tool is fundamentally neutral; all that is of any consequence, according to this myth, is what one happens to do with that tool. This myth is powerful because it does contain an important truth: one and the same tool can be used for both morally good and bad ends. However, this is far from the whole story.

Technology, as Melvin Kranzberg has put it, is “neither good nor bad; nor is it neutral.” Furthermore, as McLuhan and a host of others have observed, media carry total effects that are independent of any particular uses to which they are put. One of the most consequential aspects of any given tool’s non-neutrality is the power of a tool to shape perception. The old well-worn line — to the person with a hammer everything looks like a nail — neatly captures the basic idea.

Today, I was once again reminded of technology’s power to shape our perception and experience. Irma spared my home but not the many trees around my house. So I’ve been working these past couple of weeks to clean up limbs and debris. Some of this work I’ve undertaken with the help of a machete. It’s an elegant and effective tool, and I’ve been grateful for it. And sure enough, machete in hand, my immediate environment is transformed: I see and feel my way through it differently than I would otherwise. A limb or branch now presents itself as something that could be struck. And with machete in hand I feel encouraged to strike.

I was reminded of the historian David Nye’s discussion of the difference between reading a tool as a text and using it. Each yields a different kind of knowledge.

The slightly bent form of an American axe handle, when grasped, becomes an extension of the arms. To know such a tool it is not enough merely to look at it: one must sense its balance, swing it, and feel its blade sink into a log. Anyone who has used an axe retains a sense of its heft, the arc of its swing, and its sound. As with a baseball bat or an axe, every tool is known through the body. We develop a feel for it. In contrast, when one is only looking at an axe, it becomes a text that can be analyzed and placed in a cultural context. It can be a basis for verifiable statements about its size, shape, and uses, including its incorporation into literature and art. Based on such observations, one can construct a chronology of when it was invented, manufactured, and marketed, and of how people incorporated it into a particular time and place. But ‘reading’ the axe yields a different kind of knowledge than using it.

This is true not only of axes and machetes and other tools that are obviously and overtly taken up and used with the body. I’d suggest that it is true of just about every kind of tool and technology. Every technology somehow mediates our relationship with the world around us, and in doing so every technology shapes our perception of the world. One of the most important things, then, that we can know about any technology is how it shapes our perception.

Happily, this kind of knowledge is not limited to experts and scholars; it is available to all of us if only we stop to think and reflect upon our experience. A measure of self-awareness and a willingness to contemplate what it feels like to use a tool, how it directs our attention, or how it represents the world to us would be enough to achieve this kind of knowledge. It may take a little practice, but we’d be a little better positioned to use our tools wisely if we took the time.


What Do I See When I See My Child?

An entry in a series on the experience of being a parent in the digital age. 

At first glance, this may seem like a question with an obvious and straightforward answer, but it isn’t. Vision plays a trick on us all. It offers its findings to us as a plain representation of “what is there.” But things are not so simple. Most of us know this because at some point our eyes have deceived us. The thing we thought we saw was not at all what was, in fact, there. Even this cliché about our eyes deceiving us reveals something about the implicit trust we ordinarily place in what our eyes show us. When it turns out that our trust has been betrayed, we do not simply say that we were mistaken; we speak as if we have been wronged, as if our eyes have behaved immorally. We are not in the habit, I don’t think, of claiming that our ears or our nose deceived us.

What we ordinarily fail to take into account is that seeing is an act of perception and perception is a form of interpretation.

Seeing is selective. Upon glancing at a scene, I’m tempted to think that I’ve taken it all in. But, of course, nothing could be further from the truth. If I were to look again and look for a very long time, I would continue to see more and more details that I did not see at first, second, or third glance. Whatever it was that I perceived when I first looked is not what I will necessarily see if I continue to look; at the very least, it will not be all that I will see. So why did I see what I saw when first I looked?

Sometimes we see what we think we ought to see, what we expect to see. Sometimes we see what we want to see or that for which we are looking. Seeing is thus an act of both remembering and desiring. And this is not yet to say anything of the meaning of what we see, which is also intertwined with perception.

It is also the case that perception is often subject to mediation and this mediation is ordinarily technological in nature. Indeed, one of the most important consequences of any given technology is, in my view, how it shapes our perception of the world. But we are as tempted to assume that technology is neutral in its mediations and representations as we are to believe that vision simply shows us “what is there.” So when our vision is technologically mediated it is as if we were subject to a double spell.

The philosopher Peter-Paul Verbeek, building on the work of Don Ihde, has written at length about what he has called the ethics of technological mediation. Technologies bring about “specific relations between human beings and reality.” They do this by virtue of their role in mediating both our perception of the world and our action in the world.

According to Ihde, the mediating work of technology comes in the form of two relations of mediation: embodiment relations and hermeneutic relations. In the first, tools are incorporated by the user and the world is experienced through the tool. Consider the blind man’s stick as an example of an embodiment relation; the stick is incorporated into the man’s body schema.

Verbeek explains hermeneutic relations in this way: “technologies provide access to reality not because they are ‘incorporated,’ but because they provide a representation of reality, which requires interpretation.” Moreover, “technologies, when mediating our sensory relationship with reality, transform what we perceive. According to Ihde, the transformation of perception always has the structure of amplification and reduction.”

We might also speak of how technological mediation focuses our perception. Perhaps this is implied in Ihde’s two categories, amplification and reduction, or the two together amount to a technology’s focusing effect. We might also speak of this focusing effect as a directing of our attention.

So, once again, what do I see when I see my child?

There are many technologies that mediate how I perceive my child. When my child is in another room, I perceive her through a video monitor. When my child is ill, I perceive her through a digital thermometer, some of which now continuously monitor body temperature and visualize the data on an app. Before she was born, I perceived her through ultrasound technology. When I am away from home, I perceive her through FaceTime. More examples, I’m sure, may come readily to your mind. Each of these merits some attention, but I set them aside to briefly consider what may be the most ubiquitous form of technological mediation through which I perceive my child–the digital camera.

Interestingly, it strikes me that the digital camera, in particular the camera with which our phones are equipped, effects both an embodiment relation and a hermeneutic relation. I fear that I may be stretching the former category to make this claim, but I am thinking of the smartphone as a device which, in many respects, functions as a prosthesis. I mean by this that it is ready-to-hand to such a degree that it is experienced as an appendage of the body and that, even when it is not in hand, the ubiquitous capacity to document has worked its way into our psyche as a frame of mind through which we experience the world. It is not only the case that we see a child represented in a digital image, our ordinary act of seeing itself becomes a seeing-in-search-of-an-image.

What does the mediation of the digital smartphone camera amplify? What does it reduce? How does it bring my child into focus? What does it encourage me to notice and what does it encourage me to ignore? What can it not account for?

What does it condition me to look for when I look at my child and, thus, how does it condition my perception of my child?

Is it my child that I see or a moment to be documented? Am I perceiving my child in herself or am I perceiving my child as a component of an image, a piece of the visual furniture?

What becomes of the integrity of the moment when seeing is mediated through an always-present digital camera?

How does the representation of my child in images that capture discrete moments impact my experience of time with my child? Do these images sustain or discourage the formation of a narrative within which the meaning of my relationship with my child emerges?

It is worth noting, as well, that the smartphone camera ordinarily exists as one component within a network of tools that includes the internet and social media. In other words, the image is not merely a record of a moment or an externalized memory. It is also always potentially an act of communication. An audience–on Facebook, Twitter, Instagram, YouTube, Snapchat, etc.–is everywhere with me as an ambient potentiality that conditions my perception of all that enters into my experience. Consequently, I may perceive my child not only as a potential image but as a potential image for an audience.

What is the nature of this audience? What images do I believe they care to see? What images do I want them to see? From where does my idea of the images they care to see arise? Does it arise from the images displayed for me as part of another’s audience? Or from professional media or commercial marketing campaigns? Are these the visual patterns I remember, half-consciously perhaps, when my perceiving takes on the aspect of seeing-as-expectation? Do they form my perception-as-desire? For whom is my child under these circumstances?

I have raised many questions and left them unanswered, chiefly because whatever my answers may be, they are not likely to be your answers. The value of these questions lies in the asking, in what they may reveal as we contemplate them, and not in the particular answers that I might give.


All That’s Wrong With Education In One Picture

Okay, not “all,” but here is an image that captures much of what is wrong in the world of education.

You can read more about this school, called (without irony we are to assume) Carpe Diem, here.

When I first saw the image, I wondered, for a fleeting moment, if this were a parody or a fictional school set in some dreary, soul-numbing dystopian future. No such luck. Thinking beyond my initial visceral response, one question came to mind: What do you have to believe about the human person, knowledge, and education to think that this is a good model for how children should learn?

That question was followed by another, more cynical query: What business model do you have to buy into?

But let’s return to the first question for a moment. In numerous contexts, the philosopher James K.A. Smith has observed that every pedagogy assumes an anthropology. That is to say that every theory and practice of education assumes a certain view of the human person. Needless to say, this view is not always explicit, nor can it always be articulated by those who take it for granted. Nonetheless, when someone sets out to educate children they do so based on some understanding of what it means to flourish as a human being, the goal of education, what counts as knowledge, and how children learn.

So, again, what do you have to believe in order to conclude that this cubicle based model of education is the way to go? At the very least, I’d say that you’d have to discount both the embodied and social dimensions of learning. Hook your brain up to the screen, forget you have a body or that the body has much to do with how we come to learn about the world, and download the data. Never mind interpersonal relationships that fuel the desire to learn, never mind models and mentors, never mind the knowledge that can only be gained in conversation with peers and teachers.

You would also have to assume that education was merely a matter of transferring discrete bits of information from one receptacle, the computer, to another, the human mind. In other words, you would have to assume an impoverished account of both what it is to be a human being and of knowledge itself.

I would suggest that this impoverished view of the human person and of knowledge has become plausible because the computer has become a master metaphor ordering our thinking about knowledge and minds. Having understood the computer by analogy to the mind, we have now reversed the direction of the analogy and have come to understand the mind by analogy to the computer.

In fact, though, a similar trajectory was already discernible much earlier when “the machine” became our master metaphor. Consider this French cartoon from the late nineteenth century.

I’d suggest the image above finds its fulfillment in the image of the Carpe Diem school with which we began.

A few years ago, I touched on related matters from another angle. I wrote then of a similar “unspoken assumption” about learning: “that knowledge is merely aggregated data and its mode of acquisition does nothing to alter its status. But what if this were a rather blinkered view of knowledge? And what if the acquisition of knowledge, however understood, was itself only a means to other more important ends?

“If the work of learning is ultimately subordinate to becoming a certain kind of person, then it matters very much how we go about learning. In some sense, it may matter more than what we learn. This is because the manner in which we go about acquiring knowledge constitutes a kind of practice that over the long haul shapes our character and disposition in non-trivial ways. Acquiring knowledge through apprenticeship, for example, shapes people in a certain way, acquiring knowledge through extensive print reading in another, and through web based learning in still another. The practice which constitutes our learning, if we are to learn by it, will instill certain habits, virtues, and, potentially, vices — it will shape the kind of person we are becoming.”

If this is the case, then what sort of formation is taking place given the practice of learning embodied by the Carpe Diem school?

Let me reiterate, though: the Carpe Diem model is just a more extreme example of practices and assumptions that are widely distributed throughout the world of education, where, regrettably, the siren song of the next revolutionary educational technology often proves too hard to resist no matter how many times it has shipwrecked those who heed it.


Arendt Seminar

Yes, I know we can’t all sit around the seminar table with the likes of Hannah Arendt. Nonetheless, in my view, there is an ideal to strive for here.