When Silence is Power

In The Human Condition, Hannah Arendt wrote, “What first undermines and then kills political communities is loss of power and final impotence.” She went on to add, “Power is actualized only where word and deed have not parted company, where words are not empty and deeds not brutal, where words are not used to veil intentions but to disclose realities, and deeds are not used to violate and destroy but to establish relations and create new realities.”

In our present media environment, the opposite of this formula may be closer to the truth, at least in certain situations. In these cases, the refusal to speak is action. Silence is power.

The particular situation I have in view is the hijacking of public discourse (and consequently the political order) by the endless proliferation of manufactured news and fabricated controversy.

These pseudo-events are hyperreal. They are media events that exist as such only in so far as they are spoken about. “To go viral” is just another way of describing the achievement of hyperreality. To be “spoken about” is to be addressed within our communication networks. In a networked society, we are the relays and hyperreality is an emergent property of our networked acts of communication.

Every interest that constitutes our media environment and media economy is invested in the perpetuation of hyperreality.

Daily, these pseudo-events consume our attention and our mental and emotional energy. They feed off of and inspire frustration, rage, despair, paranoia, revenge, and, ultimately, cynicism. It is a daily boom/bust cycle of the soul.

Because they are constituted by speech, the pseudo-events are immune to critical speech. Speaking of them, even to criticize them, strengthens them.

When speaking is the only perceived form of action–it is, after all, the only way of existing on our social media networks–then that which thrives by being spoken about will persist.

How does one protest when acts of protest are consistently swallowed up by that which is being protested? When the act of protest has the perverse effect of empowering that which is being protested?

Silence.

Silence is the only effective form of boycott. Traditional boycotts, the refusal to purchase goods or patronize establishments, are ineffective against hyperreality. They are sucked up into the pseudo-events.

Finally, the practice of silence must be silent about itself.

Here the practice of subversive silence threatens to fray against the edge of our media environment. When the self is itself constituted by acts of speech within the same network, then refusal to speak feels like self-deprivation. And it is. Silence under these conditions is an ascetic practice, a denial of the self that requires considerable discipline.

But if we are relays in the network, then self-sabotage becomes a powerful act of protest.

Perhaps the practice of this kind of self-imposed, unacknowledged silence may be the power that helps resuscitate public discourse.



Flame Wars, Punctuation Cartoons, and Net-heads: The Internet in 1988

I remember having, in the late 1980s, an old PC with a monochrome monitor, amber on black, that was hooked up to the Internet through Prodigy. I was on the Internet, but I had no idea what the Internet was. All I knew was that I could now get near-realtime updates on the score of Mets games.

Back in November, the Washington Post’s tech page posted the text of an article from the newspaper covering the Internet in 1988, right about the time I was messing around on Prodigy: “Here’s how The Post covered the ‘grand social experiment’ of the Internet in 1988.” It’s a fascinating portal into what feels like another world, sort of. I usually think of the modern Internet experience first taking shape with the World Wide Web in the early and mid-90s. Reading this article, I was struck by how early many of the contours of the web as we know it, and the language we use to describe it, began to appear. To be sure, there are also striking discontinuities, but the Internet in 1988 — before AOL, the WWW, the dot-com bubble, Web 2.0, social, mobile, and all that — exhibited characteristics that are readily recognizable.

Consider the early example of “crowd-sourcing” or collective intelligence with which the author opens the article:

Kreisel was looking for an efficient way to paint patterns inside computer-drawn shapes. Paul Heckbert, a graduate student in California, did not know Kreisel, but he had a pretty good piece of code, or computer programming, to do the job. He dispatched it to Kreisel’s machine, and seconds later the New Yorker’s problem was solved.

Of course, the snake was already in the garden. The article is, in fact, occasioned by “a rogue program, known as a ‘virus,'” designed by a student at Cornell that “did not destroy any data but clogged computers and wasted millions of dollars’ worth of skilled labor and computer time.”

The virus, we’re told, “frightens many network visionaries, who dream of a ‘worldnet’ with ever more extensive connections and ever fewer barriers to the exchange of knowledge.” (Cyber-utopianism? Check. Of course, the roots of cyber-utopianism go back further still than 1988.) According to a Harvard astrophysicist, the Internet is a “community far more than a network of computers and cables.” “When your neighbors become paranoid of one another,” he added, “they no longer cooperate, they no longer share things with each other. It takes only a very, very few vandals to … destroy the trust that glues our community together.”

The scale clearly appeared massive, but today it seems quaint: “Together the news groups produce about 4 million characters of new material a day, the equivalent of about five average books.” But don’t worry about trying to keep up with it all, “90 percent of it is complete and utter trash,” at least as far as that astrophysicist was concerned. Hard to imagine that he was far off the mark (or that the ratio has shifted too much in the ensuing years).

At the time, “thousands of men and women in 17 countries swap recipes and woodworking tips, debate politics, religion and antique cars, form friendships and even fall in love.” Honestly, that’s not a bad sample of the sorts of things we’re still doing on the Internet. In part, this is because it is not a bad sample of what human beings do generally, so it’s what we end up doing in whatever social spaces we end up creating with our tools.

And when human beings find themselves interacting in new contexts created by new communication technologies, there are bound to be what we might kindly call “issues” while norms and conventions adapt to account for the new techno-social configuration. So already in 1988, we read that the Internet “has evolved its own language, social norms and ‘netiquette.’” More than twenty years on, that project is still ongoing.

The author focused chiefly on the tone of discourse on Internet forums and, rightly I think, attributed its volatility to the characteristics of the medium:

Wouk’s riposte is a good example of why arguments sometimes intensify into bitter feuds known as flame wars, after the tendency of one character in Marvel Comics’ Fantastic Four to burst into flames (“Flame on!” he shouts) when he is angry. Is Wouk truly angry or just having a good time? Because the written word conveys no tone of voice, it isn’t always easy to tell.

“In a normal social setting,” said Chuq von Rospach of Sun Microsystems, “chances are the two of them would have a giggle over it and it would go away. On a network it tends to escalate. The feedback mechanisms that tell you to back off, or tell you that this person’s joking, aren’t there. The words are basically lifeless.”

Not only would I have failed to guess that “flame wars” was already in use in 1988, I had no idea of its etymology. The author, though, doesn’t use the word emoticon when describing them: “True net-heads sometimes resort to punctuation cartoons to get around the absence of inflection. They may append a :-) if they are making a joke (turn your head to the left) or use :-( for an ersatz frown.”

“Net-heads” appears not to have caught on, but what the author calls “punctuation cartoons” certainly have. (What we now call emoticons, by the way, first appeared in the late 19th century.)

Finally, lest the Internet of 1988 appear all too familiar, here’s one glaring point of discontinuity on which to close: “The one unbending rule is that thou shalt not post commercial announcements. It isn’t written anywhere, but heaven help the user who tries to broadcast an advertisement.”

And for good measure, here’s another snippet of the not-too-distant past (1994) that nonetheless manages to feel as if it were ages ago. “Allison, can you explain what Internet is?”

The Assassin and the Camera

It’s not uncommon to hear someone say that they were haunted by an image, often an old photograph. It is a figurative and evocative expression. To say that an image is haunting is to say that the image has lodged itself in the mind like a ghost might stubbornly take up residence in a house, or that it has somehow gotten a hold of the imagination and in the imagination lives on as a spectral after-image. When we speak of images of the deceased, of course, the language of haunting approaches its literal meaning. In these photographs, the dead enjoy an afterlife in the imagination.

Lewis Powell

I’ve lately been haunted myself by one such photograph. It is a well-known image of Lewis Powell, the man hanged for his failed attempt to assassinate Secretary of State William Seward. On the same night that John Wilkes Booth murdered the president, Powell was to kill the secretary of state, and their co-conspirator, George Atzerodt, was to kill Vice-President Andrew Johnson. Atzerodt never made the attempt at all. Powell followed through, and, although Seward survived, the attack inflicted tremendous suffering on the Seward household.

I came upon the haunting image of Powell in a series of recently colorized Civil War photographs, and I was immediately captivated by the apparent modernity of the image. Nineteenth century photographs tend to have a distinct feel, one that clearly announces the distant “pastness” of what they have captured. That they are ordinarily black-and-white only partially explains this effect. More significantly, the effect is communicated by the look of the people in the photographs. It’s not the look of their physical appearance, though; rather, it’s the “look” of their personality.

There is a distinct subjectivity—or, perhaps, lack thereof—that emerges from these old photographs. There is something in the eyes that suggests a way of being in the world that is foreign and impenetrable. The camera is itself a double cause of this dissonance. First, the subjects seem unsure of how to position themselves before the camera; they are still unsettled, it seems, by the photographic technique. They seem to be wrestling with the camera’s gaze. They are too aware of it. It has rendered them objects, and they’ve not yet managed to negotiate the terms under which they may recover their status as subjects in their own right. In short, they had not yet grown comfortable playing themselves before the camera, with the self-alienated stance that such performance entails.

But then there is this image of Powell, which looks as if it could have been taken yesterday and posted on Instagram. The gap in consciousness seems entirely closed. The “pastness” is eclipsed. Was this merely a result of his clean-shaven, youthful air? Was it the temporal ambiguity of his clothing or of the way he wore his hair? Or was Powell on to something that his contemporaries had not yet grasped? Did he hold some clue about the evolution of modern consciousness? I went in search of an answer, and I found that the first person I turned to had been there already.

Death on Film

"He is dead, and he is going to die ..."
“He is dead, and he is going to die …”

Roland Barthes’ discussion of death and photography in Camera Lucida: Reflections on Photography has achieved canonical status, and I turned to his analysis in order to shed light on my experience of this particular image that was so weighted with death. I soon discovered that an image of Powell appears in Camera Lucida. It is not the same image that grabbed my attention, but a similar photograph taken at the same time. In this photograph, Powell is looking at the camera, the manacles that bind his hands are visible, but still the modernity of expression persists.

Barthes was taken by the way that a photograph suggests both the “that-has-been” and the “this-will-die” aspects of a photographic subject. His most famous discussion of this dual gesture involved a photograph of his mother, which does not appear in the book. But a shot of Powell is used to illustrate a very similar point. It is captioned, “He is dead, and he is going to die …” The photograph simultaneously witnesses to three related realities. Powell was; he is no more; and, in the moment captured by this photograph, he is on his way to death.

Barthes also borrowed two Latin words for his analysis: studium and punctum. The studium of a photograph is its ostensible subject matter and what we might imagine the photographer seeks to convey through the photograph. The punctum by contrast is the aspect that “pricks” or “wounds” the viewer. The experience of the punctum is wholly subjective. It is the aspect that disturbs the studium and jars the viewer. Regarding the Powell photograph, Barthes writes,

“The photograph is handsome, as is the boy: that is the studium. But the punctum is: he is going to die. I read at the same time: this will be and this has been; I observe with horror an anterior future of which death is the stake. By giving me the absolute past of the pose, the photograph tells me death in the future. What pricks me is the discovery of this equivalence.”

In my own experience, the studium was already the awareness of Powell’s impending death. The punctum was the modernity of Powell’s subjectivity. Still eager to account for the photograph’s effect, I turned from Barthes to historical sources that might shed light on the photographs.

The Gardner Photographs

The night of the assassination attempt, Powell entered the Seward residence claiming that he was asked to deliver medicine for Seward. When Seward’s son, Frederick, told Powell that he would take the medicine to his father, Powell handed it over, started to walk away, but then wheeled on Frederick and put a gun to his head. The gun misfired and Powell proceeded to beat Frederick over the head with it. He did so with sufficient force to crack Frederick’s skull and jam the gun.

Powell then pushed Seward’s daughter out of the way as he burst into the secretary of state’s room. He leapt onto Seward’s bed and repeatedly slashed at Seward with a knife. Seward was likely saved by an apparatus he was wearing to correct an injury to his jaw sustained days earlier. The apparatus deflected Powell’s blows from Seward’s jugular. Powell then wounded two other men, including another of Seward’s sons, as they attempted to pull him off of Seward. As he fled down the stairs, Powell also stabbed a messenger who had just arrived. Like everyone else who was wounded that evening, the messenger survived, but he was paralyzed for life.

Powell then rushed outside to discover that a panicky co-conspirator who was to help him make his getaway had abandoned him. Over the course of three days, Powell made his way to a boardinghouse owned by Mary Surratt where Booth and his circle had plotted the assassinations. He arrived, however, just as Surratt was being questioned, and, not providing a very convincing account of himself, he was taken into custody. Shortly thereafter, Powell was picked out of a lineup by one of Seward’s servants and taken aboard the ironclad USS Saugus to await his trial.

It was aboard the Saugus that Powell was photographed by Alexander Gardner, a Scot who had made his way to America to work with Mathew Brady. According to Powell’s biographer, Betty Ownsbey, Powell resisted having his picture taken by vigorously shaking his head when Gardner prepared to take a photograph. Given the exposure time, this would have blurred his face beyond recognition. Annoyed by Powell’s antics, H. H. Wells, the officer in charge of the photo shoot, struck Powell’s arm with the side of his sword. At this, Major Eckert, an assistant to the secretary of war who was there to interrogate Powell, interposed and reprimanded Wells.

Powell then seems to have resigned himself to being photographed, and Gardner proceeded to take several shots of Powell. Gardner must have realized that he had something unique in these exposures because he went on to copyright six images of Powell. He didn’t bother to do so with any of the other pictures he took of the conspirators. Historian James Swanson explains:

“[Gardner’s] images of the other conspirators are routine portraits bound by the conventions of nineteenth century photography. In his images of Powell, however, Gardner achieved something more.  In one startling and powerful view, Powell leans back against a gun turret, relaxes his body, and gazes languidly at the viewer. There is a directness and modernity in Gardner’s Powell suite unseen in the other photographs.”

My intuition was re-affirmed, but the question remained: What accounted for the modernity of these photographs?

Resisting the Camera’s Gaze

Ownsbey’s account of the photo shoot contained an important clue: Powell’s subversive tactics. Powell clearly intuited something about his position before the camera that he didn’t like. He attempted one form of overt resistance, but appears to have decided that this choice was untenable. He then seems to acquiesce. But what if he wasn’t acquiescing? What if the modernity that radiates from these pictures arises out of Powell’s continued resistance by other means?

Powell could not avoid the gaze of the camera, but he could practice a studied indifference to it. In order to resist the gaze, he would carry on as if there were no gaze. To ward off the objectifying power of the camera, he had to play himself before the camera. Simply being himself was out of the question; the observer effect created by the camera’s presence so heightened one’s self-consciousness that it was no longer possible to simply be. Simply being assumed self-forgetfulness. The camera does not allow us to forget ourselves. In fact, as with all technologies of self-documentation, it heightens self-consciousness. In order to appear indifferent to the camera, Powell had to perform the part of Lewis Powell as Lewis Powell would appear were there no camera present.

In doing so, Powell stumbled upon the negotiated settlement with the gaze of the camera that eluded his contemporaries. He was a pioneer of subjectivity. Before the camera, many of his contemporaries either stared blankly, giving the impression of total vacuity, or else they played a role–the role of the brave soldier, or the statesman, or the lover, etc. Powell found another way. He played himself. There was nothing new about playing a role, of course. But playing yourself, that seems a watershed of consciousness. Playing a role entails a deliberate putting on of certain affectations; playing yourself suggests that there is nothing to the self but affectations. The anchor of identity in self-forgetfulness is lifted and the self is set adrift. Perhaps the violence that Powell had witnessed and perpetrated prepared him for this work against his psyche.

If indeed this was Powell’s mode of resistance, it was Pyrrhic: ultimately it entailed an even more profound surrender of subjectivity. It internalized the objectification of the self which the external presence of the camera elicited. This is what gave Powell’s photographs their eerie modernity. They were haunted by the future, not the past. It wasn’t Powell’s imminent death that made them uncanny; it was the glimpse of our own fractured subjectivity. Powell’s struggle before the camera, then, becomes a parable of human subjectivity in the age of pervasive documentation. We have learned to play ourselves with ease, and not only before the camera. The camera is now irrelevant.

In the short time that was left to him after the Gardner photographs were taken, Powell went on to become a minor celebrity. He was, according to Swanson, the star attraction at the trial of Booth’s co-conspirators. Powell “fascinated the press, the public, and his own guards.” He was, in the words of a contemporary account, “the observed of all observers, as he sat motionless and imperturbed, defiantly returning each gaze at his face and person.” But the performance had its limits. Although Ownsbey has raised reasonable doubts about the claim, it was widely reported that Powell had attempted suicide by repeatedly pounding his head against a wall.

On July 7, 1865, a little over two months after the Gardner photographs were taken, Powell was hanged with three of his co-conspirators. It doesn’t require Barthes’ critical powers to realize that death saturates the Powell photographs, but death figured only incidentally in the reading I’ve offered here. It is not, however, irrelevant that this foray into modern consciousness was undertaken under the shadow of death. It is death, perhaps, that gave Powell’s performance its urgency. And perhaps it is now death that serves as the last lone anchor of the self.



The Humanities, the Sciences, and the Nature of Education

Over the last couple of months, Steven Pinker and Leon Wieseltier have been trading shots in a debate about the relationship between the sciences and the humanities. Pinker, a distinguished scientist whose work ranges from linguistics to cognitive psychology, kicked things off with an essay in The New Republic titled, “Science is Not Your Enemy.” This essay was published with a video response by Wieseltier, The New Republic’s longstanding literary editor, already embedded. It was less than illuminating. A little while later, Wieseltier published a more formal response, which someone unfortunately titled, “Crimes Against the Humanities.” A little over a week ago, both Pinker and Wieseltier produced their final volleys in “Science v. the Humanities, Round III.”

I’ll spare you a play-by-play, or blow-by-blow as the case may be. If you’re interested, you can click over and read each essay. You might also want to take a look at Daniel Dennett’s comments on the initial exchange. The best I can do by way of summary is this: Pinker is encouraging embattled humanists to relax their suspicions and recognize the sciences as a friend and ally from which they can learn a great deal. Wieseltier believes that any “consilience” with the sciences on the part of the humanities will amount to a surrender to an imperialist foe rather than a collaboration with an equal partner.

The point of contention, once some of the mildly heated rhetoric is accounted for, seems to be the terms of the relationship between the two sets of disciplines. Both agree that the sciences and the humanities should not be hermetically sealed off from one another, but they disagree about the conditions and fruitfulness of their exchanges.

If we must accept the categories, I think of myself as a humanist with interests that include the sciences. I’m generally predisposed to agree with Wieseltier to a certain extent, yet I found myself doing so rather tepidly. I can’t quite throw myself behind his defense of the humanities. Nor, however, can I be as sanguine as Pinker about the sort of consilience he imagines.

What I can affirm with some confidence is also the point Pinker and Wieseltier might agree upon: neither serious humanistic knowledge nor serious scientific knowledge appears to be flourishing in American culture. But then again, this surmise is mostly based on anecdotal evidence. I’d want to make this claim more precise and ground it in more substantive evidence.

That said, Pinker and Wieseltier both appear to have the professional sciences and humanities primarily in view. My concern, however, is not only with the professional caste of humanists or scientists. My concern is also with the rest of us, myself included: those who are not professors or practitioners (strictly speaking), but who, despite our non-professional status, by virtue of our status as human beings seek genuine encounters with truth, goodness, and beauty.

To frame the matter in this way breaks free of the binary opposition that fuels the science/humanities wars. There is, ultimately, no zero-sum game for truth, goodness, and beauty, if these are what we’re after. The humanities and the sciences amount to a diverse set of paths, each, at their best, leading to a host of vantage points from which we might perceive the world truly, apprehend its goodness, and enjoy its beauty. Human culture would be a rather impoverished and bleak affair were only a very few of these paths available to us.

I want to believe that most of us recognize all of this intuitively. The science/humanities binary is, in fact, a rather modern development. Distinctions among the various fields of human knowledge do have an ancient pedigree, of course. And it is also true that these various fields were typically ranked within a hierarchy that privileged certain forms of knowledge over others. However, and I’m happy to be corrected on this point, the ideal was nonetheless an openness to all forms of knowledge and a desire to integrate these various forms into a well-rounded understanding of the cosmos.

It was this ideal that, during the medieval era, yielded the first universities. It was this ideal, too, that animated the pursuit of the liberal arts, which entailed both humanistic and scientific disciplines (although to put it that way is anachronistic): grammar, logic, rhetoric, arithmetic, music, geometry, and astronomy. The well-trained mind was to be conversant with each of these.

All well and good, you may say, but it seems as though seekers of truth, goodness, and beauty are few and far between. Hence, for instance, Pinker’s and Wieseltier’s respective complaints. The real lesson, after all, of their contentious exchange, one which Wieseltier seems to take at the end of his last piece, is this: While certain professional humanists and scientists bicker about the relative prestige of their particular tribe, the cultural value of both humanistic and scientific knowledge diminishes.

Why might this be the case?

Here are a couple of preliminary thoughts–not quite answers, mind you–that I think relevant to the discussion.

1. Sustaining wonder is critical. 

The old philosophers taught that philosophy, the pursuit of wisdom, began with wonder. Wonder is something that we have plenty of as children, but somehow, for most of us anyway, the supply seems to run increasingly dry as we age. I’m sure that there are many reasons for this unfortunate development, but might it be the case that professional scientists and humanists both are partly to blame? And, so as not to place myself beyond criticism, perhaps professional teachers of the sciences and humanities are also part of the problem. Are we cultivating wonder, or are we complicit in its erosion?

2. Education is not merely the transmission of information.

To borrow a formulation from T.S. Eliot: Information, that is an assortment of undifferentiated facts, is not the same as knowledge; and knowledge is not yet wisdom. One may, for example, have memorized all sorts of random historical facts, but that does not make one a historian. One may have learned a variety of mathematical operations or geometrical theorems, but that does not make one a mathematician. To say that one understands a particular discipline or field of knowledge is not necessarily to know every fact assembled under the purview of that field. Rather it is to be able to see the world through the perspective of that field. A mathematician is one who is able to see the world and to think mathematically. A historian is one who is able to see the world and to think historically.

Wonder, then, is not sustained by the accumulation of facts. It is sustained by the opening up of new vistas–historical, philosophical, mathematical, scientific, etc.–on reality that continually reveal its depth, complexity, and beauty.

Maybe it’s also the case that wonder must be sustained by love. Philosophy, to which wonder ought to lead, is etymologically the “love of wisdom.” Absent that love, the wonder dissipates and leaves behind no fruit. This possibility brings to mind a passage from Iris Murdoch’s The Sovereignty of Good describing the work of learning Russian:

“I am confronted by an authoritative structure which commands my respect. The task is difficult and the goal is distant and perhaps never entirely attainable. My work is a progressive revelation of something which exists independently of me…. Love of Russian leads me away from myself towards something alien to me, something which my consciousness cannot take over, swallow up, deny or make unreal. The honesty and humility required of the student — not to pretend to know what one does not know — is the preparation for the honesty and humility of the scholar who does not even feel tempted to suppress the fact which damns his theory.”

We should get it out of our heads that education is chiefly or merely about training minds. It must address the whole human person. The mind, yes, but also the heart, the eyes, the ears, and the hands. It is a matter of character and habits and virtues and loves. The most serious reductionism is that which reduces education to the transfer of information. In which case, it makes little difference whether that information is of the humanistic or scientific variety.

As Hubert Dreyfus pointed out several years ago in his discussion of online education, the initial steps of skill acquisition most closely resemble the mere transmission of information. But passing beyond these early stages of education in any discipline involves the presence of another human being for a variety of significant reasons. Not least of these is the fact that we must come to love what we are learning, and our loves tend to be formed in the context of personal relationships. They are caught, as it were, from another who has already come to love a certain kind of knowledge or a certain way of approaching the world embedded in a particular discipline.

What then is the sum of this meandering post? First, the sciences and the humanities are partners, but not primarily partners in the accomplishment of their own respective goals. They are partners in the education of human beings who are alive to the fullness of the world they inhabit. Secondly, if that work is to yield fruit, then education in both the sciences and the humanities must be undertaken with a view to the full complexity of the human person and the motives that drive and sustain the meaningful pursuit of knowledge. And that, I know, is far easier said than done.

Louis C.K. Was Almost Right About Smartphones, Loneliness, Sadness, the Meaning of Life, and Everything

“I think these things are toxic, especially for kids …” That’s Louis C.K. talking about smartphones on Conan O’Brien last week. You’ve probably already seen the clip; it exploded online the next day. In the off-chance that you’ve not seen the clip yet, here it is. It’s just under five minutes, and it’s worth considering.

Let me tell you, briefly, what I appreciated about this bit, and then I’ll offer a modest refinement to Louis C.K.’s perspective.

Here are the two key insights I took away from the exchange. First, the whole thing about empathy. Cyberbullying is a big deal, at least it’s one of the realities of online experience that gets a lot of press. And before cyberbullying was a thing we worried about, we complained about the obnoxious and vile manner in which individuals spoke to one another on blogs and online forums. The anonymity of online discourse took a lot of the blame for all of this. A cryptic username, after all, allowed people to act badly with impunity.

I’m sure anonymity was a factor. That people are more likely to act badly when they can’t be caught is an insight at least as old as Plato’s ring of Gyges illustration. But, insofar as this kind of behavior has survived the personalization of the Internet experience, it would seem that the blame cannot be fixed entirely on anonymity.

This is where Louis C.K. offers us a slightly different, and I think better, angle that fills the picture out a bit. He frames the problem as a matter of embodiment. Obviously, people can be cruel to one another in each other’s presence. It happens all the time. The question is whether or not there is something about online experience that somehow heightens the propensity toward cruelty, meanness, rudeness, etc. Here’s how I would answer that question: It’s not that there is something intrinsic to the online experience that heightens the propensity to be cruel. It’s that the online experience unfolds in the absence of a considerable mitigating condition: embodied presence.

In Graham Greene’s The Power and the Glory, his unnamed protagonist, the whiskey priest, comes to the following realization: “When you visualized a man or woman carefully, you could always begin to feel pity … that was a quality God’s image carried with it … when you saw the lines at the corners of the eyes, the shape of the mouth, how the hair grew, it was impossible to hate.”

This is, I think, what Louis C.K. is getting at. We like to think of ourselves as rational actors who make our way through life by careful reasoning and logic. For better or for worse, this is almost certainly not the case. We constantly rely on all sorts of pre-cognitive or non-conscious or visceral operations. Most of these are grounded in our bodies and their perceptual equipment. When our bodies, and those magical mirror-neurons, are taken out of play, then the perceptual equipment that helps us act with a measure of empathy is also out of the picture, and then, it seems, cruelty proceeds with one less impediment.

The second insight I appreciated centered on the themes of loneliness and sadness. What Louis C.K. seems to be saying, in a way that still manages to be funny enough to bear, is that there’s something unavoidably sad about life and at the core of our being there is a profound emptiness. What’s more, it is when we are alone that we feel this sadness and recognize this emptiness. This is inextricably linked to what we might call the human condition, and the path to any kind of meaningful happiness is through this sadness and the loneliness that brings it on.

Because it’s worth reading over as text, here, one more time, is what Louis C.K. had to say about this:

“You need to build an ability to just be yourself and not be doing something. That’s what the phones are taking away, is the ability to just sit there. That’s being a person. Because underneath everything in your life there is that thing, that empty—forever empty. That knowledge that it’s all for nothing and that you’re alone. It’s down there.

And sometimes when things clear away, you’re not watching anything, you’re in your car, and you start going, ‘oh no, here it comes. That I’m alone.’ It starts to visit on you. Just this sadness. Life is tremendously sad, just by being in it…

That’s why we text and drive. I look around, pretty much 100 percent of the people driving are texting. And they’re killing, everybody’s murdering each other with their cars. But people are willing to risk taking a life and ruining their own because they don’t want to be alone for a second because it’s so hard.”

Okay, so I appreciated this part because I already agreed with it. I already agreed with it because I bought into this understanding of the human condition when I read Pascal years ago and because it resonates with my own experience. In his Pensées, Pascal wrote, “All of man’s misfortune comes from one thing, which is not knowing how to sit quietly in a room.”

Want to know what else he wrote? This:

“Anyone who does not see the vanity of the world is very vain himself.  So who does not see it, apart from young people whose lives are all noise, diversions, and thoughts for the future?  But take away their diversion and you will see them bored to extinction.  Then they feel their nullity without recognizing it, for nothing could be more wretched than to be intolerably depressed as soon as one is reduced to introspection with no means of diversion.”

Pascal wrote this stuff not quite 400 years ago. Four hundred years. Now, the question this raises is this: Doesn’t that undermine Louis C.K.’s whole bit? If the problems he associates with smartphones clearly predate smartphones, then isn’t he fundamentally off-base in his criticisms?

Yes and no.

Let me borrow some comments from Alan Jacobs to clarify what I mean. Over at his recently revived blog, Text Patterns, Jacobs wound his way from a discussion of Leo Tolstoy’s influence on Mikhail Bakhtin to make a very useful point about how we understand technology:

It seems to me that most of our debates about recent digital technologies — about living in a connected state, about being endlessly networked, about becoming functional cyborgs — are afflicted by the same tendency to false systematization that, as Levin and Pierre discover, afflict ethical theory. Perhaps if we really want to learn to think well, and in the end act well, in a hyper-connected environment, we need to stop trying to generalize and instead become more attentive to what we are actually doing, minute by minute, and to the immediate consequences of those acts.

In other words, rather than generalizing about “smartphones” or “digital technology,” let’s pay attention to specific practices. Granting, of course, that Louis C.K. is a comedian giving a short routine, not a philosopher writing a long monograph, he might’ve done well to take a cue from Jacobs.

The smartphone itself is not the “real” problem. The “real” problem, if we can agree that it is a problem, is our inability to abide, at least sometimes, the existential loneliness and sadness that are somehow wrapped up in the package of realities that we call “being human.” That problem is not in any essential way connected with the smartphone (as Pascal’s observations attest).

But the smartphone is not altogether irrelevant. It is part of a practice that is itself a manifestation of the problem. The problem is not the smartphone, it’s this thing we’re doing with the smartphone, which, in the past, we have also done with countless other things.

Unfortunately, recognizing that the problem isn’t essentially connected to the smartphone leads some to discount the problem altogether. That would be a mistake. The problem is no less real. It’s just that smashing our smartphones is not a solution. If only it were that simple. That promise of simplicity, in fact, might be why it is so tempting to causally link personal and social problems to certain technologies. It offers a certain comfort to us because we don’t have to look to our own crooked hearts for the source of our problems, and it holds out the promise of a relatively painless and straightforward solution.

The opposite is the case. The problem here, and in most cases, is (in part at least) buried in our own being, and tending it requires a mindful vigilance that must abide complexity in the absence of silver bullets.

So, then, rather than opening his bit by saying “I think these things are toxic, especially for kids,” Louis C.K. should have said, “I think this thing we do is toxic, for all of us …”