Wizard or God, Which Would You Rather Be?

Occasionally, I ask myself whether or not I’m really on to anything when I publish the “thinking out loud” that constitutes most of the posts on this blog. And occasionally the world answers back, politely, “Yes, yes you are.”

A few months ago, in a post on automation and smart homes, I ventured an off-the-cuff observation: the smart home animated by the Internet of Things amounted to a re-enchantment of the world by technological means. I further elaborated that hypothesis in a subsequent post:

“So then, we have three discernible stages–mechanization, automation, animation–in the technological enchantment of the human-built world. The technological enchantment of the human-built world is the unforeseen consequence of the disenchantment of the natural world described by sociologists of modernity, Max Weber being the most notable. These sociologists claimed that modernity entailed the rationalization of the world and the purging of mystery, but they were only partly right. It might be better to say that the world was not so much disenchanted as it was differently enchanted. This displacement and redistribution of enchantment may be just as important a factor in shaping modernity as the putative disenchantment of nature.

“In an offhand, stream-of-consciousness aside, I ventured that the allure of the smart-home, and similar technologies, arose from a latent desire to re-enchant the world. I’m doubling-down on that hypothesis. Here’s the working thesis: the ongoing technological enchantment of the human-built world is a corollary of the disenchantment of the natural world. The first movement yields the second, and the two are interwoven. To call this process of technological animation an enchantment of the human-built world is not merely a figurative post-hoc gloss on what has actually happened. Rather, the work of enchantment has been woven into the process all along.”

Granted, those were fairly strong claims that have yet to be thoroughly substantiated, but here’s a small bit of evidence suggesting that my little thesis has some merit. It is a short video clip from the NY Times’ Technology channel about the Internet of Things in which “David Rose, the author of ‘Enchanted Objects,’ sees a future where we can all live like wizards.” Emphasis mine, of course.

I had some difficulty embedding the video, so you’ll have to click over to watch it here: The Internet of Things. Really, you should. It’ll take less than three minutes of your time.

So, there was that. Because, apparently, the Internet today felt like reinforcing my quirky thoughts about technology, there was also this on the same site: Playing God Games.

That video segment clocks in at just under two minutes. If you click through to watch, you’ll note that it is a brief story about apps that allow you to play a deity in your own virtual world, with your very own virtual “followers.”

You can read that in light of my more recent musings about the appeal of games in which our “actions,” and by extension we ourselves, seem to matter.

Perhaps, then, this is the more modest shape the religion of technology takes in the age of simulation and diminished expectations: you may play the wizard in your re-enchanted smart home or you may play a god in a virtual world on your smartphone. I suspect this is not what Stewart Brand had in mind when he wrote, “We are as gods and might as well get good at it.”

Our Very Own Francis Bacon

Few individuals have done as much to chart the course of science and technology in the modern world as the Elizabethan statesman and intellectual Francis Bacon. But Bacon’s defining achievement was not, strictly speaking, scientific or technological. Rather, Bacon’s achievement lay in the realm of human affairs we would today refer to as “public relations.” Bacon’s genius was Draper-esque: he wove together a compelling story about the place of techno-science in human affairs from the loose threads of post-Reformation religious and political culture and the scientific breakthroughs we loosely group together as the Scientific Revolution.

In the story he told, knowledge mattered only insofar as it yielded power (the well-known formulation, “knowledge is power,” is Bacon’s), and that power mattered only insofar as it was directed toward “the relief of man’s estate.” To put that less archaically, we might say “the improvement of our quality of life.” But putting it that way obscures the theological overtones of Bacon’s formulation and its allusion to the curse under which humanity labored as a consequence of the Fall in the Christian understanding of the human condition. Our problem was both spiritual and material, and Bacon believed that in his day both facets of that problem were being solved. The improvement of humanity’s physical condition went hand in hand with the restoration of true religion occasioned by the English Reformation, and together they would lead straight to the full restoration of creation.

Bacon’s significance, then, lay in merging science and technology into one techno-scientific project and synthesizing this emerging project with the dominant world picture, thus charting its course and securing its prestige. It is just this sort of expansive vision driving technological development that I’ve had in mind in my recent posts (here and here) regarding culture, technology, and innovation.

My recent posts have also mentioned the entrepreneur Peter Thiel, who is increasingly assuming the role of Silicon Valley’s leading public intellectual–the Sage of Silicon Valley, if you will. This morning, I was reaffirmed in that evaluation of Thiel’s position by a pair of posts from the political philosopher Peter Lawler. In the first of these posts, Lawler comments on Thiel’s seeming ubiquity in certain circles, and he rehearses some of the by-now familiar aspects of Thiel’s intellectual affinities, notably for the sociologist cum philosopher René Girard and the political theorist Leo Strauss. Chiefly, Lawler discusses Thiel’s flirtations with transhumanism, particularly in his recently released Zero to One: Notes on Startups, or How to Build the Future, a distilled version of Thiel’s 2012 lecture course on start-ups at Stanford University.

(The book was prepared with Blake Masters, who had previously made available detailed notes on Thiel’s course. I’ll mention in passing that the tag line on Masters’ website runs as follows: “Your mind is software. Program it. Your body is a shell. Change it. Death is a disease. Cure it. Extinction is approaching. Fight it.”)

As it turns out, Francis Bacon makes a notable appearance in Thiel’s work. Here is Lawler summarizing that portion of the book:

“In the chapter entitled ‘You Are Not a Lottery Ticket,’ Thiel writes of Francis Bacon’s modern project, which places ‘prolongation of life’ as the noblest branch of medicine, as well as the main point of the techno-development of science. That prolongation is at the core of the definite optimism that should drive ‘the intelligent design’ at the foundation of technological development. We (especially we founders) should do everything we can ‘to prioritize design over chance.’ We should do everything we can to remove contingency from existence, especially, of course, each of our personal existences.”

The “intelligent design” in view has nothing to do, so far as I can tell, with the theory of human origins that is the most common referent for that phrase. Rather, it is Thiel’s way of labeling the forces of consciously deployed thought and work striving to bring order out of the chaos of contingency. Intelligent design is how human beings assert control and achieve mastery over their world and their lives, and that is an explicitly Baconian chord to strike.

Thiel, worried by the technological stagnation he believes has set in over the last forty or so years, is seeking to reanimate the technological project by once again infusing it with an expansive, dare we say mythic, vision of its place in human affairs. It may not be too much of a stretch to say that he is seeking to play the role of Francis Bacon for our age.

Like Bacon, Thiel is attempting to fuse the disparate strands of emerging technologies together into a coherent narrative of grandiose scale. And his story, like Bacon’s, features distinctly theological undertones. The chief difference may be this: whereas the defining institution of the early modern period was the nation-state, itself a powerful innovation of the period, the defining institution in Thiel’s vision is the start-up. As Lawler puts it, “the startup has replaced the country as the object of the highest human ambition. And that’s the foundation of the future that comes from being ruled by the intelligent designers who are Silicon Valley founders.”

Lawler is right to conclude that “Peter Thiel has emerged as the most resolute and most imaginative defender of the distinctively modern part of Western civilization.” Bacon was, after all, one of the intellectual founders of modernity, on par, I would say, with the likes of Descartes and Locke. But, Lawler adds,

“that doesn’t mean that, when it comes to the libertarian displacement of the nation by the startup and the abolition of all contingency from particular personal lives, his imagination and his self-importance don’t trump his astuteness. They do. His theology of liberation is that we, made in the image of God, can do for ourselves what the Biblical Creator promised—free ourselves from the misery of being self-conscious mortals dependent on forces beyond our control.”

And that is, as Lawler notes in his follow-up post, a rather ancient aspiration. Indeed, Thiel, who professes an admittedly heterodox variety of Christianity, would do well to remember that to say we are made in the image of God is one way of saying we are not, the Whole Earth Catalog notwithstanding, gods ourselves. This, it would seem, is a hard lesson to learn.

_______________________________

Update: On Twitter, I was made aware of a talk by Thiel at SXSW in 2013 on the topic of the chapter discussed above. Here it is (via @carlamomo).

Cathedrals, Pyramids, or iPhones: Toward a Very Tentative Theory of Technological Innovation

A couple of years back, while I was on my World’s Fair kick, I wrote a post or two (or three) about how we imagine the future, or, rather, how we fail to imagine the future. The World’s Fairs, particularly those held between the 1930s and ’70s, offered a rather grand and ambitious vision for what the future would hold. Granted, much of what made up that vision never quite materialized, and much of it now seems a tad hokey. Additionally, much of it amounted to a huge corporate ad campaign. Nevertheless, the imagined future was impressive in its scope; it was utopian. The three posts linked above each suggested that, relative to the World’s Fairs of the mid-20th century, we seem to have a rather impoverished imagination when it comes to the future.

One of those posts cited a 2011 essay by Peter Thiel, “The End of the Future,” outlining the sources of Thiel’s pessimism about the rate of technological advance. More recently, Dan Wang has cataloged a series of public statements by Thiel supporting his contention that technological innovation has slowed, and dangerously so. Thiel, who made his mark and his fortune as a founder of PayPal, has emerged over the last few years as one of Silicon Valley’s leading intellectuals. His pessimism, then, seems to run against the grain of his milieu. Thiel, however, is not pessimistic about the potential of technology itself; rather, as I understand him, he is critical of our inability to more boldly imagine what we could do with technology. His view is neatly summed up in his well-known quip, “We wanted flying cars, instead we got 140 characters.”

Thiel is not the only one who thinks that we’ve been beset by a certain gloomy malaise when it comes to imagining the future. Last week, in the pages of the New York Times Magazine, Jayson Greene wondered, with thinly veiled exasperation, why contemporary science-fiction is so “glum” about AI. The article is a bit muddled at points–perhaps because the author, noting the assistance of his machines, believes it is not even half his–but it registers what seems to be an increasingly recurring complaint. Just last month, for instance, I noted a similar article in Wired that urged authors to stop writing dystopian science-fiction. Behind each of these pieces there lies an implicit question: Where has our ability to imagine a hopeful, positive vision for the future gone?

Kevin Kelly is wondering the same thing. In fact, he was willing to pay for someone to tell him a positive story about the future. I’ve long thought of Kelly as one of the most optimistic of contemporary tech writers, yet of late even he appears to be striking a more ambiguous note. Perhaps needing a fresh infusion of hope, he took to Twitter with this message:

“I’ll pay $100 for the best 100-word description of a plausible technological future in 100 years that I would like to live in. Email me.”

Kelly got 23 responses, and then he constructed his own 100-word vision for the future. It is instructive to read the submissions. By “instructive,” I mean intriguing, entertaining, disconcerting, and disturbing by turns. In fact, when I first read through them I thought I’d dedicate a post to analyzing these little techno-utopian vignettes. Suffice it to say, a few people, at least, are still nurturing an expansive vision for the future.

But are their stories the exceptions that prove the rule? To put it another way, is the dominant cultural zeitgeist dystopian or utopian with regards to the future? Of course, as C.S. Lewis once put it, “What you see and what you hear depends a great deal on where you are standing. It also depends on what sort of person you are.” Whatever the case may be, there certainly seem to be a lot of people who think the zeitgeist is dystopian or, at best, depressingly unimaginative. I’m not sure they are altogether wrong about this, even if the whole story is more complicated. So why might this be?

To be clear before proceeding down this line of inquiry, I’m not so much concerned with whether we ought to be optimistic or pessimistic about the future. (The answer in any case is neither.) I’m not, in other words, approaching this topic from a normative perspective. Rather, I want to poke and prod the zeitgeist a little bit to see if we can’t figure out what is going on. So, in that spirit, here are a few loosely organized thoughts.

First off, our culture is, in large measure, driven by consumerism. This, of course, is little more than a cliché, but it is no less true because of it. Consumerism is finally about the individual. Individual aspirations, by their very nature, tend to be narrow and short-sighted. It is as if the potential creative force of our collective imagination were splintered into the millions of individual wills it is made to serve.

David Nye noted this devolution of our technological aspirations in his classic work on the American technological sublime. The sublime experience that once attended our encounters with nature, and then with technological creations of awe-inspiring size and dynamism, has now given way to what Nye called the consumer sublime. “Unlike the Ford assembly line or Hoover Dam,” Nye explains, “Disneyland and Las Vegas have no use value. Their representations of sublimity and special effects are created solely for entertainment. Their epiphanies have no referents; they reveal not the existence of God, not the power of nature, not the majesty of human reason, but the titillation of representation itself.”

The consumer sublime, which Nye also calls an “egotistical sublime,” amounts to “an escape from the very work, rationality, and domination that once were embodied in the American technological sublime.”

Looking at the problem of consumerism from another vantage point, consider Nicholas Carr’s theory about the hierarchy of innovation. Carr’s point of departure included Peter Thiel’s complaint, cited above, about the stagnation of technological innovation. In response, Carr suggested that innovation proceeds along a path more or less parallel to Maslow’s famous hierarchy of human needs. We begin by seeking to satisfy very basic needs, those related to our survival. As those basic needs are met, we are able to think about more complex needs for social interaction, personal esteem, and self-actualization.

In Carr’s stimulating repurposing of Maslow’s hierarchy, technological innovation proceeds from technologies of survival to technologies of self-fulfillment. Carr doesn’t think that these levels of innovation are neatly realized in some clean, linear fashion. But he does think that at present the incentives, “monetary and reputational,” are, in a darkly eloquent phrasing, “bending the arc of innovation … toward decadence.” Away, that is, from grand, highly visible, transformative technologies.

The end game of this consumerist reduction of technological innovation may be what Ian Bogost recently called “future ennui.” “The excitement of a novel technology (or anything, really),” Bogost writes,

“has been replaced—or at least dampened—by the anguish of knowing its future burden. This listlessness might yet prove even worse than blind boosterism or cynical naysaying. Where the trauma of future shock could at least light a fire under its sufferers, future ennui exudes the viscous languor of indifferent acceptance. It doesn’t really matter that the Apple Watch doesn’t seem necessary, no more than the iPhone once didn’t too. Increasingly, change is not revolutionary, to use a word Apple has made banal, but presaged.”

Bogost adds, “When one is enervated by future ennui, there’s no vigor left even to ask if this future is one we even want.” The technological sublime, then, becomes the consumer sublime, which becomes future ennui. This is how technological innovation ends, not with a bang but a sigh.

The second point I want to make about the pessimistic zeitgeist centers on our Enlightenment inheritance. The Enlightenment bequeathed to us, among other things, two articles of faith. The first of these was the notion of inevitable moral progress, and the second was the notion of inevitable techno-scientific progress. Together they yielded what we tend to refer to simply as the Enlightenment’s notion of Progress. Together these articles of faith cultivated hope and incited action. Unfortunately, the two were sundered by the accumulation of tragedy and despair we call the twentieth century. Techno-scientific progress was a rosy notion so long as we imagined that moral progress advanced hand in hand with it. Techno-scientific progress decoupled from Enlightenment confidence in the perfectibility of humanity leaves us with the dystopian imagination.

Interestingly, the trajectory of the American World’s Fairs illustrates both of these points. Generally speaking, the World’s Fairs of the nineteenth and early twentieth century subsumed technology within their larger vision of social progress. By the 1930s, the Fairs presented technology as the force upon which the realization of the utopian social vision depended. The 1939 New York Fair marked a turning point. It featured a utopian social vision powered by technological innovation. From that point forward, technological innovation increasingly became a goal in itself rather than a means toward a utopian society, and technological innovation was increasingly a consumer affair of diminishing scope.

That picture was painted in rather broad strokes, but I think it will bear scrutiny. Whether the illustration ultimately holds up or not, however, I certainly think the claim stands. The twentieth century shattered our collective optimism about human nature; consequently, empowering human beings with ever more powerful technologies became the stuff of nightmares rather than dreams.

Thirdly, technological innovation on a grand scale is an act of sublimation, and we are too self-knowing to sublimate. Let me lead into this discussion by acknowledging that this point may be too subtle to be true, so I offer it circumspectly. According to certain schools of psychology, sublimation describes the process by which we channel or redirect certain desires, often destructive or transgressive desires, into productive action. On this view, the great works of civilization are powered by sublimation. But, to borrow a line cited by the late Philip Rieff, “if you tell people how they can sublimate, they can’t sublimate.” In other words, sublimation is a tacit process. It is the by-product of a strong buy-in to cultural norms and ideals by which individual desire is subsumed into some larger purpose. It is the sort of dynamic, in other words, that conscious awareness hampers and that ironic detachment, our default posture toward reality, destroys. Make of that theory what you will.

The last point builds on all that I’ve laid out thus far and perhaps even ties it all together … maybe. I want to approach it by noting one segment of the wider conversation about technology where a big, positive vision for the future is nurtured: the Transhumanist movement. This should go without saying, but I’ll say it anyway just to put it beyond doubt: I don’t endorse the Transhumanist vision. By saying that it is a “positive” vision I am only saying that it is understood as a positive vision by those who adhere to it. Now, with that out of the way, here is the thing to recognize about the Transhumanist vision: its aspirations are quasi-religious in character.

I mean that in at least a couple of ways. For instance, it may be understood as a reboot of Gnosticism, particularly given its disparagement of the human body and its attendant limitations. Relatedly, it often aspires to a disembodied, virtual existence that sounds a lot like the immortality of the soul espoused by Western religions. It is in this way a movement focused on technologies of the self, that highest order of innovation in Carr’s pyramid; but rather than seeking technologies that are mere accouterments of the self, its adherents pursue technologies that work on the self to push the self along to the next evolutionary plane. Paradoxically, then, technology in the Transhumanist vision works on the self to transcend the self as it now exists.

Consequently, the scope of the Transhumanist vision stems from the Transhumanist quest for transcendence. The technologies of the self that Carr had in mind were technologies centered on the existing, immanent self. Putting all of this together, then, we might say that technologies of the immanent self devolve into gadgets with ever diminishing returns–consumerist ephemera–yielding future ennui. The imagined technologies of the would-be transcendent self, however, are seemingly more impressive in their aims and inspire cultish devotion in those who hope for them. But they are still technologies of the self. That is to say, they are not animated by a vision of social scope nor by a project of political consequence. This lends the whole movement a certain troubling naiveté.

Perhaps it also ultimately limits technological innovation. Grand technological projects of the sort that people like Thiel and Kelly would like to see us at least imagine are animated by a culturally diffused vision, often religious or transcendent in nature, that channels individual action away from the conscious pursuit of immediate satisfaction.

The other alternative, of course, is coerced labor. Hold that thought.

I want to begin drawing this over-long post to a close by offering it as an overdue response to Pascal-Emmanuel Gobry’s discussion of Peter Thiel, the Church, and technological innovation. Gobry agreed with Thiel’s pessimism and lamented that the Church was not more active in driving technological innovation. He offered the great medieval cathedrals as an example of the sort of creation and innovation that the Church once inspired. I heartily endorse his estimation of the cathedrals as monumental works of astounding technical achievement, artistic splendor, and transcendent meaning. And, as Gobry notes, they were the first such monumental works not built on the back of forced labor.

For projects of that scale to succeed, individuals must either be animated by ideals that drive their willing participation or they must be forced by power or circumstance. In other words, cathedrals or pyramids. Cathedrals represent innovation born of freedom and transcendent ideals. The pyramids represent innovation born of forced labor and transcendent ideals.

The third alternative, of course, is the iPhone. I use the iPhone here to stand for consumer driven innovation. Innovation that is born of relative freedom (and forced labor) but absent a transcendent ideal to drive it beyond consumerist self-actualization. And that is where we are stuck, perhaps, with technological stagnation and future ennui.

But here’s the observation I want to leave you with. Our focus on technological innovation as the key to the future is a symptom of the problem; it suggests strongly that we are already compromised. The cathedrals were not built by people possessed merely of the desire to innovate. Technological innovation was a means to a culturally inspired end. [See the Adams quote below.] Insofar as we have reversed the relationship and allowed technological innovation to be our raison d’être, we may find it impossible to imagine a better future, much less bring it about. With regards to the future of society, if the answer we’re looking for is technological, then we’re not asking the right questions.

_____________________________________

You can read a follow-up piece here.

N.B. The initial version of this post referred to “slave” labor with regards to the pyramids. A reader pointed out to me that the pyramids were not built by slaves but by paid craftsmen. This prompted me to do a little research. It does indeed seem to be the case that “slaves,” given what we mean by the term, were not the primary source of labor on the pyramids. However, the distinction seems to me to be a fine one. These workers appear to have been subject to various degrees of “obligatory” labor, although also provided with food, shelter, and tax breaks. While not quite slave labor, it is not quite the labor of free people either. By contrast, you can read about the building of the cathedrals here. That said, I’ve revised the post to omit the references to slavery.

Update: Henry Adams knew something of the cultural vision at work in the building of the cathedrals. Note the last line, especially:

“The architects of the twelfth and thirteenth centuries took the Church and the universe for truths, and tried to express them in a structure which should be final.  Knowing by an enormous experience precisely where the strains were to come, they enlarged their scale to the utmost point of material endurance, lightening the load and distributing the burden until the gutters and gargoyles that seem mere ornament, and the grotesques that seem rude absurdities, all do work either for the arch or for the eye; and every inch of material, up and down, from crypt to vault, from man to God, from the universe to the atom, had its task, giving support where support was needed, or weight where concentration was felt, but always with the condition of showing conspicuously to the eye the great lines which led to unity and the curves which controlled divergence; so that, from the cross on the flèche and the keystone of the vault, down through the ribbed nervures, the columns, the windows, to the foundation of the flying buttresses far beyond the walls, one idea controlled every line; and this is true of St. Thomas’ Church as it is of Amiens Cathedral.  The method was the same for both, and the result was an art marked by singular unity, which endured and served its purpose until man changed his attitude toward the universe.”


Arendt on Trial

The recent publication of an English translation of Bettina Stangneth’s Eichmann Before Jerusalem: The Unexamined Life of a Mass Murderer has yielded a handful of reviews and essays, like this one, framing the book as a devastating critique of Hannah Arendt’s Eichmann in Jerusalem: A Report on the Banality of Evil.

The critics seem to assume that Arendt’s thesis amounted to a denial or diminishment of Eichmann’s wickedness. Arendt’s famous formulation, “the banality of evil,” is taken to mean that Eichmann was simply a thoughtless bureaucrat thoughtlessly following orders. Based on Stangneth’s exhaustive work, they conclude that Eichmann was anything but thoughtless in his orchestration of the death of millions of Jews. Ergo, Arendt was wrong about Eichmann.

But this casual dismissal of Arendt’s argument is built on a misunderstanding of her claims. Arendt certainly believed that Eichmann’s deeds were intentional and genuinely evil. She believed he deserved to die for his crimes. She was not taken in by his performance on the witness stand in Jerusalem. She did consider him thoughtless, but thoughtlessness as she intended the word was a more complex concept than what the critics have assumed.

At least two rejoinders have been published in an attempt to clarify and defend Arendt’s position. Both agree that Stangneth herself was not nearly as dismissive of Arendt as the second-hand critics, and both argue that Stangneth’s work does not undermine Arendt’s thesis, properly understood.

The first of these pieces, “Did Eichmann Think?” by Roger Berkowitz, appeared at The American Interest, and the second, “Who’s On Trial, Eichmann or Arendt?” by Seyla Benhabib, appeared at the NY Times’ philosophy blog, The Stone. Berkowitz’s piece is especially instructive. Here is the conclusion:

“In other words, evil originates in the neediness of lonely, alienated bourgeois people who live lives so devoid of higher meaning that they give themselves fully to movements. Such joiners are not stupid; they are not robots. But they are thoughtless in the sense that they abandon their independence, their capacity to think for themselves, and instead commit themselves absolutely to the fictional truth of the movement. It is futile to reason with them. They inhabit an echo chamber, having no interest in learning what others believe. It is this thoughtless commitment that permits idealists to imagine themselves as heroes and makes them willing to employ technological implements of violence in the name of saving the world.”

Do read the rest.

Waiting for Socrates … So We Can Kill Him Again and Post the Video on YouTube

It will come as no surprise, I’m sure, if I tell you that the wells of online discourse are poisoned. It will come as no surprise because critics have complained about the tone of online discourse for as long as people have interacted with one another online. In fact, we more or less take the toxic, volatile nature of online discourse for granted. “Don’t read the comments” is about as routine a piece of advice as “look both ways before crossing the street.” And, of course, it is also true that complaints about the coarsening of public discourse in general have been around for a lot longer than the Internet and digital media.

That said, I’ve been intrigued, heartened actually, by a recent round of posts bemoaning the state of online rhetoric from some of the most thoughtful people whose work I follow. Here is Freddie deBoer lamenting the rhetoric of the left, and here is Matthew Anderson noting much of the same on the right. Here is Alan Jacobs on why he’s stepping away from Twitter. Follow any of those links and you’ll find another series of links to thoughtful, articulate writers all telling us, more or less, that they’ve had enough. This piece urges civility and it suggests, hopefully (naively?), that the “Internet” will learn soon enough to police itself, but the evidence it cites along the way seems rather to undermine such hopefulness. I won’t bother to point you to some of the worst of what I’ve regrettably encountered online in recent weeks.

Why is this the case? Why, as David Sessions recently put it, is the state of the Internet awful?

Like everyone else, I have scattered thoughts about this. For one thing, the nature of the medium seems to encourage rancor, incivility, misunderstanding, and worse. Anonymity has something to do with this, and so does the abstraction of the body from the context of communication.

Along the same media-ecological lines, Walter Ong noted that oral discourse tends to be agonistic and literate discourse tends to be irenic. Online discourse tends to be conducted in writing, which might seem to challenge Ong’s characterization. But just as television and radio constituted what Ong called secondary orality, so might we say that social media is a form of secondary literacy, blurring the distinctions between orality and literacy. It is text based, but, like oral discourse, it brings people into a context of relative communicative immediacy. That is to say that through social media people are responding to one another in public and in short order, more as they would in a face-to-face encounter, for example, than in private letters exchanged over the course of months.

In theory, writing affords us the temporal space to be more thoughtful and precise in expressing our ideas, but, in practice, the expectations of immediacy in digital contexts collapse that space. So we lose the strengths of each medium: we get none of the meaning-making cues of face-to-face communication and none of the time for reflection that written communication ordinarily grants. The media context, then, ends up being agonistic and rife with misunderstanding; it encourages performative pugilism.

Also, as the moral philosopher Alasdair MacIntyre pointed out some time ago, we no longer operate with a set of broadly shared assumptions about what is good and what shape a good life should take. Our ethical reasoning tends not to be built on the same foundation. Because we are reasoning from incompatible moral premises, the conclusions reached by two opposing parties tend to be interpreted as sheer stupidity or moral obtuseness. In other words, because our arguments, proceeding as they do from such disparate moral frameworks, fail to convince and persuade, we begin to assume that those who will not yield to our moral vision must thus be fools or worse. Moreover, we conclude, fools and miscreants cannot be argued with; they can only be shamed, shouted down, or otherwise silenced.

Digital dualism is also to blame. Some people seem to operate under the assumption that they are not really racists, misogynists, anti-Semites, etc.–they just play one on Twitter. It really is much too late in the game to play that tired card.

Perhaps, too, we’ve conflated truth and identity in such a way that we cannot conceive of a challenge to our views as anything other than a challenge to our humanity. Conversely, it seems that in some highly-charged contexts being wrong can cost you the basic respect one might be owed as a fellow human being.

Finally, the Internet is awful because, frankly, people are awful. We all are; at least we all can be under the right circumstances. As Solzhenitsyn put it, “If only there were evil people somewhere insidiously committing evil deeds, and it were necessary only to separate them from the rest of us and destroy them. But the line dividing good and evil cuts through the heart of every human being.”

To that list, I want to offer just one more consideration: a little knowledge is a dangerous thing, and there are few things the Internet does better than give everyone a little knowledge. A little knowledge is a dangerous thing because it is just enough to give us the illusion of mastery and a sense of authority. This illusion, encouraged by the myth of having all the world’s information at our fingertips, has led us to believe that by skimming an article here or reading the summary of a book there we become experts who may liberally pontificate about the most complex and divisive issues with unbounded moral and intellectual authority. This is the worst kind of insufferable foolishness, that which mistakes itself for wisdom without a hint of irony.

Real knowledge, on the other hand, is constantly aware of all that it does not know. The more you learn, the more you realize how much you don’t know, and the more hesitant you’ll be to speak as if you’ve got everything figured out. Getting past that threshold of “a little knowledge” tends to breed humility and create the conditions that make genuine dialogue possible. But that threshold will never be crossed if all we ever do is skim the surface of reality, and this seems to be the mode of engagement encouraged by the information ecosystem sustained by digital media.

We’re in need of another Socrates who will teach us once again that the way of wisdom starts with a deep awareness of our own ignorance. Of course, we’d kill him too, after a good skewering on Twitter, and probably without the dignity of hemlock. A posthumous skewering would follow, naturally, after the video of his death got passed around on Reddit and YouTube.

I don’t want to leave things on that cheery note, but the fact is that I don’t have a grand scheme for making online discourse civil, informed, and thoughtful. I’m pretty sure, though, that things will not simply work themselves out for the better without deliberate and sustained effort. Consider how W.H. Auden framed the difference between traditional cultures and modernity:

“The old pre-industrial community and culture are gone and cannot be brought back. Nor is it desirable that they should be. They were too unjust, too squalid, and too custom-bound. Virtues which were once nursed unconsciously by the forces of nature must now be recovered and fostered by a deliberate effort of the will and the intelligence. In the future, societies will not grow of themselves. They will be either made consciously or decay.”

For better or worse, or more likely both, this is where we find ourselves–either we deploy deliberate effort of will and intelligence or face perpetual decay. Who knows, maybe the best we can do is to form and maintain enclaves of civility and thoughtfulness amid the rancor, communities of discourse where meaningful conversation can be cultivated. These would probably remain small communities, but their success would be no small thing.

__________________________________

Update: After publishing, I read Nick Carr’s post on the revival of blogs and the decline of Big Internet. “So, yeah, I’m down with this retro movement,” Carr writes, “Bring back personal blogs. Bring back RSS. Bring back the fun. Screw Big Internet.” I thought that was good news in light of my closing paragraph.

And, just in case you need more by way of diagnosis, there’s this: “A Second Look At The Giant Garbage Pile That Is Online Media, 2014.”