The Technological Origins of Protestantism, or the Martin Luther Tech Myth

[Caveat lector: the first part of the title is a bit too grandiose for what follows. Also, this post addresses the relationship between technology and religion, more specifically, the relationship between technology and Protestant Christianity. This may narrow the audience, but I suspect there is something of interest here for most readers. Finally, big generalizations ahead. Carry on.]

This year marks the 500th anniversary of the start of the Protestant Reformation. The traditional date marking the beginning of the Reformation is October 31, 1517. It was on that day, All Hallows’ (or All Saints’) Eve, that Martin Luther posted his famous Ninety-five Theses on a church door in Wittenberg. It is fair to say that no one then present, including Luther, had any idea of the magnitude of what was to follow.

Owing to the anniversary, you might encounter a slew of new books about Luther, the Reformation(s), and its consequences. You might stumble across a commemorative Martin Luther Playmobil set. You might even learn about a church in Wittenberg which has deployed a robot named … wait for it … BlessU-2 to dispense blessings and Bible verses to visitors (free of charge, Luther would have been glad to know).

Then, of course, there are the essays and articles in popular journals and websites, and, inevitably, the clever takes that link Luther to contemporary events. Take, for example, this piece in Foreign Policy arguing that Luther was the Donald Trump of 1517. The subtitle goes on to claim that “if the leader of the reformation could have tweeted the 95 theses, he would have.” I’ll get back to the subtitle in just a moment, but, let’s be clear, the comparison is ultimately absurd. Sure, there are some very superficial parallels one might draw, but even the author of the article understands their superficiality. Throughout the essay, he walks back and qualifies his claims. “But in the end Luther was a man of conscience,” he concedes, and that pretty much undermines the whole case.

But back to that line about tweeting the 95 theses. It is perhaps the most plausible claim in the whole piece, but, oddly, the author never elaborates further. I say that it is plausible not only because the theses are relatively short statements – roughly half of them could actually be tweeted (in their English translation, anyway) – and because one might say that in their day they went viral, but also because it trades on an influential myth that continues to inform how many Protestants view technology.
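(As a playful aside, that “roughly half” estimate is easy enough to check. Here is a minimal sketch, assuming one has English translations of the theses on hand; the two sample entries below are placeholders of my own choosing, not data drawn from the Foreign Policy piece.)

```python
# A back-of-the-envelope check of the "roughly half could be tweeted" claim.
# The two theses below (in a common English translation) are placeholders;
# a real check would load all ninety-five.
theses = [
    "When our Lord and Master Jesus Christ said, 'Repent,' he willed the "
    "entire life of believers to be one of repentance.",
    "This word cannot be understood as referring to the sacrament of penance, "
    "that is, confession and satisfaction, as administered by the clergy.",
    # ... the remaining ninety-three theses would go here ...
]

TWEET_LIMIT = 140  # the character limit in force in 2017

tweetable = [t for t in theses if len(t) <= TWEET_LIMIT]
print(f"{len(tweetable)} of {len(theses)} theses fit in {TWEET_LIMIT} characters")
```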

The myth, briefly stated in intentionally anachronistic terms, runs something like this. Martin Luther’s success was owed to his visionary embrace of a cutting-edge media technology, the printing press. While the Catholic church reacted with a moral panic about the religious and social consequences of easily accessible information and its inability to control it, Luther and his followers understood that information wanted to be free and institutions needed to be disrupted. And history testifies to the rightness of Luther’s attitude toward new technology.

In calling this story a myth, I don’t mean to suggest that it is altogether untrue. While the full account is more complicated, it is nonetheless true that Luther did embrace printing and appears to have understood its power. Indeed, under Luther’s auspices Wittenberg, an otherwise unremarkable university town, became one of the leading centers of printing in Europe. A good account of these matters can be found in Andrew Pettegree’s Brand Luther. “After Luther, print and public communication would never be the same again,” Pettegree rightly concludes. And it is probably safe to conclude as well that, apart from printing, the Reformation does not happen.

Instead, I use the word myth to mean a story, particularly a story of origins, which takes on a powerful explanatory and normative role in the life of a tradition or community. It is in this sense that we might speak of the Luther Tech Myth.

The problem with this myth is simple: it sanctions, indeed encourages, uncritical and unreflective adoption of technology. I might add that it also heightens the plausibility of Borg Complex claims: “churches* that do not adopt and adapt to new media will not survive,” etc.

For those who subscribe to the myth, intentionally or tacitly, this is not really a problem because the myth sustains and is sustained by certain unspoken premises regarding the nature of technology, particularly media technology: chiefly, that it is fundamentally neutral. They imagine that new media merely propagate the same message, only more effectively. It rarely occurs to them that new media may transform the message in a subtle but not inconsequential manner, that new media may smuggle another sort of message (or effect) with it, and that these may reconfigure the nature of the community, the practices of piety, and the content of the faith in ways they did not anticipate.

Let’s get back to Luther for a moment and take a closer look at the relationship between printing and Protestantism.

In The Reformation: A History, Oxford historian Diarmaid MacCulloch makes some instructive observations about printing. What is most notable about MacCulloch’s discussion is that it deals with the preparatory effect of printing in the years leading up to 1517. For example, citing historian Bernard Cottret, MacCulloch notes that “the increase in Bibles [in the half century prior to 1517] created the Reformation rather than being created by it.” A thesis that will certainly surprise many Protestants today, if there are any left. (More on that last, seemingly absurd clause shortly.)

A little further on, MacCulloch correctly observes that the “effect of printing was more profound than simply making more books available more quickly.” For one thing, it “affected western Europe’s assumptions about knowledge and originality of thought.” Manuscript culture is “conscious of the fragility of knowledge, and the need to preserve it,” fostering “an attitude that guards rather than spreads knowledge.” Manuscript culture is thus cautious, conservative, and pessimistic. On the other hand, the propensity toward decay is “much less obvious in the print medium: Optimism may be the mood rather than pessimism.” (A point on which MacCulloch cites the pioneering work of Elizabeth Eisenstein.) In other words, printing fostered a more daring cultural spirit that was conducive to the outbreak of a revolutionary movement of reform.

Finally, printing had already made it possible for reading to become “a more prominent part of religion for the laity.” Again, MacCulloch is not talking about the consequences of the Reformation; he is talking about the half century or so leading up to Luther’s break with Rome. Where reading became a more prominent feature of personal piety, “a more inward-looking, personalized devotion,” which is to say, anachronistically, a more characteristically Protestant devotion, emerged. “For someone who really delighted in reading,” MacCulloch adds, “religion might retreat out of the sphere of public ritual into the world of the mind and the imagination.”

“So,” MacCulloch concludes, “without any hint of doctrinal deviation, a new style of piety arose in that increasingly large section of society that valued book learning for both profit and pleasure.” This increasingly large section of the population “would form a ready audience for the Protestant message, with its contempt for so much of the old ritual of worship and devotion.”

All of this, then, is to say that Protestantism is as much an effect of the technology of printing as it is a movement that seized upon the new technology to spread its message. (I suspect, as an aside, that this story, which is obviously more complicated than the sketch I’m providing here, would be an important element in Alan Jacobs’ project of uncovering the technological history of modernity.)

A few more thoughts before we wrap up; bear with me. Let’s consider the idea of “a new style of piety,” which preceded and sustained momentous doctrinal and ecclesial developments. This phrase is useful insofar as it pairs nicely with the old maxim: Lex orandi, lex credendi (the law of prayer is the law of belief). The idea is that as the church worships so it believes, or that in some sense worship precedes and constitutes belief. To put it another way, we might say that the worship of the church constitutes the plausibility structures of its faith. To speak of a “new style of piety,” then, is to speak of a set of practices for worship, both in its communal forms and in its private forms. These new practices are, accordingly, a new form of worship that may potentially reconfigure the church’s faith. This is important to our discussion insofar as practices of worship have a critical material/technological dimension. Bottom line: shifts in the material/technological artifacts and conditions of worship potentially restructure the form and practices of worship, which in turn may potentially reconfigure what is believed.

Of course, it is not only a matter of how print prepares the ground for Protestantism; it is also a matter of how Protestantism evolves in tandem with print. Protestantism is a religion of the book. Its piety is centered on the book; the sacred text, of course, but also the tide of books that become aids to spirituality, displacing icons, crucifixes, statues, relics, and the panoply of ritual gestures that enlisted the body in the service of spiritual formation. The pastor-scholar becomes the model minister. Faith becomes both a more individual affair and a more private matter. On the whole, it takes on a more intellectualist cast. Its devotion is centered on correct belief rather than veneration. Its instruction is traditionally catechetical. Etc.

This brings us back to the Luther Tech Myth and whether or not there are any Protestants left. The myth is misleading because it oversimplifies a more complicated history, and the oversimplification obscures the degree to which new media technology is not neutral but rather formative.

Henry Jenkins has made an observation that I come back to frequently: “I often tell students that the history of new media has been shaped again and again by four key innovative groups — evangelists, pornographers, advertisers, and politicians, each of whom is constantly looking for new ways to interface with their public.”

The evangelists Jenkins refers to are evangelical Christians in the United States, who are descended from Luther and his fellow reformers. Jenkins is right. Evangelicals have been, as a rule, quick to adopt and adapt new media technologies to spread their message. In doing so, however, they have also been transformed by the tools they have implemented and deployed, from radio to television to the Internet. The reason for this is simple: new styles of piety that arise from new media generate new assumptions about community and authority and charisma (in the theological and sociological sense), and they alter the status and content of belief.

And for this reason traditional Protestantism is an endangered species. Even within theologically conservative branches of American Protestantism, it is rare to find the practice of traditional forms of Protestant piety. Naturally, this should not necessarily be read as a lament. It is, rather, an argument about the consequences of technological change and an encouragement to think more carefully about the adoption and implementation of new technology.

 

______________________________________________________

*  I hesitate to add mosques and synagogues only because I do not believe myself to be sufficiently informed to do so and also because they are obviously not within the traditions shaped by the life and work of Martin Luther. Jewish and Muslim readers, please feel free to add your perspectives about attitudes to technology in your communities in the comments below.



Algorithms Who Art in Apps, Hallowed Be Thy Code

If you want to understand the status of algorithms in our collective imagination, Ian Bogost proposes the following exercise in his recent essay in the Atlantic: “The next time you see someone talking about algorithms, replace the term with ‘God’ and ask yourself if the sense changes any?”
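(For what it’s worth, the exercise Bogost proposes is mechanical enough to automate. Here is a toy sketch; the example sentence is my own invention, not anything drawn from his essay.)

```python
import re

# Bogost's substitution exercise, mechanized: swap "(the) algorithm(s)" for "God"
# and see whether the sense of the sentence changes. The sample sentence below is
# a hypothetical illustration, not a quotation from Bogost.
sentence = "The algorithm decides what news we see."
swapped = re.sub(r"\b[Tt]he algorithms?\b|\b[Aa]lgorithms?\b", "God", sentence)
print(swapped)  # -> "God decides what news we see."
```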

If Bogost is right, then more often than not you will find the sense of the statement entirely unchanged. This is because, in his view, “Our supposedly algorithmic culture is not a material phenomenon so much as a devotional one, a supplication made to the computers we have allowed to replace gods in our minds, even as we simultaneously claim that science has made us impervious to religion.” Bogost goes on to say that this development is part of a “larger trend” whereby “Enlightenment ideas like reason and science are beginning to flip into their opposites.” Science and technology, he fears, “have turned into a new type of theology.”

It’s not the algorithms themselves that Bogost is targeting; it is how we think and talk about them that worries him. In fact, Bogost’s chief concern is that how we talk about algorithms is impeding our ability to think clearly about them and their place in society. This is where the god-talk comes in. Bogost deploys a variety of religious categories to characterize the present fascination with algorithms.

Bogost believes “algorithms hold a special station in the new technological temple because computers have become our favorite idols.” Later on he writes, “the algorithmic metaphor gives us a distorted, theological view of computational action.” Additionally, “Data has become just as theologized as algorithms, especially ‘big data,’ whose name is meant to elevate information to the level of celestial infinity.” “We don’t want an algorithmic culture,” he concludes, “especially if that phrase just euphemizes a corporate theocracy.” The analogy to religious belief is a compelling rhetorical move. It vividly illuminates Bogost’s key claim: the idea of an “algorithm” now functions as a metaphor that conceals more than it reveals.

He prepares the ground for this claim by reminding us of earlier technological metaphors that ultimately obscured important realities. The metaphor of the mind as computer, for example, “reaches the rank of religious fervor when we choose to believe, as some do, that we can simulate cognition through computation and achieve the singularity.” Similarly, the metaphor of the machine, which is really to say the abstract idea of a machine, yields a profound misunderstanding of mechanical automation in the realm of manufacturing. Bogost reminds us that bringing consumer goods to market still “requires intricate, repetitive human effort.” Manufacturing, as it turns out, “isn’t as machinic nor as automated as we think it is.”

Likewise, the idea of an algorithm, as it is bandied about in public discourse, is a metaphorical abstraction that obscures how various digital and analog components, including human action, come together to produce the effects we carelessly attribute to algorithms. Near the end of the essay, Bogost sums it up this way:

“the algorithm has taken on a particularly mythical role in our technology-obsessed era, one that has allowed it to wear the garb of divinity. Concepts like ‘algorithm’ have become sloppy shorthands, slang terms for the act of mistaking multipart complex systems for simple, singular ones. Of treating computation theologically rather than scientifically or culturally.”

But why does any of this matter? It matters, Bogost insists, because this way of thinking blinds us in two important ways. First, our sloppy shorthand “allows us to chalk up any kind of computational social change as pre-determined and inevitable,” allowing the perpetual deflection of responsibility for the consequences of technological change. The apotheosis of the algorithm encourages what I’ve elsewhere labeled a Borg Complex, an attitude toward technological change aptly summed up by the phrase, “Resistance is futile.” It’s a way of thinking about technology that forecloses the possibility of thinking about and taking responsibility for our choices regarding the development, adoption, and implementation of new technologies. Second, Bogost rightly fears that this “theological” way of thinking about algorithms may cause us to forget that computational systems can offer only one, necessarily limited perspective on the world. “The first error,” Bogost writes, “turns computers into gods, the second treats their outputs as scripture.”

______________________

Bogost is right to challenge the quasi-religious reverence sometimes exhibited toward technology. It is, as he fears, an impediment to clear thinking. Indeed, he is not the only one calling for the secularization of our technological endeavors. Jaron Lanier has spoken at length about the introduction of religious thinking into the field of AI. In a recent interview, Lanier expressed his concerns this way:

“There is a social and psychological phenomenon that has been going on for some decades now:  A core of technically proficient, digitally-minded people reject traditional religions and superstitions. They set out to come up with a better, more scientific framework. But then they re-create versions of those old religious superstitions! In the technical world these superstitions are just as confusing and just as damaging as before, and in similar ways.”

While Lanier’s concerns are similar to Bogost’s, it may be worth noting that Lanier’s use of religious categories is rather more concrete. As far as I can tell, Bogost deploys a religious frame as a rhetorical device, and rather effectively so. Lanier’s criticisms, however, have been aroused by religiously intoned expressions of a desire for transcendence voiced by denizens of the tech world themselves.

But such expressions are hardly new, nor are they relegated to the realm of AI. In The Religion of Technology: The Divinity of Man and the Spirit of Invention, David Noble rightly insisted that “modern technology and modern faith are neither complements nor opposites, nor do they represent succeeding stages of human development. They are merged, and always have been, the technological enterprise being, at the same time, an essentially religious endeavor.”

So that no one would misunderstand his meaning, he added,

“This is not meant in a merely metaphorical sense, to suggest that technology is similar to religion in that it evokes religious emotions of omnipotence, devotion, and awe, or that it has become a new (secular) religion in and of itself, with its own clerical caste, arcane rituals, and articles of faith. Rather it is meant literally and historically, to indicate that modern technology and religion have evolved together and that, as a result, the technological enterprise has been and remains suffused with religious belief.”

Along with chapters on the space program, atomic weapons, and biotechnology, Noble devoted a chapter to the history of AI, titled “The Immortal Mind.” Noble found that AI research had often been inspired by a curious fixation on the achievement of god-like, disembodied intelligence as a step toward personal immortality. Many of the sentiments and aspirations that Noble identifies in figures as diverse as George Boole, Claude Shannon, Alan Turing, Edward Fredkin, Marvin Minsky, Daniel Crevier, Danny Hillis, and Hans Moravec–all of them influential theorists and practitioners in the development of AI–find their consummation in the Singularity movement. The movement envisions a time, 2045 is frequently suggested, when the distinction between machines and humans will blur and humanity as we know it will be eclipsed. Before Ray Kurzweil, the chief prophet of the Singularity, wrote about “spiritual machines,” Noble had astutely anticipated how the trajectories of AI, Internet, Virtual Reality, and Artificial Life research were all converging on the age-old quest for the immortal life. Noble, who died in 2010, must have read the work of Kurzweil and company as a remarkable validation of his thesis in The Religion of Technology.

Interestingly, the sentiments that Noble documented alternated between the heady thrill of creating non-human Minds and non-human Life, on the one hand, and, on the other, the equally heady thrill of pursuing the possibility of radical life-extension and even immortality. Frankenstein meets Faust, we might say. Humanity plays god in order to bestow god’s gifts on itself. Noble cites one Artificial Life researcher who explains, “I feel like God; in fact, I am God to the universes I create,” and another who declares, “Technology will soon enable human beings to change into something else altogether [and thereby] escape the human condition.” Ultimately, these two aspirations come together into a grand techno-eschatological vision, expressed here by Hans Moravec:

“Our speculation ends in a supercivilization, the synthesis of all solar system life, constantly improving and extending itself, spreading outward from the sun, converting non-life into mind …. This process might convert the entire universe into an extended thinking entity … the thinking universe … an eternity of pure cerebration.”

Little wonder that Pamela McCorduck, who has been chronicling the progress of AI since the early 1980s, can say, “The enterprise is a god-like one. The invention–the finding within–of gods represents our reach for the transcendent.” And, lest we forget where we began, a more earth-bound but no less eschatological hope was expressed by Edward Fredkin in his MIT and Stanford courses on “saving the world.” He hoped for a “global algorithm” that “would lead to peace and harmony.” I would suggest that similar aspirations are expressed by those who believe that Big Data will yield a God’s-eye view of human society, providing wisdom and guidance that would be otherwise inaccessible to ordinary human forms of knowing and thinking.

Perhaps this should not be altogether surprising. As the old saying has it, the Grand Canyon wasn’t formed by someone dragging a stick. This is just a way of saying that causes must be commensurate to the effects they produce. Grand technological projects such as space flight, the harnessing of atomic energy, and the pursuit of artificial intelligence are massive undertakings requiring stupendous investments of time, labor, and resources. What kind of motives are sufficient to generate those sorts of expenditures? You’ll need something more than whim, to put it mildly. You may need something akin to religious devotion. Would we have attempted to put a man on the moon apart from the ideological frame provided by the Cold War, which cast space exploration as a field of civilizational battle for survival? Consider, as a more recent example, what drives Elon Musk’s pursuit of interplanetary space travel.

______________________

Without diminishing the criticisms offered by either Bogost or Lanier, Noble’s historical investigation into the roots of divinized or theologized technology reminds us that the disorder runs much deeper than we might initially imagine. Noble’s own genealogy traces the origin of the religion of technology to the turn of the first millennium. It emerges out of a volatile mix of millenarian dreams, apocalyptic fervor, mechanical innovation, and monastic piety. Its evolution proceeds apace through the Renaissance, finding one of its most ardent prophets in the Elizabethan statesman Francis Bacon. Even through the Enlightenment, the religion of technology flourished. In fact, the Enlightenment may have been a decisive moment in the history of the religion of technology.

In the essay with which we began, Ian Bogost framed the emergence of techno-religious thinking as a departure from the ideals of reason and science associated with the Enlightenment. This is not altogether incidental to Bogost’s argument. When he talks about the “theological” thinking that plagues our understanding of algorithms, Bogost is not working with a neutral, value-free, all-purpose definition of what constitutes the religious or the theological; there’s almost certainly no such definition available. It wouldn’t be too far from the mark, I think, to say that Bogost is working with what we might classify as an Enlightenment understanding of Religion, one that characterizes it as Reason’s Other, i.e. as a-rational if not altogether irrational, superstitious, authoritarian, and pernicious. For his part, Lanier appears to be working with similar assumptions.

Noble’s work complicates this picture, to say the least. The Enlightenment did not, as it turns out, vanquish Religion, driving it far from the pure realms of Science and Technology. In fact, to the degree that the radical Enlightenment’s assault on religious faith was successful, it empowered the religion of technology. To put this another way, the Enlightenment–and, yes, we are painting with broad strokes here–did not do away with the notions of Providence, Heaven, and Grace. Rather, the Enlightenment re-named these Progress, Utopia, and Technology respectively. To borrow a phrase, the Enlightenment immanentized the eschaton. If heaven had been understood as a transcendent goal achieved with the aid of divine grace within the context of the providentially ordered unfolding of human history, it became a Utopian vision, a heaven on earth, achieved by the ministrations of Science and Technology within the context of Progress, an inexorable force driving history toward its Utopian consummation.

As historian Leo Marx has put it, the West’s “dominant belief system turned on the idea of technical innovation as a primary agent of progress.” Indeed, the further Western culture proceeded down the path of secularization as it is traditionally understood, the greater the emphasis on technology as the principal agent of change. Marx observed that by the late nineteenth century, “the simple republican formula for generating progress by directing improved technical means to societal ends was imperceptibly transformed into a quite different technocratic commitment to improving ‘technology’ as the basis and the measure of — as all but constituting — the progress of society.”

When the prophets of the Singularity preach the gospel of transhumanism, they are not abandoning the Enlightenment heritage; they are simply embracing its fullest expression. As Bruno Latour has argued, modernity has never perfectly sustained the purity of the distinctions that were the self-declared hallmarks of its own superiority. Modernity characterized itself as a movement of secularization and differentiation, what Latour, with not a little irony, labels processes of purification. Science, politics, law, religion, ethics–these are all sharply distinguished and segregated from one another in the modern world, distinguishing it from the primitive pre-modern world. But it turns out that these spheres of human experience stubbornly resist the neat distinctions modernity sought to impose. Hybridization unfolds alongside purification, and Noble’s work has demonstrated how technology, sometimes reckoned the most coldly rational of human projects, is deeply contaminated by religion, often regarded by the same people as the most irrational of human projects.

But not just any religion. Earlier I suggested that when Bogost characterizes our thinking about algorithms as “theological,” he is almost certainly assuming a particular kind of theology. This is why it is important to classify the religion of technology more precisely as a Christian heresy. It is in Western Christianity that Noble found the roots of the religion of technology, and it is in the context of a post-Christian world that it has presently flourished.

It is Christian insofar as its aspirations are like those nurtured by the Christian faith, such as the conscious persistence of a soul after the death of the body. Noble cites Daniel Crevier, who, referencing the “Judeo-Christian tradition,” suggested that “religious beliefs, and particularly the belief in survival after death, are not incompatible with the idea that the mind emerges from physical phenomena.” This is noted on the way to explaining that a machine-based material support could be found for the mind, which leads Noble to quip, “Christ was resurrected in a new body; why not a machine?” Reporting on his study of the famed Santa Fe Institute in Santa Fe, New Mexico, anthropologist Stefan Helmreich observed, “Judeo-Christian stories of the creation and maintenance of the world haunted my informants’ discussions of why computers might be ‘worlds’ or ‘universes,’ …. a tradition that includes stories from the Old and New Testaments (stories of creation and salvation).”

It is a heresy insofar as it departs from traditional Christian teaching regarding the givenness of human nature, the moral dimensions of humanity’s brokenness, the gracious agency of God in the salvation of humanity, and the resurrection of the body, to name a few. Having said as much, it would seem that one could perhaps conceive of the religion of technology as an imaginative account of how God might fulfill purposes that were initially revealed in incidental, pre-scientific garb. In other words, we might frame the religion of technology not so much as a Christian heresy, but rather as (post-)Christian fan-fiction, an elaborate imagining of how the hopes articulated by the Christian faith will materialize as a consequence of human ingenuity in the absence of divine action.

______________________

Near the end of The Religion of Technology, David Noble forcefully articulated the dangers posed by a blind faith in technology. “Lost in their essentially religious reveries,” Noble warned, “the technologists themselves have been blind to, or at least have displayed blithe disregard for, the harmful ends toward which their work has been directed.” Citing another historian of technology, Noble added, “The religion of technology, in the end, ‘rests on extravagant hopes which are only meaningful in the context of transcendent belief in a religious God, hopes for a total salvation which technology cannot fulfill …. By striving for the impossible, [we] run the risk of destroying the good life that is possible.’ Put simply, the technological pursuit of salvation has become a threat to our survival.” I suspect that neither Bogost nor Lanier would disagree with Noble on this score.

There is another significant point at which the religion of technology departs from its antecedent: “The millenarian promise of restoring mankind to its original Godlike perfection–the underlying premise of the religion of technology–was never meant to be universal.” Instead, the salvation it promises is limited finally to the very few who will be able to afford it; it is for neither the poor nor the weak. Nor, it would seem, is it for those who have found a measure of joy or peace or beauty within the bounds of the human condition as we now experience it, frail as it may be.

Lastly, it is worth noting that the religion of technology appears to have no doctrine of final judgment. This is not altogether surprising given that, as Bogost warned, the divinizing of technology carries the curious effect of absolving us of responsibility for the tools that we fashion and the uses to which they are put.

I have no neat series of solutions to tie all of this up; rather I will give the last word to Wendell Berry:

“To recover from our disease of limitlessness, we will have to give up the idea that we have a right to be godlike animals, that we are potentially omniscient and omnipotent, ready to discover ‘the secret of the universe.’ We will have to start over, with a different and much older premise: the naturalness and, for creatures of limited intelligence, the necessity, of limits. We must learn again to ask how we can make the most of what we are, what we have, what we have been given.”



Wizard or God, Which Would You Rather Be?

Occasionally, I ask myself whether or not I’m really on to anything when I publish the “thinking out loud” that constitutes most of the posts on this blog. And occasionally the world answers back, politely, “Yes, yes you are.”

A few months ago, in a post on automation and smart homes, I ventured an off-the-cuff observation: the smart home populated and animated by the Internet of Things amounted to a re-enchantment of the world by technological means. I further elaborated that hypothesis in a subsequent post:

“So then, we have three discernible stages–mechanization, automation, animation–in the technological enchantment of the human-built world. The technological enchantment of the human-built world is the unforeseen consequence of the disenchantment of the natural world described by sociologists of modernity, Max Weber being the most notable. These sociologists claimed that modernity entailed the rationalization of the world and the purging of mystery, but they were only partly right. It might be better to say that the world was not so much disenchanted as it was differently enchanted. This displacement and redistribution of enchantment may be just as important a factor in shaping modernity as the putative disenchantment of nature.

In an offhand, stream-of-consciousness aside, I ventured that the allure of the smart-home, and similar technologies, arose from a latent desire to re-enchant the world. I’m doubling-down on that hypothesis. Here’s the working thesis: the ongoing technological enchantment of the human-built world is a corollary of the disenchantment of the natural world. The first movement yields the second, and the two are interwoven. To call this process of technological animation an enchantment of the human-built world is not merely a figurative post-hoc gloss on what has actually happened. Rather, the work of enchantment has been woven into the process all along.”

Granted, those were fairly strong claims that have yet to be thoroughly substantiated, but here’s a small bit of evidence suggesting that my little thesis has some merit. It is a short video clip from the NY Times’ Technology channel about the Internet of Things in which “David Rose, the author of ‘Enchanted Objects,’ sees a future where we can all live like wizards.” Emphasis mine, of course.

I had some difficulty embedding the video, so you’ll have to click over to watch it here: The Internet of Things. Really, you should. It’ll take less than three minutes of your time.

So, there was that. Because, apparently, the Internet today felt like reinforcing my quirky thoughts about technology, there was also this on the same site: Playing God Games.

That video segment clocks in at just under two minutes. If you click through to watch, you’ll note that it is a brief story about apps that allow you to play a deity in your own virtual world, with your very own virtual “followers.”

You can read that in light of my more recent musings about the appeal of games in which our “action,” and by extension we ourselves, seem to matter.

Perhaps, then, this is the more modest shape the religion of technology takes in the age of simulation and diminished expectations: you may play the wizard in your re-enchanted smart home or you may play a god in a virtual world on your smartphone. I suspect this is not what Stewart Brand had in mind when he wrote, “We are as gods and might as well get good at it.”

Simulated Futures

There’s a lot of innovation talk going on right now, or maybe it is just that I’ve been more attuned to it of late. Either way, I keep coming across pieces that tackle the topic of technological innovation from a variety of angles.

While not narrowly focused on technological innovation, this wonderfully discursive post by Alan Jacobs raises a number of relevant considerations. Jacobs ranges far and wide, so I won’t try to summarize his thoughts here. You should read the whole piece, but here is the point I want to highlight. Taking a 2012 essay by David Graeber as his point of departure, Jacobs asks us to consider the following:

“How were we taught not even to dream of flying cars and jetpacks? — or, for that matter, an end to world hunger, something that C. P. Snow, in his famous lecture on ‘the two cultures’ of the sciences and humanities, saw as clearly within our grasp more than half-a-century ago? To see ‘sophisticated simulations’ of the things we used to hope we’d really achieve as good enough?”

Here’s the relevant passage in Graeber’s essay. After watching one of the more recent Star Wars films, he wonders how impressed audiences of the older, fifties-era sci-fi films would be with the special effects. His answer upon reflection: not very. Why? Because “they thought we’d be doing this kind of thing by now. Not just figuring out more sophisticated ways to simulate it.” Graeber goes on to add,

“That last word—simulate—is key. The technologies that have advanced since the seventies are mainly either medical technologies or information technologies—largely, technologies of simulation. They are technologies of what Jean Baudrillard and Umberto Eco called the ‘hyper-real,’ the ability to make imitations that are more realistic than originals. The postmodern sensibility, the feeling that we had somehow broken into an unprecedented new historical period in which we understood that there is nothing new; that grand historical narratives of progress and liberation were meaningless; that everything now was simulation, ironic repetition, fragmentation, and pastiche—all this makes sense in a technological environment in which the only breakthroughs were those that made it easier to create, transfer, and rearrange virtual projections of things that either already existed, or, we came to realize, never would.”

Here again is the theme of technological stagnation, of the death of genuine innovation. You can read the rest of Graeber’s piece for his own theories about the causes of this stagnation. What interested me was the suggestion that we’ve swapped genuine innovation for simulations. Of course, this interested me chiefly because it seems to reinforce and expand a point I made in yesterday’s post, that our fascination with virtual worlds may stem from the failure of our non-virtual world to yield the kind of possibilities for meaningful action that human beings crave.

As our hopes for the future seem to recede, our simulations of that future become ever more compelling.

Elsewhere, Lee Billings reports on his experience at the 2007 Singularity Summit:

“Over vegetarian hors d’oeuvres and red wine at a Bay Area villa, I had chatted with the billionaire venture capitalist Peter Thiel, who planned to adopt an ‘aggressive’ strategy for investing in a ‘positive’ Singularity, which would be ‘the biggest boom ever,’ if it doesn’t first ‘blow up the whole world.’ I had talked with the autodidactic artificial-intelligence researcher Eliezer Yudkowsky about his fears that artificial minds might, once created, rapidly destroy the planet. At one point, the inventor-turned-proselytizer Ray Kurzweil teleconferenced in to discuss, among other things, his plans for becoming transhuman, transcending his own biology to achieve some sort of eternal life. Kurzweil believes this is possible, even probable, provided he can just live to see the Singularity’s dawn, which he has pegged at sometime in the middle of the 21st century. To this end, he reportedly consumes some 150 vitamin supplements a day.”

Billings also noted that many of his conversations at the conference “carried a cynical sheen of eschatological hucksterism: Climb aboard, don’t delay, invest right now, and you, too, may be among the chosen who rise to power from the ashes of the former world!”

Eschatological hucksterism … well put, indeed. That’s a phrase I’ll be tucking away for future use.

And that leads me to the concluding chapter of David Noble’s The Religion of Technology: The Divinity of Man and the Spirit of Invention. After surveying the religiously infused motives and rhetoric animating technological projects as diverse as the pursuit of AI, space exploration, and genetic engineering, Noble wrote:

“As we have seen, those given to such imaginings are in the vanguard of technological development, amply endowed and in every way encouraged to realize their escapist fantasies. Often displaying a pathological dissatisfaction with, and deprecation of, the human condition, they are taking flight from the world, pointing us away from the earth, the flesh, the familiar–‘offering salvation by technical fix,’ in Mary Midgley’s apt description–all the while making the world over to conform to their vision of perfection.”

A little further on he concluded,

“Can we any longer afford to abide this system of blind belief? Ironically, the technological enterprise upon which we now ever more depend for the preservation and enlargement of our lives betrays a disdainful disregard for, indeed an impatience with, life itself. If dreams of technological escape from the burdens of mortality once translated into some relief of the human estate, the pursuit of technological transcendence has now perhaps outdistanced such earthly ends. If the religion of technology once fostered visions of social renovation, it also fueled fantasies of escaping society altogether. Today these bolder imaginings have gained sway, according to which, as one philosopher of technology recently observed, ‘everything which exists at present … is deemed disposable.’ The religion of technology, in the end, ‘rests on extravagant hopes which are only meaningful in the context of transcendent belief in a religious God, hopes for a total salvation which technology cannot fulfill …. By striving for the impossible, [we] run the risk of destroying the good life that is possible.’ Put simply, the technological pursuit of salvation has become a threat to our survival.”

I’ll leave you with that.

Cathedrals, Pyramids, or iPhones: Toward a Very Tentative Theory of Technological Innovation

A couple of years back, while I was on my World’s Fair kick, I wrote a post or two (or three) about how we imagine the future, or, rather, how we fail to imagine the future. The World’s Fairs, particularly those held between the 1930s and ’70s, offered a rather grand and ambitious vision for what the future would hold. Granted, much of what made up that vision never quite materialized, and much of it now seems a tad hokey. Additionally, much of it amounted to a huge corporate ad campaign. Nevertheless, the imagined future was impressive in its scope; it was utopian. The three posts linked above each suggested that, relative to the World’s Fairs of the mid-20th century, we seem to have a rather impoverished imagination when it comes to the future.

One of those posts cited a 2011 essay by Peter Thiel, “The End of the Future,” outlining the sources of Thiel’s pessimism about the rate of technological advance. More recently, Dan Wang has cataloged a series of public statements by Thiel supporting his contention that technological innovation has slowed, and dangerously so. Thiel, who made his mark and his fortune as a founder of PayPal, has emerged over the last few years as one of Silicon Valley’s leading intellectuals. His pessimism, then, seems to run against the grain of his milieu. Thiel, however, is not pessimistic about the potential of technology itself; rather, as I understand him, he is critical of our inability to more boldly imagine what we could do with technology. His view is neatly summed up in his well-known quip, “We wanted flying cars, instead we got 140 characters.”

Thiel is not the only one who thinks that we’ve been beset by a certain gloomy malaise when it comes to imagining the future. Last week, in the pages of the New York Times Magazine, Jayson Greene wondered, with thinly veiled exasperation, why contemporary science fiction is so “glum” about AI. The article is a bit muddled at points–perhaps because the author, noting the assistance of his machines, believes it is not even half his–but it registers what seems to be an increasingly recurring complaint. Just last month, for instance, I noted a similar article in Wired that urged authors to stop writing dystopian science fiction. Behind each of these pieces there lies an implicit question: Where has our ability to imagine a hopeful, positive vision for the future gone?

Kevin Kelly is wondering the same thing. In fact, he was willing to pay for someone to tell him a positive story about the future. I’ve long thought of Kelly as one of the most optimistic of contemporary tech writers, yet of late even he appears to be striking a more ambiguous note. Perhaps needing a fresh infusion of hope, he took to Twitter with this message:

“I’ll pay $100 for the best 100-word description of a plausible technological future in 100 years that I would like to live in. Email me.”

Kelly got 23 responses, and then he constructed his own 100-word vision for the future. It is instructive to read the submissions. By “instructive,” I mean intriguing, entertaining, disconcerting, and disturbing by turns. In fact, when I first read through them I thought I’d dedicate a post to analyzing these little techno-utopian vignettes. Suffice it to say, a few people, at least, are still nurturing an expansive vision for the future.

But are their stories the exceptions that prove the rule? To put it another way, is the dominant cultural zeitgeist dystopian or utopian with regards to the future? Of course, as C.S. Lewis once put it, “What you see and what you hear depends a great deal on where you are standing. It also depends on what sort of person you are.” Whatever the case may be, there certainly seem to be a lot of people who think the zeitgeist is dystopian or, at best, depressingly unimaginative. I’m not sure they are altogether wrong about this, even if the whole story is more complicated. So why might this be?

To be clear before proceeding down this line of inquiry, I’m not so much concerned with whether we ought to be optimistic or pessimistic about the future. (The answer in any case is neither.) I’m not, in other words, approaching this topic from a normative perspective. Rather, I want to poke and prod the zeitgeist a little bit to see if we can’t figure out what is going on. So, in that spirit, here are a few loosely organized thoughts.

First off, our culture is, in large measure, driven by consumerism. This, of course, is little more than a cliché, but it is no less true because of it. Consumerism is finally about the individual. Individual aspirations, by their very nature, tend to be narrow and short-sighted. It is as if the potential creative force of our collective imagination is splintered into the millions of individual wills it is made to serve.

David Nye noted this devolution of our technological aspirations in his classic work on the American technological sublime. The sublime experience that once attended our encounters with nature, and then our encounters with technological creations of awe-inspiring size and dynamism, has now given way to what Nye called the consumer sublime. “Unlike the Ford assembly line or Hoover Dam,” Nye explains, “Disneyland and Las Vegas have no use value. Their representations of sublimity and special effects are created solely for entertainment. Their epiphanies have no referents; they reveal not the existence of God, not the power of nature, not the majesty of human reason, but the titillation of representation itself.”

The consumer sublime, which Nye also calls an “egotistical sublime,” amounts to “an escape from the very work, rationality, and domination that once were embodied in the American technological sublime.”

Looking at the problem of consumerism from another vantage point, consider Nicholas Carr’s theory about the hierarchy of innovation. Carr’s point of departure included Peter Thiel’s complaint about the stagnation of technological innovation cited above. In response, Carr suggested that innovation proceeds along a path more or less parallel to Maslow’s famous hierarchy of human needs. We begin by seeking to satisfy very basic needs, those related to our survival. As those basic needs are met, we are able to think about more complex needs for social interaction, personal esteem, and self-actualization.

In Carr’s stimulating repurposing of Maslow’s hierarchy, technological innovation proceeds from technologies of survival to technologies of self-fulfillment. Carr doesn’t think that these levels of innovation are neatly realized in some clean, linear fashion. But he does think that at present the incentives, “monetary and reputational,” are, in a darkly eloquent phrasing, “bending the arc of innovation … toward decadence.” Away, that is, from grand, highly visible, transformative technologies.

The end game of this consumerist reduction of technological innovation may be what Ian Bogost recently called “future ennui.” “The excitement of a novel technology (or anything, really),” Bogost writes,

“has been replaced—or at least dampened—by the anguish of knowing its future burden. This listlessness might yet prove even worse than blind boosterism or cynical naysaying. Where the trauma of future shock could at least light a fire under its sufferers, future ennui exudes the viscous languor of indifferent acceptance. It doesn’t really matter that the Apple Watch doesn’t seem necessary, no more than the iPhone once didn’t too. Increasingly, change is not revolutionary, to use a word Apple has made banal, but presaged.”

Bogost adds, “When one is enervated by future ennui, there’s no vigor left even to ask if this future is one we even want.” The technological sublime, then, becomes the consumer sublime, which becomes future ennui. This is how technological innovation ends, not with a bang but a sigh.

The second point I want to make about the pessimistic zeitgeist centers on our Enlightenment inheritance. The Enlightenment bequeathed to us, among other things, two articles of faith. The first of these was the notion of inevitable moral progress, and the second was the notion of inevitable techno-scientific progress. Together they yielded what we tend to refer to simply as the Enlightenment’s notion of Progress. Together these articles of faith cultivate hope and incite action. Unfortunately, the two were sundered by the accumulation of tragedy and despair we call the twentieth century. Techno-scientific progress was a rosy notion so long as we imagined that moral progress advanced hand in hand with it. Techno-scientific progress decoupled from Enlightenment confidence in the perfectibility of humanity leaves us with the dystopian imagination.

Interestingly, the trajectory of the American World’s Fairs illustrates both of these points. Generally speaking, the World’s Fairs of the nineteenth and early twentieth century subsumed technology within their larger vision of social progress. By the 1930’s, the Fairs presented technology as the force upon which the realization of the utopian social vision depended. The 1939 New York Fair marked a turning point. It featured a utopian social vision powered by technological innovation. From that point forward, technological innovation increasingly became a goal in itself rather than a means toward a utopian society, and technological innovation was increasingly a consumer affair of diminishing scope.

That picture was painted in rather broad strokes, but I think it will bear scrutiny. Whether the illustration ultimately holds up or not, however, I certainly think the claim stands. The twentieth century shattered our collective optimism about human nature; consequently, empowering human beings with ever more powerful technologies became the stuff of nightmares rather than dreams.

Thirdly, technological innovation on a grand scale is an act of sublimation, and we are too self-knowing to sublimate. Let me lead into this discussion by acknowledging that this point may be too subtle to be true, so I offer it circumspectly. According to certain schools of psychology, sublimation describes the process by which we channel or redirect certain desires, often destructive or transgressive desires, into productive action. On this view, the great works of civilization are powered by sublimation. But, to borrow a line cited by the late Philip Rieff, “if you tell people how they can sublimate, they can’t sublimate.” In other words, sublimation is a tacit process. It is the by-product of a strong buy-in into cultural norms and ideals by which individual desire is subsumed into some larger purpose. It is the sort of dynamic, in other words, that conscious awareness hampers and that ironic detachment, our default posture toward reality, destroys. Make of that theory what you will.

The last point builds on all that I’ve laid out thus far and perhaps even ties it all together … maybe. I want to approach it by noting one segment of the wider conversation about technology where a big, positive vision for the future is nurtured: the Transhumanist movement. This should go without saying, but I’ll say it anyway just to put it beyond doubt: I don’t endorse the Transhumanist vision. By saying that it is a “positive” vision I am only saying that it is understood as a positive vision by those who adhere to it. Now, with that out of the way, here is the thing to recognize about the Transhumanist vision: its aspirations are quasi-religious in character.

I mean that in at least a couple of ways. For instance, it may be understood as a reboot of Gnosticism, particularly given its disparagement of the human body and its attendant limitations. Relatedly, it often aspires to a disembodied, virtual existence that sounds a lot like the immortality of the soul espoused by Western religions. It is in this way a movement focused on technologies of the self, that highest order of innovation in Carr’s pyramid; but rather than seeking technologies that are mere accouterments of the self, it pursues technologies that work on the self to push the self along to the next evolutionary plane. Paradoxically, then, technology in the Transhumanist vision works on the self to transcend the self as it now exists.

Consequently, the scope of the Transhumanist vision stems from the Transhumanist quest for transcendence. The technologies of the self that Carr had in mind were technologies centered on the existing, immanent self. Putting all of this together, then, we might say that technologies of the immanent self devolve into gadgets with ever diminishing returns–consumerist ephemera–yielding future ennui. The imagined technologies of the would-be transcendent self, however, are seemingly more impressive in their aims and inspire cultish devotion in those who hope for them. But they are still technologies of the self. That is to say, they are not animated by a vision of social scope nor by a project of political consequence. This lends the whole movement a certain troubling naiveté.

Perhaps it also ultimately limits technological innovation. Grand technological projects of the sort that people like Thiel and Kelly would like to see us at least imagine are animated by a culturally diffused vision, often religious or transcendent in nature, that channels individual action away from the conscious pursuit of immediate satisfaction.

The other alternative, of course, is coerced labor. Hold that thought.

I want to begin drawing this over-long post to a close by offering it as an overdue response to Pascal-Emmanuel Gobry’s discussion of Peter Thiel, the Church, and technological innovation. Gobry agreed with Thiel’s pessimism and lamented that the Church was not more active in driving technological innovation. He offered the great medieval cathedrals as an example of the sort of creation and innovation that the Church once inspired. I heartily endorse his estimation of the cathedrals as monumental works of astounding technical achievement, artistic splendor, and transcendent meaning. And, as Gobry notes, they were the first such monumental works not built on the back of forced labor.

For projects of that scale to succeed, individuals must either be animated by ideals that drive their willing participation or they must be forced by power or circumstance. In other words, cathedrals or pyramids. Cathedrals represent innovation born of freedom and transcendent ideals. The pyramids represent innovation born of forced labor and transcendent ideals.

The third alternative, of course, is the iPhone. I use the iPhone here to stand for consumer driven innovation. Innovation that is born of relative freedom (and forced labor) but absent a transcendent ideal to drive it beyond consumerist self-actualization. And that is where we are stuck, perhaps, with technological stagnation and future ennui.

But here’s the observation I want to leave you with. Our focus on technological innovation as the key to the future is a symptom of the problem; it suggests strongly that we are already compromised. The cathedrals were not built by people possessed merely of the desire to innovate. Technological innovation was a means to a culturally inspired end. [See the Adams quote below.] Insofar as we have reversed the relationship and allowed technological innovation to be our raison d’être, we may find it impossible to imagine a better future, much less bring it about. With regards to the future of society, if the answer we’re looking for is technological, then we’re not asking the right questions.

_____________________________________

You can read a follow-up piece here.

N.B. The initial version of this post referred to “slave” labor with regards to the pyramids. A reader pointed out to me that the pyramids were not built by slaves but by paid craftsmen. This prompted me to do a little research. It does indeed seem to be the case that “slaves,” given what we mean by the term, were not the primary source of labor on the pyramids. However, the distinction seems to me to be a fine one. These workers appear to have been subject to various degrees of “obligatory” labor, although they were also provided with food, shelter, and tax breaks. While not quite slave labor, it is not quite the labor of free people either. By contrast, you can read about the building of the cathedrals here. That said, I’ve revised the post to omit the references to slavery.

Update: Henry Adams knew something of the cultural vision at work in the building of the cathedrals. Note the last line, especially:

“The architects of the twelfth and thirteenth centuries took the Church and the universe for truths, and tried to express them in a structure which should be final.  Knowing by an enormous experience precisely where the strains were to come, they enlarged their scale to the utmost point of material endurance, lightening the load and distributing the burden until the gutters and gargoyles that seem mere ornament, and the grotesques that seem rude absurdities, all do work either for the arch or for the eye; and every inch of material, up and down, from crypt to vault, from man to God, from the universe to the atom, had its task, giving support where support was needed, or weight where concentration was felt, but always with the condition of showing conspicuously to the eye the great lines which led to unity and the curves which controlled divergence; so that, from the cross on the flèche and the keystone of the vault, down through the ribbed nervures, the columns, the windows, to the foundation of the flying buttresses far beyond the walls, one idea controlled every line; and this is true of St. Thomas’ Church as it is of Amiens Cathedral.  The method was the same for both, and the result was an art marked by singular unity, which endured and served its purpose until man changed his attitude toward the universe.”