Algorithms Who Art in Apps, Hallowed Be Thy Code

If you want to understand the status of algorithms in our collective imagination, Ian Bogost proposes the following exercise in his recent essay in the Atlantic: “The next time you see someone talking about algorithms, replace the term with ‘God’ and ask yourself if the sense changes any?”

If Bogost is right, then more often than not you will find the sense of the statement entirely unchanged. This is because, in his view, “Our supposedly algorithmic culture is not a material phenomenon so much as a devotional one, a supplication made to the computers we have allowed to replace gods in our minds, even as we simultaneously claim that science has made us impervious to religion.” Bogost goes on to say that this development is part of a “larger trend” whereby “Enlightenment ideas like reason and science are beginning to flip into their opposites.” Science and technology, he fears, “have turned into a new type of theology.”

It’s not the algorithms themselves that Bogost is targeting; it is how we think and talk about them that worries him. In fact, Bogost’s chief concern is that how we talk about algorithms is impeding our ability to think clearly about them and their place in society. This is where the god-talk comes in. Bogost deploys a variety of religious categories to characterize the present fascination with algorithms.

Bogost believes “algorithms hold a special station in the new technological temple because computers have become our favorite idols.” Later on he writes, “the algorithmic metaphor gives us a distorted, theological view of computational action.” Additionally, “Data has become just as theologized as algorithms, especially ‘big data,’ whose name is meant to elevate information to the level of celestial infinity.” “We don’t want an algorithmic culture,” he concludes, “especially if that phrase just euphemizes a corporate theocracy.” The analogy to religious belief is a compelling rhetorical move. It vividly illuminates Bogost’s key claim: the idea of an “algorithm” now functions as a metaphor that conceals more than it reveals.

He prepares the ground for this claim by reminding us of earlier technological metaphors that ultimately obscured important realities. The metaphor of the mind as computer, for example, “reaches the rank of religious fervor when we choose to believe, as some do, that we can simulate cognition through computation and achieve the singularity.” Similarly, the metaphor of the machine, which is really to say the abstract idea of a machine, yields a profound misunderstanding of mechanical automation in the realm of manufacturing. Bogost reminds us that bringing consumer goods to market still “requires intricate, repetitive human effort.” Manufacturing, as it turns out, “isn’t as machinic nor as automated as we think it is.”

Likewise, the idea of an algorithm, as it is bandied about in public discourse, is a metaphorical abstraction that obscures how various digital and analog components, including human action, come together to produce the effects we carelessly attribute to algorithms. Near the end of the essay, Bogost sums it up this way:

“the algorithm has taken on a particularly mythical role in our technology-obsessed era, one that has allowed it to wear the garb of divinity. Concepts like ‘algorithm’ have become sloppy shorthands, slang terms for the act of mistaking multipart complex systems for simple, singular ones. Of treating computation theologically rather than scientifically or culturally.”

But why does any of this matter? It matters, Bogost insists, because this way of thinking blinds us in two important ways. First, our sloppy shorthand “allows us to chalk up any kind of computational social change as pre-determined and inevitable,” allowing the perpetual deflection of responsibility for the consequences of technological change. The apotheosis of the algorithm encourages what I’ve elsewhere labeled a Borg Complex, an attitude toward technological change aptly summed up by the phrase, “Resistance is futile.” It’s a way of thinking about technology that forecloses the possibility of thinking about and taking responsibility for our choices regarding the development, adoption, and implementation of new technologies. Secondly, Bogost rightly fears that this “theological” way of thinking about algorithms may cause us to forget that computational systems can offer only one, necessarily limited perspective on the world. “The first error,” Bogost writes, “turns computers into gods, the second treats their outputs as scripture.”

______________________

Bogost is right to challenge the quasi-religious reverence sometimes exhibited toward technology. It is, as he fears, an impediment to clear thinking. Indeed, he is not the only one calling for the secularization of our technological endeavors. Jaron Lanier has spoken at length about the introduction of religious thinking into the field of AI. In a recent interview, Lanier expressed his concerns this way:

“There is a social and psychological phenomenon that has been going on for some decades now:  A core of technically proficient, digitally-minded people reject traditional religions and superstitions. They set out to come up with a better, more scientific framework. But then they re-create versions of those old religious superstitions! In the technical world these superstitions are just as confusing and just as damaging as before, and in similar ways.”

While Lanier’s concerns are similar to Bogost’s, it may be worth noting that Lanier’s use of religious categories is rather more concrete. As far as I can tell, Bogost deploys a religious frame as a rhetorical device, and rather effectively so. Lanier’s criticisms, however, have been aroused by religiously intoned expressions of a desire for transcendence voiced by denizens of the tech world themselves.

But such expressions are hardly new, nor are they relegated to the realm of AI. In The Religion of Technology: The Divinity of Man and the Spirit of Invention, David Noble rightly insisted that “modern technology and modern faith are neither complements nor opposites, nor do they represent succeeding stages of human development. They are merged, and always have been, the technological enterprise being, at the same time, an essentially religious endeavor.”

So that no one would misunderstand his meaning, he added,

“This is not meant in a merely metaphorical sense, to suggest that technology is similar to religion in that it evokes religious emotions of omnipotence, devotion, and awe, or that it has become a new (secular) religion in and of itself, with its own clerical caste, arcane rituals, and articles of faith. Rather it is meant literally and historically, to indicate that modern technology and religion have evolved together and that, as a result, the technological enterprise has been and remains suffused with religious belief.”

Along with chapters on the space program, atomic weapons, and biotechnology, Noble devoted a chapter to the history of AI, titled “The Immortal Mind.” Noble found that AI research had often been inspired by a curious fixation on the achievement of god-like, disembodied intelligence as a step toward personal immortality. Many of the sentiments and aspirations that Noble identifies in figures as diverse as George Boole, Claude Shannon, Alan Turing, Edward Fredkin, Marvin Minsky, Daniel Crevier, Danny Hillis, and Hans Moravec–all of them influential theorists and practitioners in the development of AI–find their consummation in the Singularity movement. The movement envisions a time, 2045 is frequently suggested, when the distinction between machines and humans will blur and humanity as we know it will be eclipsed. Before Ray Kurzweil, the chief prophet of the Singularity, wrote about “spiritual machines,” Noble had astutely anticipated how the trajectories of AI, the Internet, Virtual Reality, and Artificial Life research were all converging on the age-old quest for immortal life. Noble, who died in 2010, must have read the work of Kurzweil and company as a remarkable validation of his thesis in The Religion of Technology.

Interestingly, the sentiments that Noble documented alternated between the heady thrill of creating non-human Minds and non-human Life, on the one hand, and, on the other, the equally heady thrill of pursuing the possibility of radical life-extension and even immortality. Frankenstein meets Faust, we might say. Humanity plays god in order to bestow god’s gifts on itself. Noble cites one Artificial Life researcher who explains, “I feel like God; in fact, I am God to the universes I create,” and another who declares, “Technology will soon enable human beings to change into something else altogether [and thereby] escape the human condition.” Ultimately, these two aspirations come together into a grand techno-eschatological vision, expressed here by Hans Moravec:

“Our speculation ends in a supercivilization, the synthesis of all solar system life, constantly improving and extending itself, spreading outward from the sun, converting non-life into mind …. This process might convert the entire universe into an extended thinking entity … the thinking universe … an eternity of pure cerebration.”

Little wonder that Pamela McCorduck, who has been chronicling the progress of AI since the early 1980s, can say, “The enterprise is a god-like one. The invention–the finding within–of gods represents our reach for the transcendent.” And, lest we forget where we began, a more earth-bound, but no less eschatological hope was expressed by Edward Fredkin in his MIT and Stanford courses on “saving the world.” He hoped for a “global algorithm” that “would lead to peace and harmony.” I would suggest that similar aspirations are expressed by those who believe that Big Data will yield a God’s-eye view of human society, providing wisdom and guidance that would be otherwise inaccessible to ordinary human forms of knowing and thinking.

Perhaps this should not be altogether surprising. As the old saying has it, the Grand Canyon wasn’t formed by someone dragging a stick. This is just a way of saying that causes must be commensurate to the effects they produce. Grand technological projects such as space flight, the harnessing of atomic energy, and the pursuit of artificial intelligence are massive undertakings requiring stupendous investments of time, labor, and resources. What kind of motives are sufficient to generate those sorts of expenditures? You’ll need something more than whim, to put it mildly. You may need something akin to religious devotion. Would we have attempted to put a man on the moon apart from the ideological frame provided by the Cold War, which cast space exploration as a field of civilizational battle for survival? Consider, as a more recent example, what drives Elon Musk’s pursuit of interplanetary space travel.

______________________

Without diminishing the criticisms offered by either Bogost or Lanier, Noble’s historical investigation into the roots of divinized or theologized technology reminds us that the roots of the disorder run much deeper than we might initially imagine. Noble’s own genealogy traces the origin of the religion of technology to the turn of the first millennium. It emerges out of a volatile mix of millenarian dreams, apocalyptic fervor, mechanical innovation, and monastic piety. Its evolution proceeds apace through the Renaissance, finding one of its most ardent prophets in the Elizabethan statesman, Francis Bacon. Even through the Enlightenment, the religion of technology flourished. In fact, the Enlightenment may have been a decisive moment in the history of the religion of technology.

In the essay with which we began, Ian Bogost framed the emergence of techno-religious thinking as a departure from the ideals of reason and science associated with the Enlightenment. This is not altogether incidental to Bogost’s argument. When he talks about the “theological” thinking that plagues our understanding of algorithms, Bogost is not working with a neutral, value-free, all-purpose definition of what constitutes the religious or the theological; there’s almost certainly no such definition available. It wouldn’t be too far from the mark, I think, to say that Bogost is working with what we might classify as an Enlightenment understanding of Religion, one that characterizes it as Reason’s Other, i.e. as a-rational if not altogether irrational, superstitious, authoritarian, and pernicious. For his part, Lanier appears to be working with similar assumptions.

Noble’s work complicates this picture, to say the least. The Enlightenment did not, as it turns out, vanquish Religion, driving it far from the pure realms of Science and Technology. In fact, to the degree that the radical Enlightenment’s assault on religious faith was successful, it empowered the religion of technology. To put this another way, the Enlightenment–and, yes, we are painting with broad strokes here–did not do away with the notions of Providence, Heaven, and Grace. Rather, the Enlightenment re-named these Progress, Utopia, and Technology respectively. To borrow a phrase, the Enlightenment immanentized the eschaton. If heaven had been understood as a transcendent goal achieved with the aid of divine grace within the context of the providentially ordered unfolding of human history, it became a Utopian vision, a heaven on earth, achieved by the ministrations of Science and Technology within the context of Progress, an inexorable force driving history toward its Utopian consummation.

As historian Leo Marx has put it, the West’s “dominant belief system turned on the idea of technical innovation as a primary agent of progress.” Indeed, the further Western culture proceeded down the path of secularization as it is traditionally understood, the greater the emphasis on technology as the principal agent of change. Marx observed that by the late nineteenth century, “the simple republican formula for generating progress by directing improved technical means to societal ends was imperceptibly transformed into a quite different technocratic commitment to improving ‘technology’ as the basis and the measure of — as all but constituting — the progress of society.”

When the prophets of the Singularity preach the gospel of transhumanism, they are not abandoning the Enlightenment heritage; they are simply embracing its fullest expression. As Bruno Latour has argued, modernity has never perfectly sustained the purity of the distinctions that were the self-declared hallmarks of its own superiority. Modernity characterized itself as a movement of secularization and differentiation, what Latour, with not a little irony, labels processes of purification. Science, politics, law, religion, ethics–these are all sharply distinguished and segregated from one another in the modern world, distinguishing it from the primitive pre-modern world. But it turns out that these spheres of human experience stubbornly resist the neat distinctions modernity sought to impose. Hybridization unfolds alongside purification, and Noble’s work has demonstrated how technology, sometimes reckoned the most coldly rational of human projects, is deeply contaminated by religion, often regarded by the same people as the most irrational of human projects.

But not just any religion. Earlier I suggested that when Bogost characterizes our thinking about algorithms as “theological,” he is almost certainly assuming a particular kind of theology. This is why it is important to classify the religion of technology more precisely as a Christian heresy. It is in Western Christianity that Noble found the roots of the religion of technology, and it is in the context of a post-Christian world that it has presently flourished.

It is Christian insofar as its aspirations are like those nurtured by the Christian faith, such as the conscious persistence of a soul after the death of the body. Noble cites Daniel Crevier, who, referencing the “Judeo-Christian tradition,” suggested that “religious beliefs, and particularly the belief in survival after death, are not incompatible with the idea that the mind emerges from physical phenomena.” This is noted on the way to explaining that a machine-based material support could be found for the mind, which leads Noble to quip, “Christ was resurrected in a new body; why not a machine?” Reporting on his study of the famed Santa Fe Institute in New Mexico, anthropologist Stefan Helmreich observed, “Judeo-Christian stories of the creation and maintenance of the world haunted my informants’ discussions of why computers might be ‘worlds’ or ‘universes,’ …. a tradition that includes stories from the Old and New Testaments (stories of creation and salvation).”

It is a heresy insofar as it departs from traditional Christian teaching regarding the givenness of human nature, the moral dimensions of humanity’s brokenness, the gracious agency of God in the salvation of humanity, and the resurrection of the body, to name a few. Having said as much, it would seem that one could perhaps conceive of the religion of technology as an imaginative account of how God might fulfill purposes that were initially revealed in incidental, pre-scientific garb. In other words, we might frame the religion of technology not so much as a Christian heresy, but rather as (post-)Christian fan-fiction, an elaborate imagining of how the hopes articulated by the Christian faith will materialize as a consequence of human ingenuity in the absence of divine action.

______________________

Near the end of The Religion of Technology, David Noble forcefully articulated the dangers posed by a blind faith in technology. “Lost in their essentially religious reveries,” Noble warned, “the technologists themselves have been blind to, or at least have displayed blithe disregard for, the harmful ends toward which their work has been directed.” Citing another historian of technology, Noble added, “The religion of technology, in the end, ‘rests on extravagant hopes which are only meaningful in the context of transcendent belief in a religious God, hopes for a total salvation which technology cannot fulfill …. By striving for the impossible, [we] run the risk of destroying the good life that is possible.’ Put simply, the technological pursuit of salvation has become a threat to our survival.” I suspect that neither Bogost nor Lanier would disagree with Noble on this score.

There is another significant point at which the religion of technology departs from its antecedent: “The millenarian promise of restoring mankind to its original Godlike perfection–the underlying premise of the religion of technology–was never meant to be universal.” Instead, the salvation it promises is limited finally to the very few who will be able to afford it; it is for neither the poor nor the weak. Nor, it would seem, is it for those who have found a measure of joy or peace or beauty within the bounds of the human condition as we now experience it, frail as it may be.

Lastly, it is worth noting that the religion of technology appears to have no doctrine of final judgment. This is not altogether surprising given that, as Bogost warned, the divinizing of technology carries the curious effect of absolving us of responsibility for the tools that we fashion and the uses to which they are put.

I have no neat series of solutions to tie all of this up; rather I will give the last word to Wendell Berry:

“To recover from our disease of limitlessness, we will have to give up the idea that we have a right to be godlike animals, that we are potentially omniscient and omnipotent, ready to discover ‘the secret of the universe.’ We will have to start over, with a different and much older premise: the naturalness and, for creatures of limited intelligence, the necessity, of limits. We must learn again to ask how we can make the most of what we are, what we have, what we have been given.”

Wizard or God, Which Would You Rather Be?

Occasionally, I ask myself whether or not I’m really on to anything when I publish the “thinking out loud” that constitutes most of the posts on this blog. And occasionally the world answers back, politely, “Yes, yes you are.”

A few months ago, in a post on automation and smart homes, I ventured an off-the-cuff observation: the smart home populated and animated by the Internet of Things amounted to a re-enchantment of the world by technological means. I further elaborated that hypothesis in a subsequent post:

“So then, we have three discernible stages–mechanization, automation, animation–in the technological enchantment of the human-built world. The technological enchantment of the human-built world is the unforeseen consequence of the disenchantment of the natural world described by sociologists of modernity, Max Weber being the most notable. These sociologists claimed that modernity entailed the rationalization of the world and the purging of mystery, but they were only partly right. It might be better to say that the world was not so much disenchanted as it was differently enchanted. This displacement and redistribution of enchantment may be just as important a factor in shaping modernity as the putative disenchantment of nature.

In an offhand, stream-of-consciousness aside, I ventured that the allure of the smart-home, and similar technologies, arose from a latent desire to re-enchant the world. I’m doubling-down on that hypothesis. Here’s the working thesis: the ongoing technological enchantment of the human-built world is a corollary of the disenchantment of the natural world. The first movement yields the second, and the two are interwoven. To call this process of technological animation an enchantment of the human-built world is not merely a figurative post-hoc gloss on what has actually happened. Rather, the work of enchantment has been woven into the process all along.”

Granted, those were fairly strong claims that have yet to be more thoroughly substantiated, but here’s a small bit of evidence that suggests that my little thesis had some merit. It is a short video clip from the NY Times’ Technology channel about the Internet of Things in which “David Rose, the author of ‘Enchanted Objects,’ sees a future where we can all live like wizards.” Emphasis mine, of course.

I had some difficulty embedding the video, so you’ll have to click over to watch it here: The Internet of Things. Really, you should. It’ll take less than three minutes of your time.

So, there was that. Because, apparently, the Internet today felt like reinforcing my quirky thoughts about technology, there was also this on the same site: Playing God Games.

That video segment clocks in at just under two minutes. If you click through to watch, you’ll note that it is a brief story about apps that allow you to play a deity in your own virtual world, with your very own virtual “followers.”

You can read that in light of my more recent musings about the appeal of games in which our “action,” and by extension we ourselves, seem to matter.

Perhaps, then, this is the more modest shape the religion of technology takes in the age of simulation and diminished expectations: you may play the wizard in your re-enchanted smart home or you may play a god in a virtual world on your smartphone. I suspect this is not what Stewart Brand had in mind when he wrote, “We are as gods and might as well get good at it.”

Simulated Futures

There’s a lot of innovation talk going on right now, or maybe it is just that I’ve been more attuned to it of late. Either way, I keep coming across pieces that tackle the topic of technological innovation from a variety of angles.

While not narrowly focused on technological innovation, this wonderfully discursive post by Alan Jacobs raises a number of relevant considerations. Jacobs ranges far and wide, so I won’t try to summarize his thoughts here. You should read the whole piece, but here is the point I want to highlight. Taking a 2012 essay by David Graeber as his point of departure, Jacobs asks us to consider the following:

“How were we taught not even to dream of flying cars and jetpacks? — or, for that matter, an end to world hunger, something that C. P. Snow, in his famous lecture on ‘the two cultures’ of the sciences and humanities, saw as clearly within our grasp more than half-a-century ago? To see ‘sophisticated simulations’ of the things we used to hope we’d really achieve as good enough?”

Here’s the relevant passage in Graeber’s essay. After watching one of the more recent Star Wars films, he wonders how impressed with the special effects audiences of the older, fifties-era sci-fi films would be. His answer upon reflection: not very. Why? Because “they thought we’d be doing this kind of thing by now. Not just figuring out more sophisticated ways to simulate it.” Graeber goes on to add,

“That last word—simulate—is key. The technologies that have advanced since the seventies are mainly either medical technologies or information technologies—largely, technologies of simulation. They are technologies of what Jean Baudrillard and Umberto Eco called the ‘hyper-real,’ the ability to make imitations that are more realistic than originals. The postmodern sensibility, the feeling that we had somehow broken into an unprecedented new historical period in which we understood that there is nothing new; that grand historical narratives of progress and liberation were meaningless; that everything now was simulation, ironic repetition, fragmentation, and pastiche—all this makes sense in a technological environment in which the only breakthroughs were those that made it easier to create, transfer, and rearrange virtual projections of things that either already existed, or, we came to realize, never would.”

Here again is the theme of technological stagnation, of the death of genuine innovation. You can read the rest of Graeber’s piece for his own theories about the causes of this stagnation. What interested me was the suggestion that we’ve swapped genuine innovation for simulations. Of course, this interested me chiefly because it seems to reinforce and expand a point I made in yesterday’s post, that our fascination with virtual worlds may stem from the failure of our non-virtual world to yield the kind of possibilities for meaningful action that human beings crave.

As our hopes for the future seem to recede, our simulations of that future become ever more compelling.

Elsewhere, Lee Billings reports on his experience at the 2007 Singularity Summit:

“Over vegetarian hors d’oeuvres and red wine at a Bay Area villa, I had chatted with the billionaire venture capitalist Peter Thiel, who planned to adopt an ‘aggressive’ strategy for investing in a ‘positive’ Singularity, which would be ‘the biggest boom ever,’ if it doesn’t first ‘blow up the whole world.’ I had talked with the autodidactic artificial-intelligence researcher Eliezer Yudkowsky about his fears that artificial minds might, once created, rapidly destroy the planet. At one point, the inventor-turned-proselytizer Ray Kurzweil teleconferenced in to discuss, among other things, his plans for becoming transhuman, transcending his own biology to achieve some sort of eternal life. Kurzweil believes this is possible, even probable, provided he can just live to see the Singularity’s dawn, which he has pegged at sometime in the middle of the 21st century. To this end, he reportedly consumes some 150 vitamin supplements a day.”

Billings also noted that many of his conversations at the conference “carried a cynical sheen of eschatological hucksterism: Climb aboard, don’t delay, invest right now, and you, too, may be among the chosen who rise to power from the ashes of the former world!”

Eschatological hucksterism … well put, indeed. That’s a phrase I’ll be tucking away for future use.

And that leads me to the concluding chapter of David Noble’s The Religion of Technology: The Divinity of Man and the Spirit of Invention. After surveying the religiously infused motives and rhetoric animating technological projects as diverse as the pursuit of AI, space exploration, and genetic engineering, Noble wrote

“As we have seen, those given to such imaginings are in the vanguard of technological development, amply endowed and in every way encouraged to realize their escapist fantasies. Often displaying a pathological dissatisfaction with, and deprecation of, the human condition, they are taking flight from the world, pointing us away from the earth, the flesh, the familiar–‘offering salvation by technical fix,’ in Mary Midgley’s apt description–all the while making the world over to conform to their vision of perfection.”

A little further on he concluded,

“Can we any longer afford to abide this system of blind belief? Ironically, the technological enterprise upon which we now ever more depend for the preservation and enlargement of our lives betrays a disdainful disregard for, indeed an impatience with, life itself. If dreams of technological escape from the burdens of mortality once translated into some relief of the human estate, the pursuit of technological transcendence has now perhaps outdistanced such earthly ends. If the religion of technology once fostered visions of social renovation, it also fueled fantasies of escaping society altogether. Today these bolder imaginings have gained sway, according to which, as one philosopher of technology recently observed, ‘everything which exists at present … is deemed disposable.’ The religion of technology, in the end, ‘rests on extravagant hopes which are only meaningful in the context of transcendent belief in a religious God, hopes for a total salvation which technology cannot fulfill …. By striving for the impossible, [we] run the risk of destroying the good life that is possible.’ Put simply, the technological pursuit of salvation has become a threat to our survival.”

I’ll leave you with that.

Cathedrals, Pyramids, or iPhones: Toward a Very Tentative Theory of Technological Innovation

A couple of years back, while I was on my World’s Fair kick, I wrote a post or two (or three) about how we imagine the future, or, rather, how we fail to imagine the future. The World’s Fairs, particularly those held between the 1930s and 70s, offered a rather grand and ambitious vision for what the future would hold. Granted, much of what made up that vision never quite materialized, and much of it now seems a tad hokey. Additionally, much of it amounted to a huge corporate ad campaign. Nevertheless, the imagined future was impressive in its scope; it was utopian. The three posts linked above each suggested that, relative to the World’s Fairs of the mid-20th century, we seem to have a rather impoverished imagination when it comes to the future.

One of those posts cited a 2011 essay by Peter Thiel, “The End of the Future,” outlining the sources of Thiel’s pessimism about the rate of technological advance. More recently, Dan Wang has cataloged a series of public statements by Thiel supporting his contention that technological innovation has slowed, and dangerously so. Thiel, who made his mark and his fortune as a founder of PayPal, has emerged over the last few years as one of Silicon Valley’s leading intellectuals. His pessimism, then, seems to run against the grain of his milieu. Thiel, however, is not pessimistic about the potential of technology itself; rather, as I understand him, he is critical of our inability to more boldly imagine what we could do with technology. His view is neatly summed up in his well-known quip, “We wanted flying cars, instead we got 140 characters.”

Thiel is not the only one who thinks that we’ve been beset by a certain gloomy malaise when it comes to imagining the future. Last week, in the pages of the New York Times Magazine, Jayson Greene wondered, with thinly veiled exasperation, why contemporary science fiction is so “glum” about AI. The article is a bit muddled at points–perhaps because the author, noting the assistance of his machines, believes it is not even half his–but it registers what seems to be an increasingly recurring complaint. Just last month, for instance, I noted a similar article in Wired that urged authors to stop writing dystopian science fiction. Behind each of these pieces there lies an implicit question: Where has our ability to imagine a hopeful, positive vision for the future gone?

Kevin Kelly is wondering the same thing. In fact, he was willing to pay for someone to tell him a positive story about the future. I’ve long thought of Kelly as one of the most optimistic of contemporary tech writers, yet of late even he appears to be striking a more ambiguous note. Perhaps needing a fresh infusion of hope, he took to Twitter with this message:

“I’ll pay $100 for the best 100-word description of a plausible technological future in 100 years that I would like to live in. Email me.”

Kelly got 23 responses, and then he constructed his own 100-word vision for the future. It is instructive to read the submissions. By “instructive,” I mean intriguing, entertaining, disconcerting, and disturbing by turns. In fact, when I first read through them I thought I’d dedicate a post to analyzing these little techno-utopian vignettes. Suffice it to say, a few people, at least, are still nurturing an expansive vision for the future.

But are their stories the exceptions that prove the rule? To put it another way, is the dominant cultural zeitgeist dystopian or utopian with regard to the future? Of course, as C.S. Lewis once put it, “What you see and what you hear depends a great deal on where you are standing. It also depends on what sort of person you are.” Whatever the case may be, there certainly seem to be a lot of people who think the zeitgeist is dystopian or, at best, depressingly unimaginative. I’m not sure they are altogether wrong about this, even if the whole story is more complicated. So why might this be?

To be clear before proceeding down this line of inquiry, I’m not so much concerned with whether we ought to be optimistic or pessimistic about the future. (The answer in any case is neither.) I’m not, in other words, approaching this topic from a normative perspective. Rather, I want to poke and prod the zeitgeist a little to see if we can’t figure out what is going on. So, in that spirit, here are a few loosely organized thoughts.

First off, our culture is, in large measure, driven by consumerism. This, of course, is little more than a cliché, but it is no less true because of it. Consumerism is finally about the individual. Individual aspirations, by their very nature, tend to be narrow and short-sighted. It is as if the potential creative force of our collective imagination were splintered into the millions of individual wills it is made to serve.

David Nye noted this devolution of our technological aspirations in his classic work on the American technological sublime. The sublime experience that once attended our encounters with nature, and then with technological creations of awe-inspiring size and dynamism, has now given way to what Nye called the consumer sublime. “Unlike the Ford assembly line or Hoover Dam,” Nye explains, “Disneyland and Las Vegas have no use value. Their representations of sublimity and special effects are created solely for entertainment. Their epiphanies have no referents; they reveal not the existence of God, not the power of nature, not the majesty of human reason, but the titillation of representation itself.”

The consumer sublime, which Nye also calls an “egotistical sublime,” amounts to “an escape from the very work, rationality, and domination that once were embodied in the American technological sublime.”

Looking at the problem of consumerism from another vantage point, consider Nicholas Carr’s theory about the hierarchy of innovation. Carr’s point of departure included Peter Thiel’s complaint about the stagnation of technological innovation cited above. In response, Carr suggested that innovation proceeds along a path more or less parallel to Maslow’s famous hierarchy of human needs. We begin by seeking to satisfy very basic needs, those related to our survival. As those basic needs are met, we are able to think about more complex needs for social interaction, personal esteem, and self-actualization.

In Carr’s stimulating repurposing of Maslow’s hierarchy, technological innovation proceeds from technologies of survival to technologies of self-fulfillment. Carr doesn’t think that these levels of innovation are neatly realized in some clean, linear fashion. But he does think that at present the incentives, “monetary and reputational,” are, in a darkly eloquent phrasing, “bending the arc of innovation … toward decadence.” Away, that is, from grand, highly visible, transformative technologies.

The end game of this consumerist reduction of technological innovation may be what Ian Bogost recently called “future ennui.” “The excitement of a novel technology (or anything, really),” Bogost writes,

“has been replaced—or at least dampened—by the anguish of knowing its future burden. This listlessness might yet prove even worse than blind boosterism or cynical naysaying. Where the trauma of future shock could at least light a fire under its sufferers, future ennui exudes the viscous languor of indifferent acceptance. It doesn’t really matter that the Apple Watch doesn’t seem necessary, no more than the iPhone once didn’t too. Increasingly, change is not revolutionary, to use a word Apple has made banal, but presaged.”

Bogost adds, “When one is enervated by future ennui, there’s no vigor left even to ask if this future is one we even want.” The technological sublime, then, becomes the consumer sublime, which becomes future ennui. This is how technological innovation ends, not with a bang but a sigh.

The second point I want to make about the pessimistic zeitgeist centers on our Enlightenment inheritance. The Enlightenment bequeathed to us, among other things, two articles of faith. The first of these was the notion of inevitable moral progress, and the second was the notion of inevitable techno-scientific progress. Together they yielded what we tend to refer to simply as the Enlightenment’s notion of Progress. Together these articles of faith cultivate hope and incite action. Unfortunately, the two were sundered by the accumulation of tragedy and despair we call the twentieth century. Techno-scientific progress was a rosy notion so long as we imagined that moral progress advanced hand in hand with it. Techno-scientific progress decoupled from Enlightenment confidence in the perfectibility of humanity leaves us with the dystopian imagination.

Interestingly, the trajectory of the American World’s Fairs illustrates both of these points. Generally speaking, the World’s Fairs of the nineteenth and early twentieth century subsumed technology within their larger vision of social progress. By the 1930s, the Fairs presented technology as the force upon which the realization of the utopian social vision depended. The 1939 New York Fair marked a turning point. It featured a utopian social vision powered by technological innovation. From that point forward, technological innovation increasingly became a goal in itself rather than a means toward a utopian society, and it was increasingly a consumer affair of diminishing scope.

That picture was painted in rather broad strokes, but I think it will bear scrutiny. Whether the illustration ultimately holds up or not, however, I certainly think the claim stands. The twentieth century shattered our collective optimism about human nature; consequently, empowering human beings with ever more powerful technologies became the stuff of nightmares rather than dreams.

Thirdly, technological innovation on a grand scale is an act of sublimation, and we are too self-knowing to sublimate. Let me lead into this discussion by acknowledging that this point may be too subtle to be true, so I offer it circumspectly. According to certain schools of psychology, sublimation describes the process by which we channel or redirect certain desires, often destructive or transgressive desires, into productive action. On this view, the great works of civilization are powered by sublimation. But, to borrow a line cited by the late Philip Rieff, “if you tell people how they can sublimate, they can’t sublimate.” In other words, sublimation is a tacit process. It is the by-product of a strong buy-in to cultural norms and ideals by which individual desire is subsumed into some larger purpose. It is the sort of dynamic, in other words, that conscious awareness hampers and that ironic detachment, our default posture toward reality, destroys. Make of that theory what you will.

The last point builds on all that I’ve laid out thus far and perhaps even ties it all together … maybe. I want to approach it by noting one segment of the wider conversation about technology where a big, positive vision for the future is nurtured: the Transhumanist movement. This should go without saying, but I’ll say it anyway just to put it beyond doubt: I don’t endorse the Transhumanist vision. By saying that it is a “positive” vision I am only saying that it is understood as such by those who adhere to it. Now, with that out of the way, here is the thing to recognize about the Transhumanist vision: its aspirations are quasi-religious in character.

I mean that in at least a couple of ways. For instance, it may be understood as a reboot of Gnosticism, particularly given its disparagement of the human body and its attendant limitations. Relatedly, it often aspires to a disembodied, virtual existence that sounds a lot like the immortality of the soul espoused by Western religions. It is in this way a movement focused on technologies of the self, that highest order of innovation in Carr’s pyramid; but rather than seeking technologies that are mere accouterments of the self, they pursue technologies which work on the self to push the self along to the next evolutionary plane. Paradoxically, then, technology in the Transhumanist vision works on the self to transcend the self as it now exists.

Consequently, the scope of the Transhumanist vision stems from the Transhumanist quest for transcendence. The technologies of the self that Carr had in mind were technologies centered on the existing, immanent self. Putting all of this together, then, we might say that technologies of the immanent self devolve into gadgets with ever diminishing returns–consumerist ephemera–yielding future ennui. The imagined technologies of the would-be transcendent self, however, are seemingly more impressive in their aims and inspire cultish devotion in those who hope for them. But they are still technologies of the self. That is to say, they are not animated by a vision of social scope nor by a project of political consequence. This lends the whole movement a certain troubling naiveté.

Perhaps it also ultimately limits technological innovation. Grand technological projects of the sort that people like Thiel and Kelly would like to see us at least imagine are animated by a culturally diffused vision, often religious or transcendent in nature, that channels individual action away from the conscious pursuit of immediate satisfaction.

The other alternative, of course, is coerced labor. Hold that thought.

I want to begin drawing this over-long post to a close by offering it as an overdue response to Pascal-Emmanuel Gobry’s discussion of Peter Thiel, the Church, and technological innovation. Gobry agreed with Thiel’s pessimism and lamented that the Church was not more active in driving technological innovation. He offered the great medieval cathedrals as an example of the sort of creation and innovation that the Church once inspired. I heartily endorse his estimation of the cathedrals as monumental works of astounding technical achievement, artistic splendor, and transcendent meaning. And, as Gobry notes, they were the first such monumental works not built on the back of forced labor.

For projects of that scale to succeed, individuals must either be animated by ideals that drive their willing participation or they must be forced by power or circumstance. In other words, cathedrals or pyramids. Cathedrals represent innovation born of freedom and transcendent ideals. The pyramids represent innovation born of forced labor and transcendent ideals.

The third alternative, of course, is the iPhone. I use the iPhone here to stand for consumer driven innovation. Innovation that is born of relative freedom (and forced labor) but absent a transcendent ideal to drive it beyond consumerist self-actualization. And that is where we are stuck, perhaps, with technological stagnation and future ennui.

But here’s the observation I want to leave you with. Our focus on technological innovation as the key to the future is a symptom of the problem; it suggests strongly that we are already compromised. The cathedrals were not built by people possessed merely of the desire to innovate. Technological innovation was a means to a culturally inspired end. [See the Adams quote below.] Insofar as we have reversed the relationship and allowed technological innovation to become our raison d’être, we may find it impossible to imagine a better future, much less bring it about. With regard to the future of society, if the answer we’re looking for is technological, then we’re not asking the right questions.

_____________________________________

You can read a follow-up piece here.

N.B. The initial version of this post referred to “slave” labor with regard to the pyramids. A reader pointed out to me that the pyramids were not built by slaves but by paid craftsmen. This prompted me to do a little research. It does indeed seem to be the case that “slaves,” given what we mean by the term, were not the primary source of labor on the pyramids. However, the distinction seems to me a fine one. These workers appear to have been subject to various degrees of “obligatory” labor, although they were also provided with food, shelter, and tax breaks. While not quite slave labor, it is not quite the labor of free people either. By contrast, you can read about the building of the cathedrals here. That said, I’ve revised the post to omit the references to slavery.

Update: Henry Adams knew something of the cultural vision at work in the building of the cathedrals. Note the last line, especially:

“The architects of the twelfth and thirteenth centuries took the Church and the universe for truths, and tried to express them in a structure which should be final.  Knowing by an enormous experience precisely where the strains were to come, they enlarged their scale to the utmost point of material endurance, lightening the load and distributing the burden until the gutters and gargoyles that seem mere ornament, and the grotesques that seem rude absurdities, all do work either for the arch or for the eye; and every inch of material, up and down, from crypt to vault, from man to God, from the universe to the atom, had its task, giving support where support was needed, or weight where concentration was felt, but always with the condition of showing conspicuously to the eye the great lines which led to unity and the curves which controlled divergence; so that, from the cross on the flèche and the keystone of the vault, down through the ribbed nervures, the columns, the windows, to the foundation of the flying buttresses far beyond the walls, one idea controlled every line; and this is true of St. Thomas’ Church as it is of Amiens Cathedral.  The method was the same for both, and the result was an art marked by singular unity, which endured and served its purpose until man changed his attitude toward the universe.”

 

Are Human Enhancement and AI Incompatible?

A few days ago, in a post featuring a series of links to stories about new and emerging technologies, I included a link to a review of Nick Bostrom’s new book, Superintelligence: Paths, Dangers, Strategies. Not long afterwards, I came across an essay adapted from Bostrom’s book on Slate’s “Future Tense” blog. The excerpt is given the cheerfully straightforward title, “You Should Be Terrified of Super Intelligent Machines.”

I’m not sure that Bostrom himself would put it quite like that. I’ve long thought of Bostrom as one of the more enthusiastic proponents of a posthumanist vision of the future. Admittedly, I’ve not read a great deal of his work (including this latest book). I first came across Bostrom’s name in Cary Wolfe’s What Is Posthumanism?, which led me to Bostrom’s article, “A History of Transhumanist Thought.”

For his part, Wolfe sought to articulate a more persistently posthumanist vision for posthumanism, one which dispensed with humanist assumptions about human nature altogether. In Wolfe’s view, Bostrom was guilty of building his transhumanist vision on a thoroughly humanist understanding of the human being. The humanism in view here, it’s worth clarifying, is that which we ordinarily associate with the Renaissance or the Enlightenment, one which highlights autonomous individuality, agency, and rationality. It is also one which assumes a Platonic or Cartesian mind/body dualism. Wolfe, like N. Katherine Hayles before him, finds this to be misguided and misleading, but I digress.

Whether Bostrom would’ve chosen such an alarmist title or not, his piece does urge us to lay aside the facile assumption that super-intelligent machines will be super-intelligent in a predictably human way. This is an anthropomorphizing fallacy. Consequently, we should consider the possibility that super-intelligent machines will pursue goals that may, as an unintended side-effect, lead to human extinction. I suspect that in the later parts of his book, Bostrom might have a few suggestions about how we might escape such a fate. I also suspect that none of these suggestions include the prospect of halting or limiting the work being done to create super-intelligent machines. In fact, judging from the chapter titles and sub-titles, it seems that the answer Bostrom advocates involves figuring out how to instill appropriate values in super-intelligent machines. This brings us back to the line of criticism articulated by Wolfe and Hayles: the traditionally humanist project of rational control and mastery is still the underlying reality.

It does seem reasonable for Bostrom, who is quite enthusiastic about the possibilities of human enhancement, to be a bit wary about the creation of super-intelligent machines. It would be unfortunate indeed if, having finally figured out how to download our consciousness or perfect a cyborg platform for it, a clever machine of our making later came around, pursuing some utterly trivial goal, and decided, without a hint of malice, that it needed to eradicate these post-human humans as a step toward the fulfillment of its task. Unfortunate, and nihilistically comic.

It is interesting to consider that these two goals we rather blithely pursue–human enhancement and artificial intelligence–may ultimately be incompatible. Of course, that is a speculative consideration, and, to some degree, so is the prospect of ever achieving either of those two goals, at least as their most ardent proponents envision their fulfillment. But let us consider it for just a moment anyway for what it might tell us about some contemporary versions of the posthumanist hope.

Years ago, C.S. Lewis famously warned that the human pursuit of mastery over Nature would eventually amount to the human pursuit of mastery over Humanity, and what this would really mean is the mastery of some humans over others. This argument is all the more compelling now, some 70 or so years after Lewis made it in The Abolition of Man. It would seem, though, that an updated version of that argument would need to include the further possibility that the tools we develop to gain mastery over nature and then humanity might finally destroy us, whatever form the “us” at that unforeseeable juncture happens to take. Perhaps this is the tacit anxiety animating Bostrom’s new work.

And this brings us back, once again, to the kind of humanism at the heart of posthumanism. The posthumanist vision that banks on some sort of eternal consciousness–the same posthumanist vision that leads Ray Kurzweil to take 150 vitamins a day–that posthumanist vision is still the vision of someone who intends to live forever in some clearly self-identifiable form. It is, in this respect, a thoroughly Western religious project insofar as it envisions and longs for the immortality of the individuated self. We might even go so far as to call it, in an obviously provocative move, a Christian heresy.

Finally, our potentially incompatible technical aspirations reveal something of the irrationality, or a-rationality if you prefer, at the heart of our most rational project. Technology and technical systems assume rationality in their construction and their operation. Thinking about their potential risks and trying to prevent and mitigate them is also a supremely rational undertaking. But at the heart of all of this rational work there is a colossal unspoken absence: there is a black hole of knowledge that, beginning with the simple fact of our inability to foresee the full ramifications of anything that we do or make, subsequently sucks into its darkness our ability to expertly anticipate and plan and manage with anything like the confident certainty we project.

It is one thing to live with this relative risk and uncertainty when we are talking about simple tools and machines (hammers, bicycles, etc.). It is another thing when we are talking about complex technical systems (automotive transportation, power grids, etc.). It is altogether something else when we are talking about technical systems that may fundamentally alter our humanity or else eventuate in its annihilation. The fact that we don’t even know how seriously to take these potential threats, that we cannot comfortably distinguish between what is still science fiction and what will, in fact, materialize in our lifetimes–that, too, is a symptom of the problem.

I keep coming back to the realization that our thinking about technology is often inadequate or ineffectual because it is starting from the wrong place; or, to put it another way, it is already proceeding from assumptions grounded in the dynamics of technology and technical systems, so it bends back toward the technological solution. If we already tacitly value efficiency, for example, if efficiency is already an assumed good that no longer needs to be argued for, then we will tend to pursue it by whatever possible means under all possible circumstances. Whenever new technologies appear, we will judge them in light of this governing preference for efficiency. If the new technology affords us a more efficient way of doing something, we will tend to embrace it.

But the question remains, why is efficiency a value that is so pervasively taken for granted? If the answer seems commonsensical, then, I’d humbly suggest that we need to examine it all the more critically. Perhaps we will find that we value efficiency because this virtue native to the working of technical and instrumental systems has spilled over into what had previously been non-technical and non-instrumental realms of human experience. Our thinking is thus already shaped (to put it in the most neutral way possible) by the very technical systems we are trying to think about.

This is but one example of the dynamic. Our ability to think clearly about technology will depend in large measure on our ability to extricate our thinking from the criteria and logic native to technological systems. This is, I fully realize, a difficult task. I would never claim that I’ve achieved this clarity of thought myself, but I do believe that our thinking about technology depends on it.

There’s a lot more to be said, but I’ll leave it there for now. Your thoughts, as always, are welcome.