Simulated Futures

There’s a lot of innovation talk going on right now, or maybe it is just that I’ve been more attuned to it of late. Either way, I keep coming across pieces that tackle the topic of technological innovation from a variety of angles.

While not narrowly focused on technological innovation, this wonderfully discursive post by Alan Jacobs raises a number of relevant considerations. Jacobs ranges far and wide, so I won’t try to summarize his thoughts here. You should read the whole piece, but here is the point I want to highlight. Taking a 2012 essay by David Graeber as his point of departure, Jacobs asks us to consider the following:

“How were we taught not even to dream of flying cars and jetpacks? — or, for that matter, an end to world hunger, something that C. P. Snow, in his famous lecture on ‘the two cultures’ of the sciences and humanities, saw as clearly within our grasp more than half-a-century ago? To see ‘sophisticated simulations’ of the things we used to hope we’d really achieve as good enough?”

Here’s the relevant passage in Graeber’s essay. After watching one of the more recent Star Wars films, he wonders how impressed with the special effects audiences of the older, fifties-era sci-fi films would be. His answer upon reflection: not very. Why? Because “they thought we’d be doing this kind of thing by now. Not just figuring out more sophisticated ways to simulate it.” Graeber goes on to add,

“That last word—simulate—is key. The technologies that have advanced since the seventies are mainly either medical technologies or information technologies—largely, technologies of simulation. They are technologies of what Jean Baudrillard and Umberto Eco called the ‘hyper-real,’ the ability to make imitations that are more realistic than originals. The postmodern sensibility, the feeling that we had somehow broken into an unprecedented new historical period in which we understood that there is nothing new; that grand historical narratives of progress and liberation were meaningless; that everything now was simulation, ironic repetition, fragmentation, and pastiche—all this makes sense in a technological environment in which the only breakthroughs were those that made it easier to create, transfer, and rearrange virtual projections of things that either already existed, or, we came to realize, never would.”

Here again is the theme of technological stagnation, of the death of genuine innovation. You can read the rest of Graeber’s piece for his own theories about the causes of this stagnation. What interested me was the suggestion that we’ve swapped genuine innovation for simulations. Of course, this interested me chiefly because it seems to reinforce and expand a point I made in yesterday’s post, that our fascination with virtual worlds may stem from the failure of our non-virtual world to yield the kind of possibilities for meaningful action that human beings crave.

As our hopes for the future seem to recede, our simulations of that future become ever more compelling.

Elsewhere, Lee Billings reports on his experience at the 2007 Singularity Summit:

“Over vegetarian hors d’oeuvres and red wine at a Bay Area villa, I had chatted with the billionaire venture capitalist Peter Thiel, who planned to adopt an ‘aggressive’ strategy for investing in a ‘positive’ Singularity, which would be ‘the biggest boom ever,’ if it doesn’t first ‘blow up the whole world.’ I had talked with the autodidactic artificial-intelligence researcher Eliezer Yudkowsky about his fears that artificial minds might, once created, rapidly destroy the planet. At one point, the inventor-turned-proselytizer Ray Kurzweil teleconferenced in to discuss, among other things, his plans for becoming transhuman, transcending his own biology to achieve some sort of eternal life. Kurzweil believes this is possible, even probable, provided he can just live to see the Singularity’s dawn, which he has pegged at sometime in the middle of the 21st century. To this end, he reportedly consumes some 150 vitamin supplements a day.”

Billings also noted that many of his conversations at the conference “carried a cynical sheen of eschatological hucksterism: Climb aboard, don’t delay, invest right now, and you, too, may be among the chosen who rise to power from the ashes of the former world!”

Eschatological hucksterism … well put, indeed. That’s a phrase I’ll be tucking away for future use.

And that leads me to the concluding chapter of David Noble’s The Religion of Technology: The Divinity of Man and the Spirit of Invention. After surveying the religiously infused motives and rhetoric animating technological projects as diverse as the pursuit of AI, space exploration, and genetic engineering, Noble wrote:

“As we have seen, those given to such imaginings are in the vanguard of technological development, amply endowed and in every way encouraged to realize their escapist fantasies. Often displaying a pathological dissatisfaction with, and deprecation of, the human condition, they are taking flight from the world, pointing us away from the earth, the flesh, the familiar–‘offering salvation by technical fix,’ in Mary Midgley’s apt description–all the while making the world over to conform to their vision of perfection.”

A little further on he concluded,

“Can we any longer afford to abide this system of blind belief? Ironically, the technological enterprise upon which we now ever more depend for the preservation and enlargement of our lives betrays a disdainful disregard for, indeed an impatience with, life itself. If dreams of technological escape from the burdens of mortality once translated into some relief of the human estate, the pursuit of technological transcendence has now perhaps outdistanced such earthly ends. If the religion of technology once fostered visions of social renovation, it also fueled fantasies of escaping society altogether. Today these bolder imaginings have gained sway, according to which, as one philosopher of technology recently observed, ‘everything which exists at present … is deemed disposable.’ The religion of technology, in the end, ‘rests on extravagant hopes which are only meaningful in the context of transcendent belief in a religious God, hopes for a total salvation which technology cannot fulfill …. By striving for the impossible, [we] run the risk of destroying the good life that is possible.’ Put simply, the technological pursuit of salvation has become a threat to our survival.”

I’ll leave you with that.

The Transhumanist Logic of Technological Innovation

What follows is a series of underdeveloped thoughts for your consideration:

Advances in robotics, AI, and automation promise to liberate human beings from labor.

The Programmable World promises to liberate us from mundane, routine, everyday tasks.

Big Data and algorithms promise to liberate us from the imperatives of understanding and deliberation.

Google promises to liberate us from the need to learn things, drive cars, or even become conscious of what we need before it is provided for us.

But what are we being liberated for? What is the end which this freedom will enable us to pursue?

What sort of person do these technologies invite us to become?

Or, if we maximized their affordances, what sort of engagement with the world would they facilitate?

In the late 1950s, Hannah Arendt worried that automated technology was closing in on the elusive promise of a world without labor at a point in history when human beings could understand themselves only as laborers. She knew that in earlier epochs the desire to transcend labor was animated by a political, philosophical, or theological anthropology that assumed there was a teleology inherent in human nature — the contemplation of the true, the good, and the beautiful or of the beatific vision of God.

But she also knew that no such teleology now animates Western culture. In fact, a case could be made that Western culture now assumes that such a teleology does not and could not exist. Unless, that is, we made it for ourselves. This is where transhumanism, extropianism, and singularity come in. If there is no teleology inherent to human nature, then the transcendence of human nature becomes the default teleology.

This quasi-religious pursuit has deep historical roots, but the logic of technological innovation may make the ideology more plausible.

Around this time last year, Nick Carr proposed that technological innovation tracks neatly with Maslow’s hierarchy of human needs (see Carr’s chart below). I found this a rather compelling and elegant thesis. But what if innovation is finally determined by something other than strictly human needs? What if, beyond self-actualization, there lies the realm of self-transcendence?

After all, when, as an article of faith, we must innovate, and no normative account of human nature serves to constrain innovation, then we arrive at a point where we ourselves will be the final field for innovation.

The technologies listed above, while not directly implicated in the transhumanist project (excepting perhaps dreams of a Google implant), tend in the same direction to the degree that they render human action in the world obsolete. The liberation they implicitly offer, in other words, is a liberation from fundamental aspects of what it has meant to be a human being.

[Image: Nick Carr’s “hierarchy of innovation” chart]