Algorithms Who Art in Apps, Hallowed Be Thy Code

If you want to understand the status of algorithms in our collective imagination, Ian Bogost proposes the following exercise in his recent essay in the Atlantic: “The next time you see someone talking about algorithms, replace the term with ‘God’ and ask yourself if the sense changes any?”

If Bogost is right, then more often than not you will find the sense of the statement entirely unchanged. This is because, in his view, “Our supposedly algorithmic culture is not a material phenomenon so much as a devotional one, a supplication made to the computers we have allowed to replace gods in our minds, even as we simultaneously claim that science has made us impervious to religion.” Bogost goes on to say that this development is part of a “larger trend” whereby “Enlightenment ideas like reason and science are beginning to flip into their opposites.” Science and technology, he fears, “have turned into a new type of theology.”

It’s not the algorithms themselves that Bogost is targeting; it is how we think and talk about them that worries him. In fact, Bogost’s chief concern is that how we talk about algorithms is impeding our ability to think clearly about them and their place in society. This is where the god-talk comes in. Bogost deploys a variety of religious categories to characterize the present fascination with algorithms.

Bogost believes “algorithms hold a special station in the new technological temple because computers have become our favorite idols.” Later on he writes, “the algorithmic metaphor gives us a distorted, theological view of computational action.” Additionally, “Data has become just as theologized as algorithms, especially ‘big data,’ whose name is meant to elevate information to the level of celestial infinity.” “We don’t want an algorithmic culture,” he concludes, “especially if that phrase just euphemizes a corporate theocracy.” The analogy to religious belief is a compelling rhetorical move. It vividly illuminates Bogost’s key claim: the idea of an “algorithm” now functions as a metaphor that conceals more than it reveals.

He prepares the ground for this claim by reminding us of earlier technological metaphors that ultimately obscured important realities. The metaphor of the mind as computer, for example, “reaches the rank of religious fervor when we choose to believe, as some do, that we can simulate cognition through computation and achieve the singularity.” Similarly, the metaphor of the machine, which is really to say the abstract idea of a machine, yields a profound misunderstanding of mechanical automation in the realm of manufacturing. Bogost reminds us that bringing consumer goods to market still “requires intricate, repetitive human effort.” Manufacturing, as it turns out, “isn’t as machinic nor as automated as we think it is.”

Likewise, the idea of an algorithm, as it is bandied about in public discourse, is a metaphorical abstraction that obscures how various digital and analog components, including human action, come together to produce the effects we carelessly attribute to algorithms. Near the end of the essay, Bogost sums it up this way:

“the algorithm has taken on a particularly mythical role in our technology-obsessed era, one that has allowed it to wear the garb of divinity. Concepts like ‘algorithm’ have become sloppy shorthands, slang terms for the act of mistaking multipart complex systems for simple, singular ones. Of treating computation theologically rather than scientifically or culturally.”

But why does any of this matter? It matters, Bogost insists, because this way of thinking blinds us in two important ways. First, our sloppy shorthand “allows us to chalk up any kind of computational social change as pre-determined and inevitable,” allowing the perpetual deflection of responsibility for the consequences of technological change. The apotheosis of the algorithm encourages what I’ve elsewhere labeled a Borg Complex, an attitude toward technological change aptly summed up by the phrase, “Resistance is futile.” It’s a way of thinking about technology that forecloses the possibility of thinking about and taking responsibility for our choices regarding the development, adoption, and implementation of new technologies. Second, Bogost rightly fears that this “theological” way of thinking about algorithms may cause us to forget that computational systems can offer only one, necessarily limited perspective on the world. “The first error,” Bogost writes, “turns computers into gods, the second treats their outputs as scripture.”

______________________

Bogost is right to challenge the quasi-religious reverence sometimes exhibited toward technology. It is, as he fears, an impediment to clear thinking. Indeed, he is not the only one calling for the secularization of our technological endeavors. Jaron Lanier has spoken at length about the introduction of religious thinking into the field of AI. In a recent interview, Lanier expressed his concerns this way:

“There is a social and psychological phenomenon that has been going on for some decades now:  A core of technically proficient, digitally-minded people reject traditional religions and superstitions. They set out to come up with a better, more scientific framework. But then they re-create versions of those old religious superstitions! In the technical world these superstitions are just as confusing and just as damaging as before, and in similar ways.”

While Lanier’s concerns are similar to Bogost’s, it may be worth noting that Lanier’s use of religious categories is rather more concrete. As far as I can tell, Bogost deploys a religious frame as a rhetorical device, and rather effectively so. Lanier’s criticisms, however, have been aroused by religiously intoned expressions of a desire for transcendence voiced by denizens of the tech world themselves.

But such expressions are hardly new, nor are they relegated to the realm of AI. In The Religion of Technology: The Divinity of Man and the Spirit of Invention, David Noble rightly insisted that “modern technology and modern faith are neither complements nor opposites, nor do they represent succeeding stages of human development. They are merged, and always have been, the technological enterprise being, at the same time, an essentially religious endeavor.”

So that no one would misunderstand his meaning, he added,

“This is not meant in a merely metaphorical sense, to suggest that technology is similar to religion in that it evokes religious emotions of omnipotence, devotion, and awe, or that it has become a new (secular) religion in and of itself, with its own clerical caste, arcane rituals, and articles of faith. Rather it is meant literally and historically, to indicate that modern technology and religion have evolved together and that, as a result, the technological enterprise has been and remains suffused with religious belief.”

Along with chapters on the space program, atomic weapons, and biotechnology, Noble devoted a chapter to the history of AI, titled “The Immortal Mind.” Noble found that AI research had often been inspired by a curious fixation on the achievement of god-like, disembodied intelligence as a step toward personal immortality. Many of the sentiments and aspirations that Noble identifies in figures as diverse as George Boole, Claude Shannon, Alan Turing, Edward Fredkin, Marvin Minsky, Daniel Crevier, Danny Hillis, and Hans Moravec–all of them influential theorists and practitioners in the development of AI–find their consummation in the Singularity movement. The movement envisions a time, 2045 is frequently suggested, when the distinction between machines and humans will blur and humanity as we know it will be eclipsed. Before Ray Kurzweil, the chief prophet of the Singularity, wrote about “spiritual machines,” Noble had astutely anticipated how the trajectories of AI, Internet, Virtual Reality, and Artificial Life research were all converging on the age-old quest for immortal life. Noble, who died in 2010, must have read the work of Kurzweil and company as a remarkable validation of his thesis in The Religion of Technology.

Interestingly, the sentiments that Noble documented alternated between the heady thrill of creating non-human Minds and non-human Life, on the one hand, and, on the other, the equally heady thrill of pursuing the possibility of radical life-extension and even immortality. Frankenstein meets Faust, we might say. Humanity plays god in order to bestow god’s gifts on itself. Noble cites one Artificial Life researcher who explains, “I feel like God; in fact, I am God to the universes I create,” and another who declares, “Technology will soon enable human beings to change into something else altogether [and thereby] escape the human condition.” Ultimately, these two aspirations come together into a grand techno-eschatological vision, expressed here by Hans Moravec:

“Our speculation ends in a supercivilization, the synthesis of all solar system life, constantly improving and extending itself, spreading outward from the sun, converting non-life into mind …. This process might convert the entire universe into an extended thinking entity … the thinking universe … an eternity of pure cerebration.”

Little wonder that Pamela McCorduck, who has been chronicling the progress of AI since the early 1980s, can say, “The enterprise is a god-like one. The invention–the finding within–of gods represents our reach for the transcendent.” And, lest we forget where we began, a more earth-bound, but no less eschatological hope was expressed by Edward Fredkin in his MIT and Stanford courses on “saving the world.” He hoped for a “global algorithm” that “would lead to peace and harmony.” I would suggest that similar aspirations are expressed by those who believe that Big Data will yield a God’s-eye view of human society, providing wisdom and guidance that would be otherwise inaccessible to ordinary human forms of knowing and thinking.

Perhaps this should not be altogether surprising. As the old saying has it, the Grand Canyon wasn’t formed by someone dragging a stick. This is just a way of saying that causes must be commensurate to the effects they produce. Grand technological projects such as space flight, the harnessing of atomic energy, and the pursuit of artificial intelligence are massive undertakings requiring stupendous investments of time, labor, and resources. What kind of motives are sufficient to generate those sorts of expenditures? You’ll need something more than whim, to put it mildly. You may need something akin to religious devotion. Would we have attempted to put a man on the moon apart from the ideological frame provided by the Cold War, which cast space exploration as a field of civilizational battle for survival? Consider, as a more recent example, what drives Elon Musk’s pursuit of interplanetary space travel.

______________________

Without diminishing the criticisms offered by either Bogost or Lanier, Noble’s historical investigation into the roots of divinized or theologized technology reminds us that the disorder runs much deeper than we might initially imagine. Noble’s own genealogy traces the origin of the religion of technology to the turn of the first millennium. It emerges out of a volatile mix of millenarian dreams, apocalyptic fervor, mechanical innovation, and monastic piety. Its evolution proceeds apace through the Renaissance, finding one of its most ardent prophets in the Elizabethan statesman Francis Bacon. Even through the Enlightenment, the religion of technology flourished. In fact, the Enlightenment may have been a decisive moment in the history of the religion of technology.

In the essay with which we began, Ian Bogost framed the emergence of techno-religious thinking as a departure from the ideals of reason and science associated with the Enlightenment. This is not altogether incidental to Bogost’s argument. When he talks about the “theological” thinking that plagues our understanding of algorithms, Bogost is not working with a neutral, value-free, all-purpose definition of what constitutes the religious or the theological; there’s almost certainly no such definition available. It wouldn’t be too far from the mark, I think, to say that Bogost is working with what we might classify as an Enlightenment understanding of Religion, one that characterizes it as Reason’s Other, i.e. as a-rational if not altogether irrational, superstitious, authoritarian, and pernicious. For his part, Lanier appears to be working with similar assumptions.

Noble’s work complicates this picture, to say the least. The Enlightenment did not, as it turns out, vanquish Religion, driving it far from the pure realms of Science and Technology. In fact, to the degree that the radical Enlightenment’s assault on religious faith was successful, it empowered the religion of technology. To put this another way, the Enlightenment–and, yes, we are painting with broad strokes here–did not do away with the notions of Providence, Heaven, and Grace. Rather, the Enlightenment re-named these Progress, Utopia, and Technology respectively. To borrow a phrase, the Enlightenment immanentized the eschaton. If heaven had been understood as a transcendent goal achieved with the aid of divine grace within the context of the providentially ordered unfolding of human history, it became a Utopian vision, a heaven on earth, achieved by the ministrations of Science and Technology within the context of Progress, an inexorable force driving history toward its Utopian consummation.

As historian Leo Marx has put it, the West’s “dominant belief system turned on the idea of technical innovation as a primary agent of progress.” Indeed, the further Western culture proceeded down the path of secularization as it is traditionally understood, the greater the emphasis on technology as the principal agent of change. Marx observed that by the late nineteenth century, “the simple republican formula for generating progress by directing improved technical means to societal ends was imperceptibly transformed into a quite different technocratic commitment to improving ‘technology’ as the basis and the measure of — as all but constituting — the progress of society.”

When the prophets of the Singularity preach the gospel of transhumanism, they are not abandoning the Enlightenment heritage; they are simply embracing its fullest expression. As Bruno Latour has argued, modernity has never perfectly sustained the purity of the distinctions that were the self-declared hallmarks of its own superiority. Modernity characterized itself as a movement of secularization and differentiation, what Latour, with not a little irony, labels processes of purification. Science, politics, law, religion, ethics–these are all sharply distinguished and segregated from one another in the modern world, distinguishing it from the primitive pre-modern world. But it turns out that these spheres of human experience stubbornly resist the neat distinctions modernity sought to impose. Hybridization unfolds alongside purification, and Noble’s work has demonstrated how technology, sometimes reckoned the most coldly rational of human projects, is deeply contaminated by religion, often regarded by the same people as the most irrational of human projects.

But not just any religion. Earlier I suggested that when Bogost characterizes our thinking about algorithms as “theological,” he is almost certainly assuming a particular kind of theology. This is why it is important to classify the religion of technology more precisely as a Christian heresy. It is in Western Christianity that Noble found the roots of the religion of technology, and it is in the context of a post-Christian world that it presently flourishes.

It is Christian insofar as its aspirations are like those nurtured by the Christian faith, such as the conscious persistence of a soul after the death of the body. Noble cites Daniel Crevier, who, referencing the “Judeo-Christian tradition,” suggested that “religious beliefs, and particularly the belief in survival after death, are not incompatible with the idea that the mind emerges from physical phenomena.” This is noted on the way to explaining that a machine-based material support could be found for the mind, which leads Noble to quip, “Christ was resurrected in a new body; why not a machine?” Reporting on his study of the famed Santa Fe Institute, anthropologist Stefan Helmreich observed, “Judeo-Christian stories of the creation and maintenance of the world haunted my informants’ discussions of why computers might be ‘worlds’ or ‘universes,’ …. a tradition that includes stories from the Old and New Testaments (stories of creation and salvation).”

It is a heresy insofar as it departs from traditional Christian teaching regarding the givenness of human nature, the moral dimensions of humanity’s brokenness, the gracious agency of God in the salvation of humanity, and the resurrection of the body, to name a few. Having said as much, it would seem that one could perhaps conceive of the religion of technology as an imaginative account of how God might fulfill purposes that were initially revealed in incidental, pre-scientific garb. In other words, we might frame the religion of technology not so much as a Christian heresy, but rather as (post-)Christian fan-fiction, an elaborate imagining of how the hopes articulated by the Christian faith will materialize as a consequence of human ingenuity in the absence of divine action.

______________________

Near the end of The Religion of Technology, David Noble forcefully articulated the dangers posed by a blind faith in technology. “Lost in their essentially religious reveries,” Noble warned, “the technologists themselves have been blind to, or at least have displayed blithe disregard for, the harmful ends toward which their work has been directed.” Citing another historian of technology, Noble added, “The religion of technology, in the end, ‘rests on extravagant hopes which are only meaningful in the context of transcendent belief in a religious God, hopes for a total salvation which technology cannot fulfill …. By striving for the impossible, [we] run the risk of destroying the good life that is possible.’ Put simply, the technological pursuit of salvation has become a threat to our survival.” I suspect that neither Bogost nor Lanier would disagree with Noble on this score.

There is another significant point at which the religion of technology departs from its antecedent: “The millenarian promise of restoring mankind to its original Godlike perfection–the underlying premise of the religion of technology–was never meant to be universal.” Instead, the salvation it promises is limited finally to the very few who will be able to afford it; it is for neither the poor nor the weak. Nor, it would seem, is it for those who have found a measure of joy or peace or beauty within the bounds of the human condition as we now experience it, frail as it may be.

Lastly, it is worth noting that the religion of technology appears to have no doctrine of final judgment. This is not altogether surprising given that, as Bogost warned, the divinizing of technology carries the curious effect of absolving us of responsibility for the tools that we fashion and the uses to which they are put.

I have no neat series of solutions to tie all of this up; rather I will give the last word to Wendell Berry:

“To recover from our disease of limitlessness, we will have to give up the idea that we have a right to be godlike animals, that we are potentially omniscient and omnipotent, ready to discover ‘the secret of the universe.’ We will have to start over, with a different and much older premise: the naturalness and, for creatures of limited intelligence, the necessity, of limits. We must learn again to ask how we can make the most of what we are, what we have, what we have been given.”

Friday Links: Questioning Technology Edition

My previous post, which raised 41 questions about the ethics of technology, is turning out to be one of the most viewed on this site. That is, admittedly, faint praise, but I’m glad it is, because helping us think about technology is why I write this blog. The post has also prompted a few valuable recommendations from readers, and I wanted to pass these along to you in case you missed them in the comments.

Matt Thomas reminded me of two earlier lists of questions we should be asking about our technologies. The first of these is Jacques Ellul’s list of 76 Reasonable Questions to Ask of Any Technology (update: see Doug Hill’s comment below about the authorship of this list.) The second is Neil Postman’s more concise list of Six Questions to Ask of New Technologies. Both are worth perusing.

Also, Chad Kohalyk passed along a link to Shannon Vallor’s module, An Introduction to Software Engineering Ethics.

Greg Lloyd provided some helpful links to the (frequently misunderstood) Amish approach to technology, including one to this IEEE article by Jameson Wetmore: “Amish Technology: Reinforcing Values and Building Communities” (PDF). In it, we read, “When deciding whether or not to allow a certain practice or technology, the Amish first ask whether it is compatible with their values.” What a radical idea; the rest of us should try it sometime! While we’re on the topic, I wrote about the Tech-Savvy Amish a couple of years ago.

I can’t remember who linked to it, but I also came across an excellent 1994 article in Ars Electronica that is composed entirely of questions about what we would today call a Smart Home, “How smart does your bed have to be, before you are afraid to go to sleep at night?”

And while we’re talking about lists, here’s a post on Kranzberg’s Six Laws of Technology and a list of 11 things I try to do, often with only marginal success, to achieve a healthy relationship with the Internet.

Enjoy these, and thanks again to those of you who provided the links.

The Best Time to Take the Measure of a New Technology

In defense of brick and mortar bookstores, particularly used book stores, advocates frequently appeal to the virtue of serendipity and the pleasure of an unexpected discovery. You may know what you’re looking for, but you never know what you might find. Ostensibly, recommendation algorithms serve the same function in online contexts, but the effect is rather the opposite of serendipity and the discoveries are always expected.

Take, for instance, this book I stumbled on at a local used book store: Electric Language: A Philosophical Study of Word Processing by Michael Heim. The book is currently #3,577,358 in Amazon’s Bestsellers Ranking, and it has been bought so infrequently that no other book is linked to it. My chances of ever finding this book were vanishingly small, but on Amazon they were slimmer still.

I’m quite glad, though, that Electric Language did cross my path. Heim’s book is a remarkably rich meditation on the meaning of word processing, something we now take for granted and do not think about at all. Heim wrote his book in 1987. The article in which he first explored the topic appeared in 1984. In other words, Heim was contemplating word processing while the practice was still relatively new. Heim imagines that some might object that it was still too early to take the measure of word processing. Heim’s rejoinder is worth quoting at length:

“Yet it is precisely this point in time that causes us to become philosophical. For it is at the moment of such transitions that the past becomes clear as a past, as obsolescent, and the future becomes clear as destiny, a challenge of the unknown. A philosophical study of digital writing made five or ten years from now would be better than one written now in the sense of being more comprehensive, more fully certain in its grasp of the new writing. At the same time, however, the felt contrast with the older writing technology would have become faded by the gradually increasing distance from typewritten and mechanical writing. Like our involvement with the automobile, that with processing texts will grow in transparency–until it becomes a condition of our daily life, taken for granted.

But what is granted to us in each epoch was at one time a beginning, a start, a change that was startling. Though the conditions of daily living do become transparent, they still draw upon our energies and upon the time of our lives; they soon become necessary conditions and come to structure our lives. It is incumbent on us then to grow philosophical while we can still be startled, for philosophy, if Aristotle can be trusted, begins in wonder, and, as Heraclitus suggests, ‘One should not act or speak as if asleep.'”

It is when a technology is not yet taken for granted that it is available to thought. It is only when a living memory of the “felt contrast” remains that the significance of the new technology is truly evident. Counterintuitive conclusions, perhaps, but I think he’s right. There’s a way of understanding a new technology that is available only to those who live through its appearance and adoption, and who know, first hand, what it displaced. As I’ve written before, this explains, in part, why it is so tempting to view critics of new technologies as Chicken Littles:

One of the recurring rhetorical tropes that I’ve listed as a Borg Complex symptom is that of noting that every new technology elicits criticism and evokes fear, society always survives the so-called moral panic or techno-panic, and thus concluding, QED, that those critiques and fears, including those being presently expressed, are always misguided and overblown. It’s a pattern of thought I’ve complained about more than once. In fact, it features as the tenth of my unsolicited points of advice to tech writers.

Now, while it is true, as Adam Thierer has noted here, that we should try to understand how societies and individuals have come to cope with or otherwise integrate new technologies, it is not the case that such negotiated settlements are always unalloyed goods for society or for individuals. But this line of argument is compelling to the degree that living memory of what has been displaced has been lost. I may know at an intellectual level what has been lost, because I read about it in a book for example, but it is another thing altogether to have felt that loss. We move on, in other words, because we forget the losses, or, more to the point, because we never knew or experienced the losses for ourselves–they were always someone else’s problem.

Heim wrote Electric Language on a portable Tandy 100.

Thinking With the Past

In the last post, I cited a passage or two from Hannah Arendt in which she discusses “thinking without a bannister,” thinking that attempts to think “as though nobody had thought before.” I endorsed her challenge, but I hinted in passing at a certain unease with this formulation. This largely stemmed from my own sense that we must try to learn from the past. Arendt, however, does not mean to suggest that there is nothing at all that can be learned from the past. This is evident from the attentive care she gives to ancient sources in her efforts to illuminate the present state of things. Rather, she seems to believe that a coherent tradition of thought which we can trust to do our thinking for us, a tradition of thought that can set our intellectual defaults as it were–this kind of tradition is lost. The appearance of totalitarianism in the 20th century (and, I think, the scope and scale of modern technology) led Arendt to her conclusion that thinking must start over. But, again, not entirely without recourse to the tradition.

Here is Arendt expounding upon what she calls Walter Benjamin’s “gift of thinking poetically”:

“This thinking, fed by the present, works with the ‘thought fragments’ it can wrest from the past and gather about itself. Like a pearl diver who descends to the bottom of the sea, not to excavate the bottom and bring it to light but to pry loose the rich and the strange, the pearls and the coral in the depths of the past–but not in order to resuscitate it the way it was and to contribute to the renewal of the extinct ages. What guides this thinking is the conviction that although the living is subject to the ruin of the time, the process of decay is at the same time a process of crystallization, that in the depth of the sea, into which sinks and is dissolved what was once alive, some things suffer a ‘sea change’ and survive in new crystallized forms and shapes that remain immune from the elements, as though they waited only for the pearl diver who one day will come down to them and bring them up into the world of the living–as ‘thought fragments,’ as something ‘rich and strange,’ and perhaps even as everlasting Urphänomene [archetypal or pure phenomenon].”

As Richard Bernstein puts it in his essay, “Arendt on Thinking,” “what Arendt says in her eloquent essay on Walter Benjamin also might have been said about Arendt.” Bernstein goes on to explain that Arendt “links thinking together with remembrance and storytelling. Remembrance is one of the most important ‘modes of thought,’ and it requires story-telling in order to preserve those ‘small islands of freedom.'”

The tradition may have been broken, but it is not altogether lost to us. By the proper method, we may still pluck some pearls and repurpose them to help us make sense of the present.

That passage, in case you’re curious, comes from Arendt’s Introduction to a collection of Benjamin’s essays she edited titled Illuminations: Essays and Reflections. Bernstein’s essay may be found in The Cambridge Companion to Hannah Arendt.

Elon Musk: Prophet of Cosmic Manifest Destiny

There’s a well-known story about C.S. Lewis and J.R.R. Tolkien’s agreement to write stories about Space and Time. Dissatisfied with the state of Space/Time stories in the 1930s, the two decided to write the kind of stories they wanted to read. Lewis agreed to write a story focused on Space, and Tolkien agreed to write a story focused on Time. Ultimately, Lewis followed through and produced the three books popularly known as his Space Trilogy. Tolkien never quite got around to writing his story about Time; he was too busy finishing some business about a ring.

I relate that story because I was reminded of it as I read about SpaceX and Tesla founder, Elon Musk. I’ve written about Peter Thiel a time or two recently, but Thiel isn’t the only tech entrepreneur with an expansive vision for the future. Whereas Thiel’s interests seem to gravitate toward technologies associated with Transhumanism, however, fellow PayPal alum Elon Musk’s interests are interplanetary in scope. It is as if, not unlike Lewis and Tolkien, Musk and Thiel decided to split up Space and Time between them. They, of course, would do more than write–they would seek to conquer their respective fields. Thiel sets out to conquer Time through radical human enhancement, and Musk sets out to conquer Space through interplanetary colonization. Interestingly enough, their wildest dreams rather depend on one another for their ultimate success.

Musk was recently interviewed by Ross Anderson for Aeon. Anderson’s title for his nearly 7,000 word essay that resulted, “Exodus,” is apt on at least two counts. It encompasses both the central theme of the interview–interplanetary migration for the sake of species survival–and the religious themes evoked by Anderson.

It’s a long, interesting piece, but here are some of the highlights, particularly in light of recent posts considering technological innovation, culture, and the religion of technology.

First, a snapshot of Musk’s stated vision for space travel:

“I had come to SpaceX to talk to Musk about his vision for the future of space exploration, and I opened our conversation by asking him an old question: why do we spend so much money in space, when Earth is rife with misery, human and otherwise? It might seem like an unfair question. Musk is a private businessman, not a publicly funded space agency. But he is also a special case. His biggest customer is NASA and, more importantly, Musk is someone who says he wants to influence the future of humanity. He will tell you so at the slightest prompting, without so much as flinching at the grandiosity of it, or the track record of people who have used this language in the past. Musk enjoys making money, of course, and he seems to relish the billionaire lifestyle, but he is more than just a capitalist. Whatever else might be said about him, Musk has staked his fortune on businesses that address fundamental human concerns. And so I wondered, why space?

Musk did not give me the usual reasons. He did not claim that we need space to inspire people. He did not sell space as an R & D lab, a font for spin-off technologies like astronaut food and wilderness blankets. He did not say that space is the ultimate testing ground for the human intellect. Instead, he said that going to Mars is as urgent and crucial as lifting billions out of poverty, or eradicating deadly disease.

‘I think there is a strong humanitarian argument for making life multi-planetary,’ he told me, ‘in order to safeguard the existence of humanity in the event that something catastrophic were to happen, in which case being poor or having a disease would be irrelevant, because humanity would be extinct.’”

While discussing our failure, thus far, to find intelligent life, Musk observed:

“At our current rate of technological growth, humanity is on a path to be godlike in its capabilities.”

He then went on to explain why he thinks we’ve not yet encountered intelligent life:

“Musk has a more sinister theory. ‘The absence of any noticeable life may be an argument in favour of us being in a simulation,’ he told me. ‘Like when you’re playing an adventure game, and you can see the stars in the background, but you can’t ever get there. If it’s not a simulation, then maybe we’re in a lab and there’s some advanced alien civilisation that’s just watching how we develop, out of curiosity, like mould in a petri dish.’ Musk flipped through a few more possibilities, each packing a deeper existential chill than the last, until finally he came around to the import of it all. ‘If you look at our current technology level, something strange has to happen to civilisations, and I mean strange in a bad way,’ he said. ‘And it could be that there are a whole lot of dead, one-planet civilisations.’”

A reminder dropped in by Anderson of the pedigree of Musk’s ambitions:

“In 1610, the astronomer Johannes Kepler wrote, in a letter to Galileo: ‘Let us create vessels and sails adjusted to the heavenly ether, and there will be plenty of people unafraid of the empty wastes. In the meantime, we shall prepare, for the brave sky-travellers, maps of the celestial bodies.'”

And then, toward the end of the piece, Anderson begins to play up the religion of technology jargon (emphasis mine):

“But a million people on Mars sounds like a techno-futurist fantasy, one that would make Ray Kurzweil blush. And yet, the very existence of SpaceX is fantasy. After talking with Musk, I took a stroll through his cathedral-like rocket factory.”

….

“This fear, that the sacred mission of SpaceX could be compromised, resurfaced when I asked Musk if he would one day go to Mars himself. ‘I’d like to go, but if there is a high risk of death, I wouldn’t want to put the company in jeopardy,’ he told me. ‘I only want to go when I could be confident that my death wouldn’t result in the primary mission of the company falling away.’ It’s possible to read Musk as a Noah figure, a man obsessed with building a great vessel, one that will safeguard humankind against global catastrophe. But he seems to see himself as a Moses, someone who makes it possible to pass through the wilderness – the ‘empty wastes,’ as Kepler put it to Galileo – but never sets foot in the Promised Land.”

….

“You can see why NASA has given Musk a shot at human spaceflight. He makes a great rocket but, more than that, he has the old vision in him. He is a revivalist, for those of us who still buy into cosmic manifest destiny. And he can preach. He says we are doomed if we stay here. He says we will suffer fire and brimstone, and even extinction. He says we should go with him, to that darkest and most treacherous of shores. He promises a miracle.”