Wizard or God, Which Would You Rather Be?

Occasionally, I ask myself whether or not I’m really on to anything when I publish the “thinking out loud” that constitutes most of the posts on this blog. And occasionally the world answers back, politely, “Yes, yes you are.”

A few months ago, in a post on automation and smart homes, I ventured an off-the-cuff observation: the smart home animated by the Internet of Things amounted to a re-enchantment of the world by technological means. I further elaborated that hypothesis in a subsequent post:

“So then, we have three discernible stages–mechanization, automation, animation–in the technological enchantment of the human-built world. The technological enchantment of the human-built world is the unforeseen consequence of the disenchantment of the natural world described by sociologists of modernity, Max Weber being the most notable. These sociologists claimed that modernity entailed the rationalization of the world and the purging of mystery, but they were only partly right. It might be better to say that the world was not so much disenchanted as it was differently enchanted. This displacement and redistribution of enchantment may be just as important a factor in shaping modernity as the putative disenchantment of nature.

In an offhand, stream-of-consciousness aside, I ventured that the allure of the smart-home, and similar technologies, arose from a latent desire to re-enchant the world. I’m doubling-down on that hypothesis. Here’s the working thesis: the ongoing technological enchantment of the human-built world is a corollary of the disenchantment of the natural world. The first movement yields the second, and the two are interwoven. To call this process of technological animation an enchantment of the human-built world is not merely a figurative post-hoc gloss on what has actually happened. Rather, the work of enchantment has been woven into the process all along.”

Granted, those were fairly strong claims that have yet to be thoroughly substantiated, but here’s a small bit of evidence suggesting that my little thesis had some merit. It is a short video clip from the NY Times’ Technology channel about the Internet of Things in which “David Rose, the author of ‘Enchanted Objects,’ sees a future where we can all live like wizards.” Emphasis mine, of course.

I had some difficulty embedding the video, so you’ll have to click over to watch it here: The Internet of Things. Really, you should. It’ll take less than three minutes of your time.

So, there was that. Because, apparently, the Internet today felt like reinforcing my quirky thoughts about technology, there was also this on the same site: Playing God Games.

That video segment clocks in at just under two minutes. If you click through to watch, you’ll note that it is a brief story about apps that allow you to play a deity in your own virtual world, with your very own virtual “followers.”

You can read that in light of my more recent musings about the appeal of games in which our “action,” and by extension we ourselves, seem to matter.

Perhaps, then, this is the more modest shape the religion of technology takes in the age of simulation and diminished expectations: you may play the wizard in your re-enchanted smart home or you may play a god in a virtual world on your smartphone. I suspect this is not what Stewart Brand had in mind when he wrote, “We are as gods and might as well get good at it.”

Simulated Futures

There’s a lot of innovation talk going on right now, or maybe it is just that I’ve been more attuned to it of late. Either way, I keep coming across pieces that tackle the topic of technological innovation from a variety of angles.

While not narrowly focused on technological innovation, this wonderfully discursive post by Alan Jacobs raises a number of relevant considerations. Jacobs ranges far and wide, so I won’t try to summarize his thoughts here. You should read the whole piece, but here is the point I want to highlight. Taking a 2012 essay by David Graeber as his point of departure, Jacobs asks us to consider the following:

“How were we taught not even to dream of flying cars and jetpacks? — or, for that matter, an end to world hunger, something that C. P. Snow, in his famous lecture on ‘the two cultures’ of the sciences and humanities, saw as clearly within our grasp more than half-a-century ago? To see ‘sophisticated simulations’ of the things we used to hope we’d really achieve as good enough?”

Here’s the relevant passage in Graeber’s essay. After watching one of the more recent Star Wars films, he wonders how impressed with the special effects audiences of the older, fifties-era sci-fi films would be. His answer upon reflection: not very. Why? Because “they thought we’d be doing this kind of thing by now. Not just figuring out more sophisticated ways to simulate it.” Graeber goes on to add,

“That last word—simulate—is key. The technologies that have advanced since the seventies are mainly either medical technologies or information technologies—largely, technologies of simulation. They are technologies of what Jean Baudrillard and Umberto Eco called the ‘hyper-real,’ the ability to make imitations that are more realistic than originals. The postmodern sensibility, the feeling that we had somehow broken into an unprecedented new historical period in which we understood that there is nothing new; that grand historical narratives of progress and liberation were meaningless; that everything now was simulation, ironic repetition, fragmentation, and pastiche—all this makes sense in a technological environment in which the only breakthroughs were those that made it easier to create, transfer, and rearrange virtual projections of things that either already existed, or, we came to realize, never would.”

Here again is the theme of technological stagnation, of the death of genuine innovation. You can read the rest of Graeber’s piece for his own theories about the causes of this stagnation. What interested me was the suggestion that we’ve swapped genuine innovation for simulations. Of course, this interested me chiefly because it seems to reinforce and expand a point I made in yesterday’s post, that our fascination with virtual worlds may stem from the failure of our non-virtual world to yield the kind of possibilities for meaningful action that human beings crave.

As our hopes for the future seem to recede, our simulations of that future become ever more compelling.

Elsewhere, Lee Billings reports on his experience at the 2007 Singularity Summit:

“Over vegetarian hors d’oeuvres and red wine at a Bay Area villa, I had chatted with the billionaire venture capitalist Peter Thiel, who planned to adopt an ‘aggressive’ strategy for investing in a ‘positive’ Singularity, which would be ‘the biggest boom ever,’ if it doesn’t first ‘blow up the whole world.’ I had talked with the autodidactic artificial-intelligence researcher Eliezer Yudkowsky about his fears that artificial minds might, once created, rapidly destroy the planet. At one point, the inventor-turned-proselytizer Ray Kurzweil teleconferenced in to discuss, among other things, his plans for becoming transhuman, transcending his own biology to achieve some sort of eternal life. Kurzweil believes this is possible, even probable, provided he can just live to see the Singularity’s dawn, which he has pegged at sometime in the middle of the 21st century. To this end, he reportedly consumes some 150 vitamin supplements a day.”

Billings also noted that many of his conversations at the conference “carried a cynical sheen of eschatological hucksterism: Climb aboard, don’t delay, invest right now, and you, too, may be among the chosen who rise to power from the ashes of the former world!”

Eschatological hucksterism … well put, indeed. That’s a phrase I’ll be tucking away for future use.

And that leads me to the concluding chapter of David Noble’s The Religion of Technology: The Divinity of Man and the Spirit of Invention. After surveying the religiously infused motives and rhetoric animating technological projects as diverse as the pursuit of AI, space exploration, and genetic engineering, Noble wrote:

“As we have seen, those given to such imaginings are in the vanguard of technological development, amply endowed and in every way encouraged to realize their escapist fantasies. Often displaying a pathological dissatisfaction with, and deprecation of, the human condition, they are taking flight from the world, pointing us away from the earth, the flesh, the familiar–‘offering salvation by technical fix,’ in Mary Midgley’s apt description–all the while making the world over to conform to their vision of perfection.”

A little further on he concluded,

“Can we any longer afford to abide this system of blind belief? Ironically, the technological enterprise upon which we now ever more depend for the preservation and enlargement of our lives betrays a disdainful disregard for, indeed an impatience with, life itself. If dreams of technological escape from the burdens of mortality once translated into some relief of the human estate, the pursuit of technological transcendence has now perhaps outdistanced such earthly ends. If the religion of technology once fostered visions of social renovation, it also fueled fantasies of escaping society altogether. Today these bolder imaginings have gained sway, according to which, as one philosopher of technology recently observed, ‘everything which exists at present … is deemed disposable.’ The religion of technology, in the end, ‘rests on extravagant hopes which are only meaningful in the context of transcendent belief in a religious God, hopes for a total salvation which technology cannot fulfill …. By striving for the impossible, [we] run the risk of destroying the good life that is possible.’ Put simply, the technological pursuit of salvation has become a threat to our survival.”

I’ll leave you with that.

Cathedrals, Pyramids, or iPhones: Toward a Very Tentative Theory of Technological Innovation

A couple of years back, while I was on my World’s Fair kick, I wrote a post or two (or three) about how we imagine the future, or, rather, how we fail to imagine the future. The World’s Fairs, particularly those held between the 1930s and 1970s, offered a rather grand and ambitious vision of what the future would hold. Granted, much of what made up that vision never quite materialized, and much of it now seems a tad hokey. Additionally, much of it amounted to a huge corporate ad campaign. Nevertheless, the imagined future was impressive in its scope; it was utopian. The three posts linked above each suggested that, relative to the World’s Fairs of the mid-20th century, we seem to have a rather impoverished imagination when it comes to the future.

One of those posts cited a 2011 essay by Peter Thiel, “The End of the Future,” outlining the sources of Thiel’s pessimism about the rate of technological advance. More recently, Dan Wang has cataloged a series of public statements by Thiel supporting his contention that technological innovation has slowed, and dangerously so. Thiel, who made his mark and his fortune as a founder of PayPal, has emerged over the last few years as one of Silicon Valley’s leading intellectuals. His pessimism, then, seems to run against the grain of his milieu. Thiel, however, is not pessimistic about the potential of technology itself; rather, as I understand him, he is critical of our inability to more boldly imagine what we could do with technology. His view is neatly summed up in his well-known quip, “We wanted flying cars, instead we got 140 characters.”

Thiel is not the only one who thinks that we’ve been beset by a certain gloomy malaise when it comes to imagining the future. Last week, in the pages of the New York Times Magazine, Jayson Greene wondered, with thinly veiled exasperation, why contemporary science-fiction is so “glum” about AI. The article is a bit muddled at points–perhaps because the author, noting the assistance of his machines, believes it is not even half his–but it registers what seems to be an increasingly recurring complaint. Just last month, for instance, I noted a similar article in Wired that urged authors to stop writing dystopian science-fiction. Behind each of these pieces there lies an implicit question: Where has our ability to imagine a hopeful, positive vision for the future gone?

Kevin Kelly is wondering the same thing. In fact, he was willing to pay for someone to tell him a positive story about the future. I’ve long thought of Kelly as one of the most optimistic of contemporary tech writers, yet of late even he appears to be striking a more ambiguous note. Perhaps needing a fresh infusion of hope, he took to Twitter with this message:

“I’ll pay $100 for the best 100-word description of a plausible technological future in 100 years that I would like to live in. Email me.”

Kelly got 23 responses, and then he constructed his own 100-word vision for the future. It is instructive to read the submissions. By “instructive,” I mean intriguing, entertaining, disconcerting, and disturbing by turns. In fact, when I first read through them I thought I’d dedicate a post to analyzing these little techno-utopian vignettes. Suffice it to say, a few people, at least, are still nurturing an expansive vision for the future.

But are their stories the exceptions that prove the rule? To put it another way, is the dominant cultural zeitgeist dystopian or utopian with regard to the future? Of course, as C.S. Lewis once put it, “What you see and what you hear depends a great deal on where you are standing. It also depends on what sort of person you are.” Whatever the case may be, there certainly seem to be a lot of people who think the zeitgeist is dystopian or, at best, depressingly unimaginative. I’m not sure they are altogether wrong about this, even if the whole story is more complicated. So why might this be?

To be clear before proceeding down this line of inquiry, I’m not so much concerned with whether we ought to be optimistic or pessimistic about the future. (The answer in any case is neither.) I’m not, in other words, approaching this topic from a normative perspective. Rather, I want to poke and prod the zeitgeist a little bit to see if we can’t figure out what is going on. So, in that spirit, here are a few loosely organized thoughts.

First off, our culture is, in large measure, driven by consumerism. This, of course, is little more than a cliché, but it is no less true for that. Consumerism is finally about the individual. Individual aspirations, by their very nature, tend to be narrow and short-sighted. It is as if the potential creative force of our collective imagination were splintered into the millions of individual wills it is made to serve.

David Nye noted this devolution of our technological aspirations in his classic work on the American technological sublime. The sublime experience that once attended our encounters with nature and then our encounters with technological creations of awe-inspiring size and dynamism, has now given way to what Nye called the consumer sublime. “Unlike the Ford assembly line or Hoover Dam,” Nye explains, “Disneyland and Las Vegas have no use value. Their representations of sublimity and special effects are created solely for entertainment. Their epiphanies have no referents; they reveal not the existence of God, not the power of nature, not the majesty of human reason, but the titillation of representation itself.”

The consumer sublime, which Nye also calls an “egotistical sublime,” amounts to “an escape from the very work, rationality, and domination that once were embodied in the American technological sublime.”

Looking at the problem of consumerism from another vantage point, consider Nicholas Carr’s theory about the hierarchy of innovation. Carr’s point of departure included Peter Thiel’s complaint about the stagnation of technological innovation cited above. In response, Carr suggested that innovation proceeds along a path more or less parallel to Maslow’s famous hierarchy of human needs. We begin by seeking to satisfy very basic needs, those related to our survival. As those basic needs are met, we are able to think about more complex needs for social interaction, personal esteem, and self-actualization.

In Carr’s stimulating repurposing of Maslow’s hierarchy, technological innovation proceeds from technologies of survival to technologies of self-fulfillment. Carr doesn’t think that these levels of innovation are neatly realized in some clean, linear fashion. But he does think that at present the incentives, “monetary and reputational,” are, in a darkly eloquent phrasing, “bending the arc of innovation … toward decadence.” Away, that is, from grand, highly visible, transformative technologies.

The end game of this consumerist reduction of technological innovation may be what Ian Bogost recently called “future ennui.” “The excitement of a novel technology (or anything, really),” Bogost writes,

“has been replaced—or at least dampened—by the anguish of knowing its future burden. This listlessness might yet prove even worse than blind boosterism or cynical naysaying. Where the trauma of future shock could at least light a fire under its sufferers, future ennui exudes the viscous languor of indifferent acceptance. It doesn’t really matter that the Apple Watch doesn’t seem necessary, no more than the iPhone once didn’t too. Increasingly, change is not revolutionary, to use a word Apple has made banal, but presaged.”

Bogost adds, “When one is enervated by future ennui, there’s no vigor left even to ask if this future is one we even want.” The technological sublime, then, becomes the consumer sublime, which becomes future ennui. This is how technological innovation ends, not with a bang but a sigh.

The second point I want to make about the pessimistic zeitgeist centers on our Enlightenment inheritance. The Enlightenment bequeathed to us, among other things, two articles of faith. The first of these was the notion of inevitable moral progress, and the second was the notion of inevitable techno-scientific progress. Together they yielded what we tend to refer to simply as the Enlightenment’s notion of Progress, and together they cultivated hope and incited action. Unfortunately, the two were sundered by the accumulation of tragedy and despair we call the twentieth century. Techno-scientific progress was a rosy notion so long as we imagined that moral progress advanced hand in hand with it. Decoupled from Enlightenment confidence in the perfectibility of humanity, techno-scientific progress leaves us with the dystopian imagination.

Interestingly, the trajectory of the American World’s Fairs illustrates both of these points. Generally speaking, the World’s Fairs of the nineteenth and early twentieth century subsumed technology within their larger vision of social progress. By the 1930s, the Fairs presented technology as the force upon which the realization of the utopian social vision depended. The 1939 New York Fair marked a turning point. It featured a utopian social vision powered by technological innovation. From that point forward, technological innovation increasingly became a goal in itself rather than a means toward a utopian society, and technological innovation was increasingly a consumer affair of diminishing scope.

That picture was painted in rather broad strokes, but I think it will bear scrutiny. Whether the illustration ultimately holds up or not, however, I certainly think the claim stands. The twentieth century shattered our collective optimism about human nature; consequently, empowering human beings with ever more powerful technologies became the stuff of nightmares rather than dreams.

Thirdly, technological innovation on a grand scale is an act of sublimation, and we are too self-knowing to sublimate. Let me lead into this discussion by acknowledging that this point may be too subtle to be true, so I offer it circumspectly. According to certain schools of psychology, sublimation describes the process by which we channel or redirect certain desires, often destructive or transgressive desires, into productive action. On this view, the great works of civilization are powered by sublimation. But, to borrow a line cited by the late Philip Rieff, “if you tell people how they can sublimate, they can’t sublimate.” In other words, sublimation is a tacit process. It is the by-product of a strong buy-in to cultural norms and ideals by which individual desire is subsumed into some larger purpose. It is the sort of dynamic, in other words, that conscious awareness hampers and that ironic detachment, our default posture toward reality, destroys. Make of that theory what you will.

The last point builds on all that I’ve laid out thus far and perhaps even ties it all together … maybe. I want to approach it by noting one segment of the wider conversation about technology where a big, positive vision for the future is nurtured: the Transhumanist movement. This should go without saying, but I’ll say it anyway just to put it beyond doubt: I don’t endorse the Transhumanist vision. By saying that it is a “positive” vision I am only saying that it is understood as a positive vision by those who adhere to it. Now, with that out of the way, here is the thing to recognize about the Transhumanist vision: its aspirations are quasi-religious in character.

I mean that in at least a couple of ways. For instance, it may be understood as a reboot of Gnosticism, particularly given its disparagement of the human body and its attendant limitations. Relatedly, it often aspires to a disembodied, virtual existence that sounds a lot like the immortality of the soul espoused by Western religions. It is in this way a movement focused on technologies of the self, that highest order of innovation in Carr’s pyramid; but rather than seeking technologies that are mere accouterments of the self, they pursue technologies which work on the self to push the self along to the next evolutionary plane. Paradoxically, then, technology in the Transhumanist vision works on the self to transcend the self as it now exists.

Consequently, the scope of the Transhumanist vision stems from the Transhumanist quest for transcendence. The technologies of the self that Carr had in mind were technologies centered on the existing, immanent self. Putting all of this together, then, we might say that technologies of the immanent self devolve into gadgets with ever diminishing returns–consumerist ephemera–yielding future ennui. The imagined technologies of the would-be transcendent self, however, are seemingly more impressive in their aims and inspire cultish devotion in those who hope for them. But they are still technologies of the self. That is to say, they are not animated by a vision of social scope nor by a project of political consequence. This lends the whole movement a certain troubling naiveté.

Perhaps it also ultimately limits technological innovation. Grand technological projects of the sort that people like Thiel and Kelly would like to see us at least imagine are animated by a culturally diffused vision, often religious or transcendent in nature, that channels individual action away from the conscious pursuit of immediate satisfaction.

The other alternative, of course, is coerced labor. Hold that thought.

I want to begin drawing this over-long post to a close by offering it as an overdue response to Pascal-Emmanuel Gobry’s discussion of Peter Thiel, the Church, and technological innovation. Gobry agreed with Thiel’s pessimism and lamented that the Church was not more active in driving technological innovation. He offered the great medieval cathedrals as an example of the sort of creation and innovation that the Church once inspired. I heartily endorse his estimation of the cathedrals as monumental works of astounding technical achievement, artistic splendor, and transcendent meaning. And, as Gobry notes, they were the first such monumental works not built on the back of forced labor.

For projects of that scale to succeed, individuals must either be animated by ideals that drive their willing participation or they must be forced by power or circumstance. In other words, cathedrals or pyramids. Cathedrals represent innovation born of freedom and transcendent ideals. The pyramids represent innovation born of forced labor and transcendent ideals.

The third alternative, of course, is the iPhone. I use the iPhone here to stand for consumer-driven innovation: innovation that is born of relative freedom (and forced labor) but absent a transcendent ideal to drive it beyond consumerist self-actualization. And that is where we are stuck, perhaps, with technological stagnation and future ennui.

But here’s the observation I want to leave you with. Our focus on technological innovation as the key to the future is a symptom of the problem; it suggests strongly that we are already compromised. The cathedrals were not built by people possessed merely of the desire to innovate. Technological innovation was a means to a culturally inspired end. [See the Adams quote below.] Insofar as we have reversed the relationship and allowed technological innovation to be our raison d’être, we may find it impossible to imagine a better future, much less bring it about. With regard to the future of society, if the answer we’re looking for is technological, then we’re not asking the right questions.

_____________________________________

You can read a follow-up piece here.

N.B. The initial version of this post referred to “slave” labor with regards to the pyramids. A reader pointed out to me that the pyramids were not built by slaves but by paid craftsmen. This prompted me to do a little research. It does indeed seem to be the case that “slaves,” given what we mean by the term, were not the primary source of labor on the pyramids. However, the distinction seems to me to be a fine one. These workers appear to have been subject to various degrees of “obligatory” labor although also provided with food, shelter, and tax breaks. While not quite slave labor, it is not quite the labor of free people either. By contrast, you can read about the building of the cathedrals here. That said, I’ve revised the post to omit the references to slavery.

Update: Henry Adams knew something of the cultural vision at work in the building of the cathedrals. Note the last line, especially:

“The architects of the twelfth and thirteenth centuries took the Church and the universe for truths, and tried to express them in a structure which should be final.  Knowing by an enormous experience precisely where the strains were to come, they enlarged their scale to the utmost point of material endurance, lightening the load and distributing the burden until the gutters and gargoyles that seem mere ornament, and the grotesques that seem rude absurdities, all do work either for the arch or for the eye; and every inch of material, up and down, from crypt to vault, from man to God, from the universe to the atom, had its task, giving support where support was needed, or weight where concentration was felt, but always with the condition of showing conspicuously to the eye the great lines which led to unity and the curves which controlled divergence; so that, from the cross on the flèche and the keystone of the vault, down through the ribbed nervures, the columns, the windows, to the foundation of the flying buttresses far beyond the walls, one idea controlled every line; and this is true of St. Thomas’ Church as it is of Amiens Cathedral.  The method was the same for both, and the result was an art marked by singular unity, which endured and served its purpose until man changed his attitude toward the universe.”

 

Are Human Enhancement and AI Incompatible?

A few days ago, in a post featuring a series of links to stories about new and emerging technologies, I included a link to a review of Nick Bostrom’s new book, Superintelligence: Paths, Dangers, Strategies. Not long afterwards, I came across an essay adapted from Bostrom’s book on Slate’s “Future Tense” blog. The excerpt is given the cheerfully straightforward title, “You Should Be Terrified of Super Intelligent Machines.”

I’m not sure that Bostrom himself would put it quite like that. I’ve long thought of Bostrom as one of the more enthusiastic proponents of a posthumanist vision of the future. Admittedly, I’ve not read a great deal of his work (including this latest book). I first came across Bostrom’s name in Cary Wolfe’s What Is Posthumanism?, which led me to Bostrom’s article, “A History of Transhumanist Thought.”

For his part, Wolfe sought to articulate a more persistently posthumanist vision for posthumanism, one which dispensed with humanist assumptions about human nature altogether. In Wolfe’s view, Bostrom was guilty of building his transhumanist vision on a thoroughly humanist understanding of the human being. The humanism in view here, it’s worth clarifying, is that which we ordinarily associate with the Renaissance or the Enlightenment, one which highlights autonomous individuality, agency, and rationality. It is also one which assumes a Platonic or Cartesian mind/body dualism. Wolfe, like N. Katherine Hayles before him, finds this to be misguided and misleading, but I digress.

Whether Bostrom would’ve chosen such an alarmist title or not, his piece does urge us to lay aside the facile assumption that super-intelligent machines will be super-intelligent in a predictably human way. This is an anthropomorphizing fallacy. Consequently, we should consider the possibility that super-intelligent machines will pursue goals that may, as an unintended side-effect, lead to human extinction. I suspect that in the later parts of his book, Bostrom might have a few suggestions about how we might escape such a fate. I also suspect that none of these suggestions include the prospect of halting or limiting the work being done to create super-intelligent machines. In fact, judging from the chapter titles and sub-titles, it seems that the answer Bostrom advocates involves figuring out how to instill appropriate values in super-intelligent machines. This brings us back to the line of criticism articulated by Wolfe and Hayles: the traditionally humanist project of rational control and mastery is still the underlying reality.

It does seem reasonable for Bostrom, who is quite enthusiastic about the possibilities of human enhancement, to be a bit wary about the creation of super-intelligent machines. It would be unfortunate indeed if, having finally figured out how to download our consciousness or perfect a cyborg platform for it, a clever machine of our making later came around, pursuing some utterly trivial goal, and decided, without a hint of malice, that it needed to eradicate these post-human humans as a step toward the fulfillment of its task. Unfortunate, and nihilistically comic.

It is interesting to consider that these two goals we rather blithely pursue–human enhancement and artificial intelligence–may ultimately be incompatible. Of course, that is a speculative consideration, and, to some degree, so is the prospect of ever achieving either of those two goals, at least as their most ardent proponents envision their fulfillment. But let us consider it for just a moment anyway for what it might tell us about some contemporary versions of the posthumanist hope.

Years ago, C.S. Lewis famously warned that the human pursuit of mastery over Nature would eventually amount to the human pursuit of mastery over Humanity, and what this would really mean is the mastery of some humans over others. This argument is all the more compelling now, some 70 or so years after Lewis made it in The Abolition of Man. It would seem, though, that an updated version of that argument would need to include the further possibility that the tools we develop to gain mastery over nature and then humanity might finally destroy us, whatever form the “us” at that unforeseeable juncture happens to take. Perhaps this is the tacit anxiety animating Bostrom’s new work.

And this brings us back, once again, to the kind of humanism at the heart of posthumanism. The posthumanist vision that banks on some sort of eternal consciousness–the same posthumanist vision that leads Ray Kurzweil to take 150 vitamins a day–that posthumanist vision is still the vision of someone who intends to live forever in some clearly self-identifiable form. It is, in this respect, a thoroughly Western religious project insofar as it envisions and longs for the immortality of the individuated self. We might even go so far as to call it, in an obviously provocative move, a Christian heresy.

Finally, our potentially incompatible technical aspirations reveal something of the irrationality, or a-rationality if you prefer, at the heart of our most rational project. Technology and technical systems assume rationality in their construction and their operation. Thinking about their potential risks and trying to prevent and mitigate them is also a supremely rational undertaking. But at the heart of all of this rational work there is a colossal unspoken absence: there is a black hole of knowledge that, beginning with the simple fact of our inability to foresee the full ramifications of anything that we do or make, subsequently sucks into its darkness our ability to expertly anticipate and plan and manage with anything like the confident certainty we project.

It is one thing to live with this relative risk and uncertainty when we are talking about simple tools and machines (hammers, bicycles, etc.). It is another thing when we are talking about complex technical systems (automotive transportation, power grids, etc.). It is altogether something else when we are talking about technical systems that may fundamentally alter our humanity or else eventuate in its annihilation. The fact that we don’t even know how seriously to take these potential threats, that we cannot comfortably distinguish between what is still science fiction and what will, in fact, materialize in our lifetimes, that’s a symptom of the problem, too.

I keep coming back to the realization that our thinking about technology is often inadequate or ineffectual because it is starting from the wrong place; or, to put it another way, it is already proceeding from assumptions grounded in the dynamics of technology and technical systems, so it bends back toward the technological solution. If we already tacitly value efficiency, for example, if efficiency is already an assumed good that no longer needs to be argued for, then we will tend to pursue it by whatever possible means under all possible circumstances. Whenever new technologies appear, we will judge them in light of this governing preference for efficiency. If the new technology affords us a more efficient way of doing something, we will tend to embrace it.

But the question remains: why is efficiency a value that is so pervasively taken for granted? If the answer seems commonsensical, then I'd humbly suggest that we need to examine it all the more critically. Perhaps we will find that we value efficiency because this virtue native to the working of technical and instrumental systems has spilled over into what had previously been non-technical and non-instrumental realms of human experience. Our thinking is thus already shaped (to put it in the most neutral way possible) by the very technical systems we are trying to think about.

This is but one example of the dynamic. Our ability to think clearly about technology will depend in large measure on our ability to extricate our thinking from the criteria and logic native to technological systems. This is, I fully realize, a difficult task. I would never claim that I’ve achieved this clarity of thought myself, but I do believe that our thinking about technology depends on it.

There’s a lot more to be said, but I’ll leave it there for now. Your thoughts, as always, are welcome.

Innovation, Technology, and the Church (Part Two)

What has Silicon Valley to do with Jerusalem?

More than you might think, but that question, of course, is a riff on Tertullian’s famous query, “What has Athens to do with Jerusalem?” It was a rhetorical question. By it, Tertullian implied that Christian theology, represented by Jerusalem, should steer clear of Greek philosophy, represented by Athens. I offer my question, in which Silicon Valley represents technological “innovation,” more straightforwardly and as a way of introducing this second post in conversation with Pascal-Emmanuel Gobry’s essay, “Peter Thiel and the Cathedral.”

In the first post, I raised some questions about terminology and the force of Gobry's analogy: "The monastics were nothing if not innovators, and the [monastic] orders were the great startups of the day." I was glad to get some feedback from Gobry, and you can read it here; you can also read my response below Gobry's comment. Of course, Internet reading being what it is, it's probably better if I just give you the gist of it. Gobry thought I made a bit too much of the definitional nuances while also making clear that he was well aware of the distinctions between a twenty-first century startup and a thirteenth century monastery.

For the record, I never doubted Gobry's awareness of the fine points at issue. But when the fine points are relevant to the conversation, I think it best to bring them to the surface. It matters, though, what point is being made, and this may be where my response to Gobry's essay missed the mark, or where Gobry and I might be in danger of talking past one another. The essay reads a bit like a manifesto; it is a call to action. Indeed, it explicitly ends as such. Given that rhetorical context, my approach may not have been entirely fair. In fact, it may be better to frame most of what I plan to write as being "inspired" by Gobry's post, rather than as a response to it.

It would depend, I think, on the function of the historical analogies, and I'll let Gobry clarify that for me. As I mentioned in my reply to his comment, it matters what function the historical analogies–e.g., monasteries as start-ups–are intended to play. Are they merely inspirational illustrations, or are they intended as morally compelling arguments? My initial response assumed the latter, thus my concern to clarify terminology and surface the nuance before moving on to a more formal evaluation of the claim.

The closing paragraphs of Gobry’s response to my post, however, suggested to me that I’d misread the import of the analogies. Twice Gobry clarified his interest in the comparisons:

“What interests me in the analogy between a startup and a monastic foundation is the element of risk and folly in pursuit of a specific goal,”

and

“What interests me in the analogy between monastic orders and startups is the distinct sense of mission, a mission which is accomplished through the daring, proficiency and determination of a small band of people, and through concrete ends.”

That sounds a bit more like an inspirational historical illustration than it does an argument by analogy based on the assumed moral force of historical precedent. Of course, that’s not a criticism. (Although, I’m not sure it’s such a great illustration for the same reasons I didn’t think it made a convincing argument.) It just means that I needed to recalibrate my own approach and that it might be best to untether these considerations a bit from Gobry’s post. Before doing so, I would just add this. If the crux of the analogy is the element of risk and folly in pursuit of a goal and a sense of mission executed by a devoted community, then the monastic tradition is just one of many possible religious and non-religious illustrations.

Fundamentally, though, even while Gobry and I approach it from different angles, I still do think we are both interested in the same issue: the religious/cultural matrix of technological innovation.

In Gobry's view, we need to recover the innovative spirit illustrated within the monastic tradition and also by the building of the great medieval cathedrals. In a subsequent post, I'll argue that a closer look at both helps us to see how the relationship between technology and culture has evolved in such a way that the strength of cultural institutions that ought to drive "innovation" has been sapped. In this light, Gobry's plea for the church to take up the mantle of innovation might be understood as a symptom of what has gone wrong with respect to technology's relationship to religion, and culture more broadly. In short, the problem is that technological innovation is no longer a means directed by the church or some other cultural institution to some noble end; it is too frequently pursued as an end in itself. For the record, I don't think this is what Gobry himself is advocating.

Gobry is right to raise questions about the relationship between technological innovation and, to borrow Lynn White's phrasing, cultural climates. White himself argued that there was something about the cultural climate of medieval Europe that proved hospitable to technological innovation. But looking over the evolution of technology and culture over the subsequent centuries, it becomes apparent that the relationship between technology and culture has become disordered. In the next post, I'll start with the medieval cathedrals to fill out that claim.