A few days ago, in a post featuring a series of links to stories about new and emerging technologies, I included a link to a review of Nick Bostrom’s new book, Superintelligence: Paths, Dangers, Strategies. Not long afterwards, I came across an essay adapted from Bostrom’s book on Slate’s “Future Tense” blog. The excerpt is given the cheerfully straightforward title, “You Should Be Terrified of Super Intelligent Machines.”
I’m not sure that Bostrom himself would put it quite like that. I’ve long thought of Bostrom as one of the more enthusiastic proponents of a posthumanist vision of the future. Admittedly, I’ve not read a great deal of his work (including this latest book). I first came across Bostrom’s name in Cary Wolfe’s What Is Posthumanism?, which led me to Bostrom’s article, “A History of Transhumanist Thought.”
For his part, Wolfe sought to articulate a more persistently posthumanist vision for posthumanism, one which dispensed with humanist assumptions about human nature altogether. In Wolfe’s view, Bostrom was guilty of building his transhumanist vision on a thoroughly humanist understanding of the human being. The humanism in view here, it’s worth clarifying, is that which we ordinarily associate with the Renaissance or the Enlightenment, one which highlights autonomous individuality, agency, and rationality. It is also one which assumes a Platonic or Cartesian mind/body dualism. Wolfe, like N. Katherine Hayles before him, finds this to be misguided and misleading, but I digress.
Whether Bostrom would’ve chosen such an alarmist title or not, his piece does urge us to lay aside the facile assumption that super-intelligent machines will be super-intelligent in a predictably human way. This is an anthropomorphizing fallacy. Consequently, we should consider the possibility that super-intelligent machines will pursue goals that may, as an unintended side-effect, lead to human extinction. I suspect that in the later parts of his book, Bostrom might have a few suggestions about how we might escape such a fate. I also suspect that none of these suggestions include the prospect of halting or limiting the work being done to create super-intelligent machines. In fact, judging from the chapter titles and sub-titles, it seems that the answer Bostrom advocates involves figuring out how to instill appropriate values in super-intelligent machines. This brings us back to the line of criticism articulated by Wolfe and Hayles: the traditionally humanist project of rational control and mastery is still the underlying reality.
It does seem reasonable for Bostrom, who is quite enthusiastic about the possibilities of human enhancement, to be a bit wary about the creation of super-intelligent machines. It would be unfortunate indeed if, having finally figured out how to download our consciousness or perfect a cyborg platform for it, a clever machine of our making later came around, pursuing some utterly trivial goal, and decided, without a hint of malice, that it needed to eradicate these post-human humans as a step toward the fulfillment of its task. Unfortunate, and nihilistically comic.
It is interesting to consider that these two goals we rather blithely pursue–human enhancement and artificial intelligence–may ultimately be incompatible. Of course, that is a speculative consideration, and, to some degree, so is the prospect of ever achieving either of those two goals, at least as their most ardent proponents envision their fulfillment. But let us consider it for just a moment anyway for what it might tell us about some contemporary versions of the posthumanist hope.
Years ago, C.S. Lewis famously warned that the human pursuit of mastery over Nature would eventually amount to the human pursuit of mastery over Humanity, and what this would really mean is the mastery of some humans over others. This argument is all the more compelling now, some 70 or so years after Lewis made it in The Abolition of Man. It would seem, though, that an updated version of that argument would need to include the further possibility that the tools we develop to gain mastery over nature and then humanity might finally destroy us, whatever form the “us” at that unforeseeable juncture happens to take. Perhaps this is the tacit anxiety animating Bostrom’s new work.
And this brings us back, once again, to the kind of humanism at the heart of posthumanism. The posthumanist vision that banks on some sort of eternal consciousness–the same posthumanist vision that leads Ray Kurzweil to take 150 vitamins a day–that posthumanist vision is still the vision of someone who intends to live forever in some clearly self-identifiable form. It is, in this respect, a thoroughly Western religious project insofar as it envisions and longs for the immortality of the individuated self. We might even go so far as to call it, in an obviously provocative move, a Christian heresy.
Finally, our potentially incompatible technical aspirations reveal something of the irrationality, or a-rationality if you prefer, at the heart of our most rational project. Technology and technical systems assume rationality in their construction and their operation. Thinking about their potential risks and trying to prevent and mitigate them is also a supremely rational undertaking. But at the heart of all of this rational work there is a colossal unspoken absence: a black hole of knowledge that begins with the simple fact of our inability to foresee the full ramifications of anything we do or make, and that subsequently swallows our ability to expertly anticipate, plan, and manage with anything like the confident certainty we project.
It is one thing to live with this relative risk and uncertainty when we are talking about simple tools and machines (hammers, bicycles, etc.). It is another thing when we are talking about complex technical systems (automotive transportation, power grids, etc.). It is altogether something else when we are talking about technical systems that may fundamentally alter our humanity or else eventuate in its annihilation. The fact that we don’t even know how seriously to take these potential threats, that we cannot comfortably distinguish between what is still science fiction and what will, in fact, materialize in our lifetimes, that’s a symptom of the problem, too.
I keep coming back to the realization that our thinking about technology is often inadequate or ineffectual because it is starting from the wrong place; or, to put it another way, it is already proceeding from assumptions grounded in the dynamics of technology and technical systems, so it bends back toward the technological solution. If we already tacitly value efficiency, for example, if efficiency is already an assumed good that no longer needs to be argued for, then we will tend to pursue it by whatever possible means under all possible circumstances. Whenever new technologies appear, we will judge them in light of this governing preference for efficiency. If the new technology affords us a more efficient way of doing something, we will tend to embrace it.
But the question remains: why is efficiency a value that is so pervasively taken for granted? If the answer seems commonsensical, then I’d humbly suggest that we need to examine it all the more critically. Perhaps we will find that we value efficiency because this virtue, native to the workings of technical and instrumental systems, has spilled over into what had previously been non-technical and non-instrumental realms of human experience. Our thinking is thus already shaped (to put it in the most neutral way possible) by the very technical systems we are trying to think about.
This is but one example of the dynamic. Our ability to think clearly about technology will depend in large measure on our ability to extricate our thinking from the criteria and logic native to technological systems. This is, I fully realize, a difficult task. I would never claim that I’ve achieved this clarity of thought myself, but I do believe that our thinking about technology depends on it.
There’s a lot more to be said, but I’ll leave it there for now. Your thoughts, as always, are welcome.
First of all, this was a really enjoyable piece and I want to thank you for sharing it. It certainly feeds my longtime interest in developments in AI as well as the ethical dimensions of AI development. Also, as an IT professional who works with managers who are technologically illiterate (not an insult, just a fact), I often find myself seeking ways of helping them understand the technology within a framework that is not native to our technological systems. If I were to try to explain the technology to them within the particular kinds of logical structures common to those systems, I would get absolutely nowhere. I’m grateful that their lack of expertise forces me to step outside of my discipline and think about technology within alternative frameworks, because I think that’s incredibly useful. Specifically, I think you’re right to question our assumption that efficiency in task completion is a good thing in every case. The efficiency of automation could, for example, keep us from developing the kinds of skills we need to live fulfilling lives as humans or posthuman beings.
I had a few responses to this post, Michael. Firstly, it encourages me to finally read and think a little more about transhumanism and posthumanism, topics that have long struck me as scary and of questionable redeeming value.
Secondly, with regard to the extreme endgames one might imagine for artificial intelligence, robotics, nanotechnology, etc., I’m reminded of the fear some have regarding high-energy particle colliders and the creation of a black hole that would end the universe. (I remember you once asked my opinion on Virilio’s writing about black holes and the LHC, and I said that I thought he was misinformed in this particular area.) To most physicists this is a silly idea, a non sequitur. At the same time, the idea has survived, in my view, partly because it adds danger and excitement to a topic the public might otherwise lose interest in. The scientists and engineers involved don’t believe in this danger themselves, but in some way they perpetuate the fear of it.
As for how to analyze technology without taking up the same assumptions, it’s an interesting question. I think it’s important to know what the assumptions are for the designers of a given technology. Looking at the question of optimizing efficiency, for example, as long as one agrees on the goals of a certain system, then it seems reasonable to try to make it efficient. This focus, however, tends to push other questions to the sideline, such as whether the stated goals are really good goals, and whether the system may have other purposes. And to get even more general than technology, saying that people, or societies, or even economies should operate more efficiently assumes that one has already clarified the purposes and goals of these entities.
I sometimes try to think about this question of efficiency when I think about French bureaucracy. Sometimes I find it very slow and frustrating. I get angry and imagine that, if I had the power, I would make it more efficient. Sometimes, however, I realize that the system is serving different goals than I initially imagined. For example, this slow system provides job security for a large number of government workers. It can also provide a kind of resilience, and a primacy of human intervention, that more narrowly focused technical systems may not have.
These were a few thoughts inspired by your post.
This is a really interesting piece, and I like your call for reflection on the instant value placed on anything that enhances efficiency, particularly when combined with the Emerson quote in your latest blog post.
As a society we are so ready to embrace anything that enhances efficiency without stopping to think about what we actually lose when we think we are gaining something. Emails are a good example. At first they were probably considered a more efficient medium than the post. Now, they are so easy to send and to copy to so many people that many of us are inundated by emails and the truly pertinent information we need often gets lost or takes so long to find that this instant communication has, in fact, become ineffective.
When you combine anticipated efficiency with enhanced AI there is definitely a need to think through the implications of what this might mean in terms of human advancement. And, more importantly, to accept that we cannot know what the implications might be.
This piece is so well reasoned and insightful. You make some very good points about the way in which humanity approaches the edges of the future, blindly feeling about with irrational fears and unfounded hopes. The problem is that the people who are developing technologies are focused only on the thing they are developing and rarely think about how it will be used. Oppenheimer and Einstein made the atomic bombing of Japan a possibility, but were in no way thinking of that possibility when they were working. They were working to solve certain problems without the ability to extrapolate the new set of problems their solutions would engender. Human beings are short-sighted creatures with enormous curiosity. But is it possible to contain that curiosity within a framework of logical reasoning when there are so many parameters and points of view to consider? We do need to find ways to look at progress through a lens that allows us to see our next steps with more detachment and to consider those steps from points of view that lie outside normal human experience, but is this possible? Our current health care systems are a good example of technology that has both enhanced and detracted from our quality of life. We live longer, but some of us are living trapped in failing bodies, waiting to die. What is the alternative, and how do we arrive at it as a group, all seven billion of us?