Jaron Lanier Wants to Secularize AI

In 2010, one of the earliest posts on this blog noted an op-ed in the NY Times by Jaron Lanier titled “The First Church of Robotics.” In it, Lanier lamented the rise of quasi-religious aspirations animating many among the Silicon Valley elite. Describing the tangle of ideas and hopes usually associated with the Singularity and/or Transhumanism, Lanier concluded, “What we are seeing is a new religion, expressed through an engineering culture.” The piece wrapped up rather straightforwardly: “We serve people best when we keep our religious ideas out of our work.”

In fact, the new religion Lanier has in view has a considerably older pedigree than he imagines. Historian David Noble traced the roots of what he called the religion of technology back to the start of the last millennium. What Lanier identified was only the latest iteration of that venerable techno-religious tradition.

A couple of days ago, Edge posted a video (and transcript) of an extended discussion by Lanier, which was sparked by recent comments made by Stephen Hawking and Elon Musk about the existential threat to humanity AI may pose in the not-too-distant future. Lanier’s talk ranges impressively over a variety of related issues and registers a number of valuable insights. Consider, for instance, this passing critique of Big Data:

“I want to get to an even deeper problem, which is that there’s no way to tell where the border is between measurement and manipulation in these systems. For instance, if the theory is that you’re getting big data by observing a lot of people who make choices, and then you’re doing correlations to make suggestions to yet more people, if the preponderance of those people have grown up in the system and are responding to whatever choices it gave them, there’s not enough new data coming into it for even the most ideal or intelligent recommendation engine to do anything meaningful.

In other words, the only way for such a system to be legitimate would be for it to have an observatory that could observe in peace, not being sullied by its own recommendations. Otherwise, it simply turns into a system that measures which manipulations work, as opposed to which ones don’t work, which is very different from a virginal and empirically careful system that’s trying to tell what recommendations would work had it not intervened. That’s a pretty clear thing. What’s not clear is where the boundary is.

If you ask: is a recommendation engine like Amazon more manipulative, or more of a legitimate measurement device? There’s no way to know.”

To which he adds a few moments later, “It’s not so much a rise of evil as a rise of nonsense. It’s a mass incompetence, as opposed to Skynet from the Terminator movies. That’s what this type of AI turns into.” Big Data as banal evil, perhaps.

Lanier is certainly not the only one pointing out that Big Data doesn’t magically yield pure or objective sociological data. A host of voices have made some variation of this point in their critiques of the ideology surrounding the Big Data experiments conducted by the likes of Facebook and OkCupid. The point is simple enough: observation/measurement alters the observed/measured phenomena. It’s a paradox that haunts most forms of human knowledge, perhaps especially our knowledge of ourselves, and it seems to me that we are better off abiding the paradox rather than seeking to transcend it.
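For readers who want the feedback problem made concrete, here is a toy simulation. It is my own illustration, not Lanier’s, and every name and number in it is an assumption: a tiny population of users with fixed tastes, a “recommender” that estimates popularity from observed clicks, and a knob for how often users simply take whatever the system pushes. With the knob at zero you get Lanier’s undisturbed “observatory”; turn it up and the system increasingly measures its own manipulations rather than anyone’s preferences.

```python
import numpy as np

rng = np.random.default_rng(0)

n_items = 10
# Users' underlying tastes, which the system never sees directly.
true_pref = rng.dirichlet(np.ones(n_items))


def estimate_popularity(follow_rate: float, rounds: int = 200, users: int = 500) -> np.ndarray:
    """Run a toy recommendation loop.

    follow_rate is the probability that a user simply takes the item the
    system currently pushes instead of choosing from their own tastes.
    """
    counts = np.ones(n_items)              # smoothed click counts
    estimate = counts / counts.sum()
    for _ in range(rounds):
        pushed = int(np.argmax(estimate))  # the item the system recommends
        for _ in range(users):
            if rng.random() < follow_rate:
                choice = pushed            # choice shaped by the recommendation itself
            else:
                choice = rng.choice(n_items, p=true_pref)  # organic choice
            counts[choice] += 1
        estimate = counts / counts.sum()   # "measurement" contaminated by the push
    return estimate


observatory = estimate_popularity(follow_rate=0.0)    # observing in peace
feedback_loop = estimate_popularity(follow_rate=0.7)  # most users follow the push

print("true preferences :", np.round(true_pref, 2))
print("clean estimate   :", np.round(observatory, 2))
print("feedback estimate:", np.round(feedback_loop, 2))
```

Run it and the clean estimate tracks the true preferences, while the feedback estimate over-weights whichever item the system happened to push early on; nothing in the data itself tells you which of the two regimes you are in, which is Lanier’s point about not knowing where the boundary lies.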

Lanier also scores an excellent point when he asks us to imagine two scenarios involving the possibility of 3-D printed killer drones that can be used to target individuals. In the first scenario, they are developed and deployed by terrorists; in the second they are developed and deployed by some sort of rogue AI along the lines that Musk and others have worried about. Lanier’s question is this: what difference does it make whether terrorists or rogue AI is to blame? The problem remains the same.

“The truth is that the part that causes the problem is the actuator. It’s the interface to physicality. It’s the fact that there’s this little killer drone thing that’s coming around. It’s not so much whether it’s a bunch of teenagers or terrorists behind it or some AI, or even, for that matter, if there’s enough of them, it could just be an utterly random process. The whole AI thing, in a sense, distracts us from what the real problem would be. The AI component would be only ambiguously there and of little importance.

This notion of attacking the problem on the level of some sort of autonomy algorithm, instead of on the actuator level is totally misdirected. This is where it becomes a policy issue. The sad fact is that, as a society, we have to do something to not have little killer drones proliferate. And maybe that problem will never take place anyway. What we don’t have to worry about is the AI algorithm running them, because that’s speculative. There isn’t an AI algorithm that’s good enough to do that for the time being. An equivalent problem can come about, whether or not the AI algorithm happens. In a sense, it’s a massive misdirection.”

It is a misdirection that entails an evasion of responsibility and a failure of political imagination.

All of this is well put, and there’s more along the same lines. Lanier’s chief concern, however, is to frame this as a problem of religious thinking infecting the work of technology. Early on, for instance, he says, “what I’m proposing is that if AI was a real thing, then it probably would be less of a threat to us than it is as a fake thing. What do I mean by AI being a fake thing? That it adds a layer of religious thinking to what otherwise should be a technical field.”

And toward the conclusion of his talk, Lanier elaborates:

“There is a social and psychological phenomenon that has been going on for some decades now:  A core of technically proficient, digitally-minded people reject traditional religions and superstitions. They set out to come up with a better, more scientific framework. But then they re-create versions of those old religious superstitions! In the technical world these superstitions are just as confusing and just as damaging as before, and in similar ways.”

What Lanier proposes in response to this state of affairs is something like a wall of separation, not between the church and the state, but between religion and technology:

“To me, what would be ridiculous is for somebody to say, ‘Oh, you mustn’t study deep learning networks,’ or ‘you mustn’t study theorem provers,’ or whatever technique you’re interested in. Those things are incredibly interesting and incredibly useful. It’s the mythology that we have to become more self-aware of. This is analogous to saying that in traditional religion there was a lot of extremely interesting thinking, and a lot of great art. And you have to be able to kind of tease that apart and say this is the part that’s great, and this is the part that’s self-defeating. We have to do it exactly the same thing with AI now.”

I’m sure Lanier would admit that this is easier said than done. In fact, he suggests as much himself a few lines later. But it’s worth asking whether the kind of sorting out that Lanier proposes is not merely challenging but perhaps unworkable. Just as mid-twentieth-century theories of secularization have fallen on hard times owing to a certain recalcitrant religiosity (or spirituality, if you prefer), we might also find that the religion of technology cannot simply be wished away or bracketed.

Paradoxically, we might also say that something like the religion of technology emerges precisely to the (incomplete) degree that the process of secularization has unfolded in the West. To put this another way, imagine that there is within Western consciousness a particular yearning for transcendence. Suppose, as well, that this yearning is so ingrained that it cannot be easily eradicated. Consequently, you end up with something like a whack-a-mole effect. Suppress one expression of this yearning, and it surfaces elsewhere. The yearning for transcendence never quite dissipates; it only transfigures itself. So the progress of secularization, to the degree that it successfully suppresses traditional expressions of the quest for transcendence, manages only to channel that quest into other cultural projects, namely techno-science. I certainly don’t mean to suggest that the entire techno-scientific project is an unmitigated expression of the religion of technology. That’s not the case. But, as Noble made clear, particularly in his chapter on AI, the techno-religious impulse is hardly negligible.

One last thought, for now, arising out of my recent blogging through Frankenstein. Mary Shelley seemed to understand that one cannot easily disentangle the noble from the corrupt in human affairs: both are rooted in the same faculties and desires. Attempt to eradicate the baser elements altogether, and you may very well eliminate all that is admirable too. The heroic tendency is not safe, but neither is the attempt to tame it. I don’t think we’ve been well-served by our discarding of this essentially tragic vision in favor of a more cheery techno-utopianism.

Are Human Enhancement and AI Incompatible?

A few days ago, in a post featuring a series of links to stories about new and emerging technologies, I included a link to a review of Nick Bostrom’s new book, Superintelligence: Paths, Dangers, Strategies. Not long afterwards, I came across an essay adapted from Bostrom’s book on Slate’s “Future Tense” blog. The excerpt is given the cheerfully straightforward title, “You Should Be Terrified of Super Intelligent Machines.”

I’m not sure that Bostrom himself would put it quite like that. I’ve long thought of Bostrom as one of the more enthusiastic proponents of a posthumanist vision of the future. Admittedly, I’ve not read a great deal of his work (including this latest book). I first came across Bostrom’s name in Cary Wolfe’s What Is Posthumanism?, which led me to Bostrom’s article, “A History of Transhumanist Thought.”

For his part, Wolfe sought to articulate a more persistently posthumanist vision for posthumanism, one which dispensed with humanist assumptions about human nature altogether. In Wolfe’s view, Bostrom was guilty of building his transhumanist vision on a thoroughly humanist understanding of the human being. The humanism in view here, it’s worth clarifying, is that which we ordinarily associate with the Renaissance or the Enlightenment, one which highlights autonomous individuality, agency, and rationality. It is also one which assumes a Platonic or Cartesian mind/body dualism. Wolfe, like N. Katherine Hayles before him, finds this to be misguided and misleading, but I digress.

Whether Bostrom would’ve chosen such an alarmist title or not, his piece does urge us to lay aside the facile assumption that super-intelligent machines will be super-intelligent in a predictably human way. This is an anthropomorphizing fallacy. Consequently, we should consider the possibility that super-intelligent machines will pursue goals that may, as an unintended side-effect, lead to human extinction. I suspect that in the later parts of his book, Bostrom might have a few suggestions about how we might escape such a fate. I also suspect that none of these suggestions include the prospect of halting or limiting the work being done to create super-intelligent machines. In fact, judging from the chapter titles and sub-titles, it seems that the answer Bostrom advocates involves figuring out how to instill appropriate values in super-intelligent machines. This brings us back to the line of criticism articulated by Wolfe and Hayles: the traditionally humanist project of rational control and mastery is still the underlying reality.

It does seem reasonable for Bostrom, who is quite enthusiastic about the possibilities of human enhancement, to be a bit wary about the creation of super-intelligent machines. It would be unfortunate indeed if, having finally figured out how to download our consciousness or perfect a cyborg platform for it, a clever machine of our making later came around, pursuing some utterly trivial goal, and decided, without a hint of malice, that it needed to eradicate these post-human humans as a step toward the fulfillment of its task. Unfortunate, and nihilistically comic.

It is interesting to consider that these two goals we rather blithely pursue–human enhancement and artificial intelligence–may ultimately be incompatible. Of course, that is a speculative consideration, and, to some degree, so is the prospect of ever achieving either of those two goals, at least as their most ardent proponents envision their fulfillment. But let us consider it for just a moment anyway for what it might tell us about some contemporary versions of the posthumanist hope.

Years ago, C.S. Lewis famously warned that the human pursuit of mastery over Nature would eventually amount to the human pursuit of mastery over Humanity, and what this would really mean is the mastery of some humans over others. This argument is all the more compelling now, some 70 or so years after Lewis made it in The Abolition of Man. It would seem, though, that an updated version of that argument would need to include the further possibility that the tools we develop to gain mastery over nature and then humanity might finally destroy us, whatever form the “us” at that unforeseeable juncture happens to take. Perhaps this is the tacit anxiety animating Bostrom’s new work.

And this brings us back, once again, to the kind of humanism at the heart of posthumanism. The posthumanist vision that banks on some sort of eternal consciousness–the same posthumanist vision that leads Ray Kurzweil to take 150 vitamins a day–that posthumanist vision is still the vision of someone who intends to live forever in some clearly self-identifiable form. It is, in this respect, a thoroughly Western religious project insofar as it envisions and longs for the immortality of the individuated self. We might even go so far as to call it, in an obviously provocative move, a Christian heresy.

Finally, our potentially incompatible technical aspirations reveal something of the irrationality, or a-rationality if you prefer, at the heart of our most rational project. Technology and technical systems assume rationality in their construction and their operation. Thinking about their potential risks and trying to prevent and mitigate them is also a supremely rational undertaking. But at the heart of all of this rational work there is a colossal unspoken absence: a black hole of knowledge that begins with the simple fact of our inability to foresee the full ramifications of anything we do or make, and that sucks into its darkness our ability to expertly anticipate, plan, and manage with anything like the confident certainty we project.

It is one thing to live with this relative risk and uncertainty when we are talking about simple tools and machines (hammers, bicycles, etc.). It is another thing when we are talking about complex technical systems (automotive transportation, power grids, etc.). It is altogether something else when we are talking about technical systems that may fundamentally alter our humanity or else eventuate in its annihilation. The fact that we don’t even know how seriously to take these potential threats, that we cannot comfortably distinguish between what is still science fiction and what will, in fact, materialize in our lifetimes, that’s a symptom of the problem, too.

I keep coming back to the realization that our thinking about technology is often inadequate or ineffectual because it is starting from the wrong place; or, to put it another way, it is already proceeding from assumptions grounded in the dynamics of technology and technical systems, so it bends back toward the technological solution. If we already tacitly value efficiency, for example, if efficiency is already an assumed good that no longer needs to be argued for, then we will tend to pursue it by whatever possible means under all possible circumstances. Whenever new technologies appear, we will judge them in light of this governing preference for efficiency. If the new technology affords us a more efficient way of doing something, we will tend to embrace it.

But the question remains, why is efficiency a value that is so pervasively taken for granted? If the answer seems commonsensical, then, I’d humbly suggest that we need to examine it all the more critically. Perhaps we will find that we value efficiency because this virtue native to the working of technical and instrumental systems has spilled over into what had previously been non-technical and non-instrumental realms of human experience. Our thinking is thus already shaped (to put it in the most neutral way possible) by the very technical systems we are trying to think about.

This is but one example of the dynamic. Our ability to think clearly about technology will depend in large measure on our ability to extricate our thinking from the criteria and logic native to technological systems. This is, I fully realize, a difficult task. I would never claim that I’ve achieved this clarity of thought myself, but I do believe that our thinking about technology depends on it.

There’s a lot more to be said, but I’ll leave it there for now. Your thoughts, as always, are welcome.

Robotic Zeitgeist

Robotics and AI are in the air. A sampling:

“Bot with boyish personality wins biggest Turing test”: “Eugene Goostman, a chatbot with the personality of a 13-year-old boy, won the biggest Turing test ever staged, on 23 June, the 100th anniversary of the birth of Alan Turing.”

“Time To Apply The First Law Of Robotics To Our Smartphones”: “We imagined that robots would be designed so that they could never hurt a human being. These robots have no such commitments. These robots hurt us every day.”

“Robot Hand Beats You at Rock, Paper, Scissors 100% Of The Time”: “This robot hand will play a game of rock, paper, scissors with you. Sounds like fun, right? Not so much, because this particular robot wins every. Single. Time.”

Next, two on the same story coming out of Google’s research division:

“I See Cats”: “Google researchers connected 16,000 computer cores together into a huge neural net (like the network of neurons in your brain) and then used a software program to ask what it (the neural net) “saw” in a pool of 1 million pictures downloaded randomly from the internet.”

“The Triumph of Artificial Intelligence! 16,000 Processors Can Identify a Cat in a YouTube Video Sometimes”: “Perhaps this is not precisely what Turing had in mind.”

Much of this talk about AI has coincided with what would have been Turing’s 100th birthday. Most of it has celebrated the brilliant mathematician and lamented the tragic nature of his life and death. This next piece, however, takes a critical look at the course of AI (or better, the ideology of AI) since Turing:

“The Trouble with the Turing Test”: “But these are not our only alternatives; there is a third way, the way of agnosticism, which means accepting the fact that we have not yet achieved artificial intelligence, and have no idea if we ever will.”

And on a slightly different, post-humanist note (via Evan Selinger):

The International Journal of Machine Consciousness has devoted an entire issue to “Mind Uploading.”

There you go; enough to keep you thinking today.