Jaron Lanier Wants to Secularize AI

In 2010, one of the earliest posts on this blog noted an op-ed in the NY Times by Jaron Lanier titled “The First Church of Robotics.” In it, Lanier lamented the rise of the quasi-religious aspirations animating many among the Silicon Valley elite. Describing the tangle of ideas and hopes usually associated with the Singularity and/or Transhumanism, Lanier concluded, “What we are seeing is a new religion, expressed through an engineering culture.” The piece wraps up rather straightforwardly: “We serve people best when we keep our religious ideas out of our work.”

In fact, the new religion Lanier has in view has a considerably older pedigree than he imagines. Historian David Noble traced the roots of what he called the religion of technology back to the start of the last millennium. What Lanier identified was only the latest iteration of that venerable techno-religious tradition.

A couple of days ago, Edge posted a video (and transcript) of an extended discussion by Lanier, which was sparked by recent comments made by Stephen Hawking and Elon Musk about the existential threat to humanity AI may pose in the not-too-distant future. Lanier’s talk ranges impressively over a variety of related issues and registers a number of valuable insights. Consider, for instance, this passing critique of Big Data:

“I want to get to an even deeper problem, which is that there’s no way to tell where the border is between measurement and manipulation in these systems. For instance, if the theory is that you’re getting big data by observing a lot of people who make choices, and then you’re doing correlations to make suggestions to yet more people, if the preponderance of those people have grown up in the system and are responding to whatever choices it gave them, there’s not enough new data coming into it for even the most ideal or intelligent recommendation engine to do anything meaningful.

In other words, the only way for such a system to be legitimate would be for it to have an observatory that could observe in peace, not being sullied by its own recommendations. Otherwise, it simply turns into a system that measures which manipulations work, as opposed to which ones don’t work, which is very different from a virginal and empirically careful system that’s trying to tell what recommendations would work had it not intervened. That’s a pretty clear thing. What’s not clear is where the boundary is.

If you ask: is a recommendation engine like Amazon more manipulative, or more of a legitimate measurement device? There’s no way to know.”

To which he adds a few moments later, “It’s not so much a rise of evil as a rise of nonsense. It’s a mass incompetence, as opposed to Skynet from the Terminator movies. That’s what this type of AI turns into.” Big Data as banal evil, perhaps.

Lanier is certainly not the only one pointing out that Big Data doesn’t magically yield pure or objective sociological data. A host of voices have made some variation of this point in their critiques of the ideology surrounding Big Data experiments conducted by the likes of Facebook and OkCupid. The point is simple enough: observation/measurement alters the observed/measured phenomena. It’s a paradox that haunts most forms of human knowledge, perhaps especially our knowledge of ourselves, and it seems to me that we are better off abiding the paradox rather than seeking to transcend it.
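Lanier’s point about the border between measurement and manipulation can be made concrete with a toy simulation. The sketch below is hypothetical and mine, not Lanier’s: a greedy recommender that only ever observes outcomes for the items it chose to show. Its logs end up recording which of its own interventions “worked,” not what users would have preferred unprompted.

```python
import random

# Hypothetical toy model (my illustration, not from Lanier's talk): a greedy
# recommender that can only learn from outcomes on items it chose to show.
# Items it never recommends generate no data, so its logged "preferences"
# partly measure its own past manipulations rather than the users.

random.seed(42)

N_ITEMS = 10
true_appeal = [random.random() for _ in range(N_ITEMS)]  # hidden ground truth

clicks = [1] * N_ITEMS  # one pseudo-click per item as an optimistic prior...
shows = [2] * N_ITEMS   # ...out of two pseudo-impressions

for _ in range(10_000):
    # Recommend whatever the system's own data says performs best.
    scores = [c / s for c, s in zip(clicks, shows)]
    item = scores.index(max(scores))

    # Only the recommended item produces new observations.
    shows[item] += 1
    clicks[item] += random.random() < true_appeal[item]

favourite = max(range(N_ITEMS), key=lambda i: clicks[i] / shows[i])
actual_best = max(range(N_ITEMS), key=lambda i: true_appeal[i])
print(f"system's favourite: item {favourite} "
      f"(measured rate {clicks[favourite] / shows[favourite]:.2f})")
print(f"truly best:         item {actual_best} "
      f"(true appeal {true_appeal[actual_best]:.2f})")
```

The only remedy inside the sketch would be what Lanier calls an observatory: a slice of traffic shown items at random, untouched by the engine’s own recommendations, against which its logged rates could be checked.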

Lanier also scores an excellent point when he asks us to imagine two scenarios involving the possibility of 3-D printed killer drones that can be used to target individuals. In the first scenario, they are developed and deployed by terrorists; in the second they are developed and deployed by some sort of rogue AI along the lines that Musk and others have worried about. Lanier’s question is this: what difference does it make whether terrorists or rogue AI is to blame? The problem remains the same.

“The truth is that the part that causes the problem is the actuator. It’s the interface to physicality. It’s the fact that there’s this little killer drone thing that’s coming around. It’s not so much whether it’s a bunch of teenagers or terrorists behind it or some AI, or even, for that matter, if there’s enough of them, it could just be an utterly random process. The whole AI thing, in a sense, distracts us from what the real problem would be. The AI component would be only ambiguously there and of little importance.

This notion of attacking the problem on the level of some sort of autonomy algorithm, instead of on the actuator level is totally misdirected. This is where it becomes a policy issue. The sad fact is that, as a society, we have to do something to not have little killer drones proliferate. And maybe that problem will never take place anyway. What we don’t have to worry about is the AI algorithm running them, because that’s speculative. There isn’t an AI algorithm that’s good enough to do that for the time being. An equivalent problem can come about, whether or not the AI algorithm happens. In a sense, it’s a massive misdirection.”

It is a misdirection that entails an evasion of responsibility and a failure of political imagination.

All of this is well put, and there’s more along the same lines. Lanier’s chief concern, however, is to frame this as a problem of religious thinking infecting the work of technology. Early on, for instance, he says, “what I’m proposing is that if AI was a real thing, then it probably would be less of a threat to us than it is as a fake thing. What do I mean by AI being a fake thing? That it adds a layer of religious thinking to what otherwise should be a technical field.”

And toward the conclusion of his talk, Lanier elaborates:

“There is a social and psychological phenomenon that has been going on for some decades now: A core of technically proficient, digitally-minded people reject traditional religions and superstitions. They set out to come up with a better, more scientific framework. But then they re-create versions of those old religious superstitions! In the technical world these superstitions are just as confusing and just as damaging as before, and in similar ways.”

What Lanier proposes in response to this state of affairs is something like a wall of separation, not between the church and the state, but between religion and technology:

“To me, what would be ridiculous is for somebody to say, ‘Oh, you mustn’t study deep learning networks,’ or ‘you mustn’t study theorem provers,’ or whatever technique you’re interested in. Those things are incredibly interesting and incredibly useful. It’s the mythology that we have to become more self-aware of. This is analogous to saying that in traditional religion there was a lot of extremely interesting thinking, and a lot of great art. And you have to be able to kind of tease that apart and say this is the part that’s great, and this is the part that’s self-defeating. We have to do exactly the same thing with AI now.”

I’m sure Lanier would admit that this is easier said than done. In fact, he suggests as much himself a few lines later. But it’s worth asking whether the kind of sorting out that Lanier proposes is not merely challenging but perhaps unworkable. Just as mid-twentieth-century theories of secularization have fallen on hard times owing to a certain recalcitrant religiosity (or spirituality, if you prefer), we might also find that the religion of technology cannot simply be wished away or bracketed.

Paradoxically, we might also say that something like the religion of technology emerges precisely to the (incomplete) degree that the process of secularization unfolded in the West. To put this another way, imagine that there is within Western consciousness a particular yearning for transcendence. Suppose, as well, that this yearning is so ingrained that it cannot be easily eradicated. Consequently, you end up with something like a whack-a-mole effect: suppress one expression of this yearning, and it surfaces elsewhere. The yearning for transcendence never quite dissipates; it only transfigures itself. So the progress of secularization, to the degree that it successfully suppresses traditional expressions of the quest for transcendence, manages only to channel that quest into other cultural projects, chief among them techno-science. I certainly don’t mean to suggest that the entire techno-scientific project is an unmitigated expression of the religion of technology. That’s certainly not the case. But, as Noble made clear, particularly in his chapter on AI, the techno-religious impulse is hardly negligible.

One last thought, for now, arising out of my recent blogging through Frankenstein. Mary Shelley seemed to understand that one cannot easily disentangle the noble from the corrupt in human affairs: both are rooted in the same faculties and desires. Attempt to eradicate the baser elements altogether, and you may very well eliminate all that is admirable too. The heroic tendency is not safe, but neither is the attempt to tame it. I don’t think we’ve been well-served by our discarding of this essentially tragic vision in favor of a more cheery techno-utopianism.

4 thoughts on “Jaron Lanier Wants to Secularize AI”

  1. “Mary Shelley seemed to understand that one cannot easily disentangle the noble from the corrupt in human affairs: both are rooted in the same faculties and desires. Attempt to eradicate the baser elements altogether, and you may very well eliminate all that is admirable also.”

    Good piece. I really liked the above paragraph.

  2. hmmm… surely the best way to make AI is to take a head from a human and put it on a robot body or connect it to a computer. Give it fake memory or perhaps just all the stored memory of the internet.

    This has obviously been attempted and achieved in private laboratories, intelligence, military areas. It just hasn’t been done openly.

    It’s like saying the Raëlian cult (or whatever they’re called) were the first to clone humans… really?

    I think we are quite naive and gullible. I think this comes about because people like to say they are an expert in their field and therefore they “know,” or would know, if something like that had happened.

    One of the problems the ruling families have is that they cannot go against the group consciousness of mankind without severe repercussions. So when they would like to introduce something that they have already developed to society, they get a couple of their stooges, like Musk, to start a debate, which obviously polarises us.

    Thus bringing about the introduction of said technology 5 or 10 years down the road.

    Now to me that’s all quite easy to spot, but the interesting bit is whether we will fall for it, or whether we can even bring the whole thing back round to our advantage.

    What we really need is for humanity to realise that we all make mistakes in business and personal life. We can make things right: stop keeping things secret, talk openly, and refuse to be manipulated by those who hold power over anyone who feels shame.

    There is no shame, there is only putting things right. Perhaps a bit off topic, but anyway I feel better ;)

  3. Feels to me like there is a too-Freudian quality to your argument (return of the repressed), which naturalizes religiosity (“there is within Western consciousness a particular yearning for transcendence”). On the one hand, this does not go far enough: the yearning for transcendence is immanent to materiality (Derrida and Nancy [there is no mere materiality] > OOO). On the other hand, this requires an uncritical version of religion (bookish beliefs in anthropomorphic or at least animistic deities as have characterized ‘Western’ness). Might be more interesting to negotiate Lanier via Latour’s _On the Modern Cult of the Factish Gods_, which sees religion as the obverse of technology: the project of making the absent (materially) present.

    More politically, Lanier’s call for secularization is a classically liberal pragmatist approach. The merit of the loud-mouths running tech companies at the moment, thinking (their) money equals (their) insight(fulness), is that they are explicitating, in an accelerationist way, that their project is the constitution of a religion, or, more accurately, a church with an elite priesthood in which the rest of us must have blind faith and pay our dues or go to hell.
