Are Human Enhancement and AI Incompatible?

A few days ago, in a post featuring a series of links to stories about new and emerging technologies, I included a link to a review of Nick Bostrom’s new book, Superintelligence: Paths, Dangers, Strategies. Not long afterwards, I came across an essay adapted from Bostrom’s book on Slate’s “Future Tense” blog. The excerpt is given the cheerfully straightforward title, “You Should Be Terrified of Super Intelligent Machines.”

I’m not sure that Bostrom himself would put it quite like that. I’ve long thought of Bostrom as one of the more enthusiastic proponents of a posthumanist vision of the future. Admittedly, I’ve not read a great deal of his work (including this latest book). I first came across Bostrom’s name in Cary Wolfe’s What Is Posthumanism?, which led me to Bostrom’s article, “A History of Transhumanist Thought.”

For his part, Wolfe sought to articulate a more persistently posthumanist vision for posthumanism, one which dispensed with humanist assumptions about human nature altogether. In Wolfe’s view, Bostrom was guilty of building his transhumanist vision on a thoroughly humanist understanding of the human being. The humanism in view here, it’s worth clarifying, is that which we ordinarily associate with the Renaissance or the Enlightenment, one which highlights autonomous individuality, agency, and rationality. It is also one which assumes a Platonic or Cartesian mind/body dualism. Wolfe, like N. Katherine Hayles before him, finds this to be misguided and misleading, but I digress.

Whether Bostrom would’ve chosen such an alarmist title or not, his piece does urge us to lay aside the facile assumption that super-intelligent machines will be super-intelligent in a predictably human way. This is an anthropomorphizing fallacy. Consequently, we should consider the possibility that super-intelligent machines will pursue goals that may, as an unintended side-effect, lead to human extinction. I suspect that in the later parts of his book, Bostrom might have a few suggestions about how we might escape such a fate. I also suspect that none of these suggestions include the prospect of halting or limiting the work being done to create super-intelligent machines. In fact, judging from the chapter titles and sub-titles, it seems that the answer Bostrom advocates involves figuring out how to instill appropriate values in super-intelligent machines. This brings us back to the line of criticism articulated by Wolfe and Hayles: the traditionally humanist project of rational control and mastery is still the underlying reality.

It does seem reasonable for Bostrom, who is quite enthusiastic about the possibilities of human enhancement, to be a bit wary about the creation of super-intelligent machines. It would be unfortunate indeed if, having finally figured out how to download our consciousness or perfect a cyborg platform for it, a clever machine of our making later came around, pursuing some utterly trivial goal, and decided, without a hint of malice, that it needed to eradicate these post-human humans as a step toward the fulfillment of its task. Unfortunate, and nihilistically comic.

It is interesting to consider that these two goals we rather blithely pursue–human enhancement and artificial intelligence–may ultimately be incompatible. Of course, that is a speculative consideration, and, to some degree, so is the prospect of ever achieving either of those two goals, at least as their most ardent proponents envision their fulfillment. But let us consider it for just a moment anyway for what it might tell us about some contemporary versions of the posthumanist hope.

Years ago, C.S. Lewis famously warned that the human pursuit of mastery over Nature would eventually amount to the human pursuit of mastery over Humanity, and what this would really mean is the mastery of some humans over others. This argument is all the more compelling now, some 70 or so years after Lewis made it in The Abolition of Man. It would seem, though, that an updated version of that argument would need to include the further possibility that the tools we develop to gain mastery over nature and then humanity might finally destroy us, whatever form the “us” at that unforeseeable juncture happens to take. Perhaps this is the tacit anxiety animating Bostrom’s new work.

And this brings us back, once again, to the kind of humanism at the heart of posthumanism. The posthumanist vision that banks on some sort of eternal consciousness–the same posthumanist vision that leads Ray Kurzweil to take 150 vitamins a day–that posthumanist vision is still the vision of someone who intends to live forever in some clearly self-identifiable form. It is, in this respect, a thoroughly Western religious project insofar as it envisions and longs for the immortality of the individuated self. We might even go so far as to call it, in an obviously provocative move, a Christian heresy.

Finally, our potentially incompatible technical aspirations reveal something of the irrationality, or a-rationality if you prefer, at the heart of our most rational project. Technology and technical systems assume rationality in their construction and their operation. Thinking about their potential risks and trying to prevent and mitigate them is also a supremely rational undertaking. But at the heart of all this rational work there is a colossal unspoken absence: a black hole of knowledge that, beginning with the simple fact of our inability to foresee the full ramifications of anything we do or make, sucks into its darkness our ability to expertly anticipate and plan and manage with anything like the confident certainty we project.

It is one thing to live with this relative risk and uncertainty when we are talking about simple tools and machines (hammers, bicycles, etc.). It is another thing when we are talking about complex technical systems (automotive transportation, power grids, etc.). It is altogether something else when we are talking about technical systems that may fundamentally alter our humanity or else eventuate in its annihilation. The fact that we don’t even know how seriously to take these potential threats, that we cannot comfortably distinguish between what is still science fiction and what will, in fact, materialize in our lifetimes, that’s a symptom of the problem, too.

I keep coming back to the realization that our thinking about technology is often inadequate or ineffectual because it is starting from the wrong place; or, to put it another way, it is already proceeding from assumptions grounded in the dynamics of technology and technical systems, so it bends back toward the technological solution. If we already tacitly value efficiency, for example, if efficiency is already an assumed good that no longer needs to be argued for, then we will tend to pursue it by whatever possible means under all possible circumstances. Whenever new technologies appear, we will judge them in light of this governing preference for efficiency. If the new technology affords us a more efficient way of doing something, we will tend to embrace it.

But the question remains, why is efficiency a value that is so pervasively taken for granted? If the answer seems commonsensical, then, I’d humbly suggest that we need to examine it all the more critically. Perhaps we will find that we value efficiency because this virtue native to the working of technical and instrumental systems has spilled over into what had previously been non-technical and non-instrumental realms of human experience. Our thinking is thus already shaped (to put it in the most neutral way possible) by the very technical systems we are trying to think about.

This is but one example of the dynamic. Our ability to think clearly about technology will depend in large measure on our ability to extricate our thinking from the criteria and logic native to technological systems. This is, I fully realize, a difficult task. I would never claim that I’ve achieved this clarity of thought myself, but I do believe that our thinking about technology depends on it.

There’s a lot more to be said, but I’ll leave it there for now. Your thoughts, as always, are welcome.

The Transhumanist Promise: Happiness You Cannot Refuse

Transhumanism, a diverse movement aimed at transcending our present human limitations, continues to gravitate away from the fringes of public discussion toward the mainstream. It is an idea that, to many people, is starting to sound less like a wildly unrealistic science-fiction concept and more like a vaguely plausible future. I imagine that as the prospect of a transhumanist future begins to take on the air of plausibility, it will both exhilarate and mortify in roughly equal measure.

Recently, Jamie Bartlett wrote a short profile of the transhumanist project near the conclusion of which he observed, “Sometimes Tranhumanism [sic] does feel a bit like modern religion for an individualistic, technology-obsessed age.” As I read that line, I thought to myself, “Sometimes?”

To be fair, many transhumanists would be quick to flash their secular bona fides, but it is not too much of a stretch to say that the transhumanist movement traffics in the religious, quasi-religious, and mystical. Peruse, for example, the list of speakers at last year’s Global Future 2045 conference. The year 2045, of course, is the predicted dawn of the Singularity, the point at which machines and humans become practically indistinguishable.

In its aspirations for transcendence of bodily limitations, its pursuit of immortality, and its promise of perpetual well-being and the elimination of suffering, Transhumanism undeniably incorporates traditionally religious ambitions and desires. It is, in other words, functionally analogous to traditional religions, particularly the Western, monotheistic faiths. If you’re unfamiliar with the movement and are wondering whether I might have exaggerated its claims, I invite you to watch the following video introduction to Transhumanism put together by the British Institute of Posthuman Studies (BIOPS):

All of this amounts to a particularly robust instance of what the historian David Noble called “the religion of technology.” Noble’s work highlighted the long-standing entanglement of religious aspirations with the development of the Western technological project. You can read more about the religion of technology thesis in this earlier post. Here I will only note that the manifestation of the religion of technology apparent in the Transhumanist movement betrays a distinctly gnostic pedigree. Transhumanist rhetoric is laced with a palpable contempt for humanity in its actual state, and that contempt is directed with striking animus at the human body. Referring to the human body derisively as a “meat sack” or “meat bag” is a common trope among the more excitable transhumanists. As Katherine Hayles has put it, in Transhumanism bodies are “fashion accessories rather than the ground of being.”

In any case, the BIOPS video not too subtly suggests that Christianity has been one of the persistent distractions keeping us from viewing aging as we should, not as a “natural” aspect of the human condition, but as a disease to be combatted. This framing may convey an anti-religious posture, but what emerges on balance is not a dismissal of the religious aims, but rather the claim that they may be better realized through other, more effective means. The Posthumanist promise, then, is the promise of what the political philosopher Eric Voegelin called the immanentized eschaton. The traditional religious category for this is idolatry with a healthy sprinkling of classical Greek hubris for good measure.

After discussing “super-longevity” and “super-intelligence,” the BIOPS video goes on to discuss “super well-being.” This part of the video begins at the seven-minute mark, and it expresses some of the more troubling aspects of the Transhumanist vision, at least as embraced by this particular group. This third prong of the Transhumanist project seeks to “phase out suffering.” The segment begins by asking viewers to imagine that as parents they had the opportunity to opt their child out of “chronic depression,” a “low pain threshold,” and “anxiety.” Who would choose these for their own children? Of course, the implicit answer is that no well-meaning, responsible parent would. We all remember Gattaca, right?

A robust challenge to the Transhumanist vision is well beyond the scope of this blog post, but it is a challenge that needs to be carefully and thoughtfully articulated. For the present, I’ll leave you with a few observations.

First, the nature of the risks posed by the technologies Posthumanists are banking on is not that of a single, clearly destructive cataclysm. Rather, the risk is incremental and never obviously destructive. It takes on the character of the temptation experienced by the main character, Pahom, in Leo Tolstoy’s short story, “How Much Land Does a Man Need?” If you’ve never read the story, you should. In it, Pahom is presented with the temptation to acquire more and more land, but Tolstoy never paints him as a greedy Ebenezer Scrooge type. Instead, at each point of temptation, it appears perfectly rational, safe, and good to seize the opportunity to acquire more land. The end of all of these individual choices, however, is finally destructive.

Secondly, these risks are a good illustration of the ethical challenges posed by innovation that I articulated yesterday in my exchange with Adam Thierer. These risks would be socially distributed, but unevenly and possibly even unjustly so. In other words, technologies of radical human enhancement (we’ll allow that loaded descriptor to slide for now) would carry consequences for both those who chose such enhancements and also for those who did not or could not. This problem is not, however, unique to these sorts of technologies. We generally lack adequate mechanisms for adjudicating the socially distributed risks of technological innovation. (To be clear, I don’t pretend to have any solutions to this problem.) We tolerate this because we generally tend to assume that, on balance, the advance of technology is a tide that lifts all ships even if not evenly so. Additionally, given our anthropological and political assumptions, we have a hard time imagining a notion of the common good that might curtail individual freedom of action.

Lastly, the Transhumanist vision assumes a certain understanding of happiness when it speaks of the promise of “super well-being.” This vision seems to be narrowly equated with the absence of suffering. But it is not altogether obvious that this is the only or best way of understanding the perennially elusive state of affairs that we call happiness. The committed Transhumanist seems to lack the imagination to conceive of alternative pursuits of happiness, particularly those that encompass and incorporate certain forms of suffering and tribulation. But that will not matter.

In the Transhumanist future one path to happiness will be prescribed. It will be objected that this path will be offered, not prescribed, but, of course, this is disingenuous because in this vision the technologies of enhancement confer not only happiness narrowly defined but power as well. As Gary Marcus and Christof Koch recently noted in their discussion of brain implants, “The augmented among us—those who are willing to avail themselves of the benefits of brain prosthetics and to live with the attendant risks—will outperform others in the everyday contest for jobs and mates, in science, on the athletic field and in armed conflict.” Those who opt out will be choosing to be disadvantaged and marginalized. This may be a choice, but not one without a pernicious strain of tacit coercion.

Years ago, just over seventy years ago in fact, C.S. Lewis anticipated what he called the abolition of man. The abolition of man would come about when science and technology found that the last frontier in the conquest of nature was humanity itself. “Human nature will be the last part of Nature to surrender to Man,” Lewis warned, and when it did a caste of Conditioners would be in the position to “cut out posterity in what shape they please.” Humanity, in other words, would become the unwilling subject of these Last Men and their final decisive exercise of the will to power over nature, the power to shape humanity in their own image.

Even as I write this, there is part of me that thinks this all sounds so outlandish, and that even to warn of it is an unseemly alarmism. After all, while some of the touted technologies appear to be within reach, many others seem to be well out of reach, perhaps forever so. But, then, I consider that many terrible things once seemed impossible and it may have been their seeming impossibility that abetted their eventual realization. Or, from a more positive perspective, perhaps it is sometimes the articulation of the seemingly far-fetched dangers and risks that ultimately helps us steer clear of them.



Obama Talks With A Computer

[Correction:  Mr. Scocca informs me via email that the dialogue in his piece was an actual transcript of a session with Eliza.  So nothing “mock” or “contrived” about it.  All the more interesting, read on.]

Over at Slate, Tom Scocca has staged a mock dialogue with Eliza to good, even if somewhat contrived, effect.  Eliza, in case you’re wondering, is a computer program designed in the 1960s to simulate human interaction based on certain scripts (such as DOCTOR, which simulates a particular style of psychotherapy).
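The trick behind Eliza-style scripts is modest: keyword-spotting and template substitution, with no understanding of the conversation at all. A minimal sketch of the idea in Python (the rules below are hypothetical stand-ins for illustration, not Weizenbaum’s original DOCTOR script, which used a richer keyword-ranking and pronoun-transformation mechanism):

```python
import random
import re

# Ordered list of (pattern, response templates). More specific
# patterns come first; the catch-all at the end guarantees a reply.
RULES = [
    (r"i am having a hard time (.+)",
     ["How long have you been having a hard time {0}?"]),
    (r"it is because (.+)",
     ["Do any other reasons not come to mind?"]),
    (r"i am (.+)",
     ["Why do you say you are {0}?", "How long have you been {0}?"]),
    (r"(.*)",
     ["Please go on.", "Can you elaborate on that?"]),
]

def respond(utterance: str) -> str:
    """Return a canned reflection of the user's input.

    Lowercases the input, strips trailing punctuation, then fills
    the first matching template with the captured fragment.
    """
    text = utterance.lower().strip(".!?")
    for pattern, templates in RULES:
        match = re.match(pattern, text)
        if match:
            return random.choice(templates).format(*match.groups())
    return "Please go on."

if __name__ == "__main__":
    print(respond("I am having a hard time telling the bots from the humans."))
```

Note that the program never models what the bots or the humans are; it simply mirrors the user’s own words back inside a template, which is precisely why the exchange below reads as plausibly as it does.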

About halfway through the dialogue we get these few lines, which more or less encapsulate Scocca’s main concern:

TS: On Twitter, I am having a hard time telling the bots from the humans.

Eliza: How long have you been having a hard time telling the bots from the humans?

TS: Weeks and weeks now.

Eliza: Please go on.

TS: It is not because the bots are fluent or facile.

Eliza: Do any other reasons not come to mind?

TS: It is because so much of the human conversation has come down to bot level.

And a few lines later he adds,

TS: That’s not really someone who likes my blog post. It’s a pharma ad-bot that presumably grabs anything with “Obama” in it.

Eliza: Why do you say your blog post its a pharma adbot that presumably grabs anything with obama in it?

TS: Well, of course, by now I have been trained to put “Obama” into the headlines myself, for SEO purposes.

Eliza: Can you elaborate on that?

TS: My actual transactions with human readers are so machine-mediated, it’s necessary for me to keep machine-logic in mind while I’m writing.

I’m taking these observations as rather useful illustrations of how the language (or, logic) of a digital media platform shapes our communication to fit within its own limitations.  Borrowing linguist Roman Jakobson’s maxim regarding languages, I suggested a few posts down that, “Languages of digital media platforms differ essentially in what they cannot (or, encourage us not to) convey and not in what they may convey.”  In other words, we shape our communication to fit the constraints of the medium.  The follow-up question then becomes, “Do we adapt to these limitations and carry them over into other fields of discourse?”  Scocca provocatively suggests that if a computer ends up passing the Turing Test, it will not be because of an advance in computer language capability, but because of a retrogression in the way humans use language.

Keep in mind that you don’t have to be a professional writer working for a popular web magazine to experience machine-mediated communication.  In fact, my guess is that a great deal, perhaps the majority, of our interaction with other people is routinely machine-mediated, and in this sense we are already living in a post-human age.

The mock dialogue also suggests yet another adaptation of Jakobson’s principle, this time focused on the economic conditions at play within a digital media platform.  Tracking more closely with Jakobson’s original formulation, this adaptation might go something like this: the languages of digital media platforms differ essentially in what their economic environment dictates they must convey.  In Scocca’s case, he has been trained to mention Obama for the purposes of search engine optimization, and this, of course, to drive traffic to his blog because traffic generates advertising revenue.  Not only do the constraints of the platform shape the content of communication, the logic of the wider economic system disciplines the writing as well.

None of this is, strictly speaking, necessary.  It is quite possible to communicate creatively, even aesthetically, within the constraints of a given digital media platform.  Any medium imposes certain constraints; what we do within those constraints remains the question.  Some media, it is true, impose more stringent constraints on human communication than others; the telegraph, for example, comes to mind.  But the wonder of human creativity is that it finds ways of flourishing within constraints; within limitations we manage to be ingenious, creative, humorous, artistic, etc.  Artistry, humor, creativity, and all the rest wouldn’t even be possible without certain constraints to work with and against.

Yet aspiring to robust, playful, aesthetic, and meaningful communication is the path of greater resistance.  It is easier to fall into thoughtless and artless patterns of communication that uncritically bow to the constraints of a medium thus reducing and inhibiting the possibilities of human expression.  Without any studies or statistics to prove the point, it seems that the path of least resistance is our default for digital communication.  A little intentionality and subversiveness, however, may help us flourish as fully human beings in our computer-mediated, post-human times.

Besides, it would be much more interesting if a computer passed the Turing Test without any concessions on our part.

Oh, and sorry for the title, just trying to optimize my search engine results.

Jaron Lanier on the Religion of Singularity

From Jaron Lanier’s op-ed, “The First Church of Robotics,” in today’s NY Times:

WHEN we think of computers as inert, passive tools instead of people, we are rewarded with a clearer, less ideological view of what is going on — with the machines and with ourselves. So, why, aside from the theatrical appeal to consumers and reporters, must engineering results so often be presented in Frankensteinian light?

The answer is simply that computer scientists are human, and are as terrified by the human condition as anyone else. We, the technical elite, seek some way of thinking that gives us an answer to death, for instance. This helps explain the allure of a place like the Singularity University. The influential Silicon Valley institution preaches a story that goes like this: one day in the not-so-distant future, the Internet will suddenly coalesce into a super-intelligent A.I., infinitely smarter than any of us individually and all of us combined; it will become alive in the blink of an eye, and take over the world before humans even realize what’s happening.

Some think the newly sentient Internet would then choose to kill us; others think it would be generous and digitize us the way Google is digitizing old books, so that we can live forever as algorithms inside the global brain. Yes, this sounds like many different science fiction movies. Yes, it sounds nutty when stated so bluntly. But these are ideas with tremendous currency in Silicon Valley; these are guiding principles, not just amusements, for many of the most influential technologists.

. . . All thoughts about consciousness, souls and the like are bound up equally in faith, which suggests something remarkable: What we are seeing is a new religion, expressed through an engineering culture . . .

Katherine Hayles on Posthumanism

Hayles describes her project in How We Became Posthuman: Virtual Bodies in Cybernetics, Literature, and Informatics as an intervention.  “I view the present moment,” she explains in the first chapter, “as a critical juncture when interventions might be made to keep disembodiment from being rewritten, once again, into prevailing concepts of subjectivity.” (5)  Later, at the close of chapter two, she writes, “I believe that our best hope to intervene constructively in this development is to put an interpretative spin on it – one that opens up the possibilities of seeing pattern and presence as complementary rather than antagonistic.” (48-49)  Writing in the late 1990s, she clearly believes the shape and form of posthumanism to be as yet undetermined.  No doubt she would acknowledge a multiplicity of possible and complex paths along which posthumanism might evolve, but she tends to speak in binaries.  Dream or nightmare, terror or pleasure – these are the options.  (4, 5, 47, 284-285)

As the first quotation above suggests, the preservation of embodiment is among Hayles’ chief objectives.  She notes that one prominent way of rendering posthumanism – the nightmare scenario in which bodies are regarded as “fashion accessories rather than the ground of being” – is not so much a posthumanism as it is a hyperhumanism, an extension and intensification of the modern, humanist notion of possessing a body rather than being a body.  (4-5)  This dualism has deep roots in the Western tradition; we may call it the Platonic temptation, or the Gnostic temptation, or the Manichaean temptation, etc.  Viewed within this genealogy, the cybernetic construction of the posthuman shares core assumptions not only with Renaissance and Enlightenment humanism, but it betrays a pedigree reaching much further back still into antiquity.

Against this long-standing tendency, and building upon the work of George Lakoff, Mark Johnson, and Pierre Bourdieu among others, Hayles masterfully argues for the significance of embodiment for the formation of thought and knowledge.  The body that “exists in space and time … defines the parameters within which the cogitating mind can arrive at ‘certainties.’”  (203)  Citing Johnson, she reminds the reader that the body writes discourse as much as discourse writes the body.  Briefly stated, embodied experience generates the deep and pervasive networks of metaphors and analogies by which we elaborate our understanding of the world.  Hayles goes on to add that “when people begin using their bodies in significantly different ways, either because of technological innovations or other cultural shifts, changing experiences of embodiment bubble up into language, affecting the metaphoric networks at play within culture.”  (206-207)  In this light, electronic literature can be understood as part of an ongoing attempt to direct posthumanism toward embodiment.  Hayles theorizes electronic literature as a category of the literary that performs the sorts of ruptures in code (the introduction of noise?) which make us conscious of our embodiment and embodied knowledge, nudging us away from the disembodied nightmare scenario.

I’m cheering for Hayles’ version of the posthuman to win the day (if the outcome is still undetermined), but I am less than hopeful.  Not that I believe the Moravec scenario will in fact materialize, but that it will remain deeply appealing, more so than Hayles’ vision, and continue to shape our imaginings of the future.  For one thing, the dream of disembodiment and its concomitant fantasies of “unlimited power and disembodied immortality” have a long history and considerable momentum, as was noted above.  For another, this dream has roots not only in Gnostic suspicion of the body and Cartesian dualism, but also in the modern apotheosis of the will, which also has a long and distinguished history.  Embodiment in this context is the last obstacle to the unfettered will.  Hayles’ dream scenario includes the recognition and celebration of “finitude as a condition of human being,” but the entanglement of technological development with current economic and cultural structures and assumptions hardly suggests that we are in the habit of recognizing, much less celebrating, our limits.  “Mastery through the exercise of autonomous will” may “merely be the story consciousness tells itself,” but consciousness is a powerful storyteller and it weaves compelling narratives.  (288)  These narratives are all the more seductive when they are reinforced by cultural liturgies of autopoietic consumption and the interests that advance them.