Our Very Own Francis Bacon

Few individuals have done as much to chart the course of science and technology in the modern world as the Elizabethan statesman and intellectual, Francis Bacon. But Bacon’s defining achievement was not, strictly speaking, scientific or technological. Rather, Bacon’s achievement lay in the realm of human affairs we would today refer to as “public relations.” Bacon’s genius was Draper-esque: he wove together a compelling story about the place of techno-science in human affairs from the loose threads of post-Reformation religious and political culture and the scientific breakthroughs we loosely group together as the Scientific Revolution.

In the story he told, knowledge mattered only insofar as it yielded power (the well-known formulation, “knowledge is power,” is Bacon’s), and that power mattered only insofar as it was directed toward “the relief of man’s estate.” To put that less archaically, we might say “the improvement of our quality of life.” But putting it that way obscures the theological overtones of Bacon’s formulation and its allusion to the curse under which humanity labored as a consequence of the Fall in the Christian understanding of the human condition. Our problem was both spiritual and material, and Bacon believed that in his day both facets of that problem were being solved. The improvement of humanity’s physical condition went hand in hand with the restoration of true religion occasioned by the English Reformation, and together they would lead straight to the full restoration of creation.

Bacon’s significance, then, lay in merging science and technology into one techno-scientific project and synthesizing this emerging project with the dominant world picture, thus charting its course and securing its prestige. It is just this sort of expansive vision driving technological development that I’ve had in mind in my recent posts (here and here) regarding culture, technology, and innovation.

My recent posts have also mentioned the entrepreneur Peter Thiel, who is increasingly assuming the role of Silicon Valley’s leading public intellectual–the Sage of Silicon Valley, if you will. This morning, I was re-affirmed in that evaluation of Thiel’s position by a pair of posts by the political philosopher Peter Lawler. In the first of these posts, Lawler comments on Thiel’s seeming ubiquity in certain circles, and he rehearses some of the by-now familiar aspects of Thiel’s intellectual affinities, notably for the sociologist cum philosopher Rene Girard and the political theorist Leo Strauss. Chiefly, Lawler discusses Thiel’s flirtations with transhumanism, particularly in his recently released Zero to One: Notes on Startups, or How to Build the Future, a distilled version of Thiel’s 2012 lecture course on start-ups at Stanford University.

(The book was prepared with Blake Masters, who had previously made available detailed notes on Thiel’s course. I’ll mention in passing that the tag line on Masters’ website runs as follows: “Your mind is software. Program it. Your body is a shell. Change it. Death is a disease. Cure it. Extinction is approaching. Fight it.”)

As it turns out, Francis Bacon makes a notable appearance in Thiel’s work. Here is Lawler summarizing that portion of the book:

“In the chapter entitled ‘You Are Not a Lottery Ticket,’ Thiel writes of Francis Bacon’s modern project, which places “prolongation of life” as the noblest branch of medicine, as well the main point of the techno-development of science. That prolongation is at the core of the definite optimism that should drive ‘the intelligent design’ at the foundation of technological development. We (especially we founders) should do everything we can “to prioritize design over chance.” We should do everything we can to remove contingency from existence, especially, of course, each of our personal existences.”

The “intelligent design” in view has nothing to do, so far as I can tell, with the theory of human origins that is the most common referent for that phrase. Rather, it is Thiel’s way of labeling the forces of consciously deployed thought and work striving to bring order out of the chaos of contingency. Intelligent design is how human beings assert control and achieve mastery over their world and their lives, and that is an explicitly Baconian chord to strike.

Thiel, worried by the technological stagnation he believes has set in over the last forty or so years, is seeking to reanimate the technological project by once again infusing it with an expansive, dare we say mythic, vision of its place in human affairs. It may not be too much of a stretch to say that he is seeking to play the role of Francis Bacon for our age.

Like Bacon, Thiel is attempting to fuse the disparate strands of emerging technologies together into a coherent narrative of grandiose scale. And his story, like Bacon’s, features distinctly theological undertones. The chief difference may be this: whereas the defining institution of the early modern period was the nation-state, itself a powerful innovation of the period, the defining institution in Thiel’s vision is the start-up. As Lawler puts it, “the startup has replaced the country as the object of the highest human ambition. And that’s the foundation of the future that comes from being ruled by the intelligent designers who are Silicon Valley founders.”

Lawler is right to conclude that “Peter Thiel has emerged as the most resolute and most imaginative defender of the distinctively modern part of Western civilization.” Bacon was, after all, one of the intellectual founders of modernity, on par, I would say, with the likes of Descartes and Locke. But, Lawler adds,

“that doesn’t mean that, when it comes to the libertarian displacement of the nation by the startup and the abolition of all contingency from particular personal lives, his imagination and his self-importance don’t trump his astuteness. They do. His theology of liberation is that we, made in the image of God, can do for ourselves what the Biblical Creator promised—free ourselves from the misery of being self-conscious mortals dependent on forces beyond our control.”

And that is, as Lawler notes in his follow-up post, a rather ancient aspiration. Indeed, Thiel, who professes an admittedly heterodox variety of Christianity, may do well to remember that to say we are made in the image of God is one way of saying we are not, the Whole Earth Catalog notwithstanding, gods ourselves. This, it would seem, is a hard lesson to learn.

_______________________________

Update: On Twitter, I was made aware of a talk by Thiel at SXSW in 2013 on the topic of the chapter discussed above. Here it is (via @carlamomo).

Are Human Enhancement and AI Incompatible?

A few days ago, in a post featuring a series of links to stories about new and emerging technologies, I included a link to a review of Nick Bostrom’s new book, Superintelligence: Paths, Dangers, Strategies. Not long afterwards, I came across an essay adapted from Bostrom’s book on Slate’s “Future Tense” blog. The excerpt is given the cheerfully straightforward title, “You Should Be Terrified of Super Intelligent Machines.”

I’m not sure that Bostrom himself would put it quite like that. I’ve long thought of Bostrom as one of the more enthusiastic proponents of a posthumanist vision of the future. Admittedly, I’ve not read a great deal of his work (including this latest book). I first came across Bostrom’s name in Cary Wolfe’s What Is Posthumanism?, which led me to Bostrom’s article, “A History of Transhumanist Thought.”

For his part, Wolfe sought to articulate a more persistently posthumanist vision for posthumanism, one which dispensed with humanist assumptions about human nature altogether. In Wolfe’s view, Bostrom was guilty of building his transhumanist vision on a thoroughly humanist understanding of the human being. The humanism in view here, it’s worth clarifying, is that which we ordinarily associate with the Renaissance or the Enlightenment, one which highlights autonomous individuality, agency, and rationality. It is also one which assumes a Platonic or Cartesian mind/body dualism. Wolfe, like N. Katherine Hayles before him, finds this to be misguided and misleading, but I digress.

Whether Bostrom would’ve chosen such an alarmist title or not, his piece does urge us to lay aside the facile assumption that super-intelligent machines will be super-intelligent in a predictably human way. This is an anthropomorphizing fallacy. Consequently, we should consider the possibility that super-intelligent machines will pursue goals that may, as an unintended side-effect, lead to human extinction. I suspect that in the later parts of his book, Bostrom might have a few suggestions about how we might escape such a fate. I also suspect that none of these suggestions include the prospect of halting or limiting the work being done to create super-intelligent machines. In fact, judging from the chapter titles and sub-titles, it seems that the answer Bostrom advocates involves figuring out how to instill appropriate values in super-intelligent machines. This brings us back to the line of criticism articulated by Wolfe and Hayles: the traditionally humanist project of rational control and mastery is still the underlying reality.

It does seem reasonable for Bostrom, who is quite enthusiastic about the possibilities of human enhancement, to be a bit wary about the creation of super-intelligent machines. It would be unfortunate indeed if, having finally figured out how to download our consciousness or perfect a cyborg platform for it, a clever machine of our making later came around, pursuing some utterly trivial goal, and decided, without a hint of malice, that it needed to eradicate these post-human humans as a step toward the fulfillment of its task. Unfortunate, and nihilistically comic.

It is interesting to consider that these two goals we rather blithely pursue–human enhancement and artificial intelligence–may ultimately be incompatible. Of course, that is a speculative consideration, and, to some degree, so is the prospect of ever achieving either of those two goals, at least as their most ardent proponents envision their fulfillment. But let us consider it for just a moment anyway for what it might tell us about some contemporary versions of the posthumanist hope.

Years ago, C.S. Lewis famously warned that the human pursuit of mastery over Nature would eventually amount to the human pursuit of mastery over Humanity, and what this would really mean is the mastery of some humans over others. This argument is all the more compelling now, some 70 or so years after Lewis made it in The Abolition of Man. It would seem, though, that an updated version of that argument would need to include the further possibility that the tools we develop to gain mastery over nature and then humanity might finally destroy us, whatever form the “us” at that unforeseeable juncture happens to take. Perhaps this is the tacit anxiety animating Bostrom’s new work.

And this brings us back, once again, to the kind of humanism at the heart of posthumanism. The posthumanist vision that banks on some sort of eternal consciousness–the same posthumanist vision that leads Ray Kurzweil to take 150 vitamins a day–that posthumanist vision is still the vision of someone who intends to live forever in some clearly self-identifiable form. It is, in this respect, a thoroughly Western religious project insofar as it envisions and longs for the immortality of the individuated self. We might even go so far as to call it, in an obviously provocative move, a Christian heresy.

Finally, our potentially incompatible technical aspirations reveal something of the irrationality, or a-rationality if you prefer, at the heart of our most rational project. Technology and technical systems assume rationality in their construction and their operation. Thinking about their potential risks and trying to prevent and mitigate them is also a supremely rational undertaking. But at the heart of all of this rational work there is a colossal unspoken absence: there is a black hole of knowledge that, beginning with the simple fact of our inability to foresee the full ramifications of anything that we do or make, subsequently sucks into its darkness our ability to expertly anticipate and plan and manage with anything like the confident certainty we project.

It is one thing to live with this relative risk and uncertainty when we are talking about simple tools and machines (hammers, bicycles, etc.). It is another thing when we are talking about complex technical systems (automotive transportation, power grids, etc.). It is altogether something else when we are talking about technical systems that may fundamentally alter our humanity or else eventuate in its annihilation. The fact that we don’t even know how seriously to take these potential threats, that we cannot comfortably distinguish between what is still science fiction and what will, in fact, materialize in our lifetimes, that’s a symptom of the problem, too.

I keep coming back to the realization that our thinking about technology is often inadequate or ineffectual because it is starting from the wrong place; or, to put it another way, it is already proceeding from assumptions grounded in the dynamics of technology and technical systems, so it bends back toward the technological solution. If we already tacitly value efficiency, for example, if efficiency is already an assumed good that no longer needs to be argued for, then we will tend to pursue it by whatever possible means under all possible circumstances. Whenever new technologies appear, we will judge them in light of this governing preference for efficiency. If the new technology affords us a more efficient way of doing something, we will tend to embrace it.

But the question remains, why is efficiency a value that is so pervasively taken for granted? If the answer seems commonsensical, then, I’d humbly suggest that we need to examine it all the more critically. Perhaps we will find that we value efficiency because this virtue native to the working of technical and instrumental systems has spilled over into what had previously been non-technical and non-instrumental realms of human experience. Our thinking is thus already shaped (to put it in the most neutral way possible) by the very technical systems we are trying to think about.

This is but one example of the dynamic. Our ability to think clearly about technology will depend in large measure on our ability to extricate our thinking from the criteria and logic native to technological systems. This is, I fully realize, a difficult task. I would never claim that I’ve achieved this clarity of thought myself, but I do believe that our thinking about technology depends on it.

There’s a lot more to be said, but I’ll leave it there for now. Your thoughts, as always, are welcome.

The Transhumanist Promise: Happiness You Cannot Refuse

Transhumanism, a diverse movement aimed at transcending our present human limitations, continues to gravitate away from the fringes of public discussion toward the mainstream. It is an idea that, to many people, is starting to sound less like a wildly unrealistic science-fiction concept and more like a vaguely plausible future. I imagine that as the prospect of a transhumanist future begins to take on the air of plausibility, it will both exhilarate and mortify in roughly equal measure.

Recently, Jamie Bartlett wrote a short profile of the transhumanist project near the conclusion of which he observed, “Sometimes Tranhumanism [sic] does feel a bit like modern religion for an individualistic, technology-obsessed age.” As I read that line, I thought to myself, “Sometimes?”

To be fair, many transhumanists would be quick to flash their secular bona fides, but it is not too much of a stretch to say that the transhumanist movement traffics in the religious, quasi-religious, and mystical. Peruse, for example, the list of speakers at last year’s Global Future 2045 conference. The year 2045, of course, is the predicted dawn of the Singularity, the point at which machines and humans become practically indistinguishable.

In its aspirations for transcendence of bodily limitations, its pursuit of immortality, and its promise of perpetual well-being and the elimination of suffering, Transhumanism undeniably incorporates traditionally religious ambitions and desires. It is, in other words, functionally analogous to traditional religions, particularly the Western, monotheistic faiths. If you’re unfamiliar with the movement and are wondering whether I might have exaggerated their claims, I invite you to watch the following video introduction to Transhumanism put together by the British Institute of Posthuman Studies (BIOPS):

All of this amounts to a particularly robust instance of what the historian David Noble called “the religion of technology.” Noble’s work highlighted the long-standing entanglement of religious aspirations with the development of the Western technological project. You can read more about the religion of technology thesis in this earlier post. Here I will only note that the manifestation of the religion of technology apparent in the Transhumanist movement betrays a distinctly gnostic pedigree. Transhumanist rhetoric is laced with a palpable contempt for humanity in its actual state, and the contempt is directed with striking animus at the human body. Referring to the human body derisively as a “meat sack” or “meat bag” is a common trope among the more excitable transhumanists. As Katherine Hayles has put it, in Transhumanism bodies are “fashion accessories rather than the ground of being.”

In any case, the BIOPS video not too subtly suggests that Christianity has been one of the persistent distractions keeping us from viewing aging as we should, not as a “natural” aspect of the human condition, but as a disease to be combatted. This framing may convey an anti-religious posture, but what emerges on balance is not a dismissal of the religious aims, but rather the claim that they may be better realized through other, more effective means. The Posthumanist promise, then, is the promise of what the political philosopher Eric Voegelin called the immanentized eschaton. The traditional religious category for this is idolatry with a healthy sprinkling of classical Greek hubris for good measure.

After discussing “super-longevity” and “super-intelligence,” the BIOPS video goes on to discuss “super well-being.” This part of the video begins at the seven-minute mark, and it expresses some of the more troubling aspects of the Transhumanist vision, at least as embraced by this particular group. This third prong of the Transhumanist project seeks to “phase out suffering.” The segment begins by asking viewers to imagine that as parents they had the opportunity to opt their child out of “chronic depression,” a “low pain threshold,” and “anxiety.” Who would choose these for their own children? Of course, the implicit answer is that no well-meaning, responsible parent would. We all remember Gattaca, right?

A robust challenge to the Transhumanist vision is well beyond the scope of this blog post, but it is a challenge that needs to be carefully and thoughtfully articulated. For the present, I’ll leave you with a few observations.

First, the nature of the risks posed by the technologies Posthumanists are banking on is not that of a single, clearly destructive cataclysmic accident. Rather, the risk is incremental and never obviously destructive. It takes on the character of the temptation experienced by the main character, Pahom, in Leo Tolstoy’s short story, “How Much Land Does a Man Need?” If you’ve never read the story, you should. In the story Pahom is presented with the temptation to acquire more and more land, but Tolstoy never paints Pahom as a greedy Ebenezer Scrooge type. Instead, at the point of each temptation, it appears perfectly rational, safe, and good to seize an opportunity to acquire more land. The end of all of these individual choices, however, is finally destructive.

Second, these risks are a good illustration of the ethical challenges posed by innovation that I articulated yesterday in my exchange with Adam Thierer. These risks would be socially distributed, but unevenly and possibly even unjustly so. In other words, technologies of radical human enhancement (we’ll allow that loaded descriptor to slide for now) would carry consequences both for those who chose such enhancements and for those who did not or could not. This problem is not, however, unique to these sorts of technologies. We generally lack adequate mechanisms for adjudicating the socially distributed risks of technological innovation. (To be clear, I don’t pretend to have any solutions to this problem.) We tolerate this because we generally tend to assume that, on balance, the advance of technology is a tide that lifts all ships even if not evenly so. Additionally, given our anthropological and political assumptions, we have a hard time imagining a notion of the common good that might curtail individual freedom of action.

Lastly, the Transhumanist vision assumes a certain understanding of happiness when it speaks of the promise of “super well-being.” Happiness, in this vision, seems to be narrowly equated with the absence of suffering. But it is not altogether obvious that this is the only or best way of understanding the perennially elusive state of affairs that we call happiness. The committed Transhumanist seems to lack the imagination to conceive of alternative pursuits of happiness, particularly those that encompass and incorporate certain forms of suffering and tribulation. But that will not matter.

In the Transhumanist future one path to happiness will be prescribed. It will be objected that this path will be offered, not prescribed, but, of course, this is disingenuous because in this vision the technologies of enhancement confer not only happiness narrowly defined but power as well. As Gary Marcus and Christof Koch recently noted in their discussion of brain implants, “The augmented among us—those who are willing to avail themselves of the benefits of brain prosthetics and to live with the attendant risks—will outperform others in the everyday contest for jobs and mates, in science, on the athletic field and in armed conflict.” Those who opt out will be choosing to be disadvantaged and marginalized. This may be a choice, but not one without a pernicious strain of tacit coercion.

Years ago, just over seventy years ago in fact, C.S. Lewis anticipated what he called the abolition of man. The abolition of man would come about when science and technology found that the last frontier in the conquest of nature was humanity itself. “Human nature will be the last part of Nature to surrender to Man,” Lewis warned, and when it did a caste of Conditioners would be in the position to “cut out posterity in what shape they please.” Humanity, in other words, would become the unwilling subject of these Last Men and their final decisive exercise of the will to power over nature, the power to shape humanity in their own image.

Even as I write this, there is part of me that thinks this all sounds so outlandish, and that even to warn of it is an unseemly alarmism. After all, while some of the touted technologies appear to be within reach, many others seem to be well out of reach, perhaps forever so. But, then, I consider that many terrible things once seemed impossible and it may have been their seeming impossibility that abetted their eventual realization. Or, from a more positive perspective, perhaps it is sometimes the articulation of the seemingly far-fetched dangers and risks that ultimately helps us steer clear of them.


The Transhumanist Logic of Technological Innovation

What follows are a series of underdeveloped thoughts for your consideration:

Advances in robotics, AI, and automation promise to liberate human beings from labor.

The Programmable World promises to liberate us from mundane, routine, everyday tasks.

Big Data and algorithms promise to liberate us from the imperatives of understanding and deliberation.

Google promises to liberate us from the need to learn things, drive cars, or even become conscious of what we need before it is provided for us.

But what are we being liberated for? What is the end which this freedom will enable us to pursue?

What sort of person do these technologies invite us to become?

Or, if we maximized their affordances, what sort of engagement with the world would they facilitate?

In the late 1950s, Hannah Arendt worried that automated technology was closing in on the elusive promise of a world without labor at a point in history when human beings could understand themselves only as laborers. She knew that in earlier epochs the desire to transcend labor was animated by a political, philosophical, or theological anthropology that assumed there was a teleology inherent in human nature — the contemplation of the true, the good, and the beautiful or of the beatific vision of God.

But she also knew that no such teleology now animates Western culture. In fact, a case could be made that Western culture now assumes that such a teleology does not and could not exist. Unless, that is, we made it for ourselves. This is where transhumanism, extropianism, and singularity come in. If there is no teleology inherent to human nature, then the transcendence of human nature becomes the default teleology.

This quasi-religious pursuit has deep historical roots, but the logic of technological innovation may make the ideology more plausible.

Around this time last year, Nick Carr proposed that technological innovation tracks neatly with Maslow’s hierarchy of human needs (see Carr’s chart below). I found this a rather compelling and elegant thesis. But, what if innovation is finally determined by something other than strictly human needs? What if beyond self-actualization, there lay the realm of self-transcendence?

After all, when, as an article of faith, we must innovate, and no normative account of human nature serves to constrain innovation, then we arrive at a point where we ourselves will be the final field for innovation.

The technologies listed above, while not directly implicated in the transhumanist project (excepting perhaps dreams of a Google implant), tend in the same direction to the degree that they render human action in the world obsolete. The liberation they implicitly offer, in other words, is a liberation from fundamental aspects of what it has meant to be a human being.

(Chart: Nick Carr’s hierarchy of innovation)