Are Human Enhancement and AI Incompatible?

A few days ago, in a post featuring a series of links to stories about new and emerging technologies, I included a link to a review of Nick Bostrom’s new book, Superintelligence: Paths, Dangers, Strategies. Not long afterwards, I came across an essay adapted from Bostrom’s book on Slate’s “Future Tense” blog. The excerpt is given the cheerfully straightforward title, “You Should Be Terrified of Super Intelligent Machines.”

I’m not sure that Bostrom himself would put it quite like that. I’ve long thought of Bostrom as one of the more enthusiastic proponents of a posthumanist vision of the future. Admittedly, I’ve not read a great deal of his work (including this latest book). I first came across Bostrom’s name in Cary Wolfe’s What Is Posthumanism?, which led me to Bostrom’s article, “A History of Transhumanist Thought.”

For his part, Wolfe sought to articulate a more persistently posthumanist vision for posthumanism, one which dispensed with humanist assumptions about human nature altogether. In Wolfe’s view, Bostrom was guilty of building his transhumanist vision on a thoroughly humanist understanding of the human being. The humanism in view here, it’s worth clarifying, is that which we ordinarily associate with the Renaissance or the Enlightenment, one which highlights autonomous individuality, agency, and rationality. It is also one which assumes a Platonic or Cartesian mind/body dualism. Wolfe, like N. Katherine Hayles before him, finds this to be misguided and misleading, but I digress.

Whether Bostrom would’ve chosen such an alarmist title or not, his piece does urge us to lay aside the facile assumption that super-intelligent machines will be super-intelligent in a predictably human way. This is an anthropomorphizing fallacy. Consequently, we should consider the possibility that super-intelligent machines will pursue goals that may, as an unintended side-effect, lead to human extinction. I suspect that in the later parts of his book, Bostrom might have a few suggestions about how we might escape such a fate. I also suspect that none of these suggestions include the prospect of halting or limiting the work being done to create super-intelligent machines. In fact, judging from the chapter titles and sub-titles, it seems that the answer Bostrom advocates involves figuring out how to instill appropriate values in super-intelligent machines. This brings us back to the line of criticism articulated by Wolfe and Hayles: the traditionally humanist project of rational control and mastery is still the underlying reality.

It does seem reasonable for Bostrom, who is quite enthusiastic about the possibilities of human enhancement, to be a bit wary about the creation of super-intelligent machines. It would be unfortunate indeed if, having finally figured out how to download our consciousness or perfect a cyborg platform for it, a clever machine of our making later came around, pursuing some utterly trivial goal, and decided, without a hint of malice, that it needed to eradicate these post-human humans as a step toward the fulfillment of its task. Unfortunate, and nihilistically comic.

It is interesting to consider that these two goals we rather blithely pursue–human enhancement and artificial intelligence–may ultimately be incompatible. Of course, that is a speculative consideration, and, to some degree, so is the prospect of ever achieving either of those two goals, at least as their most ardent proponents envision their fulfillment. But let us consider it for just a moment anyway for what it might tell us about some contemporary versions of the posthumanist hope.

Years ago, C.S. Lewis famously warned that the human pursuit of mastery over Nature would eventually amount to the human pursuit of mastery over Humanity, and what this would really mean is the mastery of some humans over others. This argument is all the more compelling now, some 70 or so years after Lewis made it in The Abolition of Man. It would seem, though, that an updated version of that argument would need to include the further possibility that the tools we develop to gain mastery over nature and then humanity might finally destroy us, whatever form the “us” at that unforeseeable juncture happens to take. Perhaps this is the tacit anxiety animating Bostrom’s new work.

And this brings us back, once again, to the kind of humanism at the heart of posthumanism. The posthumanist vision that banks on some sort of eternal consciousness–the same posthumanist vision that leads Ray Kurzweil to take 150 vitamins a day–that posthumanist vision is still the vision of someone who intends to live forever in some clearly self-identifiable form. It is, in this respect, a thoroughly Western religious project insofar as it envisions and longs for the immortality of the individuated self. We might even go so far as to call it, in an obviously provocative move, a Christian heresy.

Finally, our potentially incompatible technical aspirations reveal something of the irrationality, or a-rationality if you prefer, at the heart of our most rational project. Technology and technical systems assume rationality in their construction and their operation. Thinking about their potential risks and trying to prevent and mitigate them is also a supremely rational undertaking. But at the heart of all of this rational work there is a colossal unspoken absence: a black hole of knowledge that, beginning with the simple fact of our inability to foresee the full ramifications of anything we do or make, sucks into its darkness our ability to anticipate, plan, and manage with anything like the confident certainty we project.

It is one thing to live with this relative risk and uncertainty when we are talking about simple tools and machines (hammers, bicycles, etc.). It is another thing when we are talking about complex technical systems (automotive transportation, power grids, etc.). It is altogether something else when we are talking about technical systems that may fundamentally alter our humanity or else eventuate in its annihilation. The fact that we don’t even know how seriously to take these potential threats, that we cannot comfortably distinguish between what is still science fiction and what will, in fact, materialize in our lifetimes, that’s a symptom of the problem, too.

I keep coming back to the realization that our thinking about technology is often inadequate or ineffectual because it is starting from the wrong place; or, to put it another way, it is already proceeding from assumptions grounded in the dynamics of technology and technical systems, so it bends back toward the technological solution. If we already tacitly value efficiency, for example, if efficiency is already an assumed good that no longer needs to be argued for, then we will tend to pursue it by whatever possible means under all possible circumstances. Whenever new technologies appear, we will judge them in light of this governing preference for efficiency. If the new technology affords us a more efficient way of doing something, we will tend to embrace it.

But the question remains, why is efficiency a value that is so pervasively taken for granted? If the answer seems commonsensical, then, I’d humbly suggest that we need to examine it all the more critically. Perhaps we will find that we value efficiency because this virtue native to the working of technical and instrumental systems has spilled over into what had previously been non-technical and non-instrumental realms of human experience. Our thinking is thus already shaped (to put it in the most neutral way possible) by the very technical systems we are trying to think about.

This is but one example of the dynamic. Our ability to think clearly about technology will depend in large measure on our ability to extricate our thinking from the criteria and logic native to technological systems. This is, I fully realize, a difficult task. I would never claim that I’ve achieved this clarity of thought myself, but I do believe that our thinking about technology depends on it.

There’s a lot more to be said, but I’ll leave it there for now. Your thoughts, as always, are welcome.

Technology in the Classroom

I want to briefly draw your attention to a series of related posts about technology in the classroom, beginning with Clay Shirky’s recent post explaining his decision to have students put their wired digital devices away during class. Let me say that again: Clay Shirky has decided to ban laptops from his classroom. Clay Shirky. Shirky has long been one of the Internet’s leading advocates and cheerleaders, so this seems to be a pretty telling indication of the scope of the problem.

I particularly appreciated the way Shirky focused on what we might call the ecosystem of the classroom. The problem is not simply that connected devices distract the students who use them and hamper their ability to learn:

“Anyone distracted in class doesn’t just lose out on the content of the discussion, they create a sense of permission that opting out is OK, and, worse, a haze of second-hand distraction for their peers. In an environment like this, students need support for the better angels of their nature (or at least the more intellectual angels), and they need defenses against the powerful short-term incentives to put off complex, frustrating tasks. That support and those defenses don’t just happen, and they are not limited to the individual’s choices. They are provided by social structure, and that structure is disproportionately provided by the professor, especially during the first weeks of class.”

I came across Shirky’s post via Nick Carr, who also considers a handful of studies that appear to support the decision to create a relatively low-tech classroom environment. I recommend you click through to read the whole thing.

If you’re thinking that this is a rather retrograde, reactionary move to make, then I’d suggest taking a quick look at Alan Jacobs’s brief comments on the matter.

You might also want to ask yourself why the late Steve Jobs; Chris Anderson, the former editor at Wired and CEO of a robotics company; Evan Williams, the founder of Blogger, Twitter, and Medium; and a host of other tech-industry heavyweights deploy seemingly draconian rules for how their own children relate to digital devices and the Internet. Here’s Anderson: “My kids accuse me and my wife of being fascists and overly concerned about tech, and they say that none of their friends have the same rules.”

Perhaps they are on to something, albeit in a “do-as-I-say-not-as-I-do” sort of way. Nick Bilton has the story here.

__________________________

Okay, and now a quick administrative note. Rather than create a separate entry for this, I thought it best just to raise the matter at the tail end of this shorter post. Depending on how you ordinarily get to this site, you may have noticed that the feed for this blog now only gives you a snippet view and asks you to click through to read the whole.

I initially made this change for rather self-serving reasons related to the architecture of WordPress, and it was also meant to be a temporary change. However, I realized that this change resolved a couple of frustrations I’d had for a while.

The first of these centers on my mildly obsessive nature when it comes to editing and revising. Invariably, regardless of the care I take before publishing, posts go out with at least one or two typos, inelegant phrases, and the like. When I catch them later, I fix them, but readers who get posts via email never see the corrections. If you have to click over to read the whole post, however, you always see the latest, cleanest version. Relatedly, I sometimes find it preferable to update a post with related information or new links rather than create a new post (e.g.). Email subscribers would be unlikely ever to see those updates unless they clicked through to the site for the most current version of the post.

Consequently, I’m considering keeping the snippet feed. I do realize, though, that this might be mildly annoying, involving as it does an extra click or two. So, my question to you is this: do you care? I have a small but dedicated readership, and I’d hate to make a change that might ultimately discourage you from continuing to read. If you have any thoughts on the matter, feel free to share in the comments below or via email.

Also, I’ve been quite negligent about replying to comments of late. When I get a chance to devote some time to this blog, which is not often, I’m opting to write instead. I really appreciate the comments, though, and I’ll do my best to interact as time allows.

A Few Items for Your Consideration

Here are a few glimpses of the future ranging from the near and plausible to the distant and uncertain. In another world–one, I suppose, in which I get paid to write these posts–I’d write more about each. In this world, I simply pass them along for your consideration.

Google Glass App Reads Your Emotions

“A new Glassware App for Google Glass will uncover a person’s emotion, age range and gender just by facial recognition technology ….

Facial recognition has always been seen with nervousness, as people tend to prefer privacy over the ability to see a stranger’s age or gender. But these two apps prove sometimes letting a robot know you’re sad can help for a better relationship between fellow humans. Letting the robot lead has proven to increase human productivity and better the ebb and flow of a work space, a partnership, any situation dealing with human communication.

The SHORE app is currently not available for download, but you can try US+ now. May the robots guide us to a more humane future.”

GM Cars to Monitor Drivers

“General Motors, the largest US auto manufacturer by sales, is preparing to launch the world’s first mass-produced cars with eye- and head-tracking technology that can tell whether drivers are distracted, according to people with knowledge of the plans ….

The company is investing in technology that will be able to tell how hard a driver is thinking by monitoring the dilation of the pupils, and combines facial information with sensors for vital signs such as blood alcohol levels and heart rate.”

Electrical Brain Stimulation

“Transcranial direct current stimulation (TDCS), which passes small electrical currents directly on to the scalp, stimulates the nerve cells in the brain (neurons). It’s non-invasive, extremely mild and the US military even uses TDCS in an attempt to improve the performance of its drone pilots.

The idea is that it makes the neurons more likely to fire and preliminary research suggests electrical simulation can improve attention as well as have a positive impact on people with cognitive impairments and depression ….

And more worryingly for him, people are also increasingly making brain stimulation kits themselves. This easily ‘puts the technology in the realms of clever teenagers,’ adds Dr Davis.

An active forum on reddit is devoted to the technology, and people there have complained of ‘burning to the scalp’. Another user wrote that they ‘seemed to be getting angry frequently’ after using TDCS.”

Preparing for Superintelligent AI

“Bostrom takes a cautious view of the timing but believes that, once made, human-level AI is likely to lead to a far higher level of ‘superintelligence’ faster than most experts expect – and that its impact is likely either to be very good or very bad for humanity.

The book enters more original territory when discussing the emergence of superintelligence. The sci-fi scenario of intelligent machines taking over the world could become a reality very soon after their powers surpass the human brain, Bostrom argues. Machines could improve their own capabilities far faster than human computer scientists.”

We’ve got some thinking to do, folks: careful, patient thinking. Happily, we don’t have to do that thinking alone and in isolation. Here is Evan Selinger helping us think clearly about our digital tools with his usual thoughtful analysis: “Why Your Devices Shouldn’t Do the Work of Being You.”

Here, too, is a critical appraisal of the religiously intoned hopes of the cult of the Singularity.

Finally, Nick Carr invites us to cautiously consider the potential long-term consequences of the recently unveiled Apple Watch since “never before have we had a tool that promises to be so intimate a companion and so diligent a monitor as the Apple Watch.”

Waiting for Socrates … So We Can Kill Him Again and Post the Video on Youtube

It will come as no surprise, I’m sure, if I tell you that the wells of online discourse are poisoned. It will come as no surprise because critics have complained about the tone of online discourse for as long as people have interacted with one another online. In fact, we more or less take the toxic, volatile nature of online discourse for granted. “Don’t read the comments” is about as routine a piece of advice as “look both ways before crossing the street.” And, of course, it is also true that complaints about the coarsening of public discourse in general have been around for a lot longer than the Internet and digital media.

That said, I’ve been intrigued, heartened actually, by a recent round of posts bemoaning the state of online rhetoric from some of the most thoughtful people whose work I follow. Here is Freddie deBoer lamenting the rhetoric of the left, and here is Matthew Anderson noting much of the same on the right. Here is Alan Jacobs on why he’s stepping away from Twitter. Follow any of those links and you’ll find another series of links to thoughtful, articulate writers all telling us, more or less, that they’ve had enough. This piece urges civility and it suggests, hopefully (naively?), that the “Internet” will learn soon enough to police itself, but the evidence it cites along the way seems rather to undermine such hopefulness. I won’t bother to point you to some of the worst of what I’ve regrettably encountered online in recent weeks.

Why is this the case? Why, as David Sessions recently put it, is the state of the Internet awful?

Like everyone else, I have scattered thoughts about this. For one thing, the nature of the medium seems to encourage rancor, incivility, misunderstanding, and worse. Anonymity has something to do with this, and so does the abstraction of the body from the context of communication.

Along the same media-ecological lines, Walter Ong noted that oral discourse tends to be agonistic and literate discourse tends to be irenic. Online discourse tends to be conducted in writing, which might seem to challenge Ong’s characterization. But just as television and radio constituted what Ong called secondary orality, so might we say that social media is a form of secondary literacy, blurring the distinctions between orality and literacy. It is text based, but, like oral discourse, it brings people into a context of relative communicative immediacy. That is to say that through social media people are responding to one another in public and in short order, more as they would in a face-to-face encounter, for example, than in private letters exchanged over the course of months.

In theory, writing affords us the temporal space to be more thoughtful and precise in expressing our ideas, but, in practice, the expectations of immediacy in digital contexts collapse that space. So we lose the strengths of each medium: we get none of the meaning-making cues of face-to-face communication and none of the time for reflection that written communication ordinarily grants. The media context, then, ends up rife with misunderstanding and agonistic in tone; it encourages performative pugilism.

Also, as the moral philosopher Alasdair MacIntyre pointed out some time ago, we no longer operate with a set of broadly shared assumptions about what is good and what shape a good life should take. Our ethical reasoning tends not to be built on the same foundation. Because we are reasoning from incompatible moral premises, the conclusions reached by two opposing parties tend to be interpreted as sheer stupidity or moral obtuseness. In other words, because our arguments, proceeding as they do from such disparate moral frameworks, fail to convince and persuade, we begin to assume that those who will not yield to our moral vision must thus be fools or worse. Moreover, we conclude, fools and miscreants cannot be argued with; they can only be shamed, shouted down, or otherwise silenced.

Digital dualism is also to blame. Some people seem to operate under the assumption that they are not really racists, misogynists, anti-Semites, etc.–they just play one on Twitter. It really is much too late in the game to play that tired card.

Perhaps, too, we’ve conflated truth and identity in such a way that we cannot conceive of a challenge to our views as anything other than a challenge to our humanity. Conversely, it seems that in some highly-charged contexts being wrong can cost you the basic respect one might be owed as a fellow human being.

Finally, the Internet is awful because, frankly, people are awful. We all are; at least we all can be under the right circumstances. As Solzhenitsyn put it, “If only there were evil people somewhere insidiously committing evil deeds, and it were necessary only to separate them from the rest of us and destroy them. But the line dividing good and evil cuts through the heart of every human being.”

To that list, I want to offer just one more consideration: a little knowledge is a dangerous thing, and there are few things the Internet does better than giving everyone a little knowledge. A little knowledge is a dangerous thing because it is just enough to give us the illusion of mastery and a sense of authority. This illusion, encouraged by the myth of having all the world’s information at our fingertips, has led us to believe that by skimming an article here or reading the summary of a book there we become experts who may now liberally pontificate about the most complex and divisive issues with unbounded moral and intellectual authority. This is the worst kind of insufferable foolishness, that which mistakes itself for wisdom without a hint of irony.

Real knowledge, on the other hand, is constantly aware of all that it does not know. The more you learn, the more you realize how much you don’t know, and the more hesitant you’ll be to speak as if you’ve got everything figured out. Getting past that threshold of “a little knowledge” tends to breed humility and create the conditions that make genuine dialogue possible. But that threshold will never be crossed if all we ever do is skim the surface of reality, and this seems to be the mode of engagement encouraged by the information ecosystem sustained by digital media.

We’re in need of another Socrates who will teach us once again that the way of wisdom starts with a deep awareness of our own ignorance. Of course, we’d kill him too, after a good skewering on Twitter, and probably without the dignity of hemlock. A posthumous skewering would follow, naturally, after the video of his death got passed around on Reddit and Youtube.

I don’t want to leave things on that cheery note, but the fact is that I don’t have a grand scheme for making online discourse civil, informed, and thoughtful. I’m pretty sure, though, that things will not simply work themselves out for the better without deliberate and sustained effort. Consider how W.H. Auden framed the difference between traditional cultures and modernity:

“The old pre-industrial community and culture are gone and cannot be brought back. Nor is it desirable that they should be. They were too unjust, too squalid, and too custom-bound. Virtues which were once nursed unconsciously by the forces of nature must now be recovered and fostered by a deliberate effort of the will and the intelligence. In the future, societies will not grow of themselves. They will be either made consciously or decay.”

For better or worse, or more likely both, this is where we find ourselves–either we deploy a deliberate effort of will and intelligence, or we face perpetual decay. Who knows, maybe the best we can do is to form and maintain enclaves of civility and thoughtfulness amid the rancor, communities of discourse where meaningful conversation can be cultivated. These would probably remain small communities, but their success would be no small thing.

__________________________________

Update: After publishing, I read Nick Carr’s post on the revival of blogs and the decline of Big Internet. “So, yeah, I’m down with this retro movement,” Carr writes, “Bring back personal blogs. Bring back RSS. Bring back the fun. Screw Big Internet.” I thought that was good news in light of my closing paragraph.

And, just in case you need more by way of diagnosis, there’s this: “A Second Look At The Giant Garbage Pile That Is Online Media, 2014.”

What Could Go Right?

Critic and humorist Joe Queenan took aim at the Internet of Things in this weekend’s Wall Street Journal. It’s a mildly entertaining consideration of what could go wrong when our appliances, devices, and online accounts are all networked together. For example:

“If the wireless subwoofers are linked to the voice-activated oven, which is linked to the Lexus, which is linked to the PC’s external drive, then hackers in Moscow could easily break in through your kid’s PlayStation and clean out your 401(k). The same is true if the snowblower is linked to the smoke detector, which is linked to the laptop, which is linked to your cash-strapped grandma’s bank account. A castle is only as strong as its weakest portcullis.”

He goes on to imagine hackers reprogramming your smart refrigerator to order “thousands of gallons of banana-flavored soy milk every week,” or your music library to play only “Il Divo, Il Divo, Il Divo, 24 hours a day.” Queenan gives readers a few more of these humorously intoned, marginally plausible scenarios that, with a light touch, point to some of the ways the Internet of Things could go wrong.

In any case, after reading Queenan’s playful lampoon of the Internet of Things, it occurred to me that more often than not our worries about new technology center on the question, “What could go wrong?” In fact, we often ask that sarcastically to suggest that some new technology is obviously fraught with risk. For instance: Geoengineering. Global-scale interventions in the delicate, imperfectly understood workings of the earth’s climate with potentially massive and irreversible consequences … what could go wrong?

Of course, this is a perfectly reasonable question to ask. We ask it, and engineers and technologists respond by assuring us that safety measures are in place, contingencies have been accounted for, precautions have been taken, etc. Or, alternatively, that the risks of doing nothing are greater than the risks of proceeding with some technological project. In other words, asking what could go wrong tends to lock us in the technocratic frame of mind. It invites cost/benefit analysis, rational planning, technological fixes to technological problems, all mixed through and through with sprinklings or heaps of hubris.

Very often, despite some initial failures and, one hopes, not-too-tragic accidents, the kinks do get worked out, disasters are averted (or mostly so), and the new technology stabilizes. The voices of critics who worried about what could go wrong suddenly sound a lot like a chorus of boys crying wolf. Enthusiasts wipe the sweat from their brows, take a deep breath, and confidently proclaim, “I told you so.”

All well and good. There’s only one problem. Maybe asking “What could go wrong?” is a short-sighted way of thinking about new technologies. Maybe we should also be asking, “What could go right?”

What if this new technology worked just as advertised? What if it became a barely-noticed feature of our technological landscape? What if it was seamlessly integrated into our social life? What if it delivered on its promise?

Accidents and disasters get our attention; their possibility makes us anxious. The more spectacular the promise of a new technology, the more nervous we might be about what could go wrong. But if we are focused exclusively on the accident, we lose sight of the fact that the most consequential technologies are usually those that end up working. They are the ones that reorder our lives, reframe our experience, restructure our social lives, recalibrate our sense of time and place, and so on.

In his recent review of Jordan Ellenberg’s How Not to Be Wrong: The Power of Mathematical Thinking (a title with a mildly hubristic ring, to be sure), Peter Pesic opens with an anecdote about problem solving during World War II. Given the trade-offs involved in placing extra armor on fighter planes and bombers–increased weight, decreased range–where should military airplanes be reinforced? Noticing that returning planes had more bullet holes in the fuselage than in the engine, some suggested reinforcing the fuselage. There was one, seemingly obvious, problem with this line of thinking. As the mathematician Abraham Wald noted, this solution ignored the planes that didn’t make it back, most likely because they had been shot in the engine.
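
Wald’s insight is, at bottom, a point about survivorship bias, and it can be made concrete with a toy simulation. The sketch below is my own illustration, not anything from Pesic’s review or Ellenberg’s book, and its numbers are invented: hits are assumed to land on the engine and the fuselage at equal rates, but an engine hit is assumed to be far more likely to bring the plane down; we then count bullet holes only among the planes that return.

```python
import random

random.seed(0)

N = 100_000  # number of simulated planes (an arbitrary choice)
returned_hits = {"engine": 0, "fuselage": 0}

# Invented probabilities for illustration only: a plane hit in the engine
# is lost 80% of the time, one hit in the fuselage only 20% of the time.
loss_probability = {"engine": 0.8, "fuselage": 0.2}

for _ in range(N):
    hit = random.choice(["engine", "fuselage"])  # hits land evenly in reality
    if random.random() > loss_probability[hit]:  # the plane survives and returns
        returned_hits[hit] += 1

# Among the returning planes, fuselage hits dominate -- not because the
# fuselage is hit more often, but because engine hits rarely make it home.
print(returned_hits)  # roughly {'engine': 10000, 'fuselage': 40000}
```

Counting only the survivors, the data seem to say “reinforce the fuselage”; counting the planes that never came back reverses the conclusion, which is exactly Wald’s point.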

This little anecdote–from what seems like a fascinating book, by the way–reminds us that where you look sometimes makes all the difference. A truism, certainly, but no less true because of it. If in thinking about new technologies (or those old ones, which are no less consequential for having lost the radiance of novelty) we look only at the potential accident, then we may miss what matters most.

As more than a few critics have noted over the years, our thinking about technology is often already compromised by a technocratic frame of mind. We are, in such cases, already evaluating technology on its own terms. What we need, then, is to recover ways of thinking that don’t already assume technological standards. Admittedly, this can be a challenging project. It requires our breaking long-engrained habits of thought–habits of thought which are all the more difficult to escape because they take on the cast of common sense. My point here is to suggest that one step in that direction is to let go of the assumption that any smoothly operating, well-functioning technology is ipso facto a good and unproblematic technology.