What Emerson Knew About Google

As a rule, I don’t think of myself as an Emersonian–rather the opposite, in fact. But while I usually find myself arguing with Emerson as I read him, I find it a profitable argument to join and Emerson’s voice a spirited counterpoint to my own intellectual tendencies. That said, here’s a passage from “Self-Reliance” that jumped out at me today:

“The civilized man has built a coach, but has lost the use of his feet. He is supported on crutches, but lacks so much support of muscle. He has a fine Geneva watch, but he fails of the skill to tell the hour by the sun. A Greenwich nautical almanac he has, and so being sure of the information when he wants it, the man in the street does not know a star in the sky. The solstice he does not observe, the equinox he knows as little; and the whole bright calendar of the year is without a dial in his mind. His note-books impair his memory; his libraries overload his wit; the insurance-office increases the number of accidents; and it may be a question whether machinery does not encumber; [....]”

The Internet, of course, is our almanac.

Friday Night Links

Here’s another round of items for your consideration.

At Balkinization, Frank Pasquale is interviewed about his forthcoming book, The Black Box Society: The Secret Algorithms that Control Money and Information.

Mike Bulajewski offers a characteristically insightful and well-written review of the movie Her. And while at his site, I was reminded of his essay on civility from late last year. In light of the recent discussion about civility and its uses, I’d encourage you to read it.

At the New Yorker, Nick Paumgarten reflects on experience and memory in the age of GoPro.

In the LARB, Nick Carr has a sharp piece on Facebook’s social experiments from earlier this year.

At Wired, Patrick Lin looks at robot cars with adjustable ethics settings and, at The Boston Globe, Leon Neyfakh asks, “Can Robots Be Too Nice?”

And lastly, Evan Selinger considers one critical review of Nick Carr’s The Glass Cage: Automation and Us and takes a moment to explore some of the fallacies deployed against critics of technology.

Cheers!

Are Human Enhancement and AI Incompatible?

A few days ago, in a post featuring a series of links to stories about new and emerging technologies, I included a link to a review of Nick Bostrom’s new book, Superintelligence: Paths, Dangers, Strategies. Not long afterwards, I came across an essay adapted from Bostrom’s book on Slate’s “Future Tense” blog. The excerpt is given the cheerfully straightforward title, “You Should Be Terrified of Super Intelligent Machines.”

I’m not sure that Bostrom himself would put it quite like that. I’ve long thought of Bostrom as one of the more enthusiastic proponents of a posthumanist vision of the future. Admittedly, I’ve not read a great deal of his work (including this latest book). I first came across Bostrom’s name in Cary Wolfe’s What Is Posthumanism?, which led me to Bostrom’s article, “A History of Transhumanist Thought.”

For his part, Wolfe sought to articulate a more persistently posthumanist vision for posthumanism, one which dispensed with humanist assumptions about human nature altogether. In Wolfe’s view, Bostrom was guilty of building his transhumanist vision on a thoroughly humanist understanding of the human being. The humanism in view here, it’s worth clarifying, is that which we ordinarily associate with the Renaissance or the Enlightenment, one which highlights autonomous individuality, agency, and rationality. It is also one which assumes a Platonic or Cartesian mind/body dualism. Wolfe, like N. Katherine Hayles before him, finds this to be misguided and misleading, but I digress.

Whether Bostrom would’ve chosen such an alarmist title or not, his piece does urge us to lay aside the facile assumption that super-intelligent machines will be super-intelligent in a predictably human way. This is an anthropomorphizing fallacy. Consequently, we should consider the possibility that super-intelligent machines will pursue goals that may, as an unintended side-effect, lead to human extinction. I suspect that in the later parts of his book, Bostrom might have a few suggestions about how we might escape such a fate. I also suspect that none of these suggestions include the prospect of halting or limiting the work being done to create super-intelligent machines. In fact, judging from the chapter titles and sub-titles, it seems that the answer Bostrom advocates involves figuring out how to instill appropriate values in super-intelligent machines. This brings us back to the line of criticism articulated by Wolfe and Hayles: the traditionally humanist project of rational control and mastery is still the underlying reality.

It does seem reasonable for Bostrom, who is quite enthusiastic about the possibilities of human enhancement, to be a bit wary about the creation of super-intelligent machines. It would be unfortunate indeed if, having finally figured out how to download our consciousness or perfect a cyborg platform for it, a clever machine of our making later came around, pursuing some utterly trivial goal, and decided, without a hint of malice, that it needed to eradicate these post-human humans as a step toward the fulfillment of its task. Unfortunate, and nihilistically comic.

It is interesting to consider that these two goals we rather blithely pursue–human enhancement and artificial intelligence–may ultimately be incompatible. Of course, that is a speculative consideration, and, to some degree, so is the prospect of ever achieving either of those two goals, at least as their most ardent proponents envision their fulfillment. But let us consider it for just a moment anyway for what it might tell us about some contemporary versions of the posthumanist hope.

Years ago, C.S. Lewis famously warned that the human pursuit of mastery over Nature would eventually amount to the human pursuit of mastery over Humanity, and what this would really mean is the mastery of some humans over others. This argument is all the more compelling now, some 70 or so years after Lewis made it in The Abolition of Man. It would seem, though, that an updated version of that argument would need to include the further possibility that the tools we develop to gain mastery over nature and then humanity might finally destroy us, whatever form the “us” at that unforeseeable juncture happens to take. Perhaps this is the tacit anxiety animating Bostrom’s new work.

And this brings us back, once again, to the kind of humanism at the heart of posthumanism. The posthumanist vision that banks on some sort of eternal consciousness–the same posthumanist vision that leads Ray Kurzweil to take 150 vitamins a day–that posthumanist vision is still the vision of someone who intends to live forever in some clearly self-identifiable form. It is, in this respect, a thoroughly Western religious project insofar as it envisions and longs for the immortality of the individuated self. We might even go so far as to call it, in an obviously provocative move, a Christian heresy.

Finally, our potentially incompatible technical aspirations reveal something of the irrationality, or a-rationality if you prefer, at the heart of our most rational project. Technology and technical systems assume rationality in their construction and their operation. Thinking about their potential risks and trying to prevent and mitigate them is also a supremely rational undertaking. But at the heart of all this rational work lies a colossal unspoken absence: a black hole of knowledge that begins with the simple fact of our inability to foresee the full ramifications of anything we do or make, and that sucks into its darkness our ability to expertly anticipate and plan and manage with anything like the confident certainty we project.

It is one thing to live with this relative risk and uncertainty when we are talking about simple tools and machines (hammers, bicycles, etc.). It is another thing when we are talking about complex technical systems (automotive transportation, power grids, etc.). It is altogether something else when we are talking about technical systems that may fundamentally alter our humanity or else eventuate in its annihilation. The fact that we don’t even know how seriously to take these potential threats, that we cannot comfortably distinguish between what is still science fiction and what will, in fact, materialize in our lifetimes, that’s a symptom of the problem, too.

I keep coming back to the realization that our thinking about technology is often inadequate or ineffectual because it is starting from the wrong place; or, to put it another way, it is already proceeding from assumptions grounded in the dynamics of technology and technical systems, so it bends back toward the technological solution. If we already tacitly value efficiency, for example, if efficiency is already an assumed good that no longer needs to be argued for, then we will tend to pursue it by whatever possible means under all possible circumstances. Whenever new technologies appear, we will judge them in light of this governing preference for efficiency. If the new technology affords us a more efficient way of doing something, we will tend to embrace it.

But the question remains, why is efficiency a value that is so pervasively taken for granted? If the answer seems commonsensical, then, I’d humbly suggest that we need to examine it all the more critically. Perhaps we will find that we value efficiency because this virtue native to the working of technical and instrumental systems has spilled over into what had previously been non-technical and non-instrumental realms of human experience. Our thinking is thus already shaped (to put it in the most neutral way possible) by the very technical systems we are trying to think about.

This is but one example of the dynamic. Our ability to think clearly about technology will depend in large measure on our ability to extricate our thinking from the criteria and logic native to technological systems. This is, I fully realize, a difficult task. I would never claim that I’ve achieved this clarity of thought myself, but I do believe that our thinking about technology depends on it.

There’s a lot more to be said, but I’ll leave it there for now. Your thoughts, as always, are welcome.

Technology in the Classroom

I want to briefly draw your attention to a series of related posts about technology in the classroom, beginning with Clay Shirky’s recent post explaining his decision to have students put their wired digital devices away during class. Let me say that again: Clay Shirky has decided to ban laptops from his classroom. Clay Shirky has long been one of the Internet’s leading advocates and cheerleaders, so this is a pretty telling indication of the scope of the problem.

I particularly appreciated the way Shirky focused on what we might call the ecosystem of the classroom. The problem is not simply that connected devices distract the students who use them and hamper their ability to learn:

“Anyone distracted in class doesn’t just lose out on the content of the discussion, they create a sense of permission that opting out is OK, and, worse, a haze of second-hand distraction for their peers. In an environment like this, students need support for the better angels of their nature (or at least the more intellectual angels), and they need defenses against the powerful short-term incentives to put off complex, frustrating tasks. That support and those defenses don’t just happen, and they are not limited to the individual’s choices. They are provided by social structure, and that structure is disproportionately provided by the professor, especially during the first weeks of class.”

I came across Shirky’s post via Nick Carr, who also considers a handful of studies that appear to support the decision to create a relatively low-tech classroom environment. I recommend you click through to read the whole thing.

If you’re thinking that this is a rather retrograde, reactionary move to make, then I’d suggest taking a quick look at Alan Jacobs’s brief comments on the matter.

You might also want to ask yourself why the late Steve Jobs; Chris Anderson, the former editor at Wired and CEO of a robotics company; Evan Williams, the founder of Blogger, Twitter, and Medium; and a host of other tech-industry heavyweights deploy seemingly draconian rules for how their own children relate to digital devices and the Internet. Here’s Anderson: “My kids accuse me and my wife of being fascists and overly concerned about tech, and they say that none of their friends have the same rules.”

Perhaps they are on to something, albeit in a “do-as-I-say-not-as-I-do” sort of way. Nick Bilton has the story here.

__________________________

Okay, and now a quick administrative note. Rather than create a separate entry for this, I thought it best just to raise the matter at the tail end of this shorter post. Depending on how you ordinarily get to this site, you may have noticed that the feed for this blog now only gives you a snippet view and asks you to click through to read the whole.

I initially made this change for rather self-serving reasons related to the architecture of WordPress, and it was meant to be a temporary change. However, I realized that it resolved a couple of frustrations I’d had for a while.

The first of these centered on my mildly obsessive nature when it comes to editing and revising. Invariably, regardless of the care I take before publishing, posts go out with at least one or two typos, inelegant phrases, etc. When I catch them later, I fix them, but those who get posts via email never see the corrections. If you have to click over to read the whole post, however, you will always see the latest, cleanest version. Relatedly, I sometimes find it preferable to update a post with some related information or new links rather than create a new post (e.g.). Email subscribers would be unlikely to ever see those updates unless they clicked through to the site for the most current version of the post.

Consequently, I’m considering keeping the snippet feed. I do realize, though, that this might be mildly annoying, involving as it does an extra click or two. So, my question to you is this: do you care? I have a small but dedicated readership, and I’d hate to make a change that might ultimately discourage you from continuing to read. If you have any thoughts on the matter, feel free to share in the comments below or via email.

Also, I’ve been quite negligent about replying to comments of late. When I get a chance to devote some time to this blog, which is not often, I’m opting to write instead. I really appreciate the comments, though, and I’ll do my best to interact as time allows.

A Few Items for Your Consideration

Here are a few glimpses of the future ranging from the near and plausible, to the distant and uncertain. In another world–one, I suppose, in which I get paid to write these posts–I’d write more about each. In this world, I simply pass them along for your consideration.

Google Glass App Reads Your Emotions

“A new Glassware App for Google Glass will uncover a person’s emotion, age range and gender just by facial recognition technology ….

Facial recognition has always been seen with nervousness, as people tend to prefer privacy over the ability to see a stranger’s age or gender. But these two apps prove sometimes letting a robot know you’re sad can help for a better relationship between fellow humans. Letting the robot lead has proven to increase human productivity and better the ebb and flow of a work space, a partnership, any situation dealing with human communication.

The SHORE app is currently not available for download, but you can try US+ now. May the robots guide us to a more humane future.”

GM Cars to Monitor Drivers

“General Motors, the largest US auto manufacturer by sales, is preparing to launch the world’s first mass-produced cars with eye- and head-tracking technology that can tell whether drivers are distracted, according to people with knowledge of the plans ….

The company is investing in technology that will be able to tell how hard a driver is thinking by monitoring the dilation of the pupils, and combines facial information with sensors for vital signs such as blood alcohol levels and heart rate.”

Electrical Brain Stimulation

“Transcranial direct current stimulation (TDCS), which passes small electrical currents directly on to the scalp, stimulates the nerve cells in the brain (neurons). It’s non-invasive, extremely mild and the US military even uses TDCS in an attempt to improve the performance of its drone pilots.

The idea is that it makes the neurons more likely to fire and preliminary research suggests electrical stimulation can improve attention as well as have a positive impact on people with cognitive impairments and depression ….

And more worryingly for him, people are also increasingly making brain stimulation kits themselves. This easily ‘puts the technology in the realms of clever teenagers,’ adds Dr Davis.

An active forum on reddit is devoted to the technology, and people there have complained of ‘burning to the scalp’. Another user wrote that they ‘seemed to be getting angry frequently’ after using TDCS.”

Preparing for Superintelligent AI

“Bostrom takes a cautious view of the timing but believes that, once made, human-level AI is likely to lead to a far higher level of ‘superintelligence’ faster than most experts expect – and that its impact is likely either to be very good or very bad for humanity.

The book enters more original territory when discussing the emergence of superintelligence. The sci-fi scenario of intelligent machines taking over the world could become a reality very soon after their powers surpass the human brain, Bostrom argues. Machines could improve their own capabilities far faster than human computer scientists.”

We’ve got some thinking to do, folks, careful, patient thinking. Happily, we don’t have to do that thinking alone and in isolation. Here is Evan Selinger helping us think clearly about our digital tools with his usual thoughtful analysis: “Why Your Devices Shouldn’t Do the Work of Being You.”

Here, too, is a critical appraisal of the religiously intoned hopes of the cult of the Singularity.

Finally, Nick Carr invites us to cautiously consider the potential long-term consequences of the recently unveiled Apple Watch since “never before have we had a tool that promises to be so intimate a companion and so diligent a monitor as the Apple Watch.”