Cathedrals, Pyramids, or iPhones: Toward a Very Tentative Theory of Technological Innovation

A couple of years back, while I was on my World’s Fair kick, I wrote a post or two (or three) about how we imagine the future, or, rather, how we fail to imagine the future. The World’s Fairs, particularly those held between the 1930s and the 1970s, offered a rather grand and ambitious vision of what the future would hold. Granted, much of what made up that vision never quite materialized, and much of it now seems a tad hokey. Additionally, much of it amounted to a huge corporate ad campaign. Nevertheless, the imagined future was impressive in its scope; it was utopian. The three posts linked above each suggested that, relative to the World’s Fairs of the mid-20th century, we seem to have a rather impoverished imagination when it comes to the future.

One of those posts cited a 2011 essay by Peter Thiel, “The End of the Future,” outlining the sources of Thiel’s pessimism about the rate of technological advance. More recently, Dan Wang has cataloged a series of public statements by Thiel supporting his contention that technological innovation has slowed, and dangerously so. Thiel, who made his mark and his fortune as a founder of PayPal, has emerged over the last few years as one of Silicon Valley’s leading intellectuals. His pessimism, then, seems to run against the grain of his milieu. Thiel, however, is not pessimistic about the potential of technology itself; rather, as I understand him, he is critical of our inability to more boldly imagine what we could do with technology. His view is neatly summed up in his well-known quip, “We wanted flying cars, instead we got 140 characters.”

Thiel is not the only one who thinks that we’ve been beset by a certain gloomy malaise when it comes to imagining the future. Last week, in the pages of the New York Times Magazine, Jayson Greene wondered, with thinly veiled exasperation, why contemporary science-fiction is so “glum” about AI. The article is a bit muddled at points–perhaps because the author, noting the assistance of his machines, believes it is not even half his–but it registers what seems to be an increasingly recurring complaint. Just last month, for instance, I noted a similar article in Wired that urged authors to stop writing dystopian science-fiction. Behind each of these pieces lies an implicit question: Where has our ability to imagine a hopeful, positive vision for the future gone?

Kevin Kelly is wondering the same thing. In fact, he was willing to pay for someone to tell him a positive story about the future. I’ve long thought of Kelly as one of the most optimistic of contemporary tech writers, yet of late even he appears to be striking a more ambiguous note. Perhaps needing a fresh infusion of hope, he took to Twitter with this message:

“I’ll pay $100 for the best 100-word description of a plausible technological future in 100 years that I would like to live in. Email me.”

Kelly got 23 responses, and then he constructed his own 100-word vision for the future. It is instructive to read the submissions. By “instructive,” I mean intriguing, entertaining, disconcerting, and disturbing by turns. In fact, when I first read through them I thought I’d dedicate a post to analyzing these little techno-utopian vignettes. Suffice it to say, a few people, at least, are still nurturing an expansive vision for the future.

But are their stories the exceptions that prove the rule? To put it another way, is the dominant cultural zeitgeist dystopian or utopian with regard to the future? Of course, as C.S. Lewis once put it, “What you see and what you hear depends a great deal on where you are standing. It also depends on what sort of person you are.” Whatever the case may be, there certainly seem to be a lot of people who think the zeitgeist is dystopian or, at best, depressingly unimaginative. I’m not sure they are altogether wrong about this, even if the whole story is more complicated. So why might this be?

To be clear before proceeding down this line of inquiry, I’m not so much concerned with whether we ought to be optimistic or pessimistic about the future. (The answer in any case is neither.) I’m not, in other words, approaching this topic from a normative perspective. Rather, I want to poke and prod the zeitgeist a little bit to see if we can’t figure out what is going on. So, in that spirit, here are a few loosely organized thoughts.

First off, our culture is, in large measure, driven by consumerism. This, of course, is little more than a cliché, but it is no less true for that. Consumerism is finally about the individual, and individual aspirations, by their very nature, tend to be narrow and short-sighted. It is as if the potential creative force of our collective imagination is splintered into the millions of individual wills it is made to serve.

David Nye noted this devolution of our technological aspirations in his classic work on the American technological sublime. The sublime experience that once attended our encounters with nature, and then with technological creations of awe-inspiring size and dynamism, has now given way to what Nye called the consumer sublime. “Unlike the Ford assembly line or Hoover Dam,” Nye explains, “Disneyland and Las Vegas have no use value. Their representations of sublimity and special effects are created solely for entertainment. Their epiphanies have no referents; they reveal not the existence of God, not the power of nature, not the majesty of human reason, but the titillation of representation itself.”

The consumer sublime, which Nye also calls an “egotistical sublime,” amounts to “an escape from the very work, rationality, and domination that once were embodied in the American technological sublime.”

Looking at the problem of consumerism from another vantage point, consider Nicholas Carr’s theory about the hierarchy of innovation. Carr’s point of departure included Peter Thiel’s complaint about the stagnation of technological innovation cited above. In response, Carr suggested that innovation proceeds along a path more or less parallel to Maslow’s famous hierarchy of human needs. We begin by seeking to satisfy very basic needs, those related to our survival. As those basic needs are met, we are able to think about more complex needs for social interaction, personal esteem, and self-actualization.

In Carr’s stimulating repurposing of Maslow’s hierarchy, technological innovation proceeds from technologies of survival to technologies of self-fulfillment. Carr doesn’t think that these levels of innovation are neatly realized in some clean, linear fashion. But he does think that at present the incentives, “monetary and reputational,” are, in a darkly eloquent phrasing, “bending the arc of innovation … toward decadence.” Away, that is, from grand, highly visible, transformative technologies.

The end game of this consumerist reduction of technological innovation may be what Ian Bogost recently called “future ennui.” “The excitement of a novel technology (or anything, really),” Bogost writes,

“has been replaced—or at least dampened—by the anguish of knowing its future burden. This listlessness might yet prove even worse than blind boosterism or cynical naysaying. Where the trauma of future shock could at least light a fire under its sufferers, future ennui exudes the viscous languor of indifferent acceptance. It doesn’t really matter that the Apple Watch doesn’t seem necessary, no more than the iPhone once didn’t too. Increasingly, change is not revolutionary, to use a word Apple has made banal, but presaged.”

Bogost adds, “When one is enervated by future ennui, there’s no vigor left even to ask if this future is one we even want.” The technological sublime, then, becomes the consumer sublime, which becomes future ennui. This is how technological innovation ends, not with a bang but a sigh.

The second point I want to make about the pessimistic zeitgeist centers on our Enlightenment inheritance. The Enlightenment bequeathed to us, among other things, two articles of faith: the notion of inevitable moral progress and the notion of inevitable techno-scientific progress. Together they yielded what we tend to refer to simply as the Enlightenment’s notion of Progress, and together they cultivated hope and incited action. Unfortunately, the two were sundered by the accumulation of tragedy and despair we call the twentieth century. Techno-scientific progress was a rosy notion so long as we imagined that moral progress advanced hand in hand with it. Decoupled from Enlightenment confidence in the perfectibility of humanity, techno-scientific progress leaves us with the dystopian imagination.

Interestingly, the trajectory of the American World’s Fairs illustrates both of these points. Generally speaking, the World’s Fairs of the nineteenth and early twentieth century subsumed technology within their larger vision of social progress. By the 1930s, the Fairs presented technology as the force upon which the realization of the utopian social vision depended. The 1939 New York Fair marked the turning point: it featured a utopian social vision powered by technological innovation. From that point forward, technological innovation increasingly became a goal in itself rather than a means toward a utopian society, and it was increasingly a consumer affair of diminishing scope.

That picture was painted in rather broad strokes, but I think it will bear scrutiny. Whether the illustration ultimately holds up or not, however, I certainly think the claim stands. The twentieth century shattered our collective optimism about human nature; consequently, empowering human beings with ever more powerful technologies became the stuff of nightmares rather than dreams.

Thirdly, technological innovation on a grand scale is an act of sublimation, and we are too self-knowing to sublimate. Let me lead into this discussion by acknowledging that this point may be too subtle to be true, so I offer it circumspectly. According to certain schools of psychology, sublimation describes the process by which we channel or redirect certain desires, often destructive or transgressive desires, into productive action. On this view, the great works of civilization are powered by sublimation. But, to borrow a line cited by the late Philip Rieff, “if you tell people how they can sublimate, they can’t sublimate.” In other words, sublimation is a tacit process. It is the by-product of a strong buy-in to cultural norms and ideals by which individual desire is subsumed into some larger purpose. It is the sort of dynamic, in other words, that conscious awareness hampers and that ironic detachment, our default posture toward reality, destroys. Make of that theory what you will.

The last point builds on all that I’ve laid out thus far and perhaps even ties it all together … maybe. I want to approach it by noting one segment of the wider conversation about technology where a big, positive vision for the future is nurtured: the Transhumanist movement. This should go without saying, but I’ll say it anyway just to put it beyond doubt: I don’t endorse the Transhumanist vision. By saying that it is a “positive” vision I am only saying that it is understood as such by those who adhere to it. Now, with that out of the way, here is the thing to recognize about the Transhumanist vision: its aspirations are quasi-religious in character.

I mean that in at least a couple of ways. For instance, it may be understood as a reboot of Gnosticism, particularly given its disparagement of the human body and its attendant limitations. Relatedly, it often aspires to a disembodied, virtual existence that sounds a lot like the immortality of the soul espoused by Western religions. It is in this way a movement focused on technologies of the self, that highest order of innovation in Carr’s pyramid; but rather than seeking technologies that are mere accouterments of the self, it pursues technologies that work on the self to push the self along to the next evolutionary plane. Paradoxically, then, technology in the Transhumanist vision works on the self to transcend the self as it now exists.

Consequently, the scope of the Transhumanist vision stems from the Transhumanist quest for transcendence. The technologies of the self that Carr had in mind were technologies centered on the existing, immanent self. Putting all of this together, then, we might say that technologies of the immanent self devolve into gadgets with ever diminishing returns–consumerist ephemera–yielding future ennui. The imagined technologies of the would-be transcendent self, however, are seemingly more impressive in their aims and inspire cultish devotion in those who hope for them. But they are still technologies of the self. That is to say, they are not animated by a vision of social scope nor by a project of political consequence. This lends the whole movement a certain troubling naiveté.

Perhaps it also ultimately limits technological innovation. Grand technological projects of the sort that people like Thiel and Kelly would like to see us at least imagine are animated by a culturally diffused vision, often religious or transcendent in nature, that channels individual action away from the conscious pursuit of immediate satisfaction.

The other alternative, of course, is coerced labor. Hold that thought.

I want to begin drawing this over-long post to a close by offering it as an overdue response to Pascal-Emmanuel Gobry’s discussion of Peter Thiel, the Church, and technological innovation. Gobry agreed with Thiel’s pessimism and lamented that the Church was not more active in driving technological innovation. He offered the great medieval cathedrals as an example of the sort of creation and innovation that the Church once inspired. I heartily endorse his estimation of the cathedrals as monumental works of astounding technical achievement, artistic splendor, and transcendent meaning. And, as Gobry notes, they were the first such monumental works not built on the back of forced labor.

For projects of that scale to succeed, individuals must either be animated by ideals that drive their willing participation or they must be forced by power or circumstance. In other words, cathedrals or pyramids. Cathedrals represent innovation born of freedom and transcendent ideals. The pyramids represent innovation born of forced labor and transcendent ideals.

The third alternative, of course, is the iPhone. I use the iPhone here to stand for consumer driven innovation. Innovation that is born of relative freedom (and forced labor) but absent a transcendent ideal to drive it beyond consumerist self-actualization. And that is where we are stuck, perhaps, with technological stagnation and future ennui.

But here’s the observation I want to leave you with. Our focus on technological innovation as the key to the future is a symptom of the problem; it suggests strongly that we are already compromised. The cathedrals were not built by people possessed merely of the desire to innovate. Technological innovation was a means to a culturally inspired end. [See the Adams quote below.] Insofar as we have reversed the relationship and allowed technological innovation to be our raison d’être, we may find it impossible to imagine a better future, much less bring it about. With regard to the future of society, if the answer we’re looking for is technological, then we’re not asking the right questions.

_____________________________________

You can read a follow-up piece here.

N.B. The initial version of this post referred to “slave” labor with regard to the pyramids. A reader pointed out to me that the pyramids were not built by slaves but by paid craftsmen. This prompted me to do a little research. It does indeed seem to be the case that “slaves,” given what we mean by the term, were not the primary source of labor on the pyramids. However, the distinction seems to me a fine one. These workers appear to have been subject to various degrees of “obligatory” labor, although they were also provided with food, shelter, and tax breaks. While not quite slave labor, it was not quite the labor of free people either. By contrast, you can read about the building of the cathedrals here. That said, I’ve revised the post to omit the references to slavery.

Update: Henry Adams knew something of the cultural vision at work in the building of the cathedrals. Note the last line, especially:

“The architects of the twelfth and thirteenth centuries took the Church and the universe for truths, and tried to express them in a structure which should be final.  Knowing by an enormous experience precisely where the strains were to come, they enlarged their scale to the utmost point of material endurance, lightening the load and distributing the burden until the gutters and gargoyles that seem mere ornament, and the grotesques that seem rude absurdities, all do work either for the arch or for the eye; and every inch of material, up and down, from crypt to vault, from man to God, from the universe to the atom, had its task, giving support where support was needed, or weight where concentration was felt, but always with the condition of showing conspicuously to the eye the great lines which led to unity and the curves which controlled divergence; so that, from the cross on the flèche and the keystone of the vault, down through the ribbed nervures, the columns, the windows, to the foundation of the flying buttresses far beyond the walls, one idea controlled every line; and this is true of St. Thomas’ Church as it is of Amiens Cathedral.  The method was the same for both, and the result was an art marked by singular unity, which endured and served its purpose until man changed his attitude toward the universe.”

 

Arendt on Trial

The recent publication of an English translation of Bettina Stangneth’s Eichmann Before Jerusalem: The Unexamined Life of a Mass Murderer has yielded a handful of reviews and essays, like this one, framing the book as a devastating critique of Hannah Arendt’s Eichmann in Jerusalem: A Report on the Banality of Evil.

The critics seem to assume that Arendt’s thesis amounted to a denial or diminishment of Eichmann’s wickedness. Arendt’s famous formulation, “the banality of evil,” is taken to mean that Eichmann was simply a thoughtless bureaucrat thoughtlessly following orders. Based on Stangneth’s exhaustive work, they conclude that Eichmann was anything but thoughtless in his orchestration of the death of millions of Jews. Ergo, Arendt was wrong about Eichmann.

But this casual dismissal of Arendt’s argument is built on a misunderstanding of her claims. Arendt certainly believed that Eichmann’s deeds were intentional and genuinely evil. She believed he deserved to die for his crimes. She was not taken in by his performance on the witness stand in Jerusalem. She did consider him thoughtless, but thoughtlessness, as she intended the word, was a more complex concept than the critics have assumed.

At least two rejoinders have been published in an attempt to clarify and defend Arendt’s position. Both agree that Stangneth herself was not nearly as dismissive of Arendt as the second-hand critics, and both argue that Stangneth’s work does not undermine Arendt’s thesis, properly understood.

The first of these pieces, “Did Eichmann Think?” by Roger Berkowitz, appeared at The American Interest, and the second, “Who’s On Trial, Eichmann or Arendt?” by Seyla Benhabib, appeared at the NY Times’ philosophy blog, The Stone. Berkowitz’s piece is especially instructive. Here is the conclusion:

“In other words, evil originates in the neediness of lonely, alienated bourgeois people who live lives so devoid of higher meaning that they give themselves fully to movements. Such joiners are not stupid; they are not robots. But they are thoughtless in the sense that they abandon their independence, their capacity to think for themselves, and instead commit themselves absolutely to the fictional truth of the movement. It is futile to reason with them. They inhabit an echo chamber, having no interest in learning what others believe. It is this thoughtless commitment that permits idealists to imagine themselves as heroes and makes them willing to employ technological implements of violence in the name of saving the world.”

Do read the rest.

What Emerson Knew About Google

As a rule, I don’t think of myself as an Emersonian–rather the opposite, in fact. But while I usually find myself arguing with Emerson as I read him, I find it a profitable argument to join and Emerson’s voice a spirited counterpoint to my own intellectual tendencies. That said, here’s a passage from “Self-Reliance” that jumped out at me today:

“The civilized man has built a coach, but has lost the use of his feet. He is supported on crutches, but lacks so much support of muscle. He has a fine Geneva watch, but he fails of the skill to tell the hour by the sun. A Greenwich nautical almanac he has, and so being sure of the information when he wants it, the man in the street does not know a star in the sky. The solstice he does not observe, the equinox he knows as little; and the whole bright calendar of the year is without a dial in his mind. His note-books impair his memory; his libraries overload his wit; the insurance-office increases the number of accidents; and it may be a question whether machinery does not encumber; [….]”

The Internet, of course, is our almanac.

Friday Night Links

Here’s another round of items for your consideration.

At Balkinization, Frank Pasquale is interviewed about his forthcoming book, The Black Box Society: The Secret Algorithms that Control Money and Information.

Mike Bulajewski offers a characteristically insightful and well-written review of the movie Her. And while at his site, I was reminded of his essay on civility from late last year. In light of the recent discussion about civility and its uses, I’d encourage you to read it.

At the New Yorker, Nick Paumgarten reflects on experience and memory in the age of GoPro.

In the LARB, Nick Carr has a sharp piece on Facebook’s social experiments early this year.

At Wired, Patrick Lin looks at robot cars with adjustable ethics settings and, at The Boston Globe, Leon Neyfakh asks, “Can Robots Be Too Nice?”

And lastly, Evan Selinger considers one critical review of Nick Carr’s The Glass Cage: Automation and Us and takes a moment to explore some of the fallacies deployed against critics of technology.

Cheers!

Are Human Enhancement and AI Incompatible?

A few days ago, in a post featuring a series of links to stories about new and emerging technologies, I included a link to a review of Nick Bostrom’s new book, Superintelligence: Paths, Dangers, Strategies. Not long afterwards, I came across an essay adapted from Bostrom’s book on Slate’s “Future Tense” blog. The excerpt is given the cheerfully straightforward title, “You Should Be Terrified of Super Intelligent Machines.”

I’m not sure that Bostrom himself would put it quite like that. I’ve long thought of Bostrom as one of the more enthusiastic proponents of a posthumanist vision of the future. Admittedly, I’ve not read a great deal of his work (including this latest book). I first came across Bostrom’s name in Cary Wolfe’s What Is Posthumanism?, which led me to Bostrom’s article, “A History of Transhumanist Thought.”

For his part, Wolfe sought to articulate a more persistently posthumanist vision for posthumanism, one which dispensed with humanist assumptions about human nature altogether. In Wolfe’s view, Bostrom was guilty of building his transhumanist vision on a thoroughly humanist understanding of the human being. The humanism in view here, it’s worth clarifying, is that which we ordinarily associate with the Renaissance or the Enlightenment, one which highlights autonomous individuality, agency, and rationality. It is also one which assumes a Platonic or Cartesian mind/body dualism. Wolfe, like N. Katherine Hayles before him, finds this to be misguided and misleading, but I digress.

Whether Bostrom would’ve chosen such an alarmist title or not, his piece does urge us to lay aside the facile assumption that super-intelligent machines will be super-intelligent in a predictably human way. This is an anthropomorphizing fallacy. Consequently, we should consider the possibility that super-intelligent machines will pursue goals that may, as an unintended side-effect, lead to human extinction. I suspect that in the later parts of his book, Bostrom might have a few suggestions about how we might escape such a fate. I also suspect that none of these suggestions include the prospect of halting or limiting the work being done to create super-intelligent machines. In fact, judging from the chapter titles and sub-titles, it seems that the answer Bostrom advocates involves figuring out how to instill appropriate values in super-intelligent machines. This brings us back to the line of criticism articulated by Wolfe and Hayles: the traditionally humanist project of rational control and mastery is still the underlying reality.

It does seem reasonable for Bostrom, who is quite enthusiastic about the possibilities of human enhancement, to be a bit wary about the creation of super-intelligent machines. It would be unfortunate indeed if, having finally figured out how to download our consciousness or perfect a cyborg platform for it, a clever machine of our making later came around, pursuing some utterly trivial goal, and decided, without a hint of malice, that it needed to eradicate these post-human humans as a step toward the fulfillment of its task. Unfortunate, and nihilistically comic.

It is interesting to consider that these two goals we rather blithely pursue–human enhancement and artificial intelligence–may ultimately be incompatible. Of course, that is a speculative consideration, and, to some degree, so is the prospect of ever achieving either of those two goals, at least as their most ardent proponents envision their fulfillment. But let us consider it for just a moment anyway for what it might tell us about some contemporary versions of the posthumanist hope.

Years ago, C.S. Lewis famously warned that the human pursuit of mastery over Nature would eventually amount to the human pursuit of mastery over Humanity, and what this would really mean is the mastery of some humans over others. This argument is all the more compelling now, some 70 or so years after Lewis made it in The Abolition of Man. It would seem, though, that an updated version of that argument would need to include the further possibility that the tools we develop to gain mastery over nature and then humanity might finally destroy us, whatever form the “us” at that unforeseeable juncture happens to take. Perhaps this is the tacit anxiety animating Bostrom’s new work.

And this brings us back, once again, to the kind of humanism at the heart of posthumanism. The posthumanist vision that banks on some sort of eternal consciousness–the same posthumanist vision that leads Ray Kurzweil to take 150 vitamins a day–that posthumanist vision is still the vision of someone who intends to live forever in some clearly self-identifiable form. It is, in this respect, a thoroughly Western religious project insofar as it envisions and longs for the immortality of the individuated self. We might even go so far as to call it, in an obviously provocative move, a Christian heresy.

Finally, our potentially incompatible technical aspirations reveal something of the irrationality, or a-rationality if you prefer, at the heart of our most rational project. Technology and technical systems assume rationality in their construction and their operation. Thinking about their potential risks and trying to prevent and mitigate them is also a supremely rational undertaking. But at the heart of all this rational work there is a colossal unspoken absence: a black hole of knowledge that, beginning with the simple fact of our inability to foresee the full ramifications of anything we do or make, sucks into its darkness our ability to expertly anticipate, plan, and manage with anything like the confident certainty we project.

It is one thing to live with this relative risk and uncertainty when we are talking about simple tools and machines (hammers, bicycles, etc.). It is another thing when we are talking about complex technical systems (automotive transportation, power grids, etc.). It is altogether something else when we are talking about technical systems that may fundamentally alter our humanity or else eventuate in its annihilation. The fact that we don’t even know how seriously to take these potential threats, that we cannot comfortably distinguish between what is still science fiction and what will, in fact, materialize in our lifetimes, that’s a symptom of the problem, too.

I keep coming back to the realization that our thinking about technology is often inadequate or ineffectual because it is starting from the wrong place; or, to put it another way, it is already proceeding from assumptions grounded in the dynamics of technology and technical systems, so it bends back toward the technological solution. If we already tacitly value efficiency, for example, if efficiency is already an assumed good that no longer needs to be argued for, then we will tend to pursue it by whatever possible means under all possible circumstances. Whenever new technologies appear, we will judge them in light of this governing preference for efficiency. If the new technology affords us a more efficient way of doing something, we will tend to embrace it.

But the question remains, why is efficiency a value that is so pervasively taken for granted? If the answer seems commonsensical, then, I’d humbly suggest that we need to examine it all the more critically. Perhaps we will find that we value efficiency because this virtue native to the working of technical and instrumental systems has spilled over into what had previously been non-technical and non-instrumental realms of human experience. Our thinking is thus already shaped (to put it in the most neutral way possible) by the very technical systems we are trying to think about.

This is but one example of the dynamic. Our ability to think clearly about technology will depend in large measure on our ability to extricate our thinking from the criteria and logic native to technological systems. This is, I fully realize, a difficult task. I would never claim that I’ve achieved this clarity of thought myself, but I do believe that our thinking about technology depends on it.

There’s a lot more to be said, but I’ll leave it there for now. Your thoughts, as always, are welcome.