Machines, Work, and the Value of People

Late last month, Microsoft released a “bot” that guesses your age based on an uploaded picture. The bot tended to be only marginally accurate and sometimes hilariously (or disconcertingly) wrong. What’s more, people quickly began having some fun with the program by uploading faces of actors playing fictional characters, such as Yoda or Gandalf. My favorite was Ian Bogost’s submission:

Shortly after the How Old bot had its fleeting moment of virality, Nathan Jurgenson tweeted the following:

This was an interesting observation, and it generated a few interesting replies. Jurgenson himself added, “much of the bigdata/algorithm debates miss how poor these often perform. many critiques presuppose & reify their untenable positivism.” He summed up this line of thought with this tweet: “so much ‘tech criticism’ starts first with uncritically buying all of the hype silicon valley spits out.”

Let’s pause here for a moment. All of this is absolutely true. Yet … it’s not all hype, not necessarily anyway. Let’s bracket the more outlandish claims made by the singularity crowd, of course. But take facial recognition software, for instance. It doesn’t strike me as wildly implausible that in the near future facial recognition programs will achieve a rather striking degree of accuracy.

Along these lines, I found Kyle Wrather’s replies to Jurgenson’s tweet particularly interesting. First, Wrather noted, “[How Old Bot] being wrong makes people more comfortable w/ facial recognition b/c it seems less threatening.” He then added, “I think people would be creeped out if we’re totally accurate. When it’s wrong, humans get to be ‘superior.'”

Wrather’s second comment points to an intriguing psychological dynamic. Certain technologies generate a degree of anxiety about the relative status of human beings or about what exactly makes human beings “special”; call it post-humanist angst, if you like.

Of course, not all technologies generate this sort of angst. When it first appeared, the airplane was greeted with awe and a little battiness (consider alti-man). But as far as I know, it did not result in any widespread fears about the nature and status of human beings. The seemingly obvious reason for this is that flying is not an ability that has ever defined what it means to be a human being.

It seems, then, that anxiety about new technologies is sometimes entangled with shifting assumptions about the nature or dignity of humanity. In other words, the fear that machines, computers, or robots might displace human beings may or may not materialize, but it does tell us something about how human nature is understood.

Is it that new technologies disturb existing, tacit beliefs about what it means to be a human, or is it the case that these beliefs arise in response to a new perceived threat posed by technology? I’m not entirely sure, but some sort of dialectical relationship is involved.

A few examples come to mind, and they track closely to the evolution of labor in Western societies.

During the early modern period, perhaps owing something to the Reformation’s insistence on the dignity of secular work, the worth of a human being gets anchored to their labor, most of which is, at this point in history, manual labor. The dignity of the manual laborer is later challenged by mechanization during the 18th and 19th centuries, and this results in a series of protest movements, most famously that of the Luddites.

Eventually, a new consensus emerges around the dignity of factory work, and this is, in turn, challenged by the advent of new forms of robotic and computerized labor in the mid-twentieth century.

Enter the so-called knowledge worker, whose short-lived ascendancy is presently threatened by advances in computers and AI.

I think this latter development helps explain our present fascination with creativity. It’s been over a decade since Richard Florida published The Rise of the Creative Class, but interest in and pontificating about creativity continues apace. What I’m suggesting is that this fixation on creativity is another recalibration of what constitutes valuable, dignified labor, which is also, less obviously perhaps, what is taken to constitute the value and dignity of the person. Manual labor and factory jobs give way to knowledge work, which now surrenders to creative work. As they say, nice work if you can get it.

Interestingly, each re-configuration not only elevated a new form of labor, but it also devalued the form of labor being displaced. Manual labor, factory work, even knowledge work, once accorded dignity and respect, are each reframed as tedious, servile, monotonous, and degrading just as they are being replaced. If a machine can do it, it suddenly becomes sub-human work.

(It’s also worth noting how displaced forms of work seem to re-emerge and regain their dignity in certain circles. I’m presently thinking of Matthew Crawford’s defense of manual labor and the trades. Consider as well this lecture by Richard Sennett, “The Decline of the Skills Society.”)

It’s not hard to find these rhetorical dynamics at play in the countless presently unfolding discussions of technology, labor, and what human beings are for. Take as just one example this excerpt from the recent New Yorker profile of venture capitalist Marc Andreessen (emphasis mine):

Global unemployment is rising, too—this seems to be the first industrial revolution that wipes out more jobs than it creates. One 2013 paper argues that forty-seven per cent of all American jobs are destined to be automated. Andreessen argues that his firm’s entire portfolio is creating jobs, and that such companies as Udacity (which offers low-cost, online “nanodegrees” in programming) and Honor (which aims to provide better and better-paid in-home care for the elderly) bring us closer to a future in which everyone will either be doing more interesting work or be kicking back and painting sunsets. But when I brought up the raft of data suggesting that intra-country inequality is in fact increasing, even as it decreases when averaged across the globe—America’s wealth gap is the widest it’s been since the government began measuring it—Andreessen rerouted the conversation, saying that such gaps were “a skills problem,” and that as robots ate the old, boring jobs humanity should simply retool. “My response to Larry Summers, when he says that people are like horses, they have only their manual labor to offer”—he threw up his hands. “That is such a dark and dim and dystopian view of humanity I can hardly stand it!”

As always, it is important to ask a series of questions:  Who’s selling what? Who stands to profit? Whose interests are being served? Etc. With those considerations in mind, it is telling that leisure has suddenly and conveniently re-emerged as a goal of human existence. Previous fears about technologically driven unemployment have ordinarily been met by assurances that different and better jobs would emerge. It appears that pretense is being dropped in favor of vague promises of a future of jobless leisure. So, it seems we’ve come full circle to classical estimations of work and leisure: all work is for chumps and slaves. You may be losing your job, but don’t worry, work is for losers anyway.

So, to sum up: Some time ago, identity and a sense of self-worth got hitched to labor and productivity. Consequently, each new technological displacement of human work appears to those being displaced as an affront to their dignity as human beings. Those advancing new technologies that displace human labor do so by demeaning existing work as below our humanity and promising more humane work as a consequence of technological change. While this is sometimes true (some work that human beings have been forced to perform has been inhuman), deployed as a universal truth it is little more than rhetorical cover for a significantly more complex and ambivalent reality.

Do Things Want?

Alan Jacobs’ 79 Theses on Technology were offered in the spirit of a medieval disputation, and they succeeded in spurring a number of stimulating responses in a series of essays posted to the Infernal Machine over the last two weeks. Along with my response to Jacobs’ provocations, I wanted to engage a debate between Jacobs and Ned O’Gorman about whether or not we may meaningfully speak of what technologies want. Here’s a synopsis of the exchange with my own commentary along the way.

O’Gorman’s initial response focused on the following theses from Jacobs:

40. Kelly tells us “What Technology Wants,” but it doesn’t: We want, with technology as our instrument.
41. The agency that in the 1970s philosophers & theorists ascribed to language is now being ascribed to technology. These are evasions of the human.
42. Our current electronic technologies make competent servants, annoyingly capricious masters, and tragically incompetent gods.
43. Therefore when Kelly says, “I think technology is something that can give meaning to our lives,” he seeks to promote what technology does worst.
44. We try to give power to our idols so as to be absolved of the responsibilities of human agency. The more they have, the less we have.

46. The cyborg dream is the ultimate extension of this idolatry: to erase the boundaries between our selves and our tools.

O’Gorman framed these theses by saying that he found it “perplexing” that Jacobs “is so seemingly unsympathetic to the meaningfulness of things, the class to which technologies belong.” I’m not sure, however, that Jacobs was denying the meaningfulness of things; rather, as I read him, he is contesting the claim that it is from technology that our lives derive their meaning. That may seem a fine distinction, but I think it is an important one. In any case, a little clarification about what exactly “meaning” entails may go a long way toward clarifying that aspect of the discussion.

A little further on, O’Gorman shifts to the question of agency: “Our technological artifacts aren’t wholly distinct from human agency; they are bound up with it.” It is on this ground that the debate mostly unfolds, although there is more than a little slippage between the question of meaning and the question of agency.

O’Gorman appealed to Mary Carruthers’ fascinating study of the place of memory in medieval culture, The Book of Memory: A Study of Memory in Medieval Culture, to support his claim, but I’m not sure the passage he cites does so. He is seeking to establish, as I read him, two claims. First, that technologies are things and things are meaningful. Second, that we may properly attribute agency to technology/things. Now here’s the passage he cites from Carruthers’ work (brackets and ellipses are O’Gorman’s):

“[In the middle ages] interpretation is not attributed to any intention of the man [the author]…but rather to something understood to reside in the text itself.… [T]he important “intention” is within the work itself, as its res, a cluster of meanings which are only partially revealed in its original statement…. What keeps such a view of interpretation from being mere readerly solipsism is precisely the notion of res—the text has a sense within it which is independent of the reader, and which must be amplified, dilated, and broken-out from its words….”

“Things, in this instance manuscripts,” O’Gorman adds, “are indeed meaningful and powerful.” But in this instance, the thing (res) in view is not, in fact, the manuscripts. As Carruthers explains at various other points in The Book of Memory, the res in this context is not a material thing, but something closer to the pre-linguistic essence or idea or concept that the written words convey. It is an immaterial thing.

That said, there are interesting studies that do point to the significance of materiality in medieval context. Ivan Illich’s In the Vineyard of the Text, for example, dwells at length on medieval reading as a bodily experience, an “ascetic discipline focused by a technical object.” Then there’s Caroline Bynum’s fascinating Christian Materiality: An Essay on Religion in Late Medieval Europe, which explores the multifarious ways matter was experienced and theorized in the late middle ages.

Bynum concludes that “current theories that have mostly been used to understand medieval objects are right to attribute agency to objects, but it is an agency that is, in the final analysis, both too metaphorical and too literal.” She adds that insofar as modern theorizing “takes as self-evident the boundary between human and thing, part and whole, mimesis and material, animate and inanimate,” it may be usefully unsettled by an encounter with medieval theories and praxis, which “operated not from a modern need to break down such boundaries but from a sense that they were porous in some cases, nonexistent in others.”

Of course, taking up Bynum’s suggestion does not entail a re-imagining of our smartphone as a medieval relic, although one suspects that there is but a marginal difference in the degree of reverence granted to both objects. The question is still how we might best understand and articulate the complex relationship between our selves and our tools.

In his reply to O’Gorman, Jacobs focused on O’Gorman’s penultimate paragraph:

“Of course technologies want. The button wants to be pushed; the trigger wants to be pulled; the text wants to be read—each of these want as much as I want to go to bed, get a drink, or get up out of my chair and walk around, though they may want in a different way than I want. To reserve ‘wanting’ for will-bearing creatures is to commit oneself to the philosophical voluntarianism that undergirds technological instrumentalism.”

It’s an interesting feature of the exchange from this point forward that O’Gorman and Jacobs at once emphatically disagree, and yet share very similar concerns. The disagreement is centered chiefly on the question of whether or not it is helpful or even meaningful to speak of technologies “wanting.” Their broad agreement, as I read their exchange, is about the inadequacy of what O’Gorman calls “philosophical voluntarianism” and “technological instrumentalism.”

In other words, if you begin by assuming that the most important thing about us is our ability to make rational and unencumbered choices, then you’ll also assume that technologies are neutral tools over which we can achieve complete mastery.

If O’Gorman means what I think he means by this–and what Jacobs takes him to mean–then I share his concerns as well. We cannot think well about technology if we think about technology as mere tools that we use for good or evil. This is the “guns don’t kill people, people kill people” approach to the ethics of technology, and it is, indeed, inadequate as a way of thinking about the ethical status of artifacts, as I’ve argued repeatedly.

Jacobs grants these concerns, but, with a nod to the Borg Complex, he also thinks that we do not help ourselves in facing them if we talk about technologies “wanting.” Here’s Jacobs’ conclusion:

“It seems that [O’Gorman] thinks the dangers of voluntarism are so great that they must be contested by attributing what can only be a purely fictional agency to tools, whereas I believe that the conceptual confusion this creates leads to a loss of a necessary focus on human responsibility, and an inability to confront the political dimensions of technological modernity.”

This seems basically right to me, but it prompted a second reply from O’Gorman that brought some further clarity to the debate. O’Gorman identified three distinct “directions” his disagreement with Jacobs takes: rhetorical, ontological, and ethical.

He frames his discussion of these three differences by insisting that technologies are meaningful by virtue of their “structure of intention,” which entails a technology’s affordances and the web of practices and discourse in which the technology is embedded. So far, so good, although I don’t think intention is the best choice of word. From here O’Gorman goes on to show why he thinks it is “rhetorically legitimate, ontologically plausible, and ethically justified to say that technologies can want.”

Rhetorically, O’Gorman appears to be advocating a Wittgensteinian, “look and see” approach. Let’s see how people are using language before we rush to delimit a word’s semantic range. To a certain degree, I can get behind this. I’ve advocated as much when it comes to the way we use the word “technology,” itself a term that abstracts and obfuscates. But I’m not sure that once we look we will find much. While our language may animate or personify our technology, I’m less sure that we typically speak about technology “wanting” anything. We do not ordinarily say things like “my iPhone wants to be charged,” “the car wants to go out for a drive,” or “the computer wants to play.” I can think of an exception or two, though. I have heard, for example, someone explain to an anxious passenger that the airplane “wants” to stay in the air. The phrase “what technology wants” owes much of its currency, such as it is, to the title of Kevin Kelly’s book, and I’m pretty sure Kelly means more by it than what O’Gorman might be prepared to endorse.

Ontologically, O’Gorman is “skeptical of attempts to tie wanting to will because willfulness is only one kind of wanting.” “What do we do with instinct, bodily desires, sensations, affections, and the numerous other forms of ‘wanting’ that do not seem to be a product of our will?” he wonders. Fair enough, but all of the examples he cites are connected with beings that are, in a literal sense, alive. Of course I can’t attribute all of my desires to my conscious will; sure, my dog wants to eat, and maybe in some sense my plant wants water. But there’s still a leap involved in saying that my clock wants to tell time. Wanting may not be neatly tied to willing, but I don’t see how it is not tied to sentience.

There’s one other point worth making at this juncture. I’m quite sympathetic to what is basically a phenomenological account of how our tools quietly slip into our subjective, embodied experience of the world. This is why I can embrace so much of O’Gorman’s case. Thinking back many years, I can distinctly remember a moment when I held a baseball in my hand and reflected on how powerfully I felt the urge to throw it, even though I was standing inside my home. This feeling is, I think, what O’Gorman wants us to recognize. The baseball wanted to be thrown! But how far does this kind of phenomenological account take us?

I think it runs into limits when we talk about technologies that do not enter quite so easily into the circuit of mind, body, and world. The case for the language of wanting is strongest the closer I am to my body; it weakens the further away we get from it. Even if we grant that the baseball in hand feels like it wants to be thrown, what exactly does the weather satellite in orbit want? I think this strongly suggests the degree to which the wanting is properly ours, even while acknowledging the degree to which it is activated by objects in our experience.

Finally, O’Gorman thinks that it is “perfectly legitimate and indeed ethically good and right to speak of technologies as ‘wanting.'” He believes this to be so because “wanting” is not only a matter of willing, it is “more broadly to embody a structure of intention within a given context or set of contexts.” Further, “Will-bearing and non-will-bearing things, animate and inanimate things, can embody such a structure of intention.”

“It is good and right,” O’Gorman insists, “to call this ‘wanting’ because ‘wanting’ suggests that things, even machine things, have an active presence in our life—they are intentional” and, what’s more, their “active presence cannot be neatly traced back to their design and, ultimately, some intending human.”

I agree with O’Gorman that the ethical considerations are paramount, but I’m finally unpersuaded that we are on firmer ground when we speak of technologies wanting, even though I recognize the undeniable importance of the dynamics that O’Gorman wants to acknowledge by speaking so.

Consider what O’Gorman calls the “structure of intention.” I’m not sure intention is the best word to use here. Intentionality resides in the subjective experience of the “I,” but it is true, as phenomenologists have always recognized, that intentionality is not unilaterally directed by the self-consciously willing “I.” It has conscious and non-conscious dimensions, and it may be beckoned and solicited by the world that it simultaneously construes through the workings of perception.

I think we can get at what O’Gorman rightly wants us to acknowledge without attributing “wanting” to objects. We may say, for instance, that objects activate our wanting as they are intended to do by design and also in ways that are unintended by any person. But it’s best to think of this latter wanting as an unpredictable surplus of human intentionality rather than inject a non-human source of wanting. The wanting is always mine, but it may be prompted, solicited, activated, encouraged, fostered, etc. by aspects of the non-human world. So, we may correctly talk about a structure of desire that incorporates non-human aspects of the world and thereby acknowledge the situated nature of our own wanting. Within certain contexts, if we were so inclined, we may even call it a structure of temptation.

To fight the good fight, as it were, we must acknowledge how technology’s consequences exceed and slip loose of our cost/benefit analyses, our rational planning, and our best intentions. We must take seriously how the use of technologies shapes our perception of the world and both enables and constrains our thinking and acting. But talk about what technology wants will ultimately obscure moral responsibility. “What the machine/algorithm wanted” too easily becomes the new “I was just following orders.” I believe this to be true because I believe that we have a proclivity to evade responsibility. Best, then, not to allow our language to abet our evasions.

Algorithms Who Art in Apps, Hallowed Be Thy Code

If you want to understand the status of algorithms in our collective imagination, Ian Bogost proposes the following exercise in his recent essay in the Atlantic: “The next time you see someone talking about algorithms, replace the term with ‘God’ and ask yourself if the sense changes any?”

If Bogost is right, then more often than not you will find the sense of the statement entirely unchanged. This is because, in his view, “Our supposedly algorithmic culture is not a material phenomenon so much as a devotional one, a supplication made to the computers we have allowed to replace gods in our minds, even as we simultaneously claim that science has made us impervious to religion.” Bogost goes on to say that this development is part of a “larger trend” whereby “Enlightenment ideas like reason and science are beginning to flip into their opposites.” Science and technology, he fears, “have turned into a new type of theology.”

It’s not the algorithms themselves that Bogost is targeting; it is how we think and talk about them that worries him. In fact, Bogost’s chief concern is that how we talk about algorithms is impeding our ability to think clearly about them and their place in society. This is where the god-talk comes in. Bogost deploys a variety of religious categories to characterize the present fascination with algorithms.

Bogost believes “algorithms hold a special station in the new technological temple because computers have become our favorite idols.” Later on he writes, “the algorithmic metaphor gives us a distorted, theological view of computational action.” Additionally, “Data has become just as theologized as algorithms, especially ‘big data,’ whose name is meant to elevate information to the level of celestial infinity.” “We don’t want an algorithmic culture,” he concludes, “especially if that phrase just euphemizes a corporate theocracy.” The analogy to religious belief is a compelling rhetorical move. It vividly illuminates Bogost’s key claim: the idea of an “algorithm” now functions as a metaphor that conceals more than it reveals.

He prepares the ground for this claim by reminding us of earlier technological metaphors that ultimately obscured important realities. The metaphor of the mind as computer, for example, “reaches the rank of religious fervor when we choose to believe, as some do, that we can simulate cognition through computation and achieve the singularity.” Similarly, the metaphor of the machine, which is really to say the abstract idea of a machine, yields a profound misunderstanding of mechanical automation in the realm of manufacturing. Bogost reminds us that bringing consumer goods to market still “requires intricate, repetitive human effort.” Manufacturing, as it turns out, “isn’t as machinic nor as automated as we think it is.”

Likewise, the idea of an algorithm, as it is bandied about in public discourse, is a metaphorical abstraction that obscures how various digital and analog components, including human action, come together to produce the effects we carelessly attribute to algorithms. Near the end of the essay, Bogost sums it up this way:

“the algorithm has taken on a particularly mythical role in our technology-obsessed era, one that has allowed it [to] wear the garb of divinity. Concepts like ‘algorithm’ have become sloppy shorthands, slang terms for the act of mistaking multipart complex systems for simple, singular ones. Of treating computation theologically rather than scientifically or culturally.”

But why does any of this matter? It matters, Bogost insists, because this way of thinking blinds us in two important ways. First, our sloppy shorthand “allows us to chalk up any kind of computational social change as pre-determined and inevitable,” allowing the perpetual deflection of responsibility for the consequences of technological change. The apotheosis of the algorithm encourages what I’ve elsewhere labeled a Borg Complex, an attitude toward technological change aptly summed up by the phrase, “Resistance is futile.” It’s a way of thinking about technology that forecloses the possibility of thinking about and taking responsibility for our choices regarding the development, adoption, and implementation of new technologies. Second, Bogost rightly fears that this “theological” way of thinking about algorithms may cause us to forget that computational systems can offer only one, necessarily limited perspective on the world. “The first error,” Bogost writes, “turns computers into gods, the second treats their outputs as scripture.”

______________________

Bogost is right to challenge the quasi-religious reverence sometimes exhibited toward technology. It is, as he fears, an impediment to clear thinking. Indeed, he is not the only one calling for the secularization of our technological endeavors. Jaron Lanier has spoken at length about the introduction of religious thinking into the field of AI. In a recent interview, Lanier expressed his concerns this way:

“There is a social and psychological phenomenon that has been going on for some decades now:  A core of technically proficient, digitally-minded people reject traditional religions and superstitions. They set out to come up with a better, more scientific framework. But then they re-create versions of those old religious superstitions! In the technical world these superstitions are just as confusing and just as damaging as before, and in similar ways.”

While Lanier’s concerns are similar to Bogost’s, it may be worth noting that Lanier’s use of religious categories is rather more concrete. As far as I can tell, Bogost deploys a religious frame as a rhetorical device, and rather effectively so. Lanier’s criticisms, however, have been aroused by religiously intoned expressions of a desire for transcendence voiced by denizens of the tech world themselves.

But such expressions are hardly new, nor are they relegated to the realm of AI. In The Religion of Technology: The Divinity of Man and the Spirit of Invention, David Noble rightly insisted that “modern technology and modern faith are neither complements nor opposites, nor do they represent succeeding stages of human development. They are merged, and always have been, the technological enterprise being, at the same time, an essentially religious endeavor.”

So that no one would misunderstand his meaning, he added,

“This is not meant in a merely metaphorical sense, to suggest that technology is similar to religion in that it evokes religious emotions of omnipotence, devotion, and awe, or that it has become a new (secular) religion in and of itself, with its own clerical caste, arcane rituals, and articles of faith. Rather it is meant literally and historically, to indicate that modern technology and religion have evolved together and that, as a result, the technological enterprise has been and remains suffused with religious belief.”

Along with chapters on the space program, atomic weapons, and biotechnology, Noble devoted a chapter to the history of AI, titled “The Immortal Mind.” Noble found that AI research had often been inspired by a curious fixation on the achievement of god-like, disembodied intelligence as a step toward personal immortality. Many of the sentiments and aspirations that Noble identifies in figures as diverse as George Boole, Claude Shannon, Alan Turing, Edward Fredkin, Marvin Minsky, Daniel Crevier, Danny Hillis, and Hans Moravec, all of them influential theorists and practitioners in the development of AI, find their consummation in the Singularity movement. The movement envisions a time (2045 is frequently suggested) when the distinction between machines and humans will blur and humanity as we know it will be eclipsed. Before Ray Kurzweil, the chief prophet of the Singularity, wrote about “spiritual machines,” Noble had astutely anticipated how the trajectories of AI, Internet, Virtual Reality, and Artificial Life research were all converging on the age-old quest for immortal life. Noble, who died in 2010, must have read the work of Kurzweil and company as a remarkable validation of his thesis in The Religion of Technology.

Interestingly, the sentiments that Noble documented alternated between the heady thrill of creating non-human Minds and non-human Life, on the one hand, and, on the other, the equally heady thrill of pursuing the possibility of radical life-extension and even immortality. Frankenstein meets Faust, we might say. Humanity plays god in order to bestow god’s gifts on itself. Noble cites one Artificial Life researcher who explains, “I feel like God; in fact, I am God to the universes I create,” and another who declares, “Technology will soon enable human beings to change into something else altogether [and thereby] escape the human condition.” Ultimately, these two aspirations come together into a grand techno-eschatological vision, expressed here by Hans Moravec:

“Our speculation ends in a supercivilization, the synthesis of all solar system life, constantly improving and extending itself, spreading outward from the sun, converting non-life into mind …. This process might convert the entire universe into an extended thinking entity … the thinking universe … an eternity of pure cerebration.”

Little wonder that Pamela McCorduck, who has been chronicling the progress of AI since the early 1980s, can say, “The enterprise is a god-like one. The invention–the finding within–of gods represents our reach for the transcendent.” And, lest we forget where we began, a more earth-bound, but no less eschatological hope was expressed by Edward Fredkin in his MIT and Stanford courses on “saving the world.” He hoped for a “global algorithm” that “would lead to peace and harmony.” I would suggest that similar aspirations are expressed by those who believe that Big Data will yield a God’s-eye view of human society, providing wisdom and guidance that would be otherwise inaccessible to ordinary human forms of knowing and thinking.

Perhaps this should not be altogether surprising. As the old saying has it, the Grand Canyon wasn’t formed by someone dragging a stick. This is just a way of saying that causes must be commensurate to the effects they produce. Grand technological projects such as space flight, the harnessing of atomic energy, and the pursuit of artificial intelligence are massive undertakings requiring stupendous investments of time, labor, and resources. What kind of motives are sufficient to generate those sorts of expenditures? You’ll need something more than whim, to put it mildly. You may need something akin to religious devotion. Would we have attempted to put a man on the moon apart from the ideological frame provided by the Cold War, which cast space exploration as a field of civilizational battle for survival? Consider, as a more recent example, what drives Elon Musk’s pursuit of interplanetary space travel.

______________________

Without diminishing the criticisms offered by either Bogost or Lanier, Noble’s historical investigation into the roots of divinized or theologized technology reminds us that the roots of the disorder run much deeper than we might initially imagine. Noble’s own genealogy traces the origin of the religion of technology to the turn of the first millennium. It emerges out of a volatile mix of millenarian dreams, apocalyptic fervor, mechanical innovation, and monastic piety. Its evolution proceeds apace through the Renaissance, finding one of its most ardent prophets in the Elizabethan statesman, Francis Bacon. Even through the Enlightenment, the religion of technology flourished. In fact, the Enlightenment may have been a decisive moment in the history of the religion of technology.

In the essay with which we began, Ian Bogost framed the emergence of techno-religious thinking as a departure from the ideals of reason and science associated with the Enlightenment. This is not altogether incidental to Bogost’s argument. When he talks about the “theological” thinking that plagues our understanding of algorithms, Bogost is not working with a neutral, value-free, all-purpose definition of what constitutes the religious or the theological; there’s almost certainly no such definition available. It wouldn’t be too far from the mark, I think, to say that Bogost is working with what we might classify as an Enlightenment understanding of Religion, one that characterizes it as Reason’s Other, i.e. as a-rational if not altogether irrational, superstitious, authoritarian, and pernicious. For his part, Lanier appears to be working with similar assumptions.

Noble’s work complicates this picture, to say the least. The Enlightenment did not, as it turns out, vanquish Religion, driving it far from the pure realms of Science and Technology. In fact, to the degree that the radical Enlightenment’s assault on religious faith was successful, it empowered the religion of technology. To put this another way, the Enlightenment–and, yes, we are painting with broad strokes here–did not do away with the notions of Providence, Heaven, and Grace. Rather, the Enlightenment re-named these Progress, Utopia, and Technology respectively. To borrow a phrase, the Enlightenment immanentized the eschaton. If heaven had been understood as a transcendent goal achieved with the aid of divine grace within the context of the providentially ordered unfolding of human history, it became a Utopian vision, a heaven on earth, achieved by the ministrations of Science and Technology within the context of Progress, an inexorable force driving history toward its Utopian consummation.

As historian Leo Marx has put it, the West’s “dominant belief system turned on the idea of technical innovation as a primary agent of progress.” Indeed, the further Western culture proceeded down the path of secularization as it is traditionally understood, the greater the emphasis on technology as the principal agent of change. Marx observed that by the late nineteenth century, “the simple republican formula for generating progress by directing improved technical means to societal ends was imperceptibly transformed into a quite different technocratic commitment to improving ‘technology’ as the basis and the measure of — as all but constituting — the progress of society.”

When the prophets of the Singularity preach the gospel of transhumanism, they are not abandoning the Enlightenment heritage; they are simply embracing its fullest expression. As Bruno Latour has argued, modernity has never perfectly sustained the purity of the distinctions that were the self-declared hallmarks of its own superiority. Modernity characterized itself as a movement of secularization and differentiation, what Latour, with not a little irony, labels processes of purification. Science, politics, law, religion, ethics–these are all sharply distinguished and segregated from one another in the modern world, distinguishing it from the primitive pre-modern world. But it turns out that these spheres of human experience stubbornly resist the neat distinctions modernity sought to impose. Hybridization unfolds alongside purification, and Noble’s work has demonstrated how technology, sometimes reckoned the most coldly rational of human projects, is deeply contaminated by religion, often regarded by the same people as the most irrational of human projects.

But not just any religion. Earlier I suggested that when Bogost characterizes our thinking about algorithms as “theological,” he is almost certainly assuming a particular kind of theology. This is why it is important to classify the religion of technology more precisely as a Christian heresy. It is in Western Christianity that Noble found the roots of the religion of technology, and it is in the context of a post-Christian world that it has presently flourished.

It is Christian insofar as its aspirations are like those nurtured by the Christian faith, such as the conscious persistence of a soul after the death of the body. Noble cites Daniel Crevier, who, referencing the “Judeo-Christian tradition,” suggested that “religious beliefs, and particularly the belief in survival after death, are not incompatible with the idea that the mind emerges from physical phenomena.” This is noted on the way to explaining that a machine-based material support could be found for the mind, which leads Noble to quip, “Christ was resurrected in a new body; why not a machine?” Reporting on his study of the famed Santa Fe Institute in New Mexico, anthropologist Stefan Helmreich observed, “Judeo-Christian stories of the creation and maintenance of the world haunted my informants’ discussions of why computers might be ‘worlds’ or ‘universes,’ …. a tradition that includes stories from the Old and New Testaments (stories of creation and salvation).”

It is a heresy insofar as it departs from traditional Christian teaching regarding the givenness of human nature, the moral dimensions of humanity’s brokenness, the gracious agency of God in the salvation of humanity, and the resurrection of the body, to name a few. Having said as much, it would seem that one could perhaps conceive of the religion of technology as an imaginative account of how God might fulfill purposes that were initially revealed in incidental, pre-scientific garb. In other words, we might frame the religion of technology not so much as a Christian heresy, but rather as (post-)Christian fan-fiction, an elaborate imagining of how the hopes articulated by the Christian faith will materialize as a consequence of human ingenuity in the absence of divine action.

______________________

Near the end of The Religion of Technology, David Noble forcefully articulated the dangers posed by a blind faith in technology. “Lost in their essentially religious reveries,” Noble warned, “the technologists themselves have been blind to, or at least have displayed blithe disregard for, the harmful ends toward which their work has been directed.” Citing another historian of technology, Noble added, “The religion of technology, in the end, ‘rests on extravagant hopes which are only meaningful in the context of transcendent belief in a religious God, hopes for a total salvation which technology cannot fulfill …. By striving for the impossible, [we] run the risk of destroying the good life that is possible.’ Put simply, the technological pursuit of salvation has become a threat to our survival.” I suspect that neither Bogost nor Lanier would disagree with Noble on this score.

There is another significant point at which the religion of technology departs from its antecedent: “The millenarian promise of restoring mankind to its original Godlike perfection–the underlying premise of the religion of technology–was never meant to be universal.” Instead, the salvation it promises is limited finally to the very few who will be able to afford it; it is for neither the poor nor the weak. Nor, it would seem, is it for those who have found a measure of joy or peace or beauty within the bounds of the human condition as we now experience it, frail as it may be.

Lastly, it is worth noting that the religion of technology appears to have no doctrine of final judgment. This is not altogether surprising given that, as Bogost warned, the divinizing of technology carries the curious effect of absolving us of responsibility for the tools that we fashion and the uses to which they are put.

I have no neat series of solutions to tie all of this up; rather I will give the last word to Wendell Berry:

“To recover from our disease of limitlessness, we will have to give up the idea that we have a right to be godlike animals, that we are potentially omniscient and omnipotent, ready to discover ‘the secret of the universe.’ We will have to start over, with a different and much older premise: the naturalness and, for creatures of limited intelligence, the necessity, of limits. We must learn again to ask how we can make the most of what we are, what we have, what we have been given.”

Friday Links: Questioning Technology Edition

My previous post, which raised 41 questions about the ethics of technology, is turning out to be one of the most viewed on this site. That is, admittedly, faint praise, but I’m glad that it is because helping us to think about technology is why I write this blog. The post has also prompted a few valuable recommendations from readers, and I wanted to pass these along to you in case you missed them in the comments.

Matt Thomas reminded me of two earlier lists of questions we should be asking about our technologies. The first of these is Jacques Ellul’s list of 76 Reasonable Questions to Ask of Any Technology (update: see Doug Hill’s comment below about the authorship of this list.) The second is Neil Postman’s more concise list of Six Questions to Ask of New Technologies. Both are worth perusing.

Also, Chad Kohalyk passed along a link to Shannon Vallor’s module, An Introduction to Software Engineering Ethics.

Greg Lloyd provided some helpful links to the (frequently misunderstood) Amish approach to technology, including one to this IEEE article by Jameson Wetmore: “Amish Technology: Reinforcing Values and Building Communities” (PDF). In it, we read, “When deciding whether or not to allow a certain practice or technology, the Amish first ask whether it is compatible with their values.” What a radical idea; the rest of us should try it sometime! While we’re on the topic, I wrote about the Tech-Savvy Amish a couple of years ago.

I can’t remember who linked to it, but I also came across an excellent 1994 article in Ars Electronica that is composed entirely of questions about what we would today call a Smart Home, “How smart does your bed have to be, before you are afraid to go to sleep at night?”

And while we’re talking about lists, here’s a post on Kranzberg’s Six Laws of Technology and a list of 11 things I try to do, often with only marginal success, to achieve a healthy relationship with the Internet.

Enjoy these, and thanks again to those of you who provided the links.

The Best Time to Take the Measure of a New Technology

In defense of brick and mortar bookstores, particularly used book stores, advocates frequently appeal to the virtue of serendipity and the pleasure of an unexpected discovery. You may know what you’re looking for, but you never know what you might find. Ostensibly, recommendation algorithms serve the same function in online contexts, but the effect is rather the opposite of serendipity and the discoveries are always expected.

Take, for instance, this book I stumbled on at a local used book store: Electric Language: A Philosophical Study of Word Processing by Michael Heim. The book is currently #3,577,358 in Amazon’s Bestsellers Ranking, and it has been bought so infrequently that no other book is linked to it. My chances of ever finding this book were vanishingly small, but on Amazon they were slimmer still.

I’m quite glad, though, that Electric Language did cross my path. Heim’s book is a remarkably rich meditation on the meaning of word processing, something we now take for granted and do not think about at all. Heim wrote his book in 1987. The article in which he first explored the topic appeared in 1984. In other words, Heim was contemplating word processing while the practice was still relatively new. Heim imagines that some might object that it was still too early to take the measure of word processing. Heim’s rejoinder is worth quoting at length:

“Yet it is precisely this point in time that causes us to become philosophical. For it is at the moment of such transitions that the past becomes clear as a past, as obsolescent, and the future becomes clear as destiny, a challenge of the unknown. A philosophical study of digital writing made five or ten years from now would be better than one written now in the sense of being more comprehensive, more fully certain in its grasp of the new writing. At the same time, however, the felt contrast with the older writing technology would have become faded by the gradually increasing distance from typewritten and mechanical writing. Like our involvement with the automobile, that with processing texts will grow in transparency–until it becomes a condition of our daily life, taken for granted.

But what is granted to us in each epoch was at one time a beginning, a start, a change that was startling. Though the conditions of daily living do become transparent, they still draw upon our energies and upon the time of our lives; they soon become necessary conditions and come to structure our lives. It is incumbent on us then to grow philosophical while we can still be startled, for philosophy, if Aristotle can be trusted, begins in wonder, and, as Heraclitus suggests, ‘One should not act or speak as if asleep.'”

It is when a technology is not yet taken for granted that it is available to thought. It is only when a living memory of the “felt contrast” remains that the significance of the new technology is truly evident. Counterintuitive conclusions, perhaps, but I think he’s right. There’s a way of understanding a new technology that is available only to those who live through its appearance and adoption, and who know, first hand, what it displaced. As I’ve written before, this explains, in part, why it is so tempting to view critics of new technologies as Chicken Littles:

One of the recurring rhetorical tropes that I’ve listed as a Borg Complex symptom goes like this: every new technology elicits criticism and evokes fear, society always survives the so-called moral panic or techno-panic, and thus, QED, those critiques and fears, including those being presently expressed, are always misguided and overblown. It’s a pattern of thought I’ve complained about more than once. In fact, it features as the tenth of my unsolicited points of advice to tech writers.

Now, while it is true, as Adam Thierer has noted here, that we should try to understand how societies and individuals have come to cope with or otherwise integrate new technologies, it is not the case that such negotiated settlements are always unalloyed goods for society or for individuals. But this line of argument is compelling to the degree that living memory of what has been displaced has been lost. I may know at an intellectual level what has been lost, because I read about it in a book for example, but it is another thing altogether to have felt that loss. We move on, in other words, because we forget the losses, or, more to the point, because we never knew or experienced the losses for ourselves–they were always someone else’s problem.

Heim wrote Electric Language on a portable Tandy 100.
