Consider the Traffic Light Camera

It looks like I may be getting a traffic citation in the mail within the next few days. A few nights ago, while making a left into my neighborhood, I was slowed by a car that made a creeping right ahead of me onto the same street. As I finally completed my turn, I saw a bright flash go off behind me. While I’ve noted the proliferation of traffic light cameras around town with mildly disconcerted interest, I hadn’t yet noticed the camera on this rather inconsequential intersection. A day or two later at the same spot, I found myself coming to an abrupt stop once the light hit yellow to ensure that I wasn’t caught completing my turn as the light turned red. Automated surveillance had done its job; I had internalized the gaze of the unblinking eye.

For some time now I’ve been unsettled by the proliferation of traffic light cameras, but I’ve not yet been able to articulate why exactly. While Big Brother fears may be part of the concern, I’m mostly troubled by how the introduction of automated surveillance and ticketing seems to encourage the replacement of human judgment, erroneous as it may often be, by unthinking, habituated behavior.

The traffic light camera knows only that you have crossed or not crossed a certain point at a certain time. Its logic is binary: you are either out of the intersection or in it. Context matters not at all; there is no room for deliberation. If we can imagine a limited set of valid reasons for proceeding through a red light, automated ticketing cannot entertain them. The intermittently monitored yellow light invited judgment and practical wisdom; the unceasingly monitored yellow light tolerates only unwavering compliance.
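To make the point concrete, here is a minimal sketch of the camera's decision rule in deliberately toy Python. The names and inputs are invented for illustration and reflect no actual vendor's system; what matters is what the rule leaves out.

```python
# A toy rendering of a red-light camera's rule (invented names; no real
# vendor's logic). The two inputs are everything the camera knows.
def camera_verdict(light_state: str, in_intersection: bool) -> bool:
    # Ticket if, and only if, the vehicle is past the stop line on red.
    # There is no parameter for weather, an ambulance behind you, or the
    # creeping car ahead: context literally cannot be represented.
    return light_state == "red" and in_intersection

print(camera_verdict("red", True))     # True: citation in the mail
print(camera_verdict("yellow", True))  # False: judgment never enters into it
```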

In this way, the traffic light camera hints at a certain pattern in the relationship between human beings and the complex technological systems we create. Take the work of getting from here to there as an example. Our baseline is walking. We can walk from here to there in just about any way that the terrain will allow and at whatever rate our needs dictate. And while the journey may have its perils, they are not inherent in walking itself. After all, it would be a strange accident indeed if I were incapacitated as a result of bumping into someone or even stumbling on a stone.

But walking is not necessarily the fastest or most efficient way of getting from here to there, especially if I have a sizable load to bear. Horse-drawn conveyances relieve me of the work of walking and potentially increase my rate of speed without, it seems to me, radically increasing the risks. But they also tend to limit my freedom of motion, a decently kept road being rather more of a necessity than it would’ve been for the walker. Desire paths illustrate this point neatly. The walker may make his own path, and frequently does so.

Then the train comes along. The train radically increased the rate of speed at which human beings could travel, but it also elevated the risks–a derailment, after all, is not quite the same thing as a stumble–and restricted freedom of motion to the tracks laid out for it. It’s worth noting that the railway system was one of the first expansive technological systems requiring, for its efficient and safe operation, rigidly regimented and coordinated action. We even owe our time zones to the systematizing demands of the railroads. The railway system, then, was a massive feat of system building, and would-be travelers were integrated into this system.

The automobile, too, is powerful and potentially dangerous, and must also be carefully managed. Consequently, we created an elaborate system of roads and rules to govern how we use this powerful machine; we created a mechanistic environment to manage the machine. Interestingly, the car allows for a bit more freedom of action than the train, illustrated nicely by the off-roading ideal, which is a fantasy of liberation. But, for the most part, our driving, in order to be safe and efficient, is rationalized and systematized. Apart from this over-arching systematization, the off-roading fantasy would have little appeal. All of this is, of course, a “good thing.” Safety is important, etc., etc. The clip below, filmed at an intersection in Addis Ababa, illustrates what driving looks like in the absence of such rationalization and regimentation.

In an ideal world, one in which all rules and practical guidelines are punctiliously obeyed, the traffic flows freely and safely. Of course, this is far from an ideal world; accidents happen, and they are a leading source of inefficiency, expense, and harm. When driving is conceived of as an engineering problem solved by the fabrication of elaborate systems, accidents are human glitches in the machinery of automobile transportation. So, lanes, signals, signs, traffic lights, etc.–all of it is designed to discipline our driving so that it may resemble the smooth operation of machines that follow rules flawlessly. The more machine-like our driving, the more efficient the system.

As an illustration of the basic principle, take UPS’s deployment of Orion, a complex algorithm designed to plot out the best delivery route for drivers. “Driver reaction to Orion is mixed,” according to a WSJ piece on the software,

“The experience can be frustrating for some who might not want to give up a degree of autonomy, or who might not follow Orion’s logic. For example, some drivers don’t understand why it makes sense to deliver a package in one neighborhood in the morning, and come back to the same area later in the day for another delivery. But Orion often can see a payoff, measured in small amounts of time and money that the average person might not see.”

Commenting on this story at Marginal Revolution, Alex Tabarrok added, “Human drivers think Orion is illogical because they can’t grok Orion’s super-logic. Perhaps any sufficiently advanced logic is indistinguishable from stupidity.” However we might frame the matter, it remains the case that, given the logic of the system, the driver’s judgment is the glitch that needs to be eradicated to achieve the best results.
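Orion’s internals are not public, but a toy example can show why an optimizer’s route may look illogical to a driver. In the hedged sketch below (invented stops, invented travel times, nothing to do with UPS’s actual system), a delivery window that opens later in the day makes the counterintuitive route, the one that leaves a nearby stop for a second pass, the cheaper one.

```python
# Toy route optimizer: brute force over stop orderings, minimizing the
# time to finish all deliveries when some packages have time windows.
# Purely illustrative; not a reconstruction of UPS's Orion.
from itertools import permutations

# Hypothetical data: A1 and A2 sit in the same neighborhood (5 minutes
# apart); B is 20 minutes away. A2's delivery window opens at t=60.
travel = {("A1", "A2"): 5, ("A1", "B"): 20, ("A2", "B"): 20}
travel.update({(b, a): t for (a, b), t in travel.items()})
earliest = {"A1": 0, "A2": 60, "B": 0}

def route_cost(order):
    t = earliest[order[0]]                 # possibly wait at the first stop
    for prev, nxt in zip(order, order[1:]):
        t += travel[(prev, nxt)]           # drive to the next stop
        t = max(t, earliest[nxt])          # wait if its window hasn't opened
    return t

best = min(permutations(earliest), key=route_cost)
print(best, route_cost(best))              # ('A1', 'B', 'A2') finishes at t=60
# The "obvious" route A1 -> A2 -> B idles 55 minutes and finishes at t=80.
```

Nothing in the toy captures Orion’s actual constraints, but it shows how what looks like a pointless return trip to the driver can fall directly out of an optimization the driver cannot see.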

Let’s consider the traffic light camera from this angle. Setting aside the not insignificant function of raising municipal funds through increased ticketing and fines, traffic light cameras are designed to mitigate pesky and erratic human judgment. Always-on surveillance ensures that my actions are ever more strictly synchronized with the vast technological system that orders automobile traffic. The traffic light camera ensures that I am ever more fully assimilated into the logic of the system, to put it a bit too grimly perhaps. The upside, of course, is the promise of ever greater efficiency and safety–the only values technological systems can recognize.

Ultimately, however, we don’t make very good machines. We get drowsy and angry and drunk; we are easily distracted and, even at our best, we can only take in a limited slice of the environment around us. Enter self-driving cars and the promise of eliminating human error from the complex automotive transportation system.

The trajectory that leads to self-driving cars was already envisioned years before the modern highway system was built. In the film that accompanied GM’s Futurama exhibit at the 1939 New York World’s Fair, we see a model highway system of the future (1960), and we are told, beginning around the 14:30 mark, “Traffic moves at unreduced rates of speed. Safe distance between cars is maintained by automatic radio control …. The keynote of this motorway? Safety. Safety with increased speed.”

Most pitches I’ve heard for self-driving cars trade on the same idea: increased safety through automation, i.e., the elimination of human error. That, and increased screen time, because if you’re not having to pay attention to the road, then you’re free to dive into your device of choice. Or look at the scenery, if you’re quaint that way, but let’s be serious. Take, for example, the self-driving car Mercedes-Benz displayed at this year’s CES, the F015: “For interaction within the vehicle, the passengers rely on six display screens located around the cabin. They also interact with the vehicle through gestures, eye-tracking or by touching the high-resolution screens.”

But setting the ubiquity of screens aside, we can extract a general principle from the trajectory I’ve just sketched out.

In a system that works best the more machine-like we become, the human component becomes expendable as soon as a machine can outperform it. Or to put it another way, any system that encourages machine-like behavior from its human components is a system poised to eventually eliminate the human element altogether. To give it another turn, we might frame it as a paradox of complexity. As human beings create powerful and complex technologies, they must design complex systemic environments to ensure their safe operation. These environments sustain further complexity by disciplining human actors to abide by the necessary parameters. Complexity is achieved by reducing human action to the patterns of the system; consequently, there comes a point when further complexity can only be achieved by discarding the human element altogether. When we design systems that work best the more machine-like we become, we shouldn’t be surprised when the machines ultimately render us superfluous.

Of course, it should be noted that, as per usual, the hype surrounding self-driving cars is just that. Writing for Fortune, Nicholas Carr cited Ford’s chief engineer, Raj Nair, who, following his boss’s promise of automated cars rolling off the assembly line in five years’ time, “explained that ‘full automation’ would be possible only in limited circumstances, particularly ‘where high definition mapping is available along with favorable environmental conditions for the vehicle’s sensors.’” Carr added,

“While it may be relatively straightforward to design a car that can drive itself down a limited-access highway in good weather, programming it to navigate chaotic city or suburban streets or to make its way through a snowstorm or a downpour poses much harder challenges. Many engineers and automation experts believe it will take decades of further development to build a completely autonomous car, and some warn that it may never happen, at least not without a massive and very expensive overhaul of our road system.”

That said, the dream of full automation will probably direct research and development for years to come, and we will continue to see incremental steps in that direction. In retrospect, one of those steps will have been the advent of traffic light cameras, not because it advanced the technology of self-driving cars, but because it prepared us to assent to the assumption that we would be ultimately expendable. The point, then, of this rambling post might be put this way: Our attitude toward new technologies may be less a matter of conscious thought than of tacit assumptions internalized through practices and habits.

Lethal Autonomous Weapons and Thoughtlessness

In the mid-twentieth century, Hannah Arendt wrote extensively about the critical importance of learning to think in the aftermath of a great rupture in our tradition of thought. She wrote of the desperate situation when “it began to dawn upon modern man that he had come to live in a world in which his mind and his tradition of thought were not even capable of asking adequate, meaningful questions, let alone of giving answers to its own perplexities.”

Frequently, Arendt linked this rupture in the tradition, this loss of a framework that made our thinking meaningful, to the appearance of totalitarianism in the early twentieth century. But she also recognized that the tradition had by then been unraveling for some time, and technology played a not insignificant role in this unraveling and the final rupture. In “Tradition and the Modern Age,” for example, she argues that the “priority of reason over doing, of the mind’s prescribing its rules to the actions of men” had been lost as a consequence of “the transformation of the world by the Industrial Revolution–a transformation the success of which seemed to prove that man’s doings and fabrications prescribe their rules to reason.”

Moreover, in the Prologue to The Human Condition, after reflecting on Sputnik, computer automation, and the pursuit of what we would today call bio-engineering, Arendt worried that our Thinking would prove inadequate to our technologically-enhanced Doing. “If it should turn out to be true,” she added, “that knowledge (in the modern sense of know-how) and thought have parted company for good, then we would indeed become the helpless slaves, not so much of our machines as of our know-how, thoughtless creatures at the mercy of every gadget which is technically possible, no matter how murderous it is.”

That seems as good an entry as any into a discussion of Lethal Autonomous Robots. A short Wired piece on the subject has been making the rounds the past day or two with the rather straightforward title, “We Can Now Build Autonomous Killing Machines. And That’s a Very, Very Bad Idea.” The story takes as its point of departure the recent pledge on the part of a robotics company, Clearpath Robotics, never to build “killer robots.”

Clearpath’s Chief Technology Officer, Ryan Gariepy, explained the decision: “The potential for lethal autonomous weapons systems to be rolled off the assembly line is here right now, but the potential for lethal autonomous weapons systems to be deployed in an ethical way or to be designed in an ethical way is not, and is nowhere near ready.”

Not everyone shares Gariepy’s trepidation. Writing for the blog of the National Defense Industrial Association, Sarah Sicard discussed the matter with Ronald Arkin, a dean at Georgia Tech’s School of Interactive Computing. “Unless regulated by international treaties,” Arkin believes, “lethal autonomy is inevitable.”

It’s worth pausing for a moment to explore the nature of this claim. It’s a Borg Complex claim, of course, although masked slightly by the conditional construction, but that doesn’t necessarily make it wrong. Indeed, claims of inevitability are especially plausible in the context of military technology, and it’s not hard to imagine why. Even if one nation entertained ethical reservations about a certain technology, it could never assure itself that other nations would share its qualms. Better, then, to set those reservations aside than to be outpaced on the battlefield with disastrous consequences. The force of the logic is compelling. In such a case, however, the inevitability, such as it is, does not reside in the technology per se; it resides in human nature. But even to put it that way threatens to obscure the fact that choices are being made and that they could be made otherwise. The example set by Clearpath Robotics, a conscious decision to forgo research and development on principle, only reinforces this conclusion.

But Arkin doesn’t just believe the advent of Lethal Autonomous Robots to be inevitable; he seems to think that it will be a positive good. Arkin believes that human beings are the “weak link” in the “kill chain.” The question for roboticists is this: “Can we find out ways that can make them outperform human warfighters with respect to ethical performance?” Arkin appears to be fairly certain that the answer will be a rather uncomplicated “yes.”

For a more complicated look at the issue, consider the report (PDF) on Lethal Autonomous Weapons presented to the UN’s Human Rights Council by special rapporteur, Christof Heyns. The report was published in 2013 and first brought to my attention by a post on Nick Carr’s blog. The report explores a variety of arguments for and against the development and deployment of autonomous weapons systems and concludes, “There is clearly a strong case for approaching the possible introduction of LARs with great caution.” It continues:

“If used, they could have far-reaching effects on societal values, including fundamentally on the protection and the value of life and on international stability and security. While it is not clear at present how LARs could be capable of satisfying IHL and IHRL requirements in many respects, it is foreseeable that they could comply under certain circumstances, especially if used alongside human soldiers. Even so, there is widespread concern that allowing LARs to kill people may denigrate the value of life itself.”

Among the more salient observations made by the report is this note of concern about unintended consequences:

“Due to the low or lowered human costs of armed conflict to States with LARs in their arsenals, the national public may over time become increasingly disengaged and leave the decision to use force as a largely financial or diplomatic question for the State, leading to the ‘normalization’ of armed conflict. LARs may thus lower the threshold for States for going to war or otherwise using lethal force, resulting in armed conflict no longer being a measure of last resort.”

As with the concern about the denigration of the value of life itself, this worry about the normalization of armed conflict is difficult to verify empirically (although US drone operations in Afghanistan, Pakistan, and the Arabian Peninsula are certainly far from irrelevant to the discussion). Consequently, such considerations tend to carry little weight when the terms of the debate are already compromised by technocratic assumptions regarding what counts as compelling reasons, proofs, or evidence.

Such assumptions appear to be all that we have left to go on in light of the rupture in the tradition of thought that Arendt described. Or, to put that a bit more precisely, it may not be all that we have left, but it is what we have gotten. We have precious little to fall back on when we begin to think about what we are doing when what we are doing involves, for instance, the fabrication of Lethal Autonomous Robots. There are no customs of thought and action, no traditions of justice, no culturally embodied wisdom to guide us, at least not in any straightforward and directly applicable fashion. We are thinking without a banister, as Arendt put it elsewhere, if we are thinking at all.

Perhaps it is because I have been reading a good bit of Arendt lately, but I’m increasingly struck by situations we encounter, both ordinary and extraordinary, in which our default problem-solving, cost/benefit analysis mode of thinking fails us. In such situations, we must finally decide what is undecidable and take action, action for which we can be held responsible, action for which we can only hope for forgiveness, action made meaningful by our thinking.

Arendt distinguished this mode of thinking, that which seeks meaning and is a ground for action, from that which seeks to know with certainty what is true. This helps explain, I believe, what she meant in the passage I cited above when she feared that we would become “thoughtless” and slaves to our “know-how.” We are in such cases calculating and measuring, but not thinking, or willing, or judging. Consequently, under such circumstances, we are also perpetually deferring responsibility.

Considered in this light, Lethal Autonomous Weapons threaten to become a symbol of our age; not in their clinical lethality, but in their evacuation of human responsibility from one of the most profound and terrible of actions, the taking of a human life. They will be an apt symbol for an age in which we will grow increasingly accustomed to holding algorithms responsible for all manner of failures, mistakes, and accidents, both trivial and tragic. Except, of course, that algorithms cannot be held accountable and they cannot be forgiven.

We cannot know, either exhaustively or with any degree of certainty, what the introduction of Lethal Autonomous Weapons will mean for human society, at least not by the standards of techno-scientific thinking. In the absence of such certainty, because we do not seem to know how to think or judge otherwise, they will likely be adopted and eventually deployed as a matter of seemingly banal necessity.

____________________________

Update: Dale Carrico has posted some helpful comments, particularly on Arendt.

Algorithms Who Art in Apps, Hallowed Be Thy Code

If you want to understand the status of algorithms in our collective imagination, Ian Bogost proposes the following exercise in his recent essay in the Atlantic: “The next time you see someone talking about algorithms, replace the term with ‘God’ and ask yourself if the sense changes any.”
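Taken literally, the exercise is a one-line text substitution; a throwaway sketch (with an invented sample sentence, not one of Bogost’s) makes the point:

```python
# Bogost's exercise, mechanized: swap "the algorithm(s)" for "God" and reread.
import re

sentence = "The algorithm knows best; trust the algorithm."
print(re.sub(r"\bthe algorithms?\b", "God", sentence, flags=re.IGNORECASE))
# -> God knows best; trust God.
```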

If Bogost is right, then more often than not you will find the sense of the statement entirely unchanged. This is because, in his view, “Our supposedly algorithmic culture is not a material phenomenon so much as a devotional one, a supplication made to the computers we have allowed to replace gods in our minds, even as we simultaneously claim that science has made us impervious to religion.” Bogost goes on to say that this development is part of a “larger trend” whereby “Enlightenment ideas like reason and science are beginning to flip into their opposites.” Science and technology, he fears, “have turned into a new type of theology.”

It’s not the algorithms themselves that Bogost is targeting; it is how we think and talk about them that worries him. In fact, Bogost’s chief concern is that how we talk about algorithms is impeding our ability to think clearly about them and their place in society. This is where the god-talk comes in. Bogost deploys a variety of religious categories to characterize the present fascination with algorithms.

Bogost believes “algorithms hold a special station in the new technological temple because computers have become our favorite idols.” Later on he writes, “the algorithmic metaphor gives us a distorted, theological view of computational action.” Additionally, “Data has become just as theologized as algorithms, especially ‘big data,’ whose name is meant to elevate information to the level of celestial infinity.” “We don’t want an algorithmic culture,” he concludes, “especially if that phrase just euphemizes a corporate theocracy.” The analogy to religious belief is a compelling rhetorical move. It vividly illuminates Bogost’s key claim: the idea of an “algorithm” now functions as a metaphor that conceals more than it reveals.

He prepares the ground for this claim by reminding us of earlier technological metaphors that ultimately obscured important realities. The metaphor of the mind as computer, for example, “reaches the rank of religious fervor when we choose to believe, as some do, that we can simulate cognition through computation and achieve the singularity.” Similarly, the metaphor of the machine, which is really to say the abstract idea of a machine, yields a profound misunderstanding of mechanical automation in the realm of manufacturing. Bogost reminds us that bringing consumer goods to market still “requires intricate, repetitive human effort.” Manufacturing, as it turns out, “isn’t as machinic nor as automated as we think it is.”

Likewise, the idea of an algorithm, as it is bandied about in public discourse, is a metaphorical abstraction that obscures how various digital and analog components, including human action, come together to produce the effects we carelessly attribute to algorithms. Near the end of the essay, Bogost sums it up this way:

“the algorithm has taken on a particularly mythical role in our technology-obsessed era, one that has allowed it to wear the garb of divinity. Concepts like ‘algorithm’ have become sloppy shorthands, slang terms for the act of mistaking multipart complex systems for simple, singular ones. Of treating computation theologically rather than scientifically or culturally.”

But why does any of this matter? It matters, Bogost insists, because this way of thinking blinds us in two important ways. First, our sloppy shorthand “allows us to chalk up any kind of computational social change as pre-determined and inevitable,” which permits the perpetual deflection of responsibility for the consequences of technological change. The apotheosis of the algorithm encourages what I’ve elsewhere labeled a Borg Complex, an attitude toward technological change aptly summed up by the phrase, “Resistance is futile.” It’s a way of thinking about technology that forecloses the possibility of thinking about and taking responsibility for our choices regarding the development, adoption, and implementation of new technologies. Second, Bogost rightly fears that this “theological” way of thinking about algorithms may cause us to forget that computational systems can offer only one, necessarily limited perspective on the world. “The first error,” Bogost writes, “turns computers into gods, the second treats their outputs as scripture.”

______________________

Bogost is right to challenge the quasi-religious reverence sometimes exhibited toward technology. It is, as he fears, an impediment to clear thinking. Indeed, he is not the only one calling for the secularization of our technological endeavors. Jaron Lanier has spoken at length about the introduction of religious thinking into the field of AI. In a recent interview, Lanier expressed his concerns this way:

“There is a social and psychological phenomenon that has been going on for some decades now:  A core of technically proficient, digitally-minded people reject traditional religions and superstitions. They set out to come up with a better, more scientific framework. But then they re-create versions of those old religious superstitions! In the technical world these superstitions are just as confusing and just as damaging as before, and in similar ways.”

While Lanier’s concerns are similar to Bogost’s, it may be worth noting that Lanier’s use of religious categories is rather more concrete. As far as I can tell, Bogost deploys a religious frame as a rhetorical device, and rather effectively so. Lanier’s criticisms, however, have been aroused by religiously intoned expressions of a desire for transcendence voiced by denizens of the tech world themselves.

But such expressions are hardly new, nor are they relegated to the realm of AI. In The Religion of Technology: The Divinity of Man and the Spirit of Invention, David Noble rightly insisted that “modern technology and modern faith are neither complements nor opposites, nor do they represent succeeding stages of human development. They are merged, and always have been, the technological enterprise being, at the same time, an essentially religious endeavor.”

So that no one would misunderstand his meaning, he added,

“This is not meant in a merely metaphorical sense, to suggest that technology is similar to religion in that it evokes religious emotions of omnipotence, devotion, and awe, or that it has become a new (secular) religion in and of itself, with its own clerical caste, arcane rituals, and articles of faith. Rather it is meant literally and historically, to indicate that modern technology and religion have evolved together and that, as a result, the technological enterprise has been and remains suffused with religious belief.”

Along with chapters on the space program, atomic weapons, and biotechnology, Noble devoted a chapter to the history of AI, titled “The Immortal Mind.” Noble found that AI research had often been inspired by a curious fixation on the achievement of god-like, disembodied intelligence as a step toward personal immortality. Many of the sentiments and aspirations that Noble identifies in figures as diverse as George Boole, Claude Shannon, Alan Turing, Edward Fredkin, Marvin Minsky, Daniel Crevier, Danny Hillis, and Hans Moravec–all of them influential theorists and practitioners in the development of AI–find their consummation in the Singularity movement. The movement envisions a time (2045 is frequently suggested) when the distinction between machines and humans will blur and humanity as we know it will be eclipsed. Before Ray Kurzweil, the chief prophet of the Singularity, wrote about “spiritual machines,” Noble had astutely anticipated how the trajectories of AI, Internet, Virtual Reality, and Artificial Life research were all converging on the age-old quest for immortal life. Noble, who died in 2010, must have read the work of Kurzweil and company as a remarkable validation of his thesis in The Religion of Technology.

Interestingly, the sentiments that Noble documented alternated between the heady thrill of creating non-human Minds and non-human Life, on the one hand, and, on the other, the equally heady thrill of pursuing the possibility of radical life-extension and even immortality. Frankenstein meets Faust, we might say. Humanity plays god in order to bestow god’s gifts on itself. Noble cites one Artificial Life researcher who explains, “I feel like God; in fact, I am God to the universes I create,” and another who declares, “Technology will soon enable human beings to change into something else altogether [and thereby] escape the human condition.” Ultimately, these two aspirations come together into a grand techno-eschatological vision, expressed here by Hans Moravec:

“Our speculation ends in a supercivilization, the synthesis of all solar system life, constantly improving and extending itself, spreading outward from the sun, converting non-life into mind …. This process might convert the entire universe into an extended thinking entity … the thinking universe … an eternity of pure cerebration.”

Little wonder that Pamela McCorduck, who has been chronicling the progress of AI since the early 1980s, can say, “The enterprise is a god-like one. The invention–the finding within–of gods represents our reach for the transcendent.” And, lest we forget where we began, a more earth-bound, but no less eschatological hope was expressed by Edward Fredkin in his MIT and Stanford courses on “saving the world.” He hoped for a “global algorithm” that “would lead to peace and harmony.” I would suggest that similar aspirations are expressed by those who believe that Big Data will yield a God’s-eye view of human society, providing wisdom and guidance that would be otherwise inaccessible to ordinary human forms of knowing and thinking.

Perhaps this should not be altogether surprising. As the old saying has it, the Grand Canyon wasn’t formed by someone dragging a stick. This is just a way of saying that causes must be commensurate to the effects they produce. Grand technological projects such as space flight, the harnessing of atomic energy, and the pursuit of artificial intelligence are massive undertakings requiring stupendous investments of time, labor, and resources. What kind of motives are sufficient to generate those sorts of expenditures? You’ll need something more than whim, to put it mildly. You may need something akin to religious devotion. Would we have attempted to put a man on the moon apart from the ideological frame provided by the Cold War, which cast space exploration as a field of civilizational battle for survival? Consider, as a more recent example, what drives Elon Musk’s pursuit of interplanetary space travel.

______________________

Without diminishing the criticisms offered by either Bogost or Lanier, Noble’s historical investigation into the roots of divinized or theologized technology reminds us that the roots of the disorder run much deeper than we might initially imagine. Noble’s own genealogy traces the origin of the religion of technology to the turn of the first millennium. It emerges out of a volatile mix of millenarian dreams, apocalyptic fervor, mechanical innovation, and monastic piety. Its evolution proceeds apace through the Renaissance, finding one of its most ardent prophets in the Elizabethan statesman, Francis Bacon. Even through the Enlightenment, the religion of technology flourished. In fact, the Enlightenment may have been a decisive moment in the history of the religion of technology.

In the essay with which we began, Ian Bogost framed the emergence of techno-religious thinking as a departure from the ideals of reason and science associated with the Enlightenment. This is not altogether incidental to Bogost’s argument. When he talks about the “theological” thinking that plagues our understanding of algorithms, Bogost is not working with a neutral, value-free, all-purpose definition of what constitutes the religious or the theological; there’s almost certainly no such definition available. It wouldn’t be too far from the mark, I think, to say that Bogost is working with what we might classify as an Enlightenment understanding of Religion, one that characterizes it as Reason’s Other, i.e. as a-rational if not altogether irrational, superstitious, authoritarian, and pernicious. For his part, Lanier appears to be working with similar assumptions.

Noble’s work complicates this picture, to say the least. The Enlightenment did not, as it turns out, vanquish Religion, driving it far from the pure realms of Science and Technology. In fact, to the degree that the radical Enlightenment’s assault on religious faith was successful, it empowered the religion of technology. To put this another way, the Enlightenment–and, yes, we are painting with broad strokes here–did not do away with the notions of Providence, Heaven, and Grace. Rather, the Enlightenment re-named these Progress, Utopia, and Technology respectively. To borrow a phrase, the Enlightenment immanentized the eschaton. Where heaven had been understood as a transcendent goal achieved with the aid of divine grace within the context of the providentially ordered unfolding of human history, it became a Utopian vision, a heaven on earth, achieved by the ministrations of Science and Technology within the context of Progress, an inexorable force driving history toward its Utopian consummation.

As historian Leo Marx has put it, the West’s “dominant belief system turned on the idea of technical innovation as a primary agent of progress.” Indeed, the further Western culture proceeded down the path of secularization as it is traditionally understood, the greater the emphasis on technology as the principal agent of change. Marx observed that by the late nineteenth century, “the simple republican formula for generating progress by directing improved technical means to societal ends was imperceptibly transformed into a quite different technocratic commitment to improving ‘technology’ as the basis and the measure of — as all but constituting — the progress of society.”

When the prophets of the Singularity preach the gospel of transhumanism, they are not abandoning the Enlightenment heritage; they are simply embracing its fullest expression. As Bruno Latour has argued, modernity has never perfectly sustained the purity of the distinctions that were the self-declared hallmarks of its own superiority. Modernity characterized itself as a movement of secularization and differentiation, what Latour, with not a little irony, labels processes of purification. Science, politics, law, religion, ethics–these are all sharply distinguished and segregated from one another in the modern world, distinguishing it from the primitive pre-modern world. But it turns out that these spheres of human experience stubbornly resist the neat distinctions modernity sought to impose. Hybridization unfolds alongside purification, and Noble’s work has demonstrated how technology, sometimes reckoned the most coldly rational of human projects, is deeply contaminated by religion, often regarded by the same people as the most irrational of human projects.

But not just any religion. Earlier I suggested that when Bogost characterizes our thinking about algorithms as “theological,” he is almost certainly assuming a particular kind of theology. This is why it is important to classify the religion of technology more precisely as a Christian heresy. It is in Western Christianity that Noble found the roots of the religion of technology, and it is in the context of a post-Christian world that it has presently flourished.

It is Christian insofar as its aspirations are like those nurtured by the Christian faith, such as the conscious persistence of a soul after the death of the body. Noble cites Daniel Crevier, who, referencing the “Judeo-Christian tradition,” suggested that “religious beliefs, and particularly the belief in survival after death, are not incompatible with the idea that the mind emerges from physical phenomena.” This is noted on the way to explaining that a machine-based material support could be found for the mind, which leads Noble to quip, “Christ was resurrected in a new body; why not a machine?” Reporting on his study of the famed Santa Fe Institute, anthropologist Stefan Helmreich observed, “Judeo-Christian stories of the creation and maintenance of the world haunted my informants’ discussions of why computers might be ‘worlds’ or ‘universes,’ …. a tradition that includes stories from the Old and New Testaments (stories of creation and salvation).”

It is a heresy insofar as it departs from traditional Christian teaching regarding the givenness of human nature, the moral dimensions of humanity’s brokenness, the gracious agency of God in the salvation of humanity, and the resurrection of the body, to name a few. Having said as much, it would seem that one could perhaps conceive of the religion of technology as an imaginative account of how God might fulfill purposes that were initially revealed in incidental, pre-scientific garb. In other words, we might frame the religion of technology not so much as a Christian heresy, but rather as (post-)Christian fan-fiction, an elaborate imagining of how the hopes articulated by the Christian faith will materialize as a consequence of human ingenuity in the absence of divine action.

______________________

Near the end of The Religion of Technology, David Noble forcefully articulated the dangers posed by a blind faith in technology. “Lost in their essentially religious reveries,” Noble warned, “the technologists themselves have been blind to, or at least have displayed blithe disregard for, the harmful ends toward which their work has been directed.” Citing another historian of technology, Noble added, “The religion of technology, in the end, ‘rests on extravagant hopes which are only meaningful in the context of transcendent belief in a religious God, hopes for a total salvation which technology cannot fulfill …. By striving for the impossible, [we] run the risk of destroying the good life that is possible.’ Put simply, the technological pursuit of salvation has become a threat to our survival.” I suspect that neither Bogost nor Lanier would disagree with Noble on this score.

There is another significant point at which the religion of technology departs from its antecedent: “The millenarian promise of restoring mankind to its original Godlike perfection–the underlying premise of the religion of technology–was never meant to be universal.” Instead, the salvation it promises is limited finally to the very few who will be able to afford it; it is for neither the poor nor the weak. Nor, it would seem, is it for those who have found a measure of joy or peace or beauty within the bounds of the human condition as we now experience it, frail as it may be.

Lastly, it is worth noting that the religion of technology appears to have no doctrine of final judgment. This is not altogether surprising given that, as Bogost warned, the divinizing of technology carries the curious effect of absolving us of responsibility for the tools that we fashion and the uses to which they are put.

I have no neat series of solutions to tie all of this up; rather, I will give the last word to Wendell Berry:

“To recover from our disease of limitlessness, we will have to give up the idea that we have a right to be godlike animals, that we are potentially omniscient and omnipotent, ready to discover ‘the secret of the universe.’ We will have to start over, with a different and much older premise: the naturalness and, for creatures of limited intelligence, the necessity, of limits. We must learn again to ask how we can make the most of what we are, what we have, what we have been given.”

Quantify Thyself

A thought in passing this morning. Here’s a screen shot that purports to be from an ad for Microsoft’s new wearable device called Band:

[Screenshot: “Microsoft Band: Read the backstory on the evolution and development of Microsoft’s new smart device,” via Windows Central]

I say “purports” because I’ve not been able to find this particular shot and caption on any official Microsoft sites. I first encountered it in this story about Band from October of last year, and I also found it posted to a Reddit thread around the same time. You can watch the official ad here.

It may be that this image is a hoax or that Microsoft decided it was a bit too disconcerting and pulled it. A more persistent sleuth should be able to determine which. Whether authentic or not, however, it is instructive.

In tweeting a link to the story in which I first saw the image, I commented: “Define ‘know,’ ‘self,’ and ‘human.'” Nick Seaver astutely replied: “that’s exactly what they’re doing, eh?”

Again, the “they” in this case appears to be a bit ambiguous. That said, the picture is instructive because it reminds us, as Seaver’s reply suggests, that more than our physical fitness is at stake in the emerging regime of quantification. If I were to expand my list of 41 questions about technology’s ethical dimensions, I would include this one: How will the use of this technology redefine my moral vocabulary? or What about myself will the use of this technology encourage me to value?

Consider all that is accepted when someone buys into the idea, even if tacitly so, that Microsoft Band will in fact deepen their knowledge of themselves. What assumptions are accepted about the nature of what it means to know and what there is to know and what can be known? What is implied about the nature of the self when we accept that a device like Band can help us understand it more effectively? We are, needless to say, rather far removed from the Delphic injunction, “Know thyself.”

It is not, of course, that I necessarily think users of Band will be so naive that they will consciously believe there is nothing more to their identity than what Band can measure. Rather, it’s that most of us do have a propensity to pay more attention to what we can measure, particularly when an element of competitiveness is introduced.

I’ll go a step further. Not only do we tend to pay more attention to what we can measure, we begin to care more about what we can measure. Perhaps that is because measurement affords us a degree of ostensible control over whatever it is that we are able to measure. It makes self-improvement tangible and manageable, but it does so, in part, by a reduction of the self to those dimensions that register on whatever tool or device we happen to be using to take our measure.

I find myself frequently coming back to one line in a poem by Wendell Berry: “We live the given life, not the planned.” Indeed, and we might also say, “We live the given life, not the quantified.”

A certain vigilance is required to remember that our often marvelous tools of measurement always achieve their precision by narrowing, sometimes radically, what they take into consideration. To reveal one dimension of the whole, they must obscure the others. The danger lies in confusing the partial representation for the whole.

Friday Links: Questioning Technology Edition

My previous post, which raised 41 questions about the ethics of technology, is turning out to be one of the most viewed on this site. That is, admittedly, faint praise, but I’m glad of it, because helping us to think about technology is why I write this blog. The post has also prompted a few valuable recommendations from readers, and I wanted to pass these along to you in case you missed them in the comments.

Matt Thomas reminded me of two earlier lists of questions we should be asking about our technologies. The first of these is Jacques Ellul’s list of 76 Reasonable Questions to Ask of Any Technology (update: see Doug Hill’s comment below about the authorship of this list.) The second is Neil Postman’s more concise list of Six Questions to Ask of New Technologies. Both are worth perusing.

Also, Chad Kohalyk passed along a link to Shannon Vallor’s module, An Introduction to Software Engineering Ethics.

Greg Lloyd provided some helpful links to the (frequently misunderstood) Amish approach to technology, including one to this IEEE article by Jameson Wetmore: “Amish Technology: Reinforcing Values and Building Communities” (PDF). In it, we read, “When deciding whether or not to allow a certain practice or technology, the Amish first ask whether it is compatible with their values.” What a radical idea; the rest of us should try it sometime! While we’re on the topic, I wrote about the Tech-Savvy Amish a couple of years ago.

I can’t remember who linked to it, but I also came across an excellent 1994 article in Ars Electronica that is composed entirely of questions about what we would today call a Smart Home: “How smart does your bed have to be, before you are afraid to go to sleep at night?”

And while we’re talking about lists, here’s a post on Kranzberg’s Six Laws of Technology and a list of 11 things I try to do, often with only marginal success, to achieve a healthy relationship with the Internet.

Enjoy these, and thanks again to those of you who provided the links.