Lethal Autonomous Weapons and Thoughtlessness

In the mid-twentieth century, Hannah Arendt wrote extensively about the critical importance of learning to think in the aftermath of a great rupture in our tradition of thought. She wrote of the desperate situation when “it began to dawn upon modern man that he had come to live in a world in which his mind and his tradition of thought were not even capable of asking adequate, meaningful questions, let alone of giving answers to its own perplexities.”

Frequently, Arendt linked this rupture in the tradition, this loss of a framework that made our thinking meaningful, to the appearance of totalitarianism in the early twentieth century. But she also recognized that the tradition had by then been unraveling for some time, and technology played a not insignificant role in this unraveling and the final rupture. In “Tradition and the Modern Age,” for example, she argues that the “priority of reason over doing, of the mind’s prescribing its rules to the actions of men” had been lost as a consequence of “the transformation of the world by the Industrial Revolution–a transformation the success of which seemed to prove that man’s doings and fabrications prescribe their rules to reason.”

Moreover, in the Prologue to The Human Condition, after reflecting on Sputnik, computer automation, and the pursuit of what we would today call bio-engineering, Arendt worried that our Thinking would prove inadequate to our technologically-enhanced Doing. “If it should turn out to be true,” she added, “that knowledge (in the modern sense of know-how) and thought have parted company for good, then we would indeed become the helpless slaves, not so much of our machines as of our know-how, thoughtless creatures at the mercy of every gadget which is technically possible, no matter how murderous it is.”

That seems as good an entry as any into a discussion of Lethal Autonomous Robots. A short Wired piece on the subject has been making the rounds the past day or two with the rather straightforward title, “We Can Now Build Autonomous Killing Machines. And That’s a Very, Very Bad Idea.” The story takes as its point of departure the recent pledge on the part of a robotics company, Clearpath Robotics, never to build “killer robots.”

Clearpath’s Chief Technology Officer, Ryan Gariepy, explained the decision: “The potential for lethal autonomous weapons systems to be rolled off the assembly line is here right now, but the potential for lethal autonomous weapons systems to be deployed in an ethical way or to be designed in an ethical way is not, and is nowhere near ready.”

Not everyone shares Gariepy’s trepidation. Writing for the blog of the National Defense Industrial Association, Sarah Sicard discussed the matter with Ronald Arkin, a dean at Georgia Tech’s School of Interactive Computing. “Unless regulated by international treaties,” Arkin believes, “lethal autonomy is inevitable.”

It’s worth pausing for a moment to explore the nature of this claim. It’s a Borg Complex claim, of course, although masked slightly by the conditional construction, but that doesn’t necessarily make it wrong. Indeed, claims of inevitability are especially plausible in the context of military technology, and it’s not hard to imagine why. Even if one nation entertained ethical reservations about a certain technology, it could never assure itself that other nations would share its qualms. Better, then, to set such reservations aside than to be outpaced on the battlefield with disastrous consequences. The force of the logic is compelling. In such a case, however, the inevitability, such as it is, does not reside in the technology per se; it resides in human nature. But even to put it that way threatens to obscure the fact that choices are being made and that they could be made otherwise. The example set by Clearpath Robotics, a conscious decision to forgo research and development on principle, only reinforces this conclusion.

But Arkin doesn’t just believe the advent of Lethal Autonomous Robots to be inevitable; he seems to think that it will be a positive good. Arkin believes that human beings are the “weak link” in the “kill chain.” The question for roboticists is this: “Can we find out ways that can make them outperform human warfighters with respect to ethical performance?” Arkin appears to be fairly certain that the answer will be a rather uncomplicated “yes.”

For a more complicated look at the issue, consider the report (PDF) on Lethal Autonomous Weapons presented to the UN’s Human Rights Council by special rapporteur Christof Heyns. The report was published in 2013 and first brought to my attention by a post on Nick Carr’s blog. The report explores a variety of arguments for and against the development and deployment of autonomous weapons systems and concludes, “There is clearly a strong case for approaching the possible introduction of LARs with great caution.” It continues:

“If used, they could have far-reaching effects on societal values, including fundamentally on the protection and the value of life and on international stability and security. While it is not clear at present how LARs could be capable of satisfying IHL and IHRL requirements in many respects, it is foreseeable that they could comply under certain circumstances, especially if used alongside human soldiers. Even so, there is widespread concern that allowing LARs to kill people may denigrate the value of life itself.”

Among the report’s more salient observations is this note of concern about unintended consequences:

“Due to the low or lowered human costs of armed conflict to States with LARs in their arsenals, the national public may over time become increasingly disengaged and leave the decision to use force as a largely financial or diplomatic question for the State, leading to the ‘normalization’ of armed conflict. LARs may thus lower the threshold for States for going to war or otherwise using lethal force, resulting in armed conflict no longer being a measure of last resort.”

As with the concern about the denigration of the value of life itself, this worry about the normalization of armed conflict is difficult to verify empirically (although US drone operations in Afghanistan, Pakistan, and the Arabian Peninsula are certainly far from irrelevant to the discussion). Consequently, such considerations tend to carry little weight when the terms of the debate are already compromised by technocratic assumptions regarding what counts as compelling reasons, proofs, or evidence.

Such assumptions appear to be all that we have left to go on in light of the rupture in the tradition of thought that Arendt described. Or, to put that a bit more precisely, it may not be all that we have left, but it is what we have gotten. We have precious little to fall back on when we begin to think about what we are doing when what we are doing involves, for instance, the fabrication of Lethal Autonomous Robots. There are no customs of thought and action, no traditions of justice, no culturally embodied wisdom to guide us, at least not in any straightforward and directly applicable fashion. We are thinking without a banister, as Arendt put it elsewhere, if we are thinking at all.

Perhaps it is because I have been reading a good bit of Arendt lately, but I’m increasingly struck by the situations we encounter, both ordinary and extraordinary, in which our default problem-solving, cost/benefit-analysis mode of thinking fails us. In such situations, we must finally decide what is undecidable and take action, action for which we can be held responsible, action for which we can only hope for forgiveness, action made meaningful by our thinking.

Arendt distinguished this mode of thinking, that which seeks meaning and is a ground for action, from that which seeks to know with certainty what is true. This helps explain, I believe, what she meant in the passage I cited above when she feared that we would become “thoughtless” and slaves to our “know-how.” We are in such cases calculating and measuring, but not thinking, or willing, or judging. Consequently, under such circumstances, we are also perpetually deferring responsibility.

Considered in this light, Lethal Autonomous Weapons threaten to become a symbol of our age; not in their clinical lethality, but in their evacuation of human responsibility from one of the most profound and terrible of actions, the taking of a human life. They will be an apt symbol for an age in which we will grow increasingly accustomed to holding algorithms responsible for all manner of failures, mistakes, and accidents, both trivial and tragic. Except, of course, that algorithms cannot be held accountable and they cannot be forgiven.

We cannot know, either exhaustively or with any degree of certainty, what the introduction of Lethal Autonomous Weapons will mean for human society, at least not by the standards of techno-scientific thinking. In the absence of such certainty, because we do not seem to know how to think or judge otherwise, they will likely be adopted and eventually deployed as a matter of seemingly banal necessity.

____________________________

Update: Dale Carrico has posted some helpful comments, particularly on Arendt.

Algorithms Who Art in Apps, Hallowed Be Thy Code

If you want to understand the status of algorithms in our collective imagination, Ian Bogost proposes the following exercise in his recent essay in the Atlantic: “The next time you see someone talking about algorithms, replace the term with ‘God’ and ask yourself if the sense changes any?”

If Bogost is right, then more often than not you will find the sense of the statement entirely unchanged. This is because, in his view, “Our supposedly algorithmic culture is not a material phenomenon so much as a devotional one, a supplication made to the computers we have allowed to replace gods in our minds, even as we simultaneously claim that science has made us impervious to religion.” Bogost goes on to say that this development is part of a “larger trend” whereby “Enlightenment ideas like reason and science are beginning to flip into their opposites.” Science and technology, he fears, “have turned into a new type of theology.”

It’s not the algorithms themselves that Bogost is targeting; it is how we think and talk about them that worries him. In fact, Bogost’s chief concern is that how we talk about algorithms is impeding our ability to think clearly about them and their place in society. This is where the god-talk comes in. Bogost deploys a variety of religious categories to characterize the present fascination with algorithms.

Bogost believes “algorithms hold a special station in the new technological temple because computers have become our favorite idols.” Later on he writes, “the algorithmic metaphor gives us a distorted, theological view of computational action.” Additionally, “Data has become just as theologized as algorithms, especially ‘big data,’ whose name is meant to elevate information to the level of celestial infinity.” “We don’t want an algorithmic culture,” he concludes, “especially if that phrase just euphemizes a corporate theocracy.” The analogy to religious belief is a compelling rhetorical move. It vividly illuminates Bogost’s key claim: the idea of an “algorithm” now functions as a metaphor that conceals more than it reveals.

He prepares the ground for this claim by reminding us of earlier technological metaphors that ultimately obscured important realities. The metaphor of the mind as computer, for example, “reaches the rank of religious fervor when we choose to believe, as some do, that we can simulate cognition through computation and achieve the singularity.” Similarly, the metaphor of the machine, which is really to say the abstract idea of a machine, yields a profound misunderstanding of mechanical automation in the realm of manufacturing. Bogost reminds us that bringing consumer goods to market still “requires intricate, repetitive human effort.” Manufacturing, as it turns out, “isn’t as machinic nor as automated as we think it is.”

Likewise, the idea of an algorithm, as it is bandied about in public discourse, is a metaphorical abstraction that obscures how various digital and analog components, including human action, come together to produce the effects we carelessly attribute to algorithms. Near the end of the essay, Bogost sums it up this way:

“the algorithm has taken on a particularly mythical role in our technology-obsessed era, one that has allowed it to wear the garb of divinity. Concepts like ‘algorithm’ have become sloppy shorthands, slang terms for the act of mistaking multipart complex systems for simple, singular ones. Of treating computation theologically rather than scientifically or culturally.”

But why does any of this matter? It matters, Bogost insists, because this way of thinking blinds us in two important ways. First, our sloppy shorthand “allows us to chalk up any kind of computational social change as pre-determined and inevitable,” allowing the perpetual deflection of responsibility for the consequences of technological change. The apotheosis of the algorithm encourages what I’ve elsewhere labeled a Borg Complex, an attitude toward technological change aptly summed up by the phrase, “Resistance is futile.” It’s a way of thinking about technology that forecloses the possibility of thinking about and taking responsibility for our choices regarding the development, adoption, and implementation of new technologies. Second, Bogost rightly fears that this “theological” way of thinking about algorithms may cause us to forget that computational systems can offer only one, necessarily limited perspective on the world. “The first error,” Bogost writes, “turns computers into gods, the second treats their outputs as scripture.”

______________________

Bogost is right to challenge the quasi-religious reverence sometimes exhibited toward technology. It is, as he fears, an impediment to clear thinking. Indeed, he is not the only one calling for the secularization of our technological endeavors. Jaron Lanier has spoken at length about the introduction of religious thinking into the field of AI. In a recent interview, Lanier expressed his concerns this way:

“There is a social and psychological phenomenon that has been going on for some decades now: A core of technically proficient, digitally-minded people reject traditional religions and superstitions. They set out to come up with a better, more scientific framework. But then they re-create versions of those old religious superstitions! In the technical world these superstitions are just as confusing and just as damaging as before, and in similar ways.”

While Lanier’s concerns are similar to Bogost’s, it may be worth noting that Lanier’s use of religious categories is rather more concrete. As far as I can tell, Bogost deploys a religious frame as a rhetorical device, and rather effectively so. Lanier’s criticisms, however, have been aroused by religiously intoned expressions of a desire for transcendence voiced by denizens of the tech world themselves.

But such expressions are hardly new, nor are they relegated to the realm of AI. In The Religion of Technology: The Divinity of Man and the Spirit of Invention, David Noble rightly insisted that “modern technology and modern faith are neither complements nor opposites, nor do they represent succeeding stages of human development. They are merged, and always have been, the technological enterprise being, at the same time, an essentially religious endeavor.”

So that no one would misunderstand his meaning, he added,

“This is not meant in a merely metaphorical sense, to suggest that technology is similar to religion in that it evokes religious emotions of omnipotence, devotion, and awe, or that it has become a new (secular) religion in and of itself, with its own clerical caste, arcane rituals, and articles of faith. Rather it is meant literally and historically, to indicate that modern technology and religion have evolved together and that, as a result, the technological enterprise has been and remains suffused with religious belief.”

Along with chapters on the space program, atomic weapons, and biotechnology, Noble devoted a chapter to the history of AI, titled “The Immortal Mind.” Noble found that AI research had often been inspired by a curious fixation on the achievement of god-like, disembodied intelligence as a step toward personal immortality. Many of the sentiments and aspirations that Noble identifies in figures as diverse as George Boole, Claude Shannon, Alan Turing, Edward Fredkin, Marvin Minsky, Daniel Crevier, Danny Hillis, and Hans Moravec–all of them influential theorists and practitioners in the development of AI–find their consummation in the Singularity movement. The movement envisions a time (2045 is frequently suggested) when the distinction between machines and humans will blur and humanity as we know it will be eclipsed. Before Ray Kurzweil, the chief prophet of the Singularity, wrote about “spiritual machines,” Noble had astutely anticipated how the trajectories of AI, Internet, Virtual Reality, and Artificial Life research were all converging on the age-old quest for immortal life. Noble, who died in 2010, must have read the work of Kurzweil and company as a remarkable validation of his thesis in The Religion of Technology.

Interestingly, the sentiments that Noble documented alternated between the heady thrill of creating non-human Minds and non-human Life, on the one hand, and, on the other, the equally heady thrill of pursuing the possibility of radical life-extension and even immortality. Frankenstein meets Faust, we might say. Humanity plays god in order to bestow god’s gifts on itself. Noble cites one Artificial Life researcher who explains, “I feel like God; in fact, I am God to the universes I create,” and another who declares, “Technology will soon enable human beings to change into something else altogether [and thereby] escape the human condition.” Ultimately, these two aspirations come together into a grand techno-eschatological vision, expressed here by Hans Moravec:

“Our speculation ends in a supercivilization, the synthesis of all solar system life, constantly improving and extending itself, spreading outward from the sun, converting non-life into mind …. This process might convert the entire universe into an extended thinking entity … the thinking universe … an eternity of pure cerebration.”

Little wonder that Pamela McCorduck, who has been chronicling the progress of AI since the early 1980s, can say, “The enterprise is a god-like one. The invention–the finding within–of gods represents our reach for the transcendent.” And, lest we forget where we began, a more earth-bound, but no less eschatological, hope was expressed by Edward Fredkin in his MIT and Stanford courses on “saving the world.” He hoped for a “global algorithm” that “would lead to peace and harmony.” I would suggest that similar aspirations are expressed by those who believe that Big Data will yield a God’s-eye view of human society, providing wisdom and guidance that would be otherwise inaccessible to ordinary human forms of knowing and thinking.

Perhaps this should not be altogether surprising. As the old saying has it, the Grand Canyon wasn’t formed by someone dragging a stick. This is just a way of saying that causes must be commensurate to the effects they produce. Grand technological projects such as space flight, the harnessing of atomic energy, and the pursuit of artificial intelligence are massive undertakings requiring stupendous investments of time, labor, and resources. What kind of motives are sufficient to generate those sorts of expenditures? You’ll need something more than whim, to put it mildly. You may need something akin to religious devotion. Would we have attempted to put a man on the moon apart from the ideological frame provided by the Cold War, which cast space exploration as a field of civilizational battle for survival? Consider, as a more recent example, what drives Elon Musk’s pursuit of interplanetary space travel.

______________________

Without diminishing the criticisms offered by either Bogost or Lanier, Noble’s historical investigation into the roots of divinized or theologized technology reminds us that the roots of the disorder run much deeper than we might initially imagine. Noble’s own genealogy traces the origin of the religion of technology to the turn of the first millennium. It emerges out of a volatile mix of millenarian dreams, apocalyptic fervor, mechanical innovation, and monastic piety. Its evolution proceeds apace through the Renaissance, finding one of its most ardent prophets in the Elizabethan statesman Francis Bacon. Even through the Enlightenment, the religion of technology flourished. In fact, the Enlightenment may have been a decisive moment in the history of the religion of technology.

In the essay with which we began, Ian Bogost framed the emergence of techno-religious thinking as a departure from the ideals of reason and science associated with the Enlightenment. This is not altogether incidental to Bogost’s argument. When he talks about the “theological” thinking that plagues our understanding of algorithms, Bogost is not working with a neutral, value-free, all-purpose definition of what constitutes the religious or the theological; there’s almost certainly no such definition available. It wouldn’t be too far from the mark, I think, to say that Bogost is working with what we might classify as an Enlightenment understanding of Religion, one that characterizes it as Reason’s Other, i.e. as a-rational if not altogether irrational, superstitious, authoritarian, and pernicious. For his part, Lanier appears to be working with similar assumptions.

Noble’s work complicates this picture, to say the least. The Enlightenment did not, as it turns out, vanquish Religion, driving it far from the pure realms of Science and Technology. In fact, to the degree that the radical Enlightenment’s assault on religious faith was successful, it empowered the religion of technology. To put this another way, the Enlightenment–and, yes, we are painting with broad strokes here–did not do away with the notions of Providence, Heaven, and Grace. Rather, the Enlightenment re-named these Progress, Utopia, and Technology respectively. To borrow a phrase, the Enlightenment immanentized the eschaton. If heaven had been understood as a transcendent goal achieved with the aid of divine grace within the context of the providentially ordered unfolding of human history, it became a Utopian vision, a heaven on earth, achieved by the ministrations of Science and Technology within the context of Progress, an inexorable force driving history toward its Utopian consummation.

As historian Leo Marx has put it, the West’s “dominant belief system turned on the idea of technical innovation as a primary agent of progress.” Indeed, the further Western culture proceeded down the path of secularization as it is traditionally understood, the greater the emphasis on technology as the principal agent of change. Marx observed that by the late nineteenth century, “the simple republican formula for generating progress by directing improved technical means to societal ends was imperceptibly transformed into a quite different technocratic commitment to improving ‘technology’ as the basis and the measure of — as all but constituting — the progress of society.”

When the prophets of the Singularity preach the gospel of transhumanism, they are not abandoning the Enlightenment heritage; they are simply embracing its fullest expression. As Bruno Latour has argued, modernity has never perfectly sustained the purity of the distinctions that were the self-declared hallmarks of its own superiority. Modernity characterized itself as a movement of secularization and differentiation, what Latour, with not a little irony, labels processes of purification. Science, politics, law, religion, ethics–these are all sharply distinguished and segregated from one another in the modern world, distinguishing it from the primitive pre-modern world. But it turns out that these spheres of human experience stubbornly resist the neat distinctions modernity sought to impose. Hybridization unfolds alongside purification, and Noble’s work has demonstrated how technology, sometimes reckoned the most coldly rational of human projects, is deeply contaminated by religion, often regarded by the same people as the most irrational of human projects.

But not just any religion. Earlier I suggested that when Bogost characterizes our thinking about algorithms as “theological,” he is almost certainly assuming a particular kind of theology. This is why it is important to classify the religion of technology more precisely as a Christian heresy. It is in Western Christianity that Noble found the roots of the religion of technology, and it is in the context of a post-Christian world that it has presently flourished.

It is Christian insofar as its aspirations are like those nurtured by the Christian faith, such as the conscious persistence of a soul after the death of the body. Noble cites Daniel Crevier, who, referencing the “Judeo-Christian tradition,” suggested that “religious beliefs, and particularly the belief in survival after death, are not incompatible with the idea that the mind emerges from physical phenomena.” This is noted on the way to explaining that a machine-based material support could be found for the mind, which leads Noble to quip, “Christ was resurrected in a new body; why not a machine?” Reporting on his study of the famed Santa Fe Institute in New Mexico, anthropologist Stefan Helmreich observed, “Judeo-Christian stories of the creation and maintenance of the world haunted my informants’ discussions of why computers might be ‘worlds’ or ‘universes,’ …. a tradition that includes stories from the Old and New Testaments (stories of creation and salvation).”

It is a heresy insofar as it departs from traditional Christian teaching regarding the givenness of human nature, the moral dimensions of humanity’s brokenness, the gracious agency of God in the salvation of humanity, and the resurrection of the body, to name a few. Having said as much, it would seem that one could perhaps conceive of the religion of technology as an imaginative account of how God might fulfill purposes that were initially revealed in incidental, pre-scientific garb. In other words, we might frame the religion of technology not so much as a Christian heresy, but rather as (post-)Christian fan-fiction, an elaborate imagining of how the hopes articulated by the Christian faith will materialize as a consequence of human ingenuity in the absence of divine action.

______________________

Near the end of The Religion of Technology, David Noble forcefully articulated the dangers posed by a blind faith in technology. “Lost in their essentially religious reveries,” Noble warned, “the technologists themselves have been blind to, or at least have displayed blithe disregard for, the harmful ends toward which their work has been directed.” Citing another historian of technology, Noble added, “The religion of technology, in the end, ‘rests on extravagant hopes which are only meaningful in the context of transcendent belief in a religious God, hopes for a total salvation which technology cannot fulfill …. By striving for the impossible, [we] run the risk of destroying the good life that is possible.’ Put simply, the technological pursuit of salvation has become a threat to our survival.” I suspect that neither Bogost nor Lanier would disagree with Noble on this score.

There is another significant point at which the religion of technology departs from its antecedent: “The millenarian promise of restoring mankind to its original Godlike perfection–the underlying premise of the religion of technology–was never meant to be universal.” Instead, the salvation it promises is limited finally to the very few who will be able to afford it; it is for neither the poor nor the weak. Nor, it would seem, is it for those who have found a measure of joy or peace or beauty within the bounds of the human condition as we now experience it, frail as it may be.

Lastly, it is worth noting that the religion of technology appears to have no doctrine of final judgment. This is not altogether surprising given that, as Bogost warned, the divinizing of technology carries the curious effect of absolving us of responsibility for the tools that we fashion and the uses to which they are put.

I have no neat series of solutions to tie all of this up; rather I will give the last word to Wendell Berry:

“To recover from our disease of limitlessness, we will have to give up the idea that we have a right to be godlike animals, that we are potentially omniscient and omnipotent, ready to discover ‘the secret of the universe.’ We will have to start over, with a different and much older premise: the naturalness and, for creatures of limited intelligence, the necessity, of limits. We must learn again to ask how we can make the most of what we are, what we have, what we have been given.”

Quantify Thyself

A thought in passing this morning. Here’s a screenshot that purports to be from an ad for Microsoft’s new wearable device called Band:

[Image: screenshot from a Microsoft Band ad, via Windows Central]

I say “purports” because I’ve not been able to find this particular shot and caption on any official Microsoft sites. I first encountered it in this story about Band from October of last year, and I also found it posted to a Reddit thread around the same time. You can watch the official ad here.

It may be that this image is a hoax or that Microsoft decided it was a bit too disconcerting and pulled it. A more persistent sleuth should be able to determine which. Whether authentic or not, however, it is instructive.

In tweeting a link to the story in which I first saw the image, I commented: “Define ‘know,’ ‘self,’ and ‘human.’” Nick Seaver astutely replied: “that’s exactly what they’re doing, eh?”

Again, the “they” in this case appears to be a bit ambiguous. That said, the picture is instructive because it reminds us, as Seaver’s reply suggests, that more than our physical fitness is at stake in the emerging regime of quantification. If I were to expand my list of 41 questions about technology’s ethical dimensions, I would include this one: How will the use of this technology redefine my moral vocabulary? Or: What about myself will the use of this technology encourage me to value?

Consider all that is accepted when someone buys into the idea, even if tacitly so, that Microsoft Band will in fact deepen their knowledge of themselves. What assumptions are accepted about the nature of what it means to know and what there is to know and what can be known? What is implied about the nature of the self when we accept that a device like Band can help us understand it more effectively? We are, needless to say, rather far removed from the Delphic injunction, “Know thyself.”

It is not, of course, that I necessarily think users of Band will be so naive that they will consciously believe there is nothing more to their identity than what Band can measure. Rather, it’s that most of us do have a propensity to pay more attention to what we can measure, particularly when an element of competitiveness is introduced.

I’ll go a step further. Not only do we tend to pay more attention to what we can measure, we begin to care more about what we can measure. Perhaps that is because measurement affords us a degree of ostensible control over whatever it is that we are able to measure. It makes self-improvement tangible and manageable, but it does so, in part, by a reduction of the self to those dimensions that register on whatever tool or device we happen to be using to take our measure.

I find myself frequently coming back to one line in a poem by Wendell Berry: “We live the given life, not the planned.” Indeed, and we might also say, “We live the given life, not the quantified.”

A certain vigilance is required to remember that our often marvelous tools of measurement always achieve their precision by narrowing, sometimes radically, what they take into consideration. To reveal one dimension of the whole, they must obscure the others. The danger lies in confusing the partial representation for the whole.

What Do We Want, Really?

I was in Amish country last week. Several times a day I heard the clip-clop of horse hooves and the whirring of buggy wheels coming down the street and then receding into the distance–a rather soothing Doppler effect. While there, I was reminded of an anecdote about the Amish relayed by a reader in the comments to a recent post:

I once heard David Kline tell of Protestant tourists sight-seeing in an Amish area. An Amishman is brought on the bus and asked how the Amish differ from other Christians. First, he explained the similarities: all have DNA, wear clothes (even if in different styles), and like to eat good food.

Then the Amishman asked: “How many of you have a TV?”

Most, if not all, the passengers raised their hands.

“How many of you believe your children would be better off without TV?”

Most, if not all, the passengers raised their hands.

“How many of you, knowing this, will get rid of your TV when you go home?”

No hands were raised.

“That’s the difference between the Amish and others,” the man concluded.

I like the Amish. As I’ve said before, the Amish are remarkably tech-savvy. They understand that technologies have consequences, and they are determined to think very hard about how different technologies will affect the life of their communities. Moreover, they are committed to sacrificing the benefits a new technology might bring if they deem the costs too great to bear. This takes courage and resolve. We may not agree with all of the choices made by Amish communities, but it seems to me that we must admire both their resolution to think about what they are doing and their willingness to make the sacrifices necessary to live according to their principles.


The Amish are a kind of sign to us, especially as we come upon the start of a new year and consider, again, how we might better live our lives. Let me clarify what I mean by calling the Amish a sign. It is not that their distinctive way of life points the way to the precise path we must all follow. Rather, it is that they remind us of the costs we must be prepared to incur and the resoluteness we must be prepared to demonstrate if we are to live a principled life.

It is perhaps a symptom of our disorder that we seem to believe that all can be made well merely by our making a few better choices along the way. Rarely do we imagine that what might be involved in the realization of our ideals is something more radical and more costly. It is easier for us to pretend that all that is necessary are a few simple tweaks and minor adjustments to how we already conduct our lives, nothing that will make us too uncomfortable. If and when it becomes impossible to sustain that fiction, we take comfort in fatalism: nothing can ever change, really, and so it is not worth trying to change anything at all.

What is often the case, however, is that we have not been honest with ourselves about what it is that we truly value. Perhaps an example will help. My wife and I frequently discuss what, for lack of a better way of putting it, I’ll call the ethics of eating. I will not claim to have thought very deeply, yet, about all of the related issues, but I can say that we care about what has been involved in getting food to our table. We care about the labor involved, the treatment of animals, and the use of natural resources. We care, as well, about the quality of the food and about the cultural practices of cooking and eating. I realize, of course, that it is rather fashionable to care about such things, and I can only hope that our caring is not merely a matter of fashion. I do not think it is.

But it is another thing altogether for us to consider how much we really care about these things. Acting on principle in this arena is not without its costs. Do we care enough to bear those costs? Do we care enough to invest the time necessary to understand all the relevant complex considerations? Are we prepared to spend more money? Are we willing to sacrifice convenience? And then it hits me that what we are talking about is not simply making a different consumer choice here and there. If we really care about the things we say we care about, then we are talking about changing the way we live our lives.

In cases like this, and they are many, I’m reminded of a paragraph in sociologist James Hunter’s book about varying approaches to moral education in American schools. “We say we want the renewal of character in our day,” Hunter writes,

“but we do not really know what to ask for. To have a renewal of character is to have a renewal of a creedal order that constrains, limits, binds, obligates, and compels. This price is too high for us to pay. We want character without conviction; we want strong morality but without the emotional burden of guilt or shame; we want virtue but without particular moral justifications that invariably offend; we want good without having to name evil; we want decency without the authority to insist upon it; we want moral community without any limitations to personal freedom. In short, we want what we cannot possibly have on the terms that we want it.”

You may not agree with Hunter about the matter of moral education, but it is his conclusion that I want you to note: we want what we cannot possibly have on the terms that we want it.

This strikes me as being a widely applicable diagnosis of our situation. Across so many different domains of our lives, private and public, this dynamic seems to hold. We say we want something, often something very noble and admirable, but in reality we are not prepared to pay the costs required to obtain the thing we say we want. We are not prepared to be inconvenienced. We are not prepared to reorder our lives. We may genuinely desire that noble, admirable thing, whatever it may be; but we want some other, less noble thing more.

At this point, I should probably acknowledge that many of the problems we face as individuals and as a society are not the sort that would be solved by our own individual thoughtfulness and resolve, no matter how heroic. But very few problems, private or public, will be solved without an honest reckoning of the price to be paid and the work to be done.

So what then? I’m presently resisting the temptation to now turn this short post toward some happy resolution, or at least toward some more positive considerations. Doing so would be disingenuous. Mostly, I simply wanted to draw our attention, mine no less than yours, toward the possibly unpleasant work of counting the costs. As we thought about the new year looming before us and contemplated how we might live it better than the last, I wanted us to entertain the possibility that what will be required of us to do so might be nothing less than a fundamental reordering of our lives. At the very least, I wanted to impress upon myself the importance of finding the space to think at length and the courage to act.

Saturday Evening Links

Below are a few links for your reading pleasure this weekend.

Researcher Believes 3D Printing May Lead to the Creation of Superhuman Organs Providing Humans with New Abilities: “This God-like ability will be made possible thanks in part to the latest breakthroughs in bioprinting. If companies and researchers are coming close to having the ability to 3D print and implant entire organs, then why wouldn’t it be possible to create our own unique organs, which provide us with superhuman abilities?”

Future perfect: how the Victorians invented the future: “It was only around the beginning of the 1800s, as new attitudes towards progress, shaped by the relationship between technology and society, started coming together, that people started thinking about the future as a different place, or an undiscovered country – an idea that seems so familiar to us now that we often forget how peculiar it actually is.”

Robotic Rape and Robotic Child Sexual Abuse: Should they be criminalised? Paper by John Danaher: “Soon there will be sex robots. The creation of such devices raises a host of social, legal and ethical questions. In this article, I focus in on one of them. What if these sex robots are deliberately designed and used to replicate acts of rape and child sexual abuse? Should the creation and use of such robots be criminalised, even if no person is harmed by the acts performed? I offer an argument for thinking that they should be.”

Enthusiasts and Skeptics Debate Artificial Intelligence: “… the Singularitarians’ belief that we’re biological machines on the verge of evolving into not entirely biological super-machines has a distinctly religious fervor and certainty. ‘I think we are going to start to interconnect as a human species in a fashion that is intimate and magical,’ Diamandis told me. ‘What I would imagine in the future is a meta-intelligence where we are all connected by the Internet [and] achieve a new level of sentience. . . . Your readers need to understand: It’s not stoppable. It doesn’t matter what they want. It doesn’t matter how they feel.'”

Artificial Intelligence Isn’t a Threat—Yet: “The trouble is, nobody yet knows what that oversight should consist of. Though AI poses no immediate existential threat, nobody in the private sector or government has a long-term solution to its potential dangers. Until we have some mechanism for guaranteeing that machines never try to replace us, or relegate us to zoos, we should take the problem of AI risk seriously.”

Is it okay to torture or murder a robot?: “What’s clear is that there is a spectrum of “aliveness” in robots, from basic simulations of cute animal behaviour, to future robots that acquire a sense of suffering. But as Darling’s Pleo dinosaur experiment suggested, it doesn’t take much to trigger an emotional response in us. The question is whether we can – or should – define the line beyond which cruelty to these machines is unacceptable. Where does the line lie for you? If a robot cries out in pain, or begs for mercy? If it believes it is hurting? If it bleeds?”

A couple of housekeeping notes. Reading Frankenstein posts will resume at the start of next week. Also, you may have noticed that an Index for the blog is in progress. I’ve always wanted to find a way to make older posts more accessible, so I’ve settled on a selective index for People and Topics. You can check it out by clicking the “Index” tab above.

Cheers!