The Conscience of a Machine

Recently, Gary Marcus predicted that within the next two to three decades we would enter an era “in which it will no longer be optional for machines to have ethical systems.” Marcus invites us to imagine the following driverless car scenario: “Your car is speeding along a bridge at fifty miles per hour when an errant school bus carrying forty innocent children crosses its path. Should your car swerve, possibly risking the life of its owner (you), in order to save the children, or keep going, putting all forty kids at risk?”

In this scenario, a variation of the trolley car problem, the computer operating the car would need to make a decision (although I suspect putting it that way is an anthropomorphism). Were a human being called upon to make such a decision, it would be considered a choice of moral consequence. Consequently, writing about Marcus’ piece, Nicholas Carr concluded, “We don’t even really know what a conscience is, but somebody’s going to have to program one nonetheless.”

Of course, there is a sense in which autonomous machines of this sort are not really ethical agents. To speak of their needing a conscience strikes me as a metaphorical use of language. The operation of their “conscience” or “ethical system” will not really resemble what has counted as moral reasoning or even moral intuition among human beings. They will do as they are programmed to do. The question is, What will they be programmed to do in such circumstances? What ethical system will animate the programming decisions? Will driverless cars be Kantians, obeying one rule invariably; or will they be Benthamites, calculating the greatest good for the greatest number?
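To make the contrast concrete, here is a purely illustrative sketch in Python. It does not reflect how any actual autonomous vehicle is programmed; the rule flag, the options, and the expected-harm figures are invented placeholders, meant only to show how differently the two “consciences” could be encoded.

```python
# A deliberately crude illustration of the two programming "consciences"
# contrasted above. Nothing here resembles a real driverless-car system;
# the rule, the options, and the harm estimates are invented placeholders.

def kantian_choice(options):
    """Obey one rule invariably, whatever rule the programmers have chosen:
    any option flagged as violating it is simply off the table."""
    permissible = [o for o in options if not o["violates_rule"]]
    # If every option violates the rule, default to the first option listed.
    return permissible[0] if permissible else options[0]

def benthamite_choice(options):
    """Calculate the greatest good for the greatest number: pick the option
    with the lowest total expected harm, whoever happens to bear it."""
    return min(options, key=lambda o: o["expected_harm"])

# Marcus's bridge scenario reduced to two caricatured options. Whether
# "swerve" violates "the rule" depends entirely on which rule is chosen;
# here the rule is assumed to forbid deliberately sacrificing the owner.
options = [
    {"name": "keep going", "violates_rule": False, "expected_harm": 40},
    {"name": "swerve", "violates_rule": True, "expected_harm": 1},
]

print(kantian_choice(options)["name"])     # keep going
print(benthamite_choice(options)["name"])  # swerve
```

The point is simply that the same situation yields opposite actions depending on which framework the programmers encode, and that neither function does anything we would recognize as deliberation, let alone conscience.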

There is an interesting sense, though, in which an autonomous machine of the sort envisioned in these scenarios is an agent, even if we might hesitate to call it an ethical agent. What’s interesting is not that a machine may cause harm or even death. We’ve been accustomed to this for generations. But in such cases, a machine has ordinarily malfunctioned, or else some human action was at fault. In the scenarios proposed by Marcus, an action that causes harm would be the result of a properly functioning machine, and it would not have been the result of direct human action. The machine would have decided to take an action that resulted in harm, even if it was in some sense the lesser harm. In fact, such machines might rightly be called the first machines capable of doing harm without malfunctioning.

It seems all but certain that our world will one day be widely populated by autonomous machines of the sort that will require a “conscience” or “ethical systems.” Determining what moral calculus should inform such “moral machines” is problematic enough. But there is another, more subtle danger that should concern us.

Such machines seem to enter into the world of morally consequential action that has until now been occupied exclusively by human beings, but they do so without the capacity to be burdened by the weight of the tragic, to be troubled by guilt, or to be held to account in any meaningful and satisfying way. They will, in other words, lose no sleep over their decisions, whatever those may be.

We have an unfortunate tendency to adapt, under the spell of metaphor, our understanding of human experience to the characteristics of our machines. Take memory, for example. Having first decided, by analogy, to call a computer’s capacity to store information “memory,” we then reversed the direction of the metaphor and came to understand human memory by analogy to computer “memory,” i.e., as mere storage. So now we casually talk of offloading the work of memory, or of Google being a better substitute for human memory, without any thought for how human memory is related to perception, understanding, creativity, identity, and more.

I can too easily imagine a similar scenario wherein we get into the habit of calling the algorithms by which machines are programmed to make ethically significant decisions the machine’s “conscience,” and then turn around, reverse the direction of the metaphor, and come to understand human conscience by analogy to what the machine does. This would result in an impoverishment of the moral life.

Will we then begin to think of the tragic sense, guilt, pity, and the necessity of wrestling with moral decisions as bugs in our “ethical systems”? Will we envy the literally ruth-less efficiency of “moral machines”? Will we prefer the Huxleyan comfort of a diminished moral sense, or will we claim the right to be unhappy, to be troubled by a fully realized human conscience?

This is, of course, not merely a matter of making the “right” decisions. Part of what makes programming “ethical systems” troublesome is precisely our inability to arrive at a consensus about what the right decision in such cases would be. But even if we could arrive at some sort of consensus, the risk I’m envisioning would remain. The moral weightiness of human existence does not reside solely in the moment of decision; it extends beyond that moment to a life burdened by the consequences of the action taken. It is precisely this “living with” our decisions that a machine conscience cannot know.

In his Tragic Sense of Life, Miguel de Unamuno relates the following anecdote: “A pedant who beheld Solon weeping for the death of a son said to him, ‘Why do you weep thus, if weeping avails nothing?’ And the sage answered him, ‘Precisely for that reason–because it does not avail.'”

Were we to conform our conscience to the “conscience” of our future machines, we would cease to shed such tears, and our humanity lies in Solon’s tears.

_______________________________________________

Also consider Evan Selinger’s excellent and relevant piece, “Would Outsourcing Morality to Technology Diminish Our Humanity?”

6 thoughts on “The Conscience of a Machine”

  1. It seems like reductionism is relevant to these considerations. In the case of the natural sciences, one can take all sorts of things and say that they are “just chemistry”. Life is a sort of non-equilibrium thermodynamics, or things of that sort. In this case, someone says that conscience is just an algorithm. I guess there are two ways one might approach the feeling of impoverishment one has when something like this is said. The first is to deny that the reduced object has lost any of its interest: “great!”, one might say, “I think molecules are so interesting!!”, or, in the case of algorithms, one might turn it around and say, “if algorithms include consciences, then they must be much, much more interesting than I ever thought!”. The other approach is to stare flatly back at the person who makes a statement about a “bug in my conscience” and ask them to elaborate: “In what way is a conscience an algorithm? Can you really describe everything that we mean by conscience in terms of algorithms?” Most likely they will quickly admit that they don’t really know how to do this, and that what they are calling a conscience is a much simplified thing.

    On the topic of impoverishment, it reminds me of this quote from Richard Feynman, http://www.goodreads.com/quotes/184384-i-have-a-friend-who-s-an-artist-and-has-sometimes in which he finds that detailed analysis only adds to the beauty of a phenomenon; he doesn’t see how it can subtract. I think he’s sort of willfully blind here, but his enthusiasm is something one could try to aspire to.

    1. Boaz, good points here. I think a serious conversation about these sorts of claims, the kind that gets started with the questions you suggest, would go a long way toward halting the slide that worries me. I tend to think, though, that such conversations don’t happen as often as they should. The metaphors (and for the record, I am a fan of metaphors) casually slip into ordinary usage, and we’re not likely to question their assumptions or recognize how the premises they invite us to tacitly accept go on to shape our thinking.

      Nice quotation from Feynman; I think you pretty much nailed it with your assessment of it.

  2. Another point. You write:
    “It is precisely this “living with” our decisions that a machine conscience cannot know.”
    I agree that most foreseeable independent machines we may put out in the world would likely have a pretty impoverished inner life, and hence impoverished meaning-making abilities. However, they do still “live with” their decisions in a minimal sense. If they have a programmed-in sense of self-preservation, for example, then there could be a sort of weighing of how experiences and choices affect their ongoing likelihood of continuing to exist in a non-broken way.

    It’s true that the very idea of an inner life of a machine gets you pretty deep into the debates about artificial intelligence. If you remove this worry about the degradation of our own moral life, it’s not obvious that the field of robot ethics would be so awful. But I do see where you’re coming from with this worry that we would apply robot ethics to ourselves, with many likely deadening results.
