Perhaps you’ve seen the clip below in which a dog-like robot developed by Boston Dynamics, a Google-owned robotics company, receives a swift kick and manages to maintain its balance:
I couldn’t resist tweeting that clip with this text: “The mechanical Hound slept but did not sleep, lived but did not live in … a dark corner of the fire house.” That line, of course, is from Ray Bradbury’s Fahrenheit 451, in which the mechanical Hound is deployed to track down dissidents. The apt association was first suggested to me a few months back by a reader’s email occasioned by an earlier Boston Dynamics robot.
My glib tweet aside, many have found the clip disturbing for a variety of reasons. One summary of the concerns can be found in a CNN piece by Phoebe Parke titled, “Is It Cruel to Kick a Robot Dog?” (via Mary Chayko). That question reminded me of a 2013 essay by Richard Fisher posted at BBC Future, “Is It OK to Torture or Murder a Robot?”
Both articles discuss our propensity to anthropomorphize non-human entities and artifacts. Looked at in that way, the ethical concerns seem misplaced if not altogether silly. Indeed, according to one AI researcher quoted by Parke, “The only way it’s unethical is if the robot could feel pain.” Since a robot cannot feel pain, the reasoning goes, there is nothing unethical about the way we treat robots.
But is that really all that needs to be said about the ethical implications?
Consider these questions raised by Fisher:
“To take another example: if a father is torturing a robot in front of his 4-year-old son, would that be acceptable? The child can’t be expected to have the sophisticated understanding of adults. Torturing a robot teaches them that acts that cause suffering – simulated or not – are OK in some circumstances.
Or to take it to an extreme: imagine if somebody were to take one of the childlike robots already being built in labs, and sell it to a paedophile who planned to live out their darkest desires. Should a society allow this to happen?
Such questions about apparently victimless evil are already playing out in the virtual world. Earlier this year, the New Yorker described the moral quandaries raised when an online forum discussing Grand Theft Auto asked players if rape was acceptable inside the game. One replied: ‘I want to have the opportunity to kidnap a woman, hostage her, put her in my basement and rape her everyday, listen to her crying, watching her tears.’ If such unpleasant desires could be actually lived with a physical robotic being that simulates a victim, it may make it more difficult to tolerate.”
These are challenging questions that, to my mind, expose the inadequacy of thinking about the ethics of technology, or ethics more broadly, from a strictly instrumental perspective.
Recently, philosopher Charlie Huenemann offered a similarly provocative reflection on killing dogs in Minecraft, which led him to consider, in a manner I found helpful, the moral standing of the attachments we form to objects, whether material or virtual. Here are his concluding paragraphs:
The point is that we form attachments to things that may have no feelings or rights whatsoever, but by forming attachments to them, they gain some moral standing. If you really care about something, then I have at least some initial reason to be mindful of your concern. (Yes, lots of complications can come in here – “What if I really care for the fire that is now engulfing your home?” – but the basic point stands: there is some initial reason, though not necessarily a final or decisive one.) I had some attachment to my Minecraft dogs, which is why I felt sorry when they died. Had you come along in a multiplayer setting and chopped them to death for the sheer malicious pleasure of doing so, I could rightly claim that you did something wrong.
Moreover, we can also speak of attachments – even to virtual objects – that we should form, just as part of being good people. Imagine if I were to gain a Minecraft dog that accompanied me on many adventures. I even offer it rotten zombie flesh to eat on several occasions. But then one day I tire of it and chop it into nonexistence. I think most of us would be surprised: “Why did you do that? You had it a long time, and even took care of it. Didn’t you feel attached to it?” Suppose I say, “No, no attachment at all”. “Well, you should have”, we would mumble. It just doesn’t seem right not to have felt some attachment, even if it was overcome by some other concern. “Yes, I was attached to it, but it was getting in the way too much”, would have been at least more acceptable as a reply. (“Still, you didn’t have to kill it. You could have just clicked on it to sit forever….”)
The first of my 41 questions about the ethics of technology was a simple one: What sort of person will the use of this technology make of me?
It’s a simple question, but one we often fail to ask because we assume that ethical considerations apply only to what people do with technology, to the acts themselves. It is a question, I think, that helps us imagine the moral implications of willful acts of virtual harm.
Of course, it is also worth asking, “What sort of person does my use of this technology reveal me to be?”
7 thoughts on “On the Moral Implications of Willful Acts of Virtual Harm”
As I was reading this I kept coming back to the same touchstone: if you don’t cry when Old Yeller dies, you lack compassion and are therefore not a person I want to know. I know he’s fictional, but that doesn’t matter… a decent person will at least have to fight the tears!
I enjoyed the post. I may post more thoughts later, but for now, I’m reminded of how upset my 10-year-old nephew gets when my brother-in-law talks badly to Siri. He takes it really personally, and it made me realize how fuzzy the artificial/biological distinction can be for someone his age in some circumstances.
Fascinating post. I think you are right, Michael, in that we have to think more carefully about where morality resides: in the act or in the wish to carry out the act. Interestingly, I was, just prior to reading your post, revisiting some writing I did about David Adams Richards, a Canadian fiction writer, who, in his stories, explores the idea of sin — defining it (I think) as a failure to acknowledge our innate ability to know right from wrong, or a betrayal of this knowledge, in the interests of what we think we want. These ideas have some application here too.
If we believe that empathy is one of the fundamental concepts defining humanity then perhaps the question is not “What sort of person will the use of this technology make of me?” so much as it is “What does my use of this technology reveal about the status of my psychological/moral health as a human being?” Your post has stirred up all kinds of questions for me in terms of what triggers our emotional responses to biological and artificial life forms. Does our capacity, perhaps propensity, to anthropomorphize non-human entities rest in a pre-existing capacity for empathy toward human beings and other living things? It seems that there is a great deal of variance among humans in terms of the ability to empathize with others. If only we could find the empathy-stimulating hormone….
Reblogged this on no sign of it and commented:
Very timely post for me, as I have been working on an essay about the robot hero of Japanese cartoons, Astro Boy. Similar questions have entered my mind, and someone else has recently asked me: how far can we extend human empathy to non-human entities? And what does this tell us of those who cannot extend such empathy very far?