The Facebook Experiment, Briefly Noted

More than likely, you’ve recently heard about Facebook’s experiment in the manipulation of user emotions. I know, Facebook as a whole is an experiment in the manipulation of user emotions. Fair enough, but this was a more pointed experiment that involved manipulating what users see in their News Feeds. Here is how the article in the Proceedings of the National Academy of Sciences summarized the significance of the findings:

“We show, via a massive (N = 689,003) experiment on Facebook, that emotional states can be transferred to others via emotional contagion, leading people to experience the same emotions without their awareness. We provide experimental evidence that emotional contagion occurs without direct interaction between people (exposure to a friend expressing an emotion is sufficient), and in the complete absence of nonverbal cues.”

Needless to say (well, except to Facebook and the researchers involved), this massive psychological experiment raises all sorts of ethical questions and concerns. Here are some of the more helpful pieces I’ve found on the experiment and its implications:

Update: Here are two more worth considering:

Those six pieces should give you a good sense of the variety of issues involved in the whole situation, along with a host of links to other relevant material.

I don’t have too much to add except two quick observations. First, I was reminded, especially by Gurstein’s post, of Greg Ulmer’s characterization of the Internet as a “prosthesis of the unconscious.” Ulmer means something like this: The Internet has become a repository of the countless ways that culture has imprinted itself upon us and shaped our identity. Prior to the advent of the Internet, most of those memories and experiences would be lost to us even while they may have continued to be a part of who we became. The Internet, however, allows us to access many, if not all, of these cultural traces, bringing them to our conscious awareness and allowing us to think about them.

What Facebook’s experiment suggests rather strikingly is that such a prosthesis is, as we should have known, a two-way interface. It not only facilitates our extension into the world; it is also a means by which the world can take hold of us. As a prosthesis of our unconscious, the Internet is not only an extension of our unconscious; it also permits the manipulation of the unconscious by external forces.

Secondly, I was reminded of Paul Virilio’s idea of the general or integral accident. Virilio has written extensively about technology, speed, and accidents. The accident is an expansive concept in his work. In his view, accidents are built into the nature of any new technology. As he has frequently put it, “When you invent the ship, you also invent the shipwreck; when you invent the plane you also invent the plane crash … Every technology carries its own negativity, which is invented at the same time as technical progress.”

The general or integral accident is made possible by complex technological systems. The accident made possible by a nuclear reactor or air traffic is obviously of a greater scale than that made possible by the invention of the hammer. Complex technological systems create the possibility of cascading accidents of immense scale and destructiveness. Information technologies also introduce the possibility of integral accidents. Virilio’s most common examples of these information accidents include the flash crashes on stock exchanges induced by electronic trading.

All of this is to say that Facebook’s experiment gives us a glimpse of what shape the social media accident might take. An interviewer alludes to this line from Virilio: “the synchronizing of collective emotions that leads to the administration of fear.” I’ve not been able to track down the original context, but it struck me as suggestive in light of this experiment.

Oh, and lastly, Facebook COO Sheryl Sandberg issued an apology of sorts. I’ll let Nick Carr tell you about it.

Love’s Labor Outsourced

On Valentine’s Day, The Atlantic’s tech site ran a characteristically thoughtful piece by Evan Selinger examining a new app called Romantimatic. From the app’s website:

Even with the amazing technology we have in our pockets, we can fly through the day without remembering to send a simple “I love you” to the most important person in our lives.

Romantimatic can help.

It can help by automatically reminding you to contact the one you love and providing some helpful pre-set messages to save you the trouble of actually coming up with something to say.
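For the curious, the mechanics here are trivial. What follows is a minimal sketch in Python of the two moves such an app performs, scheduling the nudge and supplying a canned message; the message texts, the reminder time, and the function names are all invented for illustration and are not drawn from Romantimatic itself.

```python
import random
from datetime import datetime, time
from typing import Optional

# Hypothetical canned sentiments; illustrative stand-ins, not Romantimatic's own templates.
PRESET_MESSAGES = [
    "I love you.",
    "Thinking of you.",
    "You make my day better.",
]

REMINDER_TIME = time(hour=12, minute=30)  # an arbitrary midday nudge


def due_for_reminder(now: datetime, last_sent: Optional[datetime]) -> bool:
    """Return True once the daily reminder time has passed and nothing was sent today."""
    already_sent_today = last_sent is not None and last_sent.date() == now.date()
    return now.time() >= REMINDER_TIME and not already_sent_today


def pick_message() -> str:
    """Choose a canned sentiment; this is the step that outsources the composing."""
    return random.choice(PRESET_MESSAGES)


if __name__ == "__main__":
    now = datetime.now()
    if due_for_reminder(now, last_sent=None):
        print(f"Reminder: send this to the one you love -> {pick_message()}")
```

Trivial as it is, the sketch makes plain where the automation sits: not in the delivery, but in the remembering and the composing.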

Selinger has his reservations about this sort of “outsourced sentiment,” and he irenically considers the case Romantimatic’s creator makes for his app while exploring the difference between the legitimate use of “social training wheels” and the outsourcing of moral and emotional responsibility. I encourage you to read the whole thing.

“What’s really weird,” Selinger concludes, “is that Romantimatic style romance may be a small sign of more ambitious digital outsourcing to come.”

That is exactly right. Increasingly, we are able to outsource what we might think of as ethical and emotional labor to our devices and apps. But should we? I’m sure there are many for whom the answer is a resounding Yes. Why not? To be human is to make use of technological enhancements. Much of our emotional life is already technologically mediated anyway. And so on.

Others, however, might instinctively sense that the answer, at least sometimes, is No. But why exactly? Formulating a cogent and compelling response to that question might take a little work. Here, at least, is a start.

The problem, I think, involves a conflation of intellectual labor with ethical/emotional labor. For better and for worse, we’ve gotten used to the idea of outsourcing intellectual labor to our devices. Take memory, for instance. We’ve long since ceased memorizing phone numbers. Why bother when our phones can store those numbers for us? On a rather narrow and instrumental view of intellectual labor, I can see why few would take issue with it. As long as we find the solution or solve the problem, it seems not to matter how the labor is allocated between minds and machines. To borrow an old distinction, the labor itself seems accidental rather than essential to the goods sought by intellectual labor.

When it comes to our emotional and ethical lives, however, that seems not to be the case. When we think of ethical and emotional labor, it’s harder to separate the labor itself from the good that is sought or the end that is pursued.

For example, someone who pays another person to perform acts of charity on their behalf has undermined part of what might make such acts virtuous. An objective outcome may have been achieved, but at the expense of the subjective experience that would constitute the action as ethically virtuous. In fact, subjective experience, generally speaking, is what we seem to be increasingly tempted to outsource. When it comes to our ethical and emotional lives, however, the labor is essential rather than accidental; it cannot be outsourced without undermining the whole project. The value is in the labor, and so is our humanity.

____________________________________________

Further Reading

Selinger has been covering this field for a while; here is a related essay.

I touched on some of these issues here.

The Conscience of a Machine

Recently, Gary Marcus predicted that within the next two to three decades we would enter an era “in which it will no longer be optional for machines to have ethical systems.” Marcus invites us to imagine the following driverless car scenario: “Your car is speeding along a bridge at fifty miles per hour when an errant school bus carrying forty innocent children crosses its path. Should your car swerve, possibly risking the life of its owner (you), in order to save the children, or keep going, putting all forty kids at risk?”

In this scenario, a variation of the trolley car problem, the computer operating the car would need to make a decision (although I suspect putting it that way is an anthropomorphism). Were a human being called upon to make such a decision, it would be considered a choice of moral consequence. Consequently, writing about Marcus’ piece, Nicholas Carr concluded, “We don’t even really know what a conscience is, but somebody’s going to have to program one nonetheless.”

Of course, there is a sense in which autonomous machines of this sort are not really ethical agents. To speak of their needing a conscience strikes me as a metaphorical use of language. The operation of their “conscience” or “ethical system” will not really resemble what has counted as moral reasoning or even moral intuition among human beings. They will do as they are programmed to do. The question is, What will they be programmed to do in such circumstances? What ethical system will animate the programming decisions? Will driverless cars be Kantians, obeying one rule invariably; or will they be Benthamites, calculating the greatest good for the greatest number?
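To see how starkly the choice of ethical system shapes behavior, consider a toy sketch in Python in which the same bridge scenario is handed to a rule-bound policy and to a casualty-minimizing one. The option names, the “rule,” and the casualty numbers are all invented for illustration; nothing here reflects any actual vehicle’s software.

```python
from dataclasses import dataclass
from typing import List


@dataclass
class Option:
    """One course of action the car could take, with a projected outcome (numbers invented)."""
    name: str
    violates_rule: bool        # e.g., deliberately endangering the occupant
    expected_casualties: int   # projected harm if this option is taken


def kantian_choice(options: List[Option]) -> Option:
    """Rule-bound policy: never choose an option that breaks the one fixed rule,
    whatever the projected totals turn out to be."""
    permissible = [o for o in options if not o.violates_rule]
    return permissible[0] if permissible else options[0]


def benthamite_choice(options: List[Option]) -> Option:
    """Utilitarian policy: choose whatever minimizes total projected casualties."""
    return min(options, key=lambda o: o.expected_casualties)


if __name__ == "__main__":
    bridge_scenario = [
        Option("keep going", violates_rule=False, expected_casualties=40),
        Option("swerve", violates_rule=True, expected_casualties=1),
    ]
    print("Rule-bound choice:    ", kantian_choice(bridge_scenario).name)
    print("Casualty-minimizing:  ", benthamite_choice(bridge_scenario).name)
```

Run on the same two options, the rule-bound policy keeps going while the casualty-minimizing policy swerves; the divergence is entirely an artifact of which calculus the programmers chose.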

There is an interesting sense, though, in which an autonomous machine of the sort envisioned in these scenarios is an agent, even if we might hesitate to call it an ethical agent. What’s interesting is not that a machine may cause harm or even death. We’ve been accustomed to this for generations. But in such cases, a machine has ordinarily malfunctioned, or else some human action was at fault. In the scenarios proposed by Marcus, an action that causes harm would be the result of a properly functioning machine, and it would not have been the result of direct human action. The machine itself would have decided to take an action that resulted in harm, even if it was in some sense the lesser harm. In fact, such machines might rightly be called the first truly malfunctioning machines.

It is all but certain that our world will one day be widely populated by autonomous machines of the sort that will require a “conscience” or “ethical systems.” Determining what moral calculus should inform such “moral machines” is problematic enough. But there is another, more subtle danger that should concern us.

Such machines seem to enter into the world of morally consequential action that until now has been occupied exclusively by human beings, but they do so without a capacity to be burdened by the weight of the tragic, to be troubled by guilt, or to be held to account in any sort of meaningful and satisfying way. They will, in other words, lose no sleep over their decisions, whatever those may be.

We have an unfortunate tendency to adapt, under the spell of metaphor, our understanding of human experience to the characteristics of our machines. Take memory, for example. Having first decided, by analogy, to call a computer’s capacity to store information “memory,” we then reversed the direction of the metaphor and came to understand human memory by analogy to computer “memory,” i.e., as mere storage. So now we casually talk of offloading the work of memory or of Google being a better substitute for human memory without any thought for how human memory is related to perception, understanding, creativity, identity, and more.

I can too easily imagine a similar scenario wherein we get into the habit of calling the algorithms by which machines are programmed to make ethically significant decisions the machine’s “conscience,” and then turn around, reverse the direction of the metaphor, and come to understand human conscience by analogy to what the machine does. This would result in an impoverishment of the moral life.

Will we then begin to think of the tragic sense, guilt, pity, and the necessity of wrestling with moral decisions as bugs in our “ethical systems”? Will we envy the literally ruth-less efficiency of “moral machines”? Will we prefer the Huxleyan comfort of a diminished moral sense, or will we claim the right to be unhappy, to be troubled by a fully realized human conscience?

This is, of course, not merely a matter of making the “right” decisions. Part of what makes programming “ethical systems” troublesome is precisely our inability to arrive at a consensus about what is the right decision in such cases. But even if we could arrive at some sort of consensus, the risk I’m envisioning would remain. The moral weightiness of human existence does not reside solely in the moment of decision; it extends beyond the moment to a life burdened by the consequences of that decision. It is precisely this “living with” our decisions that a machine conscience cannot know.

In his Tragic Sense of Life, Miguel de Unamuno relates the following anecdote: “A pedant who beheld Solon weeping for the death of a son said to him, ‘Why do you weep thus, if weeping avails nothing?’ And the sage answered him, ‘Precisely for that reason–because it does not avail.'”

Were we to conform our conscience to the “conscience” of our future machines, we would cease to shed such tears, and our humanity lies in Solon’s tears.

_______________________________________________

Also consider Evan Selinger’s excellent and relevant piece, “Would Outsourcing Morality to Technology Diminish Our Humanity?”