Machines for the Evasion of Moral Responsibility

The title of a recent article by Virginia Heffernan in Wired asked, “Who Will Take Responsibility for Facebook?”

The answer, of course, is that no one will. Our technological systems, by nature of their design and the ideology that sustains them, are machines for the evasion of moral responsibility.

Heffernan focused on Facebook’s role in spreading misinformation during the last election, which has recently come to fuller and more damning light. Not long afterwards, in a post titled “Google and Facebook Failed Us,” Alexis Madrigal explored how misinformation about the Las Vegas shooting spread on both Google and Facebook. Castigating both companies for their failure to take responsibility for the results their algorithms generated, Madrigal concluded: “There’s no hiding behind algorithms anymore. The problems cannot be minimized. The machines have shown they are not up to the task of dealing with rare, breaking news events, and it is unlikely that they will be in the near future. More humans must be added to the decision-making process, and the sooner the better.”

Writing on the same topic, William Turton noted of Google that “the company’s statement cast responsibility on an algorithm as if it were an autonomous force.” “It’s not about the algorithm,” he added. “It’s not about what the algorithm was supposed to do, except that it went off and did a bad thing instead. Google’s business lives and dies by these things we call algorithms; getting this stuff right is its one job.”

Siva Vaidhyanathan, a scholar at UVA whose book on Facebook, Anti-Social Media, is to be released next year, described his impression of Zuckerberg to Heffernan in this way: “He lacks an appreciation for nuance, complexity, contingency, or even difficulty. He lacks a historical sense of the horrible things that humans are capable of doing to each other and the planet.”

This leads Heffernan to conclude: “Zuckerberg may just lack the moral framework to recognize the scope of his failures and his culpability […] It’s hard to imagine he will submit to truth and reconciliation, or use Facebook’s humiliation as a chance to reconsider its place in the world. Instead, he will likely keep lawyering up and gun it on denial and optics, as he has during past litigation and conflict.”

This is an arresting observation: “Zuckerberg may just lack the moral framework to recognize the scope of his failures and his culpability.” Frankly, I suspect Zuckerberg is not the only one among our technologists who fits this description.

It immediately reminded me of Hannah Arendt’s efforts to understand the unique evils of mid-twentieth-century totalitarianism, specifically the evil of the Holocaust. Thoughtlessness, or, better, an inability to think, Arendt believed, was near the root of this new kind of evil. Arendt insisted that “absence of thought is not stupidity; it can be found in highly intelligent people, and a wicked heart is not its cause; it is probably the other way round, that wickedness may be caused by absence of thought.”

I should immediately make clear that I do not mean to equate Facebook’s and Google’s very serious failures with the Holocaust. This is not at all my point. Rather, it is that, following Arendt’s analysis, we can see more clearly how a certain inability to think (not merely to calculate or solve problems), and consequently to assume moral responsibility for one’s actions, takes hold and yields a troubling and pernicious species of moral failure.

It is one thing to expose and judge individuals whose actions are consciously intended to cause harm and work against the public good. It is another thing altogether to encounter individuals who, while clearly committing such acts, are, in fact, themselves oblivious to the true nature of their actions. They are so enclosed within an ideological shell that they seem unable to see what they are doing, much less assume responsibility for it.

It would seem that whatever else we may say about algorithms as technical entities, they also function as the symbolic base of an ideology that abets thoughtlessness and facilitates the evasion of responsibility. As such, however, they are just a new iteration of the moral myopia that many of our best tech critics have been warning us about for a very long time.


Two years ago, I published a post, “Resisting the Habits of the Algorithmic Mind,” in which I explored this topic of thoughtlessness and moral responsibility. I think it remains a useful way of making sense of the peculiar contours of our contemporary moral topography.

8 thoughts on “Machines for the Evasion of Moral Responsibility”

  1. Perhaps we are witnessing the failure to transmit good value systems to these billionaires when they were children. They now head our biggest technology companies. That transmission of morals and a solid value system should happen during the first 10 years of life on the planet.
    It is difficult for high schools and colleges to inculcate these value systems if children haven’t formed them from discussion and the behavior of grandma, grandpa, mom, dad, and older siblings.
    In that case, it is not the absence of rational thought that is at fault in the failure to take responsibility for our poorly performing technology. Rather, it is incomplete moral education and “values parenting” during the formative years.

  2. “Zuckerberg may just lack the moral framework to recognize the scope of his failures and his culpability.” Is this not true of most of us? And is it not us, as users, who should take at least some of the moral responsibility for the rubbish we read and share?

    If Facebook, Google et al. are deliberately emphasising and highlighting false or inaccurate information, then yes, hold them to account. If they are simply reflecting ourselves back to ourselves, surely it is we who are at least as, if not more, guilty.

    Make these systems transparent, accountable and fair to all (e.g. prevent “advertisers” dominating the discourse with their buying power). Do not make them censors.

    The real potential of social media is to enable for the first time in history a global discourse between ordinary people. Enabling that should be our priority.

  3. Sure, the designers of Facebook might lack familiarity with the latest ethical theories. But I hold graver doubts about the technical knowledge of sociologists who write about algorithms from the outside.
    I question the usefulness of slow, manual censorship by primates. Facebook already has difficulty moderating for obscene content, and controversial decisions there (on censoring images of women breastfeeding, for instance) were made by humans.
    Facebook already hires consultant philosophers. I trust their existing collaborations more than the thought bubbles of Arts people clamouring around their new buzzword, ‘algorithm’.

    PS. This comment was polemical because it gets the point across faster. I’m not quite so angry. For more, see this Slate article: http://www.slate.com/articles/technology/future_tense/2017/01/a_chief_ethics_officer_won_t_fix_facebook_s_problems.html

    1. “In other words: You can’t simply shout ‘more ethics!’ within corporate structures that prioritize economic gains and silence ethical voices, and expect change to happen. If ethics is to stand a chance, we need clear and increasingly potent means of holding tech companies accountable for their actions.”

      With that conclusion from the Slate article, I have absolutely no objection. In fact, I’m basically in agreement with the gist of the whole thing. Indeed, it seems to line up fairly well with what I’m trying to get at. It’s not about a moral theory; it’s about the environment that makes ethical reflection and action possible (or undermines the possibility).

      For what it’s worth, I also agree that academics in the social sciences/humanities don’t always do their homework on these matters. I’m not a computer scientist. I do my best to lean on academics that I know have done their homework.

      Didn’t think the tone was too polemical, but thanks for clarifying. I would only question the “primates” comment, and that only because of the general misanthropy that often flows out of the tech world.

    2. Wait…

      Are you saying: “If you don’t know about algorithms, your ideas/opinions regarding algorithms are worthless”?

      I think that is what I read here, and it is what I often see from insiders of any field (bankers, programmers, politicians, economists, you name it…) in response to “outside” critics.
      Yet another way to deflect responsibility…

      So that line of reasoning can also be applied to computer engineers who want to “algorithmically solve all the problems of the world”: their ideas/opinions about the world are worthless, right?

      PS. I’m a computer programmer.
