Saturday Evening Links

Below are a few links for your reading pleasure this weekend.

Researcher Believes 3D Printing May Lead to the Creation of Superhuman Organs Providing Humans with New Abilities: “This God-like ability will be made possible thanks in part to the latest breakthroughs in bioprinting. If companies and researchers are coming close to having the ability to 3D print and implant entire organs, then why wouldn’t it be possible to create our own unique organs, which provide us with superhuman abilities?”

Future perfect: how the Victorians invented the future: “It was only around the beginning of the 1800s, as new attitudes towards progress, shaped by the relationship between technology and society, started coming together, that people started thinking about the future as a different place, or an undiscovered country – an idea that seems so familiar to us now that we often forget how peculiar it actually is.”

Robotic Rape and Robotic Child Sexual Abuse: Should they be criminalised? Paper by John Danaher: “Soon there will be sex robots. The creation of such devices raises a host of social, legal and ethical questions. In this article, I focus in on one of them. What if these sex robots are deliberately designed and used to replicate acts of rape and child sexual abuse? Should the creation and use of such robots be criminalised, even if no person is harmed by the acts performed? I offer an argument for thinking that they should be.” (Link to article provided.)

Enthusiasts and Skeptics Debate Artificial Intelligence: “… the Singularitarians’ belief that we’re biological machines on the verge of evolving into not entirely biological super-machines has a distinctly religious fervor and certainty. ‘I think we are going to start to interconnect as a human species in a fashion that is intimate and magical,’ Diamandis told me. ‘What I would imagine in the future is a meta-intelligence where we are all connected by the Internet [and] achieve a new level of sentience. . . . Your readers need to understand: It’s not stoppable. It doesn’t matter what they want. It doesn’t matter how they feel.'”

Artificial Intelligence Isn’t a Threat—Yet: “The trouble is, nobody yet knows what that oversight should consist of. Though AI poses no immediate existential threat, nobody in the private sector or government has a long-term solution to its potential dangers. Until we have some mechanism for guaranteeing that machines never try to replace us, or relegate us to zoos, we should take the problem of AI risk seriously.”

Is it okay to torture or murder a robot?: “What’s clear is that there is a spectrum of “aliveness” in robots, from basic simulations of cute animal behaviour, to future robots that acquire a sense of suffering. But as Darling’s Pleo dinosaur experiment suggested, it doesn’t take much to trigger an emotional response in us. The question is whether we can – or should – define the line beyond which cruelty to these machines is unacceptable. Where does the line lie for you? If a robot cries out in pain, or begs for mercy? If it believes it is hurting? If it bleeds?”

A couple of housekeeping notes. Reading Frankenstein posts will resume at the start of next week. Also, you may have noticed that an Index for the blog is in progress. I’ve always wanted to find a way to make older posts more accessible, so I’ve settled on a selective index for People and Topics. You can check it out by clicking the “Index” tab above.

Cheers!

Friday Links: Questioning Technology Edition

My previous post, which raised 41 questions about the ethics of technology, is turning out to be one of the most viewed on this site. That is, admittedly, faint praise, but I’m glad it is, because helping us think about technology is why I write this blog. The post has also prompted a few valuable recommendations from readers, and I wanted to pass these along to you in case you missed them in the comments.

Matt Thomas reminded me of two earlier lists of questions we should be asking about our technologies. The first of these is Jacques Ellul’s list of 76 Reasonable Questions to Ask of Any Technology (update: see Doug Hill’s comment below about the authorship of this list.) The second is Neil Postman’s more concise list of Six Questions to Ask of New Technologies. Both are worth perusing.

Also, Chad Kohalyk passed along a link to Shannon Vallor’s module, An Introduction to Software Engineering Ethics.

Greg Lloyd provided some helpful links to the (frequently misunderstood) Amish approach to technology, including one to this IEEE article by Jameson Wetmore: “Amish Technology: Reinforcing Values and Building Communities” (PDF). In it, we read, “When deciding whether or not to allow a certain practice or technology, the Amish first ask whether it is compatible with their values.” What a radical idea; the rest of us should try it sometime! While we’re on the topic, I wrote about the Tech-Savvy Amish a couple of years ago.

I can’t remember who linked to it, but I also came across an excellent 1994 article in Ars Electronica that is composed entirely of questions about what we would today call a Smart Home, for example: “How smart does your bed have to be, before you are afraid to go to sleep at night?”

And while we’re talking about lists, here’s a post on Kranzberg’s Six Laws of Technology and a list of 11 things I try to do, often with only marginal success, to achieve a healthy relationship with the Internet.

Enjoy these, and thanks again to those of you who provided the links.

Do Artifacts Have Ethics?

Writing about “technology and the moral dimension,” tech writer and Gigaom founder Om Malik made the following observation:

“I can safely say that we in tech don’t understand the emotional aspect of our work, just as we don’t understand the moral imperative of what we do. It is not that all players are bad; it is just not part of the thinking process the way, say, ‘minimum viable product’ or ‘growth hacking’ are.”

I’m not sure how many people in the tech industry would concur with Malik’s claim, but it is a remarkably telling admission from at least one well-placed individual. Happily, Malik realizes that “it is time to add an emotional and moral dimension to products.” But what exactly does it mean to add an emotional and moral dimension to products?

Malik’s own ensuing discussion is brief and deals chiefly with using data ethically and producing clear, straightforward terms of service. This suggests that Malik is mostly encouraging tech companies to treat their customers in an ethically responsible manner. If so, it’s rather disconcerting that Malik takes this to be a discovery that he feels compelled to announce, prophetically, to his colleagues. Leaving that unfortunate indictment of the tech community aside, I want to suggest that there is no need to add a moral dimension to technology.

Years ago, Langdon Winner famously asked, “Do artifacts have politics?” In the article that bears that title, Winner went on to argue that they most certainly do. We might also ask, “Do artifacts have ethics?” I would argue that they do indeed. The question is not whether technology has a moral dimension; the question is whether we recognize it or not. In fact, technology’s moral dimension is inescapable, layered, and multi-faceted.

When we do think about technology’s moral implications, we tend to think about what we do with a given technology. We might call this the “guns don’t kill people, people kill people” approach to the ethics of technology. What matters most about a technology on this view is the use to which it is put. This is, of course, a valid consideration. A hammer may indeed be used either to build a house or to bash someone’s head in. On this view, technology is morally neutral and the only morally relevant question is this: What will I do with this tool?

But is this really the only morally relevant question one could ask? For instance, pursuing the example of the hammer, might I not also ask how having the hammer in hand encourages me to perceive the world around me? Or what feelings having the hammer in hand arouses?

Below are a few other questions that we might ask in order to get at the wide-ranging “moral dimension” of our technologies. There are, of course, many others that we could ask, but this is a start.

  1. What sort of person will the use of this technology make of me?
  2. What habits will the use of this technology instill?
  3. How will the use of this technology affect my experience of time?
  4. How will the use of this technology affect my experience of place?
  5. How will the use of this technology affect how I relate to other people?
  6. How will the use of this technology affect how I relate to the world around me?
  7. What practices will the use of this technology cultivate?
  8. What practices will the use of this technology displace?
  9. What will the use of this technology encourage me to notice?
  10. What will the use of this technology encourage me to ignore?
  11. What was required of other human beings so that I might be able to use this technology?
  12. What was required of other creatures so that I might be able to use this technology?
  13. What was required of the earth so that I might be able to use this technology?
  14. Does the use of this technology bring me joy?
  15. Does the use of this technology arouse anxiety?
  16. How does this technology empower me? At whose expense?
  17. What feelings does the use of this technology generate in me toward others?
  18. Can I imagine living without this technology? Why, or why not?
  19. How does this technology encourage me to allocate my time?
  20. Could the resources used to acquire and use this technology be better deployed?
  21. Does this technology automate or outsource labor or responsibilities that are morally essential?
  22. What desires does the use of this technology generate?
  23. What desires does the use of this technology dissipate?
  24. What possibilities for action does this technology present? Is it good that these actions are now possible?
  25. What possibilities for action does this technology foreclose? Is it good that these actions are no longer possible?
  26. How does the use of this technology shape my vision of a good life?
  27. What limits does the use of this technology impose upon me?
  28. What limits does my use of this technology impose upon others?
  29. What does my use of this technology require of others who would (or must) interact with me?
  30. What assumptions about the world does the use of this technology tacitly encourage?
  31. What knowledge has the use of this technology disclosed to me about myself?
  32. What knowledge has the use of this technology disclosed to me about others? Is it good to have this knowledge?
  33. What are the potential harms to myself, others, or the world that might result from my use of this technology?
  34. Upon what systems, technical or human, does my use of this technology depend? Are these systems just?
  35. Does my use of this technology encourage me to view others as a means to an end?
  36. Does using this technology require me to think more or less?
  37. What would the world be like if everyone used this technology exactly as I use it?
  38. What risks will my use of this technology entail for others? Have they consented?
  39. Can the consequences of my use of this technology be undone? Can I live with those consequences?
  40. Does my use of this technology make it easier to live as if I had no responsibilities toward my neighbor?
  41. Can I be held responsible for the actions which this technology empowers? Would I feel better if I couldn’t?

Silencing the Heretics: How the Faithful Respond to Criticism of Technology

I started to write a post about a few unhinged reactions to an essay published by Nicholas Carr in this weekend’s WSJ, “Automation Makes Us Dumb.”  Then I realized that I already wrote that post back in 2010. I’m republishing “A God that Limps” below, with slight revisions, and adding a discussion of the reactions to Carr. 

Our technologies are like our children: we react with reflexive and sometimes intense defensiveness if either is criticized. Several years ago, while teaching at a small private high school, I forwarded an article to my colleagues that raised some questions about the efficacy of computers in education. This was a mistake. The article appeared in a respectable journal, was judicious in its tone, and cautious in its conclusions. I didn’t think then, nor do I now, that it was at all controversial. In fact, I imagined that given the setting it would be of at least passing interest. However, within a handful of minutes (minutes!)—hardly enough time to skim, much less read, the article—I was receiving rather pointed, even angry replies.

I was mystified, and not a little amused, by the responses. Mostly though, I began to think about why this measured and cautious article evoked such a passionate response. Around the same time I stumbled upon Wendell Berry’s essay titled, somewhat provocatively, “Why I am Not Going to Buy a Computer.” More arresting than the essay itself, however, were the letters that came in to Harper’s. These letters, which now typically appear alongside the essay whenever it is anthologized, were caustic and condescending. In response, Berry wrote,

The foregoing letters surprised me with the intensity of the feelings they expressed. According to the writers’ testimony, there is nothing wrong with their computers; they are utterly satisfied with them and all that they stand for. My correspondents are certain that I am wrong and that I am, moreover, on the losing side, a side already relegated to the dustbin of history. And yet they grow huffy and condescending over my tiny dissent. What are they so anxious about?

Precisely my question. Whence the hostility, defensiveness, agitation, and indignant, self-righteous anxiety?

I’m typing these words on a laptop, and they will appear on a blog that exists on the Internet.  Clearly I am not, strictly speaking, a Luddite. (Although, in light of Thomas Pynchon’s analysis of the Luddite as Badass, there may be a certain appeal.) Yet, I do believe an uncritical embrace of technology may prove fateful, if not Faustian.

The stakes are high. We can hardly exaggerate the revolutionary character of certain technologies throughout history:  the wheel, writing, the gun, the printing press, the steam engine, the automobile, the radio, the television, the Internet. And that is a very partial list. Katherine Hayles has gone so far as to suggest that, as a species, we have “codeveloped with technologies; indeed, it is no exaggeration,” she writes in Electronic Literature, “to say modern humans literally would not have come into existence without technology.”

We are, perhaps because of the pace of technological innovation, quite conscious of the place and power of technology in our society and in our own lives. We joke about our technological addictions, but it is sometimes a rather nervous punchline. It makes sense to ask questions. Technology, it has been said, is a god that limps. It dazzles and performs wonders, but it can frustrate and wreak havoc. Good sense seems to suggest that we avoid, as Thoreau put it, becoming tools of our tools. This doesn’t entail burning the machine; it may only require a little moderation. At a minimum, it means creating, as far as we are able, a critical distance from our toys and tools, and that requires searching criticism.

And we are back where we began. We appear to be allergic to just that kind of searching criticism. So here is my question again:  Why do we react so defensively when we hear someone criticize our technologies?

And so ended my earlier post. Now consider a handful of responses to Carr’s article, “Automation Makes Us Dumb.” Better yet, read the article, if you haven’t already, and then come back for the responses.

Let’s start with a couple of tweets by Joshua Gans, a professor of management at the University of Toronto.

Then there was this from the entrepreneur Marc Andreessen:

Even better are some of the replies attached to Andreessen’s tweet. I’ll transcribe a few of those here for your amusement.

“Why does he want to be stuck doing repetitive mind-numbing tasks?”

“‘These automatic jobs are horrible!’ ‘Stop killing these horrible jobs with automation!'” [Sarcasm implied.]

“by his reasoning the steam engine makes us weaklings, yet we’ve seen the opposite. so maybe the best intel is ahead”

“Let’s forget him, he’s done so much damage to our industry, he is just interested in profiting from his provocations”

“Nick clearly hasn’t understood the true essence of being ‘human’. Tech is an ‘enabler’ and aids to assist in that process.”

“This op-ed is just a Luddite screed dressed in drag. It follows the dystopian view of ‘Wall-E’.”

There you have it. I’ll let you tally up the logical fallacies.

Honestly, I’m stunned by the degree of apparently willful ignorance exhibited by these comments. The best I can say for them is that they are based on a glance at the title of Carr’s article and nothing more. It would be much more worrisome if these individuals had actually read the article and still managed to make these comments that betray no awareness of what Carr actually wrote.

More than once, Carr makes clear that he is not opposed to automation in principle. The last several paragraphs of the article describe how we might go forward with automation in a way that avoids some serious pitfalls. In other words, Carr is saying, “Automate, but do it wisely.” What a Luddite!

When I wrote in 2010, I had not yet formulated the idea of a Borg Complex, but this inability to rationally or calmly abide any criticism of technology is surely pure, undistilled Borg Complex, complete with Luddite slurs!

I’ll continue to insist that we are in desperate need of serious thinking about the powers that we are gaining through our technologies. It seems, however, that there is a class of people who are hell-bent on shutting down any and all criticism of technology. If the criticism is misguided or unsubstantiated, then it should be refuted. Dismissing criticism while giving absolutely no evidence of having understood it, on the other hand, helps no one at all.

I come back to David Noble’s description of the religion of technology often, but only because of how useful it is as a way of understanding techno-scientific culture. When technology is a religion, when we embrace it with blind faith, when we anchor our hope in it, when we love it as ourselves–then any criticism of technology will be understood as either heresy or sacrilege. And that seems to be a pretty good way of characterizing the responses to tech criticism I’ve been discussing: the impassioned reactions of the faithful to sacrilegious heresy.

Data-Driven Regimes of Truth

Below are excerpts from three items that came across my browser this past week. I thought it useful to juxtapose them here.

The first is Andrea Turpin’s review, in The Hedgehog Review, of Science, Democracy, and the American University: From the Civil War to the Cold War, a new book by Andrew Jewett about the role of science as a unifying principle in American politics and public policy.

“Jewett calls the champions of that forgotten understanding ‘scientific democrats.’ They first articulated their ideas in the late nineteenth century out of distress at the apparent impotence of culturally dominant Protestant Christianity to prevent growing divisions in American politics—most violently in the Civil War, then in the nation’s widening class fissure. Scientific democrats anticipated educating the public on the principles and attitudes of scientific practice, looking to succeed in fostering social consensus where a fissiparous Protestantism had failed. They hoped that widely cultivating the habit of seeking empirical truth outside oneself would produce both the information and the broader sympathies needed to structure a fairer society than one dominated by Gilded Age individualism.

“Questions soon arose: What should be the role of scientific experts versus ordinary citizens in building the ideal society? Was it possible for either scientists or citizens to be truly disinterested when developing policies with implications for their own economic and social standing? Jewett skillfully teases out the subtleties of the resulting variety of approaches in order to ‘reveal many of the insights and blind spots that can result from a view of science as a cultural foundation for democratic politics.’”

The second piece, “When Fitbit is the Expert,” appeared in The Atlantic. In it, Kate Crawford discusses how data gathered by wearable devices can be used for and against its users in court.

“Self-tracking using a wearable device can be fascinating. It can drive you to exercise more, make you reflect on how much (or little) you sleep, and help you detect patterns in your mood over time. But something else is happening when you use a wearable device, something that is less immediately apparent: You are no longer the only source of data about yourself. The data you unconsciously produce by going about your day is being stored up over time by one or several entities. And now it could be used against you in court.”

[….]

“Ultimately, the Fitbit case may be just one step in a much bigger shift toward a data-driven regime of ‘truth.’ Prioritizing data—irregular, unreliable data—over human reporting, means putting power in the hands of an algorithm. These systems are imperfect—just as human judgments can be—and it will be increasingly important for people to be able to see behind the curtain rather than accept device data as irrefutable courtroom evidence. In the meantime, users should think of wearables as partial witnesses, ones that carry their own affordances and biases.”

The final excerpt comes from an interview with Mathias Döpfner in the Columbia Journalism Review. Döpfner is the CEO of the largest publishing company in Europe and has been outspoken in his criticisms of American technology firms such as Google and Facebook.

“It’s interesting to see the difference between the US debate on data protection, data security, transparency and how this issue is handled in Europe. In the US, the perception is, ‘What’s the problem? If you have nothing to hide, you have nothing to fear. We can share everything with everybody, and being able to take advantage of data is great.’ In Europe it’s totally different. There is a huge concern about what institutions—commercial institutions and political institutions—can do with your data. The US representatives tend to say, ‘Those are the back-looking Europeans; they have an outdated view. The tech economy is based on data.’”

Döpfner goes out of his way to indicate that he is a regulatory minimalist and that he deeply admires American-style tech-entrepreneurship. But ….

“In Europe there is more sensitivity because of the history. The Europeans know that total transparency and total control of data leads to totalitarian societies. The Nazi system and the socialist system were based on total transparency. The Holocaust happened because the Nazis knew exactly who was a Jew, where a Jew was living, how and at what time they could get him; every Jew got a number as a tattoo on his arm before they were gassed in the concentration camps.”

Perhaps that’s a tad alarmist; I don’t know. The thing about alarmism is that only in hindsight can it be definitively identified.

Here’s the thread that united these pieces in my mind. Jewett’s book, assuming the reliability of Turpin’s review, is about an earlier attempt to find a new frame of reference for American political culture. Deliberative democracy works best when citizens share a moral framework from which their arguments and counter-arguments derive their meaning. Absent such a broadly shared moral framework, competing claims can never really be meaningfully argued for or against; they can only be asserted or denounced. What Jewett describes, it seems, is just the particular American case of a pattern that is characteristic of secular modernity writ large. The eclipse of traditional religious belief leads to a search for new sources of unity and moral authority.

For a variety of reasons, the project to ground American political culture in publicly accessible science did not succeed. (It appears, by the way, that Jewett’s book is an attempt to revive the effort.) It failed, in part, because it became apparent that science itself was not exactly value-free, at least not as it was practiced by actual human beings. Additionally, it seems to me, the success of the project assumed that all political problems, that is, all problems that arise when human beings try to live together, were subject to scientific analysis and resolution. This strikes me as an unwarranted assumption.

In any case, it would seem that proponents of a certain strand of Big Data ideology now want to offer Big Data as the framework that unifies society and resolves political and ethical issues related to public policy. This is part of what I read into Crawford’s suggestion that we are moving into “a data-driven regime of ‘truth.’” “Science says” replaced “God says”; and now “Science says” is being replaced by “Big Data says.”

To put it another way, Big Data offers to fill the cultural role that was vacated by religious belief. It was a role that, in their turn, Reason, Art, and Science have all tried to fill. In short, certain advocates of Big Data need to read Nietzsche’s Twilight of the Idols. Big Data may just be another God-term, an idol that needs to be sounded with a hammer and found hollow.

Finally, Döpfner’s comments are just a reminder of the darker uses to which data can be and has been put, particularly when thoughtfulness and judgment have been marginalized.