Quantify Thyself

A thought in passing this morning. Here’s a screen shot that purports to be from an ad for Microsoft’s new wearable device called Band:

[Image: “Microsoft Band: Read the backstory on the evolution and development of Microsoft’s new smart device” (Windows Central)]

I say “purports” because I’ve not been able to find this particular shot and caption on any official Microsoft sites. I first encountered it in this story about Band from October of last year, and I also found it posted to a Reddit thread around the same time. You can watch the official ad here.

It may be that this image is a hoax or that Microsoft decided it was a bit too disconcerting and pulled it. A more persistent sleuth should be able to determine which. Whether authentic or not, however, it is instructive.

In tweeting a link to the story in which I first saw the image, I commented: “Define ‘know,’ ‘self,’ and ‘human.'” Nick Seaver astutely replied: “that’s exactly what they’re doing, eh?”

Again, the “they” in this case appears to be a bit ambiguous. That said, the picture is instructive because it reminds us, as Seaver’s reply suggests, that more than our physical fitness is at stake in the emerging regime of quantification. If I were to expand my list of 41 questions about technology’s ethical dimensions, I would include this one: How will the use of this technology redefine my moral vocabulary? Or, put another way: what about myself will the use of this technology encourage me to value?

Consider all that is accepted when someone buys into the idea, even if only tacitly, that Microsoft Band will in fact deepen their knowledge of themselves. What assumptions are being made about what it means to know, what there is to know, and what can be known? What is implied about the nature of the self when we accept that a device like Band can help us understand it more effectively? We are, needless to say, rather far removed from the Delphic injunction, “Know thyself.”

It is not, of course, that I necessarily think users of Band will be so naive that they will consciously believe there is nothing more to their identity than what Band can measure. Rather, it’s that most of us do have a propensity to pay more attention to what we can measure, particularly when an element of competitiveness is introduced.

I’ll go a step further. Not only do we tend to pay more attention to what we can measure, we begin to care more about what we can measure. Perhaps that is because measurement affords us a degree of ostensible control over whatever it is that we are able to measure. It makes self-improvement tangible and manageable, but it does so, in part, by a reduction of the self to those dimensions that register on whatever tool or device we happen to be using to take our measure.

I find myself frequently coming back to one line in a poem by Wendell Berry: “We live the given life, not the planned.” Indeed, and we might also say, “We live the given life, not the quantified.”

A certain vigilance is required to remember that our often marvelous tools of measurement always achieve their precision by narrowing, sometimes radically, what they take into consideration. To reveal one dimension of the whole, they must obscure the others. The danger lies in mistaking the partial representation for the whole.

Friday Links: Questioning Technology Edition

My previous post, which raised 41 questions about the ethics of technology, is turning out to be one of the most viewed on this site. That is, admittedly, faint praise, but I’m glad that it is, because helping us to think about technology is why I write this blog. The post has also prompted a few valuable recommendations from readers, and I wanted to pass these along to you in case you missed them in the comments.

Matt Thomas reminded me of two earlier lists of questions we should be asking about our technologies. The first of these is Jacques Ellul’s list of 76 Reasonable Questions to Ask of Any Technology (update: see Doug Hill’s comment below about the authorship of this list.) The second is Neil Postman’s more concise list of Six Questions to Ask of New Technologies. Both are worth perusing.

Also, Chad Kohalyk passed along a link to Shannon Vallor’s module, An Introduction to Software Engineering Ethics.

Greg Lloyd provided some helpful links to the (frequently misunderstood) Amish approach to technology, including one to this IEEE article by Jameson Wetmore: “Amish Technology: Reinforcing Values and Building Communities” (PDF). In it, we read, “When deciding whether or not to allow a certain practice or technology, the Amish first ask whether it is compatible with their values.” What a radical idea; the rest of us should try it sometime! While we’re on the topic, I wrote about the Tech-Savvy Amish a couple of years ago.

I can’t remember who linked to it, but I also came across an excellent 1994 article in Ars Electronica that is composed entirely of questions about what we would today call a Smart Home: “How smart does your bed have to be, before you are afraid to go to sleep at night?”

And while we’re talking about lists, here’s a post on Kranzberg’s Six Laws of Technology and a list of 11 things I try to do, often with only marginal success, to achieve a healthy relationship with the Internet.

Enjoy these, and thanks again to those of you who provided the links.

Do Artifacts Have Ethics?

Writing about “technology and the moral dimension,” tech writer and Gigaom founder Om Malik made the following observation:

“I can safely say that we in tech don’t understand the emotional aspect of our work, just as we don’t understand the moral imperative of what we do. It is not that all players are bad; it is just not part of the thinking process the way, say, ‘minimum viable product’ or ‘growth hacking’ are.”

I’m not sure how many people in the tech industry would concur with Malik’s claim, but it is a remarkably telling admission from at least one well-placed individual. Happily, Malik realizes that “it is time to add an emotional and moral dimension to products.” But what exactly does it mean to add an emotional and moral dimension to products?

Malik’s own ensuing discussion is brief and deals chiefly with using data ethically and producing clear, straightforward terms of service. This suggests that Malik is mostly encouraging tech companies to treat their customers in an ethically responsible manner. If so, it’s rather disconcerting that Malik takes this to be a discovery that he feels compelled to announce, prophetically, to his colleagues. Leaving that unfortunate indictment of the tech community aside, I want to suggest that there is no need to add a moral dimension to technology.

Years ago, Langdon Winner famously asked, “Do artifacts have politics?” In the article that bears that title, Winner went on to argue that they most certainly do. We might also ask, “Do artifacts have ethics?” I would argue that they do indeed. The question is not whether technology has a moral dimension; the question is whether we recognize it or not. In fact, technology’s moral dimension is inescapable, layered, and multi-faceted.

When we do think about technology’s moral implications, we tend to think about what we do with a given technology. We might call this the “guns don’t kill people, people kill people” approach to the ethics of technology. What matters most about a technology on this view is the use to which it is put. This is, of course, a valid consideration. A hammer may indeed be used either to build a house or to bash someone’s head in. On this view, technology is morally neutral and the only morally relevant question is this: What will I do with this tool?

But is this really the only morally relevant question one could ask? For instance, pursuing the example of the hammer, might I not also ask how having the hammer in hand encourages me to perceive the world around me? Or what feelings having a hammer in hand arouses?

Below are a few other questions that we might ask in order to get at the wide-ranging “moral dimension” of our technologies. There are, of course, many others that we could ask, but this is a start.

  1. What sort of person will the use of this technology make of me?
  2. What habits will the use of this technology instill?
  3. How will the use of this technology affect my experience of time?
  4. How will the use of this technology affect my experience of place?
  5. How will the use of this technology affect how I relate to other people?
  6. How will the use of this technology affect how I relate to the world around me?
  7. What practices will the use of this technology cultivate?
  8. What practices will the use of this technology displace?
  9. What will the use of this technology encourage me to notice?
  10. What will the use of this technology encourage me to ignore?
  11. What was required of other human beings so that I might be able to use this technology?
  12. What was required of other creatures so that I might be able to use this technology?
  13. What was required of the earth so that I might be able to use this technology?
  14. Does the use of this technology bring me joy?
  15. Does the use of this technology arouse anxiety?
  16. How does this technology empower me? At whose expense?
  17. What feelings does the use of this technology generate in me toward others?
  18. Can I imagine living without this technology? Why, or why not?
  19. How does this technology encourage me to allocate my time?
  20. Could the resources used to acquire and use this technology be better deployed?
  21. Does this technology automate or outsource labor or responsibilities that are morally essential?
  22. What desires does the use of this technology generate?
  23. What desires does the use of this technology dissipate?
  24. What possibilities for action does this technology present? Is it good that these actions are now possible?
  25. What possibilities for action does this technology foreclose? Is it good that these actions are no longer possible?
  26. How does the use of this technology shape my vision of a good life?
  27. What limits does the use of this technology impose upon me?
  28. What limits does my use of this technology impose upon others?
  29. What does my use of this technology require of others who would (or must) interact with me?
  30. What assumptions about the world does the use of this technology tacitly encourage?
  31. What knowledge has the use of this technology disclosed to me about myself?
  32. What knowledge has the use of this technology disclosed to me about others? Is it good to have this knowledge?
  33. What are the potential harms to myself, others, or the world that might result from my use of this technology?
  34. Upon what systems, technical or human, does my use of this technology depend? Are these systems just?
  35. Does my use of this technology encourage me to view others as a means to an end?
  36. Does using this technology require me to think more or less?
  37. What would the world be like if everyone used this technology exactly as I use it?
  38. What risks will my use of this technology entail for others? Have they consented?
  39. Can the consequences of my use of this technology be undone? Can I live with those consequences?
  40. Does my use of this technology make it easier to live as if I had no responsibilities toward my neighbor?
  41. Can I be held responsible for the actions which this technology empowers? Would I feel better if I couldn’t?

Data-Driven Regimes of Truth

Below are excerpts from three items that came across my browser this past week. I thought it useful to juxtapose them here.

The first is Andrea Turpin’s review in The Hedgehog Review of Science, Democracy, and the American University: From the Civil War to the Cold War, a new book by Andrew Jewett about the role of science as a unifying principle in American politics and public policy.

“Jewett calls the champions of that forgotten understanding ‘scientific democrats.’ They first articulated their ideas in the late nineteenth century out of distress at the apparent impotence of culturally dominant Protestant Christianity to prevent growing divisions in American politics—most violently in the Civil War, then in the nation’s widening class fissure. Scientific democrats anticipated educating the public on the principles and attitudes of scientific practice, looking to succeed in fostering social consensus where a fissiparous Protestantism had failed. They hoped that widely cultivating the habit of seeking empirical truth outside oneself would produce both the information and the broader sympathies needed to structure a fairer society than one dominated by Gilded Age individualism.

Questions soon arose: What should be the role of scientific experts versus ordinary citizens in building the ideal society? Was it possible for either scientists or citizens to be truly disinterested when developing policies with implications for their own economic and social standing? Jewett skillfully teases out the subtleties of the resulting variety of approaches in order to ‘reveal many of the insights and blind spots that can result from a view of science as a cultural foundation for democratic politics.’”

The second piece, “When Fitbit is the Expert,” appeared in The Atlantic. In it, Kate Crawford discusses how data gathered by wearable devices can be used for and against their users in court.

“Self-tracking using a wearable device can be fascinating. It can drive you to exercise more, make you reflect on how much (or little) you sleep, and help you detect patterns in your mood over time. But something else is happening when you use a wearable device, something that is less immediately apparent: You are no longer the only source of data about yourself. The data you unconsciously produce by going about your day is being stored up over time by one or several entities. And now it could be used against you in court.”

[….]

“Ultimately, the Fitbit case may be just one step in a much bigger shift toward a data-driven regime of ‘truth.’ Prioritizing data—irregular, unreliable data—over human reporting, means putting power in the hands of an algorithm. These systems are imperfect—just as human judgments can be—and it will be increasingly important for people to be able to see behind the curtain rather than accept device data as irrefutable courtroom evidence. In the meantime, users should think of wearables as partial witnesses, ones that carry their own affordances and biases.”

The final excerpt comes from an interview with Mathias Döpfner in the Columbia Journalism Review. Döpfner is the CEO of the largest publishing company in Europe and has been outspoken in his criticisms of American technology firms such as Google and Facebook.

“It’s interesting to see the difference between the US debate on data protection, data security, transparency and how this issue is handled in Europe. In the US, the perception is, ‘What’s the problem? If you have nothing to hide, you have nothing to fear. We can share everything with everybody, and being able to take advantage of data is great.’ In Europe it’s totally different. There is a huge concern about what institutions—commercial institutions and political institutions—can do with your data. The US representatives tend to say, ‘Those are the back-looking Europeans; they have an outdated view. The tech economy is based on data.’”

Döpfner goes out of his way to indicate that he is a regulatory minimalist and that he deeply admires American-style tech-entrepreneurship. But ….

“In Europe there is more sensitivity because of the history. The Europeans know that total transparency and total control of data leads to totalitarian societies. The Nazi system and the socialist system were based on total transparency. The Holocaust happened because the Nazis knew exactly who was a Jew, where a Jew was living, how and at what time they could get him; every Jew got a number as a tattoo on his arm before they were gassed in the concentration camps.”

Perhaps that’s a tad alarmist, I don’t know. The thing about alarmism is that only in hindsight can it be definitively identified.

Here’s the thread that united these pieces in my mind. Jewett’s book, assuming the reliability of Turpin’s review, is about an earlier attempt to find a new frame of reference for American political culture. Deliberative democracy works best when citizens share a moral framework from which their arguments and counter-arguments derive their meaning. Absent such a broadly shared moral framework, competing claims can never really be meaningfully argued for or against; they can only be asserted or denounced. What Jewett describes, it seems, is just the particular American case of a pattern that is characteristic of secular modernity writ large. The eclipse of traditional religious belief leads to a search for new sources of unity and moral authority.

For a variety of reasons, the project to ground American political culture in publicly accessible science did not succeed. (It appears, by the way, that Jewett’s book is an attempt to revive the effort.) It failed, in part, because it became apparent that science itself was not exactly value free, at least not as it was practiced by actual human beings. Additionally, it seems to me, the success of the project assumed that all political problems, that is, all problems that arise when human beings try to live together, were subject to scientific analysis and resolution. This strikes me as an unwarranted assumption.

In any case, it would seem that proponents of a certain strand of Big Data ideology now want to offer Big Data as the framework that unifies society and resolves political and ethical issues related to public policy. This is part of what I read into Crawford’s suggestion that we are moving into “a data-driven regime of ‘truth.’” “Science says” replaced “God says”; and now “Science says” is being replaced by “Big Data says.”

To put it another way, Big Data offers to fill the cultural role that was vacated by religious belief. It was a role that, in their turn, Reason, Art, and Science have all tried to fill. In short, certain advocates of Big Data need to read Nietzsche’s Twilight of the Idols. Big Data may just be another God-term, an idol that needs to be sounded with a hammer and found hollow.

Finally, Döpfner’s comments are just a reminder of the darker uses to which data can be and has been put, particularly when thoughtfulness and judgment have been marginalized.

Jaron Lanier Wants to Secularize AI

In 2010, one of the earliest posts on this blog noted an op-ed in the NY Times by Jaron Lanier titled “The First Church of Robotics.” In it, Lanier lamented the rise of quasi-religious aspirations animating many among the Silicon Valley elite. Describing the tangle of ideas and hopes usually associated with the Singularity and/or Transhumanism, Lanier concluded, “What we are seeing is a new religion, expressed through an engineering culture.” The piece wraps up rather straightforwardly: “We serve people best when we keep our religious ideas out of our work.”

In fact, the new religion Lanier has in view has a considerably older pedigree than he imagines. Historian David Noble traced the roots of what he called the religion of technology back to the start of the last millennium. What Lanier identified was only the latest iteration of that venerable techno-religious tradition.

A couple of days ago, Edge posted a video (and transcript) of an extended discussion by Lanier, which was sparked by recent comments made by Stephen Hawking and Elon Musk about the existential threat to humanity AI may pose in the not-too-distant future. Lanier’s talk ranges impressively over a variety of related issues and registers a number of valuable insights. Consider, for instance, this passing critique of Big Data:

“I want to get to an even deeper problem, which is that there’s no way to tell where the border is between measurement and manipulation in these systems. For instance, if the theory is that you’re getting big data by observing a lot of people who make choices, and then you’re doing correlations to make suggestions to yet more people, if the preponderance of those people have grown up in the system and are responding to whatever choices it gave them, there’s not enough new data coming into it for even the most ideal or intelligent recommendation engine to do anything meaningful.

In other words, the only way for such a system to be legitimate would be for it to have an observatory that could observe in peace, not being sullied by its own recommendations. Otherwise, it simply turns into a system that measures which manipulations work, as opposed to which ones don’t work, which is very different from a virginal and empirically careful system that’s trying to tell what recommendations would work had it not intervened. That’s a pretty clear thing. What’s not clear is where the boundary is.

If you ask: is a recommendation engine like Amazon more manipulative, or more of a legitimate measurement device? There’s no way to know.”

To which he adds a few moments later, “It’s not so much a rise of evil as a rise of nonsense. It’s a mass incompetence, as opposed to Skynet from the Terminator movies. That’s what this type of AI turns into.” Big Data as banal evil, perhaps.

Lanier is certainly not the only one pointing out that Big Data doesn’t magically render pure or objective sociological data. A host of voices have made some variation of this point in their critique of the ideology surrounding Big Data experiments conducted by the likes of Facebook and OkCupid. The point is simple enough: observation/measurement alters the observed/measured phenomena. It’s a paradox that haunts most forms of human knowledge, perhaps especially our knowledge of ourselves, and it seems to me that we are better off abiding the paradox rather than seeking to transcend it.
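Lanier’s point about the blurry border between measurement and manipulation can be made concrete with a toy simulation, a sketch of my own rather than anything Lanier offers: a “recommender” that learns only from clicks it has itself influenced. The item names, preference probabilities, and “exposure boost” below are invented purely for illustration.

```python
# Toy illustration (my sketch, not Lanier's): a recommender that learns only
# from clicks it has itself influenced. All numbers here are invented.
import random

random.seed(0)

ITEMS = ["A", "B", "C"]
TRUE_PREFERENCE = {"A": 0.40, "B": 0.35, "C": 0.25}  # what people actually like
EXPOSURE_BOOST = 0.5  # extra likelihood of clicking whatever gets recommended

clicks = {item: 1 for item in ITEMS}  # the system's only "measurement"

def recommend():
    # Recommend whatever has accumulated the most clicks so far.
    return max(clicks, key=clicks.get)

for _ in range(10_000):
    promoted = recommend()
    for item in ITEMS:
        p = TRUE_PREFERENCE[item]
        if item == promoted:
            p += EXPOSURE_BOOST * (1 - p)  # the recommendation nudges behavior
        if random.random() < p:
            clicks[item] += 1

print("True preferences:", TRUE_PREFERENCE)
print("Measured clicks: ", clicks)
# Whichever item the system happened to promote first runs away with the
# click count, far out of proportion to its true preference. From the data
# alone, there is no way to tell measurement apart from manipulation.
```

The point, again, is Lanier’s: without an “observatory that could observe in peace,” the data such a system collects already bears its own fingerprints.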

Lanier also scores an excellent point when he asks us to imagine two scenarios involving the possibility of 3-D printed killer drones that can be used to target individuals. In the first scenario, they are developed and deployed by terrorists; in the second they are developed and deployed by some sort of rogue AI along the lines that Musk and others have worried about. Lanier’s question is this: what difference does it make whether terrorists or rogue AI is to blame? The problem remains the same.

“The truth is that the part that causes the problem is the actuator. It’s the interface to physicality. It’s the fact that there’s this little killer drone thing that’s coming around. It’s not so much whether it’s a bunch of teenagers or terrorists behind it or some AI, or even, for that matter, if there’s enough of them, it could just be an utterly random process. The whole AI thing, in a sense, distracts us from what the real problem would be. The AI component would be only ambiguously there and of little importance.

This notion of attacking the problem on the level of some sort of autonomy algorithm, instead of on the actuator level is totally misdirected. This is where it becomes a policy issue. The sad fact is that, as a society, we have to do something to not have little killer drones proliferate. And maybe that problem will never take place anyway. What we don’t have to worry about is the AI algorithm running them, because that’s speculative. There isn’t an AI algorithm that’s good enough to do that for the time being. An equivalent problem can come about, whether or not the AI algorithm happens. In a sense, it’s a massive misdirection.”

It is a misdirection that entails an evasion of responsibility and a failure of political imagination.

All of this is well put, and there’s more along the same lines. Lanier’s chief concern, however, is to frame this as a problem of religious thinking infecting the work of technology. Early on, for instance, he says, “what I’m proposing is that if AI was a real thing, then it probably would be less of a threat to us than it is as a fake thing. What do I mean by AI being a fake thing? That it adds a layer of religious thinking to what otherwise should be a technical field.”

And toward the conclusion of his talk, Lanier elaborates:

“There is a social and psychological phenomenon that has been going on for some decades now:  A core of technically proficient, digitally-minded people reject traditional religions and superstitions. They set out to come up with a better, more scientific framework. But then they re-create versions of those old religious superstitions! In the technical world these superstitions are just as confusing and just as damaging as before, and in similar ways.”

What Lanier proposes in response to this state of affairs is something like a wall of separation, not between the church and the state, but between religion and technology:

“To me, what would be ridiculous is for somebody to say, ‘Oh, you mustn’t study deep learning networks,’ or ‘you mustn’t study theorem provers,’ or whatever technique you’re interested in. Those things are incredibly interesting and incredibly useful. It’s the mythology that we have to become more self-aware of. This is analogous to saying that in traditional religion there was a lot of extremely interesting thinking, and a lot of great art. And you have to be able to kind of tease that apart and say this is the part that’s great, and this is the part that’s self-defeating. We have to do it exactly the same thing with AI now.”

I’m sure Lanier would admit that this is easier said than done. In fact, he suggests as much himself a few lines later. But it’s worth asking whether the kind of sorting out that Lanier proposes is not merely challenging, but, perhaps, unworkable. Just as mid-twentieth century theories of secularization have come on hard times owing to a certain recalcitrant religiosity (or spirituality, if you prefer), we might also find that the religion of technology cannot simply be wished away or bracketed.

Paradoxically, we might also say that something like the religion of technology emerges precisely to the (incomplete) degree that the process of secularization unfolded in the West. To put this another way, imagine that there is within Western consciousness a particular yearning for transcendence. Suppose, as well, that this yearning is so ingrained that it cannot be easily eradicated. Consequently, you end up having something like a whack-a-mole effect. Suppress one expression of this yearning, and it surfaces elsewhere. The yearning for transcendence never quite dissipates; it only transfigures itself. So the progress of secularization, to the degree that it successfully suppresses traditional expressions of the quest for transcendence, manages only to channel it into other cultural projects, namely techno-science. I certainly don’t mean to suggest that the entire techno-scientific project is an unmitigated expression of the religion of technology. That’s certainly not the case. But, as Noble made clear, particularly in his chapter on AI, the techno-religious impulse is hardly negligible.

One last thought, for now, arising out of my recent blogging through Frankenstein. Mary Shelley seemed to understand that one cannot easily disentangle the noble from the corrupt in human affairs: both are rooted in the same faculties and desires. Attempt to eradicate the baser elements altogether, and you may very well eliminate all that is admirable too. The heroic tendency is not safe, but neither is the attempt to tame it. I don’t think we’ve been well-served by our discarding of this essentially tragic vision in favor of a more cheery techno-utopianism.