Quantify Thyself

A thought in passing this morning. Here’s a screen shot that purports to be from an ad for Microsoft’s new wearable device called Band:

[Image: screenshot captioned “Microsoft Band: Read the backstory on the evolution and development of Microsoft’s new smart device,” via Windows Central]

I say “purports” because I’ve not been able to find this particular shot and caption on any official Microsoft sites. I first encountered it in this story about Band from October of last year, and I also found it posted to a Reddit thread around the same time. You can watch the official ad here.

It may be that this image is a hoax or that Microsoft decided it was a bit too disconcerting and pulled it. A more persistent sleuth should be able to determine which. Whether authentic or not, however, it is instructive.

In tweeting a link to the story in which I first saw the image, I commented: “Define ‘know,’ ‘self,’ and ‘human.'” Nick Seaver astutely replied: “that’s exactly what they’re doing, eh?”

Again, the “they” in this case appears to be a bit ambiguous. That said, the picture is instructive because it reminds us, as Seaver’s reply suggests, that more than our physical fitness is at stake in the emerging regime of quantification. If I were to expand my list of 41 questions about technology’s ethical dimensions, I would include this one: How will the use of this technology redefine my moral vocabulary? Or this: What about myself will the use of this technology encourage me to value?

Consider all that is accepted when someone buys into the idea, even if only tacitly, that Microsoft Band will in fact deepen their knowledge of themselves. What assumptions are granted about what it means to know, about what there is to know, and about what can be known? What is implied about the nature of the self when we accept that a device like Band can help us understand it more effectively? We are, needless to say, rather far removed from the Delphic injunction, “Know thyself.”

It is not, of course, that I necessarily think users of Band will be so naive that they will consciously believe there is nothing more to their identity than what Band can measure. Rather, it’s that most of us do have a propensity to pay more attention to what we can measure, particularly when an element of competitiveness is introduced.

I’ll go a step further. Not only do we tend to pay more attention to what we can measure, we begin to care more about what we can measure. Perhaps that is because measurement affords us a degree of ostensible control over whatever it is that we are able to measure. It makes self-improvement tangible and manageable, but it does so, in part, by reducing the self to those dimensions that register on whatever tool or device we happen to be using to take our measure.

I find myself frequently coming back to one line in a poem by Wendell Berry: “We live the given life, not the planned.” Indeed, and we might also say, “We live the given life, not the quantified.”

A certain vigilance is required to remember that our often marvelous tools of measurement always achieve their precision by narrowing, sometimes radically, what they take into consideration. To reveal one dimension of the whole, they must obscure the others. The danger lies in mistaking the partial representation for the whole.

What Do We Want, Really?

I was in Amish country last week. Several times a day I heard the clip-clop of horse hooves and the whirring of buggy wheels coming down the street and then receding into the distance–a rather soothing Doppler effect. While there, I was reminded of an anecdote about the Amish relayed by a reader in the comments to a recent post:

I once heard David Kline tell of Protestant tourists sight-seeing in an Amish area. An Amishman was brought onto the bus and asked how the Amish differ from other Christians. First, he explained the similarities: all have DNA, wear clothes (even if in different styles), and like to eat good food.

Then the Amishman asked: “How many of you have a TV?”

Most, if not all, the passengers raised their hands.

“How many of you believe your children would be better off without TV?”

Most, if not all, the passengers raised their hands.

“How many of you, knowing this, will get rid of your TV when you go home?”

No hands were raised.

“That’s the difference between the Amish and others,” the man concluded.

I like the Amish. As I’ve said before, the Amish are remarkably tech-savvy. They understand that technologies have consequences, and they are determined to think very hard about how different technologies will affect the life of their communities. Moreover, they are committed to sacrificing the benefits a new technology might bring if they deem the costs too great to bear. This takes courage and resolve. We may not agree with all of the choices made by Amish communities, but it seems to me that we must admire both their resolution to think about what they are doing and their willingness to make the sacrifices necessary to live according to their principles.


The Amish are a kind of sign to us, especially as we come upon the start of a new year and consider, again, how we might better live our lives. Let me clarify what I mean by calling the Amish a sign. It is not that their distinctive way of life points the way to the precise path we must all follow. Rather, it is that they remind us of the costs we must be prepared to incur and the resoluteness we must be prepared to demonstrate if we are to live a principled life.

It is perhaps a symptom of our disorder that we seem to believe that all can be made well merely by our making a few better choices along the way. Rarely do we imagine that what might be involved in the realization of our ideals is something more radical and more costly. It is easier for us to pretend that all that is necessary is a few simple tweaks and minor adjustments to how we already conduct our lives, nothing that will make us too uncomfortable. If and when it becomes impossible to sustain that fiction, we take comfort in fatalism: nothing can ever change, really, and so it is not worth trying to change anything at all.

What is often the case, however, is that we have not been honest with ourselves about what it is that we truly value. Perhaps an example will help. My wife and I frequently discuss what, for lack of a better way of putting it, I’ll call the ethics of eating. I will not claim to have thought very deeply, yet, about all of the related issues, but I can say that we care about what has been involved in getting food to our table. We care about the labor involved, the treatment of animals, and the use of natural resources. We care, as well, about the quality of the food and about the cultural practices of cooking and eating. I realize, of course, that it is rather fashionable to care about such things, and I can only hope that our caring is not merely a matter of fashion. I do not think it is.

But it is another thing altogether for us to consider how much we really care about these things. Acting on principle in this arena is not without its costs. Do we care enough to bear those costs? Do we care enough to invest the time necessary to understand all the relevant complex considerations? Are we prepared to spend more money? Are we willing to sacrifice convenience? And then it hits me that what we are talking about is not simply making a different consumer choice here and there. If we really care about the things we say we care about, then we are talking about changing the way we live our lives.

In cases like this, and they are many, I’m reminded of a paragraph in sociologist James Davison Hunter’s book about varying approaches to moral education in American schools. “We say we want the renewal of character in our day,” Hunter writes,

“but we do not really know what to ask for. To have a renewal of character is to have a renewal of a creedal order that constrains, limits, binds, obligates, and compels. This price is too high for us to pay. We want character without conviction; we want strong morality but without the emotional burden of guilt or shame; we want virtue but without particular moral justifications that invariably offend; we want good without having to name evil; we want decency without the authority to insist upon it; we want moral community without any limitations to personal freedom. In short, we want what we cannot possibly have on the terms that we want it.”

You may not agree with Hunter about the matter of moral education, but it is his conclusion that I want you to note: we want what we cannot possibly have on the terms that we want it.

This strikes me as being a widely applicable diagnosis of our situation. Across so many different domains of our lives, private and public, this dynamic seems to hold. We say we want something, often something very noble and admirable, but in reality we are not prepared to pay the costs required to obtain the thing we say we want. We are not prepared to be inconvenienced. We are not prepared to reorder our lives. We may genuinely desire that noble, admirable thing, whatever it may be; but we want some other, less noble thing more.

At this point, I should probably acknowledge that many of the problems we face as individuals and as a society are not the sort that would be solved by our own individual thoughtfulness and resolve, no matter how heroic. But very few problems, private or public, will be solved without an honest reckoning of the price to be paid and the work to be done.

So what then? I’m presently resisting the temptation to turn this short post toward some happy resolution, or at least toward some more positive considerations. Doing so would be disingenuous. Mostly, I simply want to draw our attention, mine no less than yours, toward the possibly unpleasant work of counting the costs. As we think about the new year looming before us and contemplate how we might live it better than the last, I want us to entertain the possibility that what will be required of us to do so might be nothing less than a fundamental reordering of our lives. At the very least, I want to impress upon myself the importance of finding the space to think at length and the courage to act.

Saturday Evening Links

Below are a few links for your reading pleasure this weekend.

Researcher Believes 3D Printing May Lead to the Creation of Superhuman Organs Providing Humans with New Abilities: “This God-like ability will be made possible thanks in part to the latest breakthroughs in bioprinting. If companies and researchers are coming close to having the ability to 3D print and implant entire organs, then why wouldn’t it be possible to create our own unique organs, which provide us with superhuman abilities?”

Future perfect: how the Victorians invented the future: “It was only around the beginning of the 1800s, as new attitudes towards progress, shaped by the relationship between technology and society, started coming together, that people started thinking about the future as a different place, or an undiscovered country – an idea that seems so familiar to us now that we often forget how peculiar it actually is.”

Robotic Rape and Robotic Child Sexual Abuse: Should they be criminalised? Paper by John Danaher: “Soon there will be sex robots. The creation of such devices raises a host of social, legal and ethical questions. In this article, I focus in on one of them. What if these sex robots are deliberately designed and used to replicate acts of rape and child sexual abuse? Should the creation and use of such robots be criminalised, even if no person is harmed by the acts performed? I offer an argument for thinking that they should be.”

Enthusiasts and Skeptics Debate Artificial Intelligence: “… the Singularitarians’ belief that we’re biological machines on the verge of evolving into not entirely biological super-machines has a distinctly religious fervor and certainty. ‘I think we are going to start to interconnect as a human species in a fashion that is intimate and magical,’ Diamandis told me. ‘What I would imagine in the future is a meta-intelligence where we are all connected by the Internet [and] achieve a new level of sentience. . . . Your readers need to understand: It’s not stoppable. It doesn’t matter what they want. It doesn’t matter how they feel.'”

Artificial Intelligence Isn’t a Threat—Yet: “The trouble is, nobody yet knows what that oversight should consist of. Though AI poses no immediate existential threat, nobody in the private sector or government has a long-term solution to its potential dangers. Until we have some mechanism for guaranteeing that machines never try to replace us, or relegate us to zoos, we should take the problem of AI risk seriously.”

Is it okay to torture or murder a robot?: “What’s clear is that there is a spectrum of “aliveness” in robots, from basic simulations of cute animal behaviour, to future robots that acquire a sense of suffering. But as Darling’s Pleo dinosaur experiment suggested, it doesn’t take much to trigger an emotional response in us. The question is whether we can – or should – define the line beyond which cruelty to these machines is unacceptable. Where does the line lie for you? If a robot cries out in pain, or begs for mercy? If it believes it is hurting? If it bleeds?”

A couple of housekeeping notes. Reading Frankenstein posts will resume at the start of next week. Also, you may have noticed that an Index for the blog is in progress. I’ve always wanted to find a way to make older posts more accessible, so I’ve settled on a selective index for People and Topics. You can check it out by clicking the “Index” tab above.

Cheers!

Friday Links: Questioning Technology Edition

My previous post, which raised 41 questions about the ethics of technology, is turning out to be one of the most viewed on this site. That is, admittedly, faint praise, but I’m glad it is, because helping us think about technology is why I write this blog. The post has also prompted a few valuable recommendations from readers, and I wanted to pass these along to you in case you missed them in the comments.

Matt Thomas reminded me of two earlier lists of questions we should be asking about our technologies. The first of these is Jacques Ellul’s list of 76 Reasonable Questions to Ask of Any Technology (update: see Doug Hill’s comment below about the authorship of this list.) The second is Neil Postman’s more concise list of Six Questions to Ask of New Technologies. Both are worth perusing.

Also, Chad Kohalyk passed along a link to Shannon Vallor’s module, An Introduction to Software Engineering Ethics.

Greg Lloyd provided some helpful links to the (frequently misunderstood) Amish approach to technology, including one to this IEEE article by Jameson Wetmore: “Amish Technology: Reinforcing Values and Building Communities” (PDF). In it, we read, “When deciding whether or not to allow a certain practice or technology, the Amish first ask whether it is compatible with their values.” What a radical idea; the rest of us should try it sometime! While we’re on the topic, I wrote about the Tech-Savvy Amish a couple of years ago.

I can’t remember who linked to it, but I also came across an excellent 1994 article in Ars Electronica that is composed entirely of questions about what we would today call a Smart Home: “How smart does your bed have to be, before you are afraid to go to sleep at night?”

And while we’re talking about lists, here’s a post on Kranzberg’s Six Laws of Technology and a list of 11 things I try to do, often with only marginal success, to achieve a healthy relationship with the Internet.

Enjoy these, and thanks again to those of you who provided the links.

Do Artifacts Have Ethics?

Writing about “technology and the moral dimension,” tech writer and Gigaom founder Om Malik made the following observation:

“I can safely say that we in tech don’t understand the emotional aspect of our work, just as we don’t understand the moral imperative of what we do. It is not that all players are bad; it is just not part of the thinking process the way, say, ‘minimum viable product’ or ‘growth hacking’ are.”

I’m not sure how many people in the tech industry would concur with Malik’s claim, but it is a remarkably telling admission from at least one well-placed individual. Happily, Malik realizes that “it is time to add an emotional and moral dimension to products.” But what exactly does it mean to add an emotional and moral dimension to products?

Malik’s own ensuing discussion is brief and deals chiefly with using data ethically and producing clear, straightforward terms of service. This suggests that Malik is mostly encouraging tech companies to treat their customers in an ethically responsible manner. If so, it’s rather disconcerting that Malik takes this to be a discovery that he feels compelled to announce, prophetically, to his colleagues. Leaving that unfortunate indictment of the tech community aside, I want to suggest that there is no need to add a moral dimension to technology.

Years ago, Langdon Winner famously asked, “Do artifacts have politics?” In the article that bears that title, Winner went on to argue that they most certainly do. We might also ask, “Do artifacts have ethics?” I would argue that they do indeed. The question is not whether technology has a moral dimension; the question is whether we recognize it or not. In fact, technology’s moral dimension is inescapable, layered, and multi-faceted.

When we do think about technology’s moral implications, we tend to think about what we do with a given technology. We might call this the “guns don’t kill people, people kill people” approach to the ethics of technology. What matters most about a technology on this view is the use to which it is put. This is, of course, a valid consideration. A hammer may indeed be used either to build a house or to bash someone’s head in. On this view, technology is morally neutral, and the only morally relevant question is this: What will I do with this tool?

But is this really the only morally relevant question one could ask? For instance, pursuing the example of the hammer, might I not also ask how having the hammer in hand encourages me to perceive the world around me, or what feelings it arouses in me?

Below are a few other questions that we might ask in order to get at the wide-ranging “moral dimension” of our technologies. There are, of course, many others that we could ask, but this is a start.

  1. What sort of person will the use of this technology make of me?
  2. What habits will the use of this technology instill?
  3. How will the use of this technology affect my experience of time?
  4. How will the use of this technology affect my experience of place?
  5. How will the use of this technology affect how I relate to other people?
  6. How will the use of this technology affect how I relate to the world around me?
  7. What practices will the use of this technology cultivate?
  8. What practices will the use of this technology displace?
  9. What will the use of this technology encourage me to notice?
  10. What will the use of this technology encourage me to ignore?
  11. What was required of other human beings so that I might be able to use this technology?
  12. What was required of other creatures so that I might be able to use this technology?
  13. What was required of the earth so that I might be able to use this technology?
  14. Does the use of this technology bring me joy?
  15. Does the use of this technology arouse anxiety?
  16. How does this technology empower me? At whose expense?
  17. What feelings does the use of this technology generate in me toward others?
  18. Can I imagine living without this technology? Why, or why not?
  19. How does this technology encourage me to allocate my time?
  20. Could the resources used to acquire and use this technology be better deployed?
  21. Does this technology automate or outsource labor or responsibilities that are morally essential?
  22. What desires does the use of this technology generate?
  23. What desires does the use of this technology dissipate?
  24. What possibilities for action does this technology present? Is it good that these actions are now possible?
  25. What possibilities for action does this technology foreclose? Is it good that these actions are no longer possible?
  26. How does the use of this technology shape my vision of a good life?
  27. What limits does the use of this technology impose upon me?
  28. What limits does my use of this technology impose upon others?
  29. What does my use of this technology require of others who would (or must) interact with me?
  30. What assumptions about the world does the use of this technology tacitly encourage?
  31. What knowledge has the use of this technology disclosed to me about myself?
  32. What knowledge has the use of this technology disclosed to me about others? Is it good to have this knowledge?
  33. What are the potential harms to myself, others, or the world that might result from my use of this technology?
  34. Upon what systems, technical or human, does my use of this technology depend? Are these systems just?
  35. Does my use of this technology encourage me to view others as a means to an end?
  36. Does using this technology require me to think more or less?
  37. What would the world be like if everyone used this technology exactly as I use it?
  38. What risks will my use of this technology entail for others? Have they consented?
  39. Can the consequences of my use of this technology be undone? Can I live with those consequences?
  40. Does my use of this technology make it easier to live as if I had no responsibilities toward my neighbor?
  41. Can I be held responsible for the actions which this technology empowers? Would I feel better if I couldn’t?