Writing about “technology and the moral dimension,” tech writer and Gigaom founder Om Malik made the following observation:
“I can safely say that we in tech don’t understand the emotional aspect of our work, just as we don’t understand the moral imperative of what we do. It is not that all players are bad; it is just not part of the thinking process the way, say, ‘minimum viable product’ or ‘growth hacking’ are.”
I’m not sure how many people in the tech industry would concur with Malik’s claim, but it is a remarkably telling admission from at least one well-placed individual. Happily, Malik realizes that “it is time to add an emotional and moral dimension to products.” But what exactly does it mean to add an emotional and moral dimension to products?
Malik’s own ensuing discussion is brief and deals chiefly with using data ethically and producing clear, straightforward terms of service. This suggests that Malik is mostly encouraging tech companies to treat their customers in an ethically responsible manner. If so, it’s rather disconcerting that Malik takes this to be a discovery that he feels compelled to announce, prophetically, to his colleagues. Leaving that unfortunate indictment of the tech community aside, I want to suggest that there is no need to add a moral dimension to technology.
Years ago, Langdon Winner famously asked, “Do artifacts have politics?” In the article that bears that title, Winner went on to argue that they most certainly do. We might also ask, “Do artifacts have ethics?” I would argue that they do indeed. The question is not whether technology has a moral dimension; the question is whether we recognize it or not. In fact, technology’s moral dimension is inescapable, layered, and multi-faceted.
When we do think about technology’s moral implications, we tend to think about what we do with a given technology. We might call this the “guns don’t kill people, people kill people” approach to the ethics of technology. What matters most about a technology, on this view, is the use to which it is put. This is, of course, a valid consideration. A hammer may indeed be used either to build a house or to bash someone’s head in. On this view, technology is morally neutral, and the only morally relevant question is this: What will I do with this tool?
But is this really the only morally relevant question one could ask? For instance, pursuing the example of the hammer, might I not also ask how having the hammer in hand encourages me to perceive the world around me? Or what feelings having a hammer in hand arouses in me?
Below are a few other questions that we might ask in order to get at the wide-ranging “moral dimension” of our technologies. There are, of course, many others that we could ask, but this is a start.
- What sort of person will the use of this technology make of me?
- What habits will the use of this technology instill?
- How will the use of this technology affect my experience of time?
- How will the use of this technology affect my experience of place?
- How will the use of this technology affect how I relate to other people?
- How will the use of this technology affect how I relate to the world around me?
- What practices will the use of this technology cultivate?
- What practices will the use of this technology displace?
- What will the use of this technology encourage me to notice?
- What will the use of this technology encourage me to ignore?
- What was required of other human beings so that I might be able to use this technology?
- What was required of other creatures so that I might be able to use this technology?
- What was required of the earth so that I might be able to use this technology?
- Does the use of this technology bring me joy?
- Does the use of this technology arouse anxiety?
- How does this technology empower me? At whose expense?
- What feelings does the use of this technology generate in me toward others?
- Can I imagine living without this technology? Why, or why not?
- How does this technology encourage me to allocate my time?
- Could the resources used to acquire and use this technology be better deployed?
- Does this technology automate or outsource labor or responsibilities that are morally essential?
- What desires does the use of this technology generate?
- What desires does the use of this technology dissipate?
- What possibilities for action does this technology present? Is it good that these actions are now possible?
- What possibilities for action does this technology foreclose? Is it good that these actions are no longer possible?
- How does the use of this technology shape my vision of a good life?
- What limits does the use of this technology impose upon me?
- What limits does my use of this technology impose upon others?
- What does my use of this technology require of others who would (or must) interact with me?
- What assumptions about the world does the use of this technology tacitly encourage?
- What knowledge has the use of this technology disclosed to me about myself?
- What knowledge has the use of this technology disclosed to me about others? Is it good to have this knowledge?
- What are the potential harms to myself, others, or the world that might result from my use of this technology?
- Upon what systems, technical or human, does my use of this technology depend? Are these systems just?
- Does my use of this technology encourage me to view others as a means to an end?
- Does using this technology require me to think more or less?
- What would the world be like if everyone used this technology exactly as I use it?
- What risks will my use of this technology entail for others? Have they consented?
- Can the consequences of my use of this technology be undone? Can I live with those consequences?
- Does my use of this technology make it easier to live as if I had no responsibilities toward my neighbor?
- Can I be held responsible for the actions which this technology empowers? Would I feel better if I couldn’t?