Back in 2014, I wrote a post called “Do Artifacts Have Ethics?” Yes, of course, was the answer, and I offered forty-one questions formulated to help us think about an artifact’s complex ethical dimensions. A few months later, in 2015, I wrote about what I called Humanist Tech Criticism.
You would think, then, that I’d be pleasantly surprised by the recent eruption of interest in both ethics of technology and humane technology. I am, however, less than enthusiastic but also trying not to be altogether cynical. I’ll elaborate in just a moment, but, first, here is a sampling of the spate of recent stories, articles, and essays about the twin themes of humanist technology and ethics of technology.
“Tech’s Ethical ‘Dark Side’: Harvard, Stanford and Others Want to Address It”
“Ethical Tech Will Require a Grassroots Revolution”
“The Internet of Things Needs a Code of Ethics”
“Early Facebook and Google Employees Form Coalition to Fight What They Built”
“The Big Tech Backlash Is Reaching a Tipping Point”
“Tech Backlash Grows as Investors Press Apple to Act on Children’s Use”
“Sean Parker unloads on Facebook”
“The Tech Humanist Manifesto”
“No One Is Coming. It Is Up To Us”
“Dear Zuck (and Facebook Product Teams)”
“The tech insiders who fear a smartphone dystopia”
You could easily find another two dozen stories and more just like these that focus on a set of interrelated topics: former Silicon Valley executives and designers lamenting their earlier work, current investors demanding more ethically responsible devices (especially where children are concerned), general fretting over the consequences of social media (especially Facebook) on our political culture, and reporting on the formation of the Center for Humane Technology.
As you might imagine, there have been critics of this recent ethical awakening among at least a few of the well-connected in Silicon Valley. Among the more measured are Maya Indira Ganesh’s two posts on Cyborgology and this post on the always insightful Librarian Shipwreck. Both are critical and suspicious of these new efforts, particularly the Center for Humane Technology. But neither forecloses the possibility that some good may yet come of it all. That seems a judicious approach to take.
Others, of course, have been even more critical, focusing on what can be interpreted as a rather opportunistic change of tune at little to no personal cost for most of these Silicon Valley executives and designers. This cynicism is not altogether unwarranted. I'm tempted by it myself, but it also seems to me that it cannot justly be applied indiscriminately to everyone working for a more ethical and humane tech industry.
My concerns lie elsewhere. I’ve expressed some of them before. What they amount to, mostly, is this: all efforts to apply ethics to technology or to envision a more humane technology will flounder because there is no robust public consensus about either human flourishing or ethical norms.
Moreover, technology* is both cause and symptom of this state of affairs: it advances, more or less unchecked, precisely because of this absence while it progressively undermines the plausibility of such a consensus. Thus, we are stuck in a vicious cycle generated by a reinforcing confluence of political, cultural, economic, and technological forces.
Most of the efforts mentioned above appear to me, then, to address what amounts to the tip of the tip of the iceberg. That said, I want to avoid a critical hipsterism here—"I'm aware of problems so deep you don't even realize they exist." And I also do not want to suggest that any attempt at reform is useless unless it addresses the problem in its totality. But it may also be the case that such efforts, arising from and never escaping the more general technological malaise, only serve to reinforce and extend the existing situation. Tinkering with the apparatus to make it more humane does not go far enough if the apparatus itself is intrinsically inhumane.
Meaningful talk about ethics and human flourishing in connection with technology, to say nothing of meaningful action, might only be possible within communities that can sustain both a shared vision of the good life and the practices that embody such a vision. The problem, of course, is that our technologies operate at a scale that eclipses the scope of such communities.
In “Friday’s Child,” Auden makes the parenthetical observation, “When kings were local, people knelt.” Likewise, we might say that when technology was local, people ruled. Something changed once technology ceased to be local, that is to say once it evolved into complex systems that overlapped communities, states, countries, and cultures. Traditional institutions and cultural norms were no longer adequate. They could not scale up to keep pace with technology because their natural habitat was the local community.
A final set of observations: Modern technology, in the broadest sense we might imagine the phenomenon, closer to what Ellul means by technique, is formative. It tacitly conveys an anthropology, an understanding of what it means to be a human being. It does so in the most powerful way possible: inarticulately, as something more basic than a worldview or an ideology. It operates on our bodies, our perception, our habits; it shapes our imagination, our relationships, our desires.
The modern liberal order abets technology’s formative power to the degree that it disavows any strong claims about ethics and human flourishing. It is in the space of that disavowal that technology as an implicit anthropology and an implicit politics takes root and expands, framing and conditioning any subsequent efforts to subject it to ethical critique. Our understanding of the human is already conditioned by our technological milieu. Fundamental to this tacit anthropology, or account of the human, is the infinite malleability of human nature. Malleable humanity is a precondition to the unfettered expansion of technology. (This is why transhumanism is the proper eschatology of our technological order. Ultimately, humanity must adapt and conform, even if it means the loss of humanity as we have known it. As explicit ideology, this may still seem like a fringe position; as implicit practice, however, it is widely adopted.)
All of this accounts for why previous calls for more humane technology have not amounted to much. And this would be one other quibble I have with the work of the Center for Humane Technology and others calling for humanistic technology: thus far there seems to be little awareness of or interest in the longstanding history of tech criticism that should inform their efforts. Again, this is not about critical hipsterism; it is about drawing on a diverse intellectual tradition that contains indispensable wisdom for anyone working toward more ethical and humane technology. Maybe that work is still to come. I hope that it is.
*I confess I’m using the word technology in a manner to which I myself sometimes object, as shorthand for a network of artifacts, techniques, implicit values, and political/industrial concerns.
Reminder: You can subscribe to my roughly twice-monthly newsletter, The Convivial Society, here.