Why We Can’t Have Humane Technology

Back in 2014, I wrote a post called “Do Artifacts Have Ethics?” Yes, of course, was the answer, and I offered forty-one questions formulated to help us think about an artifact’s complex ethical dimensions. A few months later, in 2015, I wrote about what I called Humanist Tech Criticism.

You would think, then, that I’d be pleasantly surprised by the recent eruption of interest in both ethics of technology and humane technology. I am, however, less than enthusiastic but also trying not to be altogether cynical. I’ll elaborate in just a moment, but, first, here is a sampling of the spate of recent stories, articles, and essays about the twin themes of humanist technology and ethics of technology.

“Tech’s Ethical ‘Dark Side’: Harvard, Stanford and Others Want to Address It”
“Ethical Tech Will Require a Grassroots Revolution”
“The Internet of Things Needs a Code of Ethics”
“Early Facebook and Google Employees Form Coalition to Fight What They Built”
“The Big Tech Backlash Is Reaching a Tipping Point”
“Tech Backlash Grows as Investors Press Apple to Act on Children’s Use”
“Sean Parker unloads on Facebook”
“The Tech Humanist Manifesto”
“No One Is Coming. It Is Up To Us”
“Dear Zuck (and Facebook Product Teams)”
“The tech insiders who fear a smartphone dystopia”

You could easily find another two dozen stories just like these, all focused on a set of interrelated topics: former Silicon Valley executives and designers lamenting their earlier work, current investors demanding more ethically responsible devices (especially where children are concerned), general fretting over the consequences of social media (especially Facebook) for our political culture, and reporting on the formation of the Center for Humane Technology.

As you might imagine, there have been critics of this recent ethical awakening among at least a few of the well-connected in Silicon Valley. Among the more measured are Maya Indira Ganesh’s two posts on Cyborgology and this post on the always insightful Librarian Shipwreck. Both are critical and suspicious of these new efforts, particularly the Center for Humane Technology. But neither forecloses the possibility that some good may yet come of it all. That seems a judicious approach to take.

Others, of course, have been even more critical, focusing on what can be interpreted as a rather opportunistic change of tune at little to no personal cost for most of these Silicon Valley executives and designers. This cynicism is not altogether unwarranted. I’m tempted by it myself, but it seems to me that it cannot justly be applied indiscriminately to everyone working for a more ethical and humane tech industry.

My concerns lie elsewhere. I’ve expressed some of them before. What they amount to, mostly, is this: all efforts to apply ethics to technology or to envision a more humane technology will founder because there is no robust public consensus about either human flourishing or ethical norms.

Moreover, technology* is both cause and symptom of this state of affairs: it advances, more or less unchecked, precisely because of this absence while it progressively undermines the plausibility of such a consensus. Thus, we are stuck in a vicious cycle generated by a reinforcing confluence of political, cultural, economic, and technological forces.

Most of the efforts mentioned above appear to me, then, to address what amounts to the tip of the tip of the iceberg. That said, I want to avoid a critical hipsterism here: “I’m aware of problems so deep you don’t even realize they exist.” And I also do not want to suggest that any attempt at reform is useless unless it addresses the problem in its totality. But it may also be the case that such efforts, arising from and never escaping the more general technological malaise, only serve to reinforce and extend the existing situation. Tinkering with the apparatus to make it more humane does not go far enough if the apparatus itself is intrinsically inhumane.

Meaningful talk about ethics and human flourishing in connection with technology, to say nothing of meaningful action, might only be possible within communities that can sustain both a shared vision of the good life and the practices that embody such a vision. The problem, of course, is that our technologies operate at a scale that eclipses the scope of such communities.

In “Friday’s Child,” Auden makes the parenthetical observation, “When kings were local, people knelt.” Likewise, we might say that when technology was local, people ruled. Something changed once technology ceased to be local, that is to say once it evolved into complex systems that overlapped communities, states, countries, and cultures. Traditional institutions and cultural norms were no longer adequate. They could not scale up to keep pace with technology because their natural habitat was the local community.

A final set of observations: Modern technology, in the broadest sense we might imagine the phenomenon, closer to what Ellul means by technique, is formative. It tacitly conveys an anthropology, an understanding of what it means to be a human being. It does so in the most powerful way possible: inarticulately, as something more basic than a worldview or an ideology. It operates on our bodies, our perception, our habits; it shapes our imagination, our relationships, our desires.

The modern liberal order abets technology’s formative power to the degree that it disavows any strong claims about ethics and human flourishing. It is in the space of that disavowal that technology as an implicit anthropology and an implicit politics takes root and expands, framing and conditioning any subsequent efforts to subject it to ethical critique. Our understanding of the human is already conditioned by our technological milieu. Fundamental to this tacit anthropology, or account of the human, is the infinite malleability of human nature. Malleable humanity is a precondition to the unfettered expansion of technology. (This is why transhumanism is the proper eschatology of our technological order. Ultimately, humanity must adapt and conform, even if it means the loss of humanity as we have known it. As explicit ideology, this may still seem like a fringe position; as implicit practice, however, it is widely adopted.)

All of this accounts for why previous calls for more humane technology have not amounted to much. And this would be one other quibble I have with the work of the Center for Humane Technology and others calling for humanistic technology: thus far there seems to be little awareness of, or interest in, the longstanding history of tech criticism that should inform their efforts. Again, this is not about critical hipsterism; it is about drawing on a diverse intellectual tradition that contains indispensable wisdom for anyone working toward more ethical and humane technology. Maybe that work is still to come. I hope that it is.


*I confess I’m using the word technology in a manner to which I myself sometimes object, as shorthand for a network of artifacts, techniques, implicit values, and political/industrial concerns.


Related: Democracy and Technology and One Does Not Simply Add Ethics To Technology.


Reminder: You can subscribe to my roughly twice-monthly newsletter, The Convivial Society, here.

7 thoughts on “Why We Can’t Have Humane Technology”

  1. Thanks Michael. As always, a challenging perspective. I agree that ultimately humanity will adapt out of necessity (or collectively react against something to enforce modification), just as we have done throughout history with any change thrust on society, be that local or global, technological or otherwise. And while some change is genuinely for the betterment of society, it is mostly for the benefit of an individual (i.e., driven by reward, greed, power, etc.). From my perspective, it’s the latter which will be the dominant hindrance to building anything within an ethical framework. I say “ethical framework,” but you are correct in highlighting the nonexistence of any universally accepted model. My hope is that future technology design will incorporate “ethical parameter” settings that allow a user to use technology while influencing how it operates based on ethical preferences.

  2. You talk like Matthew Crawford, whose 2015 book _The World Beyond Your Head_ does offer such critiques of the liberal stance implied by inaction as the following: “We abstain on principle from condemning activities that leave one compromised and degraded, because we fear that disapproval of the activity would be paternalistic toward those who engage in it. We also abstain from affirming some substantive picture of human flourishing, for fear of imposing our values on others. This gives us a pleasant feeling that we have succeeded in not being paternalistic or presumptuous. The priority we give to avoiding _these_ vices in particular is rooted in our respect for persons understood as autonomous. ‘People should be allowed to make their own decisions.’” But this seeming magnanimity comes at a cost: one might ask, are you truly for (and, by extension, truly against) _anything_? When is intervention warranted? The true liberal would say never.

    It seems to me the burden of proof is at this point on those who defend the technological society. How much evidence do you need before you’re convinced that human will is being eroded, and fast?

    It’s all true: the inhumanity can’t be designed out of the product. It is intrinsic. You’ve read your Ellul. You already know. There is no effective counterargument to _The Technological Society_.

  3. It will be interesting to see whether the CHT will make good on its early acclaim or just end up providing big tech with an easy way to tithe to public opinion. I’m impressed by your comparatively measured response, which, aside from the cautious position David Golumbia took in a recent Vice column, was the only such response I’ve read.

    I suspect that the out-of-the-gate repugnance on the part of some tech critics has at least a little to do with a threatened sense of identity. It may be that some people involved in the CHT or elsewhere are just starting on a path to thinking more deeply about this stuff. That should be encouraged. I hope I’m not so attached to some sense of epistemic privilege regarding technology that I would impede actual concrete improvements or dissuade others from being involved. After all, it’s not as if this is a zero-sum game. I think you’re on the right track in using this as an opportunity to re-inject some lessons from less mainstream critique back into the discourse.

    A few thoughts on politics…

    Like the comment above, I think you’ve located the crux of the problem with reining in technology in our society. Despite their many boons, modern, liberal, technological societies tend to nullify the moral consensus necessary to direct their course while piling up systemic problems out of reach. I think we simply have conflicting values here: avoiding paternalism (a worthy goal, to be sure) means speaking only for ourselves, yet solving our worst problems now requires that we operate as a collective. I don’t want to be overly pessimistic, but I’ve seen nothing to suggest that this trade-off of democracy at this scale is remediable without some kind of major event shifting the paradigm. In fact, we’ve arrived at a point where technology perversely and explicitly appears as a solution to political stagnation (among other forms).

    This problem has been dogging our democracy a long time, as you’ve mentioned before. After all, in the US, haven’t space (as in the frontier) and the market also functioned as ways to pay forward societal tensions? I can’t help but wonder whether an effective politics at the scale we’re now seeking requires a cultural and epistemic homogenization that’s simply too totalizing to be worth it. If we can’t find a way to scale down to a point where people can participate politically in a way that lets them actively perceive the effects of their contributions, I don’t see the turmoil abating.

    On the question of addiction…

    Though I try to be wary of generalizing too much from personal experience, I’m still inclined to think there’s good reason to consider the addiction angle. We need not do so exclusively, but the opposite risk that few progressives seem concerned about is that we may ask more from politics than it’s actually capable of delivering.

    If we take seriously the idea that people may respond with varying degrees of vulnerability (much as with gambling or food), and that some of the most well-funded psychological research in the world happens in marketing departments, I don’t think it’s unreasonable to talk about addiction. Here, one very useful role played by people like Tristan Harris and Adam Alter, not to mention Natasha Dow Schüll (who is hardly a groomed Silicon Valley insider), has been in informing us of the way addiction, or the more ostensibly beneficent “habit formation,” has increasingly become an explicit design goal for many companies. Wasn’t this always the trajectory of consumerism? Why, in lieu of (or perhaps toward?) the political consensus that brings us agency/salvation, should we constrain ourselves from considering smaller-scale or even individual strategies? I’m not convinced by people like Ganesh that we’re politically diverted by such considerations.

    For instance, I just finished Robert Lustig’s The Hacking of the American Mind. After bracketing my aversion to his typical neuro-reductive leanings, I found that quite a few of the counter-consumer-cultural strategies he suggests aren’t so terribly far from those Illich suggested.

    We shouldn’t let broader debates among neuroscience, biology, phenomenology, and other approaches prevent us from recognizing that they each have something to offer. If there is a solution, I seriously doubt it will be categorically individual, political, cultural, or scientific. The willingness to look at all of these angles is something I’ve always appreciated in Nicholas Carr’s work, even though it seems like he’s been shunned lately for starting his critiques from perhaps a more phenomenological place.

    One last thought on technology and community that is perhaps relevant to the above…

    It may very well be the case that, as with alcohol, there’s no legislative solution to all the trouble here. Maybe there are people whose lives are enhanced by certain forms of social media, like grandparents who like seeing pictures of the grandkids more often. Other people may find that, for a variety of reasons, the very same technology operates to their detriment, as a temptation to escapism or superficiality, or a vulnerability to social pressures. What we have then is a trade-off for which there is no universal prescription.

    This is why, in the case of alcoholism, people in AA try to address the problem through voluntary cooperative action rather than lobbying for prohibition. Though it may have had early roots in temperance and the Oxford Group, most modern AA groups operate in a politically agnostic fashion. While many people deride AA for not collecting measurable data or making itself centralized and publicly transparent, it’s also true that no one is forced to join, the diagnosis of alcoholic is only a personal one, and members are motivated toward direct, personal participation.

    There may be a lot that’s problematic about an analogy to tech here, but there may also be much that is instructive. At the least, it raises the interesting question of when it’s appropriate for a group of people directly and personally affected by an issue to freely band together to address their own problems, and when the authority to address such issues should be reserved for society at large.

    Another good case study might be this documentary on cochlear implants: https://www.youtube.com/watch?v=hdIoSNwNfVs

    While I suspect many people might cringe at the thought of a child being denied the ability to hear via an implant, and I among them, I think the film portrays clearly enough that very real strife and suffering come when the constraints that create family bonds, culture, and lifestyle are broken down. In the end, are there not some such constraints embedded in all of our bonds and identifications? Here, I think it’s pretty easy to see visions of the future of human enhancement, even if the answers aren’t so obvious.
