Humanist Technology Criticism

“Who are the humanists, and why do they dislike technology so much?”

That’s what Andrew McAfee wants to know. McAfee, formerly of Harvard Business School, is now a researcher at MIT and the author, with Erik Brynjolfsson, of The Second Machine Age: Work, Progress, and Prosperity in a Time of Brilliant Technologies. At his blog, hosted by the Financial Times, McAfee expressed his curiosity about the use of the terms humanism or humanist in “critiques of technological progress.” “I’m honestly not sure what they mean in this context,” McAfee admitted.

Humanism is a rather vague and contested term with a convoluted history, so McAfee asks a fair question–even if his framing is rather slanted. I suspect that most of the critics he has in mind would take issue with the second half of McAfee’s compound query. One of the examples he cites, after all, is Jaron Lanier, who, whatever else we might say of him, can hardly be described as someone who “dislikes technology.”

That said, what response can we offer McAfee? It would be helpful to sketch a history of the network of ideas that have been linked to the family of words that include humanism, humanist, and the humanities. The journey would take us from the Greeks and the Romans, through (not excluding) the medieval period to the Renaissance and beyond. But that would be a much larger project, and I wouldn’t be your best guide. Suffice it to say that near the end of such a journey, we would come to find the idea of humanism splintered and in retreat; indeed, in some quarters, we would find it rejected and despised.

But if we forego the more detailed history of the concept, can we not, nonetheless, offer some clarifying comments regarding the more limited usage that has perplexed McAfee? Perhaps.

I’ll start with an observation made by Wilfred McClay in a 2008 essay in the Wilson Quarterly, “The Burden of the Humanities.” McClay suggested that we define the humanities as “the study of human things in human ways.”¹ If so, McClay continues, “then it follows that they function in culture as a kind of corrective or regulative mechanism, forcing upon our attention those features of our complex humanity that the given age may be neglecting or missing.” Consequently, we have a hard time defining the humanities–and, I would add, humanism–because “they have always defined themselves in opposition.”

McClay provides a brief historical sketch showing that the humanities have, at different historical junctures, defined themselves by articulating a vision of human distinctiveness in opposition to the animal, the divine, and the rational-mechanical. “What we are as humans,” McClay adds, “is, in some respects, best defined by what we are not: not gods, not angels, not devils, not machines, not merely animals.”

In McClay’s historical sketch, humanism and the humanities have lately sought to articulate an understanding of the human in opposition to the “rational-mechanical,” or, in other words, in opposition to the technological, broadly speaking. In McClay’s telling, this phase of humanist discourse emerges in early nineteenth century responses to the Enlightenment and industrialization. Here we have the beginnings of a response to McAfee’s query. The deployment of humanist discourse in the context of technology criticism is not exactly a recent development.

There may have been earlier voices of which I am unaware, but we may point to Thomas Carlyle’s 1829 essay, “Signs of the Times,” as an ur-text of the genre.² Carlyle dubbed his era the “Mechanical Age.” “Men are grown mechanical in head and heart, as well as in hand,” Carlyle complained. “Not for internal perfection,” he added, “but for external combinations and arrangements, for institutions, constitutions, for Mechanism of one sort or another, do they hope and struggle.”

Talk of humanism in relation to technology also flourished in the early and mid-twentieth century. Alan Jacobs, for instance, is currently working on a book project that examines the response of a set of early twentieth century Christian humanists, including W.H. Auden, Simone Weil, and Jacques Maritain, to total war and the rise of technocracy. “On some level each of these figures,” Jacobs explains, “intuited or explicitly argued that if the Allies won the war simply because of their technological superiority — and then, precisely because of that success, allowed their societies to become purely technocratic, ruled by the military-industrial complex — their victory would become largely a hollow one. Each of them sees the creative renewal of some form of Christian humanism as a necessary counterbalance to technocracy.”

In a more secular vein, Paul Goodman asked in 1969, “Can Technology Be Humane?” In his article (h/t Nicholas Carr), Goodman observed that popular attitudes toward technology had shifted in the post-war world. Science and technology could no longer claim the “unblemished and justified reputation as a wonderful adventure” they had enjoyed for the previous three centuries. “The immediate reasons for this shattering reversal of values,” in Goodman’s view, “are fairly obvious.

“Hitler’s ovens and his other experiments in eugenics, the first atom bombs and their frenzied subsequent developments, the deterioration of the physical environment and the destruction of the biosphere, the catastrophes impending over the cities because of technological failures and psychological stress, the prospect of a brainwashed and drugged 1984. Innovations yield diminishing returns in enhancing life. And instead of rejoicing, there is now widespread conviction that beautiful advances in genetics, surgery, computers, rocketry, or atomic energy will surely only increase human woe.”

For his part, Goodman advocated a more prudential and, yes, humane approach to technology. “Whether or not it draws on new scientific research,” Goodman argued, “technology is a branch of moral philosophy, not of science.” “As a moral philosopher,” Goodman continued in a remarkable passage, “a technician should be able to criticize the programs given him to implement. As a professional in a community of learned professionals, a technologist must have a different kind of training and develop a different character than we see at present among technicians and engineers. He should know something of the social sciences, law, the fine arts, and medicine, as well as relevant natural sciences.” The whole essay is well worth your time. I bring it up merely as another instance of the genre of humanistic technology criticism.

More recently, in an interview cited by McAfee, Jaron Lanier has advocated the revival of humanism in relation to the present technological milieu. “I’m trying to revive or, if you like, resuscitate, or rehabilitate the term humanism,” Lanier explained before being interrupted by a bellboy cum Kantian, who broke into the interview to say, “Humanism is humanity’s adulthood. Just thought I’d throw that in.” When he resumed, Lanier expanded on what he means by humanism:

“And pragmatically, if you don’t treat people as special, if you don’t create some sort of a special zone for humans—especially when you’re designing technology—you’ll end up dehumanising the world. You’ll turn people into some giant, stupid information system, which is what I think we’re doing. I agree that humanism is humanity’s adulthood, but only because adults learn to behave in ways that are pragmatic. We have to start thinking of humans as being these special, magical entities—we have to mystify ourselves because it’s the only way to look after ourselves given how good we’re getting at technology.”

In McAfee’s defense, this is an admittedly murky vision. I couldn’t tell you what exactly Lanier is proposing when he says that we have to “mystify ourselves.” Earlier in the interview, however, he gave an example that might help us understand his concerns. Discussing Google Translate, he observed the following: “What people don’t understand is that the translation is really just a mashup of pre-existing translations by real people. The current set up of the internet trains us to ignore the real people who did the first translations, in order to create the illusion that there is an electronic brain. This idea is terribly damaging. It does dehumanise people; it does reduce people.”

So Lanier’s complaint here seems to be that this particular configuration of technology obscures an essential human element. Furthermore, Lanier is concerned that people are reduced in this process. This is, again, a murky concept, but I take it to mean that some important element of what constitutes the human is being ignored or marginalized or suppressed. Like the humanities in McClay’s analysis, Lanier’s humanism draws our attention to “those features of our complex humanity that the given age may be neglecting or missing.”

One last example. Some years ago, historian of science George Dyson wondered if the cost of machines that think will be people who don’t. Dyson’s quip suggests the problem that Evan Selinger has dubbed the outsourcing of our humanity. We outsource our humanity when we allow an app or device to do for us what we ought to be doing for ourselves (naturally, that ought needs to be established). Selinger has developed his critique in response to a variety of apps but especially those that outsource what we may call our emotional labor.

I think it fair to include the outsourcing critique within the broader genre of humanist technology criticism because it assumes something about the nature of our humanity and finds that certain technologies are complicit in its erosion. Not surprisingly, in a tweet about McAfee’s post, Selinger indicated that he and Brett Frischmann had plans to co-author a book analyzing the concept of dehumanizing technology in order to bring clarity to its application. I have no doubt that Selinger and Frischmann’s work will advance the discussion.

While McAfee was puzzled by humanist discourse with regard to technology criticism, others have been overtly critical. Evgeny Morozov recently complained that most technology critics default to humanist/anti-humanist rhetoric in their critiques in order to evade more challenging questions about politics and economics. For my part, I don’t see why the two approaches cannot each contribute to a broader understanding of technology and its consequences while also informing our personal and collective responses.

Of course, while Morozov is critical of the humanizing/dehumanizing approach to technology on more or less pragmatic grounds–it is ultimately ineffective in his view–others oppose it on ideological or theoretical grounds. For these critics, humanism is part of the problem, not the solution. Technology has been all too humanistic, or anthropocentric, and has consequently wreaked havoc on the global environment. Or, they may argue that any deployment of humanism as an evaluative category also implies a policing of the boundaries of the human with discriminatory consequences. Others will argue that it is impossible to make a hard ontological distinction among the natural, the human, and the technological. We have always been cyborgs in their view. Still others argue that there is no compelling reason to privilege the existing configuration of what we call the human. Humanity is a work in progress and technology will usher in a brave new post-human world.

Already, I’ve gone on longer than a blog post should, so I won’t comment on each of those objections to humanist discourse. Instead, I’ll leave you with a few considerations about what humanist technology criticism might entail. I’ll do so while acknowledging that these considerations undoubtedly imply a series of assumptions about what it means to be a human being and what constitutes human flourishing.

That said, I would suggest that a humanist critique of technology entails a preference for technology that (1) operates at a humane scale, (2) works toward humane ends, (3) allows for the fullest possible flourishing of a person’s capabilities, (4) does not obfuscate moral responsibility, and (5) acknowledges certain limitations to what we might quaintly call the human condition.

I realize these all need substantial elaboration and support–the fifth point is especially contentious–but I’ll leave it at that for now. Take that as a preliminary sketch. I’ll close, finally, with a parting observation.

A not insubstantial element within the culture that drives technological development is animated by what can only be described as a thoroughgoing disgust with the human condition, particularly its embodied nature. Whether we credit the wildest dreams of the Singularitarians, Extropians, and Post-humanists or not, their disdain as it finds expression in a posture toward technological power is reason enough for technology critics to strive for a humanist critique that acknowledges and celebrates the limitations inherent in our frail, yet wondrous humanity.

This gratitude and reverence for the human as it is presently constituted, in all its wild and glorious diversity, may strike some as an unpalatably religious stance to assume. And, indeed, for many of us it stems from a deeply religious understanding of the world we inhabit, a world that is, as Pope Francis recently put it, “our common home.” Perhaps, though, even the secular citizen may be troubled by, as Hannah Arendt has put it, such a “rebellion against human existence as it has been given, a free gift from nowhere (secularly speaking).”

________________________

¹ Here’s a fuller expression of McClay’s definition from earlier in the essay: “The distinctive task of the humanities, unlike the natural sciences and social sciences, is to grasp human things in human terms, without converting or reducing them to something else: not to physical laws, mechanical systems, biological drives, psychological disorders, social structures, and so on. The humanities attempt to understand the human condition from the inside, as it were, treating the human person as subject as well as object, agent as well as acted-upon.”

² Shelley’s “A Defence of Poetry” might qualify.

6 thoughts on “Humanist Technology Criticism”

  1. I read the whole thing and I have reason to believe that I agree with you on many points. Your final five guidelines for tech with a humanist bent make very good sense to me especially when I consider them in the context of edtech and how we use and apply technology in our schools and homes in the interest of facilitating learning. Thanks for duly stretching my thought process.

  2. Thanks for capturing the never-ending and complicated tension between humanity and all else, in this case, technology. The experience of being fully and truly human is indeed facilitated by, yes, technology. We can’t avoid it, which makes your five criteria worth exploring further. I hope you do that!

  3. [A comment from the “other side”]

    Hi, I’ve wandered by your blog a few times. I’ve wondered if it would do any good to give my perspective as someone who is very pro-technology, but also critical of the way it’s propagandized in the service of corporate power. I’ve found it generally a bad idea all around to comment on a blog where one is in basic philosophical disagreement with the blog-owner. But I very much enjoyed the Morozov article, and it paralleled many of my own thoughts. Let me just try to say something based in a different worldview, keeping in mind the cliche of tech people slagging humanities people as wanting to go live in caves.

    I think the key is that contentious point of “limitations to what we might quaintly call the human condition”. Are these limitations to be what one “celebrates” or struggles to surpass? I see a thread of what might be called nerd-shaming in that sort of writing, of marking broad areas of knowledge as off-limits because of fear in the very socially conservative speaker. Along the lines of the old concept of “There are some things Man was not meant to know, you are damned to Hell if you tamper in this realm”.

    There’s too much of a nasty history of attacks on science and technology rooted in such concepts to be ignored. “… human distinctiveness in opposition to the animal, the divine, and the rational-mechanical. …” fits, for example, the opposition to evolution based on the oppositional claim that it meant humans were mere animals. Even today that’s still an argument in some circles, that evolution somehow denies humanity. There’s 19th century writing full of horror regarding the emerging discovery that life is based in chemistry and electricity.

    And this, in my view, is where Morozov makes some very good points about the extremely reactionary nature which can come out of such thinking. When you say “why both approaches cannot each contribute to a broader understanding of technology …” maybe it’s possible in a completely abstract way. But in practice, if your analysis has heavy amounts of essentially saying why techies are to be denigrated because they’re socially maladjusted from what you deem permissible aspirations (“a thoroughgoing disgust with the human condition, particularly its embodied nature”), that’s rather far afield from dealing with laissez-faire capitalism and inequality. It’s a version of the standard Left problem of identity-politics versus economics. Yes, yes, there are some ways those two can intersect, it’s not 100% disjoint. However, again, it’s likely you’re going to be spending a huge amount of time in cultural argument, essentially slamming fairly powerless people for approval from your own clique, and not a whole lot in labor organizing (once more, maybe not 100%-0%, but it’s much easier to go after unpopular supposed weird subgroups than money and power).

    1. Seth,

      Thanks for taking the time to comment. While I share your wariness about engaging in online exchanges under certain circumstances, I hope you find this exchange, however it unfolds, civil and profitable. In any case, I’m glad for the push-back when I get it; it keeps me honest, so to speak.

      Regarding the matter of limitations, no nerd shaming was intended. I’m not entirely sure, actually, how it would entail nerd-shaming in any case. Also, I did not have in view knowledge so much as the limits imposed by the fact that we are embodied creatures. I find it helpful to think about this issue of limits with the help of some categories provided by Albert Borgmann. He distinguishes between troubles we resist in practice even if we must accept them, and those we accept in principle and in practice.

      Regarding the “nasty history of attacks on science and technology,” just let me know if you think I’ve made such an attack, and we can certainly discuss that point. As for the general concern, it seems to me that science and technology are pretty much in the clear. Whatever the history, and I think it is more complex than what your comment suggests, science and tech more or less run the cultural show now.

      Finally, regarding the last paragraph, I’m not entirely sure I followed exactly, so please feel free to correct me if I’ve not read your point rightly. First, I’m pretty sympathetic to Morozov’s argument, even if I’m hesitant to embrace it altogether. Second, I’m not trying to denigrate anyone, much less slam powerless people (I’m not sure who these powerless people are, by the way). I’d be curious to know where exactly you see me doing this. Third, and this is a point that I’d want to elaborate at greater length, I’m not so sure that the line you quoted is so far afield as you suggest from matters of “laissez-faire capitalism and inequality.” Capitalism and the views I have in mind, chiefly those that see human nature as a field for unbridled manipulation, both proceed from shared assumptions about desire, power, and individual autonomy.

      I’ll close with one more point. I’m not sure I’ve yet done a good enough job of expressing where I depart from Morozov’s position, thus it may not be obvious to readers why I think the approach he criticizes and his own may not be ultimately incompatible. Basically, it seems to me that Morozov’s point is about analysis and action, that is, understanding the impact of economic forces on the development of technology and figuring out paths of political action. But on what grounds do we oppose certain practices if not from some prior understanding of what is good and what is pernicious? This is where the humanist criticism can come into play. But I need to think more about this, no doubt.

      Again, thanks for the comment. I hope this response helps.

      1. Thanks for being open to some dialogue.

        I’d hoped I was clear about the nerd-shaming aspect, in the concept of seeming to describe aspirations as social maladjustment – i.e. “a thoroughgoing disgust with the human condition, particularly its embodied nature”. When you talk of “[limits] we accept in principle and in practice.”, I’d say a key area of divide is regarding pushing against what is currently *thought* to be a limit in principle, sometimes even attempting to take action in practice (said action which will initially be clumsy, or even outright physically dangerous to the doer, e.g. body-modification). Perhaps some of what we currently BELIEVE are “limits imposed by the fact that we are embodied creatures”, are not really limits at all. For example, the development of anesthesia certainly extended the limits of possible surgery on the body.

        And this segues into what I mean by “essentially slamming fairly powerless people”. When you say “Whether we credit the wildest dreams of the Singularitarians, Extropians, and Post-humanists or not, their disdain as it finds expression in a posture toward technological power is reason enough for technology critics to strive for a humanist critique …”, I look at that, and think the end result is going to be humanities types doing nerd-shaming about those supposed weirdos, rather than doing anything which’ll help in reviving unions or strengthening workers’ rights. Really, the wild dreamers are just doing very common philosophy on the question of mind-body duality, except in their own dialect. Said humanist critique might in theory somehow have an economic effect in a certain path, I’ll grant that – it’s *conceivable*. But it’ll be loaded with so many identity-politics kinds of diversions that I doubt it’ll ever affect a billionaire or big corporation in practice.

        Here’s an analogy: To riff on the old saying “If you have a hammer, everything looks like a nail”, what if you want to drive a nail, but you DON’T have a hammer? All you have is a screwdriver, where you have spent many years of study mastering the techniques of application (nobody is better than you at rotational speed, yet with the precise amount of subtlety to avoid damage). Maybe it’s possible to take the back-end of the screwdriver and pound the nail. Perhaps if you roughen the head of the nail, and press a screwdriver blade very hard, one can somehow treat the nail as a screw (and then bring in all the developed skill at force of rotation). Is this an utterly impossible task? No, it might be possible – but it’s deeply error-prone, and very likely to fail in many ways. The nail’s much more likely to bend, or the screwdriver to slip and stab something or even shatter. But, you might say, it could be done, it could be done. Yes, I reply, but it’s an extremely bad way of trying to do it.

        That is, when you say “But on what grounds do we oppose certain practices if not from some prior understanding of what is good and what is pernicious? This is where the humanist criticism can come into play.”, I see you trying to re-invent what’s fundamentally an economic critique (Morozov’s point) with your type of humanist critique. This is like trying to use a screwdriver as a hammer.

        One further point, when you say “science and tech more or less run the cultural show now.”, I believe I understand what you’re discussing, and that connects to my own political advocacy. What I see you referencing is how science and tech are pressed into a narrative of support for plutocracy, laissez-faire capitalism, union-busting, and so on (e.g. the cliche “Luddite!” accusation). I’m very much opposed to this, and I wish there were more support for what I call pro-technology social criticism. But that’s not the same as a “cultural show” based in the science and tech methods of thinking, meaning logic, evidence, exploration, testing your views against the world, etc. Everything from anti-vaxxers to Creationism to climate-change denialism shows that society has a long way to go there.

        I hope this helps you think about the topic.
