Humanist Technology Criticism

“Who are the humanists, and why do they dislike technology so much?”

That’s what Andrew McAfee wants to know. McAfee, formerly of Harvard Business School, is now a researcher at MIT and the author, with Erik Brynjolfsson, of The Second Machine Age: Work, Progress, and Prosperity in a Time of Brilliant Technologies. At his blog, hosted by the Financial Times, McAfee expressed his curiosity about the use of the terms humanism or humanist in “critiques of technological progress.” “I’m honestly not sure what they mean in this context,” McAfee admitted.

Humanism is a rather vague and contested term with a convoluted history, so McAfee asks a fair question–even if his framing is rather slanted. I suspect that most of the critics he has in mind would take issue with the second half of McAfee’s compound query. One of the examples he cites, after all, is Jaron Lanier, who, whatever else we might say of him, can hardly be described as someone who “dislikes technology.”

That said, what response can we offer McAfee? It would be helpful to sketch a history of the network of ideas that have been linked to the family of words that include humanism, humanist, and the humanities. The journey would take us from the Greeks and the Romans, through (not excluding) the medieval period to the Renaissance and beyond. But that would be a much larger project, and I wouldn’t be your best guide. Suffice it to say that near the end of such a journey, we would come to find the idea of humanism splintered and in retreat; indeed, in some quarters, we would find it rejected and despised.

But if we forego the more detailed history of the concept, can we not, nonetheless, offer some clarifying comments regarding the more limited usage that has perplexed McAfee? Perhaps.

I’ll start with an observation made by Wilfred McClay in a 2008 essay in the Wilson Quarterly, “The Burden of the Humanities.” McClay suggested that we define the humanities as “the study of human things in human ways.”¹ If so, McClay continues, “then it follows that they function in culture as a kind of corrective or regulative mechanism, forcing upon our attention those features of our complex humanity that the given age may be neglecting or missing.” Consequently, we have a hard time defining the humanities–and, I would add, humanism–because “they have always defined themselves in opposition.”

McClay provides a brief historical sketch showing that the humanities have, at different historical junctures, defined themselves by articulating a vision of human distinctiveness in opposition to the animal, the divine, and the rational-mechanical. “What we are as humans,” McClay adds, “is, in some respects, best defined by what we are not: not gods, not angels, not devils, not machines, not merely animals.”

In McClay’s historical sketch, humanism and the humanities have lately sought to articulate an understanding of the human in opposition to the “rational-mechanical,” or, in other words, in opposition to the technological, broadly speaking. In McClay’s telling, this phase of humanist discourse emerges in early nineteenth-century responses to the Enlightenment and industrialization. Here we have the beginnings of a response to McAfee’s query. The deployment of humanist discourse in the context of technology criticism is not exactly a recent development.

There may have been earlier voices of which I am unaware, but we may point to Thomas Carlyle’s 1829 essay, “Signs of the Times,” as an ur-text of the genre.² Carlyle dubbed his era the “Mechanical Age.” “Men are grown mechanical in head and heart, as well as in hand,” Carlyle complained. “Not for internal perfection,” he added, “but for external combinations and arrangements, for institutions, constitutions, for Mechanism of one sort or another, do they hope and struggle.”

Talk of humanism in relation to technology also flourished in the early and mid-twentieth century. Alan Jacobs, for instance, is currently working on a book project that examines the response of a set of early 20th century Christian humanists, including W.H. Auden, Simone Weil, and Jacques Maritain, to total war and the rise of technocracy. “On some level each of these figures,” Jacobs explains, “intuited or explicitly argued that if the Allies won the war simply because of their technological superiority — and then, precisely because of that success, allowed their societies to become purely technocratic, ruled by the military-industrial complex — their victory would become largely a hollow one. Each of them sees the creative renewal of some form of Christian humanism as a necessary counterbalance to technocracy.”

In a more secular vein, Paul Goodman asked in 1969, “Can Technology Be Humane?” In his article (h/t Nicholas Carr), Goodman observed that popular attitudes toward technology had shifted in the post-war world. Science and technology could no longer claim the “unblemished and justified reputation as a wonderful adventure” they had enjoyed for the previous three centuries. “The immediate reasons for this shattering reversal of values,” in Goodman’s view, “are fairly obvious.

Hitler’s ovens and his other experiments in eugenics, the first atom bombs and their frenzied subsequent developments, the deterioration of the physical environment and the destruction of the biosphere, the catastrophes impending over the cities because of technological failures and psychological stress, the prospect of a brainwashed and drugged 1984. Innovations yield diminishing returns in enhancing life. And instead of rejoicing, there is now widespread conviction that beautiful advances in genetics, surgery, computers, rocketry, or atomic energy will surely only increase human woe.”

For his part, Goodman advocated a more prudential and, yes, humane approach to technology. “Whether or not it draws on new scientific research,” Goodman argued, “technology is a branch of moral philosophy, not of science.” “As a moral philosopher,” Goodman continued in a remarkable passage, “a technician should be able to criticize the programs given him to implement. As a professional in a community of learned professionals, a technologist must have a different kind of training and develop a different character than we see at present among technicians and engineers. He should know something of the social sciences, law, the fine arts, and medicine, as well as relevant natural sciences.” The whole essay is well worth your time. I bring it up merely as another instance of the genre of humanistic technology criticism.

More recently, in an interview cited by McAfee, Jaron Lanier has advocated the revival of humanism in relation to the present technological milieu. “I’m trying to revive or, if you like, resuscitate, or rehabilitate the term humanism,” Lanier explained before being interrupted by a bellboy cum Kantian, who broke into the interview to say, “Humanism is humanity’s adulthood. Just thought I’d throw that in.” When he resumed, Lanier expanded on what he meant by humanism:

“And pragmatically, if you don’t treat people as special, if you don’t create some sort of a special zone for humans—especially when you’re designing technology—you’ll end up dehumanising the world. You’ll turn people into some giant, stupid information system, which is what I think we’re doing. I agree that humanism is humanity’s adulthood, but only because adults learn to behave in ways that are pragmatic. We have to start thinking of humans as being these special, magical entities—we have to mystify ourselves because it’s the only way to look after ourselves given how good we’re getting at technology.”

In McAfee’s defense, this is an admittedly murky vision. I couldn’t tell you what exactly Lanier is proposing when he says that we have to “mystify ourselves.” Earlier in the interview, however, he gave an example that might help us understand his concerns. Discussing Google Translate, he observes the following: “What people don’t understand is that the translation is really just a mashup of pre-existing translations by real people. The current set up of the internet trains us to ignore the real people who did the first translations, in order to create the illusion that there is an electronic brain. This idea is terribly damaging. It does dehumanise people; it does reduce people.”

So Lanier’s complaint here seems to be that this particular configuration of technology obscures an essential human element. Furthermore, Lanier is concerned that people are reduced in this process. This is, again, a murky concept, but I take it to mean that some important element of what constitutes the human is being ignored or marginalized or suppressed. Like the humanities in McClay’s analysis, Lanier’s humanism draws our attention to “those features of our complex humanity that the given age may be neglecting or missing.”

One last example. Some years ago, historian of science George Dyson wondered if the cost of machines that think will be people who don’t. Dyson’s quip suggests the problem that Evan Selinger has dubbed the outsourcing of our humanity. We outsource our humanity when we allow an app or device to do for us what we ought to be doing for ourselves (naturally, that ought needs to be established). Selinger has developed his critique in response to a variety of apps but especially those that outsource what we may call our emotional labor.

I think it fair to include the outsourcing critique within the broader genre of humanist technology criticism because it assumes something about the nature of our humanity and finds that certain technologies are complicit in its erosion. Not surprisingly, in a tweet of McAfee’s post, Selinger indicated that he and Brett Frischmann had plans to co-author a book analyzing the concept of dehumanizing technology in order to bring clarity to its application. I have no doubt that Selinger and Frischmann’s work will advance the discussion.

While McAfee was puzzled by humanist discourse with regard to technology criticism, others have been overtly critical. Evgeny Morozov recently complained that most technology critics default to humanist/anti-humanist rhetoric in their critiques in order to evade more challenging questions about politics and economics. For my part, I don’t see why both approaches cannot contribute to a broader understanding of technology and its consequences while also informing our personal and collective responses.

Of course, while Morozov is critical of the humanizing/dehumanizing approach to technology on more or less pragmatic grounds–it is ultimately ineffective in his view–others oppose it on ideological or theoretical grounds. For these critics, humanism is part of the problem, not the solution. Technology has been all too humanistic, or anthropocentric, and has consequently wreaked havoc on the global environment. Or, they may argue that any deployment of humanism as an evaluative category also implies a policing of the boundaries of the human with discriminatory consequences. Others will argue that it is impossible to make a hard ontological distinction among the natural, the human, and the technological. We have always been cyborgs in their view. Still others argue that there is no compelling reason to privilege the existing configuration of what we call the human. Humanity is a work in progress and technology will usher in a brave, new post-human world.

Already, I’ve gone on longer than a blog post should, so I won’t comment on each of those objections to humanist discourse. Instead, I’ll leave you with a few considerations about what humanist technology criticism might entail. I’ll do so while acknowledging that these considerations undoubtedly imply a series of assumptions about what it means to be a human being and what constitutes human flourishing.

That said, I would suggest that a humanist critique of technology entails a preference for technology that (1) operates at a humane scale, (2) works toward humane ends, (3) allows for the fullest possible flourishing of a person’s capabilities, (4) does not obfuscate moral responsibility, and (5) acknowledges certain limitations to what we might quaintly call the human condition.

I realize these all need substantial elaboration and support–the fifth point is especially contentious–but I’ll leave it at that for now. Take that as a preliminary sketch. I’ll close, finally, with a parting observation.

A not insubstantial element within the culture that drives technological development is animated by what can only be described as a thoroughgoing disgust with the human condition, particularly its embodied nature. Whether we credit the wildest dreams of the Singularitarians, Extropians, and Post-humanists or not, their disdain as it finds expression in a posture toward technological power is reason enough for technology critics to strive for a humanist critique that acknowledges and celebrates the limitations inherent in our frail, yet wondrous humanity.

This gratitude and reverence for the human as it is presently constituted, in all its wild and glorious diversity, may strike some as an unpalatably religious stance to assume. And, indeed, for many of us it stems from a deeply religious understanding of the world we inhabit, a world that is, as Pope Francis recently put it, “our common home.” Perhaps, though, even the secular citizen may be troubled by, as Hannah Arendt put it, such a “rebellion against human existence as it has been given, a free gift from nowhere (secularly speaking).”

________________________

¹ Here’s a fuller expression of McClay’s definition from earlier in the essay: “The distinctive task of the humanities, unlike the natural sciences and social sciences, is to grasp human things in human terms, without converting or reducing them to something else: not to physical laws, mechanical systems, biological drives, psychological disorders, social structures, and so on. The humanities attempt to understand the human condition from the inside, as it were, treating the human person as subject as well as object, agent as well as acted-upon.”

² Shelley’s “A Defence of Poetry” might qualify.

Tech Criticism! What is it Good For?

Earlier this year, Evgeny Morozov published a review essay of Nicholas Carr’s The Glass Cage. The review also doubled as a characteristically vigorous, and uncharacteristically confessional, censure of the contemporary practice of technology criticism. In what follows, I’ll offer a bit of unsolicited commentary on Morozov’s piece, and in a follow-up post I’ll link it to Alan Jacobs’ proposal for a technological history of modernity.

Morozov opened by asking two questions: “What does it mean to be a technology critic in today’s America? And what can technology criticism accomplish?” Some time ago, I offered my own set of reflections on the practice of technology criticism, and, as I revisit those reflections, I find that they overlap, somewhat, with a few of Morozov’s concerns. I’m going to start on this point of agreement.

“That radical critique of technology in America has come to a halt,” Morozov maintains, “is in no way surprising: it could only be as strong as the emancipatory political vision to which it is attached. No vision, no critique.” Which is to say that technology criticism, like technology, must always be for something other than itself. It must be animated and framed by a larger concern. In my own earlier reflections on technology criticism, I put the matter thus:

The critic of technology is a critic of artifacts and systems that are always for the sake of something else. The critic of technology does not love technology because technology rarely exists for its own sake …. So what does the critic of technology love? Perhaps it is the environment. Perhaps it is an ideal of community or friendship. Perhaps it is an ideal civil society. Perhaps it is health and vitality. Perhaps it is sound education. Perhaps liberty. Perhaps joy. Perhaps a particular vision of human flourishing. The critic of technology is animated by a love for something other than the technology itself. [Or should be … too many tech critics are, in fact, far too enamored of the technologies themselves.]

[Moreover,] criticism of technology, if it moves beyond something like mere description and analysis, implies making what amount to moral and ethical judgments. The critic of technology, if they reach conclusions about the consequences of technology for the lives of individual persons and the health of institutions and communities, will be doing work that rests on ethical principles and carries ethical implications.

Naturally, such ethical evaluations are not arrived at in a moral vacuum or from some ostensibly neutral position. According to what standards, then, and from within which tradition does tech criticism proceed? Well, it depends on the critic in question. More from my earlier post:

The libertarian critic, the Marxist critic, the Roman Catholic critic, the posthumanist critic, and so on — each advances their criticism of technology informed by their ethical commitments. Their criticism of technology flows from their loves. Each criticizes technology according to the larger moral and ethical framework implied by the movements, philosophies, and institutions that have shaped their identity. And, of course, so it must be. We are limited beings whose knowledge is always situated within particular contexts. There is no avoiding this, and there is nothing particularly undesirable about this state of affairs. The best critics will be self-aware of their commitments and work hard to sympathetically entertain divergent perspectives. They will also work patiently and diligently to understand a given technology before reaching conclusions about its moral and ethical consequences. But I suspect this work of understanding, precisely because it can be demanding, is typically driven by some deeper commitment that lends urgency and passion to the critic’s work.

For his part, if I may frame his essay with the categories I’ve sketched above, Morozov is deeply motivated by what he calls an “emancipatory political vision.” Consequently, he concludes that any technology criticism that does not work to advance this vision is a waste of time, at best. Tech criticism, divorced from political and economic considerations, cannot, in Morozov’s view, accomplish the lofty goal of advancing the progressive emancipatory vision he prizes.

While I feel the force of Morozov’s argument, I wouldn’t put the matter quite so starkly. There are, as I had suggested in my earlier post, a variety of perspectives from which one might launch a critique of technological society. Morozov’s piece pushes critics of all stripes to grapple with the effectiveness of their work (is that already a technocratic posture to take?), but each will define what constitutes effectiveness on their own terms.

I’d also suggest that a revolution-or-bust model of engagement with technology is not entirely helpful. For one thing, is there really nothing at all to be gained by arriving at better understandings of the personal and social consequences of our technologies? I’ll take marginal improvements for some over none at all. Does this amount to fighting a rearguard action? Perhaps. In any case, I don’t see why we shouldn’t present a broad front. Let the phenomenologists do their work and the Marxists theirs. Better yet, let their work mingle promiscuously. Indeed, let the Pope himself do his part.

It also seems to me that, if there is to be a political response to technological society, then it should be democratic in nature; and if democratic, then it must arise out of deliberation and consent. If so, then whatever work helps advance public understanding of the stakes can be valuable, even if it gives us only a partial analysis.

Morozov would reply, as he argued against Carr, that this assumes the problem is one of an ill-informed citizenry in need of illumination when, in fact, the problem is rather that economic and social forces are limiting the ability of the average person to act in line with their preferences. In his recent piece arguing for an “attentional commons,” Matthew Crawford identified one instance of a recurring pattern:

“Silence is now offered as a luxury good. In the business-class lounge at Charles de Gaulle Airport, I heard only the occasional tinkling of a spoon against china. I saw no advertisements on the walls. This silence, more than any other feature, is what makes it feel genuinely luxurious. When you step inside and the automatic doors whoosh shut behind you, the difference is nearly tactile, like slipping out of haircloth into satin. Your brow unfurrows, your neck muscles relax; after 20 minutes you no longer feel exhausted.

Outside, in the peon section, is the usual airport cacophony. Because we have allowed our attention to be monetized, if you want yours back you’re going to have to pay for it.”

The pattern is this: where the technologically enhanced market intrudes, what used to be a public good is repackaged as a luxury item that now only the few can afford. I think this well illustrates Morozov’s point, and it is an important one. It suggests that tech criticism may risk turning into therapy or life-coaching for the wealthy. One can observe this same concern in an earlier piece from Morozov on “the mindfulness racket.”

That said and acknowledged, I’m not sure all didactic efforts are wholly wasted. Morozov is an intensely smart critic. He knows a lot. He’s thought long and hard about the problems of technological society. He is remarkably well read. Most of us aren’t. As a teacher, I’ve come to realize that it is easy to forget what you, too, had to learn at one point. It is easy to assume that your audience knows everything that you’ve learned over the years, particularly in whatever field you happen to specialize. While the delimiting forces of present economic and political configurations should not be ignored, I think it is much too early to give up the task of propagating a serious understanding of technology and its consequences.

______________________

“Why, then, aspire to practice any kind of technology criticism at all?” Morozov asked. His reply was less than sanguine:

“I am afraid I do not have a convincing answer. If history has, in fact, ended in America—with venture capital (represented by Silicon Valley) and the neoliberal militaristic state (represented by the NSA) guarding the sole entrance to its crypt—then the only real task facing the radical technology critic should be to resuscitate that history. But this surely can’t be done within the discourse of technology, and given the steep price of admission, the technology critic might begin most logically by acknowledging defeat.”

Or, they might begin to reimagine the tech critical project. How deeply do we need to dig to “resuscitate that history”? How can we escape the discourse of technology? What if Morozov hasn’t pushed quite far enough? Morozov wants us to frame technology in light of economics and politics, but what if politics and economics, as they presently exist, are already compromised, already encircled by technology?

In a follow-up post, I’ll explain why I think Alan Jacobs’ project to understand the technological history of modernity, as I understand it, may help us answer some of these questions.

Silencing the Heretics: How the Faithful Respond to Criticism of Technology

I started to write a post about a few unhinged reactions to an essay published by Nicholas Carr in this weekend’s WSJ, “Automation Makes Us Dumb.”  Then I realized that I already wrote that post back in 2010. I’m republishing “A God that Limps” below, with slight revisions, and adding a discussion of the reactions to Carr. 

Our technologies are like our children: we react with reflexive and sometimes intense defensiveness if either is criticized. Several years ago, while teaching at a small private high school, I forwarded an article to my colleagues that raised some questions about the efficacy of computers in education. This was a mistake. The article appeared in a respectable journal, was judicious in its tone, and cautious in its conclusions. I didn’t think then, nor do I now, that it was at all controversial. In fact, I imagined that given the setting it would be of at least passing interest. However, within a handful of minutes (minutes!)—hardly enough time to skim, much less read, the article—I was receiving rather pointed, even angry replies.

I was mystified, and not a little amused, by the responses. Mostly though, I began to think about why this measured and cautious article evoked such a passionate response. Around the same time I stumbled upon Wendell Berry’s essay titled, somewhat provocatively, “Why I am Not Going to Buy a Computer.” More arresting than the essay itself, however, were the letters that came in to Harper’s. These letters, which now typically appear alongside the essay whenever it is anthologized, were caustic and condescending. In response, Berry wrote,

The foregoing letters surprised me with the intensity of the feelings they expressed. According to the writers’ testimony, there is nothing wrong with their computers; they are utterly satisfied with them and all that they stand for. My correspondents are certain that I am wrong and that I am, moreover, on the losing side, a side already relegated to the dustbin of history. And yet they grow huffy and condescending over my tiny dissent. What are they so anxious about?

Precisely my question. Whence the hostility, defensiveness, agitation, and indignant, self-righteous anxiety?

I’m typing these words on a laptop, and they will appear on a blog that exists on the Internet.  Clearly I am not, strictly speaking, a Luddite. (Although, in light of Thomas Pynchon’s analysis of the Luddite as Badass, there may be a certain appeal.) Yet, I do believe an uncritical embrace of technology may prove fateful, if not Faustian.

The stakes are high. We can hardly exaggerate the revolutionary character of certain technologies throughout history:  the wheel, writing, the gun, the printing press, the steam engine, the automobile, the radio, the television, the Internet. And that is a very partial list. Katherine Hayles has gone so far as to suggest that, as a species, we have “codeveloped with technologies; indeed, it is no exaggeration,” she writes in Electronic Literature, “to say modern humans literally would not have come into existence without technology.”

We are, perhaps because of the pace of technological innovation, quite conscious of the place and power of technology in our society and in our own lives. We joke about our technological addictions, but it is sometimes a rather nervous punchline. It makes sense to ask questions. Technology, it has been said, is a god that limps. It dazzles and performs wonders, but it can frustrate and wreak havoc. Good sense seems to suggest that we avoid, as Thoreau put it, becoming tools of our tools. This doesn’t entail burning the machine; it may only require a little moderation. At a minimum, it means creating, as far as we are able, a critical distance from our toys and tools, and that requires searching criticism.

And we are back where we began. We appear to be allergic to just that kind of searching criticism. So here is my question again:  Why do we react so defensively when we hear someone criticize our technologies?

And so ended my earlier post. Now consider a handful of responses to Carr’s article, “Automation Makes Us Dumb.” Better yet, read the article, if you haven’t already, and then come back for the responses.

Let’s start with a couple of tweets by Joshua Gans, a professor of management at the University of Toronto.

Then there was this from entrepreneur Marc Andreessen:

Even better are some of the replies attached to Andreessen’s tweet. I’ll transcribe a few of those here for your amusement.

“Why does he want to be stuck doing repetitive mind-numbing tasks?”

“‘These automatic jobs are horrible!’ ‘Stop killing these horrible jobs with automation!'” [Sarcasm implied.]

“by his reasoning the steam engine makes us weaklings, yet we’ve seen the opposite. so maybe the best intel is ahead”

“Let’s forget him, he’s done so much damage to our industry, he is just interested in profiting from his provocations”

“Nick clearly hasn’t understood the true essence of being ‘human’. Tech is an ‘enabler’ and aids to assist in that process.”

“This op-ed is just a Luddite screed dressed in drag. It follows the dystopian view of ‘Wall-E’.”

There you have it. I’ll let you tally up the logical fallacies.

Honestly, I’m stunned by the degree of apparently willful ignorance exhibited by these comments. The best I can say for them is that they are based on a glance at the title of Carr’s article and nothing more. It would be much more worrisome if these individuals had actually read the article and still managed to make these comments that betray no awareness of what Carr actually wrote.

More than once, Carr makes clear that he is not opposed to automation in principle. The last several paragraphs of the article describe how we might go forward with automation in a way that avoids some serious pitfalls. In other words, Carr is saying, “Automate, but do it wisely.” What a Luddite!

When I wrote in 2010, I had not yet formulated the idea of a Borg Complex, but this inability to rationally or calmly abide any criticism of technology is surely pure, undistilled Borg Complex, complete with Luddite slurs!

I’ll continue to insist that we are in desperate need of serious thinking about the powers that we are gaining through our technologies. It seems, however, that there is a class of people who are hell-bent on shutting down any and all criticism of technology. If the criticism is misguided or unsubstantiated, then it should be refuted. Dismissing criticism while giving absolutely no evidence of having understood it, on the other hand, helps no one at all.

I come back to David Noble’s description of the religion of technology often, but only because of how useful it is as a way of understanding techno-scientific culture. When technology is a religion, when we embrace it with blind faith, when we anchor our hope in it, when we love it as ourselves–then any criticism of technology will be understood as either heresy or sacrilege. And that seems to be a pretty good way of characterizing the responses to tech criticism I’ve been discussing: the impassioned reactions of the faithful to sacrilegious heresy.