Cyborg Discourse is Useless

In “Why Silicon Valley Can’t Fix Itself,” Ben Tarnoff and Moira Weigel critically engage with one response to the tech backlash: the emergence of the Center for Humane Technology.

The piece begins with an overview of the wave of bad press the tech industry has received, focusing especially on criticism that has emerged within Silicon Valley from former and current industry executives, investors, and workers. Weigel and Tarnoff then describe the work of the center and its emphasis on more humane design as the key to redressing the ills caused by Silicon Valley. They make the interesting observation that humanizing technology is a message the tech industry can get behind because it is, in at least one manifestation, native to Silicon Valley. They are thinking chiefly of the work of Stewart Brand and, later, Steve Jobs.

They then turn to their critique of what they call the tech humanist response to the problems generated by Silicon Valley, a response embodied by the Center for Humane Technology. It is to that critique that I want to give my attention, because Weigel and Tarnoff’s argument targets humanist technology criticism more broadly, and it is this broader argument I want to consider most closely.

Clarifications: I should say before moving forward that back in 2015 I wrote briefly in defense of what I then called humanist tech criticism. I did so initially in response to Evgeny Morozov’s review of Nick Carr’s work on automation, a review which was also a broadside against what he called humanist technology criticism. Shortly thereafter I returned to the theme in response to Andrew McAfee’s query, “Who are the humanists, and why do they dislike technology so much?”

More recently, in a discussion of the tech backlash, I’ve expressed some reservations about the project of humanist technology criticism. My reservations, however, stem from different sources than either Morozov’s critique or that of Weigel and Tarnoff, although there is some overlap with both.

One more prefatory note before I get on with a discussion of Weigel and Tarnoff’s critique of humanist technology criticism. I’ve been using the phrase “what X called humanist technology criticism,” and I’ve done so because the phrase is being used with a measure of imprecision or without a great deal of critical rigor. I think that’s important to note and keep in mind. Finally, then, on to what I’m actually writing this post to discuss.

Tarnoff and Weigel’s critique of tech-humanist discourse is twofold. First, they find that tech humanist criticism, as it is deployed by the Center for Humane Technology, is too narrowly focused either on how individuals qua consumers use digital devices or on the design decisions made by engineers and programmers. This focus ignores the larger economic context in which such decisions are made. In this respect, their critique reiterates Morozov’s 2015 critique of humanist technology criticism.

They argue, for example, that individual design decisions “are only a symptom of a larger issue:

the fact that the digital infrastructures that increasingly shape our personal, social and civic lives are owned and controlled by a few billionaires. Because it ignores the question of power, the tech-humanist diagnosis is incomplete – and could even help the industry evade meaningful reform. Taken up by leaders such as Zuckerberg, tech humanism is likely to result in only superficial changes. These changes may soothe some of the popular anger directed towards the tech industry, but they will not address the origin of that anger. If anything, they will make Silicon Valley even more powerful.

About this, they are almost certainly right. As I wrote in my earlier post on the center, “Tinkering with the apparatus to make it more humane does not go far enough if the apparatus itself is intrinsically inhumane.” That tech companies are poised to appropriate and absorb the tech humanist critique, as it now manifests itself, and strengthen their hand as a result seems obvious enough.

The second aspect of Tarnoff and Weigel’s critique is more philosophical in nature. “Tech humanists say they want to align humanity and technology,” they write. “But this project is based on a deep misunderstanding of the relationship between humanity and technology: namely, the fantasy that these two entities could ever exist in separation.”

This misunderstanding, in their view, generates a number of problems. For example, it yields misguided anxieties about the loss of essential human qualities as a consequence of technological change. (Perfunctory mention of the Phaedrus? Check.) “Holding humanity and technology separate,” they also argue, “clears the way for a small group of humans to determine the proper alignment between them.” And, fundamentally, because human nature changes it cannot “serve as a stable basis for evaluating the impact of technology.”

“Fortunately,” the authors tell us, “there is another way of thinking about how to live with technology – one that is both truer to the history of our species and useful for building a more democratic future.” “This tradition,” they add, “does not address ‘humanity’ in the abstract, but as distinct human beings, whose capacities are shaped by the tools they use. It sees us as hybrids of animal and machine – as ‘cyborgs’, to quote the biologist and philosopher of science Donna Haraway.”

Somewhat provocatively, I want to suggest that cyborg discourse is useless. This is a little different from claiming that it is entirely erroneous or wholly without merit. Nor is it really a claim about Haraway’s work. It seems to me that cyborg discourse as it is most often deployed in discussions of technology today is only superficially connected with Haraway’s arguments. They are dealt with about as deeply as Plato’s, which is to say not very deeply at all.

Historical note: What is most striking about the cyborg argument is how very Victorian it turns out to be. Writing in the mid-sixties, Lewis Mumford observed in “Technics and the Nature of Man” that for “more than a century man has been habitually defined as a tool-using animal.” Mumford targets the Victorian reduction of the human being to homo faber, the toolmaker, and the view that human beings owe their unique capacities to their use of tools. It is a view that is, in his analysis, wrong on the facts and also a projection into the past of “modern man’s own overwhelming interest in tools, machines, technical mastery.”

It is this understanding that Mumford challenges precisely because, in his view, it has abetted the rise of authoritarian technics controlled by a very few. In other words, Mumford’s far more radical political and economic critique of modern technology is grounded in an understanding of human nature that is decidedly at odds with cyborg discourse. Cyborg discourse turns out to be rhetorical steampunk.

Rather, what I am claiming is that cyborg discourse, as it is popularly deployed in discussions about the impact of technology, is useless because it gets us nowhere. By itself it offers no practical wisdom. It offers no critical tools to help us judge, weigh, or evaluate. We’ve always been cyborgs, you say? Fine. How does this help me think about any given technology? How does this help me evaluate its consequences?

Indeed, it is worse than useless because, more often than not, it abets the unchecked growth of the tech industry by blunting critique and dampening intuitive reservations. Tellingly, the most consistent application of cyborg rhetoric lies in the eschatological fantasies of the transhumanists. The tech industry, in other words, is as adept at appropriating and absorbing cyborg discourse as it is humanist discourse.

Consider, to begin with, the claim that because human nature changes it cannot serve as a stable basis for evaluating the impact of technology.

At what rate exactly does human nature change? Does it change so quickly that it cannot guide our reflections on the relative merits of new technologies? As evidence for the claim that humans and technology “constantly change together,” the authors cite a journal article that they say “suggests that the human hand evolved to manipulate the stone tools that our ancestors used.” The conclusion of the article, however, is less than definitive: while certain strands of evidence point in this direction, “it cannot be directly determined that hominin hands evolved by natural selection in adaptation to tool making.” Moreover, the time scale cited by the author is, in any case, “many millennia.”

It seems to me that very little follows from this piece of evidence. The relevance of this thesis to how we think about and evaluate technology today needs to be established. We are no longer talking about primitive stone tools, nor are we helped by taking into consideration processes that played out over the course of many millennia. If someone claims that a certain technology is dehumanizing, telling them that our human ancestors evolved in conjunction with their use of stone tools is a fine bit of petty sophistry.

And why should it be the case that holding humanity and technology separate paves the way for an elite class to determine the nature of the relationship? Is this a necessary development? How so? Are there no counter-examples? Blurring the distinction has, in fact, had precisely the effect that the authors attribute to maintaining the distinction: “We have always been cyborgs” serves just as well as a case for thoughtless assimilation to whatever new technology we’re being sold.

The cyborg tradition, the authors claim, does not address the abstraction “humanity” but distinct human beings. This is fine, but, again, I’m not sure it gets us very far. For one thing, are we back to individuals making decisions? And on what basis, exactly, are these distinct human beings making their decisions?

“To say that we’re all cyborgs is not to say that all technologies are good for us, or that we should embrace every new invention,” the authors grant. Okay, so how are we to judge and discern? Can these individuals not be guided by some particular understanding of what constitutes human flourishing? If I’m going to act collectively will it not be on the basis of some understanding of what is good not just for me personally but for me and others as human beings?

“But it does suggest,” they immediately add, “that living well with technology can’t be a matter of making technology more ‘human’.” But again, why should we not judge technology based upon some understanding of what is fitting for the sorts of creatures we presently are? Because in five millennia we will have a marginally different skeletal configuration? If not all technologies are good for us, is it not because some technologies erode something fundamental to human dignity or undermine some essential component of human flourishing?

Interestingly, we are then told that the “cyborg way of thinking, by contrast, tells us that our species is essentially technological.” Have we not just substituted one essentialist account of human nature for another? Cyborg discourse, as it turns out, aims to tell us exactly what sort of creatures we are. It’s not that we are doing away with all accounts of human nature; we are just privileging one account over others.

In this way it parallels the liberal democratic pretense to neutrality regarding competing visions of the good life. And, in the same way and for the same reason, it thus promotes a context in which technology can flourish independently of any specifically human ends.

The anti-tech humanist position staked out by the authors also ignores the possibility that some technologies are fundamentally disordering of individual and collective human experience. In many respects, they are subject to the same critique that they leveled against the Center for Humane Technology. What they want is simply a better version, by their lights, of existing technology. Chiefly, this entails some version of public ownership. But what will constitute this public if not some shared understanding of what is good for people given their peculiarly human nature?

“But even though our continuous co-evolution with our machines is inevitable,” Tarnoff and Weigel write, “the way it unfolds is not. Rather, it is determined by who owns and runs those machines. It is a question of power.” A little further on they invite us to envision “a worker-owned Uber, a user-owned Facebook or a socially owned ‘smart city’ of the kind being developed in Barcelona.” But what of those who, for reasons grounded in a particular understanding of the human condition, don’t care to live in any iteration of a smart city? Or what if a publicly owned version of Facebook is judged to be socially and politically disordering on the same grounds? Cyborg rhetoric tends to dismiss such criticism because it is grounded in an account of human nature that is at odds with the cyborg vision.

“Rather than trying to humanise technology, then, we should be trying to democratise it,” Tarnoff and Weigel insist. “We should be demanding that society as a whole gets to decide how we live with technology – rather than the small group of people who have captured society’s wealth.” But herein lies the problem. Society as a whole is too fractured a unit to undertake the kind of collective action the authors desire. It is an abstraction, just like Humanity. The authors seem to imagine that society as a whole shares their concerns. But what if most people are perfectly content trading their data for convenience?

When it comes down to it, everyone is a humanist technology critic; there are simply competing understandings of the human in play. If the use of a given technology is to be regulated or resisted or otherwise curtailed, it’s because someone deems it bad for people given some understanding, tacit as it may be, of what people are for.

None of this is to say that humanist discourse does not have its own set of problems, theoretical and practical. Or that the critical questions I’ve raised may not have satisfactory answers from a cyborg discourse perspective. Mostly it is to say that more often than not cyborg discourse is facile and superficial and, by itself, does very little to enlighten our situation or point a way forward.


4 thoughts on “Cyborg Discourse is Useless”

  1. Wonderful to see a focused and critical inspection of this. The cyborg and constructivist camps produce interesting points at times, but often I have the impression that academic fashion, and anxiety about their critiques being seen as conservative, cause them to overstate. As you point out here, some draw equivalencies that may be theoretically or abstractly plausible but are of negligible consequence (evolution = no human nature, reality is mediation/representation all the way down, etc.).

    There’s something about the whole thing that feels a bit gotcha-like, in that way that avant-garde theory so often can. It also bears repeating in any discussion with a political bent that the vast majority of Haraway’s writing is utterly unintelligible to all but a highly select few. Morozov’s point about tech criticism being insufficiently radical is useful enough to make me glad he’s out there, but when the phenomenological experience of everyday people is always waved away as false consciousness, I become uncomfortable.

    Even where the cyborg stuff gets translated to the pop level, as in the article at hand, I don’t see how the collectivist vein is in any way intrinsic to it. In fact, flashy as cyborg-ism is, I’ll be surprised if it doesn’t become a marketing angle as wearables and implants become more common.

    When the “digital dualism” concept was being invoked ad nauseam, I was and still am confused about how confiscating our ability to talk about a distinction between offline and online helps encourage the public to think critically about media. Yes, I see the philosophical subtleties, but is the emphasis there really on the most pertinent point? Again, as you point out, having a conception of the parameters/conditions under which humans thrive just means the obvious questions concern tipping points in mediation, enhancement, etc. Without such a conception, critical discussion of the difference between eyeglasses and genetic enhancement is limited to economics, which is absurd.

    1. David,

      Thanks for that, it was all well put and sums up many of my own conclusions as well, particularly the point about cyborg marketing and waving off people’s experience. The link to the digital dualism debates, in which I played my own small role, is also apropos. Same point basically: not much help as we seek to make concrete judgments.

  2. hey Michael, even though my blog is largely based on exploring the concept of the cyborg, I concluded a while ago that that discourse had largely run its course, not so much because the concept was exhausted, but rather because the word had become a catch all for whatever the speaker was trying to get at. I haven’t been able to come up with an alternative.

    At any rate I thought one of my old posts might have some relevance to the discussion here. https://atomicgeography.com/2015/06/09/disabled-cyborgs-in-space/

    1. Thanks for that. I’m not nearly so well versed in the cyborg tradition, I’m going mostly off pop criticism variations on the theme, which I’ve found less than useful. In any case, I appreciate your much more informed perspective.
