We’re Reading Fahrenheit 451 Wrong

I was reminded of Ray Bradbury’s Fahrenheit 451 a day or two ago while reading Ian Bogost on Apple’s AirPods. Bogost examined the AirPods’ potential long-term social consequences. “Human focus, already ambiguously cleft between world and screen,” he suggests, “will become split again, even when maintaining eye contact.” A little further on, he writes, “Everyone will exist in an ambiguous state between public engagement with a room or space and private retreat into devices or media.”

It’s a good piece; you should read the whole thing.

It reminded me of Bradbury on two counts. First, and most obviously, Bradbury’s novel imaginatively predicted AirPods decades before wireless earbuds existed. In the novel they are called Seashells, and they are just one of the ways that characters in the story sever their connection with the world beyond their heads, to borrow Matt Crawford’s formulation. Second, Bogost’s fears echo Bradbury’s. Fahrenheit 451 isn’t really about censorship, after all, and it’s unfortunate that the novel has been reduced to that theme in the popular imagination.

Bradbury makes clear that the firemen who famously start fires to burn books began doing so only long after people had stopped reading books of their own accord, as other forms of media came to dominate their experience. To be more precise, they did not stop reading altogether. They stopped reading certain kinds of books: the ones that made demands of the reader, intellectual, emotional, and moral demands that might upset their fragile sense of well-being.

Fahrenheit 451, in other words, is more Huxley than Orwell.

Fundamentally, I would argue that it is, like Huxley’s Brave New World, about happiness. “Are you happy?” a young girl named Clarisse asks Montag, the protagonist. It is the question that triggers all the subsequent action in the novel. It is the question that awakens Montag to the truth of his situation.

At one remove from the question of happiness is the matter of alienation from reality effected by media technologies. In 1953, when barely half of American households owned a television set, and primitive sets at that, Bradbury foresaw a future of complete immersion in four wall-sized screens through which people would socialize interactively with characters from popular programs.

Speed also severed people from meaningful contact with the world and became an impediment to thought. Literal speed, for one: billboards were as long as football fields so they could be seen by drivers zooming by, and walking was deemed a public nuisance. But also the speed of information. An old professor, who knew better but did not have the courage to fight the changes he witnessed, explained the problem to Montag:

“Speed up the film, Montag, quick. Click? Pic? Look, Eye, Now, Flick, Here, There, Swift, Pace, Up, Down, In, Out, Why, How, Who, What, Where, Eh? Uh! Bang! Smack! Wallop, Bing, Bong, Boom! Digest-digests, digest-digest-digests. Politics? One column, two sentences, a headline! Then, in mid-air, all vanishes! Whirl man’s mind around about so fast under the pumping hands of publishers, exploiters, broadcasters, that the centrifuge flings off all unnecessary, time-wasting thought!”

Social media was still more than fifty years away.

This same professor, Faber, later lectured Montag about what was needed. Three things, he claimed. First, quality information, from books or elsewhere. Second, leisure, but not just “off-hours,” which Montag was quick to say he had plenty of:

“Off-hours, yes. But time to think? If you’re not driving a hundred miles an hour, at a clip where you can’t think of anything else but the danger, then you’re playing some game or sitting in some room where you can’t argue with the fourwall televisor. Why? The televisor is ‘real.’ It is immediate, it has dimension. It tells you what to think and blasts it in. It must be right. It seems so right. It rushes you on so quickly to its own conclusions your mind hasn’t time to protest, ‘What nonsense!’”

The third needful thing? A society that granted “the right to carry out actions based on what we learn from the inter-action of the first two.”

Not a bad prescription, if you ask me.

Earlier in the novel, as Montag travelled by subway to meet Faber for the first time, he clung to a copy of the Bible that he had stowed away. He knew he would have to surrender it, so he attempted to memorize as much as he could. But he discovered that his mind was a sieve. He recalled that when he was a child an older relative would play a joke on him by offering a dime if he could fill a sieve with sand.

As he travelled to meet Faber, “he remembered the terrible logic of that sieve.”

But, he thought to himself, “if you read fast and read all, maybe some of the sand will stay in the sieve. But he read and the words fell through, and he thought, in a few hours, there will be Beatty, and here will be me handing this over, so no phrase must escape me, each line must be memorized.”

But his material environment undermined his efforts. As in Kurt Vonnegut’s short story “Harrison Bergeron,” distraction undid the work of the mind. In this case, Montag’s focus and concentration battled and lost against the tools of marketing. An ad for Denham’s Dentifrice blared over a loudspeaker as he tried to commit what he read to memory. The brief, memorable scene portrays a scenario that should feel all too familiar to us.

He clenched the book in his fists. Trumpets blared.
“Denham’s Dentifrice.”
Shut up, thought Montag. Consider the lilies of the field.
“Denham’s Dentifrice.”
They toil not-
“Denham’s–”
Consider the lilies of the field, shut up, shut up.
“Dentifrice!”
He tore the book open and flicked the pages and felt them as if he were blind, he picked at the shape of the individual letters, not blinking.
“Denham’s. Spelled: D-E-N”
They toil not, neither do they . . .
A fierce whisper of hot sand through empty sieve.
“Denham’s does it!”
Consider the lilies, the lilies, the lilies…
“Denham’s dental detergent.”
“Shut up, shut up, shut up!” It was a plea, a cry so terrible that Montag found himself on his feet, the shocked inhabitants of the loud car staring, moving back from this man with the insane, gorged face, the gibbering, dry mouth, the flapping book in his fist. The people who had been sitting a moment before, tapping their feet to the rhythm of Denham’s Dentifrice, Denham’s Dandy Dental Detergent, Denham’s Dentifrice Dentifrice Dentifrice, one two, one two three, one two, one two three. The people whose mouths had been faintly twitching the words Dentifrice Dentifrice Dentifrice. The train radio vomited upon Montag, in retaliation, a great ton-load of music made of tin, copper, silver, chromium, and brass. The people were pounded into submission; they did not run, there was no place to run; the great air-train fell down its shaft in the earth.
“Lilies of the field.”
“Denham’s.”
“Lilies, I said!”
The people stared.
“Call the guard.”
“The man’s off–”
“Knoll View!”
The train hissed to its stop.
“Knoll View!”
“Denham’s.” A whisper.
Montag’s mouth barely moved. “Lilies…”

Attention is a resource, and, like all precious resources, it must be cultivated with care and defended. It is, after all, that by which we get our grip on the world and how we remain open to the world.


 


Shame On You, Devil: A 1959 Challenge to Technologists-in-Training That Still Resonates

A little while ago, I cited a couple of passages from Romano Guardini’s Letters from Lake Como: Explorations in Technology and the Human Race on the theme of consciousness. Guardini was a Catholic philosopher and theologian active during the first half of the twentieth century. Although largely forgotten today, he was widely known in his day and left his mark on the thinking of several better-remembered contemporaries, including Hannah Arendt, who sat under Guardini’s teaching in her undergraduate years. Guardini’s work was also prominently cited in Pope Francis’ 2015 encyclical, Laudato Si’.

At some point in the near future I may have more to say about Guardini and his reflections on technology in the Letters, which were originally published during 1924 and 1925. Here I only want to draw your attention to parts of a 1959 address Guardini delivered to the Munich College of Technology, an address which is now included with the translation I’m reading.

It was especially interesting to read Guardini’s address to these technologists-in-training in light of the recent burst of frustration with Silicon Valley and, more broadly, with technologists and technologies that increasingly disorder our private and public experience.

At the outset, Guardini acknowledges that he does not have much to offer by way of technical know-how, so his theme would not be “the actual structure and work of machines,” but “what they mean for human existence, or more precisely, how their construction and use affect humanity as a living totality.”

In many respects, Guardini was about as friendly a critic as these technologists could’ve hoped for. What he has to say “will have the character of an existential problem, and it will thus necessarily reflect concern.” He adds that he “will have to consider primarily the negative element in the phenomenon of machines,” but he insists that they are “to see here neither the pessimism that we often sense in current cultural criticism nor the resentment that comes with the end of an epoch against the new thing that is pushing out the old.”

I think Guardini was in earnest about this. This same attitude is borne out by his Letters, written nearly forty years earlier. “The concern I want to express,” he tells them, “is the positive one whether the process of technology worldwide will really achieve the great things that it can and should.”

“A healthy optimism,” he writes, “is undoubtedly part of all forceful action, but so, too, is a sense of responsibility for this action.”

Having made these preliminary comments, which, again, set about as irenic a tone as could be expected, Guardini briefly lays out a taxonomy of technology that includes tools, contrivances, and machines. He goes on to explore the human consequences of machines, chiefly focusing on the power machines grant and the ethical responsibility this entails.

“To gain power is to experience it as it lays claim to our mind, spirit, and disposition,” Guardini claimed. “If we have power, we have to use it, and that involves conditions. We have to use it with responsibility, and that involves an ethical problem.”

“Thus dangers of the most diverse kind arise out of the power that machines give,” he elaborated. “Physically one human group subjugates another in open or concealed conflict. Mentally and spiritually the thinking and feelings of the one influence the other.”

Interestingly, Guardini noted that in order to assume ethical responsibility for our machines it must be presupposed “that we freely stand over against machines even as we use them, that we experience and treat them as something for whose operation we have to set the standards.”

“But do we do that?” Guardini wondered. “Does any such ethos exist? That remains to be seen. It is a disturbing fact that people often see the attempt to relate to machines in this way as romantic. As a rule today people find in machines and their working given realities that we cannot alter in any way.”

Near the end of the talk, Guardini cites an example from a story that had appeared in the Frankfurter Allgemeine Zeitung, a paper which, he adds, “is certainly not against technology.”

The example is of how “machines spur us to go into areas where personal restraint would forbid us to intrude.” The article, he goes on to explain, “shows us in sharp detail what is the issue here—namely, the possibility of committing people without their even being aware of it. But that involves a basic threat to something that is essential in all human dealings—namely, trust.”

“The possibility of committing people” is an interesting phrase. It’s translated from German, and I have no idea what the underlying German word might be, nor could I make much of it if I did. I take Guardini to mean something like compromising someone without their consent or somehow gaining some advantage over them without their awareness. This seems to fit with the case Guardini goes on to describe.

As Guardini summarizes it, the article discusses the commercial availability of something like a Dick Tracy-style watch that can surreptitiously record conversations. In the article, a salesperson was asked “whether people might be bothered by them and would want them.” The response was straightforward: “Why should they bother us?” Moreover, people were already buying them, the salesman added. “We cannot prevent this,” the reporter added, “but we are permitted to say: ‘Shame on you, devil.'”

Guardini went on: “The reporter said, ‘Shame on you, devil,’ giving evidence of ethical judgment in the matter. But most people seem not to have such judgment. At issue here is not a romantic fear of machines but the fact that power is impinging on something that ought not to be challenged if the very essence of our humanity is to remain unthreatened.”

What struck me here was how Guardini’s concerns paralleled those being articulated today about digital media. We are at every turn committed or compromised without our full awareness or consent by the gamut of digital tools we either submit to or are otherwise subjected to. Tech companies routinely “go into areas where personal restraint would forbid us to intrude.” Trust is everywhere eroded and undermined. Those who urge responsibility and restraint are accused of being anti-technology romantics.

Near the end of this talk, Guardini acknowledged that “a kind of anxiety exists that leads to distinct distrust of active people,” but he nonetheless concluded that “we should not forget that those who take up practical tasks,” by which he meant engineers and technologists, “can indeed very easily ignore the problems. Or else they can have a belief in the power of progress, think that everything will come out right, and feel that they themselves are released from responsibility.” This, too, sounds very familiar.

I’ll give the last word to Guardini:

The fact that the machine brings a measure of freedom hitherto unknown is in the first instance a gain. The value of freedom, however, is not fixed solely by the question “Freedom from what?” but decisively by the further question “Freedom for what?”

 

The Point of Technical Convergence

The following passage is taken from one of the last chapters of Jacques Ellul’s The Technological Society, originally published in 1954.

However, one important fact has escaped the notice of the technicians, the phenomenon of technical convergence …. Our interest here is the convergence on man of a plurality, not of techniques, but of systems or complexes of techniques. The result is an operational totalitarianism; no longer is any part of man free and independent of these techniques …. It is impossible to determine, by considering any human technique in isolation, whether its human object remains intact or not. The problem can be solved only by using the human being as a criterion, only by looking at this point of convergence of technical systems.

[….]

Our highly specialized technicians will have a vast number of problems to hurdle before they are in a position to put together the pieces of the puzzle. The technical operations involved do not appear to fit well together, and only by means of a new technique of organization will it be possible to unite the different pieces into a whole. When this has finally been accomplished, however, human techniques will develop very fast. As yet unrecognized potentialities for influencing the individual will appear. At the moment such possibilities are only dimly discerned in the penumbra of totalitarian regimes still in their infancy. It should not be forgotten, of course, that while our technicians are trying to synthesize the various techniques theoretically, a synthetic unity already exists and man is its object.

I submit that we can read this prophetically and find the fulfillment of the “new technique of organization” that will unleash “unrecognized potentialities for influencing the individual” in digital technology, perhaps even seeing in the smartphone the symbol of technical convergence. A whole assemblage of political, economic, psychological, and social techniques finds in this digital device a focal point through which to converge on the human being.

A Media Ecological Perspective on Free Speech

Rhetoric in oral cultures tends to be, in Walter Ong’s phrasing, “agonistically toned.” Ong noted that speech in oral societies was more like an event or action than it was a label or sign. Words did things (curses, blessings, incantations, etc.), and irrevocably so.

This was so, in part, because speech in oral societies was uttered in the dynamic and always potentially fraught context of face-to-face encounters. The audience in oral societies is always present and visible. It is literally an audience: it hears you. Writing, by contrast, creates the possibility of addressing an audience that is neither visible nor present. The audience becomes an abstraction. Cool detachment can prevail in writing because there is no one to immediately challenge you.

It was also so because in oral societies one couldn’t conceive of a word visually, as a thing; it was an auditory event. In literate societies one can’t help but conceive of a word as a thing. As Ong says at one point, a literate person inevitably thinks of the image of letters when he thinks about a word. (Try it for yourself: close your eyes and think of a word, not the thing that word represents but of the word itself.) This thing-like quality is reinforced by the fixity of print. A word conceived of as an inert thing can also be conceived of as a harmless thing: it’s there, lifeless, on the page. Words conceived of as an active, dynamic force will not so easily be experienced as harmless in themselves.

A maximalist doctrine of freedom of speech, then, may be most plausible when speech is imagined primarily as inert words-as-things. It is not surprising then that freedom of speech is historically correlated with the appearance of print.

The psychodynamics of digital media, however, are more akin to those of orality than literacy.

Discourse on digital media platforms, from comment boxes to social media, is infamously combative. On digital platforms, words take on a more active quality. They can no longer be imagined as inert and lifeless things.

This is so, in part, because digital media reintegrates the word into a dynamic situation. The audience in digital media is not always visible, but it can be present with a degree of immediacy that is more like a face-to-face encounter than writing or print. Moreover, the pixelated word is more ephemeral and less thing-like than the printed word, and so more likely to initiate action.

Digital media thus reanimates the inert printed word, and the living word is experienced as both more powerful and more dangerous.

Under these circumstances a maximalist account of freedom of speech loses a measure of plausibility; it loses its status as a taken-for-granted and unalloyed good.

Cyborg Discourse Is Useless

In “Why Silicon Valley Can’t Fix Itself,” Ben Tarnoff and Moira Weigel critically engage with one response to the tech backlash: the emergence of the Center for Humane Technology.

The piece begins with an overview of the wave of bad press the tech industry has received, focusing especially on criticism that has emerged within Silicon Valley from former and current industry executives, investors, and workers. Weigel and Tarnoff then describe the work of the center and its emphasis on more humane design as the key to redressing the ills caused by Silicon Valley. They make the interesting observation that humanizing technology is a message the tech industry can get behind because it is, in at least one manifestation, native to Silicon Valley. They are thinking chiefly of the work of Stewart Brand and, later, Steve Jobs.

They then turn to their critique of what they call the tech humanist response to the problems generated by Silicon Valley, a response embodied by the Center for Humane Technology. Weigel and Tarnoff’s argument targets humanist technology criticism more broadly, and it is this broader argument that I want to consider more closely.

Clarifications: I should say before moving forward that back in 2015 I wrote briefly in defense of what I then called humanist tech criticism. I did so initially in response to Evgeny Morozov’s review of Nick Carr’s work on automation, a review which was also a broadside against what he called humanist technology criticism. Shortly thereafter I returned to the theme in response to Andrew McAfee’s query, “Who are the humanists, and why do they dislike technology so much?”

More recently, in a discussion of the tech backlash, I’ve expressed some reservations about the project of humanist technology criticism. My reservations, however, stem from different sources than either Morozov’s critique or that of Weigel and Tarnoff, although there is some overlap with both.

One more prefatory note before I get on with a discussion of Weigel and Tarnoff’s critique of humanist technology criticism. I’ve been using the phrase “what X called humanist technology criticism,” and I’ve done so because the phrase is being used with a measure of imprecision or without a great deal of critical rigor. I think that’s important to note and keep in mind. Finally, then, on to what I’m actually writing this post to discuss.

Tarnoff and Weigel’s critique of tech-humanist discourse is twofold. First, they find that tech humanist criticism, as it is deployed by the Center for Humane Technology, is too narrowly focused on either how individuals qua consumers use digital devices or on the design decisions made by engineers and programmers. This focus ignores the larger economic context in which such decisions are made. In this respect, their critique reiterates Morozov’s 2015 critique of humanist technology criticism.

They argue, for example, that individual design decisions “are only a symptom of a larger issue:

the fact that the digital infrastructures that increasingly shape our personal, social and civic lives are owned and controlled by a few billionaires. Because it ignores the question of power, the tech-humanist diagnosis is incomplete – and could even help the industry evade meaningful reform. Taken up by leaders such as Zuckerberg, tech humanism is likely to result in only superficial changes. These changes may soothe some of the popular anger directed towards the tech industry, but they will not address the origin of that anger. If anything, they will make Silicon Valley even more powerful.

About this, they are almost certainly right. As I wrote in my earlier post on the center, “Tinkering with the apparatus to make it more humane does not go far enough if the apparatus itself is intrinsically inhumane.” That tech companies are poised to appropriate and absorb the tech humanist critique, as it now manifests itself, and strengthen their hand as a result seems obvious enough.

The second aspect of Tarnoff and Weigel’s critique is more philosophical in nature. “Tech humanists say they want to align humanity and technology,” they write. “But this project is based on a deep misunderstanding of the relationship between humanity and technology: namely, the fantasy that these two entities could ever exist in separation.”

This misunderstanding, in their view, generates a number of problems. For example, it yields misguided anxieties about the loss of essential human qualities as a consequence of technological change. (Perfunctory mention of the Phaedrus? Check.) “Holding humanity and technology separate,” they also argue, “clears the way for a small group of humans to determine the proper alignment between them.” And, fundamentally, because human nature changes it cannot “serve as a stable basis for evaluating the impact of technology.”

“Fortunately,” the authors tell us, “there is another way of thinking about how to live with technology – one that is both truer to the history of our species and useful for building a more democratic future.” “This tradition,” they add, “does not address ‘humanity’ in the abstract, but as distinct human beings, whose capacities are shaped by the tools they use. It sees us as hybrids of animal and machine – as ‘cyborgs’, to quote the biologist and philosopher of science Donna Haraway.”

Somewhat provocatively, I want to suggest that cyborg discourse is useless. This is a little different from claiming that it is entirely erroneous or wholly without merit. Nor is it really a claim about Haraway’s work. It seems to me that cyborg discourse as it is most often deployed in discussions of technology today is only superficially connected with Haraway’s arguments. They are dealt with about as deeply as Plato’s, which is to say not very deeply at all.

Historical note: What is most striking about the cyborg argument is how very Victorian it turns out to be. Writing in the mid-sixties, Lewis Mumford observed in “Technics and the Nature of Man” that for “more than a century man has been habitually defined as a tool-using animal.” Mumford targets the Victorian reduction of the human being to homo faber, the toolmaker, and the view that human beings owe their unique capacities to their use of tools. It is a view that is, in his analysis, wrong on the facts and also a projection into the past of “modern man’s own overwhelming interest in tools, machines, technical mastery.”

It is this understanding that Mumford challenges precisely because, in his view, it has abetted the rise of authoritarian technics controlled by a very few. In other words, Mumford’s far more radical political and economic critique of modern technology is grounded in an understanding of human nature that is decidedly at odds with cyborg discourse. Cyborg discourse turns out to be rhetorical steampunk.

Rather, what I am claiming is that cyborg discourse, as it is popularly deployed in discussions about the impact of technology, is useless because it gets us nowhere. By itself it offers no practical wisdom. It offers no critical tools to help us judge, weigh, or evaluate. We’ve always been cyborgs, you say? Fine. How does this help me think about any given technology? How does this help me evaluate its consequences?

Indeed, it is worse than useless because, more often than not, it abets the unchecked growth of the tech industry by blunting critique and dampening intuitive reservations. In fact, the most consistent application of cyborg rhetoric lies in the eschatological fantasies of the transhumanists. The tech industry, in other words, is as adept at appropriating and absorbing cyborg discourse as it is humanist discourse.

Consider, to begin with, the claim that because human nature changes it cannot serve as a stable basis for evaluating the impact of technology.

At what rate exactly does human nature change? Does it change so quickly that it cannot guide our reflections on the relative merits of new technologies? As evidence for the claim that humans and technology “constantly change together,” the authors cite a journal article that they say “suggests that the human hand evolved to manipulate the stone tools that our ancestors used.” The conclusion of the article, however, is less than definitive: while certain strands of evidence point in this direction, “it cannot be directly determined that hominin hands evolved by natural selection in adaptation to tool making.” Moreover, the time scale cited by the author is, in any case, “many millennia.”

It seems to me that very little follows from this piece of evidence. The relevance of this thesis to how we think about and evaluate technology today needs to be established. We are no longer talking about primitive stone tools, nor are we helped by taking into consideration processes that played out over the course of many millennia. If someone claims that a certain technology is dehumanizing, telling them that our human ancestors evolved in conjunction with their use of stone tools is a fine bit of petty sophistry.

And why should it be the case that holding humanity and technology separate paves the way for an elite class to determine the nature of the relationship? Is this a necessary development? How so? Are there no counter-examples? Blurring the distinction has, in fact, had the effect that the authors attribute to maintaining the distinction. “We have always been cyborgs” is just as much a case for thoughtless assimilation to whatever new technology we’re being sold.

The cyborg tradition, the authors claim, does not address the abstraction “humanity” but distinct human beings. This is fine, but, again, I’m not sure it gets us very far. For one thing, are we back to individuals making decisions? And on what basis, exactly, are these distinct human beings making their decisions?

“To say that we’re all cyborgs is not to say that all technologies are good for us, or that we should embrace every new invention,” the authors grant. Okay, so how are we to judge and discern? Can these individuals not be guided by some particular understanding of what constitutes human flourishing? If I’m going to act collectively, will it not be on the basis of some understanding of what is good not just for me personally but for me and others as human beings?

“But it does suggest,” they immediately add, “that living well with technology can’t be a matter of making technology more ‘human’.” But again, why should we not judge technology based upon some understanding of what is fitting for the sorts of creatures we presently are? Because in five millennia we will have a marginally different skeletal configuration? If not all technologies are good for us, is it not because some technologies erode something that could be claimed as fundamental to human dignity or undermine some essential component of human flourishing?

Interestingly, we are then told that the “cyborg way of thinking, by contrast, tells us that our species is essentially technological.” Have we not just substituted one essentialist account of human nature for another? Cyborg discourse, as it turns out, aims to tell us exactly the sort of creatures we are. It’s not that we are doing away with all accounts of human nature, we are just privileging one account over others.

In this way it parallels the liberal democratic pretense to neutrality regarding competing visions of the good life. And, in the same way and for the same reason, it thus promotes a context in which technology can flourish independently of any specifically human ends.

The anti-tech-humanist position staked out by the authors also ignores the possibility that some technologies are fundamentally disordering of individual and collective human experience. In many respects, the authors are subject to the same critique that they leveled against the Center for Humane Technology. What they want is simply a better version, by their lights, of existing technology. Chiefly, this entails some version of public ownership. But what will constitute this public if not some shared understanding of what is good for people given their peculiarly human nature?

“But even though our continuous co-evolution with our machines is inevitable,” Tarnoff and Weigel write, “the way it unfolds is not. Rather, it is determined by who owns and runs those machines. It is a question of power.” A little further on they invite us to envision “a worker-owned Uber, a user-owned Facebook or a socially owned ‘smart city’ of the kind being developed in Barcelona.” But what of those who, for reasons grounded in a particular understanding of the human condition, don’t care to live in any iteration of a smart city? Or what if a publicly owned version of Facebook is judged to be socially and politically disordering on the same grounds? Cyborg rhetoric tends to dismiss such criticism because it is grounded on an account of human nature that is at odds with the cyborg vision.

“Rather than trying to humanise technology, then, we should be trying to democratise it,” Tarnoff and Weigel insist. “We should be demanding that society as a whole gets to decide how we live with technology – rather than the small group of people who have captured society’s wealth.” But herein lies the problem. Society as a whole is too fractured a unit to undertake the kind of collective action the authors desire. It is an abstraction, just like Humanity. The authors seem to imagine that society as a whole shares their concerns. But what if most people are perfectly content trading their data for convenience?

When it comes down to it, everyone is a humanist technology critic; there are simply competing understandings of the human in play. If the use of a given technology is to be regulated or resisted or otherwise curtailed, it’s because someone deems it bad for people given some understanding, tacit as it may be, of what people are for.

None of this is to say that humanist discourse does not have its own set of problems, theoretical and practical. Or that the critical questions I’ve raised may not have satisfactory answers from a cyborg discourse perspective. Mostly it is to say that more often than not cyborg discourse is facile and superficial and, by itself, does very little to enlighten our situation or point a way forward.

