Back in May, Nicholas Carr wrote a blog post critically examining Moira Weigel and Ben Tarnoff’s “Why Silicon Valley Can’t Fix Itself.” You may remember that around the same time, I had a few things to say about the same piece. Regrettably, I missed Carr’s post when it was published, or I would certainly have incorporated his argument. In any case, I encourage you to go back and read what Carr had to say.
Carr doesn’t take up the Weigel/Tarnoff piece until about halfway through his post. The first half engages an earlier piece by Tarnoff and another by Evgeny Morozov, both of which take the data-mining metaphor for granted and deploy it in an argument for public ownership of data.
Carr is chiefly concerned with the mining metaphor and how it shapes our understanding of the problem. If Facebook, Google, etc. are mining our data, that in turn suggests something about our role in the process: it conceives of the human being as raw material. Carr suggests we consider another metaphor, not very felicitous either, as he notes: that of the factory. We are not raw material; we are producers, generating data by our actions. Here’s the difference:
“The factory metaphor makes clear what the mining metaphor obscures: We work for the Facebooks and Googles of the world, and the work we do is increasingly indistinguishable from the lives we lead. The questions we need to grapple with are political and economic, to be sure. But they are also personal, ethical, and philosophical.”
This then leads Carr into a discussion of the Weigel/Tarnoff piece, which is itself a brief against the work of the new tech humanists.
(I’ve written about an older brand of tech humanism before, and I’ve expressed certain reservations about the new tech humanists as well. But my reservations were not exactly Weigel and Tarnoff’s.)
Carr’s whole discussion is worth reading, but here are two selections I thought especially well put. First:
But Tarnoff and Weigel’s suggestion is the opposite of the truth when it comes to the broader humanist tradition in technology theory and criticism. It is the thinkers in that tradition — Mumford, Arendt, Ellul, McLuhan, Postman, Turkle, and many others — who have taught us how deeply and subtly technology is entwined with human history, human society, and human behavior, and how our entanglement with technology can produce effects, often unforeseen and sometimes hidden, that may run counter to our interests, however we choose to define those interests.
Though any cultural criticism will entail the expression of values — that’s what gives it bite — the thrust of the humanist critique of technology is not to impose a particular way of life on us but rather to give us the perspective, understanding, and know-how necessary to make our own informed choices about the tools and technologies we use and the way we design and employ them. By helping us to see the force of technology clearly and resist it when necessary, the humanist tradition expands our personal and social agency rather than constricting it.
And:
Nationalizing collective stores of personal data is an idea worthy of consideration and debate. But it raises a host of hard questions. In shifting ownership and control of exhaustive behavioral data to the government, what kind of abuses do we risk? It seems at least a little disconcerting to see the idea raised at a time when authoritarian movements and regimes are on the rise. If we end up trading a surveillance economy for a surveillance state, we’ve done ourselves no favors.
But let’s assume that our vast data collective is secure, well managed, and put to purely democratic ends. The shift of data ownership from the private to the public sector may well succeed in reducing the economic power of Silicon Valley, but what it would also do is reinforce and indeed institutionalize Silicon Valley’s computationalist ideology, with its foundational, Taylorist belief that, at a personal and collective level, humanity can and should be optimized through better programming. The ethos and incentives of constant surveillance would become even more deeply embedded in our lives, as we take on the roles of both the watched and the watcher. Consumer, track thyself! And, even with such a shift in ownership, we’d still confront the fraught issues of design, manipulation, and agency.
I could not put this any better. That last paragraph especially is something I tried to get at in my recent piece for The New Atlantis when I wrote:
Social media platforms are the most prominent focal point of the tech backlash. Critics have understandably centered their attention on the related issues of data collection, privacy, and the political weaponization of targeted ads. But if we were to imagine a world in which each of these issues were resolved justly and equitably to the satisfaction of most critics, further questions would still remain about the moral and political consequences of social media. For example: If social media platforms become our default public square, what sort of discourse do they encourage or discourage? What kind of political subjectivity emerges from the habitual use of social media? What understanding of community and political action do they foster? These questions and many others — and the understanding they might yield — have not been a meaningful part of the conversation about the tech backlash.
I remain relatively convinced that the discontents of humanism (variously understood), the emergence of technopoly (as Neil Postman characterized the present techno-social configuration), and the modern (as in c. 1600 to the present) political order are deeply intertwined. (See this earlier post on democracy and technology.) Witness, for example, the de facto governing role that a platform like Facebook is forced to assume over the speech of its nearly 2 billion users, and how, absent a set of shared values among those users, the platform must implement ever more elaborate technical and technocratic solutions.
Humanism is a complex and controversial term that can be understood in countless ways. I would propose, however, that there is more affinity than is usually acknowledged between anti-Humanism, understood as opposition to a narrow and totalizing understanding of the human, and anti-humanism, as exemplified by the misanthropic visions of the transhumanists and their Silicon Valley acolytes. Perhaps “affinity” is not the best way of putting the matter, though. The former abets the latter; that much I’d want to argue.
So, concluding thesis: if we are incapable of even a humble affirmation of our humanness, then we leave ourselves open to the worst depredations of the technological order and of those who stand to profit most from it.