Zuckerberg’s Blindness and Ours

There’s a 14,000-word profile of Mark Zuckerberg in the latest issue of the New Yorker. Don’t care to read 14,000 words on Zuckerberg? Not to worry, others have taken up the burden on your behalf (and mine, frankly) in order to distill for us what is worthy of note. Alexis Madrigal, for example, offers you what he thinks are the eight most revealing quotes from the interview.

In my Twitter feed, reactions were characterized chiefly by a general sense of fatigue. The piece was praised for its prose and tone, but readers who have been following Facebook and Zuckerberg for some time tended to agree on one thing: we really don’t learn anything new.

Casey Newton, who has followed Facebook as closely as any journalist for the last year or two, wonders whether at this juncture there is any point to these interviews. He commends the New Yorker profile, but finds that it tells us very little that is new or useful. Chiefly, it supplies quaint anecdotes that reveal Zuckerberg’s seemingly benign quirkiness. The strongest part of the very long profile, in Newton’s view, came in two of the concluding paragraphs:

“The caricature of Zuckerberg is that of an automaton with little regard for the human dimensions of his work. The truth is something else: he decided long ago that no historical change is painless. Like Augustus, he is at peace with his trade-offs. Between speech and truth, he chose speech. Between speed and perfection, he chose speed. Between scale and safety, he chose scale. His life thus far has convinced him that he can solve ‘problem after problem after problem,’ no matter the howling from the public it may cause.

At a certain point, the habits of mind that served Zuckerberg well on his ascent will start to work against him. To avoid further crises, he will have to embrace the fact that he’s now a protector of the peace, not a disrupter of it. Facebook’s colossal power of persuasion has delivered fortune but also peril. Like it or not, Zuckerberg is a gatekeeper. The era when Facebook could learn by doing, and fix the mistakes later, is over. The costs are too high, and idealism is not a defense against negligence.”

The mention of the Roman emperor Augustus is not random; in one of the interviews that fed into the profile, Zuckerberg had gone on at length about his admiration for Augustus.

I found the elegant enumeration of the trade-offs Zuckerberg chose to make especially useful. They sum up rather well what we need to know about Facebook’s ethos, so much of which derives from Zuckerberg himself, of course.

Newton goes on to say that he finds himself “less interested in reading tech CEOs perform their thoughtfulness.” What we ought to care about, in his view, is not what the tech CEOs think but, rather, what they do. “Maybe tech platforms can be ‘fixed,’” Newton concludes, “or maybe they can’t. But either way, it’s not an oral exam. And we ought not to treat it like one.”

Others, of course, would argue, or at least assert, that we, in fact, find no thoughtfulness whatsoever in these Silicon Valley profiles, performed or otherwise. Very often this view is packaged as a defense of the humanities in higher ed. If only these tech CEOs had received an education in the humanities, they would have been better equipped to steer their companies in a more ethical and just direction.

I tend to agree with Adam Elkus, who frequently lampoons these humanities-oriented critiques of Zuckerberg, Dorsey, et al. The humanities will not save us. The problem is not stupidity or even ignorance per se. And a humanistic course of education, should we even be able to agree about what that ought to entail, is no guarantee of moral integrity (should we even be able to agree about what that entails).

Moreover, while I’m sympathetic to some degree with Newton’s counsel that we care more about what these tech CEOs do than about what they think, the case of Zuckerberg suggests that what they think and what they do are in closer harmony than we might imagine. And, especially with Facebook, what Zuckerberg thinks, and consequently does, sets the course for an immensely influential platform. The problem, I’d suggest, is not that Zuckerberg doesn’t think; the problem lies rather in the particular shape his thinking appears to take.

As per usual, my understanding of Hannah Arendt’s work on thinking and moral responsibility informs my conclusions about these matters (read more here and here). Most useful, in my view, are the distinctions she makes among mental activities, most of which, in casual conversation, we tend to simply call thinking. But not all thinking is equal, and thought, as Arendt understands it, is not synonymous with intelligence. It is, however, deeply related to our capacity for judgment, which is yet another kind of mental activity.

Interestingly, Arendt tends to define what she calls thinking against what we might broadly label problem-solving, or the search for answers and solutions. This mode of thinking, which is extremely valuable in its own right, tends to be the very kind of thinking that is generally prized in Silicon Valley, and it is the kind of thinking at which Zuckerberg himself tends to excel.

In one passage from the prologue to The Human Condition, a passage to which I’ve frequently returned, Arendt issues the following warning:

“If it should turn out to be true that knowledge (in the modern sense of know-how) and thought have parted company for good, then we would indeed become the helpless slaves, not so much of our machines as of our know-how, thoughtless creatures at the mercy of every gadget which is technically possible, no matter how murderous it is.”

As I read her, Arendt is here warning us about the danger of a reduction and an estrangement. The reduction is of knowledge to what she calls “know-how,” what we might gloss as the capacity to solve problems through technical ingenuity. The estrangement is that between this already impoverished form of knowledge on the one hand and thought on the other.

Reducing knowledge to know-how and doing away with thought leaves us trapped by an impulse to see the world merely as a field of problems to be solved by the application of the proper tool or technique, an impulse that is also compulsive because it cannot abide inaction. We can call this an ideology or we can simply call it a frame of mind, but either way it seems that this, rather than a failure to think at all, is closer to the truth about the mindset of Silicon Valley.

It is not a matter of stupidity or education, formally understood, or any kind of personal turpitude. Indeed, by most accounts, Zuckerberg is both earnest and, in his own way, thoughtful. Rather, it is that one’s intelligence and one’s education, even if it were deeply humanistic, and one’s moral outlook, otherwise exemplary and decent, are framed by something more fundamental: a distinctive way of perceiving the world. This way of seeing the world, including the human being, as a field of problems to be solved by the application of tools and techniques bends all of our faculties to its own ends. The solution is the truth, the solution is the good, the solution is the beautiful. Nothing that is given is valued.

The trouble with this way of seeing the world is that it cannot quite imagine the possibility that some problems are not susceptible to merely technical solutions, much less that some problems are best abided. It is also plagued by hubris—often of the worst sort, the hubris of the powerful and well-intentioned—and, consequently, it is incapable of perceiving its own limits. As in the Greek tragedies, hubris generates blindness, a blindness born precisely out of one’s distinctive way of seeing. And that’s not the worst of it. The worst of it is that we are all, to some degree, now tempted and prone to see the world in just this way too.

Audience Overload

Information overload is a concept that has long been used to describe the experience of digital media, although the term and the problem itself predate the digital age.

In a 2011 blog post, Nicholas Carr distinguished between two kinds of information overload: situational overload and ambient overload.

“Situational overload is the needle-in-the-haystack problem: You need a particular piece of information – in order to answer a question of one sort or another – and that piece of information is buried in a bunch of other pieces of information. The challenge is to pinpoint the required information, to extract the needle from the haystack, and to do it as quickly as possible. Filters have always been pretty effective at solving the problem of situational overload …

“Situational overload is not the problem. When we complain about information overload, what we’re usually complaining about is ambient overload. This is an altogether different beast. Ambient overload doesn’t involve needles in haystacks. It involves haystack-sized piles of needles. We experience ambient overload when we’re surrounded by so much information that is of immediate interest to us that we feel overwhelmed by the never-ending pressure of trying to keep up with it all.”

Relatedly, Eli Pariser coined the term “filter bubble” around 2010 to describe a situation generated by platforms that deploy sophisticated algorithms to serve users information they are likely to care about. These algorithms are responsive to a user’s choices and interactions with information on the platform. The fear is that we will be increasingly isolated in bubbles that feed us only what we are already inclined to believe.

Last month, Zeynep Tufekci published a sharp essay in MIT’s Technology Review titled “How social media took us from Tahrir Square to Donald Trump.” If you didn’t catch it when it came out, I encourage you to give it a read. In it she briefly discussed the filter bubble problem and offered an excellent analysis:

“The fourth lesson has to do with the much-touted issue of filter bubbles or echo chambers—the claim that online, we encounter only views similar to our own. This isn’t completely true. While algorithms will often feed people some of what they already want to hear, research shows that we probably encounter a wider variety of opinions online than we do offline, or than we did before the advent of digital tools.

“Rather, the problem is that when we encounter opposing views in the age and context of social media, it’s not like reading them in a newspaper while sitting alone. It’s like hearing them from the opposing team while sitting with our fellow fans in a football stadium. Online, we’re connected with our communities, and we seek approval from our like-minded peers. We bond with our team by yelling at the fans of the other one. In sociology terms, we strengthen our feeling of ‘in-group’ belonging by increasing our distance from and tension with the ‘out-group’—us versus them. Our cognitive universe isn’t an echo chamber, but our social one is. This is why the various projects for fact-checking claims in the news, while valuable, don’t convince people. Belonging is stronger than facts.”

While the problems associated with information overload are worth our consideration, if we want to critically examine the consequences of social media, particularly on our political culture, we need to look elsewhere.

For one thing, the problem of information overload is not a distinctly digital problem, although it is true that it has been greatly augmented by the emergence of digital technology.

Moreover, the focus on information overload also trades on an incomplete, and thus inadequate, understanding of the human person. We are not, after all, merely information-processing machines. As I’ve suggested before, affect overload is a more serious problem than information overload. We are not quite the rational actors we imagine ourselves to be. Affect, not information, is the coin of the realm in the world of social media.

This is implicit in Tufekci’s analysis of the real problem related to the encounter with opposing views online. It is also why the preoccupation with fact-checking is itself a symptom of the problem rather than a solution. People do not necessarily share “fake news” because they believe it. Emotion and social gamesmanship play an important role as well.

Finally, and this is the point I set out to make, we’ve been focusing on our information inputs when we ought also to have paid attention to the audience effect. That’s what’s different about the digital media environment. Print gave us information overload; digital media gave us audience overload, granting all of us who, in the pre-digital world, would never have had a way to reach an audience beyond our small social circle the means to do so. The audience is always with us, on demand whenever we want it. And the audience can talk back to us instantaneously. We will become who we think we need to be to get what we want from this audience.

And while it is impossible to fine-tune that audience in the same way we might work to fine-tune our information flows, we can nonetheless customize it to a significant degree, and, more importantly, we carry some ideal image of that audience in our minds. It matters that this is not a definite, tangible audience standing before us. Its indefinite shape allows us to give it shape in our minds, which is to say that it can all the more effectively mess with us for its being, in part, an implicit projection of our psyche.

It is this virtual audience which we desire, this audience we want to please, this audience from whom we seek a reaction to satisfy our emotional cravings—cravings already manipulated by the structure of the platforms that connect us with our audience—it is this audience and the influence it exerts over us that has played an important role in disordering our public discourse.

It is not just that our attention is fractured by the constant barrage of information, it is also that our desire for attention has deformed our intellectual and emotional lives. The ever-present audience has proven too powerful a temptation, too heavy a burden.

Reframing Technocracy

In his 1977 classic, Autonomous Technology: Technics-out-of-Control as a Theme in Political Thought, Langdon Winner invites us to consider not the question “Who governs?” but rather “What governs?”

He further elaborates:

“Are there certain conditions, constraints, necessities, requirements, or imperatives effectively governing how an advanced technological society operates? Do such conditions predominate regardless of the specific character of the men who ostensibly hold power? This, it seems to me, is the most crucial problem raised by the conjunction of politics and technics. It is certainly the point at which the idea of autonomous technology has its broadest significance.”

Earlier, he had discussed one way in which technocracy had been envisioned throughout the 20th century: as the emergence of an elite class of scientists, technicians, and engineers, who displace the traditional political class and become the rulers of society. This vision was popular among science fiction writers and theorists who were overtly technocratic. It never quite played out as those writers and theorists imagined. But this does not mean, in Winner’s view, that there is no meaningful sense in which we might speak of our political order as technocratic.

This is the significance of the question “What governs?” rather than “Who governs?”

Here is Winner again:

“If one returns to the modern writings on technocracy in this light, one finds that parallel to the conceptions about scientific and technical elites and their power is a notion of order—a technological order—in which in a true sense no persons or groups rule at all. Individuals and elites are present, but their roles and actions conform so closely to the framework established by the structures and processes of the technical system that any claim to determination by human choice becomes purely illusory. In this way of looking at things, technology itself is seen to have a distinctly political form. The technological order built since the scientific revolution now encompasses, interpenetrates, and incorporates all of society. Its standards of operation are the rules men must obey. Within this comprehensive order, government becomes the business of recognizing what is necessary and efficient for the continued functioning and elaboration of large-scale systems and the rational implementation of their manifest requirements. Politics becomes the acting out of the technical hegemony.”

Winner takes this to be the view generally represented, despite their differences, by Spengler, Juenger, Jaspers, Mumford, Marcuse, Giedion, and Ellul. “[I]n them,” Winner writes, “one finds a roughly shared notion of society and politics, a common set of observations, assumptions, modes of thinking and sense of the whole, which, I believe, unites them as an identifiable tradition.”

Throughout this section of Autonomous Technology, Winner sets out to update and refine their argument.

Along the way, Winner takes some passing shots at what we might call the popular tech criticism of his day, a little over forty years ago.

“Much of what now passes for incisive analysis,” Winner notes, “is actually nothing more than elaborate landscape, impressionistic, futuristic razzle-dazzle spewing forth in an endless stream of paperback non-books, media extravaganzas, and global village publicity.”

A little further on, he catalogs the recurring tropes of popular writing about technology: overcrowded cities, “labyrinthine bureaucracies,” consumerism and waste, the rise of the military-industrial complex, etc.

Winner goes on:

“To go on describing such things endlessly does little to advance our insight. Neither is it helpful to devise new names for the world produced. The Postindustrial Society? The Technetronic Society? The Posthistoric Society? The Active Society? In an unconscious parody of the ancient belief that he who knows God’s secret name will have extraordinary powers, the idea seems to be that a stroke of nomenclature will bring light to the darkness. This does make for captivating book titles but little else. The fashion, furthermore, is to exclaim in apparent horror at the incredible scenes unfolding before one’s eyes and yet deep in one’s heart relish the excitement and perversity of it all. Alleged critiques turn out to be elaborate advertisements for the situations they ostensibly abhor.”

On all counts, it seems to me that Winner’s book has aged well.

Technopoly and Anti-Humanism

Back in May, Nicholas Carr wrote a blog post critically examining Moira Weigel and Ben Tarnoff’s “Why Silicon Valley Can’t Fix Itself.” You may remember that around the same time, I had a few things to say about the same piece. Regrettably, I’d missed Carr’s post when it was published, or I would certainly have incorporated his argument. In any case, I encourage you to go back and read what Carr had to say.

Carr doesn’t take up the Weigel/Tarnoff piece until about halfway through his post. The first half engages an earlier piece by Tarnoff and another by Evgeny Morozov, both of which take for granted the data mining metaphor and deploy it in an argument for public ownership of data.

Carr is chiefly concerned with the mining metaphor and how it shapes our understanding of the problem. If Facebook, Google, etc. are mining our data, that in turn suggests something about our role in the process: it conceives of the human being as raw material. Carr suggests we consider another metaphor, not very felicitous either, as he notes: that of the factory. We are not raw material; we are producers: we produce data by our actions. Here’s the difference:

“The factory metaphor makes clear what the mining metaphor obscures: We work for the Facebooks and Googles of the world, and the work we do is increasingly indistinguishable from the lives we lead. The questions we need to grapple with are political and economic, to be sure. But they are also personal, ethical, and philosophical.”

This then leads Carr into a discussion of the Weigel/Tarnoff piece, which is itself a brief against the work of the new tech humanists.

(I’ve written about an older brand of tech humanism before, and I’ve expressed certain reservations about the new tech humanists as well. But my reservations were not exactly Weigel and Tarnoff’s.)

Carr’s whole discussion is worth reading, but here are two selections I thought especially well put. First:

But Tarnoff and Weigel’s suggestion is the opposite of the truth when it comes to the broader humanist tradition in technology theory and criticism. It is the thinkers in that tradition — Mumford, Arendt, Ellul, McLuhan, Postman, Turkle, and many others — who have taught us how deeply and subtly technology is entwined with human history, human society, and human behavior, and how our entanglement with technology can produce effects, often unforeseen and sometimes hidden, that may run counter to our interests, however we choose to define those interests.

Though any cultural criticism will entail the expression of values — that’s what gives it bite — the thrust of the humanist critique of technology is not to impose a particular way of life on us but rather to give us the perspective, understanding, and know-how necessary to make our own informed choices about the tools and technologies we use and the way we design and employ them. By helping us to see the force of technology clearly and resist it when necessary, the humanist tradition expands our personal and social agency rather than constricting it.

And:

Nationalizing collective stores of personal data is an idea worthy of consideration and debate. But it raises a host of hard questions. In shifting ownership and control of exhaustive behavioral data to the government, what kind of abuses do we risk? It seems at least a little disconcerting to see the idea raised at a time when authoritarian movements and regimes are on the rise. If we end up trading a surveillance economy for a surveillance state, we’ve done ourselves no favors.

But let’s assume that our vast data collective is secure, well managed, and put to purely democratic ends. The shift of data ownership from the private to the public sector may well succeed in reducing the economic power of Silicon Valley, but what it would also do is reinforce and indeed institutionalize Silicon Valley’s computationalist ideology, with its foundational, Taylorist belief that, at a personal and collective level, humanity can and should be optimized through better programming. The ethos and incentives of constant surveillance would become even more deeply embedded in our lives, as we take on the roles of both the watched and the watcher. Consumer, track thyself! And, even with such a shift in ownership, we’d still confront the fraught issues of design, manipulation, and agency.

I could not put this any better. That last paragraph especially is something I tried to get at in my recent piece for The New Atlantis when I wrote:

Social media platforms are the most prominent focal point of the tech backlash. Critics have understandably centered their attention on the related issues of data collection, privacy, and the political weaponization of targeted ads. But if we were to imagine a world in which each of these issues were resolved justly and equitably to the satisfaction of most critics, further questions would still remain about the moral and political consequences of social media. For example: If social media platforms become our default public square, what sort of discourse do they encourage or discourage? What kind of political subjectivity emerges from the habitual use of social media? What understanding of community and political action do they foster? These questions and many others — and the understanding they might yield — have not been a meaningful part of the conversation about the tech backlash.

I remain relatively convinced that the discontents of humanism (variously understood), the emergence of technopoly (as Neil Postman characterized the present techno-social configuration), and the modern (as in c. 1600-present) political order are deeply intertwined. (See this earlier post on democracy and technology.) Witness, for example, the de facto governing role that a platform like Facebook is forced to assume over the speech of its nearly 2 billion users and, absent a set of shared values among those users, the ever more elaborate technical and technocratic solutions the platform must implement.

Humanism is a complex and controversial term. It can be understood in countless ways. I would propose, however, that there is more affinity than is usually acknowledged between anti-Humanism, understood as an opposition to a narrow and totalizing understanding of the human, and anti-humanism as exemplified by the misanthropic visions of the transhumanists and their Silicon Valley acolytes. Although perhaps “affinity” is not the best way of putting the matter. The former abets the latter, that much I’d want to argue.

So, concluding thesis: If we are incapable of even a humble affirmation of our humanness, then we leave ourselves open to the worst depredations of the technological order and of those who stand to profit most from it.

Social Media, Mass Society, and the Desire for Attention

A Twitter thread, slightly expanded, for those of you with the good sense not to be on Twitter. 

Thesis: Many of our present social, political, and personal disorders are rooted in or related to disorders of attention. But … disorders of attention are themselves rooted in an earlier disordered state: that of the anonymous individual of mass society.

The desire for attention is itself a good and perfectly human desire. In Arendt’s terms, it is the desire to appear and act before others and to be noted in our particularity. It is the desire to be seen and to be acknowledged for who we are.

For Arendt this appearing and acting happened in the public realm as opposed to the private realm or the social realm. The political arena of the ancient Greek polis was her model for this public space. The private realm was the realm of the household. The social realm was a more recent development: it was the realm of mass society. It was not a private realm, but neither was it a realm in which the individual could meaningfully appear in the integrity of her particularity.

The scale and structures of mass society denied individuals this space of appearing. Most individuals no longer had access to a realm wherein they could be meaningfully noted by others. (Aside: celebrity culture is a vicarious satisfaction of this unsatisfiable desire. See also Walker Percy’s The Moviegoer.)

Social media appeared to meet this need through platforms ostensibly designed to satisfy this desire for what was now termed “connection.” But what actually emerged was an increasingly compulsive, because never fully satisfied, desire for attention.

In part, this is because, like mass society, social media does not operate at a scale or in a space conducive to meaningful human appearing and action.

Rather than reconstituting human-scaled spaces of embodied appearance and action, social media generated mass-scaled spaces in which our disembodied avatars compete for attention on platforms explicitly designed to encourage this compulsive seeking after attention.

Also, where pre-mass-society public spaces were delimited and distinct from private spheres, the new public constituted by social media colonized private life, making it, too, fodder for the new quasi-public sphere of competitive attention.

Social media thus presents an apparent avenue for assuaging the disorders of mass society, but it fails, and makes matters worse, by doubling down on and exacerbating the original problem: the elimination of human-scaled spaces for individual appearance and action.