Attention and Memory in the Age of the Disciplinary Spectacle

A while ago, I wrote about what I took to be the convergence of the society of the spectacle and the disciplinary society — the convergence, that is, of the analyses offered by Debord and Foucault, respectively. It was in some ways an odd suggestion given Foucault’s expressed hostility to spectacle theorizing, but it struck me that fusing these two critical strands would be useful because, as I saw it, the material apparatus of spectacle and disciplinary surveillance had merged with the advent of digital technology. You can read my initial musings on that score here: “Eight Theses Regarding the Society of the Disciplinary Spectacle.”

As it turns out, someone had, not surprisingly, beaten me to the punch: Debord himself. I discovered this while reading a paper presented by Jonathan Crary in 1989 titled, “Spectacle, Attention, Counter-Memory.” (h/t Nick Seaver). Here are some particularly interesting sections.

“It is easy to forget that in Society of the Spectacle Debord outlined two different models of the spectacle; one he called ‘concentrated’ and the other ‘diffused,’ preventing the word spectacle from simply being synonymous with consumer or late capitalism. Concentrated spectacle was what characterized Nazi Germany, Stalinist Russia, and Maoist China; the preeminent model of diffused spectacle was the United States: ‘Wherever the concentrated spectacle rules so does the police … it is accompanied by permanent violence. The imposed image of the good envelops in its spectacle the totality of what officially exists and is usually concentrated in one man who is the guarantee of totalitarian cohesion. Everyone must magically identify with this absolute celebrity — or disappear.’ The diffuse spectacle, on the other hand, accompanies the abundance of commodities.”

And:

“I suspect that Foucault did not spend much time watching television or thinking about it, because it would not be difficult to make a case that television is a further perfecting of panoptic technology. In it surveillance and spectacle are not opposed terms, as he insists, but collapsed onto one another in a more effective disciplinary apparatus. Recent developments have confirmed literally this overlapping model: television sets that contain advanced image recognition technology in order to monitor and quantify the behavior, attentiveness, and eye movement of a spectator.”

Recall that Crary is writing in 1989. I was surprised by the claim in the last sentence. But in a footnote he cites an article in the Times from June of that year: “TV Viewers, Beware: Nielsen May Be Looking.” Happily, it’s available online, so we can read about this then-cutting-edge technology. As an aside, here is an interesting excerpt from the Times piece:

“Nielsen and Sarnoff demonstrated a working model of the device at a news conference yesterday, at which Nielsen executives faced questions about the system’s similarities to the surveillance of Big Brother in George Orwell’s novel ‘Nineteen Eighty-Four.’

But Nielsen executives argued that the system will not be an invasion of privacy. ‘I don’t think we’re talking about Big Brother here at all,’ said John A. Dimling, executive vice president of Nielsen. ‘We’re not scanning the room to find out what people are doing. We’re sensitive to the issue of privacy.’ Mr. Dimling said it will be at least three years before the system goes into service.”

“We’re sensitive to the issue of privacy.” Right. It’s useful to remember how long we have been hearing these rejoinders. Needless to say, the whole thing seems quaint in light of present realities.

It turns out that “in 1988 Debord sees his two original models of diffused and concentrated spectacle becoming indistinct, converging into what he calls ‘the integrated society of the spectacle.'” A more elegant formulation than what I came up with, naturally.

More from Crary:

“As much as any single feature, Debord sees the core of the spectacle as the annihilation of historical knowledge — in particular the destruction of the recent past. In its place there is the reign of a perpetual present. History, he writes, had always been the measure by which novelty was assessed, but whoever is in the business of selling novelty has an interest in destroying the means by which it could be judged. Thus there is a ceaseless appearance of the important, and almost immediately its annihilation and replacement: ‘That which the spectacle ceases to speak of for three days no longer exists.'”

This sort of thing always strikes me as susceptible to two very different readings. The one concludes something like this: “You see, back then they worried about technology in similar ways to how some people worry about technology today and we now know those concerns were silly. Everything turned out okay.” The other reading goes something like this: “We really do have an amazing capacity to apathetically acclimate to a gradually emerging dystopia.”

Crary concludes his paper with two responses to the society of the spectacle. The first was embodied in a 1924 essay by the French painter Fernand Leger titled, “The Spectacle.” Here is Crary’s assessment of Leger’s project (emphasis mine):

“… the confused program he comes up with in this text is an early instance of the ploys of all those — from Warhol to today’s so-called simulationist— who believe, or at least claim, they are outwitting the spectacle at its own game. Leger summarizes this kind of ambition: ‘Let’s push the system to the extreme,’ he states, and offers vague suggestions for polychroming the exterior of factories and apartment buildings, for using new materials and setting them in motion. But this ineffectual inclination to outdo the allure of the spectacle becomes complicit with its annihilation of the past and fetishization of the new.

This seems to me like a perennially useful judgment.

To this project Crary opposes “what Walter Benjamin called the ‘anthropological’ dimension of surrealism.”

“It was a strategy of turning the spectacle of the city inside out through counter-memory and counter-itineraries. These would reveal the potency of outmoded objects excluded from its slick surfaces, and of derelict spaces off its main routes of circulation. The strategy incarnated a refusal of the imposed present, and in reclaiming fragments of a demolished past it was implicitly figuring an alternative future.”

However, Crary concludes on a cautious note and raises useful questions for us to consider thirty years later:

“Whether these practices have any vitality or even relevance today depends in large measure on what an archaeology of the present tells us. Are we still in the midst of a society that is organized as appearance? Or have we entered a nonspectacular global system arranged primarily around the control and flow of information, a system whose management and regulation of attention would demand wholly new forms of resistance and memory?”


If you’ve appreciated what you’ve read, consider supporting the writer: Patreon/Paypal.

The Original Sin of Internet Culture

I recently encountered the claim that “the foundational sin of internet culture was pretending like online wasn’t real life.” A familiar claim to anyone who has kept up with what I once dubbed the Cyborgology school of digital criticism, whose enduring contribution was the introduction of the term digital dualism. Digital dualism, according to the scholars associated with the website Cyborgology, is a fallacy that misconstrues the digital world as a “virtual” world in opposition to the offline world, which is understood to be the “real” world. The usefulness of the term occasioned some spirited debates in which I played a minor role.

I wonder, though, about the idea that “pretending like online life wasn’t real life” is somehow the original sin of Internet culture. At the very least, it seems to me that the claim can be variously understood. The sort of pretending the author had in mind probably involves the mistaken belief that online words and deeds do not have offline consequences. We could, however, also take the claim to mean something like this: the original sin of Internet culture was the mistaken belief that our online experience could somehow transcend our offline faults, flaws, and frailties. Or, to put it otherwise, the original sin of Internet culture was its peculiar brand of gnostic utopianism: the belief that digital media could usher us into a period of quasi-mystic and disembodied harmony and unity.

Of course, as we now know all too well, this was a deeply destructive myth: we are no different online than we are offline. Indeed, a credible and compelling case could be made for the proposition that we are, in fact, a far worse version of ourselves online. In any event, we bring to the digital realm exactly the same propensity for vice that we exhibit in the so-called real world although with fewer of the “real world” constraints that might have curbed our vicious behavior. And, of course, because the boundaries between the digital realm and the analog realm are indeed porous if not exactly fictive, these vices then spill back over into the “real world.” Who we are offline is who we are online, and who we become through our online experience is who we will be offline.

This original sin, then, this digital utopianism, encouraged us to uncritically cede to our digital tools, devices, and platforms ever expanding swaths of our experience in the mistaken hope that down this path lay our salvation and our liberation. We burdened the internet with messianic hopes; of course we were bound to be disappointed.



Silence

It is possible to participate in social media without ever posting anything. I don’t think I’ve ever seen statistics on users who frequently visit social media sites without ever posting, but I imagine such statistics are out there.

The fact that constantly impresses itself upon me, however, is how social media use generates an imperative to speak. One appears on social media chiefly by saying something. The shape which that “saying” takes varies, of course. We can speak on social media by posting words, images, or, more frequently, some combination of both. We can speak by linking. We can speak, as well, by liking, retweeting, sharing, etc. Our profiles speak, too, it’s true. But their static speech does not ordinarily generate engagement. No one speaks back to the static presence of an online profile. To exist on social media, to be taken into account, one must speak. Silence doesn’t signal virtue, or anything else for that matter. To remain silent on social media is an act of self-privation given that the social media self is constituted by our multi-modal loquaciousness.

I’m intrigued by how this imperative reorders the meaning of silence beyond the parameters of social media. In putting it this way, of course, it immediately becomes apparent that defining the parameters of social media is no easy thing. Are its parameters to be drawn around our immediate engagement with the platform, or rather by the times when a platform may easily capture our attention, say whenever I have a smartphone on me? Or, more broadly still, are the parameters of social media to be drawn so as to include any moment that is tinged by the existence of social media — the moment, for instance, that I interpret as social media fodder even if I cannot then access a platform?

However we decide to draw those lines, I wonder whether silence may not be construed as a defect in consequence of the habitual experience of the self generated by social media, an experience which tends to bind being and speaking tightly together.

The point of contrast, it seems to me, is the capacity of bodies in physical proximity to be for one another without also speaking. This capacity to be in silence is important and valuable. It may be most important and valuable in those moments when our words fail us—moments of profound emotional depth.

Silence, of course, need not be permanent. It is, rather, an incubator of thought and feeling, essential to the emergence of intellectual and emotional maturity. Without passing through silence, what we have to say may be vain, vacuous, and even harmful.

There is, as the writer of Ecclesiastes put it, “a time to keep silence, and a time to speak.” Wisdom consists of knowing how to tell the difference. Social media tempts us with the folly of believing there is only ever a time to speak.


Technology Is Not Neutral: Two Ways of Understanding the Claim

One salutary aspect of the tech backlash, as the wave of critical attention Silicon Valley has received over the last year or so has come to be called, has been the increasing willingness—particularly, it seems to me, among tech journalists—to acknowledge that technology is not neutral. But in reality, I’m not sure that we have come all that far.

These discussions tend to center on social media platforms designed so as to generate compulsive engagement for the sake of capturing user data or the deployment of algorithms, which, rather than operating objectively, simply redeploy the biases, blind-spots, and prejudices of their programmers.

Such cases, of course, deserve all the critical attention they receive, but they should not exhaust our understanding of what it means to claim that technology is not neutral. It could even be argued that these cases are instances of technological neutrality. The platforms and algorithms have simply been weaponized by the designers rather than by some set of users.

From this perspective, these technologies, platforms and algorithms, are not neutral because they have been designed, intentionally or otherwise, to take advantage of unwitting users. In theory, for example, if the platforms were differently designed so as not to engender compulsive engagement, or if they were not operated so as to aggressively collect user data, or if they were not liable to be used for finely-targeted manipulation campaigns, then all would be well. The extent of their non-neutrality, so to speak, or what is ethically significant about them, is co-extensive with the explicitly malicious design practices employed by the social media company. Eliminate these practices, whether by law or regulation, and you no longer have to worry about the moral/ethical consequences of social media.

It seems to me that what you lose here is actually close to what matters most. To say that technology is not neutral is not merely to say that it can be maliciously designed. Even benevolently designed technologies are not neutral; they, too, can be morally formative or deforming.

The tech backlash has focused on maliciously designed technology. Moreover, focusing on law and regulation will, likewise, only address a limited set of what is morally consequential about our technology use. Even the renewed focus on ethics of technology as it is construed in the tech sector falls far short of the mark. All of this just scratches the surface of what ought to concern us or at least warrant our critical attention.



In Defense of Technology Ethics, Properly Understood

One of the more puzzling aspects of technology discourse that I encounter with some frequency is the tendency to oppose legal, political, and economic analysis of technology to ethical analysis. Maybe I’m the weird outlier on this matter, but I just don’t see how it is helpful or even ultimately tenable to oppose these two areas of analysis to one another. (Granted, I have my own reservations about “tech ethics” discourse, but they’re of a different sort.)

I think I understand where the impulse to oppose law, politics, and economics to ethics comes from. In its simplest form the idea is that ethics without legal, political, or economic action is, at best, basically a toothless parlor game. More likely, it’s a clever public relations scheme to put up a presentable front that shields a company’s depredations from public view.

This is fine, I get that. But the answer to this very real possibility is not to deprecate ethical reflection regarding technology as such. The answer, it seems to me, is to move on all fronts.

What exactly do we think we’re aiming to accomplish with law, regulation, and economic policy anyway if not acting to address some decidedly ethical concern regarding technology? Law, regulation, and policy are grounded in some understanding of what is right, what is ethical, what is humane — in short, what ought to be. It’s not clear to me how such determinations would not be improved by serious and sustained reflection on the ethical consequences of technological change. If you set aside serious reflection on the ethics of technology, it’s not as if you then get to focus properly on the more serious work of law and policy apart from ethics; you simply get to work on law and policy from a more naive, uninformed, and unacknowledged ethical foundation.

(As an example of the sort of work that does this well, consider Evan Selinger and Brett Frischmann’s Reengineering Humanity. Their book blends ethical/philosophical and legal/political analysis in a way that recognizes the irreducible interrelations among these fields of human experience.)

Moreover, most of us would agree that law and public policy do not and decidedly should not entirely encompass the whole scope of a society’s moral and ethical concerns. There is an indispensable role to be played by norms and institutions that exist outside the scope of law and government. It is unclear to me how exactly these norms and institutions are to evolve so as to more effectively promote the health of civil society and private life if they are not informed by the work of scholars, journalists, educators, and writers who deliberately pursue a better understanding of technology’s ethical and moral consequences. Technology’s moral and ethical consequences far exceed the scope of law and public policy; are we to limit our thinking about these matters to what can be addressed by law and policy? Might it not be possible that it is precisely these morally formative aspects of contemporary technology that have already compromised our society’s capacity to enact legal and political remedies?

Perhaps the very best thing we can do now is to focus on the hard, deliberate work of educating and building with a view not to our own immediate future but to the forthcoming generations. As Dietrich Bonhoeffer put it in another age, “The ultimate question for a responsible man to ask is not how he is to extricate himself heroically from the affair, but how the coming generation is to live. It is only from this question, with its responsibility towards history, that fruitful solutions can come, even if for the time being they are very humiliating.”

(For a good discussion of Bonhoeffer’s view explicitly applied to our technological predicament, see Alan Jacobs’s reflections here.)

Lastly, while we wait for better policies, better regulations, better laws to address the worst excesses of the technology sector, what exactly are individuals supposed to do about what they may experience as the disordering and disorienting impact of technology upon their daily lives? Wait patiently and do nothing? Remain uninformed and passive? Better, I say, to empower individuals with legal protections and knowledge. Encourage action and reflection: better yet, action grounded in reflection.

Yes, by all means let us also resist the cheap instrumentalization of ethics and the capture of ethical reflection by the very interests that ought to be the objects of ethical critique and, where appropriate, legal action. But please, let’s dispense with the rhetorical trope that opposes ethical reflection as such to the putatively serious work of law and policy.

