The World Will Be Our Skinner Box

Writing about two recent patents by Google, Sydney Fussell makes the following observations about the possibility of using “smart” environments to create elaborate architectures for social engineering:

For reward systems created by either users or companies to be possible, the devices would have to know what you’re doing at all times. The language of these patents make it clear that Google is acutely aware of the powers of inference it has already, even without cameras, by augmenting speakers to recognize the noises you make as you move around the house. The auditory inferences are startling: Google’s smart home system can infer “if a household member is working” from “an audio signature of keyboard clicking, a desk chair moving, and/or papers shuffling.” Google can make inferences on your mood based on whether it hears raised voices or crying, when you’re in the kitchen based on the sound of the fridge door opening, your dental hygiene based on “the sounds and/or images of teeth brushing.”

I read Fussell’s article right after I read this tweet from Kevin Bankston: “Ford’s CEO just said on NPR that the future of profitability for the company is all the data from its 100 million vehicles (and the people in them) they’ll be able to monetize. Capitalism & surveillance capitalism are becoming increasingly indistinguishable (and frightening).”

I thought about that tweet for a while. It really is the case that data is to the digital economy something not unlike what oil has been to the industrial economy. Jathan Sadowski notes as much in a comment on the same tweet: “Good reminder that as the ‘operations of capital’ adapt to the digital age, they maintain the same essential features of extraction, exploitation, and accumulation. Rather than a disruption, surveillance capitalism is more like old wine poured into new ‘smart’ bottles.”

It also recalls Evgeny Morozov’s discussions of data mining or data extractivism and Nick Carr’s suggestion that we think rather along the lines of a factory metaphor: “The factory metaphor makes clear what the mining metaphor obscures: ‘We work for the Facebooks and Googles of the world, and the work we do is increasingly indistinguishable from the lives we lead.'”

Taking all of this together, I found myself asking what it might look like if we try to extrapolate into the future (a risky venture, I concede). There are two interrelated trends here as far as I can see: toward surveillance and toward conditioning. The trends, in other words, lead toward a world turned into a Skinner box.

I think it was from Carr that I first encountered the Skinner box analogy applied to the internet a few years ago. Now that digital connectedness has extended beyond the information on our screens to the material environment, the Skinner box, too, has grown.

As Rick Searle points out in his discussion of B. F. Skinner’s utopian novel, Walden Two,

We are the inheritors of a darker version of Skinner’s freedomless world—though by far not the darkest. Yet even should we get the beneficent paternalism of contemporary Skinnerites—such as Richard Thaler and Cass R. Sunstein who wish to “nudge” us this-way-and-that, it would harm freedom not so much by proving that our decisions are indeed in large measure determined by our environment as from the fact that the shape of that environment would be in the hands of someone other than ourselves, individually and collectively.

This, of course, is also the theme of Frischmann and Selinger’s Reengineering Humanity. It recalls, as well, a line from Hannah Arendt: “The trouble with modern theories of behaviorism is not that they are wrong but that they could become true.”

But there is something else Arendt wrote that I’ve kept coming back to whenever I’ve thought about these trajectories toward ever more sophisticated mechanisms for the extraction of data to fuel what Frischmann and Selinger have called the project of engineered determinism. “There is only one thing,” Arendt claimed, “that seems discernible: we may say that radical evil has emerged in connection with a system in which all men have become equally superfluous.” What we must remember about “our” data for the purposes of social engineering is that the least important thing about it is that it is “ours.” It matters chiefly as an infinitesimally small drop in a vast ocean of data, which fuels the tools of prediction and conditioning. You and I, in our particularity, are irrelevant, superfluous.

Arendt described the model citizen of a totalitarian state in this way: “Pavlov’s dog, the human specimen reduced to the most elementary reactions, the bundle of reactions that can always be liquidated and replaced by other bundles of reactions that behave in exactly the same way.”

Ominously, she warned, “Totalitarian solutions may well survive the fall of totalitarian regimes in the form of strong temptations which will come up whenever it seems impossible to alleviate political, social, or economic misery in a manner worthy of man.”

In Arendt’s time, totalitarianism emerged in what we might think of as Orwellian guise. The threat we face, however, will come in Huxleyan guise: we will half-knowingly embrace the regime that promises us a reasonable measure of stability, control, and predictability, what we mistake for the conditions of happiness. Paradoxically, the technologies we increasingly turn to in order to manage the chaotic flux of life are also the technologies that have generated the flux we seek to tame.

Engineered for Joy?

I recently learned about a popular decluttering method that invites you to ask whether some item you own brings you joy. Your answer to this simple, straightforward question determines whether or not you should keep the item. If it brings you joy, you keep it; if not, you discard or donate it. And in this way you declutter your life and move toward greater peace and joy.

The KonMari Method, as it is known, was developed by the Japanese tidying consultant Marie Kondo and popularized in her 2014 book, The Life-Changing Magic of Tidying Up: The Japanese Art of Decluttering and Organizing. Of course, you probably already knew as much. I seem to have been an outlier in my ignorance of the KonMari Method.

I’m temperamentally attracted to minimalism, though I’ve not exactly ordered my life around the practice. But that question—Does it spark joy?—has lingered in my mind.

It has lingered, I think, because there’s something both appealing and yet disconcerting about the formulation. On the one hand, it seems obvious that many of us own much more than we could possibly justify and that we are not happier for it. We would do well, then, to purge our lives of superfluous stuff, and all the better if we do so for the sake of joy.

Not surprisingly, I also thought about how this is not a bad question to ask of our technology, especially the digital accoutrements of our daily lives. Does a smartphone bring me joy? Does Facebook bring me joy? Does Twitter bring me joy? Etc. How many of our digital tools actually make us happier? How many of them do we continue to use against our better judgment and not without a small measure of self-loathing? Why do we carry on with them? Why can’t we let them go? The answers will be complicated, I’m sure, but it’s worth our time to ask the questions.

Kondo’s question also interestingly draws our attention to the emotional valence of the objects in our lives rather than, say, their instrumental value. It suggests that our material environment can be fine-tuned, engineered, or hacked, if you like, so as to regulate our affective lives. Alternatively, insofar as our material environment also structures our memory, the method can also be read as a way of engineering how and what we remember.

As I’ve written elsewhere, certain objects are mementos that form the last tenuous tie to some cross-section of our past. Without them, whole swaths of time would almost certainly be lost to us. You recognize such objects when, rarely, you come across them, and each time they recall to mind what you have not thought about since you last encountered them. Maybe years pass between such encounters. They are the sorts of things that you always consider throwing away, but can never quite bring yourself to do so. Why? Because it is not the object you would miss—you never think of it as it is—it is some small part of yourself that would become almost certainly irretrievable. It is no small thing for someone to release a part of themselves in this way. Not all that we hoard, it turns out, is made of material stuff.

Do these objects bring me joy? Sometimes, but not always. Should they be discarded because of this? I’m not entirely sure. I can see how hoarding the self in this way can be a dangerous business. I can see as well, though, that it would be wrong to refuse to remember something because it failed to bring me joy.

Should I not remember my mistakes so that, if nothing else, I might not repeat them? Or should I not remember the suffering of others so that I might work, so far as it is in my power, to alleviate it? Is there any way to recall the loved ones we have lost without an admixture of both joy and sorrow? Would it not be a vice rather than a virtue if I insisted on forgetting all the ways I’ve hurt others or let them down?

Of course, there is something of a moral art involved here. A rightly ordered memory involves both remembering and forgetting. “They tell, and here is the enigma,” Jacques Derrida once wrote, “that those consulting the oracle of Trophonios in Boeotia found there two springs and were supposed to drink from each, from the spring of memory and from the spring of forgetting.”

To be clear, I didn’t read Kondo’s book. She may take up these matters for all I know. I realize, too, that I may be thinking about this question in a manner that is irrelevant to what chiefly concerns her. Whatever the case may be, I found it useful to think about how our material environment structures our feeling and our remembering.


Technology After the Great War

Today marks the 100th anniversary of Armistice Day, which brought the Great War to an end. The Great War, of course, would later come to be known as the First World War because an even greater war would follow within twenty years.

Although it has been eclipsed in the public imagination by the Second World War, a case can be made that the first was indeed the more significant. This is true in a superficial sense, of course: it’s hard to see how you get the second war without the first. But it is true even apart from that rather obvious claim. It was the first war that dealt a mortal blow to much of what we associate with the culture of modernity: its confidence, its commitment to reason, its belief in progress, and its faith in technology.

There are few better summaries of the losses that can be attributed to the First World War than the following lines from Jacques Barzun:

“Varying estimates have been made of the losses that must be credited to the great illusion. Some say 10 million lives were snuffed out in the 52 months and double that number wounded. Others propose higher or lower figures. The exercise is pointless, because loss is a far wider category than death alone. The maimed, the tubercular, the incurables, the shell-shocked, the sorrowing, the driven mad, the suicides, the broken spirits, the destroyed careers, the budding geniuses plowed under, the missing births were losses, and they are incommensurable … One cannot pour all human and material resources into a fiery cauldron year after year and expect to resume normal life at the end of the prodigal enterprise.”

Barzun was right, of course, and he knew of what he spoke; he lived through it all. Barzun turned twelve just a few days after the war came to an end. He was 104 when he died, not that long ago, in 2012.

In Machines as the Measure of Men, historian Michael Adas devotes a chapter to “The Great War and the Assault on Scientific and Technological Measures of Human Worth.” In it, he makes a number of observations about the impact of the Great War on the place of technology in the public imagination.

In the years leading up to the war, Adas writes, “little serious discussion was devoted to the horrific potential of the new weapons that had been spawned by the union of science and technology in the ever changing industrial order.”

“The failure of most Europeans to fathom the potential for devastation of the new weapons they were constantly devising,” Adas added, “owed much to their conviction that no matter how rapid the advances, Western men were in control of the machines they were creating.”

The failure had not been total, however. A few military specialists had taken important lessons from the closing campaigns of the American Civil War and the Russo-Japanese War, and they foresaw the great destructive potential of industrialized warfare. Adas cites Baron von der Goltz, for example, who, in the 1880s, concluded, “All advances made by modern science and technical art are immediately applied to the abominable art of annihilating mankind.”

Adas continued:

“Numerous writers [of the time] lamented the extent to which scientific research, formerly seen as overwhelmingly beneficial to humanity, had been channeled into the search for ever more lethal weapons. Some of the most brilliant minds of a civilization ‘devoured by geometry’ had labored for generations to ensure that death could be dealt on a mass scale ‘with exactitude, logarithmic, dial-timed, millesimal-calculated velocity.’ Many of those who watched their compatriots die cursed not the enemy, who was equally a victim, but the ‘mean chemist’s contrivance’ and the ‘stinking physicist’s destroying toy.'”

The following is worth quoting at length:

“The theme of humanity betrayed and consumed by the technology that Europeans had long considered the surest proof of their civilization’s superiority runs throughout the accounts of those engaged in the trench madness. The enemy is usually hidden in fortresses of concrete, barbed wire, and earth. The battlefield is seen as a ‘huge, sleeping machine with innumerable eyes and ears and arms.’ Death is delivered by ‘impersonal shells’ from distant machines; one is spared or obliterated by chance alone. The ‘engines of war’ grind on relentlessly; the ‘massacre mecanique’ knows no limits, gives no quarter. Men are reduced to ‘slaves of machines’ or ‘wheels [or cogs] in the great machinery of war.’ Their bodies become machines; they respond to one’s questions mechanically; they ‘sing the praises’ of the machines that crush them. War has become ‘an industry of professionalized human slaughter,’ and technology is equated with tyranny. Western civilization is suffocating as a result of overproduction; it is being destroyed by the wheels of great machines or has been lost in a labyrinth of machines. Its very future is threatened by the machines it has created. Like David Jones, many of those who fought on the Western Front and lived long enough to write about their encounter with war in the industrial age began to ‘doubt the decency of [their] own inventions, and [were] certainly in terror of their possibilities.’ To have any chance of survival, all who entered the battle zone were forced to ‘do gas-drill, be attuned to many newfangled technicalities, respond to increasingly exacting mechanical devices; some fascinating and compelling, others sinister in the extreme; all requiring a new and strange direction of the mind, a new sensitivity certainly, but at a considerable cost.’”

One is reminded of how one survivor of the horror of the trenches came to characterize “the Machine” in his most famous work, The Lord of the Rings. Readers will remember many of the obvious ways in which Tolkien linked the apparatus of industrialization with the forces of darkness. Saruman, to take one memorable example, is described by another character in these terms: “He has a mind of metal and wheels; and he does not care for growing things, except as far as they serve him for the moment.” Saruman harnessed the power of mechanized magic and wielded this power to destroy, among other things, the forests surrounding Isengard, which it is clear we are to understand as a great offense and one for which he pays dearly. It would be a mistake, however, to dismiss Tolkien as an opponent of technology per se, pining for a romanticized pre-modern world. Tolkien’s views are not, as I understand them, romantic; quite the contrary. They were forged in the fires of the war as he experienced firsthand the fruits of industrialized violence. “One has indeed personally to come under the shadow of war to feel fully its oppression,” Tolkien once wrote, “but as the years go by it seems now often forgotten that to be caught in youth by 1914 was no less hideous an experience than to be involved in 1939 and the following years. By 1918 all but one of my close friends were dead.”

In a letter to a friend, Tolkien wrote of The Lord of the Rings, “Anyway all this stuff is mainly concerned with Fall, Mortality, and the Machine.” He went on to explain what he meant by “the Machine” in this way:

“By the last I intend all use of external plans or devices (apparatus) instead of development of the inherent inner powers or talents — or even the use of these talents with the corrupted motive of dominating: bulldozing the real world, or coercing other wills. The Machine is our more obvious modern form though more closely related to Magic than is usually recognised …. The Enemy in successive forms is always ‘naturally’ concerned with sheer Domination, and so the Lord of magic and machines.”

It is likely that Tolkien’s depiction of Saruman’s destruction of the trees owed something to the devastation he witnessed on the front lines. According to Adas, “‘Uprooted, smashed’ trees, ‘pitted, rownsepyked out of nature, cut off in their sap rising,’ were also central images in participants’ descriptions of the war zone wasteland.” “As Georges Duhamel observed in 1919,” Adas went on to write, “the Western obsession with inventing new tools and discovering new ways to force nature to support material advancement for its own sake had inevitably led to the trench wasteland in which ‘man had achieved this sad miracle of denaturing nature, of rendering it ignoble and criminal.'”

“As a number of intellectuals noted after the war,” Adas concluded, “the Europeans’ prewar association of the future with progress and improvement was also badly shaken by the mechanization of slaughter in the trenches.” He went on:

“Henry James’s poignant expression of the sense of betrayal that Europeans felt in the early months of the war, when they realized that technical advance could lead to massive slaughter as readily as to social betterment, was elaborated upon in the years after the war by such thinkers as William Inge, who declared that the conflict had exposed the ‘law of inevitable progress’ as a mere superstition. The Victorian mold, Inge declared, had been smashed by the war, and ‘the gains of that age now seem to some of us to have been purchased too high, or even to be themselves of doubtful value.’ Science had produced perhaps the ugliest of civilizations; technological marvels had been responsible for unimaginable destruction. Never again, he concluded, would there be ‘an opportunity for gloating over this kind of improvement.’ The belief in progress, the ‘working faith’ of the West for 150 years, had been forever discredited.”

What is most striking about all of this may be the fact that the effect was so short-lived. Narratives of technological progress, as it turned out, were not altogether discredited. The 1933 World’s Fair in Chicago, celebrating “a century of progress,” took as one of its mottos the line “Science Finds, Industry Applies, Man Conforms.” Even after similar fears spiked once again following the Second World War, especially in light of the development of atomic weapons, it would be only a matter of time before concerns subsided and faith in technological progress reemerged. It’s hard to live without a myth, and it seems that the myth of technological progress is the only one still kicking around at this late hour, although it is also true that ever since the Great War the embrace has never again been quite so earnest.



Attention and Memory in the Age of the Disciplinary Spectacle

A while ago, I wrote about what I took to be the convergence of the society of the spectacle and the disciplinary society, the convergence, that is, of the analyses offered by Debord and Foucault, respectively. It was in some ways an odd suggestion given Foucault’s expressed hostility to spectacle theorizing, but it struck me that fusing these two critical strands would be useful because, as I saw it, the material apparatus of spectacle and disciplinary surveillance had merged with the advent of digital technology. You can read my initial musings on that score here: “Eight Theses Regarding the Society of the Disciplinary Spectacle.”

As it turns out, someone had, not surprisingly, beaten me to the punch: Debord himself. I discovered this while reading a paper presented by Jonathan Crary in 1989 titled “Spectacle, Attention, Counter-Memory” (h/t Nick Seaver). Here are some particularly interesting sections.

“It is easy to forget that in Society of the Spectacle Debord outlined two different models of the spectacle; one he called ‘concentrated’ and the other ‘diffused,’ preventing the word spectacle from simply being synonymous with consumer or late capitalism. Concentrated spectacle was what characterized Nazi Germany, Stalinist Russia, and Maoist China; the preeminent model of diffused spectacle was the United States: ‘Wherever the concentrated spectacle rules so does the police … it is accompanied by permanent violence. The imposed image of the good envelops in its spectacle the totality of what officially exists and is usually concentrated in one man who is the guarantee of totalitarian cohesion. Everyone must magically identify with this absolute celebrity — or disappear.’ The diffuse spectacle, on the other hand, accompanies the abundance of commodities.”

And:

“I suspect that Foucault did not spend much time watching television or thinking about it, because it would not be difficult to make a case that television is a further perfecting of panoptic technology. In it surveillance and spectacle are not opposed terms, as he insists, but collapsed onto one another in a more effective disciplinary apparatus. Recent developments have confirmed literally this overlapping model: television sets that contain advanced image recognition technology in order to monitor and quantify the behavior, attentiveness, and eye movement of a spectator.”

Recall that Crary is writing in 1989. I was surprised by the claim in the last sentence. But in a footnote he cites an article in the Times from June of that year: “TV Viewers, Beware: Nielsen May Be Looking.” Happily, it’s available online, so we can read about this then-cutting-edge technology. As an aside, here is an interesting excerpt from the Times piece:

“Nielsen and Sarnoff demonstrated a working model of the device at a news conference yesterday, at which Nielsen executives faced questions about the system’s similarities to the surveillance of Big Brother in George Orwell’s novel ‘Nineteen Eighty-Four.’

But Nielsen executives argued that the system will not be an invasion of privacy. ‘I don’t think we’re talking about Big Brother here at all,’ said John A. Dimling, executive vice president of Nielsen. ‘We’re not scanning the room to find out what people are doing. We’re sensitive to the issue of privacy.’ Mr. Dimling said it will be at least three years before the system goes into service.”

“We’re sensitive to the issue of privacy.” Right. It’s useful to remember how long we have been hearing these rejoinders. Needless to say, the whole thing seems quaint in light of present realities.

It turns out that “in 1988 Debord sees his two original models of diffused and concentrated spectacle becoming indistinct, converging into what he calls ‘the integrated society of the spectacle.'” A more elegant formulation than what I came up with, naturally.

More from Crary:

“As much as any single feature, Debord sees the core of the spectacle as the annihilation of historical knowledge — in particular the destruction of the recent past. In its place there is the reign of a perpetual present. History, he writes, had always been the measure by which novelty was assessed, but whoever is in the business of selling novelty has an interest in destroying the means by which it could be judged. Thus there is a ceaseless appearance of the important, and almost immediately its annihilation and replacement: ‘That which the spectacle ceases to speak of for three days no longer exists.'”

This sort of thing always strikes me as susceptible to two very different readings. One concludes something like this: “You see, back then they worried about technology in similar ways to how some people worry about technology today, and we now know those concerns were silly. Everything turned out okay.” The other goes something like this: “We really do have an amazing capacity to apathetically acclimate to a gradually emerging dystopia.”

Crary concludes his paper with two responses to the society of the spectacle. The first was embodied in a 1924 essay by the French painter Fernand Leger titled “The Spectacle.” Here is Crary’s assessment of Leger’s project (emphasis mine):

“… the confused program he comes up with in this text is an early instance of the ploys of all those — from Warhol to today’s so-called simulationists — who believe, or at least claim, they are outwitting the spectacle at its own game. Leger summarizes this kind of ambition: ‘Let’s push the system to the extreme,’ he states, and offers vague suggestions for polychroming the exterior of factories and apartment buildings, for using new materials and setting them in motion. But this ineffectual inclination to outdo the allure of the spectacle becomes complicit with its annihilation of the past and fetishization of the new.”

This seems to me like a perennially useful judgment.

Against this project, Crary opposes “what Walter Benjamin called the ‘anthropological’ dimension of surrealism.”

“It was a strategy of turning the spectacle of the city inside out through counter-memory and counter-itineraries. These would reveal the potency of outmoded objects excluded from its slick surfaces, and of derelict spaces off its main routes of circulation. The strategy incarnated a refusal of the imposed present, and in reclaiming fragments of a demolished past it was implicitly figuring an alternative future.”

However, Crary concludes on a cautious note and raises useful questions for us to consider thirty years later:

“Whether these practices have any vitality or even relevance today depends in large measure on what an archaeology of the present tells us. Are we still in the midst of a society that is organized as appearance? Or have we entered a nonspectacular global system arranged primarily around the control and flow of information, a system whose management and regulation of attention would demand wholly new forms of resistance and memory?”



The Original Sin of Internet Culture

I recently encountered the claim that “the foundational sin of internet culture was pretending like online wasn’t real life.” A familiar claim to anyone who has kept up with what I once dubbed the Cyborgology school of digital criticism, whose enduring contribution was the introduction of the term digital dualism. Digital dualism, according to the scholars associated with the website Cyborgology, is a fallacy that misconstrues the digital world as a “virtual” world in opposition to the offline world, which is understood to be the “real” world. The usefulness of the term occasioned some spirited debates in which I played a minor role.

I wonder, though, about the idea that “pretending like online wasn’t real life” is somehow the original sin of internet culture. At the very least, it seems to me that the claim can be variously understood. The sort of pretending the author had in mind probably involves the mistaken belief that online words and deeds do not have offline consequences. We could, however, also take the claim to mean something like this: the original sin of internet culture was the mistaken belief that our online experience could somehow transcend our offline faults, flaws, and frailties. Or, to put it otherwise, the original sin of internet culture was its peculiar brand of gnostic utopianism: the belief that digital media could usher us into a period of quasi-mystic and disembodied harmony and unity.

Of course, as we now know all too well, this was a deeply destructive myth: we are no different online than we are offline. Indeed, a credible and compelling case could be made for the proposition that we are, in fact, a far worse version of ourselves online. In any event, we bring to the digital realm exactly the same propensity for vice that we exhibit in the so-called real world, although with fewer of the “real world” constraints that might have curbed our vicious behavior. And, of course, because the boundaries between the digital realm and the analog realm are indeed porous if not exactly fictive, these vices then spill back over into the “real world.” Who we are offline is who we are online, and who we become through our online experience is who we will be offline.

This original sin, then, this digital utopianism, encouraged us to uncritically cede to our digital tools, devices, and platforms ever-expanding swaths of our experience in the mistaken hope that down this path lay our salvation and our liberation. We burdened the internet with messianic hopes—of course we were bound to be disappointed.

