AI Hype and the Fraying Social Fabric

An online service called Predictim promises to help parents weed out potentially bad babysitters by analyzing their social media feeds with its “advanced artificial intelligence.” The company requires the applicant’s name, email, and consent. Of course, you know how consent works in these cases: refusal to subject yourself to a black-box algorithm’s evaluation, with no possibility of recourse or appeal, must obviously mean that you’ve got something to hide.

I’ll say more in the next newsletter about this service and predictive technologies in general, but here I’ll briefly note the context in which these sorts of tools attain a measure of plausibility and even desirability.

All such tools are symptoms and accelerators of the breakdown of the kind of social trust and capacity for judgment that emerges organically within generally healthy, human-scale communities. The ostensible demand for these services suggests that something has gone wrong. It’s almost as if the rapid disintegration of the communal structures within which human beings have meaningfully related to one another and to the world might have real and not altogether benign consequences.

There is a way of making this point in a reactionary and romanticized manner, of course. But it need not be so. It’s obviously true that such communities could have some very rough edges. That said, when you lose the habitats that sustain trust, both in others and in your ability to make sound judgments, you end up seeking artificial means to compensate.

Enter the promise of “data,” “algorithms,” and “artificial intelligence.” I place each of those in quotation marks not to be facetious, but to suggest that what is at work here is something more than the bare technical realities to which those terms refer. In other words, each of those terms also conveys a set of dubious assumptions, a not insignificant measure of hype, and a host of misplaced hopes—in short, they amount to magical thinking.

In this case, what is promised is “peace of mind” for parents, and that peace of mind will be delivered by “AI algorithms using billions of social media data points to increase the accuracy of our reports about important personality traits.” There are a number of problems with this method (Drew Harwell addresses some of them here), but that seems not to matter. As the social fabric continues to fray, we will increasingly seek to apply technical patches. These patches, however, will only accelerate the deterioration they are intended to repair. They will not solve the problem they are designed to address, and they will heighten the underlying disorders of which the problem is a symptom. The less we inhabit a common, shared world, the more we will turn to our tools to judge one another, only deepening our alienation.


If you’ve appreciated what you’ve read, consider supporting the writer: Patreon/Paypal.

The Accelerated Life

The following is from the Translator’s Preface to the English edition of Hartmut Rosa’s Social Acceleration: A New Theory of Modernity (the translator is Jonathan Trejo-Mathys):

A further weighty obstacle to the realization of any ethical life project lies in the way individuals are increasingly caught in an ever denser web of deadlines required by the various social spheres (‘subsystems’) in which they participate: work, family, (school and sports activities of children), church, credit systems (i.e., loan payment due dates), energy systems (utility bills), communications systems (Internet and cell phone bills), etc. The requirement of synchronizing and managing this complicated mesh of imperatives places one under the imperious control of a systematically induced ‘urgency of the fixed-term’ (Luhmann). In practice, the surprising—and ethically disastrous—result is that individuals’ reflective value and preference orderings are not (and tendentially cannot) be reflected in their actions. As Luhmann explains, ‘the division of time and value judgments can no longer be separated. The priority of deadlines flips over into a primacy of deadlines, into an evaluative choiceworthiness that is not in line with the rest of the values that one otherwise professes …. Tasks that are always at a disadvantage must in the end be devalued and ranked as less important in order to reconcile fate and meaning. Thus a restructuring of the order of values can result simply from time problems.’

People compelled to continually defer the activities they value most in order to meet an endless and multiplying stream of pressing deadlines inevitably become haunted by the feeling expressed in the trenchant bon mot of Ödön von Horváth cited by Rosa: ‘I’m actually a quite different person, I just never get around to being him.’

Sounds familiar.

A bit more from Rosa’s work to follow in the coming days.

Relations of Power, Relations of Grace

Technology is a critical component of what I like to call the material infrastructure of our moral lives. I’m perhaps overly fond of that line, but I do think it does a nice job of capturing an important reality: we become who we are in relationship with our material environment. This is true because we are the sort of creatures who make their way in the world with a body. With our bodies we think and feel our way through the world. Our perception of the world is inextricably bound up with our bodies, and our character is inextricably bound up with the habits that have written themselves on our bodies. The shape of our built environment and the tools we take in hand, then, are drawn into this matrix of moral formation.

Those claims deserve further elaboration and consideration, but I offer them chiefly to get on to another point. What matters about our technology is not merely how this or that particular artifact or system affects us. It is also a matter of how the distinctive shape of our material environment conditions us in deep and broad ways, ways that may often be imperceptible precisely because they are not objects of perception but rather constitute our perceiving. It is a matter of understanding what we might call the total effect of our techno-material milieu.

One way of thinking about this is to explore how the total effect of our techno-material milieu positions us vis-à-vis the world. How are we encouraged to perceive and relate to the world and all who share it with us? I am generally convinced by those who find that this positioning is best characterized as a relation of mastery and control, which is to say, a relation of power. Positioned in this way, we are tempted to see the world, including ourselves, as a field of objects to be limitlessly manipulated and exploited.

It would be hard to overestimate how productive this positioning has been: it has yielded valuable fruit that we should not lightly discount. But it has not come without costs. The costs are material, social, psychic, and, if we still entertain such notions, spiritual. It seems safe to say that we still await a final accounting if not a final judgment.

What alternative do we have to this positioning, to this stance toward the world that is characterized by a relation of mastery and whose inevitable consequence is a generalized degree of alienation, anxiety, and apprehension?

We have a hint of it in Arendt’s warning against a “future man,” who is “possessed by a rebellion against human existence as it has been given, a free gift from nowhere (secularly speaking), which he wishes to exchange, as it were, for something he has made himself.” We hear it, too, in Wendell Berry’s poetic reminder that “We live the given life, not the planned.” It is, I would say, a capacity to receive the world as gift, as something given with an integrity of its own that we do best to honor. It is, in other words, to refuse a relation of “regardless power,” in Albert Borgmann’s apt phrase, and to entertain the possibility of inhabiting a relation of gratitude and wonder. In one way or another, all that I have to say about technology comes down to this. We must learn, like Gloucester in King Lear, to see the world and our life in it anew. Bent in despair on taking his own life, Gloucester hears his son Edgar’s voice, and we must hear it too: “Thy life’s a miracle. Speak yet again.”

Modernity Taketh and Modernity Selleth Back to You

Overwhelmed by the demands of the attention economy? We’ve got just the thing:

[Image: Panasonic’s wearable blinkers, via Dezeen]

I think of the design as dystopian chic.

Frazzled by noise pollution? Not to worry, $350 noise-cancelling headphones are here for you.

That is, of course, if you simply cannot afford the conservatively priced four-day, $4,200 silent detox retreats.

Stressed out by the crowds? There are a slew of services lined up to sell you the luxury of not having to deal with other people.

Miss the deep stillness and beauty of the night? No worries. There’s a slice of the tourism industry devoted to servicing just this need.

In short: public goods become private luxuries. No news here, I realize. I’m curious whether you have any examples of the pattern you’d care to share in the comments below.

The World Will Be Our Skinner Box

Writing about two recent patents filed by Google, Sidney Fussell makes the following observations about the possibility of using “smart” environments to create elaborate architectures for social engineering:

For reward systems created by either users or companies to be possible, the devices would have to know what you’re doing at all times. The language of these patents make it clear that Google is acutely aware of the powers of inference it has already, even without cameras, by augmenting speakers to recognize the noises you make as you move around the house. The auditory inferences are startling: Google’s smart home system can infer “if a household member is working” from “an audio signature of keyboard clicking, a desk chair moving, and/or papers shuffling.” Google can make inferences on your mood based on whether it hears raised voices or crying, when you’re in the kitchen based on the sound of the fridge door opening, your dental hygiene based on “the sounds and/or images of teeth brushing.”
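
The quoted inferences are simple enough to sketch in toy form. What follows is a minimal, hypothetical illustration of the logic, assuming an upstream model that detects and labels sound events; the event labels and rules are my own illustrative inventions, drawn loosely from the patent examples quoted above, not Google’s actual system.

```python
# Hypothetical sketch: inferring household activity from detected sound events.
# The labels and rules are illustrative assumptions, not Google's actual system;
# a real pipeline would sit atop an audio-event detection model.

INFERENCE_RULES = [
    # (trigger sound events, inferred activity)
    ({"keyboard_clicking", "desk_chair_moving"}, "a household member is working"),
    ({"fridge_door_opening"}, "someone is in the kitchen"),
    ({"teeth_brushing"}, "someone is brushing their teeth"),
    ({"raised_voices"}, "the mood may be tense"),
    ({"crying"}, "someone may be distressed"),
]

def infer_activities(detected_events: set[str]) -> list[str]:
    """Return every inference whose trigger events were all detected."""
    return [
        activity
        for triggers, activity in INFERENCE_RULES
        if triggers <= detected_events  # subset test: all triggers present
    ]

# Example: a (hypothetical) event detector reports these sounds.
events = {"keyboard_clicking", "desk_chair_moving", "fridge_door_opening"}
for inference in infer_activities(events):
    print(inference)
# a household member is working
# someone is in the kitchen
```

The unsettling point is not the sophistication of any one rule but how little machinery it takes, once a home’s sounds are continuously detected and labeled, to render its inhabitants legible in this way.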

I read Fussell’s article right after I read this tweet from Kevin Bankston: “Ford’s CEO just said on NPR that the future of profitability for the company is all the data from its 100 million vehicles (and the people in them) they’ll be able to monetize. Capitalism & surveillance capitalism are becoming increasingly indistinguishable (and frightening).”

I thought about that tweet for a while. It really is the case that data is to the digital economy something like what oil has been to the industrial economy. Jathan Sadowski notes as much in a comment on the same tweet: “Good reminder that as the ‘operations of capital’ adapt to the digital age, they maintain the same essential features of extraction, exploitation, and accumulation. Rather than a disruption, surveillance capitalism is more like old wine poured into new ‘smart’ bottles.”

It also recalls Evgeny Morozov’s discussions of data mining, or data extractivism, and Nick Carr’s suggestion that we think instead along the lines of a factory metaphor: “The factory metaphor makes clear what the mining metaphor obscures: ‘We work for the Facebooks and Googles of the world, and the work we do is increasingly indistinguishable from the lives we lead.’”

Taking all of this together, I found myself asking what it might look like if we try to extrapolate into the future (a risky venture, I concede). As far as I can see, there are two interrelated trends here: toward surveillance and toward conditioning. The trends, in other words, lead toward a world turned into a Skinner box.

I think it was from Carr that I first encountered the Skinner box analogy applied to the internet a few years ago. Now that digital connectedness has extended beyond the information on our screens to the material environment, the Skinner box, too, has grown.

As Rick Searle points out in his discussion of B. F. Skinner’s utopian novel Walden Two,

We are the inheritors of a darker version of Skinner’s freedomless world—though by far not the darkest. Yet even should we get the beneficent paternalism of contemporary Skinnerites—such as Richard Thaler and Cass R. Sunstein, who wish to “nudge” us this-way-and-that—it would harm freedom not so much by proving that our decisions are indeed in large measure determined by our environment as from the fact that the shape of that environment would be in the hands of someone other than ourselves, individually and collectively.

This, of course, is also the theme of Frischmann and Selinger’s Re-Engineering Humanity. It recalls, as well, a line from Hannah Arendt: “The trouble with modern theories of behaviorism is not that they are wrong but that they could become true.”

But there is something else Arendt wrote that I’ve kept coming back to whenever I’ve thought about these trajectories toward ever more sophisticated mechanisms for the extraction of data to fuel what Frischmann and Selinger have called the project of engineered determinism. “There is only one thing,” Arendt claimed, “that seems discernible: we may say that radical evil has emerged in connection with a system in which all men have become equally superfluous.” What we must remember about “our” data for the purposes of social engineering is that the least important thing about it is that it is “ours.” It matters chiefly as an infinitesimally small drop in a vast ocean of data, which fuels the tools of prediction and conditioning. You and I, in our particularity, are irrelevant, superfluous.

Arendt described the model citizen of a totalitarian state in this way: “Pavlov’s dog, the human specimen reduced to the most elementary reactions, the bundle of reactions that can always be liquidated and replaced by other bundles of reactions that behave in exactly the same way.”

Ominously, she warned, “Totalitarian solutions may well survive the fall of totalitarian regimes in the form of strong temptations which will come up whenever it seems impossible to alleviate political, social, or economic misery in a manner worthy of man.”

In Arendt’s time, totalitarianism emerged in what we might think of as Orwellian guise. The threat we face, however, will come in Huxleyan guise: we will half-knowingly embrace the regime that promises us a reasonable measure of stability, control, and predictability, what we mistake for the conditions of happiness. Paradoxically, the technologies we increasingly turn to in order to manage the chaotic flux of life are also the technologies that have generated the flux we seek to tame.


If you’ve appreciated what you’ve read, consider supporting the writer: Patreon/Paypal.