An online service called Predictim promises to help parents weed out potentially bad babysitters by analyzing their social media feeds with its “advanced artificial intelligence.” The company requires the applicant’s name, email address, and consent. Of course, you know how consent works in these cases: refusal to subject yourself to a black-box algorithm’s evaluation, with no possibility of recourse or appeal, must obviously mean that you’ve got something to hide.
I’ll say more in the next newsletter about this service and predictive technologies in general, but here I’ll briefly note the context in which these sorts of tools attain a measure of plausibility and even desirability.
All such tools are symptoms and accelerators of the breakdown of the kind of social trust and capacity for judgment that emerges organically within generally healthy, human-scale communities. The ostensible demand for these services suggests that something has gone wrong. It’s almost as if the rapid disintegration of the communal structures within which human beings have meaningfully related to one another and to the world might have real and not altogether benign consequences.
There is a way of making this point in a reactionary and romanticized manner, of course. But it need not be so. It’s obviously true that such communities could have some very rough edges. That said, when you lose the habitats that sustain trust, both in others and in your ability to make sound judgments, you end up seeking artificial means to compensate.
Enter the promise of “data,” “algorithms,” and “artificial intelligence.” I place each of those in quotation marks not to be facetious, but to suggest that what is at work here is something more than the bare technical realities to which those terms refer. In other words, each of those terms also conveys a set of dubious assumptions, a not insignificant measure of hype, and a host of misplaced hopes—in short, they amount to magical thinking.
In this case, what is promised is “peace of mind” for parents, and peace of mind will be delivered by “AI algorithms using billions of social media data points to increase the accuracy of our reports about important personality traits.” There are a number of problems with this method—Drew Harwell addresses some of them here—but that seems not to matter. As the social fabric continues to fray, we will increasingly seek to apply technical patches. These patches, however, will only accelerate the deterioration they are intended to repair. They will not solve the problem they are designed to address, and they will heighten the underlying disorders of which the problem is a symptom. The less we inhabit a common world, a shared world, the more we will turn to our tools to judge one another, only deepening our alienation.