An online service called Predictim promises to help parents weed out potentially bad babysitters by analyzing their social media feeds with its “advanced artificial intelligence.” The company requires the applicant’s name, email, and consent. Of course, you know how consent works in these cases: refusal to subject yourself to a black-box algorithm’s evaluation, with no possibility of recourse or appeal, must obviously mean that you’ve got something to hide.
I’ll say more in the next newsletter about this service and predictive technologies in general, but here I’ll briefly note the context in which these sorts of tools attain a measure of plausibility and even desirability.
All such tools are symptoms and accelerators of the breakdown of the kind of social trust and capacity for judgment that emerges organically within generally healthy, human-scale communities. The ostensible demand for these services suggests that something has gone wrong. It’s almost as if the rapid disintegration of the communal structures within which human beings have meaningfully related to one another and to the world might have real and not altogether benign consequences.
There is a way of making this point in a reactionary and romanticized manner, of course. But it need not be so. It’s obviously true that such communities could have some very rough edges. That said, when you lose the habitats that sustain trust, both in others and in your ability to make sound judgments, you end up seeking artificial means to compensate.
Enter the promise of “data,” “algorithms,” and “artificial intelligence.” I place each of those in quotation marks not to be facetious, but to suggest that what is at work here is something more than the bare technical realities to which those terms refer. In other words, each of those terms also conveys a set of dubious assumptions, a not insignificant measure of hype, and a host of misplaced hopes—in short, they amount to magical thinking.
In this case, what is promised is “peace of mind” for parents, and that peace of mind will be delivered by “AI algorithms using billions of social media data points to increase the accuracy of our reports about important personality traits.” There are a number of problems with this method, some of which Drew Harwell addresses here, but that seems not to matter. As the social fabric continues to fray, we will increasingly seek to apply technical patches. These patches, however, will only accelerate the deterioration they are intended to repair. They will not solve the problem they are designed to address, and they will heighten the underlying disorders of which the problem is a symptom. The less we inhabit a common, shared world, the more we will turn to our tools to judge one another, only deepening our alienation.
If you’ve appreciated what you’ve read, consider supporting the writer: Patreon/Paypal.
6 thoughts on “AI Hype and the Fraying Social Fabric”
Michael, all read well until I got to “The ostensible demand for these services suggests that something has gone wrong” – you imply we have lost something “good” by handing over our power to decide to a machine. But I think it’s fair to say that there are things that have gone wrong, such as biased hiring decisions, hiring based on gender, nepotism, jobs-for-mates and so on, where AI tools are on the side of those often discriminated against. If I fell into a marginalised category, I might consider AI a helpful tool to get a job based on my skills rather than a physical feature.
Thanks for the comment, Rob. I was not referring to hiring practices generally speaking. Of course, bias has been all too common as you note. That said, I’m pretty sure this tool is not designed with labor in mind. Also, the results of automated hiring have not been altogether promising. This is but one case in point: https://www.reuters.com/article/us-amazon-com-jobs-automation-insight/amazon-scraps-secret-ai-recruiting-tool-that-showed-bias-against-women-idUSKCN1MK08G.
Wow, good points. You certainly have insight on what that service actually means. Thanks for sharing.
Interesting post, but this bit suggests a certain kind of causality I’m not sure I agree with: “All such tools are symptoms and accelerators of the breakdown of the kind of social trust and capacity for judgment that emerges organically within generally healthy, human-scale communities.”
Yes, these tools might be a poor patch to deal with the decline in social trust, but the cohort effects muddle the picture. For example, this article (https://medium.com/@monarchjogs/the-decline-of-trust-in-the-united-states-fb8ab719b82a) decomposed the GSS trust question by age cohorts to get this picture: https://cdn-images-1.medium.com/max/800/1*yjU9FmdTa2kWZaCbQQ2x0Q.png. Now, the responses could have changed dramatically in the last couple of years, but there is still a massive difference between cohorts that needs to be accounted for: each new generation shows markedly higher levels of distrust. Algorithmic methods of establishing trust might have countless problems, but I cannot help but think that these methods are simply tools Millennials use to navigate an environment they already perceive as untrustworthy.
Interesting topic, but why exactly would the “social fabric” be fraying? Is it not rather a reconfiguring of social relations? Polanyi and Weber often talked about how capitalism destroyed communities, but they never tackled the very real communities that corporations are: how corporations work as a specific kind of community, with specific kinds of relations within them, and how they come to impose their own structure on the rest of social institutions…