One salutary aspect of the tech backlash, as the wave of critical attention Silicon Valley has received over the last year or so has come to be called, has been the increasing willingness—particularly, it seems to me, among tech journalists—to acknowledge that technology is not neutral. But in reality, I’m not sure that we have come all that far.
These discussions tend to center on social media platforms designed to generate compulsive engagement for the sake of capturing user data, or on the deployment of algorithms which, rather than operating objectively, simply redeploy the biases, blind spots, and prejudices of their programmers.
Such cases, of course, deserve all the critical attention they receive, but they should not exhaust our understanding of what it means to claim that technology is not neutral. It could even be argued that these cases are instances of technological neutrality. The platforms and algorithms have simply been weaponized by their designers rather than by some set of users.
From this perspective, these technologies, platforms and algorithms alike, are not neutral because they have been designed, intentionally or otherwise, to take advantage of unwitting users. In theory, for example, if the platforms were designed so as not to engender compulsive engagement, if they were not operated so as to aggressively collect user data, and if they were not liable to be used for finely targeted manipulation campaigns, then all would be well. The extent of their non-neutrality, so to speak, or what is ethically significant about them, is co-extensive with the explicitly malicious design practices employed by the social media companies. Eliminate these practices, whether by law or regulation, and you no longer have to worry about the moral and ethical consequences of social media.
It seems to me that what you lose here is actually close to what matters most. To say that technology is not neutral is not merely to say that it can be maliciously designed. Even benevolently designed technologies are not neutral; they, too, can be morally formative or deforming.
The tech backlash has focused on maliciously designed technology. Likewise, a focus on law and regulation will address only a limited set of what is morally consequential about our technology use. Even the renewed focus on the ethics of technology, as it is construed in the tech sector, falls far short of the mark. All of this merely scratches the surface of what ought to concern us, or at least warrant our critical attention.
Granted, technology is not and cannot be neutral. So we need a (fairly simple) moral yardstick by which to judge this or that use of technology. To the extent that a technology leads us toward the ‘good life’ (as in ‘How Much Is Enough?’ by Skidelsky, for example), we can say OK to it. If it leads away from the ‘good life’, we should resist it. So a technology that enables me to make a living while living off the grid, in a beautiful place, in a real community, without excessive use of resources, airplanes, etc., seems to me a simple good. A technology that leads me into a virtual, solipsistic world where I only experience what I think I want to experience is clearly not OK. The tricky bit is to distinguish the obvious short-term benefits from the perhaps subtler long-term harms. Over to you.