L.M. Sacasas

The Rhetorical “We” and the Ethics of Technology

“Questioning AI ethics does not make you a gloomy Luddite,” or so the title of a recent article in a London business newspaper assures us. The most important thing to be learned here is that someone feels this needs to be said. Beyond that, there is also something instructive about the concluding paragraphs. If we read them against the grain, these paragraphs teach us something about how difficult it is to bring ethics to bear on technology.

“I’m simply calling for us to use all the tools at our disposal to build a better digital future,” the author tells us.

In practice, this means never forgetting what makes us human. It means raising awareness and entering into dialogue about the issue of ethics in AI. It means using our imaginations to articulate visions for a future that’s appealing to us.

If we can decide on the type of society we’d like to create, and the type of existence we’d like to have, we can begin to forge a path there.

All in all, it’s essential that we become knowledgeable, active, and influential on AI in every small way we can. This starts with getting to grips with the subject matter and past extreme and sensationalised points of view. The decisions we collectively make today will influence many generations to come.

Here are the challenges:

We have no idea what makes us human. You may, but we don’t.

We have nowhere to conduct meaningful dialogue; we don’t even know how to have meaningful dialogue.

Our imaginations were long ago surrendered to technique.

We can’t decide on the type of society we’d like to create or the type of existence we’d like to have, chiefly because this “we” is rhetorical. It is abstract and amorphous.

There is no meaningful antecedent to the pronouns we and us used throughout these closing paragraphs. Ethics is a communal business. This is no less true with regard to technology; perhaps it is all the more true. There is, however, no we there.

As individuals, we are often powerless against the larger forces that dictate how we are to relate to technology. The state is in many respects beholden to the technological: ideologically, politically, economically. Regrettably, we have very few communities, located between the individual and the state, that constitute a we capable of meaningfully deliberating about and effectively directing the use of technology.

Technologies like AI emerge and evolve in social spaces that are resistant to substantial ethical critique. They also operate at a scale that undermines the possibility of ethical judgment and responsibility. Moreover, our society is ordered in such a way that there is very little to be done about it, chiefly because of the absence of structures that would sustain and empower ethical reflection and practice, the absence, in other words, of a we that is not merely rhetorical.