Token Ethicists and Non-existent Moral Communities

“As it ponders important social choices that involve the application of new technology,” Langdon Winner wrote in 1992, “contemporary moral philosophy works within a vacuum.”

“The vacuum,” he goes on to say, “is a social as well as an intellectual one. Often there are no persons or organizations with clear authority to make the decisions that matter. In fact, there may be no clearly defined social channels in which important moral issues can be addressed at all.”

Instead we get “jerrybuilt policies” that emerge out of a jumble of competing private and public interests. “But given the number of points at which technologies generate significant social stress and conflict,” Winner concluded, “this familiar pattern is increasingly unsatisfactory.” (Again, remember Winner was writing in 1992.)

Cue the philosophers who specialize in ethical matters and are eager to deploy their expertise. Unfortunately, in Winner’s view, they “may find themselves involved in an exercise that is essentially technocratic.” At a certain point in the design process, they are called in as “values experts” to “provide ‘solutions’ to the kind of ‘problems’ whose features are ethical rather than solely technical … ‘Everything else looks good. What are the results from the ethics lab?'”

Winner understands that “philosophers sometimes find it tempting to play along with these expectations, gratifying to find that anyone cares about what they think, exhilarating to notice that their ideas might actually have some effect.”

But, he wonders, “Is it wise to don the mantle of the values expert?”

After they have performed all of their intellectual labor and applied their expertise, “there remains the embarrassing question: Who in the world are we talking to? Where is the community in which our wisdom will be welcome?”

Citing two passages from then-recent articles advancing ethical claims about emerging technologies, Winner notes the frequent deployment of the rhetorical “we.” Acknowledging that he, too, has been guilty of using the first-person plural pronoun with vague and ill-defined antecedents, Winner correctly notes, “What matters here is that this lovely ‘we’ suggests the presence of a moral community that may not, in fact, exist at all, at least not in any coherent, self-conscious form.”

The important “first task” for the ethics of technology, Winner suggested, would be to ask “what is the identity and character of the moral communities that will make the crucial, world-altering judgments and take appropriate action as a result?”

Winner believed this was a question about “politics and political philosophy rather than a question for ethics considered solely as a matter of right and wrong in individual conduct.” (I appreciated the qualifier “solely” in Winner’s claim. More on that momentarily.) As “technological things” increasingly become “central features in widely shared arrangements and conditions of life,” Winner argued, it is urgent that they be considered in a “political light.” Rather than continue in the technocratic and mostly ineffective “values expert” pattern, Winner believed “today’s thinkers would do better to reexamine the role of the public in matters of this kind.” The question they should be asking is this: “How can and should democratic citizenry participate in decision making about technology?”

It seems to me that not much has changed in the 28 or so years since Winner published these reflections; we have learned very little and the challenges have become more complex, consequential, and urgent. Indeed, what strikes me about this article is that it could be re-published today without substantive changes and it would appear to be speaking directly to our situation.

In his article, Winner went on to survey ancient and modern attitudes toward the relationship between techne and politics. According to Winner, “the Western tradition of moral and political philosophy has … almost nothing to say about the ways in which persons in their role as citizens might be involved in making choices about the development, deployment, and use of new technology.” Indeed, the chief feature of both classical and modern political reflection regarding technology has been the tendency to compartmentalize the political and the technological, although for different reasons.

Wrapping up his overview of both traditions, Winner concludes:

“If one compares liberal ideology about politics and technology with its classical precursors, an interesting irony emerges. In modern thought the ancient pessimism about techne is eventually replaced by all-out enthusiasm for technological advance. At the same time basic conceptions of politics and political membership are reformulated in ways that help create new contexts for the exercise of power and authority. Despite the radical thrust of these intellectual developments, however, the classical separation between the political and the technical spheres is strongly preserved, but for entirely new reasons. Technology is still isolated from public life in both principle and practice. Citizens are strongly encouraged to become involved in improving modern material culture, but only in the market or other highly privatized settings. There is no moral community or public space in which technological issues are topics for deliberation, debate, and shared action.”

Just so.

Again, nearly three decades later, it seems the problems, on all counts, are both more acute and more vexing.

I was, in fact, reminded of a striking observation Winner made in an even earlier work, Autonomous Technology, published in 1977:

Different ideas of social and political life entail different technologies for their realization. One can create systems of production, energy, transportation, information handling, and so forth that are compatible with the growth of autonomous, self-determining individuals in a democratic polity. Or one can build, perhaps unwittingly, technical forms that are incompatible with this end and then wonder how things went strangely wrong. The possibilities for matching political ideas with technological configurations appropriate to them are, it would seem, almost endless. If, for example, some perverse spirit set out deliberately to design a collection of systems to increase the general feeling of powerlessness, enhance the prospects for the dominance of technical elites, create the belief that politics is nothing more than a remote spectacle to be experienced vicariously, and thereby diminish the chance that anyone would take democratic citizenship seriously, what better plan to suggest than that we simply keep the systems we already have?

“There is, of course, hope that we may decide to do better than that,” Winner added. A necessary hope that is nonetheless difficult to sustain. Critically, though, Winner concluded this line of thought by reminding readers (in 1977!) that “the notion that technical forms are merely neutral … is a myth that no longer merits the least respect.”

And yet the idea that technical forms are merely neutral has proven hard to shake. For a very long time, it has been a cornerstone principle of our thinking about technology and society. Or, more to the point, we have taken it for granted and have consequently done very little thinking about technology with regard to society.

I’ll note in passing that the liberal democratic structures of modern political culture and the development of technology are deeply intertwined, and both have depended upon the presumption of their ostensible neutrality. I’m tempted to think that our present crisis is a function of a growing realization that neither our political structures nor our technologies are, in fact, merely neutral instruments.

Making our way back to Winner’s claim about the absence of moral-political communities within which technological ethics can be enacted: our political structures (or, better, our political-economic structures) and our technologies have, as one function of their non-neutrality, made such communities very difficult to sustain. They have rendered them implausible.

We are, at present, stuck in an unhelpful tendency to imagine that our only options with regard to how we govern technology are, on the one hand, individual choices and, on the other, regulation by the state. What’s worse, we’ve also tended to oppose these to one another. But this way of conceptualizing our situation is both a symptom of the deepest consequences of modern technology and part of the reason why it is so difficult to make any progress.

Technology operates at different scales, and effective mechanisms of governance need to correspond to the challenges that arise at each scale. Mechanisms of governance that make sense at one end of the spectrum will be ineffective at the other, and vice versa.

Our problem is basically this: technologies that operate at the macro level cannot be effectively governed by micro-level mechanisms, which amount to individual choices. At the macro level, however, governance is limited by the degree to which we can arrive at public consensus, and the available tools of governance at that level cannot address all of the ways technologies impact individuals. What is required is a cocktail of strategies that address the consequences of technology as they manifest themselves across the spectrum of scale.

The problem, of course, as Winner diagnosed long ago, is that the further up the scale we move, the less likely we are to find a relevant moral community with either the requisite coherence or the authority to effectively grapple with the problems we face. We lack those communities, in part, because of the moral and political consequences of existing technology, so it would seem that we are stuck in a vicious cycle of sorts.

I have no solution for this, of course. I think it would be helpful, though, if whatever moral communities we have left that occupy the broad space between the individual on the one hand and the state on the other would take up the challenge of thinking critically about the morally formative consequences of technology and leverage their existing structures of deliberation and practice to help their members better navigate the challenges posed by emerging technologies. If we need some model of what this might look like, we could do worse than consider the Amish.


You can subscribe to my newsletter, The Convivial Society, here.
