Jacques Ellul on Technique As An Obstacle To Ethics

The following excerpts are taken from “The Search for Ethics In a Technicist Society” (1983) by Jacques Ellul. In this essay, Ellul considers the challenges posed to traditional morality in a society dominated by technique.

James Fowler on what Ellul meant by technique: “Ellul’s issue was not with technological machines but with a society necessarily caught up in efficient methodological techniques. Technology, then, is but an expression and by-product of the underlying reliance on technique, on the proceduralization whereby everything is organized and managed to function most efficiently, and directed toward the most expedient end of the highest productivity.”

In Ellul’s view, “The ethical problem, that is human behavior, can only be considered in relation to this system, not in relation to some particular technical object or other.” “If technique is a milieu and a system,” he adds, “the ethical problem can only be posed in terms of this global operation. Behavior and particular choices no longer have much significance. What is required is thus a global change in our habits or values, the rediscovery of either an existential ethics or a new ontology.”

Emphasis in boldface below is mine.

On the call to subordinate means to ends:

“It is quite right to say that technique is only made of means, it is an ensemble of means (We shall return to this later), but only with the qualification that these means obey their own laws and are no longer subordinated to ends. Besides, one must distinguish ideal ends (values, for example), goals (national, for example), and the objectives (immediate objectives: a researcher who tries to solve some particular problem). Science and technique develop according to objectives, rarely and accidentally in relation to more general goals, and never for ethical or spiritual ideals. There is no relation between the proclamation of values (justice, freedom, etc.) and the orientation of technical development. Those who are concerned with values (theologians, philosophers, etc.) have no influence on the specialists of technique and cannot require, for example, that some aspect of current research or other means should be abandoned for the sake of some value.”

On the difficulty of determining who exactly must act to subordinate technique to moral ends:

To adopt one of these first two ethical orientations is to argue that it is human beings who must create a good use for technique or impose ends on it, but always neglecting to specify which human beings. Is the “who” not important? Is technique able to be mastered by just any passer-by, every worker, some ordinary person? Is this person the politician? The public at large? The intellectual and technician? Some collectivity? Humanity as a whole? For the most part politicians cannot grasp technique, and each specialist can understand an infinitesimal portion of the technical universe, just as each citizen only makes use of an infinitesimal piece of the technical apparatus. How could such a person possibly modify the whole? As for the collectivity or some class (if they exist as specific entities) they are wholly ignorant of the problem of technique as a system. Finally, what might be called “Councils of the Wise” […] have often been set up only to demonstrate their own importance, just as have international commissions and international treaties [….] Who is supposed to impose ends or get hold of the technical apparatus? No one knows.

On the compromised position from which we try to think ethically about technique:

At the same time, one should not forget the fact that human beings are themselves already modified by the technical phenomenon. When infants are born, the environment in which they find themselves is technique, which is a “given.” Their whole education is oriented toward adaptation to the conditions of technique (learning how to cross streets at traffic lights) and their instruction is destined to prepare them for entrance into some technical employment. Human beings are psychologically modified by consumption, by technical work, by news, by television, by leisure activities (currently, the proliferation of computer games), etc., all of which are techniques. In other words, it must not be forgotten that it is this very humanity which has been pre-adapted to and modified by technique that is supposed to master and reorient technique. It is obvious that this will not be able to be done with any independence.

On the pressure to adapt to technique:

Finally, one other ethical orientation in regard to technique is that of adaptation. And this can be added to the entire ideology of facts: technique is the ultimate Fact. Humanity must adapt to facts. What prevents technique from operating better is the whole stock of ideologies, feelings, principles, beliefs, etc. that people continue to carry around and which are derived from traditional situations. It is necessary (and this is the ethical choice!) to liquidate all such holdovers, and to lead humanity to a perfect operational adaptation that will bring about the greatest possible benefit from the technique. Adaptation becomes a moral criterion.


Jacques Ellul On Adaptation of Human Beings to the Technical Milieu

Jacques Ellul coined the term Technique in an attempt to capture the true nature of contemporary Western society. Ellul was a French sociologist and critic of technology who was active throughout the mid to late twentieth century. He was a prolific writer but is best remembered as the author of The Technological Society. You can read an excellent introduction to his thought here.

Ellul defined Technique (la technique) as “the totality of methods rationally arrived at and having absolute efficiency (for a given stage of development) in every field of human activity.” It was an expansive term meant to describe far more than what we ordinarily think of as technology, even when we use that term in the widest sense.

In a 1963 essay titled “The Technological Order,” Ellul referred to technique as “the new and specific milieu in which man is required to exist,” and he offered the six defining characteristics of this “new technical milieu”:

a. It is artificial;
b. It is autonomous with respect to values, ideas, and the state;
c. It is self-determining in a closed circle. Like nature, it is a closed organization which permits it to be self-determinative independently of all human intervention;
d. It grows according to a process which is causal but not directed to ends;
e. It is formed by an accumulation of means which have established primacy over ends;
f. All its parts are mutually implicated to such a degree that it is impossible to separate them or to settle any technical problem in isolation.

In the same essay, Ellul offers this dense elaboration of how Technique “comprises organizational and psychosociological techniques”:

It is useless to hope that the use of techniques of organization will succeed in compensating for the effects of techniques in general; or that the use of psycho-sociological techniques will assure mankind ascendancy over the technical phenomenon. In the former case we will doubtless succeed in averting certain technically induced crises, disorders, and serious social disequilibrations; but this will but confirm the fact that Technique constitutes a closed circle. In the latter case we will secure human psychic equilibrium in the technological milieu by avoiding the psychobiologic pathology resulting from the individual techniques taken singly and thereby attain a certain happiness. But these results will come about through the adaptation of human beings to the technical milieu. Psycho-sociological techniques result in the modification of men in order to render them happily subordinate to their new environment, and by no means imply any kind of human domination over Technique.

That paragraph will bear re-reading and no small measure of unpacking, but here is the short version: Nudging is merely the calibration of the socio-biological machine into which we are being incorporated. Ditto life-hacking, mindfulness programs, and basically every app that offers to enhance your efficiency and productivity.

Ellul’s essay is included in Philosophy and Technology: Readings in the Philosophical Problems of Technology (1983), edited by Carl Mitcham and Robert Mackey.

Democracy and Technology

Alexis Madrigal has written a long and thoughtful piece on Facebook’s role in the last election. He calls the emergence of social media, Facebook especially, “the most significant shift in the technology of politics since the television.” Madrigal is pointed in his assessment of the situation as it now stands.

Early on, describing the widespread (but not total) failure to understand the effect Facebook could have on an election, Madrigal writes, “The informational underpinnings of democracy have eroded, and no one has explained precisely how.”

Near the end of the piece, he concludes, “The point is that the very roots of the electoral system—the news people see, the events they think happened, the information they digest—had been destabilized.”

Madrigal’s piece brought to mind, not surprisingly, two important observations by Neil Postman that I’ve cited before.

My argument is limited to saying that a major new medium changes the structure of discourse; it does so by encouraging certain uses of the intellect, by favoring certain definitions of intelligence and wisdom, and by demanding a certain kind of content–in a phrase, by creating new forms of truth-telling.


Surrounding every technology are institutions whose organization–not to mention their reason for being–reflects the world-view promoted by the technology. Therefore, when an old technology is assaulted by a new one, institutions are threatened. When institutions are threatened, a culture finds itself in crisis.

In these two passages, I find the crux of Postman’s enduring insights, the insights, more generally, of the media ecology school of tech criticism. It seems to me that this is more or less where we are: a culture in crisis, as Madrigal’s comments suggest. Read what he has to say.

On Twitter, replying to a tweet from Christopher Mims endorsing Madrigal’s work, Zeynep Tufekci took issue with Madrigal’s framing. Madrigal, in fact, cited Tufekci as one of the few people who understood a good deal of what was happening and, indeed, saw it coming years ago. But Tufekci nonetheless challenged Madrigal’s point of departure, which is that the entirety of Facebook’s role caught nearly everyone by surprise and couldn’t have been foreseen.

Tufekci has done excellent work exploring the political consequences of Big Data, algorithms, etc. This 2014 article, for example, is superb. But in reading Tufekci’s complaint that her work and the work of many other academics was basically ignored, my first thought was that the similarly prescient work of technology critics has been more or less ignored for much longer. I’m thinking of Mumford, Jaspers, Ellul, Jonas, Grant, Winner, Mander, Postman and a host of others. They have been dismissed as too pessimistic, too gloomy, too conservative, too radical, too broad in their criticism and too narrow, as Luddites and reactionaries, etc. Yet here we are.

In a 1992 article about democracy and technology, Ellul wrote, “In my view, our Western political institutions are no longer in any sense democratic. We see the concept of democracy called into question by the manipulation of the media, the falsification of political discourse, and the establishment of a political class that, in all countries where it is found, simply negates democracy.”

Writing in the same special issue of the journal Philosophy and Technology edited by Langdon Winner, Albert Borgmann wrote, “Modern technology is the acknowledged ruler of the advanced industrial democracies. Its rule is not absolute. It rests on the complicity of its subjects, the citizens of the democracies. Emancipation from this complicity requires first of all an explicit and shared consideration of the rule of technology.”

It is precisely such an “explicit and shared consideration of the rule of technology” that we have failed to seriously undertake. Again, Tufekci and her colleagues are hardly the first to have their warnings, measured, cogent, urgent as they may be, ignored.

Roger Berkowitz of the Hannah Arendt Center for Politics and the Humanities recently drew attention to a commencement speech given by John F. Kennedy at Yale in 1962. Kennedy noted the many questions that America had faced throughout her history, from slavery to the New Deal. These were questions “on which the Nation was sharply and emotionally divided.” But now, Kennedy believed, we were ready to move on:

Today these old sweeping issues very largely have disappeared. The central domestic issues of our time are more subtle and less simple. They relate not to basic clashes of philosophy or ideology but to ways and means of reaching common goals — to research for sophisticated solutions to complex and obstinate issues.

These issues were “administrative and executive” in nature. They were issues “for which technical answers, not political answers, must be provided,” Kennedy concluded. You should read the rest of Berkowitz’s reflections on the prejudices exposed by our current crisis, but I want to take Kennedy’s technocratic faith as a point of departure for some observations.

Kennedy’s faith in the technocratic management of society was just the latest iteration of modernity’s political project, the quest for a neutral and rational mode of politics for a pluralistic society.

I will put it this way: liberal democracy is a “machine” for the adjudication of political differences and conflicts, independently of any faith, creed, or otherwise substantive account of the human good.

It was machine-like in its promised objectivity and efficiency. But, of course, it would work only to the degree that it generated the subjects it required for its own operation. (A characteristic it shares with all machines.) Human beings have been, on this score, rather recalcitrant, much to the chagrin of the administrators of the machine.

Kennedy’s own hopes were just a renewed version of this vision, only they had become more explicitly apolitical and technocratic in nature. It was not enough that citizens check certain aspects of their person at the door to the public sphere; now it would seem that citizens would do well to entrust the political order to experts, engineers, and technicians.

Leo Marx recounts an important part of this story, unfolding throughout the 19th to early 20th century, in an article accounting for what he calls “postmodern pessimism” about technology. Marx outlines how “the simple [small r] republican formula for generating progress by directing improved technical means to societal ends was imperceptibly transformed into a quite different technocratic commitment to improving ‘technology’ as the basis and the measure of — as all but constituting — the progress of society.” I would also include the emergence of bureaucratic and scientific management in the telling of this story.

Presently we are witnessing a further elaboration of this same project along the same trajectory: the rise of governance by algorithm, a further, apparent distancing of the human from the political. I say apparent because, of course, the human is never fully out of the picture; we just create more elaborate technical illusions to mask the irreducibly human element. We buy into these illusions, in part, because of the initial trajectory set for the liberal democratic order, that of machine-like objectivity, rationality, and efficiency. It is on this ideal that Western society staked its hopes for peace and prosperity. At every turn, when the human element, in its complexity and messiness, broke through the facade, we doubled down on the ideal rather than question the premises. Initially, at least, the idea was that the “machine” would facilitate the deliberation of citizens by establishing rules and procedures to govern their engagement. When it became apparent that this would no longer work, we explicitly turned to technique as the common frame by which we would proceed. Now that technique has failed, the human having again manifested itself, we overtly turn to machines.

This new digital technocracy takes two seemingly paradoxical paths. One of these paths is the increasing reliance on Big Data and computing power in the actual work of governing. The other, however, is the deployment of these same tools for the manipulation of the governed. It is darkly ironic that this latter deployment of digital technology is intended to agitate the very passions liberal democracy was initially advanced to suppress (at least according to the story liberal democracy tells about itself). It is as if, having given up on the possibility of reasonable political discourse and deliberation within a pluralistic society, those with the means to control the new apparatus of government have simply decided to manipulate those recalcitrant elements of human nature to their own ends.

It is this latter path that Madrigal and Tufekci have done their best to elucidate. However, my rambling contention here is that the full significance of our moment is only intelligible within a much broader account of the relationship between technology and democracy. It is also my contention that we will remain blind to the true nature of our situation so long as we are unwilling to submit our technology to the kind of searching critique Borgmann advocated and Ellul thought hardly possible. But we are likely too invested in the promise of technology and too deeply compromised in our habits and thinking to undertake such a critique.

Friday Links: Questioning Technology Edition

My previous post, which raised 41 questions about the ethics of technology, is turning out to be one of the most viewed on this site. That is, admittedly, faint praise, but I’m glad that it is, because helping us think about technology is why I write this blog. The post has also prompted a few valuable recommendations from readers, and I wanted to pass these along to you in case you missed them in the comments.

Matt Thomas reminded me of two earlier lists of questions we should be asking about our technologies. The first of these is Jacques Ellul’s list of 76 Reasonable Questions to Ask of Any Technology (update: see Doug Hill’s comment below about the authorship of this list.) The second is Neil Postman’s more concise list of Six Questions to Ask of New Technologies. Both are worth perusing.

Also, Chad Kohalyk passed along a link to Shannon Vallor’s module, An Introduction to Software Engineering Ethics.

Greg Lloyd provided some helpful links to the (frequently misunderstood) Amish approach to technology, including one to this IEEE article by Jameson Wetmore: “Amish Technology: Reinforcing Values and Building Communities” (PDF). In it, we read, “When deciding whether or not to allow a certain practice or technology, the Amish first ask whether it is compatible with their values.” What a radical idea; the rest of us should try it sometime! While we’re on the topic, I wrote about the Tech-Savvy Amish a couple of years ago.

I can’t remember who linked to it, but I also came across an excellent 1994 article in Ars Electronica that is composed entirely of questions about what we would today call a Smart Home: “How smart does your bed have to be, before you are afraid to go to sleep at night?”

And while we’re talking about lists, here’s a post on Kranzberg’s Six Laws of Technology and a list of 11 things I try to do, often with only marginal success, to achieve a healthy relationship with the Internet.

Enjoy these, and thanks again to those of you who provided the links.