Jacques Ellul On Adaptation of Human Beings to the Technical Milieu

Jacques Ellul coined the term Technique in an attempt to capture the true nature of contemporary Western society. Ellul was a French sociologist and critic of technology who was active throughout the mid to late twentieth century. He was a prolific writer but is best remembered as the author of The Technological Society. You can read an excellent introduction to his thought here.

Ellul defined Technique (la technique) as “the totality of methods rationally arrived at and having absolute efficiency (for a given stage of development) in every field of human activity.” It was an expansive term meant to describe far more than what we ordinarily think of as technology, even when we use that term in the widest sense.

In a 1963 essay titled “The Technological Order,” Ellul referred to technique as “the new and specific milieu in which man is required to exist,” and he offered the six defining characteristics of this “new technical milieu”:

a. It is artificial;
b. It is autonomous with respect to values, ideas, and the state;
c. It is self-determining in a closed circle. Like nature, it is a closed organization which permits it to be self-determinative independently of all human intervention;
d. It grows according to a process which is causal but not directed to ends;
e. It is formed by an accumulation of means which have established primacy over ends;
f. All its parts are mutually implicated to such a degree that it is impossible to separate them or to settle any technical problem in isolation.

In the same essay, Ellul offers this dense elaboration of how Technique “comprises organizational and psychosociological techniques”:

It is useless to hope that the use of techniques of organization will succeed in compensating for the effects of techniques in general; or that the use of psycho-sociological techniques will assure mankind ascendancy over the technical phenomenon. In the former case we will doubtless succeed in averting certain technically induced crises, disorders, and serious social disequilibrations; but this will but confirm the fact that Technique constitutes a closed circle. In the latter case we will secure human psychic equilibrium in the technological milieu by avoiding the psychobiologic pathology resulting from the individual techniques taken singly and thereby attain a certain happiness. But these results will come about through the adaptation of human beings to the technical milieu. Psycho-sociological techniques result in the modification of men in order to render them happily subordinate to their new environment, and by no means imply any kind of human domination over Technique.

That paragraph will bear re-reading and no small measure of unpacking, but here is the short version: Nudging is merely the calibration of the socio-biological machine into which we are being incorporated. Ditto life-hacking, mindfulness programs, and basically every app that offers to enhance your efficiency and productivity.

Ellul’s essay is included in Philosophy and Technology: Readings in the Philosophical Problems of Technology (1983), edited by Carl Mitcham and Robert Mackey.

Democracy and Technology

Alexis Madrigal has written a long and thoughtful piece on Facebook’s role in the last election. He calls the emergence of social media, Facebook especially, “the most significant shift in the technology of politics since the television.” Madrigal is pointed in his estimation of the situation as it now stands.

Early on, describing the widespread (but not total) failure to understand the effect Facebook could have on an election, Madrigal writes, “The informational underpinnings of democracy have eroded, and no one has explained precisely how.”

Near the end of the piece, he concludes, “The point is that the very roots of the electoral system—the news people see, the events they think happened, the information they digest—had been destabilized.”

Madrigal’s piece brought to mind, not surprisingly, two important observations by Neil Postman that I’ve cited before.

My argument is limited to saying that a major new medium changes the structure of discourse; it does so by encouraging certain uses of the intellect, by favoring certain definitions of intelligence and wisdom, and by demanding a certain kind of content–in a phrase, by creating new forms of truth-telling.


Surrounding every technology are institutions whose organization–not to mention their reason for being–reflects the world-view promoted by the technology. Therefore, when an old technology is assaulted by a new one, institutions are threatened. When institutions are threatened, a culture finds itself in crisis.

In these two passages, I find the crux of Postman’s enduring insights, the insights, more generally, of the media ecology school of tech criticism. It seems to me that this is more or less where we are: a culture in crisis, as Madrigal’s comments suggest. Read what he has to say.

On Twitter, replying to a tweet from Christopher Mims endorsing Madrigal’s work, Zeynep Tufekci took issue with Madrigal’s framing. Madrigal, in fact, cited Tufekci as one of the few people who understood a good deal of what was happening and, indeed, saw it coming years ago. But Tufekci nonetheless challenged Madrigal’s point of departure, which is that the entirety of Facebook’s role caught nearly everyone by surprise and couldn’t have been foreseen.

Tufekci has done excellent work exploring the political consequences of Big Data, algorithms, etc. This 2014 article, for example, is superb. But in reading Tufekci’s complaint that her work and the work of many other academics was basically ignored, my first thought was that the similarly prescient work of technology critics has been more or less ignored for much longer. I’m thinking of Mumford, Jaspers, Ellul, Jonas, Grant, Winner, Mander, Postman and a host of others. They have been dismissed as too pessimistic, too gloomy, too conservative, too radical, too broad in their criticism and too narrow, as Luddites and reactionaries, etc. Yet here we are.

In a 1992 article about democracy and technology, Ellul wrote, “In my view, our Western political institutions are no longer in any sense democratic. We see the concept of democracy called into question by the manipulation of the media, the falsification of political discourse, and the establishment of a political class that, in all countries where it is found, simply negates democracy.”

Writing in the same special issue of the journal Philosophy and Technology edited by Langdon Winner, Albert Borgmann wrote, “Modern technology is the acknowledged ruler of the advanced industrial democracies. Its rule is not absolute. It rests on the complicity of its subjects, the citizens of the democracies. Emancipation from this complicity requires first of all an explicit and shared consideration of the rule of technology.”

It is precisely such an “explicit and shared consideration of the rule of technology” that we have failed to seriously undertake. Again, Tufekci and her colleagues are hardly the first to have their warnings, however measured, cogent, and urgent, ignored.

Roger Berkowitz of the Hannah Arendt Center for Politics and the Humanities recently drew attention to a commencement speech given by John F. Kennedy at Yale in 1962. Kennedy noted the many questions that America had faced throughout her history, from slavery to the New Deal. These were questions “on which the Nation was sharply and emotionally divided.” But now, Kennedy believed, we were ready to move on:

Today these old sweeping issues very largely have disappeared. The central domestic issues of our time are more subtle and less simple. They relate not to basic clashes of philosophy or ideology but to ways and means of reaching common goals — to research for sophisticated solutions to complex and obstinate issues.

These issues were “administrative and executive” in nature. They were issues “for which technical answers, not political answers, must be provided,” Kennedy concluded. You should read the rest of Berkowitz’s reflections on the prejudices exposed by our current crisis, but I want to take Kennedy’s technocratic faith as a point of departure for some observations.

Kennedy’s faith in the technocratic management of society was just the latest iteration of modernity’s political project, the quest for a neutral and rational mode of politics for a pluralistic society.

I will put it this way: liberal democracy is a “machine” for the adjudication of political differences and conflicts, independently of any faith, creed, or otherwise substantive account of the human good.

It was machine-like in its promised objectivity and efficiency. But, of course, it would work only to the degree that it generated the subjects it required for its own operation. (A characteristic it shares with all machines.) Human beings have been, on this score, rather recalcitrant, much to the chagrin of the administrators of the machine.

Kennedy’s own hopes were just a renewed version of this vision, only they had become more explicitly apolitical and technocratic in nature. It was not enough that citizens check certain aspects of their person at the door to the public sphere; now, it would seem, citizens would do well to entrust the political order to experts, engineers, and technicians.

Leo Marx recounts an important part of this story, unfolding throughout the 19th to early 20th century, in an article accounting for what he calls “postmodern pessimism” about technology. Marx outlines how “the simple [small r] republican formula for generating progress by directing improved technical means to societal ends was imperceptibly transformed into a quite different technocratic commitment to improving ‘technology’ as the basis and the measure of — as all but constituting — the progress of society.” I would also include the emergence of bureaucratic and scientific management in the telling of this story.

Presently we are witnessing a further elaboration of this same project along the same trajectory: the rise of governance by algorithm, a further, apparent distancing of the human from the political. I say apparent because, of course, the human is never fully out of the picture; we just create more elaborate technical illusions to mask the irreducibly human element. We buy into these illusions, in part, because of the initial trajectory set for the liberal democratic order, that of machine-like objectivity, rationality, and efficiency. It is on this ideal that Western society staked its hopes for peace and prosperity. At every turn, when the human element, in its complexity and messiness, broke through the facade, we doubled down on the ideal rather than question the premises. Initially, at least, the idea was that the “machine” would facilitate the deliberation of citizens by establishing rules and procedures to govern their engagement. When it became apparent that this would no longer work, we explicitly turned to technique as the common frame by which we would proceed. Now that technique has failed because, again, the human manifested itself, we overtly turn to machines.

This new digital technocracy takes two, seemingly paradoxical paths. One of these paths is the increasing reliance on Big Data and computing power in the actual work of governing. The other, however, is the deployment of these same tools for the manipulation of the governed. It is darkly ironic that this latter deployment of digital technology is intended to agitate the very passions liberal democracy was initially advanced to suppress (at least according to the story liberal democracy tells about itself). It is as if, having given up on the possibility of reasonable political discourse and deliberation within a pluralistic society, those with the means to control the new apparatus of government have simply decided to manipulate those recalcitrant elements of human nature to their own ends.

It is this latter path that Madrigal and Tufekci have done their best to elucidate. However, my rambling contention here is that the full significance of our moment is only intelligible within a much broader account of the relationship between technology and democracy. It is also my contention that we will remain blind to the true nature of our situation so long as we are unwilling to submit our technology to the kind of searching critique Borgmann advocated and Ellul thought hardly possible. But we are likely too invested in the promise of technology and too deeply compromised in our habits and thinking to undertake such a critique.

Friday Links: Questioning Technology Edition

My previous post, which raised 41 questions about the ethics of technology, is turning out to be one of the most viewed on this site. That is, admittedly, faint praise, but I’m glad that it is because helping us to think about technology is why I write this blog. The post has also prompted a few valuable recommendations from readers, and I wanted to pass these along to you in case you missed them in the comments.

Matt Thomas reminded me of two earlier lists of questions we should be asking about our technologies. The first of these is Jacques Ellul’s list of 76 Reasonable Questions to Ask of Any Technology (update: see Doug Hill’s comment below about the authorship of this list.) The second is Neil Postman’s more concise list of Six Questions to Ask of New Technologies. Both are worth perusing.

Also, Chad Kohalyk passed along a link to Shannon Vallor’s module, An Introduction to Software Engineering Ethics.

Greg Lloyd provided some helpful links to the (frequently misunderstood) Amish approach to technology, including one to this IEEE article by Jameson Wetmore: “Amish Technology: Reinforcing Values and Building Communities” (PDF). In it, we read, “When deciding whether or not to allow a certain practice or technology, the Amish first ask whether it is compatible with their values.” What a radical idea; the rest of us should try it sometime! While we’re on the topic, I wrote about the Tech-Savvy Amish a couple of years ago.

I can’t remember who linked to it, but I also came across an excellent 1994 article in Ars Electronica that is composed entirely of questions about what we would today call a Smart Home: “How smart does your bed have to be, before you are afraid to go to sleep at night?”

And while we’re talking about lists, here’s a post on Kranzberg’s Six Laws of Technology and a list of 11 things I try to do, often with only marginal success, to achieve a healthy relationship with the Internet.

Enjoy these, and thanks again to those of you who provided the links.

Technology Will Not Save Us

A day after writing about technology, culture, and innovation, I’ve come across two related pieces.

At Walter Mead’s blog, the novel use of optics to create a cloaking effect provided a springboard into a brief discussion of technological innovation. Here’s the gist of it:

“Today, Big Science is moving ahead faster than ever, and the opportunities for creative tinkerers and home inventors are greater than ever. But the technology we’ve got today is more dynamic than what people had in the 19th and early 20th centuries. IT makes it possible to invent new services and not just new gadgets, though smarter gadgets are also part of the picture.

Unleashing the creativity of a new generation of inventors may be the single most important educational and policy task before us today.”

And …

“Technology in today’s world has run way ahead of our ability to exploit its riches to enhance our daily lives. That’s OK, and there’s nothing wrong with more technological progress. But in the meantime, we need to think much harder about how we can cultivate and reward the kind of innovative engineering that can harness the vast potential of the tech riches around us to lift our society and ultimately the world to the next stage of human social development.”

Then in this weekend’s WSJ, Walter Isaacson has a feature essay titled, “Where Innovation Comes From.” The essay is in part a consideration of the life of Alan Turing and his approach to AI. Isaacson’s point, briefly stated, is that, in the future, innovation will not come from so-called intelligent machines. Rather, in Isaacson’s view, innovation will come from the coupling of human intelligence and machine intelligence, each of them possessed of unique powers. Here is a representative paragraph:

“Perhaps the latest round of reports about neural-network breakthroughs does in fact mean that, in 20 years, there will be machines that think like humans. But there is another possibility, the one that Ada Lovelace envisioned: that the combined talents of humans and computers, when working together in partnership and symbiosis, will indefinitely be more creative than any computer working alone.”

I offer these two to you for your consideration. As I read them, I thought again about what I had posted yesterday. Since the post was more or less stream of consciousness, thinking by writing as it were, I realized that an important qualification remained implicit. I am not qualified to speak about technological innovation from the perspective of the technologist or the entrepreneur. Quite frankly, I’m not sure I’m qualified to speak about technological innovation from any vantage point. Perhaps it is simply better to say that my interests in technological innovation are historical, sociological, and ethical.

For what it is worth, then, what I was after in my previous post was something like the cultural sources of technological innovation. Assuming that technological innovation does not unfold in a value-neutral vacuum, what cultural forces shape technological innovation? Many, of course, but perhaps we might first say that while technological innovation is certainly driven by cultural forces, these cultural forces are not the only relevant factor. Those older philosophers of technology who focused on what we might, following Aristotle, call the formal and material causes of technological development were not altogether misguided. The material nature of technology imposes certain limits upon the shape of innovation. From this angle, perhaps it is the case that if innovation has stalled, as Peter Thiel, among others, worries, it is because all of the low-hanging fruit has been plucked.

When we consider the efficient and final causes of technological innovation, however, we enter the complex and messy realm of human desires and cultural dynamics. It is in this realm that the meaning of technology and the direction of its unfolding are shaped. (As an aside, we might usefully frame the perennial debate between the technological determinists and the social constructivists as a failure to hold together and integrate Aristotle’s four causes into our understanding of technology.) It is this cultural matrix of technological innovation that most interests me, and it was at this murky target that my previous post was aimed.

Picking up on the parenthetical comment above, one other way of framing the problem of technological determinism is by understanding it as a type of self-fulfilling prophecy. Or, perhaps it is better to put it this way: What we call technological determinism, the view that technology drives history, is not itself a necessary characteristic of technology. Rather, technological determinism is the product of cultural capitulation. It is a symptom of social fragmentation.

Allow me to borrow from what I’ve written in another context to expand on this point via a discussion of the work of Jacques Ellul.

Ellul defined technique (la technique) as “the totality of methods rationally arrived at and having absolute efficiency (for a given stage of development) in every field of human activity.” This is an expansive definition that threatens, as Langdon Winner puts it, to make everything technology and technology everything. But Winner is willing to defend Ellul’s usage against its critics. In Winner’s view, Ellul’s expansive definition of technology rightly points to a “vast, diverse, ubiquitous totality that stands at the center of modern culture.”

Although Winner acknowledges the weaknesses of Ellul’s sprawling work, he is, on the whole, sympathetic to Ellul’s critique of technological society. Ellul believed that technology was autonomous in the sense that it dictated its own rules and was resistant to critique. “Technique has become autonomous,” Ellul concluded, “it has fashioned an omnivorous world which obeys its own laws and which has renounced all tradition.”

Additionally, Ellul claimed that technique “tolerates no judgment from without and accepts no limitation.” Moreover, “The power and autonomy of technique are so well secured that it, in its turn, has become the judge of what is moral, the creator of a new morality.” Ellul’s critics have noted that in statements such as these, he has effectively personified technology/technique. Winner thinks that this is exactly the case, but in his view this is not an unintended flaw in Ellul’s argument, it is his argument: “Technique is entirely anthropomorphic because human beings have become thoroughly technomorphic. Man has invested his life in a mass of methods, techniques, machines, rational-productive organizations, and networks. They are his vitality. He is theirs.”

And here is the relevant point for the purposes of this post: Ellul claims that he is not a technological determinist.

By this he means that technology did not always hold society hostage, and society’s relationship to technology did not have to play out the way that it did. He is merely diagnosing what is now the case. He points to ancient Greece and medieval Europe as two societies that kept technology in its place, as it were, as means circumscribed and directed by independent ends. Now, as he sees it, the situation is reversed. Technology dictates the ends for which it alone can be the means. Among the factors contributing to this new state of affairs, Ellul points to the rise of individualism in Western societies. The collapse of mediating institutions fractured society, leaving individuals exposed and isolated. Under these conditions, society was “perfectly malleable and remarkably flexible from both the intellectual and material points of view,” consequently “the technical phenomenon had its most favorable environment since the beginning of history.”

This last consideration is often forgotten by critics of Ellul’s work. In any case, it is, in my view, a point that is tremendously relevant to our contemporary discussions of technological innovation. As I put it yesterday, our focus on technological innovation as the key to the future is a symptom of a society in thrall to technique. Our creative and imaginative powers are thus constrained and caught in a loop of diminishing returns.

I hasten to add that this is surely not the whole picture, but it is, I think, an important aspect of it.

One final point related to my comments about our Enlightenment heritage. It is part of that heritage that we transformed technology into an idol of the god we named Progress. It was a tangible manifestation of a concept we deified, took on faith, and in which we invested our hope. If there is a palpable anxiety and reactionary defensiveness in our discussions about the possible stalling of technological innovation, it is because, like the prophets of Baal, we grow ever more frantic and feverish as it becomes apparent that the god we worshipped was false and our hopes are crushed. And it is no small thing to have your hopes crushed. But idols always break the hearts of their worshippers, as C.S. Lewis put it.

Technology will not save us. Paradoxically, the sooner we realize that, the sooner we might actually begin to put it to good use.