Jacques Ellul Was Not Impressed With “Technical Humanism”

Forgive the rather long excerpt, but I think it’s worth reading. This is Jacques Ellul in The Technological Society discussing the attempt to humanize technology/technique.

It is useful to consider Ellul’s perspective in light of the recent revival of a certain kind of tech humanism that seeks to mitigate what it takes to be the dehumanizing effects of digital technologies. This passage is also worth considering in light of the proliferation of devices and apps designed to gather data about everything from our steps to our sleep habits in order to help us optimize, maximize, manage, or otherwise finely calibrate ourselves. The calibration becomes necessary because the rhythms and patterns of the modern world, owing at least in part to its technological development, have indeed become unlivable. So now the same techno-economic forces present themselves as the solution to the problems they have generated.

The claims of the human being have thus come to assert themselves to a greater and greater degree in the development of techniques; this is known as “humanizing the techniques.” Man is not supposed to be merely a technical object, but a participant in a complicated movement. His fatigue, pleasures, nerves, and opinions are carefully taken into account. Attention is paid to his reactions to orders, his phobias, and his earnings. All this fills the uneasy with hope. From the moment man is taken seriously, it seems to them that they are witnessing the creation of a technical humanism.

If we seek the real reason, we hear over and over again that there is “something out of line” in the technical system, an insupportable state of affairs for a technician. A remedy must be found. What is out of line? According to the usual superficial analysis, it is man that is amiss. The technician thereupon tackles the problem as he would any other. He has a method which has hitherto enabled him to solve all difficulties, and he uses it here too. But he considers man only as an object of technique and only to the degree that man interferes with the proper function of the technique. Technique reveals its essential efficiency in discerning that man has a sentimental and moral life which can have great influence on his material behavior and in proposing to do something about such factors on the basis of its own ends. These factors are, for technique, human and subjective; but if means can be found to act upon them, to rationalize them and bring them into line, they need not be a technical drawback. Of course, man as such does not count.

When the technical problem is well in hand, the professional humanists look at the situation and dub it “humanist.” This procedure suits the literati, moralists, and philosophers who are concerned about the human situation. What is more natural than for philosophers to say: “See how we are concerned with Man?”; and for their literary admirers to echo: “At last, a humanism which is not confined to playing with ideas but which penetrates the facts!” Unfortunately, it is a historical fact that this shouting of humanism always comes after the technicians have intervened; for a true humanism, it ought to have occurred before. This is nothing more than the traditional psychological maneuver called rationalizing.

Since 1947 we have witnessed the same humanist rationalizing with respect to the earth itself. In the United States, for example, methods of large-scale agriculture had been savagely applied. The humanists became alarmed by this violation of the sacred soil, this lack of respect for nature; but the technical people troubled themselves not at all until a steady decline in agricultural productivity became apparent. Technical research discovered that the earth contains certain trace elements which become exhausted when the soil is mistreated. This discovery, made by Sir Albert Howard in his thorough investigation of Indian agriculture, led to the conclusion that animal and vegetable (“organic”) fertilizers were superior to any and all artificial fertilizers, and that it is essential not to exhaust the earth’s reserves. Up to now no one has succeeded in finding a way of replacing trace elements artificially. The technicians have recommended more care in the use of fertilizers and moderation in the utilization of machinery; in short, “respect” for the soil. And all nature lovers rejoice. But was any real respect for the earth involved here? Clearly not. The important thing was agricultural yield.

It might be objected: “Who cares what the real causes were if the result is respect for man or for nature? If technical excess brings us to wisdom, let us by all means develop techniques. If man must be effectively protected by a technique that understands him, we may at least rest assured that he will be better protected than he ever was by all his philosophies.” This is hocus-pocus. Today’s technique may respect man because it is in its interest and part of its normal course of development to do so. But we have no certainty that this will be so in the future. We could have a measure of certainty only if technique, by necessity and for deep and lasting reasons, subordinated its power in principle to the interests of man. Otherwise, a complete reversal is always possible. Tomorrow it might be in technique’s interest to exploit man brutally, to mutilate and suppress him. We have, as of now, no guarantee whatsoever that this is not the road it will take. On the contrary, we see all around us at least as many signs of increasing contempt for man as of respect for him. Technique mixes the one with the other indiscriminately. The only law it follows is that of its own autonomous development. To me, therefore, it seems impossible to speak of a technical humanism.

The Political Paradoxes of Gene Editing Technology

By now you’ve probably heard about the breakthrough in gene editing that was announced on November 25th: the birth of twin girls in China whose genetic code was altered to disable a gene, CCR5, in an effort to make the girls resistant to HIV. The genetic alteration was accomplished by the Chinese scientist He Jiankui using the technique known as CRISPR.

Antonio Regalado broke the story and has continued to cover the aftermath at MIT Technology Review: “Chinese scientists are creating CRISPR babies” and “CRISPR inventor Feng Zhang calls for moratorium on gene-edited babies.” Ed Yong has two good pieces at The Atlantic as well: “A Reckless and Needless Use of Gene Editing on Human Embryos” and “The CRISPR Baby Scandal Gets Worse by the Day.”

The public response, such as it is, has been generally negative. Concerns have rightly focused on consequences for the long-term health of the twin girls, but also on the impact that this “reckless” venture would have on public acceptance of gene editing moving forward and on the apparent secrecy with which this work was conducted.

These concerns, it should be noted, appear to be mostly procedural rather than substantive. And they are not quite the whole story either. News about the birth of these two girls came on the eve of the Second International Summit on Human Genome Editing. On the first night of the conference, Antonio Regalado, who was in attendance, tweeted the following:

holy cow Harvard Medical School dean George Daley is making the case, big time, and eloquently, FOR editing embryos, at #geneeditsummit

he is says technically we are *ready* for RESPONSIBLE clinic use.

He went on to add, “he’s basically saying, stop debating ethics, start talking about the pathway forward.” The whole series of tweets is worth considering. At one point, Regalado notes, “They are talking about these babies like they are lab rats.”

Two things seem clear at this point. First, we have crossed a threshold and there is likely no going back. Second, among those with meaningful power in these matters, there is little interest in much else besides moving forward.

Among the 15 troubling aspects of He Jiankui’s work detailed by Ed Yong, we find the following:

8. He acted in contravention of global consensus.
9. He acted in contravention of his own stated ethical views.
10. He sought ethical advice and ignored it.
12. He has doubled down.
15. This could easily happen again.

One is reminded of what Alan Jacobs, in another context, dubbed the Oppenheimer Principle.

I call it the Oppenheimer Principle, because when the physicist Robert Oppenheimer was having his security clearance re-examined during the McCarthy era, he commented, in response to a question about his motives, “When you see something that is technically sweet, you go ahead and do it and argue about what to do about it only after you’ve had your technical success. That is the way it was with the atomic bomb” ….

Those who look forward to a future of increasing technological manipulation of human beings, and of other biological organisms, always imagine themselves as the Controllers, not the controlled; they always identify with the position of power. And so they forget evolutionary history, they forget biology, they forget the disasters that can come from following the Oppenheimer Principle — they forget everything that might serve to remind them of constraints on the power they have … or fondly imagine they have.

Jacobs’ discussion here recalls Lewis’ analysis of the Conditioners in The Abolition of Man, where he observed that “What we call Man’s power is, in reality, a power possessed by some men which they may, or may not, allow other men to profit by” and, similarly, “what we call Man’s power over Nature turns out to be a power exercised by some men over other men with Nature as its instrument.” This was especially true, in Lewis’ view, when the human person became the last frontier of the Nature that was to be conquered.

I’m reminded as well of Langdon Winner’s question for philosophers and ethicists of technology. After they have done their work, however admirably, “there remains the embarrassing question: Who in the world are we talking to? Where is the community in which our wisdom will be welcome?”

“It is time to ask,” Winner writes, “what is the identity and character of the moral communities that will make the crucial, world-altering judgments and take appropriate action as a result?” Or, to put it another way, “How can and should democratic citizenry participate in decision making about technology?”

As I’ve suggested before, there is no “we” there. We appear to be stuck with an unfortunate paradox: as the scale and power of technology increase, technology simultaneously becomes more of a political problem and less susceptible to political processes.

It also seems to be the case that an issue such as genetic engineering lies beyond the threshold of how politics has been designed to work in Western liberal societies. It is a matter of profound moral consequence involving fundamental questions about the meaning of human life. In other words, the sorts of questions ostensibly bracketed by the liberal democratic order are the very questions raised by this technology.



The Allegory of the Cave for the Digital Age

You remember Plato’s famous allegory of the cave. Plato invites us to imagine a cave in which prisoners have been shackled to a wall, unable to move or turn their heads. On the wall before them are shadows cast as the light of a fire shines on people walking along a path above and behind them. Plato asks us to consider whether these prisoners would not take the shadows for reality, and then whether our own situation is not quite similar to that of these prisoners.

So far as Plato is concerned, sense experience cannot reveal to us what is, in fact, real. What is real, what is true, what is good exists outside the material realm and is accessible chiefly by the operations of the mind acting independently of sense experience.

In the allegory, Plato imagines that one of the prisoners has managed to free himself. He turns around and sees the fire that has been casting the light and the people whose shadows he had mistaken for reality. It is a painful experience, of course, chiefly because the fire dazzles the eyes. Plato also tells us that this same prisoner manages, with much difficulty, to climb out of the cave in order to see the light of the sun and behold the sun itself. That experience is analogous to the experience of one who, through the exercise of reason and contemplation, has come to behold the highest order of being, the form of the good.

The allegory of the cave, odd as it might strike us, memorably exemplifies one of Plato’s enduring contributions to what we might think of as the Western imagination. It posits the existence of two worlds, as it were: one material and one immaterial, the former accessible to the senses, the latter not. In Plato’s account, it is the philosopher who takes it upon himself, like the man who has escaped the cave, to discover the truth of things.

I thought of the allegory of the cave as I read about an online service I wrote about a few days ago, Predictim, which promises to vet potential babysitters by analyzing their social media feeds. The service is representative of a class of tools and services that claim to leverage the combined power of data, AI, and/or algorithms in order to arrive at otherwise inaccessible knowledge, insight, or certainty. They claim, in other words, to lead us out of the cave to see the truth of things, to grant us access to what is real beyond appearances.

Only in the new allegory, it is not the philosopher who is able to ascend out of the cave; it is the data analyst or, more precisely, the tools of data gathering and analysis that the data analyst himself may not fully understand. Indeed, the knowledge gained is very often knowledge without understanding. This is, of course, an element of the residual but transmuted Platonism: the knowledge is inaccessible to the senses not because it lies in an immaterial realm, unless one conceives of information or abstracted data as basically immaterial, but because it requires the encompassing of a body of data so expansive that no one mind can comprehend it.

[Image: Ad for Microsoft Band, c. 2014]

Arendt noted that the two-tiered Platonic structure was subject to a series of inversions in the 19th century, most notably at the hands of Kierkegaard, Marx, and Nietzsche. But, as she points out, they cannot altogether escape Plato because they are working within the tensions he generated. It might be better to imagine, then, that the two-tiered world was simply immanentized: the dual structure was brought into this world. Rather than conceiving of an immaterial realm that transcends the material realm, we can conceive of the material realm itself as divided into two spheres. The first is generally accessible to our senses; the second is not.

As Arendt herself observes, modernity was characterized in part by this shattering of confidence in our perceptive capacities. The earth was not the center of the universe, for example, as our senses suggested, and it turned out there were whole worlds, telescopic and microscopic, which were inaccessible to our unaided perceptive apparatus. Indeed, Arendt and others have suggested that in the modern world we claim to know only what we can make.

We might say that the border of the two-tiered world now runs through the self. Data and some magic algorithmic sauce are now the key that unlocks the truth about the self. It’s knowledge that others seek about us, and it is knowledge we also seek about ourselves. My main interest here has not been to question the validity of such knowledge, although that’s a necessary exercise, but to note the assumptions that make the promises of data analytics plausible. One of those assumptions seems to be the belief that the truth, about the self in this case, lies in a realm that is inaccessible to our ordinary means of perception but accessible by other, esoteric means.

Regarding the validity of the knowledge of the self to be gained, I would suggest the only self we’ll discover is the self we’ve become as we have sought the self in our data. In this respect, then, the Platonic structure is modernized: it is not ultimately about beholding and knowing what is there but about knowing what we are fabricating. The instruments constitute the reality they purport to reveal, both by my awareness of them and by their delimiting the legible aspects of the self. The only self I can know on these terms is the self that I am in the process of creating through my tools. In some non-trivial way, it would seem that the work of climbing out of the cave is simultaneously the work of creating a new cave in which we will dwell.

______________________

See also “Data Science as Machinic Neoplatonism” by Dan McQuillan.



AI Hype and the Fraying Social Fabric

An online service called Predictim promises to help parents weed out potentially bad babysitters by analyzing their social media feeds with its “advanced artificial intelligence.” The company requires the applicant’s name, their email, and their consent. Of course, you know how consent works in these cases: refusal to subject yourself to a black-box algorithm’s evaluation, with no possibility of recourse or appeal, must obviously mean that you’ve got something to hide.

I’ll say more in the next newsletter about this service and predictive technologies in general, but here I’ll briefly note the context in which these sorts of tools attain a measure of plausibility and even desirability.

All such tools are symptoms and accelerators of the breakdown of the kind of social trust and capacity for judgment that emerges organically within generally healthy, human-scale communities. The ostensible demand for these services suggests that something has gone wrong. It’s almost as if the rapid disintegration of the communal structures within which human beings have meaningfully related to one another and to the world might have real and not altogether benign consequences.

There is a way of making this point in a reactionary and romanticized manner, of course. But it need not be so. It’s obviously true that such communities could have some very rough edges. That said, when you lose the habitats that sustain trust, both in others and in your ability to make sound judgments, you end up seeking artificial means to compensate.

Enter the promise of “data,” “algorithms,” and “artificial intelligence.” I place each of those in quotation marks not to be facetious, but to suggest that what is at work here is something more than the bare technical realities to which those terms refer. In other words, each of those terms also conveys a set of dubious assumptions, a not insignificant measure of hype, and a host of misplaced hopes—in short, they amount to magical thinking.

In this case, what is promised is “peace of mind” for parents, and peace of mind will be delivered by “AI algorithms using billions of social media data points to increase the accuracy of our reports about important personality traits.” There are a number of problems with this method (Drew Harwell addresses some of them here), but that seems not to matter. As the social fabric continues to fray, we will increasingly seek to apply technical patches. These patches, however, will only accelerate the deterioration they are intended to repair. They will not solve the problem they are designed to address, and they will heighten the underlying disorders of which the problem is a symptom. The less we inhabit a common world, a shared world, the more we will turn to our tools to judge one another, only deepening our alienation.



The Accelerated Life

The following is from the Translator’s Preface to the English edition of Hartmut Rosa’s Social Acceleration: A New Theory of Modernity (the translator is Jonathan Trejo-Mathys):

A further weighty obstacle to the realization of any ethical life project lies in the way individuals are increasingly caught in an ever denser web of deadlines required by the various social spheres (‘subsystems’) in which they participate: work, family (school and sports activities of children), church, credit systems (i.e., loan payment due dates), energy systems (utility bills), communications systems (Internet and cell phone bills), etc. The requirement of synchronizing and managing this complicated mesh of imperatives places one under the imperious control of a systematically induced ‘urgency of the fixed-term’ (Luhmann). In practice, the surprising—and ethically disastrous—result is that individuals’ reflective value and preference orderings are not (and tendentially cannot be) reflected in their actions. As Luhmann explains, ‘the division of time and value judgments can no longer be separated. The priority of deadlines flips over into a primacy of deadlines, into an evaluative choiceworthiness that is not in line with the rest of the values that one otherwise professes …. Tasks that are always at a disadvantage must in the end be devalued and ranked as less important in order to reconcile fate and meaning. Thus a restructuring of the order of values can result simply from time problems.’

People compelled to continually defer the activities they value most in order to meet an endless and multiplying stream of pressing deadlines inevitably become haunted by the feeling expressed in the trenchant bon mot of Ödön von Horváth cited by Rosa: ‘I’m actually a quite different person, I just never get around to being him.’

Sounds familiar.

A bit more from Rosa’s work to follow in the coming days.