Do Things Want?

Alan Jacobs’ 79 Theses on Technology were offered in the spirit of a medieval disputation, and they succeeded in spurring a number of stimulating responses in a series of essays posted to the Infernal Machine over the last two weeks. Along with my response to Jacobs’ provocations, I wanted to engage with a debate between Jacobs and Ned O’Gorman about whether or not we may meaningfully speak of what technologies want. Here’s a synopsis of the exchange with my own commentary along the way.

O’Gorman’s initial response focused on the following theses from Jacobs:

40. Kelly tells us “What Technology Wants,” but it doesn’t: We want, with technology as our instrument.
41. The agency that in the 1970s philosophers & theorists ascribed to language is now being ascribed to technology. These are evasions of the human.
42. Our current electronic technologies make competent servants, annoyingly capricious masters, and tragically incompetent gods.
43. Therefore when Kelly says, “I think technology is something that can give meaning to our lives,” he seeks to promote what technology does worst.
44. We try to give power to our idols so as to be absolved of the responsibilities of human agency. The more they have, the less we have.

46. The cyborg dream is the ultimate extension of this idolatry: to erase the boundaries between our selves and our tools.

O’Gorman framed these theses by saying that he found it “perplexing” that Jacobs “is so seemingly unsympathetic to the meaningfulness of things, the class to which technologies belong.” I’m not sure, however, that Jacobs was denying the meaningfulness of things; rather, as I read him, he is contesting the claim that it is from technology that our lives derive their meaning. That may seem a fine distinction, but I think it is an important one. In any case, a little clarity about what exactly “meaning” entails may go a long way toward sorting out that aspect of the discussion.

A little further on, O’Gorman shifts to the question of agency: “Our technological artifacts aren’t wholly distinct from human agency; they are bound up with it.” It is on this ground that the debate mostly unfolds, although there is more than a little slippage between the question of meaning and the question of agency.

O’Gorman appealed to Mary Carruthers’ fascinating study of the place of memory in medieval culture, The Book of Memory: A Study of Memory in Medieval Culture, to support his claim, but I’m not sure the passage he cites does the work he needs it to do. He is seeking to establish, as I read him, two claims. First, that technologies are things and things are meaningful. Second, that we may properly attribute agency to technology/things. Now here’s the passage he cites from Carruthers’ work (brackets and ellipses are O’Gorman’s):

“[In the middle ages] interpretation is not attributed to any intention of the man [the author]…but rather to something understood to reside in the text itself.… [T]he important “intention” is within the work itself, as its res, a cluster of meanings which are only partially revealed in its original statement…. What keeps such a view of interpretation from being mere readerly solipsism is precisely the notion of res—the text has a sense within it which is independent of the reader, and which must be amplified, dilated, and broken-out from its words….”

“Things, in this instance manuscripts,” O’Gorman adds, “are indeed meaningful and powerful.” But in this instance, the thing (res) in view is not, in fact, the manuscripts. As Carruthers explains at various other points in The Book of Memory, the res in this context is not a material thing, but something closer to the pre-linguistic essence or idea or concept that the written words convey. It is an immaterial thing.

That said, there are interesting studies that do point to the significance of materiality in medieval context. Ivan Illich’s In the Vineyard of the Text, for example, dwells at length on medieval reading as a bodily experience, an “ascetic discipline focused by a technical object.” Then there’s Caroline Bynum’s fascinating Christian Materiality: An Essay on Religion in Late Medieval Europe, which explores the multifarious ways matter was experienced and theorized in the late middle ages.

Bynum concludes that “current theories that have mostly been used to understand medieval objects are right to attribute agency to objects, but it is an agency that is, in the final analysis, both too metaphorical and too literal.” She adds that insofar as modern theorizing “takes as self-evident the boundary between human and thing, part and whole, mimesis and material, animate and inanimate,” it may be usefully unsettled by an encounter with medieval theories and praxis, which “operated not from a modern need to break down such boundaries but from a sense that they were porous in some cases, nonexistent in others.”

Of course, taking up Bynum’s suggestion does not entail a re-imagining of our smartphone as a medieval relic, although one suspects that there is but a marginal difference in the degree of reverence granted to both objects. The question is still how we might best understand and articulate the complex relationship between our selves and our tools.

In his reply to O’Gorman, Jacobs focused on O’Gorman’s penultimate paragraph:

“Of course technologies want. The button wants to be pushed; the trigger wants to be pulled; the text wants to be read—each of these want as much as I want to go to bed, get a drink, or get up out of my chair and walk around, though they may want in a different way than I want. To reserve ‘wanting’ for will-bearing creatures is to commit oneself to the philosophical voluntarianism that undergirds technological instrumentalism.”

It’s an interesting feature of the exchange from this point forward that O’Gorman and Jacobs at once emphatically disagree, and yet share very similar concerns. The disagreement is centered chiefly on the question of whether or not it is helpful or even meaningful to speak of technologies “wanting.” Their broad agreement, as I read their exchange, is about the inadequacy of what O’Gorman calls “philosophical voluntarianism” and “technological instrumentalism.”

In other words, if you begin by assuming that the most important thing about us is our ability to make rational and unencumbered choices, then you’ll also assume that technologies are neutral tools over which we can achieve complete mastery.

If O’Gorman means what I think he means by this–and what Jacobs takes him to mean–then I share his concerns as well. We cannot think well about technology if we think about technology as mere tools that we use for good or evil. This is the “guns don’t kill people, people kill people” approach to the ethics of technology, and it is, indeed, inadequate as a way of thinking about the ethical status of artifacts, as I’ve argued repeatedly.

Jacobs grants these concerns, but, with a nod to the Borg Complex, he also thinks that we do not help ourselves in facing them if we talk about technologies “wanting.” Here’s Jacobs’ conclusion:

“It seems that [O’Gorman] thinks the dangers of voluntarism are so great that they must be contested by attributing what can only be a purely fictional agency to tools, whereas I believe that the conceptual confusion this creates leads to a loss of a necessary focus on human responsibility, and an inability to confront the political dimensions of technological modernity.”

This seems basically right to me, but it prompted a second reply from O’Gorman that brought some further clarity to the debate. O’Gorman identified three distinct “directions” his disagreement with Jacobs takes: rhetorical, ontological, and ethical.

He frames his discussion of these three differences by insisting that technologies are meaningful by virtue of their “structure of intention,” which entails a technology’s affordances and the web of practices and discourse in which the technology is embedded. So far, so good, although I don’t think intention is the best choice of word. From here O’Gorman goes on to show why he thinks it is “rhetorically legitimate, ontologically plausible, and ethically justified to say that technologies can want.”

Rhetorically, O’Gorman appears to be advocating a Wittgensteinian, “look and see” approach. Let’s see how people are using language before we rush to delimit a word’s semantic range. To a certain degree, I can get behind this. I’ve advocated as much when it comes to the way we use the word “technology,” itself a term that abstracts and obfuscates. But I’m not sure that once we look we will find much. While our language may animate or personify our technology, I’m less sure that we typically speak about technology “wanting” anything. We do not ordinarily say things like “my iPhone wants to be charged,” “the car wants to go out for a drive,” or “the computer wants to play.” I can think of an exception or two, though. I have heard, for example, someone explain to an anxious passenger that the airplane “wants” to stay in the air. The phrase “what technology wants” owes much of its currency, such as it is, to the title of Kevin Kelly’s book, and I’m pretty sure Kelly means more by it than what O’Gorman might be prepared to endorse.

Ontologically, O’Gorman is “skeptical of attempts to tie wanting to will because willfulness is only one kind of wanting.” “What do we do with instinct, bodily desires, sensations, affections, and the numerous other forms of ‘wanting’ that do not seem to be a product of our will?” he wonders. Fair enough, but all of the examples he cites are connected with beings that are, in a literal sense, alive. Of course I can’t attribute all of my desires to my conscious will; sure, my dog wants to eat, and maybe in some sense my plant wants water. But there’s still a leap involved in saying that my clock wants to tell time. Wanting may not be neatly tied to willing, but I don’t see how it is not tied to sentience.

There’s one other point worth making at this juncture. I’m quite sympathetic to what is basically a phenomenological account of how our tools quietly slip into our subjective, embodied experience of the world. This is why I can embrace so much of O’Gorman’s case. Thinking back many years, I can distinctly remember a moment when I held a baseball in my hand and reflected on how powerfully I felt the urge to throw it, even though I was standing inside my home. This feeling is, I think, what O’Gorman wants us to recognize. The baseball wanted to be thrown! But how far does this kind of phenomenological account take us?

I think it runs into limits when we talk about technologies that do not enter quite so easily into the circuit of mind, body, and world. The case for the language of wanting is strongest the closer I am to my body; it weakens the further away we get from it. Even if we grant that the baseball in hand feels like it wants to be thrown, what exactly does the weather satellite in orbit want? I think this strongly suggests the degree to which the wanting is properly ours, even while acknowledging the degree to which it is activated by objects in our experience.

Finally, O’Gorman thinks that it is “perfectly legitimate and indeed ethically good and right to speak of technologies as ‘wanting.'” He believes this to be so because “wanting” is not only a matter of willing, it is “more broadly to embody a structure of intention within a given context or set of contexts.” Further, “Will-bearing and non-will-bearing things, animate and inanimate things, can embody such a structure of intention.”

“It is good and right,” O’Gorman insists, “to call this ‘wanting’ because ‘wanting’ suggests that things, even machine things, have an active presence in our life—they are intentional” and, what’s more, their “active presence cannot be neatly traced back to their design and, ultimately, some intending human.”

I agree with O’Gorman that the ethical considerations are paramount, but I’m finally unpersuaded that we are on firmer ground when we speak of technologies wanting, even though I recognize the undeniable importance of the dynamics that O’Gorman wants to acknowledge by speaking so.

Consider what O’Gorman calls the “structure of intention.” I’m not sure intention is the best word to use here. Intentionality resides in the subjective experience of the “I,” but it is true, as phenomenologists have always recognized, that intentionality is not unilaterally directed by the self-consciously willing “I.” It has conscious and non-conscious dimensions, and it may be beckoned and solicited by the world that it simultaneously construes through the workings of perception.

I think we can get at what O’Gorman rightly wants us to acknowledge without attributing “wanting” to objects. We may say, for instance, that objects activate our wanting as they are intended to do by design and also in ways that are unintended by any person. But it’s best to think of this latter wanting as an unpredictable surplus of human intentionality rather than to posit a non-human source of wanting. The wanting is always mine, but it may be prompted, solicited, activated, encouraged, fostered, etc. by aspects of the non-human world. So, we may correctly talk about a structure of desire that incorporates non-human aspects of the world and thereby acknowledge the situated nature of our own wanting. Within certain contexts, if we were so inclined, we may even call it a structure of temptation.

To fight the good fight, as it were, we must acknowledge how technology’s consequences exceed and slip loose of our cost/benefit analyses, our rational planning, and our best intentions. We must take seriously how the use of technologies shapes our perception of the world and both enables and constrains our thinking and acting. But talk about what technology wants will ultimately obscure moral responsibility. “What the machine/algorithm wanted” too easily becomes the new “I was just following orders.” I believe this to be true because I believe that we have a proclivity to evade responsibility. Best, then, not to allow our language to abet our evasions.

The Spectrum of Attention

Late last month, Alan Jacobs presented 79 Theses on Technology at a seminar hosted by the Institute for Advanced Studies in Culture at the University of Virginia. The theses, dealing chiefly with the problem of attention in digital culture, were posted to the Infernal Machine, a terrific blog hosted by the Institute and edited by Chad Wellmon, devoted to reflection on technology, ethics, and the human person. I’ve long thought very highly of both Jacobs and the Institute, so when Wellmon kindly extended an invitation to attend the seminar, I gladly and gratefully accepted.

Wellmon has also arranged for a series of responses to Jacobs’ theses, which have appeared on The Infernal Machine. Each of these is worth considering. In my response, “The Spectrum of Attention,” I took the opportunity to work out a provisional taxonomy of attention that considers the difference our bodies and our tools make to what we generally call attention.

Here’s a quick excerpt:

We can think of attention as a dance whereby we both lead and are led. This image suggests that receptivity and directedness do indeed work together. The proficient dancer knows when to lead and when to be led, and she also knows that such knowledge emerges out of the dance itself. This analogy reminds us, as well, that attention is the unity of body and mind making its way in a world that can be solicitous of its attention. The analogy also raises a critical question: How ought we conceive of attention given that we are embodied creatures?

Click through to read the rest.

Fit the Tool to the Person, Not the Person to the Tool

I recently had a conversation with a student about the ethical quandaries raised by the advent of self-driving cars. Hypothetically, for instance, how would a self-driving car react to a pedestrian who stepped out in front of it? Whose safety would it be programmed to privilege?

The relatively tech-savvy student was unfazed. Obviously this would only be a problem until pedestrians were forced out of the picture. He took it for granted that the recalcitrant human element would be eliminated as a matter of course in order to perfect the technological system. I don’t think he took this to be a “good” solution, but he intuited the sad truth that we are more likely to bend the person to fit the technological system than to design the system to fit the person.

Not too long ago, I made a similar observation:

… any system that encourages machine-like behavior from its human components is a system poised to eventually eliminate the human element altogether. To give it another turn, we might frame it as a paradox of complexity. As human beings create powerful and complex technologies, they must design complex systemic environments to ensure their safe operation. These environments sustain further complexity by disciplining human actors to abide by the necessary parameters. Complexity is achieved by reducing human action to the patterns of the system; consequently, there comes a point when further complexity can only be achieved by discarding the human element altogether. When we design systems that work best the more machine-like we become, we shouldn’t be surprised when the machines ultimately render us superfluous.

A few days ago, Elon Musk put it all very plainly:

“Tesla co-founder and CEO Elon Musk believes that cars you can control will eventually be outlawed in favor of ones that are controlled by robots. The simple explanation: Musk believes computers will do a much better job than us to the point where, statistically, humans would be a liability on roadways [….] Musk said that the obvious move is to outlaw driving cars. ‘It’s too dangerous,’ Musk said. ‘You can’t have a person driving a two-ton death machine.'”

Mind you, such a development, were it to transpire, would be quite a boon for the owner of a company working on self-driving cars. And we should also bear in mind Dale Carrico’s admonition “to consider what these nonsense predictions symptomize in the way of present fears and desires and to consider what present constituencies stand to benefit from the threats and promises these predictions imply.”

If autonomous cars become the norm and transportation systems are designed to accommodate their needs, it will not have happened because of some force inherent in the technology itself. It will happen because interested parties will make it happen, with varying degrees of acquiescence from the general public.

This was precisely the case with the emergence of the modern highway system that we take for granted. Its development was not a foregone conclusion. It was heavily promoted by government and industry. As Walter Lippmann observed during the 1939 World’s Fair, “General Motors has spent a small fortune to convince the American public that if it wishes to enjoy the full benefit of private enterprise in motor manufacturing, it will have to rebuild its cities and its highways by public enterprise.”

Consider as well the film below, produced by Dow Chemical in support of the 1956 Federal-Aid Highway Act:

Whatever you think about the virtues or vices of the highway system and a transportation network premised on the primacy of the automobile, my point is that such a system did not emerge in a cultural or political vacuum. Choices were made; political will was exerted; money was spent. So it is now, and so it will be tomorrow.

Stuck Behind a Plow in India

So this is going to come off as more than a bit cynical, but, for what it’s worth, I don’t intend it to be.

Over the last few weeks, I’ve heard an interesting claim expressed by disparate people in strikingly similar language. The claim was always some variation of the following: the most talented person in the world is most likely stuck behind a plow in some third world country. The recurring formulation caught my attention, so I went looking for the source.

As it turns out, sometime in 2014, Google’s chief economist, Hal Varian, proposed the following:

“The biggest impact on the world will be universal access to all human knowledge. The smartest person in the world currently could well be stuck behind a plow in India or China. Enabling that person — and the millions like him or her — will have a profound impact on the development of the human race.”

It occurred to me that this “stuck behind a plow” claim is the 21st century version of the old “rags to riches” story. The rags to riches story promoted certain virtues–hard work, resilience, thrift, etc.–by promising that they would be extravagantly rewarded. Of course, such extravagant rewards have always been rare and rarely correlated to how hard one might be willing to work. Which is not, I hasten to add, a knock against hard work and its rewards, such as they may be. But, to put the point more critically, the genre served interests other than those of its ostensible audience. And so it is with the “stuck behind a plow” pitch.

The “rags to riches/stuck behind a plow” narrative is an egalitarian story, at least on the surface. It inspires the hope that an undiscovered Everyman languishing in impoverished obscurity, properly enabled, can hope to be a person of world-historical consequence, or at least remarkably prosperous. It’s a happy claim, and, of course, impossible to refute–not that I’m particularly interested in refuting the possibility.

The problem, as I see it, is that, coming from the would-be noble enablers, it’s also a wildly convenient, self-serving claim. Who but Google could enable such benighted souls by providing universal access to all human knowledge?

Never mind that the claim is hyperbolic and traffics in an impoverished notion of what counts as knowledge. Never mind, as well, that, even if we grant the hyperbole, access to knowledge by itself cannot transform a society, cure its ills, heal its injustices, or lift the poor out of their poverty.

I’m reminded of one of my favorite lines in Conrad’s Heart of Darkness. Before he ships off to the Congo, Marlow’s aunt, who had helped secure his job with the Company, gushes about the nobility of work he is undertaking. Marlow would be “something like an emissary of light, something like a lower sort of apostle.” In her view, he would be “weaning those ignorant millions from their horrid ways.”

Then comes the wonderfully deadpanned line that we would do well to remember:

“I ventured to hint that the Company was run for profit.”

The Ageless and the Useless

In The Religion of the Future, Roberto Unger, a professor of law at Harvard, identifies humanity’s three “irreparable flaws”: mortality, groundlessness, and insatiability. We are plagued by death. We are fundamentally ignorant about our origins and our place in the grand scheme of things. We are made perpetually restless by desires that cannot finally be satisfied. This is the human condition. In his view, all of the world’s major religions have tried to address these three irreparable flaws, and they have all failed. It is now time, he proposes, to envision a new religion that will be adequate to the challenges of the 21st century. His own proposal is a rather vague program of learning to be more god-like while eschewing certain god-like qualities, such as immortality, omniscience, and perfectibility. It strikes me as less than actionable.

There is, however, another religious option taking shape. In a wide-ranging Edge interview with Daniel Kahneman about the unfolding future, historian Yuval Noah Harari concluded with the following observation:

“In terms of history, the events in Middle East, of ISIS and all of that, is just a speed bump on history’s highway. The Middle East is not very important. Silicon Valley is much more important. It’s the world of the 21st century … I’m not speaking only about technology. In terms of ideas, in terms of religions, the most interesting place today in the world is Silicon Valley, not the Middle East. This is where people like Ray Kurzweil, are creating new religions. These are the religions that will take over the world, not the ones coming out of Syria and Iraq and Nigeria.”

This is hardly an original claim, although it’s not clear that Harari recognizes this. Indeed, just a few months ago I commented on another Edge conversation in which Jaron Lanier took aim at the “layer of religious thinking” being added “to what otherwise should be a technical field.” Lanier was talking about the field of AI. He went on to complain about a “core of technically proficient, digitally-minded people” who “reject traditional religions and superstitions,” but then “re-create versions of those old religious superstitions!” “In the technical world,” he added, “these superstitions are just as confusing and just as damaging as before, and in similar ways.”

This emerging Silicon Valley religion, which is just the latest iteration of the religion of technology, is devoted to addressing one of the three irreparable flaws identified by Unger: our mortality. From this angle it becomes apparent that there are two schools within this religious tradition. The first of these seeks immortality through the digitization of consciousness so that it may be downloaded and preserved forever. Decoupled from corruptible bodies, our essential self lives on in the cloud–a metaphor that now appears in a new light. We may call this the gnostic strain of the Silicon Valley religion.

The second school grounds its slightly more plausible hopes for immortality in the prospect of making the body imperishable through biogenetic and cyborg enhancements. It is this prospect that Harari takes to be a serious possibility:

“Yes, the attitude now towards disease and old age and death is that they are basically technical problems. It is a huge revolution in human thinking. Throughout history, old age and death were always treated as metaphysical problems, as something that the gods decreed, as something fundamental to what defines humans, what defines the human condition and reality ….

People never die because the Angel of Death comes, they die because their heart stops pumping, or because an artery is clogged, or because cancerous cells are spreading in the liver or somewhere. These are all technical problems, and in essence, they should have some technical solution. And this way of thinking is now becoming very dominant in scientific circles, and also among the ultra-rich who have come to understand that, wait a minute, something is happening here. For the first time in history, if I’m rich enough, maybe I don’t have to die.”

Harari expands on that last line a little further on:

“Death is optional. And if you think about it from the viewpoint of the poor, it looks terrible, because throughout history, death was the great equalizer. The big consolation of the poor throughout history was that okay, these rich people, they have it good, but they’re going to die just like me. But think about the world, say, in 50 years, 100 years, where the poor people continue to die, but the rich people, in addition to all the other things they get, also get an exemption from death. That’s going to bring a lot of anger.”

Kahneman pressed Harari on this point. Won’t the medical technology that yields radical life extension trickle down to the masses? In response, Harari draws on a second prominent theme that runs throughout the conversation: superfluous humans.

“But in the 21st century, there is a good chance that most humans will lose, they are losing, their military and economic value. This is true for the military, it’s done, it’s over …. And once most people are no longer really necessary, for the military and for the economy, the idea that you will continue to have mass medicine is not so certain.”

There is a lot to consider in these few paragraphs, but here are what I take to be the three salient points: the problem solving approach to death, the coming radical inequality, and the problem of “useless people.”

Harari is admirably frank about his status as a historian and the nature of the predictions he is making. He acknowledges that he is neither a technologist nor a physician and that he is merely extrapolating possible futures from observable trends. That said, I think Harari’s discussion is compelling not only because of the elegance of his synthesis, but also because it steers clear of the more improbable possibilities–he does not think that AI will become conscious, for instance. It also helps that he is chastened by a historian’s understanding of the contingency of human affairs.

He is almost certainly right about the transformation of death into a technical problem. Adumbrations of this attitude are present at the very beginnings of modern science. Francis Bacon, the great Elizabethan promoter of modern science, wrote in his History of Life and Death, “Whatever can be repaired gradually without destroying the original whole is, like the vestal fire, potentially eternal.” Elsewhere, he gave as the goal of the pursuit of knowledge “a discovery of all operations and possibilities of operations from immortality (if it were possible) to the meanest mechanical practice.”

In the 1950s, Hannah Arendt anticipated these concerns as well when, in the Prologue to The Human Condition, she wrote about the “hope to extend man’s life-span far beyond the hundred-year limit.” “This future man,” she added,

“whom scientists tell us they will produce in no more than a hundred years seems to be possessed by a rebellion against human existence as it has been given, a free gift from nowhere (secularly speaking), which he wishes to exchange, as it were, for something he has made himself. There is no reason to doubt our abilities to accomplish such an exchange, just as there is no reason to doubt our present ability to destroy all organic life on earth.”

Approaching death as a technical problem will surely yield some tangible benefits even if it fails to deliver immortality or even radical life extension. But what will be the costs? Even if it fails to yield a “solution,” turning death into a technical problem will have profound social, psychological, and moral consequences. How will it affect the conduct of my life? How will this approach help us face death when it finally comes? As Harari himself puts it, “My guess, which is only a guess, is that the people who live today, and who count on the ability to live forever, or to overcome death in 50 years, 60 years, are going to be hugely disappointed. It’s one thing to accept that I’m going to die. It’s another thing to think that you can cheat death and then die eventually. It’s much harder.”

Strikingly, Arendt also commented on “the advent of automation, which in a few decades probably will empty the factories and liberate mankind from its oldest and most natural burden, the burden of laboring and the bondage to necessity.” If this appears to us as an unmitigated blessing, Arendt would have us think otherwise:

“The modern age has carried with it a theoretical glorification of labor and has resulted in a factual transformation of the whole of society into a laboring society. The fulfillment of the wish, therefore, like the fulfillment of wishes in fairy tales, comes at a moment when it can only be self-defeating. It is a society of laborers which is about to be liberated from the fetters of labor, and this society does no longer know of those other higher and more meaningful activities for the sake of which this freedom would deserve to be won . . . What we are confronted with is the prospect of a society of laborers without labor, that is, without the only activity left to them. Surely, nothing could be worse.”

So we are back to useless people. Interestingly, Harari locates this possibility in a long trend toward specialization that has been unfolding for some time:

“And when you look at it more and more, for most of the tasks that humans are needed for, what is required is just intelligence, and a very particular type of intelligence, because we are undergoing, for thousands of years, a process of specialization, which makes it easier to replace us.”

Intelligence as opposed to consciousness. Harari makes the point that the two have been paired throughout human history. Increasingly, we are able to create intelligence apart from consciousness. The intelligence is very limited; it may be able to do one thing extremely well but utterly fail at other, seemingly simple tasks. But specialization, or the division of labor, has opened the door for the replacement of human or consciousness-based intelligence with machine intelligence. In other words, the mechanization of human action prepares the way for the replacement of human actors.

Some may object by noting that similar predictions have been made before and have not materialized. I think Harari’s rejoinder is spot on:

“And again, I don’t want to give a prediction, 20 years, 50 years, 100 years, but what you do see is it’s a bit like the boy who cried wolf, that, yes, you cry wolf once, twice, three times, and maybe people say yes, 50 years ago, they already predicted that computers will replace humans, and it didn’t happen. But the thing is that with every generation, it is becoming closer, and predictions such as these fuel the process.”

I’ve noted before that utopians often take the moral of Chicken Little for their interpretive paradigm: the sky never falls. Better, I think, as Harari also suggests, to consider the wisdom of the story of the boy who cried wolf.

I would add here that the plausibility of these predictions is only part of what makes them interesting or disconcerting, depending on your perspective. Even if these predictions turn out to be far off the mark, they are instructive as symptoms. As Dale Carrico has put it, the best response to futurist rhetoric may be “to consider what these nonsense predictions symptomize in the way of present fears and desires and to consider what present constituencies stand to benefit from the threats and promises these predictions imply.”

Moreover, to the degree that these predictions are extrapolations from present trends, they may reveal something to us about these existing tendencies. Along these lines, I think the very idea of “useless people” tells us something of interest about the existing trend to outsource a wide range of human actions to machines and apps. This outsourcing presents itself as a great boon, of course, but finally it raises a question: What exactly are we being liberated for?

It’s a point I’ve raised before in connection to the so-called programmable world of the Internet of Things:

For some people at least, the idea seems to be that when we are freed from these mundane and tedious activities, we will be free to finally tap the real potential of our humanity. It’s as if there were some abstract plane of human existence that no one had yet achieved because we were fettered by our need to be directly engaged with the material world. I suppose that makes this a kind of gnostic fantasy. When we no longer have to tend to the world, we can focus on … what exactly?

Put the possibility of even marginally extended life-spans together with the reductio ad absurdum of digital outsourcing, and we can render an even more pointed version of Arendt’s warning about a society of laborers without labor. We are being promised the extension of human life precisely when we have lost any compelling account of what exactly we should do with our lives.

As for what to do about the problem of useless people, or the permanently unemployed, Harari is less than sanguine:

“I don’t have a solution, and the biggest question maybe in economics and politics of the coming decades will be what to do with all these useless people. I don’t think we have an economic model for that. My best guess, which is just a guess, is that food will not be a problem. With that kind of technology, you will be able to produce food to feed everybody. The problem is more boredom, and what to do with people, and how will they find some sense of meaning in life when they are basically meaningless, worthless.

My best guess at present is a combination of drugs and computer games as a solution for most … it’s already happening. Under different titles, different headings, you see more and more people spending more and more time, or solving their inner problems with drugs and computer games, both legal drugs and illegal drugs. But this is just a wild guess.”

Of course, as Harari states repeatedly, all of this is conjecture. Certainly, the future need not unfold this way. Arendt, after commenting on the desire to break free of the human condition by the deployment of our technical know-how, added,

“The question is only whether we wish to use our new scientific and technical knowledge in this direction, and this question cannot be decided by scientific means; it is a political question of the first order and therefore can hardly be left to the decision of professional scientists or professional politicians.”

Or, as Marshall McLuhan put it, “There is absolutely no inevitability as long as there is a willingness to contemplate what is happening.”