Machines, Work, and the Value of People

Late last month, Microsoft released a “bot” that guesses your age based on an uploaded picture. The bot tended to be only marginally accurate and sometimes hilariously (or disconcertingly) wrong. What’s more, people quickly began having some fun with the program by uploading faces of actors playing fictional characters, such as Yoda or Gandalf. My favorite was Ian Bogost’s submission:

Shortly after the How Old bot had its fleeting moment of virality, Nathan Jurgenson tweeted the following:

This was an interesting observation, and it generated a few interesting replies. Jurgenson himself added, “much of the bigdata/algorithm debates miss how poor these often perform. many critiques presuppose & reify their untenable positivism.” He summed up this line of thought with this tweet: “so much ‘tech criticism’ starts first with uncritically buying all of the hype silicon valley spits out.”

Let’s pause here for a moment. All of this is absolutely true. Yet … it’s not all hype, not necessarily anyway. Let’s bracket the more outlandish claims made by the singularity crowd, of course. But take facial recognition software, for instance. It doesn’t strike me as wildly implausible that in the near future facial recognition programs will achieve a rather striking degree of accuracy.

Along these lines, I found Kyle Wrather’s replies to Jurgenson’s tweet particularly interesting. First, Wrather noted, “[How Old Bot] being wrong makes people more comfortable w/ facial recognition b/c it seems less threatening.” He then added, “I think people would be creeped out if we’re totally accurate. When it’s wrong, humans get to be ‘superior.'”

Wrather’s second comment points to an intriguing psychological dynamic. Certain technologies generate a degree of anxiety about the relative status of human beings or about what exactly makes human beings “special”–call it post-humanist angst, if you like.

Of course, not all technologies generate this sort of angst. When it first appeared, the airplane was greeted with awe and a little battiness (consider alti-man). But as far as I know, it did not result in any widespread fears about the nature and status of human beings. The seemingly obvious reason for this is that flying is not an ability that has ever defined what it means to be a human being.

It seems, then, that anxiety about new technologies is sometimes entangled with shifting assumptions about the nature or dignity of humanity. In other words, the fear that machines, computers, or robots might displace human beings may or may not materialize, but it does tell us something about how human nature is understood.

Is it that new technologies disturb existing, tacit beliefs about what it means to be a human, or is it the case that these beliefs arise in response to a new perceived threat posed by technology? I’m not entirely sure, but some sort of dialectical relationship is involved.

A few examples come to mind, and they track closely to the evolution of labor in Western societies.

During the early modern period, perhaps owing something to the Reformation’s insistence on the dignity of secular work, the worth of a human being gets anchored to their labor, most of which is, at this point in history, manual labor. The dignity of the manual laborer is later challenged by mechanization during the 18th and 19th centuries, and this results in a series of protest movements, most famously that of the Luddites.

Eventually, a new consensus emerges around the dignity of factory work, and this is, in turn, challenged by the advent of new forms of robotic and computerized labor in the mid-twentieth century.

Enter the so-called knowledge worker, whose short-lived ascendancy is presently threatened by advances in computers and AI.

I think this latter development helps explain our present fascination with creativity. It’s been over a decade since Richard Florida published The Rise of the Creative Class, but interest in and pontificating about creativity continues apace. What I’m suggesting is that this fixation on creativity is another recalibration of what constitutes valuable, dignified labor, which is also, less obviously perhaps, what is taken to constitute the value and dignity of the person. Manual labor and factory jobs give way to knowledge work, which now surrenders to creative work. As they say, nice work if you can get it.

Interestingly, each re-configuration not only elevated a new form of labor, but it also devalued the form of labor being displaced. Manual labor, factory work, even knowledge work, once accorded dignity and respect, are each reframed as tedious, servile, monotonous, and degrading just as they are being replaced. If a machine can do it, it suddenly becomes sub-human work.

(It’s also worth noting how displaced forms of work seem to re-emerge and regain their dignity in certain circles. I’m presently thinking of Matthew Crawford’s defense of manual labor and the trades. Consider as well this lecture by Richard Sennett, “The Decline of the Skills Society.”)

It’s not hard to find these rhetorical dynamics at play in the countless presently unfolding discussions of technology, labor, and what human beings are for. Take as just one example this excerpt from the recent New Yorker profile of venture capitalist Marc Andreessen (emphasis mine):

Global unemployment is rising, too—this seems to be the first industrial revolution that wipes out more jobs than it creates. One 2013 paper argues that forty-seven per cent of all American jobs are destined to be automated. Andreessen argues that his firm’s entire portfolio is creating jobs, and that such companies as Udacity (which offers low-cost, online “nanodegrees” in programming) and Honor (which aims to provide better and better-paid in-home care for the elderly) bring us closer to a future in which everyone will either be doing more interesting work or be kicking back and painting sunsets. But when I brought up the raft of data suggesting that intra-country inequality is in fact increasing, even as it decreases when averaged across the globe—America’s wealth gap is the widest it’s been since the government began measuring it—Andreessen rerouted the conversation, saying that such gaps were “a skills problem,” and that as robots ate the old, boring jobs humanity should simply retool. “My response to Larry Summers, when he says that people are like horses, they have only their manual labor to offer”—he threw up his hands. “That is such a dark and dim and dystopian view of humanity I can hardly stand it!”

As always, it is important to ask a series of questions:  Who’s selling what? Who stands to profit? Whose interests are being served? Etc. With those considerations in mind, it is telling that leisure has suddenly and conveniently re-emerged as a goal of human existence. Previous fears about technologically driven unemployment have ordinarily been met by assurances that different and better jobs would emerge. It appears that pretense is being dropped in favor of vague promises of a future of jobless leisure. So, it seems we’ve come full circle to classical estimations of work and leisure: all work is for chumps and slaves. You may be losing your job, but don’t worry, work is for losers anyway.

So, to sum up: Some time ago, identity and a sense of self-worth got hitched to labor and productivity. Consequently, each new technological displacement of human work appears to those being displaced as an affront to their dignity as human beings. Those advancing new technologies that displace human labor do so by demeaning existing work as beneath our humanity and promising more humane work as a consequence of technological change. While this is sometimes true–some work that human beings have been forced to perform has been inhuman–deployed as a universal truth, it is little more than rhetorical cover for a significantly more complex and ambivalent reality.

Fit the Tool to the Person, Not the Person to the Tool

I recently had a conversation with a student about the ethical quandaries raised by the advent of self-driving cars. Hypothetically, for instance, how would a self-driving car react to a pedestrian who stepped out in front of it? Whose safety would it be programmed to privilege?

The relatively tech-savvy student was unfazed. Obviously this would only be a problem until pedestrians were forced out of the picture. He took it for granted that the recalcitrant human element would be eliminated as a matter of course in order to perfect the technological system. I don’t think he took this to be a “good” solution, but he intuited the sad truth that we are more likely to bend the person to fit the technological system than to design the system to fit the person.

Not too long ago, I made a similar observation:

… any system that encourages machine-like behavior from its human components is a system poised to eventually eliminate the human element altogether. To give it another turn, we might frame it as a paradox of complexity. As human beings create powerful and complex technologies, they must design complex systemic environments to ensure their safe operation. These environments sustain further complexity by disciplining human actors to abide by the necessary parameters. Complexity is achieved by reducing human action to the patterns of the system; consequently, there comes a point when further complexity can only be achieved by discarding the human element altogether. When we design systems that work best the more machine-like we become, we shouldn’t be surprised when the machines ultimately render us superfluous.

A few days ago, Elon Musk put it all very plainly:

“Tesla co-founder and CEO Elon Musk believes that cars you can control will eventually be outlawed in favor of ones that are controlled by robots. The simple explanation: Musk believes computers will do a much better job than us to the point where, statistically, humans would be a liability on roadways [….] Musk said that the obvious move is to outlaw driving cars. ‘It’s too dangerous,’ Musk said. ‘You can’t have a person driving a two-ton death machine.'”

Mind you, such a development, were it to transpire, would be quite a boon for the owner of a company working on self-driving cars. And we should also bear in mind Dale Carrico’s admonition “to consider what these nonsense predictions symptomize in the way of present fears and desires and to consider what present constituencies stand to benefit from the threats and promises these predictions imply.”

If autonomous cars become the norm and transportation systems are designed to accommodate their needs, it will not have happened because of some force inherent in the technology itself. It will happen because interested parties will make it happen, with varying degrees of acquiescence from the general public.

This was precisely the case with the emergence of the modern highway system that we take for granted. Its development was not a foregone conclusion. It was heavily promoted by government and industry. As Walter Lippmann observed during the 1939 World’s Fair, “General Motors has spent a small fortune to convince the American public that if it wishes to enjoy the full benefit of private enterprise in motor manufacturing, it will have to rebuild its cities and its highways by public enterprise.”

Consider as well the film below produced by Dow Chemical in support of the 1956 Federal-Aid Highway Act:

Whatever you think about the virtues or vices of the highway system and a transportation system premised on the primacy of the automobile, my point is that such a system did not emerge in a cultural or political vacuum. Choices were made; political will was exerted; money was spent. So it is now, and so it will be tomorrow.

Stuck Behind a Plow in India

So this is going to come off as more than a bit cynical, but, for what it’s worth, I don’t intend it to be.

Over the last few weeks, I’ve heard an interesting claim expressed by disparate people in strikingly similar language. The claim was always some variation of the following: the most talented person in the world is most likely stuck behind a plow in some third world country. The recurring formulation caught my attention, so I went looking for the source.

As it turns out, sometime in 2014, Google’s chief economist, Hal Varian, proposed the following:

“The biggest impact on the world will be universal access to all human knowledge. The smartest person in the world currently could well be stuck behind a plow in India or China. Enabling that person — and the millions like him or her — will have a profound impact on the development of the human race.”

It occurred to me that this “stuck behind a plow” claim is the 21st century version of the old “rags to riches” story. The rags to riches story promoted certain virtues–hard work, resilience, thrift, etc.–by promising that they would be extravagantly rewarded. Of course, such extravagant rewards have always been rare and rarely correlated with how hard one might be willing to work. Which is not, I hasten to add, a knock against hard work and its rewards, such as they may be. But, to put the point more critically, the genre served interests other than those of its ostensible audience. And so it is with the “stuck behind a plow” pitch.

The “rags to riches/stuck behind a plow” narrative is an egalitarian story, at least on the surface. It inspires the hope that an undiscovered Everyman languishing in impoverished obscurity might, properly enabled, become a person of world-historical consequence, or at least remarkably prosperous. It’s a happy claim, and, of course, impossible to refute–not that I’m particularly interested in refuting the possibility.

The problem, as I see it, is that, coming from the would-be noble enablers, it’s also a wildly convenient, self-serving claim. Who but Google could enable such benighted souls by providing universal access to all human knowledge?

Never mind that the claim is hyperbolic and traffics in an impoverished notion of what counts as knowledge. Never mind, as well, that, even if we grant the hyperbole, access to knowledge by itself cannot transform a society, cure its ills, heal its injustices, or lift the poor out of their poverty.

I’m reminded of one of my favorite lines in Conrad’s Heart of Darkness. Before he ships off to the Congo, Marlow’s aunt, who had helped secure his job with the Company, gushes about the nobility of work he is undertaking. Marlow would be “something like an emissary of light, something like a lower sort of apostle.” In her view, he would be “weaning those ignorant millions from their horrid ways.”

Then comes the wonderfully deadpanned line that we would do well to remember:

“I ventured to hint that the Company was run for profit.”

The Ageless and the Useless

In The Religion of the Future, Roberto Unger, a professor of law at Harvard, identifies humanity’s three “irreparable flaws”: mortality, groundlessness, and insatiability. We are plagued by death. We are fundamentally ignorant about our origins and our place in the grand scheme of things. We are made perpetually restless by desires that cannot finally be satisfied. This is the human condition. In his view, all of the world’s major religions have tried to address these three irreparable flaws, and they have all failed. It is now time, he proposes, to envision a new religion that will be adequate to the challenges of the 21st century. His own proposal is a rather vague program of learning to be at once more god-like while eschewing certain god-like qualities, such as immortality, omniscience, and perfectibility. It strikes me as less than actionable.

There is, however, another religious option taking shape. In a wide-ranging Edge interview with Daniel Kahneman about the unfolding future, historian Yuval Noah Harari concluded with the following observation:

“In terms of history, the events in Middle East, of ISIS and all of that, is just a speed bump on history’s highway. The Middle East is not very important. Silicon Valley is much more important. It’s the world of the 21st century … I’m not speaking only about technology. In terms of ideas, in terms of religions, the most interesting place today in the world is Silicon Valley, not the Middle East. This is where people like Ray Kurzweil, are creating new religions. These are the religions that will take over the world, not the ones coming out of Syria and Iraq and Nigeria.”

This is hardly an original claim, although it’s not clear that Harari recognizes this. Indeed, just a few months ago I commented on another Edge conversation in which Jaron Lanier took aim at the “layer of religious thinking” being added “to what otherwise should be a technical field.” Lanier was talking about the field of AI. He went on to complain about a “core of technically proficient, digitally-minded people” who “reject traditional religions and superstitions,” but then “re-create versions of those old religious superstitions!” “In the technical world,” he added, “these superstitions are just as confusing and just as damaging as before, and in similar ways.”

This emerging Silicon Valley religion, which is just the latest iteration of the religion of technology, is devoted to addressing one of the three irreparable flaws identified by Unger: our mortality. From this angle it becomes apparent that there are two schools within this religious tradition. The first of these seeks immortality through the digitization of consciousness so that it may be downloaded and preserved forever. Decoupled from corruptible bodies, our essential self lives on in the cloud–a metaphor that now appears in a new light. We may call this the gnostic strain of the Silicon Valley religion.

The second school grounds its slightly more plausible hopes for immortality in the prospect of making the body imperishable through biogenetic and cyborg enhancements. It is this prospect that Harari takes to be a serious possibility:

“Yes, the attitude now towards disease and old age and death is that they are basically technical problems. It is a huge revolution in human thinking. Throughout history, old age and death were always treated as metaphysical problems, as something that the gods decreed, as something fundamental to what defines humans, what defines the human condition and reality ….

People never die because the Angel of Death comes, they die because their heart stops pumping, or because an artery is clogged, or because cancerous cells are spreading in the liver or somewhere. These are all technical problems, and in essence, they should have some technical solution. And this way of thinking is now becoming very dominant in scientific circles, and also among the ultra-rich who have come to understand that, wait a minute, something is happening here. For the first time in history, if I’m rich enough, maybe I don’t have to die.”

Harari expands on that last line a little further on:

“Death is optional. And if you think about it from the viewpoint of the poor, it looks terrible, because throughout history, death was the great equalizer. The big consolation of the poor throughout history was that okay, these rich people, they have it good, but they’re going to die just like me. But think about the world, say, in 50 years, 100 years, where the poor people continue to die, but the rich people, in addition to all the other things they get, also get an exemption from death. That’s going to bring a lot of anger.”

Kahneman pressed Harari on this point. Won’t the medical technology that yields radical life extension trickle down to the masses? In response, Harari draws on a second prominent theme that runs throughout the conversation: superfluous humans.

“But in the 21st century, there is a good chance that most humans will lose, they are losing, their military and economic value. This is true for the military, it’s done, it’s over …. And once most people are no longer really necessary, for the military and for the economy, the idea that you will continue to have mass medicine is not so certain.”

There is a lot to consider in these few paragraphs, but here are what I take to be the three salient points: the problem solving approach to death, the coming radical inequality, and the problem of “useless people.”

Harari is admirably frank about his status as a historian and the nature of the predictions he is making. He acknowledges that he is neither a technologist nor a physician and that he is merely extrapolating possible futures from observable trends. That said, I think Harari’s discussion is compelling not only because of the elegance of his synthesis, but also because it steers clear of the more improbable possibilities–he does not think that AI will become conscious, for instance. It also helps that he is chastened by a historian’s understanding of the contingency of human affairs.

He is almost certainly right about the transformation of death into a technical problem. Adumbrations of this attitude are present at the very beginnings of modern science. Francis Bacon, the great Elizabethan promoter of modern science, wrote in his History of Life and Death, “Whatever can be repaired gradually without destroying the original whole is, like the vestal fire, potentially eternal.” Elsewhere, he gave as the goal of the pursuit of knowledge “a discovery of all operations and possibilities of operations from immortality (if it were possible) to the meanest mechanical practice.”

In the 1950s, Hannah Arendt anticipated these concerns as well when, in the Prologue to The Human Condition, she wrote about the “hope to extend man’s life-span far beyond the hundred-year limit.” “This future man,” she added,

“whom scientists tell us they will produce in no more than a hundred years seems to be possessed by a rebellion against human existence as it has been given, a free gift from nowhere (secularly speaking), which he wishes to exchange, as it were, for something he has made himself. There is no reason to doubt our abilities to accomplish such an exchange, just as there is no reason to doubt our present ability to destroy all organic life on earth.”

Approaching death as a technical problem will surely yield some tangible benefits even if it fails to deliver immortality or even radical life extension. But what will be the costs? Even if it fails to yield a “solution,” turning death into a technical problem will have profound social, psychological, and moral consequences. How will it affect the conduct of my life? How will this approach help us face death when it finally comes? As Harari himself puts it, “My guess, which is only a guess, is that the people who live today, and who count on the ability to live forever, or to overcome death in 50 years, 60 years, are going to be hugely disappointed. It’s one thing to accept that I’m going to die. It’s another thing to think that you can cheat death and then die eventually. It’s much harder.”

Strikingly, Arendt also commented on “the advent of automation, which in a few decades probably will empty the factories and liberate mankind from its oldest and most natural burden, the burden of laboring and the bondage to necessity.” If this appears to us as an unmitigated blessing, Arendt would have us think otherwise:

“The modern age has carried with it a theoretical glorification of labor and has resulted in a factual transformation of the whole of society into a laboring society. The fulfillment of the wish, therefore, like the fulfillment of wishes in fairy tales, comes at a moment when it can only be self-defeating. It is a society of laborers which is about to be liberated from the fetters of labor, and this society does no longer know of those other higher and more meaningful activities for the sake of which this freedom would deserve to be won . . . What we are confronted with is the prospect of a society of laborers without labor, that is, without the only activity left to them. Surely, nothing could be worse.”

So we are back to useless people. Interestingly, Harari locates this possibility in a long trend toward specialization that has been unfolding for some time:

“And when you look at it more and more, for most of the tasks that humans are needed for, what is required is just intelligence, and a very particular type of intelligence, because we are undergoing, for thousands of years, a process of specialization, which makes it easier to replace us.”

Intelligence as opposed to consciousness. Harari makes the point that the two have been paired throughout human history. Increasingly, we are able to create intelligence apart from consciousness. This intelligence is very limited; it may be able to do one thing extremely well yet utterly fail at other, seemingly simple tasks. But specialization, or the division of labor, has opened the door for the replacement of human or consciousness-based intelligence with machine intelligence. In other words, the mechanization of human action prepares the way for the replacement of human actors.

Some may object by noting that similar predictions have been made before and have not materialized. I think Harari’s rejoinder is spot on:

“And again, I don’t want to give a prediction, 20 years, 50 years, 100 years, but what you do see is it’s a bit like the boy who cried wolf, that, yes, you cry wolf once, twice, three times, and maybe people say yes, 50 years ago, they already predicted that computers will replace humans, and it didn’t happen. But the thing is that with every generation, it is becoming closer, and predictions such as these fuel the process.”

I’ve noted before that utopians often take the moral of Chicken Little for their interpretive paradigm: the sky never falls. Better, I think, as Harari also suggests, to consider the wisdom of the story of the boy who cried wolf.

I would add here that the plausibility of these predictions is only part of what makes them interesting or disconcerting, depending on your perspective. Even if these predictions turn out to be far off the mark, they are instructive as symptoms. As Dale Carrico has put it, the best response to futurist rhetoric may be “to consider what these nonsense predictions symptomize in the way of present fears and desires and to consider what present constituencies stand to benefit from the threats and promises these predictions imply.”

Moreover, to the degree that these predictions are extrapolations from present trends, they may reveal something to us about these existing tendencies. Along these lines, I think the very idea of “useless people” tells us something of interest about the existing trend to outsource a wide range of human actions to machines and apps. This outsourcing presents itself as a great boon, of course, but finally it raises a question: What exactly are we being liberated for?

It’s a point I’ve raised before in connection to the so-called programmable world of the Internet of Things:

For some people at least, the idea seems to be that when we are freed from these mundane and tedious activities, we will be free to finally tap the real potential of our humanity. It’s as if there were some abstract plane of human existence that no one had yet achieved because we were fettered by our need to be directly engaged with the material world. I suppose that makes this a kind of gnostic fantasy. When we no longer have to tend to the world, we can focus on … what exactly?

Put the possibility of even marginally extended life-spans together with the reductio ad absurdum of digital outsourcing, and we can render an even more pointed version of Arendt’s warning about a society of laborers without labor. We are being promised the extension of human life precisely when we have lost any compelling account of what exactly we should do with our lives.

As for what to do about the problem of useless people, or the permanently unemployed, Harari is less than sanguine:

“I don’t have a solution, and the biggest question maybe in economics and politics of the coming decades will be what to do with all these useless people. I don’t think we have an economic model for that. My best guess, which is just a guess, is that food will not be a problem. With that kind of technology, you will be able to produce food to feed everybody. The problem is more boredom, and what to do with people, and how will they find some sense of meaning in life when they are basically meaningless, worthless.

My best guess at present is a combination of drugs and computer games as a solution for most … it’s already happening. Under different titles, different headings, you see more and more people spending more and more time, or solving their inner problems with drugs and computer games, both legal drugs and illegal drugs. But this is just a wild guess.”

Of course, as Harari states repeatedly, all of this is conjecture. Certainly, the future need not unfold this way. Arendt, after commenting on the desire to break free of the human condition by the deployment of our technical know-how, added,

“The question is only whether we wish to use our new scientific and technical knowledge in this direction, and this question cannot be decided by scientific means; it is a political question of the first order and therefore can hardly be left to the decision of professional scientists or professional politicians.”

Or, as Marshall McLuhan put it, “There is absolutely no inevitability as long as there is a willingness to contemplate what is happening.”

Consider the Traffic Light Camera

It looks like I may be getting a traffic citation in the mail within the next few days. A few nights ago, while making a left into my neighborhood, I was slowed by a car that made a creeping right ahead of me onto the same street. As I finally completed my turn, I saw a bright flash go off behind me. While I’ve noted the proliferation of traffic light cameras around town with mildly disconcerted interest, I hadn’t yet noticed the camera on this rather inconsequential intersection. A day or two later at the same spot, I found myself coming to an abrupt stop once the light hit yellow to ensure that I wasn’t caught completing my turn as the light turned red. Automated surveillance had done its job; I had internalized the gaze of the unblinking eye.

For some time now I’ve been unsettled by the proliferation of traffic light cameras, but I’ve not yet been able to articulate why exactly. While Big Brother fears may be part of the concern, I’m mostly troubled by how the introduction of automated surveillance and ticketing seems to encourage the replacement of human judgment, erroneous as it may often be, by unthinking, habituated behavior.

The traffic light camera knows only that you have crossed or not crossed a certain point at a certain time. Its logic is binary: you are either out of the intersection or in it. Context matters not at all; there is no room for deliberation. Even if we can imagine a limited set of valid reasons for proceeding through a red light, automated ticketing cannot entertain them. The intermittently monitored yellow light invites judgment and practical wisdom; the unceasingly monitored yellow light tolerates only unwavering compliance.
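
To make the contrast concrete, here is a small, purely illustrative Python sketch. The field names and scenarios are hypothetical, not drawn from any actual enforcement system; the point is simply to set the camera’s single binary test beside the sort of contextual weighing a human observer might do:

```python
from dataclasses import dataclass

@dataclass
class Crossing:
    in_intersection_after_red: bool   # the only fact the camera can register
    yielding_to_ambulance: bool = False
    waved_through_by_officer: bool = False

def camera_verdict(c: Crossing) -> bool:
    """The automated check: one binary condition, no context."""
    return c.in_intersection_after_red

def human_verdict(c: Crossing) -> bool:
    """A (grossly simplified) human judgment: the same fact, weighed against context."""
    if not c.in_intersection_after_red:
        return False
    if c.yielding_to_ambulance or c.waved_through_by_officer:
        return False  # a defensible reason to be in the intersection
    return True

# The camera cites every late crossing; the human can entertain reasons.
crossing = Crossing(in_intersection_after_red=True, yielding_to_ambulance=True)
print(camera_verdict(crossing), human_verdict(crossing))  # True False
```

The point is not that the human rule is better written; it is that only the human rule has anywhere to put a reason.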

In this way, it hints at a certain pattern in the relationship between human beings and the complex technological systems we create. Take the work of getting from here to there as an example. Our baseline is walking. We can walk from here to there in just about any way that the terrain will allow and at whatever rate our needs dictate. And while the journey may have its perils, they are not inherent in walking itself. After all, it would be a strange accident indeed if I were incapacitated as a result of bumping into someone or even stumbling on a stone.

But walking is not necessarily the fastest or most efficient way of getting from here to there, especially if I have a sizable load to bear. Horse-drawn conveyances relieve me of the work of walking and potentially increase my rate of speed without, it seems to me, radically increasing the risks. But they also tend to limit my freedom of motion, a decently kept road being rather more of a necessity than it would’ve been for the walker. Desire paths illustrate this point neatly. The walker may make his own path, and frequently does so.

Then the train comes along. The train radically increased the rate of speed at which human beings may travel, but it also elevated risks–a derailment, after all, is not quite the same thing as a stumble–and restricted freedom of motion to the tracks laid out for it. It’s worth noting that the railway system was one of the first expansive technological systems requiring, for its efficient and safe operation, rigidly regimented and coordinated action. We even owe the time zones to the systematizing demands of the railroads. The railway system, then, was a massive feat of system building, and would-be travelers were integrated into this system.

The automobile, too, is powerful and potentially dangerous, and must also be carefully managed. Consequently, we created an elaborate system of roads and rules to govern how we use this powerful machine; we created a mechanistic environment to manage the machine. Interestingly, the car allows for a bit more freedom of action than the train, illustrated nicely by the off-roading ideal, which is a fantasy of liberation. But, for the most part, our driving, in order to be safe and efficient, is rationalized and systematized. Apart from this over-arching systematization, the off-roading fantasy would have little appeal. All of this is, of course, a “good thing.” Safety is important, etc., etc. The clip below, filmed at an intersection in Addis Ababa, illustrates what driving looks like in the absence of such rationalization and regimentation.

In an ideal world, one in which all rules and practical guidelines are punctiliously obeyed, the traffic flows freely and safely. Of course, this is far from an ideal world; accidents happen, and they are a leading source of inefficiency, expense, and harm. When driving is conceived of as an engineering problem solved by the fabrication of elaborate systems, accidents are human glitches in the machinery of automobile transportation. So, lanes, signals, signs, traffic lights, etc.–all of it is designed to discipline our driving so that it may resemble the smooth operation of machines that follow rules flawlessly. The more machine-like our driving, the more efficient the system.

As an illustration of the basic principle, take UPS’s deployment of Orion, a complex algorithm designed to plot out the best delivery route for drivers. “Driver reaction to Orion is mixed,” according to a WSJ piece on the software,

“The experience can be frustrating for some who might not want to give up a degree of autonomy, or who might not follow Orion’s logic. For example, some drivers don’t understand why it makes sense to deliver a package in one neighborhood in the morning, and come back to the same area later in the day for another delivery. But Orion often can see a payoff, measured in small amounts of time and money that the average person might not see.”

Commenting on this story at Marginal Revolution, Alex Tabarrok added, “Human drivers think Orion is illogical because they can’t grok Orion’s super-logic. Perhaps any sufficiently advanced logic is indistinguishable from stupidity.” However we might frame the matter, it remains the case that, given the logic of the system, the driver’s judgment is the glitch that needs to be eradicated to achieve the best results.
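
To see why the drivers’ intuition and the algorithm’s answer can diverge, consider a toy example. The sketch below is nothing like UPS’s actual Orion; it is a brute-force search over three hypothetical stops with made-up travel times. But it shows how a delivery time window can make it genuinely optimal to visit a street, leave, and return later, precisely the pattern the drivers find illogical:

```python
from itertools import permutations

# Hypothetical stops: (name, neighborhood, earliest allowed delivery hour)
stops = [
    ("A", "Elm St", 9),
    ("B", "Elm St", 14),   # same street, but the customer isn't available until 2pm
    ("C", "Oak Ave", 10),
]
travel_time = {("Elm St", "Oak Ave"): 1, ("Oak Ave", "Elm St"): 1,
               ("Elm St", "Elm St"): 0, ("Oak Ave", "Oak Ave"): 0,
               ("depot", "Elm St"): 1, ("depot", "Oak Ave"): 1}

def route_finish_time(order, start_hour=8):
    """Drive the stops in the given order, waiting whenever we arrive before a window opens."""
    hour, here = start_hour, "depot"
    for name, hood, earliest in order:
        hour += travel_time[(here, hood)]
        hour = max(hour, earliest)   # wait out a closed time window
        here = hood
    return hour

best = min(permutations(stops), key=route_finish_time)
print([s[0] for s in best], route_finish_time(best))
# expected: ['A', 'C', 'B'] 14  (Elm St, then Oak Ave, then back to Elm St)
```

Scaled up to a hundred-odd stops with real-world constraints, the logic behind such detours becomes effectively invisible to the person executing the route, which is exactly the situation the WSJ piece describes.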

Let’s consider the traffic light camera from this angle. Setting aside the not insignificant function of raising municipal funds through increased ticketing and fines, traffic light cameras are designed to mitigate pesky and erratic human judgment. Always-on surveillance ensures that my actions are ever more strictly synchronized with the vast technological system that orders automobile traffic. The traffic light camera assures that I am ever more fully assimilated into the logic of the system, to put it a bit too grimly perhaps. The upside, of course, is the promise of ever greater efficiency and safety–the only values technological systems can recognize, of course.

Ultimately, however, we don’t make very good machines. We get drowsy and angry and drunk; we are easily distracted and, even at our best, we can only take in a limited slice of the environment around us. Enter self-driving cars and the promise of eliminating human error from the complex automotive transportation system.

The trajectory that leads to self-driving cars was already envisioned years before the modern highway system was built. In the film that accompanied GM’s Futurama exhibit at the 1939 New York World’s Fair, we see a model highway system of the future (1960), and we are told, beginning around the 14:30 mark, “Traffic moves at unreduced rates of speed. Safe distance between cars is maintained by automatic radio control …. The keynote of this motorway? Safety. Safety with increased speed.”

Most pitches I’ve heard for self-driving cars trade on the same idea: increased safety through automation, i.e., the elimination of human error. That, and increased screen time, because if you’re not having to pay attention to the road, then you’re free to dive into your device of choice. Or look at the scenery, if you’re quaint that way, but let’s be serious. Take, for example, the self-driving car Mercedes-Benz displayed at this year’s CES, the F015: “For interaction within the vehicle, the passengers rely on six display screens located around the cabin. They also interact with the vehicle through gestures, eye-tracking or by touching the high-resolution screens.”

But setting the ubiquity of screens aside, we can extract a general principle from the trajectory I’ve just sketched out.

In a system that works best the more machine-like we become, the human component becomes expendable as soon as a machine can outperform it. Or to put it another way, any system that encourages machine-like behavior from its human components is a system poised to eventually eliminate the human element altogether. To give it another turn, we might frame it as a paradox of complexity. As human beings create powerful and complex technologies, they must design complex systemic environments to ensure their safe operation. These environments sustain further complexity by disciplining human actors to abide by the necessary parameters. Complexity is achieved by reducing human action to the patterns of the system; consequently, there comes a point when further complexity can only be achieved by discarding the human element altogether. When we design systems that work best the more machine-like we become, we shouldn’t be surprised when the machines ultimately render us superfluous.

Of course, it should be noted that, as per usual, the hype surrounding self-driving cars is just that. Writing for Fortune, Nicholas Carr cited Ford’s chief engineer, Raj Nair, who, following his boss’s promise of automated cars rolling off the assembly line in five years’ time, “explained that ‘full automation’ would be possible only in limited circumstances, particularly ‘where high definition mapping is available along with favorable environmental conditions for the vehicle’s sensors.’” Carr added,

“While it may be relatively straightforward to design a car that can drive itself down a limited-access highway in good weather, programming it to navigate chaotic city or suburban streets or to make its way through a snowstorm or a downpour poses much harder challenges. Many engineers and automation experts believe it will take decades of further development to build a completely autonomous car, and some warn that it may never happen, at least not without a massive and very expensive overhaul of our road system.”

That said, the dream of full automation will probably direct research and development for years to come, and we will continue to see incremental steps in that direction. In retrospect, one of those steps will have been the advent of traffic light cameras, not because it advanced the technology of self-driving cars, but because it prepared us to assent to the assumption that we would be ultimately expendable. The point, then, of this rambling post might be put this way: Our attitude toward new technologies may be less a matter of conscious thought than of tacit assumptions internalized through practices and habits.