A Man Walks Into A Bank

I walked into my bank a few days ago and found that the lobby had a different look. The space had been rearranged to highlight a new addition: an automated teller. While I was being helped, I overheard an exchange between a customer in line behind me and a bank worker whose new role appeared to be determining whether customers could be served by the automated teller and, if so, directing them to it.

She was upbeat about the automated teller and how it would speed things up for customers. The young man talking with her posed a question that occurred to me as I listened but that I’m not sure I would have had the temerity to raise: “Aren’t you afraid that pretty soon they’re not going to need you guys anymore?”

The bank employee was entirely unperturbed, or at least she pretended to be. “No, I’m not worried about that,” she said. “I know they’re going to keep us around.”

I hope they do, but I don’t share her optimism. I was reminded of a passage from Neil Postman’s Technopoly: The Surrender of Culture to Technology. Writing in the early ’90s about the impact of television on education, Postman commented on teachers who enthusiastically embraced the transformations wrought by television. Believing the modern school system, and thus the teacher’s career, to be the product of print culture, Postman wrote,

[…] surely, there is something perverse about schoolteachers’ being enthusiastic about what is happening. Such enthusiasm always calls to my mind an image of some turn-of-the-century blacksmith who not only sings the praises of the automobile but also believes that his business will be enhanced by it. We know now that his business was not enhanced by it; it was rendered obsolete by it, as perhaps the clearheaded blacksmiths knew. What could they have done? Weep, if nothing else.

We might find it in us to weep, too, or at least acknowledge the losses, even when the gains are real and important, which they are not always. Perhaps we might also refuse a degree of personal convenience from time to time, or every time if we find it in us to do so, in order to embody principles that might at least, if nothing else, demonstrate a degree of solidarity with those who will not be the winners in the emerging digital economy.

Postman believed that computer technology created a similar situation to that of the blacksmiths, “for here too we have winners and losers.”

“There can be no disputing that the computer has increased the power of large-scale organizations like the armed forces, or airline companies or banks or tax-collecting agencies. And it is equally clear that the computer is now indispensable to high-level researchers in physics and other natural sciences. But to what extent has computer technology been an advantage to the masses of people? To steelworkers, vegetable-store owners, teachers, garage mechanics, musicians, bricklayers, dentists, and most of the rest into whose lives the computer now intrudes? Their private matters have been made more accessible to powerful institutions. They are more easily tracked and controlled; are subjected to more examinations; are increasingly mystified by the decisions made about them; are often reduced to mere numerical objects. They are inundated by junk mail. They are easy targets for advertising agencies …. In a word, almost nothing that they need happens to the losers. Which is why they are the losers.

“It is to be expected that the winners will encourage the losers to be enthusiastic about computer technology. That is the way of winners … They also tell them that their lives will be conducted more efficiently. But discreetly they neglect to say from whose point of view the efficiency is warranted or what might be its costs.”

The religion of technology is a secular faith, and as such it should, at least, have the decency to strike a tragic note.

Resisting the Habits of the Algorithmic Mind

Algorithms, we are told, “rule our world.” They are ubiquitous. They lurk in the shadows, shaping our lives without our consent. They may revoke your driver’s license, determine whether you get your next job, or cause the stock market to crash. More worrisome still, they can also be the arbiters of lethal violence. No wonder one scholar has dubbed 2015 “the year we get creeped out by algorithms.” While some worry about the power of algorithms, others think we are in danger of overstating their significance or misunderstanding their nature. Some have even complained that we are treating algorithms like gods whose fickle, inscrutable wills control our destinies.

Clearly, it’s important that we grapple with the power of algorithms, real and imagined, but where do we start? It might help to disambiguate a few related concepts that tend to get lumped together when the word algorithm (or the phrase “Big Data”) functions more as a master metaphor than a concrete noun. I would suggest that we distinguish at least three realities: data, algorithms, and devices. Through the use of our devices we generate massive amounts of data, which would be useless were it not for analytical tools, algorithms prominent among them. It may be useful to consider each of these separately; at least we should be mindful of the distinctions.

We should also pay some attention to the language we use to identify and understand algorithms. As Ian Bogost has forcefully argued, we should certainly avoid implicitly deifying algorithms by how we talk about them. But even some of our more mundane metaphors are not without their own difficulties. In a series of posts at The Infernal Machine, Kevin Hamilton considers the implications of the popular “black box” metaphor and how it encourages us to think about and respond to algorithms.

The black box metaphor tries to get at the opacity of algorithmic processes. Inputs are transformed into outputs, but most of us have no idea how the transformation was effected. More concretely, you may have been denied a loan or job based on the determinations of a program running an algorithm, but how exactly that determination was made remains a mystery.
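It may help to make the black box concrete. Below is a minimal Python sketch of that loan scenario from the applicant’s side; the features, weights, threshold, and zip-code proxy are all invented for illustration, stand-ins for whatever a real lender’s model might contain.

```python
# A toy rendering of the black box from the applicant's side. Everything
# inside the function is hypothetical; it stands in for whatever proprietary
# model a lender actually runs.

RISKY_ZIPS = {"32801", "32803"}  # arbitrary; proxies like this are part of the worry

def loan_decision(application: dict) -> str:
    """An application goes in, a verdict comes out; the transformation is hidden."""
    # -- the opaque interior: the applicant never sees any of this --
    score = (
        0.5 * min(application["income"] / 100_000, 1.0)
        + 0.3 * (application["credit_score"] - 300) / 550
        + 0.2 * (0.0 if application["zip_code"] in RISKY_ZIPS else 1.0)
    )
    return "approved" if score > 0.6 else "denied"

# -- all the applicant ever observes --
print(loan_decision({"income": 45_000, "credit_score": 640, "zip_code": "32801"}))
# "denied" -- with no account of how or why
```

From the outside the function is sealed; this is the same input-output opacity that Hamilton’s Facebook user confronts in the scenario below.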

In his discussion of the black box metaphor, Hamilton invites us to consider the following scenario:

“Let’s imagine a Facebook user who is not yet aware of the algorithm at work in her social media platform. The process by which her content appears in others’ feeds, or by which others’ material appears in her own, is opaque to her. Approaching that process as a black box, might well situate our naive user as akin to the Taylorist laborer of the pre-computer, pre-war era. Prior to awareness, she blindly accepts input and provides output in the manufacture of Facebook’s product. Upon learning of the algorithm, she experiences the platform’s process as newly mediated. Like the post-war user, she now imagines herself outside the system, or strives to be so. She tweaks settings, probes to see what she has missed, alters activity to test effectiveness. She grasps at a newly-found potential to stand outside this system, to command it. We have a tendency to declare this a discovery of agency—a revelation even.”

But how effective is this new way of approaching her engagement with Facebook, now informed by the black box metaphor? Hamilton thinks “this grasp toward agency is also the beginning of a new system.” “Tweaking to account for black-boxed algorithmic processes,” Hamilton suggests, “could become a new form of labor, one that might then inevitably find description by some as its own black box, and one to escape.” Ultimately, Hamilton concludes, “most of us are stuck in an ‘opt-in or opt-out’ scenario that never goes anywhere.”

If I read him correctly, Hamilton is describing an escalating, never-ending battle to achieve a variety of desired outcomes in relation to the algorithmic system, all of which involve securing some kind of independence from the system, which we now understand as something standing apart and against us. One of those outcomes may be understood as the state Evan Selinger and Woodrow Hartzog have called obscurity, “the idea that when information is hard to obtain or understand, it is, to some degree, safe.” “Obscurity,” in their view, “is a protective state that can further a number of goals, such as autonomy, self-fulfillment, socialization, and relative freedom from the abuse of power.”

Another desired outcome that fuels resistance to black box algorithms involves what we might sum up as the quest for authenticity. Whatever relative success algorithms achieve in predicting our likes and dislikes, our actions, and our desires, such successes are often experienced as an affront to our individuality and autonomy. Ironically, the resulting battle against the algorithm often secures its relative victory by fostering what Frank Pasquale has called the algorithmic self, constantly modulating itself in response to the algorithms it encounters.

More recently, Quinn Norton expressed similar concerns from a slightly different angle: “Your internet experience isn’t the main result of algorithms built on surveillance data; you are. Humans are beautifully plastic, endlessly adaptable, and over time advertisers can use that fact to make you into whatever they were hired to make you be.”

Algorithms and the Banality of Evil

These concerns about privacy or obscurity on the one hand and agency or authenticity on the other are far from insignificant. Moving forward, though, I will propose another approach to the challenges posed by algorithmic culture, and I’ll do so with a little help from Joseph Conrad and Hannah Arendt.

In Conrad’s Heart of Darkness, as the narrator, Marlow, makes his way down the western coast of Africa toward the mouth of the Congo River in the service of a Belgian trading company, he spots a warship anchored not far from shore: “There wasn’t even a shed there,” he remembers, “and she was shelling the bush.”

“In the empty immensity of earth, sky, and water,” he goes on, “there she was, incomprehensible, firing into a continent …. and nothing happened. Nothing could happen.” “There was a touch of insanity in the proceeding,” he concluded. This curious and disturbing sight is the first of three such cases encountered by Marlow in quick succession.

Not long after he arrived at the Company’s station, Marlow heard a loud horn and then saw natives scurry away just before witnessing an explosion on the mountainside: “No change appeared on the face of the rock. They were building a railway. The cliff was not in the way of anything; but this objectless blasting was all the work that was going on.”

These two instances of seemingly absurd, arbitrary action are followed by a third. Walking along the station’s grounds, Marlow “avoided a vast artificial hole somebody had been digging on the slope, the purpose of which I found it impossible to divine.” As they say: two is a coincidence; three’s a pattern.

Nestled among these cases of mindless, meaningless action, we encounter as well another kind of related thoughtlessness. The seemingly aimless shelling he witnessed at sea, Marlow is assured, targeted an unseen camp of natives. Registering the incongruity, Marlow exclaims, “he called them enemies!” Later, Marlow recalls the shelling off the coastline when he observed the natives scampering clear of each blast on the mountainside: “but these men could by no stretch of the imagination be called enemies. They were called criminals, and the outraged law, like the bursting shells, had come to them, an insoluble mystery from the sea.”

Taken together these incidents convey a principle: thoughtlessness couples with ideology to abet violent oppression. We’ll come back to that principle in a moment, but, before doing so, consider two more passages from the novel. Just before that third case of mindless action, Marlow reflected on the peculiar nature of the evil he was encountering:

“I’ve seen the devil of violence, and the devil of greed, and the devil of hot desire; but, by all the stars! these were strong, lusty, red-eyed devils, that swayed and drove men–men, I tell you. But as I stood on this hillside, I foresaw that in the blinding sunshine of that land I would become acquainted with a flabby, pretending, weak-eyed devil of rapacious and pitiless folly.”

Finally, although more illustrations could be adduced, after an exchange with an insipid, chatty company functionary, who is also an acolyte of Mr. Kurtz, Marlow had this to say: “I let him run on, the papier-mâché Mephistopheles, and it seemed to me that if I tried I could poke my forefinger through him, and would find nothing inside but a little loose dirt, maybe.”

That sentence, to my mind, most readily explains why T.S. Eliot chose as an epigraph for his 1925 poem, “The Hollow Men,” a line from Heart of Darkness: “Mistah Kurtz – he dead.” This is likely an idiosyncratic reading, so take it with the requisite grain of salt, but I take Conrad’s papier-mâché Mephistopheles to be of a piece with Eliot’s hollow men, who having died are remembered

“Not as lost
Violent souls, but only
As the hollow men
The stuffed men.”

For his part, Conrad understood that these hollow men, these flabby devils were still capable of immense mischief. Within the world as it is administered by the Company, there is a great deal of doing but very little thinking or understanding. Under these circumstances, men are characterized by a thoroughgoing superficiality that renders them willing, if not altogether motivated, participants in the Company’s depredations. Conrad, in fact, seems to have intuited the peculiar dangers posed by bureaucratic anomie and anticipated something like what Hannah Arendt later sought to capture in her (in)famous formulation, “the banality of evil.”

If you are familiar with the concept of the banality of evil, you know that Arendt conceived of it as a way of characterizing the kind of evil embodied by Adolf Eichmann, a leading architect of the Holocaust, and you may now be wondering if I’m preparing to argue that algorithms will somehow facilitate another mass extermination of human beings.

Not exactly. I am circumspectly suggesting that the habits of the algorithmic mind are not altogether unlike the habits of the bureaucratic mind. (Adam Elkus makes a similar correlation here, but I think I’m aiming at a slightly different target.) Both are characterized by an unthinking automaticity, a narrowness of focus, and a refusal of responsibility that yields the superficiality or hollowness Conrad, Eliot, and Arendt all seem to be describing, each in their own way. And this superficiality or hollowness is too easily filled with mischief and cruelty.

While Eichmann in Jerusalem is mostly remembered for that one phrase (and also for the controversy the book engendered), “the banality of evil” appears, by my count, only once in the book. Arendt later regretted using the phrase, and it has been widely misunderstood. Nonetheless, I think there is some value to it, or at least to the condition that it sought to elucidate. Happily, Arendt returned to the theme in a later, unfinished work, The Life of the Mind.

Eichmann’s trial continued to haunt Arendt. In the Introduction, Arendt explained that the impetus for the lectures that would become The Life of the Mind stemmed from the Eichmann trial. She admits that in referring to the banality of evil she “held no thesis or doctrine,” but she now returns to the nature of evil embodied by Eichmann in a renewed attempt to understand it: “The deeds were monstrous, but the doer … was quite ordinary, commonplace, and neither demonic nor monstrous.” She might have added: “… if I tried I could poke my forefinger through him, and would find nothing inside but a little loose dirt, maybe.”

There was only one “notable characteristic” that stood out to Arendt: “it was not stupidity but thoughtlessness.” Arendt’s close friend, Mary McCarthy, felt that this word choice was unfortunate. “Inability to think” rather than thoughtlessness, McCarthy believed, was closer to the sense of the German word Gedankenlosigkeit.

Later in the Introduction, Arendt insisted “absence of thought is not stupidity; it can be found in highly intelligent people, and a wicked heart is not its cause; it is probably the other way round, that wickedness may be caused by absence of thought.”

Arendt explained that it was this “absence of thinking–which is so ordinary an experience in our everyday life, where we have hardly the time, let alone the inclination, to stop and think–that awakened my interest.” And it posed a series of related questions that Arendt sought to address:

“Is evil-doing (the sins of omission, as well as the sins of commission) possible in default of not just ‘base motives’ (as the law calls them) but of any motives whatever, of any particular prompting of interest or volition?”

“Might the problem of good and evil, our faculty for telling right from wrong, be connected with our faculty of thought?”

All told, Arendt arrived at this final formulation of the question that drove her inquiry: “Could the activity of thinking as such, the habit of examining whatever happens to come to pass or to attract attention, regardless of results and specific content, could this activity be among the conditions that make men abstain from evil-doing or even actually ‘condition’ them against it?”

It is with these questions in mind–questions, mind you, not answers–that I want to return to the subject with which we began, algorithms.

Outsourcing the Life of the Mind

Considered for the moment apart from data collection and the devices that enable it, algorithms are principally problem-solving tools. They solve problems that ordinarily require cognitive labor–thought, decision making, judgment. It is these very activities–thinking, willing, and judging–that structure Arendt’s work in The Life of the Mind. So, to borrow the language that Evan Selinger has deployed so effectively in his critique of contemporary technology, we might say that algorithms outsource the life of the mind. And, if Arendt is right, this outsourcing of the life of the mind is morally consequential.

The outsourcing problem is at the root of much of our unease with contemporary technology. Machines have always done things for us, and they are increasingly doing things for us and without us. Increasingly, the human element is displaced in favor of faster, more efficient, more durable, cheaper technology. And, increasingly, the displaced human element is the thinking, willing, judging mind. Of course, the party of the concerned is most likely the minority party. Advocates and enthusiasts rejoice at the marginalization or eradication of human labor in its physical, mental, emotional, and moral manifestations. They believe that the elimination of all of this labor will yield freedom, prosperity, and a golden age of leisure. Critics, meanwhile, and I count myself among them, struggle to articulate a compelling and reasonable critique of this scramble to outsource various dimensions of the human experience.

But perhaps we have ignored another dimension of the problem, one that the outsourcing critique itself might, possibly, encourage. Consider this: to say that algorithms are displacing the life of the mind is to unwittingly endorse a terribly impoverished account of the life of the mind. For instance, if I were to argue that the ability to “Google” whatever bit of information we happen to need when we need it leads to an unfortunate “outsourcing” of our memory, it may be that I am already giving up the game because I am implicitly granting that a real equivalence exists between all that is entailed by human memory and the ability to digitally store and access information. A moment’s reflection, of course, will reveal that human remembering involves considerably more than the mere retrieval of discrete bits of data. The outsourcing critique, then, valuable as it is, must also challenge the assumption that the outsourcing occurs without remainder.
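To see how much that implicit equivalence concedes, it helps to state the impoverished model explicitly. In code, the “outsourced memory” picture amounts to little more than a key-value lookup. A deliberately minimal Python sketch, not anyone’s serious account of remembering:

```python
# What the "outsourcing" framing implicitly grants: that remembering just is
# retrieval. The sketch is deliberately impoverished; that is the point.

memory = {
    "capital_of_france": "Paris",
    "first_day_of_school": "it rained",
}

def remember(cue: str) -> str | None:
    return memory.get(cue)  # a hit or a miss, nothing in between

print(remember("capital_of_france"))  # "Paris"
print(remember("summer_of_1989"))     # None

# Human remembering is not like this: it is reconstructive and associative,
# cued by a smell or a mood, entangled with emotion and identity, revised a
# little with each recall. None of that survives the lookup model.
```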

Viewed in this light, the problem with outsourcing the life of the mind is that it encourages an impoverished conception of what constitutes the life of the mind in the first place. Outsourcing, then, threatens our ability to think not only because some of our “thinking” will be done for us; it will do so because, if we are not careful, we will be habituated into conceiving of the life of the mind on the model of the problem-solving algorithm. We would thereby surrender the kind of thinking that Arendt sought to describe and defend, thinking that might “condition” us against the varieties of evil that transpire in environments of pervasive thoughtlessness.

In our responses to the concerns raised by algorithmic culture, we tend to ask, What can we do? Perhaps this is already to miss the point by conceiving of the matter as a problem to be solved by something like a technical solution. Perhaps the most important and powerful response is not an action we take but rather an increased devotion to the life of the mind. The phrase sounds quaint, or, worse, elitist. As Arendt meant it, it was neither. Indeed, Arendt was convinced that if thinking was somehow essential to moral action, it must be accessible to all: “If […] the ability to tell right from wrong should turn out to have anything to do with the ability to think, then we must be able to ‘demand’ its exercise from every sane person, no matter how erudite or ignorant, intelligent or stupid, he may happen to be.”

And how might we pursue the life of the mind? Perhaps the first, modest step in that direction is simply the cultivation of times and spaces for thinking, and perhaps also resisting the urge to check if there is an app for that.



Machines, Work, and the Value of People

Late last month, Microsoft released a “bot” that guesses your age based on an uploaded picture. The bot tended to be only marginally accurate and sometimes hilariously (or disconcertingly) wrong. What’s more, people quickly began having some fun with the program by uploading faces of actors playing fictional characters, such as Yoda or Gandalf. My favorite was Ian Bogost’s submission.

Shortly after the How Old bot had its fleeting moment of virality, Nathan Jurgenson tweeted the following:

This was an interesting observation, and it generated a few notable replies. Jurgenson himself added, “much of the bigdata/algorithm debates miss how poor these often perform. many critiques presuppose & reify their untenable positivism.” He summed up this line of thought with this tweet: “so much ‘tech criticism’ starts first with uncritically buying all of the hype silicon valley spits out.”

Let’s pause here for a moment. All of this is absolutely true. Yet … it’s not all hype, not necessarily anyway. Let’s bracket the more outlandish claims made by the singularity crowd, of course. But take facial recognition software, for instance. It doesn’t strike me as wildly implausible that in the near future facial recognition programs will achieve a rather striking degree of accuracy.

Along these lines, I found Kyle Wrather’s replies to Jurgenson’s tweet particularly interesting. First, Wrather noted, “[How Old Bot] being wrong makes people more comfortable w/ facial recognition b/c it seems less threatening.” He then added, “I think people would be creeped out if we’re totally accurate. When it’s wrong, humans get to be ‘superior.'”

Wrather’s second comment points to an intriguing psychological dynamic. Certain technologies generate a degree of anxiety about the relative status of human beings or about what exactly makes human beings “special”–call it post-humanist angst, if you like.

Of course, not all technologies generate this sort of angst. When it first appeared, the airplane was greeted with awe and a little battiness (consider alti-man). But as far as I know, it did not result in any widespread fears about the nature and status of human beings. The seemingly obvious reason for this is that flying is not an ability that has ever defined what it means to be a human being.

It seems, then, that anxiety about new technologies is sometimes entangled with shifting assumptions about the nature or dignity of humanity. In other words, the fear that machines, computers, or robots might displace human beings may or may not materialize, but it does tell us something about how human nature is understood.

Is it that new technologies disturb existing, tacit beliefs about what it means to be a human, or is it the case that these beliefs arise in response to a new perceived threat posed by technology? I’m not entirely sure, but some sort of dialectical relationship is involved.

A few examples come to mind, and they track closely to the evolution of labor in Western societies.

During the early modern period, perhaps owing something to the Reformation’s insistence on the dignity of secular work, the worth of a human being gets anchored to their labor, most of which is, at this point in history, manual labor. The dignity of the manual laborer is later challenged by mechanization during the 18th and 19th centuries, and this results in a series of protest movements, most famously that of the Luddites.

Eventually, a new consensus emerges around the dignity of factory work, and this is, in turn, challenged by the advent of new forms of robotic and computerized labor in the mid-twentieth century.

Enter the so-called knowledge worker, whose short-lived ascendancy is presently threatened by advances in computers and AI.

I think this latter development helps explain our present fascination with creativity. It’s been over a decade since Richard Florida published The Rise of the Creative Class, but interest in and pontificating about creativity continues apace. What I’m suggesting is that this fixation on creativity is another recalibration of what constitutes valuable, dignified labor, which is also, less obviously perhaps, what is taken to constitute the value and dignity of the person. Manual labor and factory jobs give way to knowledge work, which now surrenders to creative work. As they say, nice work if you can get it.

Interestingly, each re-configuration not only elevated a new form of labor, but it also devalued the form of labor being displaced. Manual labor, factory work, even knowledge work, once accorded dignity and respect, are each reframed as tedious, servile, monotonous, and degrading just as they are being replaced. If a machine can do it, it suddenly becomes sub-human work.

(It’s also worth noting how displaced forms of work seem to re-emerge and regain their dignity in certain circles. I’m presently thinking of Matthew Crawford’s defense of manual labor and the trades. Consider as well this lecture by Richard Sennett, “The Decline of the Skills Society.”)

It’s not hard to find these rhetorical dynamics at play in the countless presently unfolding discussions of technology, labor, and what human beings are for. Take as just one example this excerpt from the recent New Yorker profile of venture capitalist Marc Andreessen (emphasis mine):

Global unemployment is rising, too—this seems to be the first industrial revolution that wipes out more jobs than it creates. One 2013 paper argues that forty-seven per cent of all American jobs are destined to be automated. Andreessen argues that his firm’s entire portfolio is creating jobs, and that such companies as Udacity (which offers low-cost, online “nanodegrees” in programming) and Honor (which aims to provide better and better-paid in-home care for the elderly) bring us closer to a future in which everyone will either be doing more interesting work or be kicking back and painting sunsets. But when I brought up the raft of data suggesting that intra-country inequality is in fact increasing, even as it decreases when averaged across the globe—America’s wealth gap is the widest it’s been since the government began measuring it—Andreessen rerouted the conversation, saying that such gaps were “a skills problem,” and that as robots ate the old, boring jobs humanity should simply retool. “My response to Larry Summers, when he says that people are like horses, they have only their manual labor to offer”—he threw up his hands. “That is such a dark and dim and dystopian view of humanity I can hardly stand it!”

As always, it is important to ask a series of questions: Who’s selling what? Who stands to profit? Whose interests are being served? Etc. With those considerations in mind, it is telling that leisure has suddenly and conveniently re-emerged as a goal of human existence. Previous fears about technologically driven unemployment have ordinarily been met by assurances that different and better jobs would emerge. It appears that pretense is being dropped in favor of vague promises of a future of jobless leisure. So, it seems we’ve come full circle to classical estimations of work and leisure: all work is for chumps and slaves. You may be losing your job, but don’t worry, work is for losers anyway.

So, to sum up: Some time ago, identity and a sense of self-worth got hitched to labor and productivity. Consequently, each new technological displacement of human work appears to those being displaced as an affront to their dignity as human beings. Those advancing new technologies that displace human labor do so by demeaning existing work as below our humanity and promising more humane work as a consequence of technological change. While this is sometimes true–some work that human beings have been forced to perform has been inhuman–deployed as a universal truth, it is little more than rhetorical cover for a significantly more complex and ambivalent reality.

Consider the Traffic Light Camera

It looks like I may be getting a traffic citation in the mail within the next few days. A few nights ago, while making a left into my neighborhood, I was slowed by a car that made a creeping right ahead of me onto the same street. As I finally completed my turn, I saw a bright flash go off behind me. While I’ve noted the proliferation of traffic light cameras around town with mildly disconcerted interest, I hadn’t yet noticed the camera at this rather inconsequential intersection. A day or two later at the same spot, I found myself coming to an abrupt stop once the light hit yellow to ensure that I wasn’t caught completing my turn as the light turned red. Automated surveillance had done its job; I had internalized the gaze of the unblinking eye.

For some time now I’ve been unsettled by the proliferation of traffic light cameras, but I’ve not yet been able to articulate why exactly. While Big Brother fears may be part of the concern, I’m mostly troubled by how the introduction of automated surveillance and ticketing seems to encourage the replacement of human judgment, erroneous as it may often be, by unthinking, habituated behavior.

The traffic light camera knows only that you have crossed or not crossed a certain point at a certain time. Its logic is binary: you are either out of the intersection or in it. Context matters not at all; there is no room for deliberation. If we can imagine a limited set of valid reasons for proceeding through a red light, automated ticketing cannot entertain them. The intermittently monitored yellow light invites judgment and practical wisdom; the unceasingly monitored yellow light tolerates only unwavering compliance.
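Put in the most literal terms, the camera’s entire deliberation fits in a one-line predicate. A minimal sketch, with field names of my own invention (a real system adds timestamps and photographs, but no context):

```python
from dataclasses import dataclass

@dataclass
class Observation:
    light: str             # "green", "yellow", or "red"
    in_intersection: bool  # past the stop line at the moment of observation

def camera_verdict(obs: Observation) -> bool:
    """The automated system's entire 'deliberation': one boolean test."""
    return obs.light == "red" and obs.in_intersection

# Everything a human officer might weigh -- the ambulance behind you, the
# creeping car that slowed your turn -- has no place in the function's
# signature. Context is not so much ignored as unrepresentable.
print(camera_verdict(Observation(light="red", in_intersection=True)))  # True: ticket
```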

In this way, it hints at a certain pattern in the relationship between human beings and the complex technological systems we create. Take the work of getting from here to there as an example. Our baseline is walking. We can walk from here to there in just about any way that the terrain will allow and at whatever rate our needs dictate. And while the journey may have its perils, they are not inherent in walking itself. After all, it would be a strange accident indeed if I were incapacitated as a result of bumping into someone or even stumbling on a stone.

But walking is not necessarily the fastest or most efficient way of getting from here to there, especially if I have a sizable load to bear. Horse-drawn conveyances relieve me of the work of walking and potentially increase my rate of speed without, it seems to me, radically increasing the risks. But they also tend to limit my freedom of motion, a decently kept road being rather more of a necessity than it would’ve been for the walker. Desire paths illustrate this point neatly. The walker may make his own path, and frequently does so.

Then the train comes along. The train radically increased the rate of speed at which human beings may travel, but it also elevated risks–a derailment, after all, is not quite the same thing as a stumble–and restricted freedom of motion to the tracks laid out for it. It’s worth noting that the railway system was one of the first expansive technological systems requiring, for its efficient and safe operation, rigidly regimented and coordinated action. We even owe our time zones to the systematizing demands of the railroads. The railway system, then, was a massive feat of system building, and would-be travelers were integrated into this system.

The automobile, too, is powerful and potentially dangerous, and must also be carefully managed. Consequently, we created an elaborate system of roads and rules to govern how we use this powerful machine; we created a mechanistic environment to manage the machine. Interestingly, the car allows for a bit more freedom of action than the train, illustrated nicely by the off-roading ideal, which is a fantasy of liberation. But, for the most part, our driving, in order to be safe and efficient, is rationalized and systematized. Apart from this over-arching systematization, the off-roading fantasy would have little appeal. All of this is, of course, a “good thing.” Safety is important, etc., etc. The clip below, filmed at an intersection in Addis Ababa, illustrates what driving looks like in the absence of such rationalization and regimentation.

In an ideal world, one in which all rules and practical guidelines are punctiliously obeyed, the traffic flows freely and safely. Of course, this is far from an ideal world; accidents happen, and they are a leading source of inefficiency, expense, and harm. When driving is conceived of as an engineering problem solved by the fabrication of elaborate systems, accidents are human glitches in the machinery of automobile transportation. So, lanes, signals, signs, traffic lights, etc.–all of it is designed to discipline our driving so that it may resemble the smooth operation of machines that follow rules flawlessly. The more machine-like our driving, the more efficient the system.

As an illustration of the basic principle, take UPS’s deployment of Orion, a complex algorithm designed to plot out the best delivery route for drivers. “Driver reaction to Orion is mixed,” according to a WSJ piece on the software,

“The experience can be frustrating for some who might not want to give up a degree of autonomy, or who might not follow Orion’s logic. For example, some drivers don’t understand why it makes sense to deliver a package in one neighborhood in the morning, and come back to the same area later in the day for another delivery. But Orion often can see a payoff, measured in small amounts of time and money that the average person might not see.”

Commenting on this story at Marginal Revolution, Alex Tabarrok added, “Human drivers think Orion is illogical because they can’t grok Orion’s super-logic. Perhaps any sufficiently advanced logic is indistinguishable from stupidity.” However we might frame the matter, it remains the case that, given the logic of the system, the driver’s judgment is the glitch that needs to be eradicated to achieve the best results.
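The kernel of that “super-logic” can be shown on a toy instance. In the Python sketch below, the positions, travel times, and delivery windows are all invented (the real Orion weighs vastly richer constraints); brute force finds that the cheapest feasible route leaves a neighborhood and doubles back later, precisely the pattern the drivers found senseless:

```python
from itertools import permutations

DEPOT = 0
STOPS = {                 # stop: (position, window_open, window_close)
    "A": (1, 0, 100),     # near the depot, deliverable any time
    "B": (10, 10, 12),    # far away, narrow morning window
    "C": (1, 20, 100),    # near the depot again, afternoon only
}

def route_cost(order):
    """Total completion time, or None if any delivery window is missed."""
    t, pos = 0, DEPOT
    for stop in order:
        loc, window_open, window_close = STOPS[stop]
        t += abs(loc - pos)      # travel time = distance on a line
        t = max(t, window_open)  # wait if we arrive early
        if t > window_close:
            return None          # window missed: route infeasible
        pos = loc
    return t

feasible = [order for order in permutations(STOPS) if route_cost(order) is not None]
best = min(feasible, key=route_cost)
print(best, route_cost(best))
# ('A', 'B', 'C') 20 -- the best route visits position 1, drives out to 10,
# then returns to position 1: the same-neighborhood revisit drivers find odd.
```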

Let’s consider the traffic light camera from this angle. Setting aside the not insignificant function of raising municipal funds through increased ticketing and fines, traffic light cameras are designed to mitigate pesky and erratic human judgment. Always-on surveillance ensures that my actions are ever more strictly synchronized with the vast technological system that orders automobile traffic. The traffic light camera assures that I am ever more fully assimilated into the logic of the system, to put it a bit too grimly perhaps. The upside, of course, is the promise of ever greater efficiency and safety–the only values technological systems can recognize.

Ultimately, however, we don’t make very good machines. We get drowsy and angry and drunk; we are easily distracted and, even at our best, we can only take in a limited slice of the environment around us. Enter self-driving cars and the promise of eliminating human error from the complex automotive transportation system.

The trajectory that leads to self-driving cars was already envisioned years before the modern highway system was built. In the film that accompanied GM’s Futurama exhibit at the 1939 New York World’s Fair, we see a model highway system of the future (1960), and we are told, beginning around the 14:30 mark, “Traffic moves at unreduced rates of speed. Safe distance between cars is maintained by automatic radio control …. The keynote of this motorway? Safety. Safety with increased speed.”

Most pitches I’ve heard for self-driving cars trade on the same idea: increased safety through automation, i.e., the elimination of human error. That, and increased screen time, because if you’re not having to pay attention to the road, then you’re free to dive into your device of choice. Or look at the scenery, if you’re quaint that way, but let’s be serious. Take, for example, the self-driving car Mercedes-Benz displayed at this year’s CES, the F015: “For interaction within the vehicle, the passengers rely on six display screens located around the cabin. They also interact with the vehicle through gestures, eye-tracking or by touching the high-resolution screens.”


But setting the ubiquity of screens aside, we can extract a general principle from the trajectory I’ve just sketched out.

In a system that works best the more machine-like we become, the human component becomes expendable as soon as a machine can outperform it. Or to put it another way, any system that encourages machine-like behavior from its human components is a system poised to eventually eliminate the human element altogether. To give it another turn, we might frame it as a paradox of complexity. As human beings create powerful and complex technologies, they must design complex systemic environments to ensure their safe operation. These environments sustain further complexity by disciplining human actors to abide by the necessary parameters. Complexity is achieved by reducing human action to the patterns of the system; consequently, there comes a point when further complexity can only be achieved by discarding the human element altogether. When we design systems that work best the more machine-like we become, we shouldn’t be surprised when the machines ultimately render us superfluous.

Of course, it should be noted that, as per usual, the hype surrounding self-driving cars is just that. Writing for Fortune, Nicholas Carr cited Ford’s chief engineer, Raj Nair, who, following his boss’s promise of automated cars rolling off the assembly line in five years’ time, “explained that ‘full automation’ would be possible only in limited circumstances, particularly ‘where high definition mapping is available along with favorable environmental conditions for the vehicle’s sensors.’” Carr added,

“While it may be relatively straightforward to design a car that can drive itself down a limited-access highway in good weather, programming it to navigate chaotic city or suburban streets or to make its way through a snowstorm or a downpour poses much harder challenges. Many engineers and automation experts believe it will take decades of further development to build a completely autonomous car, and some warn that it may never happen, at least not without a massive and very expensive overhaul of our road system.”

That said, the dream of full automation will probably direct research and development for years to come, and we will continue to see incremental steps in that direction. In retrospect, one of those steps will have been the advent of traffic light cameras, not because it advanced the technology of self-driving cars, but because it prepared us to assent to the assumption that we would be ultimately expendable. The point, then, of this rambling post might be put this way: Our attitude toward new technologies may be less a matter of conscious thought than of tacit assumptions internalized through practices and habits.

More on Mechanization, Automation, and Animation

As I follow the train of thought that took the dream of a smart home as a point of departure, I’ve come to a fork in the tracks. Down one path, I’ll continue thinking about the distinctions among Mechanization, Automation, and Animation. Down the other, I’ll pursue the technological enchantment thesis that arose incidentally in my mind as a way of either explaining or imaginatively characterizing the evolution of technology along those three stages.

Separating these two tracks is a pragmatic move. It’s easier for me at this juncture to consider them separately, particularly to weigh the merits of the latter. It may be that the two tracks will later converge, or it may be that one or both are dead ends. We’ll see. Right now I’ll get back to the three stages.

In his comment on my last post, Evan Selinger noted that my schema was Borgmannesque in its approach, and indeed it was. If you’ve been reading along for a while, you know that I think highly of Albert Borgmann’s work. I’ve drawn on it a time or two of late. Borgmann looked for a pattern that might characterize the development of technology, and he came up with what he called the device paradigm. Succinctly put, the device paradigm described the tendency of machines to become simultaneously more commodious and more opaque, or, to put it another way, easier to use and harder to understand.

In my last post, I used heating as an example to walk through the distinctions among mechanization, automation, and animation. Borgmann also uses heating to illustrate the device paradigm: lighting and sustaining a fire is one thing, flipping a switch to turn on the furnace is another. Food and music also serve as recurring illustrations for Borgmann. Preparing a meal from scratch is one thing, popping a TV dinner in the microwave is another. Playing the piano is one thing, listening to an iPod is another. In each case a device made access to the end product–heat, food, music–easier, instantaneous, safer, more efficient. In each case, though, the workings of the device beneath the commodious surface became more complex and opaque. (Note that in the case of food preparation, both the microwave and the TV dinner are devices.) Ease of use also came at the expense of physical engagement, which, in Borgmann’s view, results in an impoverishment of experience and a rearrangement of the social world.

Keep that dynamic in mind as we move forward. The device paradigm does a good job, I think, of helping us think about the transition to mechanization and from mechanization to automation and animation, chiefly by asking us to consider what we’re sacrificing in exchange for the commodiousness offered to us.

Ultimately, we want to avoid the impulse to automate for automation’s sake. As Nick Carr, whose forthcoming book, The Glass Cage: Automation and Us, will be an excellent guide in these matters, recently put it, “What should be automated is not what can be automated but what should be automated.”

That principle came at the end of a short post reflecting on comments made by Google’s “Android guru,” Sundar Pichai. Pichai offered a glimpse at how Google envisions the future when he described how useful it would be if your car could sense that your child was now inside and automatically changed the music playlists accordingly. Here’s part of Carr’s response:

“With this offhand example, Pichai gives voice to Silicon Valley’s reigning assumption, which can be boiled down to this: Anything that can be automated should be automated. If it’s possible to program a computer to do something a person can do, then the computer should do it. That way, the person will be ‘freed up’ to do something ‘more valuable.’ Completely absent from this view is any sense of what it actually means to be a human being. Pichai doesn’t seem able to comprehend that the essence, and the joy, of parenting may actually lie in all the small, trivial gestures that parents make on behalf of or in concert with their kids — like picking out a song to play in the car. Intimacy is redefined as inefficiency.”

But how do we come to know what should be automated? I’m not sure there’s a short answer to that question, but it’s safe to say that we’re going to need to think carefully about what we do and why we do it. Again, this is why I think Hannah Arendt was ahead of her time when she undertook the intellectual project that resulted in The Human Condition and the unfinished The Life of the Mind. In the first she set out to understand our doing and in the second, our thinking. And all of this in light of the challenges presented by emerging technological systems.

One of the upshots of new technologies, if we accept the challenge, is that they lead us to look again at what we might have otherwise taken for granted or failed to notice altogether. New communication technologies encourage us to think again about the nature of human communication. New medical technologies encourage us to think again about the nature of health. New transportation technologies encourage us to think again about the nature of place. And so on.

I had originally used the word “forced” where I settled for the word “encourage” above. I changed the wording because, in fact, new technologies don’t force us to think again about the realms of life they impact. It is quite easy, too easy perhaps, not to think at all, simply to embrace and adopt the new technology without any consideration of its consequences. Or, what amounts to the same thing, it is just as easy to reject new technologies out of hand because they are new. In neither case would we be thinking. If we accept the challenge to think again about the world as new technologies cast aspects of it in a new light, we might even begin to see this development as a great gift by leading us to value, appreciate, and even love what before went unnoticed.

Returning to the animation schema, we might make a start at thinking by simply asking ourselves what exactly is displaced at each transition. When it comes to mechanization, it seems fairly straightforward. Mechanization, as I’m defining it, ordinarily displaces physical labor.

Capturing what exactly is displaced when it comes to automation is a bit more challenging. In part, this is because the distinctions I’m making between mechanization and automation on the one hand and automation and animation on the other are admittedly fuzzy. In fact, all three are often simply grouped together under the category of automation. This is a simpler move, but I’m concerned that we might not get a good grasp of the complex ways in which technologies interact with human action if we don’t parse things a bit more finely.

So let’s start by suggesting that automation, the stage at which machines operate without the need for constant human input and direction, displaces attention. When something is automated, I can pay much less attention to it, or perhaps, no attention at all. We might also say that automation displaces will or volition. When a process is automated, I don’t have to will its action.

Finally, animation–the stage at which machines not only act without direct human intervention, but also “learn” and begin to “make decisions” for themselves–displaces agency and judgment.
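The heating example can carry the whole schema. Purely as a thought experiment, and with the caveat that none of these classes describes an actual product, a schematic Python sketch marks what each stage stops asking of the human:

```python
class MechanizedFurnace:
    """Mechanization: the machine supplies the labor, but I direct it."""
    def switch_on(self):
        # I still attend, decide, and act every single time
        print("burning fuel")

class AutomatedFurnace:
    """Automation: I state an end (a setpoint) and stop paying attention."""
    def __init__(self, setpoint: float):
        self.setpoint = setpoint  # the human still wills the goal

    def tick(self, temp: float):
        if temp < self.setpoint:  # attention and constant input displaced
            print("burning fuel")

class AnimatedFurnace:
    """Animation: the machine infers my patterns and chooses the setpoint."""
    def __init__(self):
        self.observed: list[float] = []

    def tick(self, temp: float, preferred_temp: float):
        self.observed.append(preferred_temp)
        setpoint = sum(self.observed) / len(self.observed)  # it "decides"
        if temp < setpoint:  # judgment about the end itself is displaced
            print("burning fuel")
```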

By noting what is displaced we can then ask whether the displaced element was an essential or inessential aspect of the good or end sought by the means, and so we might begin to arrive at some more humane conclusions about what ought to be automated.

I’ll leave things there for now, but more will be forthcoming. Right now I’ll leave you with a couple of questions I’ll be thinking about.

First, Borgmann distinguished between things and devices (see here or here). Once we move from automation to animation, do we need a new category?

Also, coming back to Arendt, she laid out two sets of three categories that overlap in interesting ways with the three stages as I’m thinking of them. In her discussion of human doing, she identifies labor, work, and action. In her discussion of human thinking, she identifies thought, will, and judgment. How can her theorizing of these categories help us understand what’s at stake in the drive to automate and animate?