A Technological History of Modernity

I’m writing chiefly to commend to you what Alan Jacobs has recently called his “big fat intellectual project.”

The topic that has driven his work over the last few years Jacobs describes as follows: “The ways that technocratic modernity has changed the possibilities for religious belief, and the understanding of those changes that we get from studying the literature that has been attentive to them.” He adds,

“But literature has not been merely an observer of these vast seismic tremors; it has been a participant, insofar as literature has been, for many, the chief means by which a disenchanted world can be re-enchanted — but not fully — and by which buffered selves can become porous again — but not wholly. There are powerful literary responses to technocratic modernity that serve simultaneously as case studies (what it’s like to be modern) and diagnostic (what’s to be done about being modern).”

To my mind, such a project enjoys a distinguished pedigree, at least in some important aspects. I think, for example, of Leo Marx’s classic, The Machine in the Garden: Technology and the Pastoral Ideal in America, or the manner in which Katherine Hayles weaves close readings of contemporary fiction into her explorations of digital technology. Not that he needs me to say this, but I’m certain Jacobs’ work along these lines, particularly with its emphasis on religious belief, will be valuable and timely. You should click through to find links to a handful of essays Jacobs has already written in this vein.

On his blog, Text Patterns, Jacobs has, over the last few weeks, been describing one important thread of this wider project, a technological history of modernity, which, naturally, I find especially intriguing and necessary.

The first post in which Jacobs articulates the need for a technological history of modernity began as a comment on Matthew Crawford’s The World Beyond Your Head. In it, Jacobs repeats his critique of the “ideas have consequences” model of history, one in which the ideas of philosophers drive cultural change.

Jacobs took issue with the “ideas have consequences” model of cultural change in his critique of Neo-Thomist accounts of modernity, i.e., those that pin modernity’s ills on the nominalist challenge to the so-called medieval/Thomist synthesis of faith and reason. He finds that Crawford commits a similar error in attributing the present attention economy, in large measure, to conclusions about the will and the individual arrived at by Enlightenment thinkers.

Beyond the criticisms specific to the debate about the historical consequences of nominalism and the origins of our attention economy, Jacobs articulated concerns that apply more broadly to any account of cultural change that relies too heavily on the work of philosophers and theologians while paying too little attention to the significance of the material conditions of lived experience.

Moving toward the need for a technological history of modernity, Jacobs writes, “What I call the Oppenheimer Principle — ‘When you see something that is technically sweet, you go ahead and do it and argue about what to do about it only after you’ve had your technical success’ — has worked far more powerfully to shape our world than any of our master thinkers. Indeed, those thinkers are, in ways we scarcely understand, themselves the product of the Oppenheimer Principle.”

Or, as Ken Myers, a cultural critic that Jacobs and I both hold in high esteem, often puts it: ideas may have consequences, but ideas also have antecedents. These antecedents may be described as unarticulated assumptions derived from the bodily, emotional, and, yes, cognitive consequences of society’s political, economic, and technological infrastructure. I’m not sure if Jacobs would endorse this move, but I find it helpful to talk about these assumptions by borrowing the concept of “plausibility structures” first articulated by the sociologist Peter Berger.

For Berger, plausibility structures are those chiefly social realities that render certain ideas plausible, compelling, or meaningful apart from whatever truth value they might be independently or objectively assigned. Or, as Berger has frequently quipped, the factors that make it easier to be a Baptist in Texas than it would be in India.

Again, Berger has in mind interpersonal relationships and institutional practices, but I think we may usefully frame our technological milieu similarly. In other words, to say that our technological milieu, our material culture, constitutes a set of plausibility structures is to say that we derive tacit assumptions about what is possible, what is good, and what is valuable from merely going about our daily business with and through our tools. These implicit valuations and horizons of the possible are the unspoken context within which we judge and evaluate explicit ideas and propositions.

Consequently, Jacobs is quite right to insist that we understand the emergence of modernity as more than the triumph of a set of ideas about individuals, democracy, reason, progress, etc. And, as he puts it,

“Those of us who — out of theological conviction or out of some other conviction — have some serious doubts about the turn that modernity has taken have been far too neglectful of this material, economic, and technological history. We need to remedy that deficiency. And someone needs to write a really comprehensive and ambitious technological history of modernity. I don’t think I’m up to that challenge, but if no one steps up to the plate….”

All of this to say that I’m enthusiastic about the project Jacobs has presented and eager to see how it unfolds. I have a few more thoughts about it that I hope to post in the coming days–why, for example, Jacobs’ project is more appealing than Evgeny Morozov’s vision for tech criticism–but those may or may not materialize. Whatever the case, I think you’ll do well to tune in to Jacobs’ work on this as it progresses.

Et in Facebook ego

Today is the birthday of the friend whose death elicited this post two years ago. I republish it today for your consideration. 

In Nicolas Poussin’s mid-seventeenth century painting, Et in Arcadia ego, shepherds have stumbled upon an ancient tomb on which the titular words are inscribed. Understood to be the voice of death, the Latin phrase may be roughly translated, “Even in Arcadia there am I.” Because Arcadia symbolized a mythic pastoral paradise, the painting suggested the ubiquity of death. To the shepherds, the tomb was a memento mori: a reminder of death’s inevitability.

Nicolas Poussin, Et in Arcadia ego, 1637-38

Poussin was not alone among artists of the period in addressing the certainty of death. During the seventeenth and eighteenth centuries, vanitas art flourished. The designation stems from the Latin phrase vanitas vanitatum omnia vanitas, a recurring refrain throughout the biblical book of Ecclesiastes: “vanity of vanities, all is vanity,” in the King James translation. Paintings in the genre were still lifes depicting an assortment of objects that represented all that we might pursue in this life: love, power, fame, fortune, happiness. In their midst, however, one might also find a skull or an hourglass. These were symbols of death and the brevity of life. The idea, of course, was to encourage people to make the most of their living years.

Edwart Collier, 1690

For the most part, we don’t go in for this sort of thing anymore. Few people, if any, operate under the delusion that we might escape death (excepting, perhaps, the Singularity crowd), but we do a pretty good job of forgetting what we know about death. We keep death out of sight and, hence, out of mind. We’re certainly not going out of our way to remind ourselves of death’s inevitability. And, who knows, maybe that’s for the better. Maybe all of those skulls and hourglasses were morbidly unhealthy.

But while vanitas art has gone out of fashion, a new class of memento mori has emerged: the social media profile.

I’m one of those on-again, off-again Facebook users. Lately, I’ve been on again, and recently I noticed one of those birthday reminders Facebook places in the column where it puts all of the things it would like you to click on. It was for a high school friend with whom I had not spoken in over eight years. It was in that respect a very typical Facebook friendship: the sort that probably wouldn’t exist at all were it not for Facebook. And that’s not necessarily a knock on the platform. For the most part, I appreciate being able to maintain at least minimal ties to old friends. In this case, though, it demonstrated just how weak those ties can be.

Upon clicking over to his profile, I read a few odd notes, and very quickly it became disconcertingly clear that my friend had died over a year ago. Naturally, I was taken aback and saddened. He died while I was off Facebook, and the news had not reached me by any other channel. But there it was. Out of nowhere and without warning my browser was haunted by the very real presence of death. Memento mori.

Just a few days prior I logged on to Facebook and was greeted by the tragic news of a former student’s sudden passing. Because we had several mutual connections, photographs of the young man found their way into my news feed for several days. It was odd and disconcerting and terribly sad all at once. I don’t know what I think of social media mourning. It makes me uneasy, but I won’t criticize what might bring others solace. In any case, it is, like death itself, an unavoidable reality of our social media experience. Death is no digital dualist.

Facebook sometimes feels like a modern-day Arcadia. It is a carefully cultivated space in which life appears Edenic. The pictures are beautiful, the events exciting, the faces always smiling, the children always amusing, the couples always adoring. Some studies even suggest that comparing our own experience to these immaculately curated slices of life leads to envy, discontent, and unhappiness. Understandably so … if we assume that these slices of life are comprehensive representations of the lives people actually lead. Of course, they are not.

Lest we be fooled, however, there, alongside the pets and witty status updates and wedding pictures and birth announcements, we will increasingly find our virtual Arcadias haunted by the digital, disembodied presence of the dead. Our digital memento mori.

Et in Facebook ego.

Google Photos and the Ideal of Passive Pervasive Documentation

I’ve been thinking, recently, about the past and how we remember it. That this year marks the 20th anniversary of my high school graduation accounts for some of my reflective reminiscing. Flipping through my senior yearbook, I was surprised by what I didn’t remember. Seemingly memorable events alluded to by friends in their notes and more than one of the items I myself listed as “Best Memories” have altogether faded into oblivion. “I will never forget when …” is an apparently rash vow to make.

But my mind has not been entirely washed by Lethe’s waters. Memories, assorted and varied, do persist. Many of these are sustained and summoned by stuff, much of it useless, that I’ve saved for what we derisively call sentimental reasons. My wife and I are now in the business of unsentimentally trashing as much of this stuff as possible to make room for our first child. But it can be hard parting with the detritus of our lives because it is often the only tenuous link joining who we were to who we now are. It feels as if you risk losing a part of yourself forever by throwing away that last delicate link.

“Life without memory,” Luis Buñuel tells us, “is no life at all.” “Our memory,” he adds, “is our coherence, our reason, our feeling, even our action. Without it, we are nothing.” Perhaps this accounts for why tech criticism was born in a debate about memory. In the Phaedrus, Plato’s Socrates tells a cautionary tale about the invention of writing in which writing is framed as a technology that undermines the mind’s power to remember. What we can write down, we will no longer know for ourselves–or so Socrates worried. He was, of course, right. But, as we all know, this was an incomplete assessment of writing. Writing did weaken memory in the way Plato feared, but it did much else besides. It would not be the last time critics contemplated the effects of a new technology on memory.

I’ve not written nearly as much about memory as I once did, but it continues to be an area of deep interest. That interest was recently renewed not only by personal circumstances but also by the rollout of Google Photos, a new photo storage app with cutting edge sorting and searching capabilities. According to Steven Levy, Google hopes that it will be received as a “visual equivalent to Gmail.” On the surface, this is just another digital tool designed to store and manipulate data. But the data in question is, in this case, intimately tied up with our experience and how we remember it. It is yet another tool designed to store and manipulate memory.

When Levy asked Bradley Horowitz, the Google executive in charge of Photos, what problem Google Photos solves, Horowitz replied,

“We have a proliferation of devices and storage and bandwidth, to the point where every single moment of our life can be saved and recorded. But you don’t get a second life with which to curate, review, and appreciate the first life. You almost need a second vacation to go through the pictures of the safari on your first vacation. That’s the problem we’re trying to fix — to automate the process so that users can be in the moment. We also want to bring all of the power of computer vision and machine learning to improve those photos, create derivative works, to make suggestions…to really be your assistant.”

It shouldn’t be too surprising that the solution to the problem of pervasive documentation enabled by technology is a new technology that allows you to continue documenting with even greater abandon. Like so many technological fixes to technological problems, it’s just a way of doubling down on the problem. Nor is it surprising that he also suggested, without a hint of irony, that this would help users “be in the moment.”

But here is the most important part of the whole interview, emphasis mine:

“[…] so part of Google photos is to create a safe space for your photos and remove any stigma associated with saving everything. For instance, I use my phone to take pictures of receipts, and pictures of signs that I want to remember and things like that. These can potentially pollute my photo stream. We make it so that things like that recede into the background, so there’s no cognitive burden to actually saving everything.”

Replace saving with remembering and the potential significance of a tool like Google Photos becomes easier to apprehend. Horowitz is here confirming that users will need to upload their photos to Google’s Cloud if they want to take advantage of Google Photos’ most impressive features. He anticipates that there will be questions about privacy and security, hence the mention of safety. But the really important issue here is this business about saving everything.

I’m not entirely sure what to make of the stigma Horowitz is talking about, but the cognitive burden of “saving everything” is presumably the burden of sorting and searching. How do you find the one picture you’re looking for when you’ve saved thousands of pictures across a variety of platforms and drives? How do you begin to organize all of these pictures in any kind of meaningful way? Enter Google Photos and its uncanny ability to identify faces and group pictures into three basic categories–People, Places, and Things–as well as a variety of sub-categories such as “food,” “beach,” or “cars.” Now you don’t need that second life to curate your photos. Google does it for you. Now we may document our lives to our heart’s content without a second thought about whether or not we’ll ever go back to curate our unwieldy hoard of images.
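
To make that sorting a bit more concrete, here is a minimal sketch, in Python, of the kind of automated categorization described above. It is purely illustrative: the predict_labels function stands in for whatever computer-vision models Google actually runs, and the label-to-category mapping and file names are invented for the example.

```python
# A deliberately simplified, hypothetical sketch of automated photo categorization.
# Nothing here reflects Google's actual implementation; predict_labels() merely
# stands in for a trained image classifier, and its outputs are hard-coded.

from collections import defaultdict

# Invented mapping from low-level labels to the three top-level buckets.
TOP_LEVEL_CATEGORY = {
    "face": "People",
    "beach": "Places",
    "food": "Things",
    "car": "Things",
}

def predict_labels(photo_name):
    """Stand-in for an image classifier; returns canned labels for demonstration."""
    canned_predictions = {
        "safari_001.jpg": ["face"],
        "safari_002.jpg": ["beach"],
        "receipt_003.jpg": ["food"],
    }
    return canned_predictions.get(photo_name, [])

def categorize(photo_names):
    """Group photos into People / Places / Things with no curation by the user."""
    albums = defaultdict(list)
    for name in photo_names:
        for label in predict_labels(name):
            albums[TOP_LEVEL_CATEGORY.get(label, "Things")].append(name)
    return dict(albums)

if __name__ == "__main__":
    print(categorize(["safari_001.jpg", "safari_002.jpg", "receipt_003.jpg"]))
    # {'People': ['safari_001.jpg'], 'Places': ['safari_002.jpg'], 'Things': ['receipt_003.jpg']}
```

The point of the sketch is simply that the grouping happens without any act of curation on the user’s part; the cognitive burden Horowitz mentions is carried entirely by the classifier.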

I’ve argued elsewhere that we’ve entered an age of memory abundance, and the abundance of memories makes us indifferent to them. When memory is scarce, we treasure it and care deeply about preserving it. When we generate a surfeit of memory, our ability to care about it diminishes proportionately. We can no longer relate to how Roland Barthes treasured his mother’s photograph; we are more like Andy Warhol, obsessively recording all of his interactions and never once listening to the recordings. Plato was, after all, even closer to the mark than we realized. New technologies of memory reconfigure the affections as well as the intellect. But is it possible that Google Photos will prove this judgement premature? Has Google figured out how we may have our memory cake and eat it too?

I think not, and there’s a historical precedent that will explain why.

Ivan Illich, in his brilliant study of medieval reading and the evolution of the book, In the Vineyard of the Text, noted how emerging textual technologies reconfigured how readers related to what they read. It is a complex, multifaceted argument, and I won’t do justice to it here, but the heart of it is summed up in the title of Illich’s closing chapter, “From Book to Text.” After explaining what Illich meant by that formulation, I’m going to suggest that we consider an analogous development: from photograph to image.

Like photography, writing is, as Plato understood, a mnemonic technology. The book, or codex, is only one form the technology has taken, but it is arguably the most important form owing to its storage capacity and portability. Contrast the book to, for instance, a carved stone tablet or a scroll and you’ll immediately recognize the brilliance of the design. But the matter of sorting and searching remained a significant problem until the twelfth century. It was then that new features appeared to improve the book’s accessibility and user-friendliness, among them chapter titles, pagination, and the alphabetized index. Now one could access particular passages without having to read the whole work or, more to the point, memorize the passages or their location in the book (illuminated manuscripts were designed to aid with the latter).

My word choice in describing the evolution of the book above was, of course, calculated to make us see the book as a technology and also to make certain parallels to the case of digital photography more obvious. But what was the end result of all of this innovation? What did Illich mean by saying that the book became a text?

Borrowing a phrase Katherine Hayles deployed to describe a much later development, I’d say that Illich is getting at one example of how information lost its body. In other words, prior to these developments it was harder to imagine the text of a book as a free-floating reality that could be easily lifted and presented in a different format. The ideas, if you will, and the material that conveyed them–the message and medium–were intimately bound together; one could hardly imagine the two existing independently. This had everything to do with the embodied dimensions of the reading experience and the scarcity of books. Because there was no easy way to dip in and out of a book to look for a particular fragment and because one would likely encounter but one copy of a particular work, the work was experienced as a whole that lived within the particular pages of the book one held in hand.

The book had, until then, been read reverentially as a window on the world; it yielded what Illich termed monastic reading. The text was later, after the technical innovations of the twelfth century, read as a window on the mind of the author; it yielded scholastic reading. We might also characterize these as devotional reading and academic reading, respectively. Illich summed it up this way:

“The text could now be seen as something distinct from the book. It was an object that could be visualized even with closed eyes [….] The page lost the quality of soil in which words are rooted. The new text was a figment on the face of the book that lifted off into autonomous existence [….] Only its shadow appeared on the page of this or that concrete book. As a result, the book was no longer the window onto nature or god; it was no longer the transparent optical device through which a reader gains access to creatures or the transcendent.”

Illich had, a few pages earlier, put the matter more evocatively: “Modern reading, especially of the academic and professional type, is an activity performed by commuters or tourists; it is no longer that of pedestrians and pilgrims.”

I recount Illich’s argument because it illuminates the changes we are witnessing with regard to photography. Illich demonstrated two relevant principles. First, that small technical developments can have significant and lasting consequences for the experience and meaning of media. The move from analog to digital photography should naturally be granted priority of place, but subsequent developments such as those in face recognition software and automated categorization should not be underestimated. Second, that improvements in what we might today call retrieval and accessibility can generate an order of abstraction and detachment from the concrete embodiment of media. And this matters because the concrete embodiment, the book as opposed to the text, yields kinds and degrees of engagement that are unique to it.

Let me try to put the matter more directly and simultaneously apply it to the case of photography. Improving accessibility meant that readers could approach the physical book as the mere repository of mental constructs, which could be poached and gleaned at whim. Consequently, the book was something to be used to gain access to the text, which now appeared for the first time as an abstract reality; it ceased to be itself a unique and precious window on the world and its affective power was compromised.

Now, just as the book yielded to the text, so the photograph yields to the image. Imagine a 19th century woman gazing lovingly at a photograph of her son. The woman does not conceive of the photograph as one instantiation of the image of her son. Today, however, we who hardly ever hold photographs anymore can hardly help thinking in terms of images, which may be displayed on any number of different platforms, not to mention manipulated at whim. The image is an order of abstraction removed from the photograph, and it would be hard to imagine someone treasuring it in the same way that we might treasure an old photograph. Perhaps a thought experiment will drive this home. Try to imagine the emotional distance between the act of tearing up a photograph and that of deleting an image.

Now let’s come back to the problem Google Photos is intended to solve. Will automated sorting and categorization, along with the ability to search, succeed in making our documentation more meaningful? Moreover, will it overcome the problems associated with memory abundance? Doubtful. Instead, the tools will facilitate further abstraction and detachment. They are designed to encourage the production of even more documentary data and to further diminish our involvement in its production and storage. Consequently, we will continue to care less, not more, about particular images.

Of course, this hardly means the tools are useless or that images are meaningless. I’m certain that face recognition software, for instance, can and will be put to all sorts of uses, benign and otherwise, and that the reams of data users will feed Google Photos will only help to improve and refine the software. And it is also true that images can be made use of in ways that photographs never could. But perhaps that is the point. A photograph we might cherish; an image we tend to make use of. Unlike the useless stuff around which my memories accumulate and that I struggle to throw away, images are all use-value, and we don’t think twice about deleting them when they have no use.

Finally, Google’s answer to the problem of documentation, namely that it takes us out of the moment, as it were, is to encourage such pervasive and continual documentation that documenting is no longer experienced as a stepping out of the moment at all. The goal appears to be a state of continual passive documentation in which the distinction between experience and documentation blurs until the two are indistinguishable. The problem is not so much solved as altogether transcended. To experience life will be to document it. In so doing we are generating a second life, a phantom life that abides in the Cloud.

And perhaps we may, without stretching the bounds of plausibility too far, reconsider that rather ethereal, heavenly metaphor–the Cloud. As we generate this phantom life, this double of ourselves constituted by data, are we thereby hoping, half-consciously, to evade or at least cope with the unremitting passage of time and, ultimately, our mortality?

Resisting the Habits of the Algorithmic Mind

Algorithms, we are told, “rule our world.” They are ubiquitous. They lurk in the shadows, shaping our lives without our consent. They may revoke your driver’s license, determine whether you get your next job, or cause the stock market to crash. More worrisome still, they can also be the arbiters of lethal violence. No wonder one scholar has dubbed 2015 “the year we get creeped out by algorithms.” While some worry about the power of algorithms, others think we are in danger of overstating their significance or misunderstanding their nature. Some have even complained that we are treating algorithms like gods whose fickle, inscrutable wills control our destinies.

Clearly, it’s important that we grapple with the power of algorithms, real and imagined, but where do we start? It might help to disambiguate a few related concepts that tend to get lumped together when the word algorithm (or the phrase “Big Data”) functions more as a master metaphor than a concrete noun. I would suggest that we distinguish at least three realities: data, algorithms, and devices. Through the use of our devices we generate massive amounts of data, which would be useless were it not for analytical tools, algorithms prominent among them. It may be useful to consider each of these separately; at the very least, we should be mindful of the distinctions.

We should also pay some attention to the language we use to identify and understand algorithms. As Ian Bogost has forcefully argued, we should certainly avoid implicitly deifying algorithms by how we talk about them. But even some of our more mundane metaphors are not without their own difficulties. In a series of posts at The Infernal Machine, Kevin Hamilton considers the implications of the popular “black box” metaphor and how it encourages us to think about and respond to algorithms.

The black box metaphor tries to get at the opacity of algorithmic processes. Inputs are transformed into outputs, but most of us have no idea how the transformation was effected. More concretely, you may have been denied a loan or a job based on the determinations of a program running an algorithm, but how exactly that determination was made remains a mystery.
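
For readers who prefer a concrete picture of the metaphor, here is a minimal sketch. The loan-scoring model, its features, and its weights are all invented for illustration and bear no relation to any real system; the only point is that the applicant sees an input go in and a verdict come out, with everything in between hidden from view.

```python
# A toy illustration of the "black box": inputs go in, a verdict comes out, and the
# applicant has no view of the weights, threshold, or features that decided it.
# The model, its features, and the numbers are entirely made up.

class OpaqueScoringModel:
    """Stands in for a proprietary decision system whose internals are hidden."""

    def __init__(self):
        # Imagine these were learned from data the applicant never sees.
        self._weights = {"income": 0.4, "credit_history": 0.5, "zip_code": 0.1}
        self._threshold = 0.5

    def decide(self, applicant):
        score = sum(self._weights[k] * applicant.get(k, 0.0) for k in self._weights)
        return "approved" if score > self._threshold else "denied"

model = OpaqueScoringModel()
# From the outside, all the applicant ever learns is the single word that comes back.
print(model.decide({"income": 0.3, "credit_history": 0.2, "zip_code": 0.9}))  # "denied"
```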

In his discussion of the black box metaphor, Hamilton invites us to consider the following scenario:

“Let’s imagine a Facebook user who is not yet aware of the algorithm at work in her social media platform. The process by which her content appears in others’ feeds, or by which others’ material appears in her own, is opaque to her. Approaching that process as a black box, might well situate our naive user as akin to the Taylorist laborer of the pre-computer, pre-war era. Prior to awareness, she blindly accepts input and provides output in the manufacture of Facebook’s product. Upon learning of the algorithm, she experiences the platform’s process as newly mediated. Like the post-war user, she now imagines herself outside the system, or strives to be so. She tweaks settings, probes to see what she has missed, alters activity to test effectiveness. She grasps at a newly-found potential to stand outside this system, to command it. We have a tendency to declare this a discovery of agency—a revelation even.”

But how effective is this new way of approaching her engagement with Facebook, now informed by the black box metaphor? Hamilton thinks “this grasp toward agency is also the beginning of a new system.” “Tweaking to account for black-boxed algorithmic processes,” Hamilton suggests, “could become a new form of labor, one that might then inevitably find description by some as its own black box, and one to escape.” Ultimately, Hamilton concludes, “most of us are stuck in an ‘opt-in or opt-out’ scenario that never goes anywhere.”

If I read him correctly, Hamilton is describing an escalating, never-ending battle to achieve a variety of desired outcomes in relation to the algorithmic system, all of which involve securing some kind of independence from the system, which we now understand as something standing apart and against us. One of those outcomes may be understood as the state Evan Selinger and Woodrow Hartzog have called obscurity, “the idea that when information is hard to obtain or understand, it is, to some degree, safe.” “Obscurity,” in their view, “is a protective state that can further a number of goals, such as autonomy, self-fulfillment, socialization, and relative freedom from the abuse of power.”

Another desired outcome that fuels resistance to black box algorithms involves what we might sum up as the quest for authenticity. Whatever relative success algorithms achieve in predicting our likes and dislikes, our actions, our desires–such successes are often experienced as an affront to our individuality and autonomy. Ironically, the resulting battle against the algorithm often secures its relative victory by fostering what Frank Pasquale has called the algorithmic self, constantly modulating itself in response or reaction to the algorithms it encounters.

More recently, Quinn Norton expressed similar concerns from a slightly different angle: “Your internet experience isn’t the main result of algorithms built on surveillance data; you are. Humans are beautifully plastic, endlessly adaptable, and over time advertisers can use that fact to make you into whatever they were hired to make you be.”

Algorithms and the Banality of Evil

These concerns about privacy or obscurity on the one hand and agency or authenticity on the other are far from insignificant. Moving forward, though, I will propose another approach to the challenges posed by algorithmic culture, and I’ll do so with a little help from Joseph Conrad and Hannah Arendt.

In Conrad’s Heart of Darkness, as the narrator, Marlow, makes his way down the western coast of Africa toward the mouth of the Congo River in the service of a Belgian trading company, he spots a warship anchored not far from shore: “There wasn’t even a shed there,” he remembers, “and she was shelling the bush.”

“In the empty immensity of earth, sky, and water,” he goes on, “there she was, incomprehensible, firing into a continent …. and nothing happened. Nothing could happen.” “There was a touch of insanity in the proceeding,” he concluded. This curious and disturbing sight is the first of three such cases encountered by Marlow in quick succession.

Not long after he arrived at the Company’s station, Marlow heard a loud horn and then saw natives scurry away just before witnessing an explosion on the mountainside: “No change appeared on the face of the rock. They were building a railway. The cliff was not in the way of anything; but this objectless blasting was all the work that was going on.”

These two instances of seemingly absurd, arbitrary action are followed by a third. Walking along the station’s grounds, Marlow “avoided a vast artificial hole somebody had been digging on the slope, the purpose of which I found it impossible to divine.” As they say: two is a coincidence; three’s a pattern.

Nestled among these cases of mindless, meaningless action, we encounter as well another kind of related thoughtlessness. The seemingly aimless shelling he witnessed at sea, Marlow is assured, targeted an unseen camp of natives. Registering the incongruity, Marlow exclaims, “he called them enemies!” Later, Marlow recalls the shelling off the coastline when he observed the natives scampering clear of each blast on the mountainside: “but these men could by no stretch of the imagination be called enemies. They were called criminals, and the outraged law, like the bursting shells, had come to them, an insoluble mystery from the sea.”

Taken together these incidents convey a principle: thoughtlessness couples with ideology to abet violent oppression. We’ll come back to that principle in a moment, but, before doing so, consider two more passages from the novel. Just before that third case of mindless action, Marlow reflected on the peculiar nature of the evil he was encountering:

“I’ve seen the devil of violence, and the devil of greed, and the devil of hot desire; but, by all the stars! these were strong, lusty, red-eyed devils, that swayed and drove men–men, I tell you. But as I stood on this hillside, I foresaw that in the blinding sunshine of that land I would become acquainted with a flabby, pretending, weak-eyed devil of rapacious and pitiless folly.”

Finally, although more illustrations could be adduced, after an exchange with an insipid, chatty company functionary, who is also an acolyte of Mr. Kurtz, Marlow had this to say: “I let him run on, the papier-mâché Mephistopheles, and it seemed to me that if I tried I could poke my forefinger through him, and would find nothing inside but a little loose dirt, maybe.”

That sentence, to my mind, most readily explains why T.S. Eliot chose as an epigraph for his 1925 poem, “The Hollow Men,” a line from Heart of Darkness: “Mistah Kurtz – he dead.” This is likely an idiosyncratic reading, so take it with the requisite grain of salt, but I take Conrad’s papier-mâché Mephistopheles to be of a piece with Eliot’s hollow men, who having died are remembered “Not as lost / Violent souls, but only / As the hollow men / The stuffed men.”

For his part, Conrad understood that these hollow men, these flabby devils, were still capable of immense mischief. Within the world as it is administered by the Company, there is a great deal of doing but very little thinking or understanding. Under these circumstances, men are characterized by a thoroughgoing superficiality that renders them willing, if not altogether motivated, participants in the Company’s depredations. Conrad, in fact, seems to have intuited the peculiar dangers posed by bureaucratic anomie and anticipated something like what Hannah Arendt later sought to capture in her (in)famous formulation, “the banality of evil.”

If you are familiar with the concept of the banality of evil, you know that Arendt conceived of it as a way of characterizing the kind of evil embodied by Adolf Eichmann, a leading architect of the Holocaust, and you may now be wondering if I’m preparing to argue that algorithms will somehow facilitate another mass extermination of human beings.

Not exactly. I am circumspectly suggesting that the habits of the algorithmic mind are not altogether unlike the habits of the bureaucratic mind. (Adam Elkus makes a similar correlation here, but I think I’m aiming at a slightly different target.) Both are characterized by an unthinking automaticity, a narrowness of focus, and a refusal of responsibility that yields the superficiality or hollowness Conrad, Eliot, and Arendt all seem to be describing, each in their own way. And this superficiality or hollowness is too easily filled with mischief and cruelty.

While Eichmann in Jerusalem is mostly remembered for that one phrase (and also for the controversy the book engendered), “the banality of evil” appears, by my count, only once in the book. Arendt later regretted using the phrase, and it has been widely misunderstood. Nonetheless, I think there is some value to it, or at least to the condition that it sought to elucidate. Happily, Arendt returned to the theme in a later, unfinished work, The Life of the Mind.

Eichmann’s trial continued to haunt Arendt. In the Introduction, Arendt explained that the impetus for the lectures that would become The Life of the Mind stemmed from the Eichmann trial. She admits that in referring to the banality of evil she “held no thesis or doctrine,” but she now returns to the nature of evil embodied by Eichmann in a renewed attempt to understand it: “The deeds were monstrous, but the doer … was quite ordinary, commonplace, and neither demonic nor monstrous.” She might have added: “… if I tried I could poke my forefinger through him, and would find nothing inside but a little loose dirt, maybe.”

There was only one “notable characteristic” that stood out to Arendt: “it was not stupidity but thoughtlessness.” Arendt’s close friend, Mary McCarthy, felt that this word choice was unfortunate. “Inability to think” rather than thoughtlessness, McCarthy believed, was closer to the sense of the German word Gedankenlosigkeit.

Later in the Introduction, Arendt insisted “absence of thought is not stupidity; it can be found in highly intelligent people, and a wicked heart is not its cause; it is probably the other way round, that wickedness may be caused by absence of thought.”

Arendt explained that it was this “absence of thinking–which is so ordinary an experience in our everyday life, where we have hardly the time, let alone the inclination, to stop and think–that awakened my interest.” And it posed a series of related questions that Arendt sought to address:

“Is evil-doing (the sins of omission, as well as the sins of commission) possible in default of not just ‘base motives’ (as the law calls them) but of any motives whatever, of any particular prompting of interest or volition?”

“Might the problem of good and evil, our faculty for telling right from wrong, be connected with our faculty of thought?”

All told, Arendt arrived at this final formulation of the question that drove her inquiry: “Could the activity of thinking as such, the habit of examining whatever happens to come to pass or to attract attention, regardless of results and specific content, could this activity be among the conditions that make men abstain from evil-doing or even actually ‘condition’ them against it?”

It is with these questions in mind–questions, mind you, not answers–that I want to return to the subject with which we began, algorithms.

Outsourcing the Life of the Mind

Considered for a moment apart from data collection and the devices that enable it, algorithms are principally problem-solving tools. They solve problems that ordinarily require cognitive labor–thought, decision making, judgement. It is these very activities–thinking, willing, and judging–that structure Arendt’s work in The Life of the Mind. So, to borrow the language that Evan Selinger has deployed so effectively in his critique of contemporary technology, we might say that algorithms outsource the life of the mind. And, if Arendt is right, this outsourcing of the life of the mind is morally consequential.

The outsourcing problem is at the root of much of our unease with contemporary technology. Machines have always done things for us, and they are increasingly doing things for us and without us. Increasingly, the human element is displaced in favor of faster, more efficient, more durable, cheaper technology. And, increasingly, the displaced human element is the thinking, willing, judging mind. Of course, the party of the concerned is most likely the minority party. Advocates and enthusiasts rejoice at the marginalization or eradication of human labor in its physical, mental, emotional, and moral manifestations. They believe that the elimination of all of this labor will yield freedom, prosperity, and a golden age of leisure. Critics, meanwhile, and I count myself among them, struggle to articulate a compelling and reasonable critique of this scramble to outsource various dimensions of the human experience.

But perhaps we have ignored another dimension of the problem, one that the outsourcing critique itself might, possibly, encourage. Consider this: to say that algorithms are displacing the life of the mind is to unwittingly endorse a terribly impoverished account of the life of the mind. For instance, if I were to argue that the ability to “Google” whatever bit of information we happen to need when we need it leads to an unfortunate “outsourcing” of our memory, it may be that I am already giving up the game because I am implicitly granting that a real equivalence exists between all that is entailed by human memory and the ability to digitally store and access information. A moment’s reflection, of course, will reveal that human remembering involves considerably more than the mere retrieval of discrete bits of data. The outsourcing critique, then, valuable as it is, must also challenge the assumption that the outsourcing occurs without remainder.

Viewed in this light, the problem with outsourcing the life of the mind is that it encourages an impoverished conception of what constitutes the life of the mind in the first place. Outsourcing, then, threatens our ability to think not only because some of our “thinking” will be done for us; it will do so because, if we are not careful, we will be habituated into conceiving of the life of the mind on the model of the problem-solving algorithm. We would thereby surrender the kind of thinking that Arendt sought to describe and defend, thinking that might “condition” us against the varieties of evil that transpire in environments of pervasive thoughtlessness.

In our responses to the concerns raised by algorithmic culture, we tend to ask, What can we do? Perhaps, this is already to miss the point by conceiving of the matter as a problem to be solved by something like a technical solution. Perhaps the most important and powerful response is not an action we take but rather an increased devotion to the life of the mind. The phrase sounds quaint, or, worse, elitist. As Arendt meant it, it was neither. Indeed, Arendt was convinced that if thinking was somehow essential to moral action, it must be accessible to all: “If […] the ability to tell right from wrong should turn out to have anything to do with the ability to think, then we must be able to ‘demand’ its exercise from every sane person, no matter how erudite or ignorant, intelligent or stupid, he may happen to be.”

And how might we pursue the life of the mind? Perhaps the first, modest step in that direction is simply the cultivation of times and spaces for thinking, and perhaps also resisting the urge to check if there is an app for that.

Machines, Work, and the Value of People

Late last month, Microsoft released a “bot” that guesses your age based on an uploaded picture. The bot tended to be only marginally accurate and sometimes hilariously (or disconcertingly) wrong. What’s more, people quickly began having some fun with the program by uploading faces of actors playing fictional characters, such as Yoda or Gandalf. My favorite was Ian Bogost’s submission:

Shortly after the How Old bot had its fleeting moment of virality, Nathan Jurgenson tweeted the following:

This was an interesting observation, and it generated a few noteworthy replies. Jurgenson himself added, “much of the bigdata/algorithm debates miss how poor these often perform. many critiques presuppose & reify their untenable positivism.” He summed up this line of thought with this tweet: “so much ‘tech criticism’ starts first with uncritically buying all of the hype silicon valley spits out.”

Let’s pause here for a moment. All of this is absolutely true. Yet … it’s not all hype, not necessarily anyway. Let’s bracket the more outlandish claims made by the singularity crowd, of course. But take facial recognition software, for instance. It doesn’t strike me as wildly implausible that in the near future facial recognition programs will achieve a rather striking degree of accuracy.

Along these lines, I found Kyle Wrather’s replies to Jurgenson’s tweet particularly interesting. First, Wrather noted, “[How Old Bot] being wrong makes people more comfortable w/ facial recognition b/c it seems less threatening.” He then added, “I think people would be creeped out if we’re totally accurate. When it’s wrong, humans get to be ‘superior.'”

Wrather’s second comment points to an intriguing psychological dynamic. Certain technologies generate a degree of anxiety about the relative status of human beings or about what exactly makes human beings “special”–call it post-humanist angst, if you like.

Of course, not all technologies generate this sort of angst. When it first appeared, the airplane was greeted with awe and a little battiness (consider alti-man). But as far as I know, it did not result in any widespread fears about the nature and status of human beings. The seemingly obvious reason for this is that flying is not an ability that has ever defined what it means to be a human being.

It seems, then, that anxiety about new technologies is sometimes entangled with shifting assumptions about the nature or dignity of humanity. In other words, the fear that machines, computers, or robots might displace human beings may or may not materialize, but it does tell us something about how human nature is understood.

Is it that new technologies disturb existing, tacit beliefs about what it means to be a human, or is it the case that these beliefs arise in response to a new perceived threat posed by technology? I’m not entirely sure, but some sort of dialectical relationship is involved.

A few examples come to mind, and they track closely to the evolution of labor in Western societies.

During the early modern period, perhaps owing something to the Reformation’s insistence on the dignity of secular work, the worth of a human being gets anchored to their labor, most of which is, at this point in history, manual labor. The dignity of the manual laborer is later challenged by mechanization during the 18th and 19th centuries, and this results in a series of protest movements, most famously that of the Luddites.

Eventually, a new consensus emerges around the dignity of factory work, and this is, in turn, challenged by the advent of new forms of robotic and computerized labor in the mid-twentieth century.

Enter the so-called knowledge worker, whose short-lived ascendancy is presently threatened by advances in computers and AI.

I think this latter development helps explain our present fascination with creativity. It’s been over a decade since Richard Florida published The Rise of the Creative Class, but interest in and pontificating about creativity continues apace. What I’m suggesting is that this fixation on creativity is another recalibration of what constitutes valuable, dignified labor, which is also, less obviously perhaps, what is taken to constitute the value and dignity of the person. Manual labor and factory jobs give way to knowledge work, which now surrenders to creative work. As they say, nice work if you can get it.

Interestingly, each re-configuration not only elevated a new form of labor, but it also devalued the form of labor being displaced. Manual labor, factory work, even knowledge work, once accorded dignity and respect, are each reframed as tedious, servile, monotonous, and degrading just as they are being replaced. If a machine can do it, it suddenly becomes sub-human work.

(It’s also worth noting how displaced forms of work seem to re-emerge and regain their dignity in certain circles. I’m presently thinking of Matthew Crawford’s defense of manual labor and the trades. Consider as well this lecture by Richard Sennett, “The Decline of the Skills Society.”)

It’s not hard to find these rhetorical dynamics at play in the countless presently unfolding discussions of technology, labor, and what human beings are for. Take as just one example this excerpt from the recent New Yorker profile of the venture capitalist Marc Andreessen (emphasis mine):

Global unemployment is rising, too—this seems to be the first industrial revolution that wipes out more jobs than it creates. One 2013 paper argues that forty-seven per cent of all American jobs are destined to be automated. Andreessen argues that his firm’s entire portfolio is creating jobs, and that such companies as Udacity (which offers low-cost, online “nanodegrees” in programming) and Honor (which aims to provide better and better-paid in-home care for the elderly) bring us closer to a future in which everyone will either be doing more interesting work or be kicking back and painting sunsets. But when I brought up the raft of data suggesting that intra-country inequality is in fact increasing, even as it decreases when averaged across the globe—America’s wealth gap is the widest it’s been since the government began measuring it—Andreessen rerouted the conversation, saying that such gaps were “a skills problem,” and that as robots ate the old, boring jobs humanity should simply retool. “My response to Larry Summers, when he says that people are like horses, they have only their manual labor to offer”—he threw up his hands. “That is such a dark and dim and dystopian view of humanity I can hardly stand it!”

As always, it is important to ask a series of questions:  Who’s selling what? Who stands to profit? Whose interests are being served? Etc. With those considerations in mind, it is telling that leisure has suddenly and conveniently re-emerged as a goal of human existence. Previous fears about technologically driven unemployment have ordinarily been met by assurances that different and better jobs would emerge. It appears that pretense is being dropped in favor of vague promises of a future of jobless leisure. So, it seems we’ve come full circle to classical estimations of work and leisure: all work is for chumps and slaves. You may be losing your job, but don’t worry, work is for losers anyway.

So, to sum up: Some time ago, identity and a sense of self-worth got hitched to labor and productivity. Consequently, each new technological displacement of human work appears to those being displaced as an affront to their dignity as human beings. Those advancing new technologies that displace human labor do so by demeaning existing work as below our humanity and promising more humane work as a consequence of technological change. While this is sometimes true–some work that human beings have been forced to perform has been inhuman–deployed as a universal truth, it is little more than rhetorical cover for a significantly more complex and ambivalent reality.