Algorithms, we are told, “rule our world.” They are ubiquitous. They lurk in the shadows, shaping our lives without our consent. They may revoke your driver’s license, determine whether you get your next job, or cause the stock market to crash. More worrisome still, they can also be the arbiters of lethal violence. No wonder one scholar has dubbed 2015 “the year we get creeped out by algorithms.” While some worry about the power of algorithms, others think we are in danger of overstating their significance or misunderstanding their nature. Some have even complained that we are treating algorithms like gods whose fickle, inscrutable wills control our destinies.
Clearly, it’s important that we grapple with the power of algorithms, real and imagined, but where do we start? It might help to disambiguate a few related concepts that tend to get lumped together when the word algorithm (or the phrase “Big Data”) functions more as a master metaphor than a concrete noun. I would suggest that we distinguish at least three realities: data, algorithms, and devices. Through the use of our devices we generate massive amounts of data, which would be useless were it not for analytical tools, algorithms prominent among them. It may be useful to consider each of these separately; at least we should be mindful of the distinctions.
We should also pay some attention to the language we use to identify and understand algorithms. As Ian Bogost has forcefully argued, we should certainly avoid implicitly deifying algorithms by how we talk about them. But even some of our more mundane metaphors are not without their own difficulties. In a series of posts at The Infernal Machine, Kevin Hamilton considers the implications of the popular “black box” metaphor and how it encourages us to think about and respond to algorithms.
The black box metaphor tries to get at the opacity of algorithmic processes. Inputs are transformed into outputs, but most of us have no idea how the transformation was effected. More concretely, you may have been denied a loan or job based on the determinations of a program running an algorithm, but how exactly that determination was made remains a mystery.
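To make the metaphor concrete, here is a minimal sketch in Python of what such a black box looks like from the applicant’s side. The fields, weights, and threshold are all invented for illustration and are not drawn from any real scoring system:

```python
# A purely hypothetical "black box" loan decision. The applicant sees
# only the input (their data) and the output (a verdict); the weights
# and threshold below stand in for logic they are never shown.
def loan_decision(applicant: dict) -> str:
    score = (0.4 * applicant["income"] / 1000
             - 0.6 * applicant["missed_payments"]
             + 0.2 * applicant["years_at_address"])
    return "approved" if score > 25 else "denied"

print(loan_decision({"income": 52000, "missed_payments": 3, "years_at_address": 2}))
# prints: denied -- data in, verdict out, reasons withheld
```

From the inside the logic is trivial; from the outside it is inscrutable, and that asymmetry is precisely what the metaphor names.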
In his discussion of the black box metaphor, Hamilton invites us to consider the following scenario:
“Let’s imagine a Facebook user who is not yet aware of the algorithm at work in her social media platform. The process by which her content appears in others’ feeds, or by which others’ material appears in her own, is opaque to her. Approaching that process as a black box, might well situate our naive user as akin to the Taylorist laborer of the pre-computer, pre-war era. Prior to awareness, she blindly accepts input and provides output in the manufacture of Facebook’s product. Upon learning of the algorithm, she experiences the platform’s process as newly mediated. Like the post-war user, she now imagines herself outside the system, or strives to be so. She tweaks settings, probes to see what she has missed, alters activity to test effectiveness. She grasps at a newly-found potential to stand outside this system, to command it. We have a tendency to declare this a discovery of agency—a revelation even.”
But how effective is this new way of approaching her engagement with Facebook, now informed by the black box metaphor? Hamilton thinks “this grasp toward agency is also the beginning of a new system.” “Tweaking to account for black-boxed algorithmic processes,” Hamilton suggests, “could become a new form of labor, one that might then inevitably find description by some as its own black box, and one to escape.” Ultimately, Hamilton concludes, “most of us are stuck in an ‘opt-in or opt-out’ scenario that never goes anywhere.”
If I read him correctly, Hamilton is describing an escalating, never-ending battle to achieve a variety of desired outcomes in relation to the algorithmic system, all of which involve securing some kind of independence from the system, which we now understand as something standing apart from and against us. One of those outcomes may be understood as the state Evan Selinger and Woodrow Hartzog have called obscurity, “the idea that when information is hard to obtain or understand, it is, to some degree, safe.” “Obscurity,” in their view, “is a protective state that can further a number of goals, such as autonomy, self-fulfillment, socialization, and relative freedom from the abuse of power.”
Another desired outcome that fuels resistance to black box algorithms involves what we might sum up as the quest for authenticity. Whatever relative success algorithms achieve in predicting our likes and dislikes, our actions, our desires–such successes are often experienced as an affront to our individuality and autonomy. Ironically, the resulting battle against the algorithm often secures their relative victory by fostering what Frank Pasquale has called the algorithmic self, constantly modulating itself in response to the algorithms it encounters.
More recently, Quinn Norton expressed similar concerns from a slightly different angle: “Your internet experience isn’t the main result of algorithms built on surveillance data; you are. Humans are beautifully plastic, endlessly adaptable, and over time advertisers can use that fact to make you into whatever they were hired to make you be.”
Algorithms and the Banality of Evil
These concerns about privacy or obscurity on the one hand and agency or authenticity on the other are far from insignificant. Moving forward, though, I will propose another approach to the challenges posed by algorithmic culture, and I’ll do so with a little help from Joseph Conrad and Hannah Arendt.
In Conrad’s Heart of Darkness, as the narrator, Marlow, makes his way down the western coast of Africa toward the mouth of the Congo River in the service of a Belgian trading company, he spots a warship anchored not far from shore: “There wasn’t even a shed there,” he remembers, “and she was shelling the bush.”
“In the empty immensity of earth, sky, and water,” he goes on, “there she was, incomprehensible, firing into a continent …. and nothing happened. Nothing could happen.” “There was a touch of insanity in the proceeding,” he concluded. This curious and disturbing sight is the first of three such cases encountered by Marlow in quick succession.
Not long after he arrived at the Company’s station, Marlow heard a loud horn and then saw natives scurry away just before witnessing an explosion on the mountainside: “No change appeared on the face of the rock. They were building a railway. The cliff was not in the way of anything; but this objectless blasting was all the work that was going on.”
These two instances of seemingly absurd, arbitrary action are followed by a third. Walking along the station’s grounds, Marlow “avoided a vast artificial hole somebody had been digging on the slope, the purpose of which I found it impossible to divine.” As they say: two is a coincidence; three’s a pattern.
Nestled among these cases of mindless, meaningless action, we encounter as well another kind of related thoughtlessness. The seemingly aimless shelling he witnessed at sea, Marlow is assured, targeted an unseen camp of natives. Registering the incongruity, Marlow exclaims, “he called them enemies!” Later, Marlow recalls the shelling off the coastline when he observed the natives scampering clear of each blast on the mountainside: “but these men could by no stretch of the imagination be called enemies. They were called criminals, and the outraged law, like the bursting shells, had come to them, an insoluble mystery from the sea.”
Taken together these incidents convey a principle: thoughtlessness couples with ideology to abet violent oppression. We’ll come back to that principle in a moment, but, before doing so, consider two more passages from the novel. Just before that third case of mindless action, Marlow reflected on the peculiar nature of the evil he was encountering:
“I’ve seen the devil of violence, and the devil of greed, and the devil of hot desire; but, by all the stars! these were strong, lusty, red-eyed devils, that swayed and drove men–men, I tell you. But as I stood on this hillside, I foresaw that in the blinding sunshine of that land I would become acquainted with a flabby, pretending, weak-eyed devil of rapacious and pitiless folly.”
Finally, although more illustrations could be adduced, after an exchange with an insipid, chatty company functionary, who is also an acolyte of Mr. Kurtz, Marlow had this to say: “I let him run on, the papier-mâché Mephistopheles, and it seemed to me that if I tried I could poke my forefinger through him, and would find nothing inside but a little loose dirt, maybe.”
That sentence, to my mind, most readily explains why T.S. Eliot chose as an epigraph for his 1925 poem, “The Hollow Men,” a line from Heart of Darkness: “Mistah Kurtz – he dead.” This is likely an idiosyncratic reading, so take it with the requisite grain of salt, but I take Conrad’s papier-mâché Mephistopheles to be of a piece with Eliot’s hollow men, who having died are remembered “Not as lost
Violent souls, but only
As the hollow men
The stuffed men.”
For his part, Conrad understood that these hollow men, these flabby devils were still capable of immense mischief. Within the world as it is administered by the Company, there is a great deal of doing but very little thinking or understanding. Under these circumstances, men are characterized by a thoroughgoing superficiality that renders them willing, if not altogether motivated, participants in the Company’s depredations. Conrad, in fact, seems to have intuited the peculiar dangers posed by bureaucratic anomie and anticipated something like what Hannah Arendt later sought to capture in her (in)famous formulation, “the banality of evil.”
If you are familiar with the concept of the banality of evil, you know that Arendt conceived of it as a way of characterizing the kind of evil embodied by Adolf Eichmann, a leading architect of the Holocaust, and you may now be wondering if I’m preparing to argue that algorithms will somehow facilitate another mass extermination of human beings.
Not exactly. I am circumspectly suggesting that the habits of the algorithmic mind are not altogether unlike the habits of the bureaucratic mind. (Adam Elkus makes a similar correlation here, but I think I’m aiming at a slightly different target.) Both are characterized by an unthinking automaticity, a narrowness of focus, and a refusal of responsibility that yields the superficiality or hollowness Conrad, Eliot, and Arendt all seem to be describing, each in their own way. And this superficiality or hollowness is too easily filled with mischief and cruelty.
While Eichmann in Jerusalem is mostly remembered for that one phrase (and also for the controversy the book engendered), “the banality of evil” appears, by my count, only once in the book. Arendt later regretted using the phrase, and it has been widely misunderstood. Nonetheless, I think there is some value to it, or at least to the condition that it sought to elucidate. Happily, Arendt returned to the theme in a later, unfinished work, The Life of the Mind.
Eichmann’s trial continued to haunt Arendt. In the Introduction, Arendt explained that the impetus for the lectures that would become The Life of the Mind stemmed from the Eichmann trial. She admits that in referring to the banality of evil she “held no thesis or doctrine,” but she now returns to the nature of evil embodied by Eichmann in a renewed attempt to understand it: “The deeds were monstrous, but the doer … was quite ordinary, commonplace, and neither demonic nor monstrous.” She might have added: “… if I tried I could poke my forefinger through him, and would find nothing inside but a little loose dirt, maybe.”
There was only one “notable characteristic” that stood out to Arendt: “it was not stupidity but thoughtlessness.” Arendt’s close friend, Mary McCarthy, felt that this word choice was unfortunate. “Inability to think” rather than thoughtlessness, McCarthy believed, was closer to the sense of the German word Gedankenlosigkeit.
Later in the Introduction, Arendt insisted “absence of thought is not stupidity; it can be found in highly intelligent people, and a wicked heart is not its cause; it is probably the other way round, that wickedness may be caused by absence of thought.”
Arendt explained that it was this “absence of thinking–which is so ordinary an experience in our everyday life, where we have hardly the time, let alone the inclination, to stop and think–that awakened my interest.” And it posed a series of related questions that Arendt sought to address:
“Is evil-doing (the sins of omission, as well as the sins of commission) possible in default of not just ‘base motives’ (as the law calls them) but of any motives whatever, of any particular prompting of interest or volition?”
“Might the problem of good and evil, our faculty for telling right from wrong, be connected with our faculty of thought?”
All told, Arendt arrived at this final formulation of the question that drove her inquiry: “Could the activity of thinking as such, the habit of examining whatever happens to come to pass or to attract attention, regardless of results and specific content, could this activity be among the conditions that make men abstain from evil-doing or even actually ‘condition’ them against it?”
It is with these questions in mind–questions, mind you, not answers–that I want to return to the subject with which we began, algorithms.
Outsourcing the Life of the Mind
Momentarily considered apart from data collection and the devices that enable it, algorithms are principally problem solving tools. They solve problems that ordinarily require cognitive labor–thought, decision making, judgement. It is these very activities–thinking, willing, and judging–that structure Arendt’s work in The Life of the Mind. So, to borrow the language that Evan Selinger has deployed so effectively in his critique of contemporary technology, we might say that algorithms outsource the life of the mind. And, if Arendt is right, this outsourcing of the life of the mind is morally consequential.
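To see what this outsourcing looks like at its most basic, consider a toy sketch in Python (the items and scores are entirely invented): the deliberative question “what is worth my attention?” collapses into sorting by a predicted number.

```python
# A toy illustration of outsourced judgment: "what should I read?"
# becomes "which item maximizes a predicted score?" No reflection,
# weighing, or justification is involved -- only a sort and a pick.
articles = [
    {"title": "Long essay on Arendt", "predicted_engagement": 0.31},
    {"title": "Celebrity listicle", "predicted_engagement": 0.87},
    {"title": "Local news report", "predicted_engagement": 0.54},
]

feed = sorted(articles, key=lambda a: a["predicted_engagement"], reverse=True)
print(feed[0]["title"])  # prints: Celebrity listicle
```

The point is not that such code is sinister in itself, but that thinking, willing, and judging have no purchase anywhere in it.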
The outsourcing problem is at the root of much of our unease with contemporary technology. Machines have always done things for us, and they are increasingly doing things for us and without us. Increasingly, the human element is displaced in favor of faster, more efficient, more durable, cheaper technology. And, increasingly, the displaced human element is the thinking, willing, judging mind. Of course, the party of the concerned is most likely the minority party. Advocates and enthusiasts rejoice at the marginalization or eradication of human labor in its physical, mental, emotional, and moral manifestations. They believe that the elimination of all of this labor will yield freedom, prosperity, and a golden age of leisure. Critics, meanwhile, and I count myself among them, struggle to articulate a compelling and reasonable critique of this scramble to outsource various dimensions of the human experience.
But perhaps we have ignored another dimension of the problem, one that the outsourcing critique itself might, possibly, encourage. Consider this: to say that algorithms are displacing the life of the mind is to unwittingly endorse a terribly impoverished account of the life of the mind. For instance, if I were to argue that the ability to “Google” whatever bit of information we happen to need when we need it leads to an unfortunate “outsourcing” of our memory, it may be that I am already giving up the game because I am implicitly granting that a real equivalence exists between all that is entailed by human memory and the ability to digitally store and access information. A moment’s reflection, of course, will reveal that human remembering involves considerably more than the mere retrieval of discrete bits of data. The outsourcing critique, then, valuable as it is, must also challenge the assumption that the outsourcing occurs without remainder.
Viewed in this light, the problem with outsourcing the life of the mind is that it encourages an impoverished conception of what constitutes the life of the mind in the first place. Outsourcing, then, threatens our ability to think not only because some of our “thinking” will be done for us, but also because, if we are not careful, we will be habituated into conceiving of the life of the mind on the model of the problem-solving algorithm. We would thereby surrender the kind of thinking that Arendt sought to describe and defend, thinking that might “condition” us against the varieties of evil that transpire in environments of pervasive thoughtlessness.
In our responses to the concerns raised by algorithmic culture, we tend to ask, What can we do? Perhaps this is already to miss the point by conceiving of the matter as a problem to be solved by something like a technical solution. Perhaps the most important and powerful response is not an action we take but rather an increased devotion to the life of the mind. The phrase sounds quaint, or, worse, elitist. As Arendt meant it, it was neither. Indeed, Arendt was convinced that if thinking was somehow essential to moral action, it must be accessible to all: “If […] the ability to tell right from wrong should turn out to have anything to do with the ability to think, then we must be able to ‘demand’ its exercise from every sane person, no matter how erudite or ignorant, intelligent or stupid, he may happen to be.”
And how might we pursue the life of the mind? Perhaps the first, modest step in that direction is simply the cultivation of times and spaces for thinking, and perhaps also resisting the urge to check if there is an app for that.
MY GAWD, Give me time, please, to look at this blog more closely, more regularly, and please a fraction of the mind power I had in grad school — so that I can truly evaluate, and learn/benefit from, all the mind behind it has to offer … me. (Yes, just me for now.)
Yes, this is sort of a prayer. I think you’re big into/onto something here. I will pay you whatever attention I can.
I feel the same way! It’s just long enough to be considered a ‘longread’ (what a dreadful word) by web standards. It requires me to ‘switch modes’ from my normal internet reading.
Thanks for the insights, Michael.
Cheers!
That whole “longread” business makes sense in certain contexts, including certain blogs, but just as bulleted lists often lead to more questions rather than help clarify/simplify (speedify) anything, web-friendly “shortreads” don’t always work. They’re often complete travesties, glossing over significant complexities in their subject matter or, worse, getting it completely wrong. The amount we’re willing to “really read” is already ridiculously, disproportionately small compared to the amount of information we’re hit with every day. Continuing along this path, we become increasingly vulnerable, like baby birds who require their mother to retrieve, deliver, chew, and partially digest their food before they can eat it. Only, the web is not your dear old ma! Although her focus is “storytelling,” Maria Popova’s essay “Wisdom in the age of information and how to navigate the open sea of knowledge,” beautifully animated here, makes the case well, if you have the time to check it out ;): https://www.facebook.com/brainpickings.mariapopova?ref=mf
I keep struggling to make the same argument against the utopians, although less eruditely, and I think it really does come back to there being no real quick fix for self-improvement. In the same way that pills and tech can’t really replace, only supplement, better diet and more exercise, there’s no real tech substitute for mental and spiritual growth. It takes work, focus, and effort, and while you can supplement that with tech, and maybe pills, there’s just no getting around needing to make an effort to grow mentally or physically.
Right. The key is to show that there are goods for which the labor of attaining them is essential, not accidental, to their goodness. A hard case to make when contrary assumptions (efficiency, expediency, interchangeability of means, etc.) are part of the taken-for-granted background of thought.
The “Life” of the mind?
First let’s figure out where the brain ends and the mind begins. As far as I can tell it’s all nothing but chemistry and electricity. All the rest is stuff we make up using the chemistry and electricity in ways we don’t really comprehend. It brings Ouroboros to “mind“.
I wonder if our prehistoric relatives were troubled by algorithms. Seems that maybe the human brain is the mother of all black boxes.
Yes, the human brain = mother of all black boxes. That seems right on. But I really don’t think brain vs. mind arguments have any place here. Looking back on the whole piece now, though, I’m thinking THINKING (wanting to avoid it, not making enough time for it, etc.) might not be the main problem, just as the solutions we arrive at algorithmically do not always solve anything. Seems to me the thing people could cultivate more of, and which might help us “resist the habits…,” is tolerance of uncertainty — both the kind that may eventually resolve and the kind that is eternal.
I’d suggest that the “tolerance of uncertainty“, and even the expectation of it, was a key to survival for our ancient predecessors. The less tolerant or aware of it we become, as a species, the more imminent our extinction. This lack of respect for uncertainty is a significant aspect of our ridiculously arrogant anthropocentricity. Even now it leads us to the brink.
“Not the slightest scrap of hard evidence, either morphological or genetic, suggests that Homo sapiens is not, like all animals, a natural by-product of genetic and Darwinian evolution. We should therefore assume that we, like they, are uncontaminated by any supra-natural influences. We may well be excellent communicators and tool-makers, and also the most self-aware, mystical and malicious animals on Earth, but overwhelming evidence shows that all these distinctions are of degree, not of kind. And yet the myth lives on.” (source)
I’m in glee-less agreement with a lot of this, but don’t see where it falls in the original argument. Perhaps I led us off course. But then you may be headed somewhere even further off. It’s a wash. I like people, believe (most) people mean and want to do well, and that many of our biggest mistakes/errors in perception (which may lead to things that look like “lack of respect for uncertainty … a significant aspect of our ridiculously arrogant anthropocentricity”) come from this decided intolerance AND the psychological/emotional/spiritual anguish (however those “concepts” square with you) that comes from it. And both of these, I’ll bet, are distinctly human — if only by degree, by a very high degree. There’s plenty of evidence (and obvious examples) that other animals apprehend uncertainty and even work against or to resolve it in a systematic/algorithmic manner — often together, in social networks/groups. But I doubt that they suffer that loaded discomfort w/ ambiguity, WHICH IS the seat (I feel sure) of the best and worst of humanity.
Michael,
Your considerations here lead me back to questions about education, and what it means to master an idea or argument vs. simply applying it without understanding. I found some discussion along this line in Carr’s “The Glass Cage” chapter 4, “The Degeneration Effect”. He starts with Whitehead’s famous quote:
The common notion that “we should cultivate the habit of thinking of what we are doing,” he wrote, is “profoundly erroneous.” (p. 65)
He then discusses some problems with this perspective such as how it may lead to a weakening of mental abilities and lack of flexibility.
I think back to some of my mathematics education, and I had a great calculus teacher in high school who taught us how to do integrals and derivatives, and to really understand what they meant. We learned many different tricks and had to do a large number of exercises, all of which were carefully graded. Today, I don’t feel bad if I use a tool like Mathematica to tell me what the integral from 0 to pi of sec^3(x) is. I already worked this out many times by hand, and with enough time could do it again.
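A minimal sketch of the same offloading in Python, with SymPy standing in for the Mathematica workflow Boaz describes (the library choice is a substitution of mine; shown here as the indefinite integral, the form of the classroom exercise):

```python
# Symbolic integration offloaded to a computer algebra system.
# SymPy here stands in for Mathematica; the machine produces the
# antiderivative of sec^3(x) that students once derived by hand.
from sympy import symbols, sec, integrate, pprint

x = symbols("x")
antiderivative = integrate(sec(x) ** 3, x)
pprint(antiderivative)  # the "work" arrives fully finished
```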
I know that the ability to use mathematical algorithms responsibly is rather a smaller domain than the ability to use algorithms controlling finance or credit reports ethically, but I think one could draw a parallel. A hollow algorithmic thinker is one who could not reproduce the arguments leading to the algorithm at hand. The algorithm is external to them.
I suppose I’m trying to suggest that there can be ethical algorithmic minds, and for me such an ethics is connected to the ability to understand the algorithms in detail, or at least to be able to point to a literature and history from which they derive.
regards,
Boaz
Boaz,
This sounds like a useful proposal. I’d agree that intimate knowledge of how a thing works can be essential to using it well, in the ethical sense. I wonder, though, how realistic that may be in the case of algorithms, due to both their complexity and the degree to which they may be veiled for proprietary reasons. In truth, I don’t know enough to answer those questions.
Yes, but for the most part, algorithms have been invented by people, so someone (usually many people) understands each one. Using algorithms you don’t understand is a kind of greed and over-reaching that risks implementing someone else’s agenda, or causing unintended harm. So this can serve as a guide as to when a system is becoming possibly unethical: when the users no longer understand it.
This post has me thinking about our present algorithm fetish as a variety of gnosticism. It denies the body and bodily experience in favor of a glorified, disembodied, secret knowledge. And, as we know, Americans love our gnosticisms (in no small part because they disguise and thereby assist embodied oppressions). The detail work is important, and thank you for laying out the details above with such care. History is embodied experience, so we need to understand the particular details of each new American gnostic gospel.
I think that’s a useful analogy. The opaque algorithm is the new secret gnosis.
Reblogged this on Myriad Ways and commented:
Michael Sacasas thoughtfully suggests that our reliance on machines to make decisions for us may lead us to outsource our moral judgement, with some Heart of Darkness and Hannah Arendt to make his points, and a few good examples from recent media sources to illuminate them.
Thanks for the thoughtful piece. There’s a lot to work with here, but the part that strikes me (right now) is this: “Advocates and enthusiasts rejoice at the marginalization or eradication of human labor in its physical, mental, emotional, and moral manifestations. They believe that the elimination of all of this labor will yield freedom, prosperity, and a golden age of leisure.” What I’m thinking is that the enthusiasts of which you speak typically fall into two camps: VC folks who are only interested in a golden age to the extent that it continues to line their pockets; and typical Silicon Valley folks who seek to create the golden age mainly for folks just like them (white, by and large male, dare I say “privileged”), and who largely do think that all problems do have a tech solution. The further outside of those limited circles one falls, the more the golden age of leisure will likely mean more unemployment and more people with lives structured by the on demand economy and persistent surveillance. It just so happens that these are the same people right now whose precarious existence makes a life of the mind so much more difficult to attain. Tech solutions as they are currently structured will do little to change that.