Evaluating the Promise of Technological Outsourcing

“It is crucial for a resilient democracy that we better understand how these powerful, ubiquitous websites are changing the way we think, interact and behave.” The websites in question are chiefly Google and Facebook. The admonition to better understand their impact on our thinking and civic deliberations comes from an article in The Guardian by Evan Selinger and Brett Frischmann, “Why it’s dangerous to outsource our critical thinking to computers.”

Selinger and Frischmann are the authors of one of the forthcoming books I am most eagerly anticipating, Being Human in the 21st Century, to be published by Cambridge University Press. I’ve frequently cited Selinger’s outsourcing critique of digital technology (e.g., here and here), which the authors will be expanding and deepening in this study. In short, Selinger has explored how a variety of apps and devices outsource labor that is essential or fundamental to our humanity. It’s an approach that immediately resonated with me, primed as I had been for it by Albert Borgmann’s work. (You can read about Borgmann in the latter link above and here.)

In this case, the crux of Selinger and Frischmann’s critique can be found in these two key paragraphs:

Facebook is now trying to solve a problem it helped create. Yet instead of using its vast resources to promote media literacy, or encouraging users to think critically and identify potential problems with what they read and share, Facebook is relying on developing algorithmic solutions that can rate the trustworthiness of content.

This approach could have detrimental, long-term social consequences. The scale and power with which Facebook operates means the site would effectively be training users to outsource their judgment to a computerised alternative. And it gives even less opportunity to encourage the kind of 21st-century digital skills – such as reflective judgment about how technology is shaping our beliefs and relationships – that we now see to be perilously lacking.

Their concern, then, is that we may be encouraged to outsource an essential skill to a device or application that promises to do the work for us. In this case, the skill we are tempted to outsource is a critical component of a healthy citizenry. As they put it, “Democracies don’t simply depend on well-informed citizens – they require citizens to be capable of exerting thoughtful, independent judgment.”

As I’m sure Selinger and Frischmann would agree, this outsourcing dynamic is one of the dominant features of the emerging techno-social landscape, and we should work hard to understand its consequences.

As some of you may remember, I’m fond of questions. They are excellent tools for thinking, including thinking about the ethical implications of technology. “Questioning is the piety of thought,” Heidegger once claimed in a famous essay about technology. With that in mind I’ll work my way to a few questions we can ask of outsourcing technologies.

My approach will take its point of departure from Marshall McLuhan’s Laws of Media, sometimes called the Four Effects or McLuhan’s tetrad. These four effects were offered by McLuhan as a complement to Aristotle’s Four Causes, and they were presented as a paradigm by which we might evaluate the consequences of both intellectual and material things, ideas and tools.

The four effects were Retrieval, Reversal, Obsolescence, and Enhancement. Here is the series of questions McLuhan and his son, Eric McLuhan, offered to unpack them:

A. “What recurrence or RETRIEVAL of earlier actions and services is brought into play simultaneously by the new form? What older, previously obsolesced ground is brought back and inheres in the new form?”

B. “When pushed to the limits of its potential, the new form will tend to reverse what had been its original characteristics. What is the REVERSAL potential of the new form?”

C. “If some aspect of a situation is enlarged or enhanced, simultaneously the old condition or un-enhanced situation is displaced thereby. What is pushed aside or OBSOLESCED by the new ‘organ’?”

D. “What does the artefact ENHANCE or intensify or make possible or accelerate? This can be asked concerning a wastebasket, a painting, a steamroller, or a zipper, as well as about a proposition in Euclid or a law of physics. It can be asked about any word or phrase in any language.”

These are all useful questions, but for our purposes the focus will be on the third effect, Obsolescence. It’s in this class of effects that I think we can locate what Selinger calls digital outsourcing. I began by introducing all four, however, so that we wouldn’t be tempted to think that displacement or outsourcing is the only dynamic to which we should give our attention.

When McLuhan invites us to ask what a new technology renders obsolete, we may immediately imagine older technologies that are set aside in favor of the new. Following Borgmann, however, we can also frame the question as a matter of human labor or involvement. In other words, it is not only older tools that we set aside but also human faculties, skills, and subjective engagement with the world–these, too, can be displaced or outsourced by new tools. The point, of course, is not to avoid every form of technological displacement; this would be both impossible and undesirable. Rather, what we need is a better way of thinking about and evaluating these displacements so that we might, when possible, make wise choices about our use of technology.

So we can begin to elaborate McLuhan’s third effect with this question:

1. What kind of labor does the tool/device/app displace? 

This question yields at least five possible responses:

a. Physical labor, the work of the body
b. Cognitive labor, the work of the mind
c. Emotional labor, the work of the heart
d. Ethical labor, the work of the conscience
e. Volitional labor, the work of the will

The schema implied by these five categories is, of course, like all such schemas, too neat. Take it as a heuristic device.

Other questions follow that help clarify the stakes. After all, what we’re after is not only a taxonomy but also a framework for evaluation.

2. What is the specific end or goal at which the displaced labor is aimed?

In other words, what am I trying to accomplish by using the technology in question? But the explicit objective I set out to achieve may not be the only effect worth considering; there are implicit effects as well. Some of these implicit effects may be subjective and others may be social; in either case they are not always evident and may, in fact, be difficult to perceive. For example, in using GPS, navigating from Point A to Point B is the explicit objective. However, the use of GPS may also impact my subjective experience of place, and this may carry political implications. So we should also consider a corollary question:

2a. Are there implicit effects associated with the displaced labor?

Consider the work of learning: If the work of learning is ultimately subordinate to becoming a certain kind of person, then it matters very much how we go about learning. This is because the manner in which we acquire knowledge constitutes a kind of practice that, over the long haul, shapes our character and disposition in non-trivial ways. Acquiring knowledge through apprenticeship shapes people in one way, acquiring it through extensive print reading in another, and through web-based learning in still another. The practice that constitutes our learning will instill certain habits, virtues, and, potentially, vices; it will shape the kind of person we are becoming.

3. Is the labor we are displacing essential or accidental to the achievement of that goal?

As I’ve written before, when we think of ethical and emotional labor, it’s hard to separate the labor itself from the good that is sought or the end that is pursued. For example, someone who pays another person to perform acts of charity on their behalf has undermined part of what might make such acts virtuous. An objective outcome may have been achieved, but at the expense of the subjective experience that would constitute the action as ethically virtuous.

A related question arises when we remember the implicit effects we discussed above:

3a. Is the labor essential or accidental to the implicit effects associated with the displaced labor?

4. What skills are sustained by the labor being displaced? 

4a. Are these skills valuable for their own sake and/or transferable to other domains?

These two questions seem more straightforward, so I will say less about them. The key point is essentially the one made by Selinger and Frischmann in the article with which we began: the kind of critical thinking that democracies require of their citizens should be actively cultivated. Outsourcing that work to an algorithm may, in fact, weaken the very skill it seeks to support.

These questions should help us think more clearly about the promise of technological outsourcing. They may also help us to think more clearly about what we have been doing all along. After all, new technologies often cast old experiences in new light. Even when we are wary or critical of the technologies in question, we may still find that their presence illuminates aspects of our experience by inviting us to think about what we had previously taken for granted.

Machines, Work, and the Value of People

Late last month, Microsoft released a “bot” that guesses your age based on an uploaded picture. The bot tended to be only marginally accurate and sometimes hilariously (or disconcertingly) wrong. What’s more, people quickly began having some fun with the program by uploading faces of actors playing fictional characters, such as Yoda or Gandalf. My favorite was Ian Bogost’s submission:

Shortly after the How Old bot had its fleeting moment of virality, Nathan Jurgenson tweeted the following:

This was an interesting observation, and it generated a few interesting replies. Jurgenson himself added, “much of the bigdata/algorithm debates miss how poor these often perform. many critiques presuppose & reify their untenable positivism.” He summed up this line of thought with this tweet: “so much ‘tech criticism’ starts first with uncritically buying all of the hype silicon valley spits out.”

Let’s pause here for a moment. All of this is absolutely true. Yet … it’s not all hype, not necessarily anyway. Let’s bracket the more outlandish claims made by the singularity crowd, of course. But take facial recognition software, for instance. It doesn’t strike me as wildly implausible that in the near future facial recognition programs will achieve a rather striking degree of accuracy.

Along these lines, I found Kyle Wrather’s replies to Jurgenson’s tweet particularly interesting. First, Wrather noted, “[How Old Bot] being wrong makes people more comfortable w/ facial recognition b/c it seems less threatening.” He then added, “I think people would be creeped out if we’re totally accurate. When it’s wrong, humans get to be ‘superior.'”

Wrather’s second comment points to an intriguing psychological dynamic. Certain technologies generate a degree of anxiety about the relative status of human beings or about what exactly makes human beings “special”–call it post-humanist angst, if you like.

Of course, not all technologies generate this sort of angst. When it first appeared, the airplane was greeted with awe and a little battiness (consider alti-man). But as far as I know, it did not result in any widespread fears about the nature and status of human beings. The seemingly obvious reason for this is that flying is not an ability that has ever defined what it means to be a human being.

It seems, then, that anxiety about new technologies is sometimes entangled with shifting assumptions about the nature or dignity of humanity. In other words, the fear that machines, computers, or robots might displace human beings may or may not materialize, but it does tell us something about how human nature is understood.

Is it that new technologies disturb existing, tacit beliefs about what it means to be a human, or is it the case that these beliefs arise in response to a new perceived threat posed by technology? I’m not entirely sure, but some sort of dialectical relationship is involved.

A few examples come to mind, and they track closely to the evolution of labor in Western societies.

During the early modern period, perhaps owing something to the Reformation’s insistence on the dignity of secular work, the worth of a human being gets anchored to their labor, most of which is, at this point in history, manual labor. The dignity of the manual laborer is later challenged by mechanization during the 18th and 19th centuries, and this results in a series of protest movements, most famously that of the Luddites.

Eventually, a new consensus emerges around the dignity of factory work, and this is, in turn, challenged by the advent of new forms of robotic and computerized labor in the mid-twentieth century.

Enter the so-called knowledge worker, whose short-lived ascendency is presently threatened by advances in computers and AI.

I think this latter development helps explain our present fascination with creativity. It’s been over a decade since Richard Florida published The Rise of the Creative Class, but interest in and pontificating about creativity continues apace. What I’m suggesting is that this fixation on creativity is another recalibration of what constitutes valuable, dignified labor, which is also, less obviously perhaps, what is taken to constitute the value and dignity of the person. Manual labor and factory jobs give way to knowledge work, which now surrenders to creative work. As they say, nice work if you can get it.

Interestingly, each re-configuration not only elevated a new form of labor, but it also devalued the form of labor being displaced. Manual labor, factory work, even knowledge work, once accorded dignity and respect, are each reframed as tedious, servile, monotonous, and degrading just as they are being replaced. If a machine can do it, it suddenly becomes sub-human work.

(It’s also worth noting how displaced forms of work seem to re-emerge and regain their dignity in certain circles. I’m presently thinking of Matthew Crawford’s defense of manual labor and the trades. Consider as well this lecture by Richard Sennett, “The Decline of the Skills Society.”)

It’s not hard to find these rhetorical dynamics at play in the countless presently unfolding discussions of technology, labor, and what human beings are for. Take as just one example this excerpt from the recent New Yorker profile of venture capitalist Marc Andreessen (emphasis mine):

Global unemployment is rising, too—this seems to be the first industrial revolution that wipes out more jobs than it creates. One 2013 paper argues that forty-seven per cent of all American jobs are destined to be automated. Andreessen argues that his firm’s entire portfolio is creating jobs, and that such companies as Udacity (which offers low-cost, online “nanodegrees” in programming) and Honor (which aims to provide better and better-paid in-home care for the elderly) bring us closer to a future in which everyone will either be doing more interesting work or be kicking back and painting sunsets. But when I brought up the raft of data suggesting that intra-country inequality is in fact increasing, even as it decreases when averaged across the globe—America’s wealth gap is the widest it’s been since the government began measuring it—Andreessen rerouted the conversation, saying that such gaps were “a skills problem,” and that as robots ate the old, boring jobs humanity should simply retool. “My response to Larry Summers, when he says that people are like horses, they have only their manual labor to offer”—he threw up his hands. “That is such a dark and dim and dystopian view of humanity I can hardly stand it!”

As always, it is important to ask a series of questions:  Who’s selling what? Who stands to profit? Whose interests are being served? Etc. With those considerations in mind, it is telling that leisure has suddenly and conveniently re-emerged as a goal of human existence. Previous fears about technologically driven unemployment have ordinarily been met by assurances that different and better jobs would emerge. It appears that pretense is being dropped in favor of vague promises of a future of jobless leisure. So, it seems we’ve come full circle to classical estimations of work and leisure: all work is for chumps and slaves. You may be losing your job, but don’t worry, work is for losers anyway.

So, to sum up: Some time ago, identity and a sense of self-worth got hitched to labor and productivity. Consequently, each new technological displacement of human work appears to those being displaced as an affront to their dignity as human beings. Those advancing new technologies that displace human labor do so by demeaning existing work as beneath our humanity and promising more humane work as a consequence of technological change. While this is sometimes true–some work that human beings have been forced to perform has been inhuman–deployed as a universal truth, it is little more than rhetorical cover for a significantly more complex and ambivalent reality.

The Pleasures of Laborious Reading

I’m hoping to begin posting a bit more frequently soon. First up will be a follow-up to my last post about smart-homes. Until then, here’s a piece by Freddie deBoer well worth your time: “in order to read, start reading.”

DeBoer laments how difficult it has become for many of us to read works that demand sustained attention. This, of course, was the concern that animated Nick Carr’s well-known 2008 essay, “Is Google Making Us Stupid?”

To counteract this trend, deBoer recommends that we take up what he calls a “project book.” As he lays it out, this strikes me as good advice. Along the way, deBoer also makes a series of characteristically trenchant observations about the Internet and what we might call Internet culture. For instance:

“The internet has an immune system, a tendency to produce pushback and resistance to arguments not just about the drawbacks and downsides of endless internet connectivity, but to the very notion of moderation in our use. There is something about the habitual aspects of the internet, the “more, now, again” aspects, that couple with the vague sense of embarrassment we feel about constant internet use to produce a default posture of insecurity and defensiveness about these behaviors.”

Do read the whole thing. What deBoer challenges is, in my view, one of the great temptations of our age: the willingness to abandon or outsource all sorts of labor–intellectual, moral, emotional–the fruits and pleasures of which can be had no other way.

Taylorism on Digital Steroids

Here are reminders, if we needed them, that the role of technology in our world transcends artifacts, tools, and devices. It also entails, as Jacques Ellul well understood, a particular way of looking at the world and its problems (and, as Morozov has suggested, it constitutes certain conditions and phenomena as problems).

From Salon:

“Amazon equals Walmart in the use of monitoring technologies to track the minute-by-minute movements and performance of employees and in settings that go beyond the assembly line to include their movement between loading and unloading docks, between packing and unpacking stations, and to and from the miles of shelving at what Amazon calls its “fulfillment centers”—gigantic warehouses where goods ordered by Amazon’s online customers are sent by manufacturers and wholesalers, there to be shelved, packaged, and sent out again to the Amazon customer.

Amazon’s shop-floor processes are an extreme variant of Taylorism that Frederick Winslow Taylor himself, a near century after his death, would have no trouble recognizing. With this twenty-first-century Taylorism, management experts, scientific managers, take the basic workplace tasks at Amazon, such as the movement, shelving, and packaging of goods, and break down these tasks into their subtasks, usually measured in seconds; then rely on time and motion studies to find the fastest way to perform each subtask; and then reassemble the subtasks and make this “one best way” the process that employees must follow.”

From Business Insider:

“There’s a fine line between micromanaging and house arrest, and British grocery store chain Tesco […] seems determined to cross it. According to the Irish Independent, employees at the company’s Dublin distribution center are forced to wear armbands that measure their productivity so closely that the company even knows when they take a bathroom break.

The armbands, officially known as Motorola arm-mounted terminals, look like something between a Game Boy and Garmin GPS device. The terminals keep track of how quickly and competently employees unload and scan goods in the warehouse and gives them a grade. It also sets benchmarks for loading and unloading speed, which workers are expected to meet. The monitors can be turned off during workers’ lunch breaks, but anything else—bathroom trips, visits to a water fountain—reportedly lowers their productivity score.”

These folks would’ve been in trouble. They might also have had the good sense to revolt, being peasants and all.

Pieter Brueghel, The Harvesters (1565)