Evaluating the Promise of Technological Outsourcing

“It is crucial for a resilient democracy that we better understand how these powerful, ubiquitous websites are changing the way we think, interact and behave.” The websites in question are chiefly Google and Facebook. The admonition to better understand their impact on our thinking and civic deliberations comes from an article in The Guardian by Evan Selinger and Brett Frischmann, “Why it’s dangerous to outsource our critical thinking to computers.”

Selinger and Frischmann are the authors of one of the forthcoming books I am most eagerly anticipating, Being Human in the 21st Century, to be published by Cambridge University Press. I’ve frequently cited Selinger’s outsourcing critique of digital technology (e.g., here and here), which the authors will be expanding and deepening in this study. In short, Selinger has explored how a variety of apps and devices outsource labor that is essential or fundamental to our humanity. It’s an approach that immediately resonated with me, primed as I had been for it by Albert Borgmann’s work. (You can read about Borgmann in the latter link above and here.)

In this case, the crux of Selinger and Frischmann’s critique can be found in these two key paragraphs:

Facebook is now trying to solve a problem it helped create. Yet instead of using its vast resources to promote media literacy, or encouraging users to think critically and identify potential problems with what they read and share, Facebook is relying on developing algorithmic solutions that can rate the trustworthiness of content.

This approach could have detrimental, long-term social consequences. The scale and power with which Facebook operates means the site would effectively be training users to outsource their judgment to a computerised alternative. And it gives even less opportunity to encourage the kind of 21st-century digital skills – such as reflective judgment about how technology is shaping our beliefs and relationships – that we now see to be perilously lacking.

Their concern, then, is that we may be encouraged to outsource an essential skill to a device or application that promises to do the work for us. In this case, the skill we are tempted to outsource is a critical component of a healthy citizenry. As they put it, “Democracies don’t simply depend on well-informed citizens – they require citizens to be capable of exerting thoughtful, independent judgment.”

As I’m sure Selinger and Frischmann would agree, this outsourcing dynamic is one of the dominant features of the emerging techno-social landscape, and we should work hard to understand its consequences.

As some of you may remember, I’m fond of questions. They are excellent tools for thinking, including thinking about the ethical implications of technology. “Questioning is the piety of thought,” Heidegger once claimed in a famous essay about technology. With that in mind I’ll work my way to a few questions we can ask of outsourcing technologies.

My approach will take its point of departure from Marshall McLuhan’s Laws of Media, sometimes called the Four Effects or McLuhan’s tetrad. These four effects were offered by McLuhan as a complement to Aristotle’s Four Causes, and they were presented as a paradigm by which we might evaluate the consequences of both intellectual and material things, ideas and tools.

The four effects were Retrieval, Reversal, Obsolescence, and Enhancement. Here is a series of questions McLuhan and his son, Eric McLuhan, offered to unpack these four effects:

A. “What recurrence or RETRIEVAL of earlier actions and services is brought into play simultaneously by the new form? What older, previously obsolesced ground is brought back and inheres in the new form?”

B. “When pushed to the limits of its potential, the new form will tend to reverse what had been its original characteristics. What is the REVERSAL potential of the new form?”

C. “If some aspect of a situation is enlarged or enhanced, simultaneously the old condition or un-enhanced situation is displaced thereby. What is pushed aside or OBSOLESCED by the new ‘organ’?”

D. “What does the artefact ENHANCE or intensify or make possible or accelerate? This can be asked concerning a wastebasket, a painting, a steamroller, or a zipper, as well as about a proposition in Euclid or a law of physics. It can be asked about any word or phrase in any language.”

These are all useful questions, but for our purposes the focus will be on the third effect, Obsolescence. It’s in this class of effects that I think we can locate what Selinger calls digital outsourcing. I began by introducing all four, however, so that we wouldn’t be tempted to think that displacement or outsourcing is the only dynamic to which we should give our attention.

When McLuhan invites us to ask what a new technology renders obsolete, we may immediately imagine older technologies that are set aside in favor of the new. Following Borgmann, however, we can also frame the question as a matter of human labor or involvement. In other words, it is not only about older tools that we set aside but also about human faculties, skills, and subjective engagement with the world–these, too, can be displaced or outsourced by new tools. The point, of course, is not to avoid every form of technological displacement; this would be impossible and undesirable. Rather, what we need is a better way of thinking about and evaluating these displacements so that we might, when possible, make wise choices about our use of technology.

So we can begin to elaborate McLuhan’s third effect with this question:

1. What kind of labor does the tool/device/app displace? 

This question yields at least five possible responses:

a. Physical labor, the work of the body
b. Cognitive labor, the work of the mind
c. Emotional labor, the work of the heart
d. Ethical labor, the work of the conscience
e. Volitional labor, the work of the will

The schema implied by these five categories is, of course, like all such schemas, too neat. Take it as a heuristic device.

Other questions follow that help clarify the stakes. After all, what we’re after is not only a taxonomy but also a framework for evaluation.

2. What is the specific end or goal at which the displaced labor is aimed?

In other words, what am I trying to accomplish by using the technology in question? But the explicit objective I set out to achieve may not be the only effect worth considering; there are implicit effects as well. Some of these implicit effects may be subjective and others may be social; in either case they are not always evident and may, in fact, be difficult to perceive. For example, in using GPS, navigating from Point A to Point B is the explicit objective. However, the use of GPS may also shape my subjective experience of place, and this may carry political implications. So we should also consider a corollary question:

2a. Are there implicit effects associated with the displaced labor?

Consider the work of learning: if the work of learning is ultimately subordinate to becoming a certain kind of person, then it matters very much how we go about learning. This is because the manner in which we go about acquiring knowledge constitutes a kind of practice that, over the long haul, shapes our character and disposition in non-trivial ways. Acquiring knowledge through apprenticeship, for example, shapes people in one way, acquiring knowledge through extensive print reading in another, and through web-based learning in still another. The practice which constitutes our learning, if we are to learn by it, will instill certain habits, virtues, and, potentially, vices — it will shape the kind of person we are becoming.

3. Is the labor we are displacing essential or accidental to the achievement of that goal?

As I’ve written before, when we think of ethical and emotional labor, it’s hard to separate the labor itself from the good that is sought or the end that is pursued. For example, someone who pays another person to perform acts of charity on their behalf has undermined part of what might make such acts virtuous. An objective outcome may have been achieved, but at the expense of the subjective experience that would constitute the action as ethically virtuous.

A related question arises when we remember the implicit effects we discussed above:

3a. Is the labor essential or accidental to the implicit effects associated with the displaced labor?

4. What skills are sustained by the labor being displaced? 

4a. Are these skills valuable for their own sake and/or transferable to other domains?

These two questions seem more straightforward, so I will say less about them. The key point is essentially the one made by Selinger and Frischmann in the article with which we began: the kind of critical thinking that democracies require of their citizens should be actively cultivated. Outsourcing that work to an algorithm may, in fact, weaken the very skill it seeks to support.

These questions should help us think more clearly about the promise of technological outsourcing. They may also help us to think more clearly about what we have been doing all along. After all, new technologies often cast old experiences in new light. Even when we are wary or critical of the technologies in question, we may still find that their presence illuminates aspects of our experience by inviting us to think about what we had previously taken for granted.

The Ethics of Information Literacy

Yesterday, I caught Derek Thompson of The Atlantic discussing the problem of “fake news” on NPR’s Here and Now. It was all very sensible, of course. Thompson impressed upon the audience the importance of media literacy. He urged listeners to examine the provenance of the information they encounter. He also cited an article that appeared in US News & World Report about teaching high schoolers how to critically evaluate online information. The article, drawing on the advice of teachers, presented three keys: 1. Teach teens to question the source, 2. Help students identify credible sources, and 3. Give students regular opportunities to practice vetting information.

This is all fine. I suspect the problem is not limited to teens–an older cohort appears just as susceptible, if not more so, to “fake news”–but whatever the case, I spend a good deal of time in my classes doing something like what Thompson recommended. In fact, on more than one occasion, I’ve claimed that among the most important skills teachers can impart to students is the ability to discern the credible from the incredible and the serious from the frivolous. (I suspect the latter distinction is the more important and the more challenging to make.)

But we mustn’t fall into the trap of believing that this is simply a problem of the intellect to be solved with a few pointers and a handful of strategies. There is an ethical dimension to the problem as well because desire and virtue bear upon knowing and understanding. Thompson himself alludes to this ethical dimension, but he speaks of it mostly in the language of cognitive psychology–it is the problem of confirmation bias. This is a useful, but perhaps too narrow, way of understanding the problem. However we frame it, though, the key is this: we must learn to question more than our sources; we must also question ourselves.

I suggest a list of three questions for students, and by implication all of us, to consider. The first two are of the standard sort: 1. Who wrote this? and 2. Why should I trust them?

It would be foolish, in my view, to pretend that any of us can be independent arbiters of the truthfulness of claims made in every discipline or field of knowledge. It is unreasonable to expect that we would all become experts in every field about which we might be expected to have an informed opinion. Consequently, it is better to frame critical examination of sources as a matter of trustworthiness. Can I determine whether or not I have cause to trust the author or the organization that has produced the information I am evaluating? Of course, trustworthiness does not entail truthfulness or accuracy. When trustworthy sources conflict, for instance, we may need to make a judgment call or we might find ourselves unable to arbitrate the competing claims. It inevitably gets complicated.

The third question, however, gets at the ethical dimension: 3. Do I want this to be true?

This question is intended as a diagnostic tool. The goal is to reveal, so far as we might become self-aware about such things, our biases and sympathies. There are three possible answers: yes, no, and I don’t care. Each answer entails its own challenge to discernment. If I want something to be true, and there may be various reasons for this, then I need to do my best to reposition myself as a skeptical critic. If I do not want something to be true, then I need to do my best to reposition myself as a sympathetic advocate. A measure of humility and courage is required in each case.

If I do not care, then there is another sort of problem to overcome. In this case, I may be led astray by a lack of care. I may believe what I first encounter because I am not sufficiently motivated to press further. Whereas it is something like passion or pride that we must guard against when we want to believe or disbelieve a claim, apathy is the problem here.

When I have taught classes on ethics, it has seemed to me that the critical question is not, as it is often assumed to be, “What is the right thing to do?” Rather, the critical question is this: “Why should someone desire to learn what is right and then do it?”

Likewise with the problem of information literacy. It is one thing to be presented with a set of skills and strategies to make us more discerning and critical. It is another, and more important, thing to care about the truth at all, to care more about the truth than about being right.

In short, the business of teaching media literacy or critical thinking skills amounts to a kind of moral education. In a characteristically elaborate footnote in “Authority and American Usage,” David Foster Wallace got at this point, although from the perspective of the writer. In the body of his essay, Wallace refers to “the error that Freshman Composition classes spend all semester trying to keep kids from making—the error of presuming the very audience-agreement that it is really their rhetorical job to earn.” The footnote to this sentence adds the following, emphasis mine:

Helping them eliminate the error involves drumming into student writers two big injunctions: (1) Do not presume that the reader can read your mind — anything you want the reader to visualize or consider or conclude, you must provide; (2) Do not presume that the reader feels the same way that you do about a given experience or issue — your argument cannot just assume as true the very things you’re trying to argue for. Because (1) and (2) are so simple and obvious, it may surprise you to know that they are actually incredibly hard to get students to understand in such a way that the principles inform their writing. The reason for the difficulty is that, in the abstract, (1) and (2) are intellectual, whereas in practice they are more things of the spirit. The injunctions require of the student both the imagination to conceive of the reader as a separate human being and the empathy to realize that this separate person has preferences and confusions and beliefs of her own, p/c/b’s that are just as deserving of respectful consideration as the writer’s. More, (1) and (2) require of students the humility to distinguish between a universal truth (‘This is the way things are, and only an idiot would disagree’) and something that the writer merely opines (‘My reasons for recommending this are as follows:’) . . . . I therefore submit that the hoary cliché ‘Teaching the student to write is teaching the student to think’ sells the enterprise way short. Thinking isn’t even half of it.

I take Wallace’s counsel here to be, more or less, the mirror image of the counsel I’m offering to us as readers.

Finally, I should say that all of the preceding does not begin to touch on much of what we would also need to consider when we’re thinking about media literacy. Most of the above deals with the matter of evaluating content, which is obviously not unimportant, and textual content at that. However, media literacy in the fullest sense would also entail an understanding of more subtle effects arising from the nature of the various tools we use to communicate content, not to mention the economic and political factors conditioning the production and dissemination of information.



Humanist Technology Criticism

“Who are the humanists, and why do they dislike technology so much?”

That’s what Andrew McAfee wants to know. McAfee, formerly of Harvard Business School, is now a researcher at MIT and the author, with Erik Brynjolfsson, of The Second Machine Age: Work, Progress, and Prosperity in a Time of Brilliant Technologies. At his blog, hosted by the Financial Times, McAfee expressed his curiosity about the use of the terms humanism or humanist in “critiques of technological progress.” “I’m honestly not sure what they mean in this context,” McAfee admitted.

Humanism is a rather vague and contested term with a convoluted history, so McAfee asks a fair question–even if his framing is rather slanted. I suspect that most of the critics he has in mind would take issue with the second half of McAfee’s compound query. One of the examples he cites, after all, is Jaron Lanier, who, whatever else we might say of him, can hardly be described as someone who “dislikes technology.”

That said, what response can we offer McAfee? It would be helpful to sketch a history of the network of ideas that have been linked to the family of words that includes humanism, humanist, and the humanities. The journey would take us from the Greeks and the Romans, through (and not excluding) the medieval period, to the Renaissance and beyond. But that would be a much larger project, and I wouldn’t be your best guide. Suffice it to say that near the end of such a journey, we would come to find the idea of humanism splintered and in retreat; indeed, in some quarters, we would find it rejected and despised.

But if we forego the more detailed history of the concept, can we not, nonetheless, offer some clarifying comments regarding the more limited usage that has perplexed McAfee? Perhaps.

I’ll start with an observation made by Wilfred McClay in a 2008 essay in the Wilson Quarterly, “The Burden of the Humanities.” McClay suggested that we define the humanities as “the study of human things in human ways.”¹ If so, McClay continues, “then it follows that they function in culture as a kind of corrective or regulative mechanism, forcing upon our attention those features of our complex humanity that the given age may be neglecting or missing.” Consequently, we have a hard time defining the humanities–and, I would add, humanism–because “they have always defined themselves in opposition.”

McClay provides a brief historical sketch showing that the humanities have, at different historical junctures, defined themselves by articulating a vision of human distinctiveness in opposition to the animal, the divine, and the rational-mechanical. “What we are as humans,” McClay adds, “is, in some respects, best defined by what we are not: not gods, not angels, not devils, not machines, not merely animals.”

In McClay’s historical sketch, humanism and the humanities have lately sought to articulate an understanding of the human in opposition to the “rational-mechanical,” or, in other words, in opposition to the technological, broadly speaking. In McClay’s telling, this phase of humanist discourse emerges in early nineteenth century responses to the Enlightenment and industrialization. Here we have the beginnings of a response to McAfee’s query. The deployment of humanist discourse in the context of technology criticism is not exactly a recent development.

There may have been earlier voices of which I am unaware, but we may point to Thomas Carlyle’s 1829 essay, “Signs of the Times,” as an ur-text of the genre.² Carlyle dubbed his era the “Mechanical Age.” “Men are grown mechanical in head and heart, as well as in hand,” Carlyle complained. “Not for internal perfection,” he added, “but for external combinations and arrangements, for institutions, constitutions, for Mechanism of one sort or another, do they hope and struggle.”

Talk of humanism in relation to technology also flourished in the early and mid-twentieth century. Alan Jacobs, for instance, is currently working on a book project that examines the response of a set of early 20th century Christian humanists, including W.H. Auden, Simone Weil, and Jacques Maritain, to total war and the rise of technocracy. “On some level each of these figures,” Jacobs explains, “intuited or explicitly argued that if the Allies won the war simply because of their technological superiority — and then, precisely because of that success, allowed their societies to become purely technocratic, ruled by the military-industrial complex — their victory would become largely a hollow one. Each of them sees the creative renewal of some form of Christian humanism as a necessary counterbalance to technocracy.”

In a more secular vein, Paul Goodman asked in 1969, “Can Technology Be Humane?” In his article (h/t Nicholas Carr), Goodman observed that popular attitudes toward technology had shifted in the post-war world. Science and technology could no longer claim the “unblemished and justified reputation as a wonderful adventure” they had enjoyed for the previous three centuries. “The immediate reasons for this shattering reversal of values,” in Goodman’s view, “are fairly obvious.

“Hitler’s ovens and his other experiments in eugenics, the first atom bombs and their frenzied subsequent developments, the deterioration of the physical environment and the destruction of the biosphere, the catastrophes impending over the cities because of technological failures and psychological stress, the prospect of a brainwashed and drugged 1984. Innovations yield diminishing returns in enhancing life. And instead of rejoicing, there is now widespread conviction that beautiful advances in genetics, surgery, computers, rocketry, or atomic energy will surely only increase human woe.”

For his part, Goodman advocated a more prudential and, yes, humane approach to technology. “Whether or not it draws on new scientific research,” Goodman argued, “technology is a branch of moral philosophy, not of science.” “As a moral philosopher,” Goodman continued in a remarkable passage, “a technician should be able to criticize the programs given him to implement. As a professional in a community of learned professionals, a technologist must have a different kind of training and develop a different character than we see at present among technicians and engineers. He should know something of the social sciences, law, the fine arts, and medicine, as well as relevant natural sciences.” The whole essay is well worth your time. I bring it up merely as another instance of the genre of humanistic technology criticism.

More recently, in an interview cited by McAfee, Jaron Lanier has advocated the revival of humanism in relation to the present technological milieu. “I’m trying to revive or, if you like, resuscitate, or rehabilitate the term humanism,” Lanier explained before being interrupted by a bellboy cum Kantian, who broke into the interview to say, “Humanism is humanity’s adulthood. Just thought I’d throw that in.” When he resumed, Lanier expanded on what he means by humanism:

“And pragmatically, if you don’t treat people as special, if you don’t create some sort of a special zone for humans—especially when you’re designing technology—you’ll end up dehumanising the world. You’ll turn people into some giant, stupid information system, which is what I think we’re doing. I agree that humanism is humanity’s adulthood, but only because adults learn to behave in ways that are pragmatic. We have to start thinking of humans as being these special, magical entities—we have to mystify ourselves because it’s the only way to look after ourselves given how good we’re getting at technology.”

In McAfee’s defense, this is an admittedly murky vision. I couldn’t tell you what exactly Lanier is proposing when he says that we have to “mystify ourselves.” Earlier in the interview, however, he gave an example that might help us understand his concerns. Discussing Google Translate, he observed the following: “What people don’t understand is that the translation is really just a mashup of pre-existing translations by real people. The current set up of the internet trains us to ignore the real people who did the first translations, in order to create the illusion that there is an electronic brain. This idea is terribly damaging. It does dehumanise people; it does reduce people.”

So Lanier’s complaint here seems to be that this particular configuration of technology obscures an essential human element. Furthermore, Lanier is concerned that people are reduced in this process. This is, again, a murky concept, but I take it to mean that some important element of what constitutes the human is being ignored or marginalized or suppressed. Like the humanities in McClay’s analysis, Lanier’s humanism draws our attention to “those features of our complex humanity that the given age may be neglecting or missing.”

One last example. Some years ago, historian of science George Dyson wondered whether the cost of machines that think would be people who don’t. Dyson’s quip suggests the problem that Evan Selinger has dubbed the outsourcing of our humanity. We outsource our humanity when we allow an app or device to do for us what we ought to be doing for ourselves (naturally, that ought needs to be established). Selinger has developed his critique in response to a variety of apps but especially those that outsource what we may call our emotional labor.

I think it fair to include the outsourcing critique within the broader genre of humanist technology criticism because it assumes something about the nature of our humanity and finds that certain technologies are complicit in its erosion. Not surprisingly, in a tweet sharing McAfee’s post, Selinger indicated that he and Brett Frischmann had plans to co-author a book analyzing the concept of dehumanizing technology in order to bring clarity to its application. I have no doubt that Selinger and Frischmann’s work will advance the discussion.

While McAfee was puzzled by humanist discourse with regard to technology criticism, others have been overtly critical. Evgeny Morozov recently complained that most technology critics default to humanist/anti-humanist rhetoric in their critiques in order to evade more challenging questions about politics and economics. For my part, I don’t see why the two approaches cannot each contribute to a broader understanding of technology and its consequences while also informing our personal and collective responses.

Of course, while Morozov is critical of the humanizing/dehumanizing approach to technology on more or less pragmatic grounds–it is ultimately ineffective in his view–others oppose it on ideological or theoretical grounds. For these critics, humanism is part of the problem, not the solution. Technology has been all too humanistic, or anthropocentric, and has consequently wreaked havoc on the global environment. Or, they may argue that any deployment of humanism as an evaluative category also implies a policing of the boundaries of the human, with discriminatory consequences. Others will argue that it is impossible to make a hard ontological distinction among the natural, the human, and the technological; we have always been cyborgs, in their view. Still others argue that there is no compelling reason to privilege the existing configuration of what we call the human. Humanity is a work in progress, and technology will usher in a brave new post-human world.

Already, I’ve gone on longer than a blog post should, so I won’t comment on each of those objections to humanist discourse. Instead, I’ll leave you with a few considerations about what humanist technology criticism might entail. I’ll do so while acknowledging that these considerations undoubtedly imply a series of assumptions about what it means to be a human being and what constitutes human flourishing.

That said, I would suggest that a humanist critique of technology entails a preference for technology that (1) operates at a humane scale, (2) works toward humane ends, (3) allows for the fullest possible flourishing of a person’s capabilities, (4) does not obfuscate moral responsibility, and (5) acknowledges certain limitations to what we might quaintly call the human condition.

I realize these all need substantial elaboration and support–the fifth point is especially contentious–but I’ll leave it at that for now. Take that as a preliminary sketch. I’ll close, finally, with a parting observation.

A not insubstantial element within the culture that drives technological development is animated by what can only be described as a thoroughgoing disgust with the human condition, particularly its embodied nature. Whether or not we credit the wildest dreams of the Singularitarians, Extropians, and Post-humanists, their disdain, as it finds expression in a posture toward technological power, is reason enough for technology critics to strive for a humanist critique that acknowledges and celebrates the limitations inherent in our frail, yet wondrous humanity.

This gratitude and reverence for the human as it is presently constituted, in all its wild and glorious diversity, may strike some as an unpalatably religious stance to assume. And, indeed, for many of us it stems from a deeply religious understanding of the world we inhabit, a world that is, as Pope Francis recently put it, “our common home.” Perhaps, though, even the secular citizen may be troubled by, as Hannah Arendt has put it, such a “rebellion against human existence as it has been given, a free gift from nowhere (secularly speaking).”

________________________

¹ Here’s a fuller expression of McClay’s definition from earlier in the essay: “The distinctive task of the humanities, unlike the natural sciences and social sciences, is to grasp human things in human terms, without converting or reducing them to something else: not to physical laws, mechanical systems, biological drives, psychological disorders, social structures, and so on. The humanities attempt to understand the human condition from the inside, as it were, treating the human person as subject as well as object, agent as well as acted-upon.”

² Shelley’s “A Defence of Poetry” might qualify.

Do Things Want?

Alan Jacobs’ 79 Theses on Technology were offered in the spirit of a medieval disputation, and they succeeded in spurring a number of stimulating responses in a series of essays posted to the Infernal Machine over the last two weeks. Along with my response to Jacobs’ provocations, I wanted to engage with a debate between Jacobs and Ned O’Gorman about whether or not we may meaningfully speak of what technologies want. Here’s a synopsis of the exchange with my own commentary along the way.

O’Gorman’s initial response focused on the following theses from Jacobs:

40. Kelly tells us “What Technology Wants,” but it doesn’t: We want, with technology as our instrument.
41. The agency that in the 1970s philosophers & theorists ascribed to language is now being ascribed to technology. These are evasions of the human.
42. Our current electronic technologies make competent servants, annoyingly capricious masters, and tragically incompetent gods.
43. Therefore when Kelly says, “I think technology is something that can give meaning to our lives,” he seeks to promote what technology does worst.
44. We try to give power to our idols so as to be absolved of the responsibilities of human agency. The more they have, the less we have.

46. The cyborg dream is the ultimate extension of this idolatry: to erase the boundaries between our selves and our tools.

O’Gorman framed these theses by saying that he found it “perplexing” that Jacobs “is so seemingly unsympathetic to the meaningfulness of things, the class to which technologies belong.” I’m not sure, however, that Jacobs was denying the meaningfulness of things; rather, as I read him, he is contesting the claim that it is from technology that our lives derive their meaning. That may seem a fine distinction, but I think it is an important one. In any case, a little clarification about what exactly “meaning” entails may go a long way toward resolving that aspect of the discussion.

A little further on, O’Gorman shifts to the question of agency: “Our technological artifacts aren’t wholly distinct from human agency; they are bound up with it.” It is on this ground that the debate mostly unfolds, although there is more than a little slippage between the question of meaning and the question of agency.

O’Gorman appealed to Mary Carruthers’ fascinating study of the place of memory in medieval culture, The Book of Memory: A Study of Memory in Medieval Culture, to support his claim, but I’m not sure the passage he cites does so. He is seeking to establish, as I read him, two claims. First, that technologies are things and things are meaningful. Second, that we may properly attribute agency to technology/things. Now here’s the passage he cites from Carruthers’ work (brackets and ellipses are O’Gorman’s):

“[In the middle ages] interpretation is not attributed to any intention of the man [the author]…but rather to something understood to reside in the text itself.… [T]he important “intention” is within the work itself, as its res, a cluster of meanings which are only partially revealed in its original statement…. What keeps such a view of interpretation from being mere readerly solipsism is precisely the notion of res—the text has a sense within it which is independent of the reader, and which must be amplified, dilated, and broken-out from its words….”

“Things, in this instance manuscripts,” O’Gorman adds, “are indeed meaningful and powerful.” But in this instance, the thing (res) in view is not, in fact, the manuscripts. As Carruthers explains at various other points in The Book of Memory, the res in this context is not a material thing, but something closer to the pre-linguistic essence or idea or concept that the written words convey. It is an immaterial thing.

That said, there are interesting studies that do point to the significance of materiality in medieval context. Ivan Illich’s In the Vineyard of the Text, for example, dwells at length on medieval reading as a bodily experience, an “ascetic discipline focused by a technical object.” Then there’s Caroline Bynum’s fascinating Christian Materiality: An Essay on Religion in Late Medieval Europe, which explores the multifarious ways matter was experienced and theorized in the late middle ages.

Bynum concludes that “current theories that have mostly been used to understand medieval objects are right to attribute agency to objects, but it is an agency that is, in the final analysis, both too metaphorical and too literal.” She adds that insofar as modern theorizing “takes as self-evident the boundary between human and thing, part and whole, mimesis and material, animate and inanimate,” it may be usefully unsettled by an encounter with medieval theories and praxis, which “operated not from a modern need to break down such boundaries but from a sense that they were porous in some cases, nonexistent in others.”

Of course, taking up Bynum’s suggestion does not entail a re-imagining of our smartphone as a medieval relic, although one suspects that there is but a marginal difference in the degree of reverence granted to both objects. The question is still how we might best understand and articulate the complex relationship between our selves and our tools.

In his reply to O’Gorman, Jacobs focused on O’Gorman’s penultimate paragraph:

“Of course technologies want. The button wants to be pushed; the trigger wants to be pulled; the text wants to be read—each of these want as much as I want to go to bed, get a drink, or get up out of my chair and walk around, though they may want in a different way than I want. To reserve ‘wanting’ for will-bearing creatures is to commit oneself to the philosophical voluntarianism that undergirds technological instrumentalism.”

It’s an interesting feature of the exchange from this point forward that O’Gorman and Jacobs at once emphatically disagree, and yet share very similar concerns. The disagreement is centered chiefly on the question of whether or not it is helpful or even meaningful to speak of technologies “wanting.” Their broad agreement, as I read their exchange, is about the inadequacy of what O’Gorman calls “philosophical voluntarianism” and “technological instrumentalism.”

In other words, if you begin by assuming that the most important thing about us is our ability to make rational and unencumbered choices, then you’ll also assume that technologies are neutral tools over which we can achieve complete mastery.

If O’Gorman means what I think he means by this–and what Jacobs takes him to mean–then I share his concerns as well. We cannot think well about technology if we think about technology as mere tools that we use for good or evil. This is the “guns don’t kill people, people kill people” approach to the ethics of technology, and it is, indeed, inadequate as a way of thinking about the ethical status of artifacts, as I’ve argued repeatedly.

Jacobs grants these concerns, but, with a nod to the Borg Complex, he also thinks that we do not help ourselves in facing them if we talk about technologies “wanting.” Here’s Jacobs’ conclusion:

“It seems that [O’Gorman] thinks the dangers of voluntarism are so great that they must be contested by attributing what can only be a purely fictional agency to tools, whereas I believe that the conceptual confusion this creates leads to a loss of a necessary focus on human responsibility, and an inability to confront the political dimensions of technological modernity.”

This seems basically right to me, but it prompted a second reply from O’Gorman that brought some further clarity to the debate. O’Gorman identified three distinct “directions” his disagreement with Jacobs takes: rhetorical, ontological, and ethical.

He frames his discussion of these three differences by insisting that technologies are meaningful by virtue of their “structure of intention,” which entails a technology’s affordances and the web of practices and discourse in which the technology is embedded. So far, so good, although I don’t think intention is the best choice of words. From here O’Gorman goes on to show why he thinks it is “rhetorically legitimate, ontologically plausible, and ethically justified to say that technologies can want.”

Rhetorically, O’Gorman appears to be advocating a Wittgenstein-ian, “look and see” approach. Let’s see how people are using language before we rush to delimit a word’s semantic range. To a certain degree, I can get behind this. I’ve advocated as much when it comes to the way we use the word “technology,” itself a term that abstracts and obfuscates. But I’m not sure that once we look we will find much. While our language may animate or personify our technology, I’m less sure that we typically speak about technology “wanting” anything. We do not ordinarily say things like “my iPhone wants to be charged,” “the car wants to go out for a drive,” “the computer wants to play.” Then again, I can think of an exception or two. I have heard, for example, someone explain to an anxious passenger that the airplane “wants” to stay in the air. The phrase, “what technology wants,” owes much of its currency, such as it is, to the title of Kevin Kelly’s book, and I’m pretty sure Kelly means more by it than what O’Gorman might be prepared to endorse.

Ontologically, O’Gorman is “skeptical of attempts to tie wanting to will because willfulness is only one kind of wanting.” “What do we do with instinct, bodily desires, sensations, affections, and the numerous other forms of ‘wanting’ that do not seem to be a product of our will?” he wonders. Fair enough, but all of the examples he cites are connected with beings that are, in a literal sense, alive. Of course I can’t attribute all of my desires to my conscious will; sure, my dog wants to eat, and maybe in some sense my plant wants water. But there’s still a leap involved in saying that my clock wants to tell time. Wanting may not be neatly tied to willing, but I don’t see how it is not tied to sentience.

There’s one other point worth making at this juncture. I’m quite sympathetic to what is basically a phenomenological account of how our tools quietly slip into our subjective, embodied experience of the world. This is why I can embrace so much of O’Gorman’s case. Thinking back many years, I can distinctly remember a moment when I held a baseball in my hand and reflected on how powerfully I felt the urge to throw it, even though I was standing inside my home. This feeling is, I think, what O’Gorman wants us to recognize. The baseball wanted to be thrown! But how far does this kind of phenomenological account take us?

I think it runs into limits when we talk about technologies that do not enter quite so easily into the circuit of mind, body, and world. The case for the language of wanting is strongest the closer I am to my body; it weakens the further away we get from it. Even if we grant that the baseball in hand feels like it wants to be thrown, what exactly does the weather satellite in orbit want? I think this strongly suggests the degree to which the wanting is properly ours, even while acknowledging the degree to which it is activated by objects in our experience.

Finally, O’Gorman thinks that it is “perfectly legitimate and indeed ethically good and right to speak of technologies as ‘wanting.’” He believes this to be so because “wanting” is not only a matter of willing; it is “more broadly to embody a structure of intention within a given context or set of contexts.” Further, “Will-bearing and non-will-bearing things, animate and inanimate things, can embody such a structure of intention.”

“It is good and right,” O’Gorman insists, “to call this ‘wanting’ because ‘wanting’ suggests that things, even machine things, have an active presence in our life—they are intentional” and, what’s more, their “active presence cannot be neatly traced back to their design and, ultimately, some intending human.”

I agree with O’Gorman that the ethical considerations are paramount, but I’m finally unpersuaded that we are on firmer ground when we speak of technologies wanting, even though I recognize the undeniable importance of the dynamics that O’Gorman wants to acknowledge by speaking so.

Consider what O’Gorman calls the “structure of intention.” I’m not sure intention is the best word to use here. Intentionality resides in the subjective experience of the “I,” but it is true, as phenomenologists have always recognized, that intentionality is not unilaterally directed by the self-consciously willing “I.” It has conscious and non-conscious dimensions, and it may be beckoned and solicited by the world that it simultaneously construes through the workings of perception.

I think we can get at what O’Gorman rightly wants us to acknowledge without attributing “wanting” to objects. We may say, for instance, that objects activate our wanting as they are intended to do by design and also in ways that are unintended by any person. But it’s best to think of this latter wanting as an unpredictable surplus of human intentionality rather than to posit a non-human source of wanting. The wanting is always mine, but it may be prompted, solicited, activated, encouraged, fostered, etc., by aspects of the non-human world. So, we may correctly talk about a structure of desire that incorporates non-human aspects of the world and thereby acknowledge the situated nature of our own wanting. Within certain contexts, if we were so inclined, we may even call it a structure of temptation.

To fight the good fight, as it were, we must acknowledge how technology’s consequences exceed and slip loose of our cost/benefit analysis and our rational planning and our best intentions. We must take seriously how the use of our tools shapes our perception of the world and both enables and constrains our thinking and acting. But talk about what technology wants will ultimately obscure moral responsibility. “What the machine/algorithm wanted” too easily becomes the new “I was just following orders.” I believe this to be true because I believe that we have a proclivity to evade responsibility. Best, then, not to allow our language to abet our evasions.

On the Moral Implications of Willful Acts of Virtual Harm

Perhaps you’ve seen the recent clip in which a dog-like robot developed by Boston Dynamics, a Google-owned robotics company, receives a swift kick and manages to maintain its balance.

I couldn’t resist tweeting that clip with this text: “The mechanical Hound slept but did not sleep, lived but did not live in … a dark corner of the fire house.” That line, of course, is from Ray Bradbury’s Fahrenheit 451, in which the mechanical Hound is deployed to track down dissidents. The apt association was first suggested to me a few months back by a reader’s email occasioned by an earlier Boston Dynamics robot.

My glib tweet aside, many have found the clip disturbing for a variety of reasons. One summary of the concerns can be found in a CNN piece by Phoebe Parke titled, “Is It Cruel to Kick a Robot Dog?” (via Mary Chayko). That question reminded me of a 2013 essay by Richard Fisher posted at BBC Future, “Is It OK to Torture or Murder a Robot?”

Both articles discuss our propensity to anthropomorphize non-human entities and artifacts. Looked at in that way, the ethical concerns seem misplaced, if not altogether silly. So, according to one AI researcher quoted by Parke, “The only way it’s unethical is if the robot could feel pain.” A robot cannot feel pain; thus, there is nothing unethical about the way we treat robots.

But is that really all that needs to be said about the ethical implications?

Consider these questions raised by Fisher:

“To take another example: if a father is torturing a robot in front of his 4-year-old son, would that be acceptable? The child can’t be expected to have the sophisticated understanding of adults. Torturing a robot teaches them that acts that cause suffering – simulated or not – are OK in some circumstances.

“Or to take it to an extreme: imagine if somebody were to take one of the childlike robots already being built in labs, and sell it to a paedophile who planned to live out their darkest desires. Should a society allow this to happen?

“Such questions about apparently victimless evil are already playing out in the virtual world. Earlier this year, the New Yorker described the moral quandaries raised when an online forum discussing Grand Theft Auto asked players if rape was acceptable inside the game. One replied: ‘I want to have the opportunity to kidnap a woman, hostage her, put her in my basement and rape her everyday, listen to her crying, watching her tears.’ If such unpleasant desires could be actually lived with a physical robotic being that simulates a victim, it may make it more difficult to tolerate.”

These are challenging questions that, to my mind, expose the inadequacy of thinking about the ethics of technology, or ethics more broadly, from a strictly instrumental perspective.

Recently, philosopher Charlie Huenemann offered a similarly provocative reflection on killing dogs in Minecraft. His reflections led him to consider the moral standing of the attachments we form to objects, whether they be material or virtual, in a manner I found helpful. Here are his concluding paragraphs:

The point is that we form attachments to things that may have no feelings or rights whatsoever, but by forming attachments to them, they gain some moral standing. If you really care about something, then I have at least some initial reason to be mindful of your concern. (Yes, lots of complications can come in here – “What if I really care for the fire that is now engulfing your home?” – but the basic point stands: there is some initial reason, though not necessarily a final or decisive one.) I had some attachment to my Minecraft dogs, which is why I felt sorry when they died. Had you come along in a multiplayer setting and chopped them to death for the sheer malicious pleasure of doing so, I could rightly claim that you did something wrong.

Moreover, we can also speak of attachments – even to virtual objects – that we should form, just as part of being good people. Imagine if I were to gain a Minecraft dog that accompanied me on many adventures. I even offer it rotten zombie flesh to eat on several occasions. But then one day I tire of it and chop it into nonexistence. I think most of [us] would be surprised: “Why did you do that? You had it a long time, and even took care of it. Didn’t you feel attached to it?” Suppose I say, “No, no attachment at all”. “Well, you should have”, we would mumble. It just doesn’t seem right not to have felt some attachment, even if it was overcome by some other concern. “Yes, I was attached to it, but it was getting in the way too much”, would have been at least more acceptable as a reply. (“Still, you didn’t have to kill it. You could have just clicked on it to sit forever….”)

The first of my 41 questions about the ethics of technology was a simple one: What sort of person will the use of this technology make of me?

It’s a simple question, but one we often fail to ask because we assume that ethical considerations apply only to what people do with technology, to the acts themselves. It is a question, I think, that helps us imagine the moral implications of willful acts of virtual harm.

Of course, it is also worth asking, “What sort of person does my use of this technology reveal me to be?”