Machines, Work, and the Value of People

Late last month, Microsoft released a “bot” that guesses your age based on an uploaded picture. The bot tended to be only marginally accurate and sometimes hilariously (or disconcertingly) wrong. What’s more, people quickly began having some fun with the program by uploading faces of actors playing fictional characters, such as Yoda or Gandalf. My favorite was Ian Bogost’s submission:

Shortly after the How Old bot had its fleeting moment of virality, Nathan Jurgenson tweeted the following:

This was an interesting observation, and it generated a few interesting replies. Jurgenson himself added, “much of the bigdata/algorithm debates miss how poor these often perform. many critiques presuppose & reify their untenable positivism.” He summed up this line of thought with this tweet: “so much ‘tech criticism’ starts first with uncritically buying all of the hype silicon valley spits out.”

Let’s pause here for a moment. All of this is absolutely true. Yet … it’s not all hype, not necessarily anyway. Let’s bracket the more outlandish claims made by the singularity crowd, of course. But take facial recognition software, for instance. It doesn’t strike me as wildly implausible that in the near future facial recognition programs will achieve a rather striking degree of accuracy.

Along these lines, I found Kyle Wrather’s replies to Jurgenson’s tweet particularly interesting. First, Wrather noted, “[How Old Bot] being wrong makes people more comfortable w/ facial recognition b/c it seems less threatening.” He then added, “I think people would be creeped out if we’re totally accurate. When it’s wrong, humans get to be ‘superior.’”

Wrather’s second comment points to an intriguing psychological dynamic. Certain technologies generate a degree of anxiety about the relative status of human beings or about what exactly makes human beings “special”—call it post-humanist angst, if you like.

Of course, not all technologies generate this sort of angst. When it first appeared, the airplane was greeted with awe and a little battiness (consider alti-man). But as far as I know, it did not result in any widespread fears about the nature and status of human beings. The seemingly obvious reason for this is that flying is not an ability that has ever defined what it means to be a human being.

It seems, then, that anxiety about new technologies is sometimes entangled with shifting assumptions about the nature or dignity of humanity. In other words, the fear that machines, computers, or robots might displace human beings may or may not materialize, but it does tell us something about how human nature is understood.

Is it that new technologies disturb existing, tacit beliefs about what it means to be a human, or is it the case that these beliefs arise in response to a new perceived threat posed by technology? I’m not entirely sure, but some sort of dialectical relationship is involved.

A few examples come to mind, and they track closely to the evolution of labor in Western societies.

During the early modern period, perhaps owing something to the Reformation’s insistence on the dignity of secular work, the worth of a human being gets anchored to their labor, most of which is, at this point in history, manual labor. The dignity of the manual laborer is later challenged by mechanization during the 18th and 19th centuries, and this results in a series of protest movements, most famously that of the Luddites.

Eventually, a new consensus emerges around the dignity of factory work, and this is, in turn, challenged by the advent of new forms of robotic and computerized labor in the mid-twentieth century.

Enter the so-called knowledge worker, whose short-lived ascendancy is presently threatened by advances in computing and AI.

I think this latest development helps explain our present fascination with creativity. It’s been over a decade since Richard Florida published The Rise of the Creative Class, but interest in and pontificating about creativity continue apace. What I’m suggesting is that this fixation on creativity is another recalibration of what constitutes valuable, dignified labor, which is also, less obviously perhaps, what is taken to constitute the value and dignity of the person. Manual labor and factory jobs give way to knowledge work, which now surrenders to creative work. As they say, nice work if you can get it.

Interestingly, each reconfiguration not only elevated a new form of labor; it also devalued the form of labor being displaced. Manual labor, factory work, even knowledge work, each once accorded dignity and respect, were reframed as tedious, servile, monotonous, and degrading just as they were being replaced. If a machine can do it, the thinking goes, it suddenly becomes sub-human work.

(It’s also worth noting how displaced forms of work seem to re-emerge and regain their dignity in certain circles. I’m presently thinking of Matthew Crawford’s defense of manual labor and the trades. Consider as well this lecture by Richard Sennett, “The Decline of the Skills Society.”)

It’s not hard to find these rhetorical dynamics at play in the countless ongoing discussions of technology, labor, and what human beings are for. Take as just one example this excerpt from the recent New Yorker profile of venture capitalist Marc Andreessen (emphasis mine):

Global unemployment is rising, too—this seems to be the first industrial revolution that wipes out more jobs than it creates. One 2013 paper argues that forty-seven per cent of all American jobs are destined to be automated. Andreessen argues that his firm’s entire portfolio is creating jobs, and that such companies as Udacity (which offers low-cost, online “nanodegrees” in programming) and Honor (which aims to provide better and better-paid in-home care for the elderly) bring us closer to a future in which everyone will either be doing more interesting work or be kicking back and painting sunsets. But when I brought up the raft of data suggesting that intra-country inequality is in fact increasing, even as it decreases when averaged across the globe—America’s wealth gap is the widest it’s been since the government began measuring it—Andreessen rerouted the conversation, saying that such gaps were “a skills problem,” and that as robots ate the old, boring jobs humanity should simply retool. “My response to Larry Summers, when he says that people are like horses, they have only their manual labor to offer”—he threw up his hands. “That is such a dark and dim and dystopian view of humanity I can hardly stand it!”

As always, it is important to ask a series of questions: Who’s selling what? Who stands to profit? Whose interests are being served? Etc. With those considerations in mind, it is telling that leisure has suddenly and conveniently re-emerged as a goal of human existence. Previous fears about technologically driven unemployment have ordinarily been met by assurances that different and better jobs would emerge. It appears that pretense is being dropped in favor of vague promises of a future of jobless leisure. So it seems we’ve come full circle to classical estimations of work and leisure: all work is for chumps and slaves. You may be losing your job, but don’t worry, work is for losers anyway.

So, to sum up: Some time ago, identity and a sense of self-worth got hitched to labor and productivity. Consequently, each new technological displacement of human work appears to those being displaced as an affront to their dignity as human beings. Those advancing new technologies that displace human labor do so by demeaning existing work as beneath our humanity and by promising more humane work as a consequence of technological change. While this is sometimes true—some work that human beings have been forced to perform has been inhuman—deployed as a universal claim it is little more than rhetorical cover for a significantly more complex and ambivalent reality.

Directive from the Borg: Love All Technology, Now!

I don’t know about you, but when I look around it seems to me that we live in what may be conservatively labeled a technology-friendly social environment. If that seems like a reasonable estimation of the situation to you, then, it would appear, you and I are out of touch with reality. Or, at least, this is what certain people in the tech world would have us believe. To hear some of them talk, the technology sector is a beleaguered minority fending off bands of powerful critics; Silicon Valley is an island of thoughtful, benign ingenuity valiantly holding off hordes of Luddite barbarians trying to usher in a new dark age.

Consider this tweet from venture capitalist Marc Andreessen.

Don’t click on that link quite yet. First, let me explain the rhetorical context. Andreessen’s riposte is aimed at two groups at once: on the one hand, those who, like Peter Thiel, worry that we are stuck in a period of technological stagnation; on the other, critics of technology. The implicit twofold message is simple: concerns about stagnation are misguided, and technology is amazing. In fact, “never question progress or technology” is probably a better way of rendering it, but more on that in a moment.

Andreessen has really taken to Twitter. The New Yorker recently published a long profile of Andreessen, which noted that he “tweets a hundred and ten times a day, inundating his three hundred and ten thousand followers with aphorisms and statistics and tweetstorm jeremiads.” It continues,

Andreessen says that he loves Twitter because “reporters are obsessed with it. It’s like a tube and I have loudspeakers installed in every reporting cubicle around the world.” He believes that if you say it often enough and insistently enough it will come—a glorious revenge. He told me, “We have this theory of nerd nation, of forty or fifty million people all over the world who believe that other nerds have more in common with them than the people in their own country. So you get to choose what tribe or band or group you’re a part of.” The nation-states of Twitter will map the world.

Not surprisingly, Andreessen’s Twitter followers tend to be interested in technology and the culture of Silicon Valley. For this reason, I’ve found that glancing at the replies Andreessen’s tweets garner gives us an interesting, if at times somewhat disconcerting, snapshot of attitudes about technology, at least within a certain segment of the population. For instance, if you click on that tweet above and skim the replies it has received, you might assume the linked article was nothing more than a Luddite screed about the evils of technology.

Instead, what you will find is Tom Chatfield interviewing Nick Carr about his latest book. It’s a good interview, too, well worth a few minutes of your time. Carr is, of course, a favorite whipping boy for this crowd, although I’ve yet to see any evidence that they’ve read a word Carr has written.

Here’s a sampling of some of Carr’s more outlandish and incendiary remarks:

• “the question isn’t, ‘should we automate these sophisticated tasks?’, it’s ‘how should we use automation, how should we use the computer to complement human expertise’”

• “I’m not saying that there is no role for labour-saving technology; I’m saying that we can do this wisely, or we can do it rashly; we can do it in a way that understands the value of human experience and human fulfilment, or in a way that simply understands value as the capability of computers.”

• “I hope that, as individuals and as a society, we maintain a certain awareness of what is going on, and a certain curiosity about it, so that we can make decisions that are in our best long-term interest rather than always defaulting to convenience and speed and precision and efficiency.”

• “And in the end I do think that our latest technologies, if we demand more of them, can do what technologies and tools have done through human history, which is to make the world a more interesting place for us, and to make us better people.”

Crazy talk, isn’t it? That guy, what an unhinged, Luddite fear-monger.

Carr has the temerity to suggest that we think about what we are doing, and Andreessen translates this as a complaint that technology is “ruining life as we know it.”

Here’s what this amounts to: you have no choice but to love technology. Forget measured criticism or indifference. No. Instead, you must love everything about it. Love every consequence of every new technology. Love it adamantly and ardently. Express this love proudly and repeatedly: “The world is now more awesome than ever because of technology, and it will only get more awesome each and every day.” Repeat. Repeat. Repeat.

This is pretty much it, right? You tell me.

Classic Borg Complex, of course. But wait, there’s more.

Here’s a piece from the New York Times’ Style Magazine that crossed my path yesterday: “In Defense of Technology.” You read that correctly. In defense of technology. Because, you know, technology really needs defending these days. Obviously.

It gets better. Here’s the quick summary below the title: “As products and services advance, plenty of nostalgists believe that certain elements of humanity have been lost. One contrarian argues that being attached to one’s iPhone is a godsend.”

“One contrarian.”

“One.”

Read that piece, then contemplate Alan Jacobs’ 70th out of 79 theses on technology: “The always-connected forget the pleasures of disconnection, then become impervious to them.” Here are the highlights, in my view, of this defense of technology:

• “I now feel — and this is a revelation — that my past was an interesting and quite fallow period spent waiting for the Internet.”

• “I didn’t know it when I was young, but maybe we were just waiting for more stuff and ways to save time.”

• “I’ve come fully round to time-saving apps. I’ve become addicted to the luxury of clicking through for just about everything I need.”

• “Getting better is getting better. Improvement is improving.”

• “Don’t tell me the spiritual life is over. In many ways it’s only just begun.”

• “What has been lost? Nothing.”

Nothing. Got that? Nothing. So quit complaining. Love it all. Now.

The Pleasures of Self-Tracking

A couple of days ago the NY Times ran a story about smart homes and energy savings. Bottom line:

Independent research studying hundreds of households, and thousands in control groups, found significant energy savings — 7 to 17 percent on average for gas heating and electric cooling. Yet as a percentage of a household’s total gas and electric use, the reduction was 2 to 8 percent.

A helpful savings, but probably not enough of a monthly utility bill to be a call to action. Then, there is the switching cost. Conventional thermostats cost a fraction of the $249 Nest device.
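For what it’s worth, the two sets of figures are consistent once you recall that heating and cooling account for only a portion of a household’s total energy use. Here’s a minimal back-of-the-envelope sketch in Python; the 40 percent share is my own illustrative assumption, not a figure from the article:

    # Map a 7-17% saving on heating/cooling onto the total utility bill,
    # assuming heating and cooling make up about 40% of total household
    # energy use (an illustrative guess, not a number from the story).
    HEATING_COOLING_SHARE = 0.40

    for saving in (0.07, 0.17):
        overall = saving * HEATING_COOLING_SHARE
        print(f"{saving:.0%} saving on heating/cooling -> {overall:.1%} of the total bill")

With that assumed share, the reported savings work out to roughly 3 to 7 percent of the total bill, in line with the 2 to 8 percent the researchers found.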

That’s not particularly interesting, but tucked into the story were a couple of offhand comments that caught my attention.

The story opens with the case of Dustin Bond, who “trimmed his electricity bill last summer by about 40 percent thanks to the sensors and clever software of a digital thermostat.”

A paragraph or two on, the story adds, “Mr. Bond says he bought the Nest device mainly for its looks, a stylish circle of stainless steel, reflective polymer and a color display. Still, he found he enjoyed tracking his home energy use on his smartphone, seeing patterns and making adjustments.”

The intriguing bit here is the passing mention of the pleasures of data tracking. I’m certain Bond is not alone in this. There seems to be something enjoyable about being presented with data about yourself or your environment, adjusting your behavior in response, and then receiving fresh data that registers the impact of your adjustments.

But what is the nature of this pleasure?

Is it like the pleasure of playing a game at which you improve incrementally until you finally win? Is it the pleasure of feeling that your actions make some marginal difference in the world, the pleasure, in other words, of agency? Is it a Narcissus-like pleasure of seeing your self reflected back to you in the guise of data? Or is it the pleasure of feeling as if you have a degree of control over certain aspects of your life?

Perhaps it’s a combination of two or more of these factors, or maybe it’s none of the above. I’m not sure, but I think it may be worth trying to understand the appeal of being measured, quantified, and tracked. It may go a long way toward helping us understand an important segment of emerging technologies.

Happily, Natasha Dow Schüll is on the case. The author of Addiction by Design: Machine Gambling in Las Vegas (which also happens to be, indirectly, one of the best books about social media and digital devices) is working on a book about self-tracking and the Quantified Self. The book is due out next year. Here’s an excerpt from a recent article about Schüll’s work:

She was subsequently drawn to the self-tracking movement, she says, in part because it involved people actively analyzing and acting upon insights derived from their own behavior data — rather than having companies monitor and manipulate them.

“It’s like you are a detective of the self and you have discerned these patterns,” Ms. Schüll says. For example, someone might notice correlations between personal driving habits and mood swings. “Then you can make this change and say to yourself, ‘I’m not going to drive downtown anymore because it makes me grumpy.’”

One last thought. Whatever the pleasures of the smart home or the Quantified Self may be, they need to compensate for an apparent lack of practical effectiveness and efficiency. Here’s one customer’s conclusion regarding GE’s smart light bulbs: “Setting it up required an engineering degree, and it still doesn’t really work [….] For all the utopian promises, it’s easier to turn the lights on and off by hand.”

The article on Schüll’s forthcoming book closed with the following:

But whether these gadgets have beneficial outcomes may not be the point. Like vitamin supplements, for which there is very little evidence of benefit in healthy people, just the act of buying these devices makes many people feel they are investing in themselves. Quantrepreneurs at least are banking on it.

Do Things Want?

Alan Jacobs’ 79 Theses on Technology were offered in the spirit of a medieval disputation, and they succeeded in spurring a number of stimulating responses in a series of essays posted to the Infernal Machine over the last two weeks. Along with my response to Jacobs’ provocations, I wanted to engage the debate between Jacobs and Ned O’Gorman about whether or not we may meaningfully speak of what technologies want. Here’s a synopsis of the exchange with my own commentary along the way.

O’Gorman’s initial response focused on the following theses from Jacobs:

40. Kelly tells us “What Technology Wants,” but it doesn’t: We want, with technology as our instrument.
41. The agency that in the 1970s philosophers & theorists ascribed to language is now being ascribed to technology. These are evasions of the human.
42. Our current electronic technologies make competent servants, annoyingly capricious masters, and tragically incompetent gods.
43. Therefore when Kelly says, “I think technology is something that can give meaning to our lives,” he seeks to promote what technology does worst.
44. We try to give power to our idols so as to be absolved of the responsibilities of human agency. The more they have, the less we have.

46. The cyborg dream is the ultimate extension of this idolatry: to erase the boundaries between our selves and our tools.

O’Gorman framed these theses by saying that he found it “perplexing” that Jacobs “is so seemingly unsympathetic to the meaningfulness of things, the class to which technologies belong.” I’m not sure, however, that Jacobs was denying the meaningfulness of things; rather, as I read him, he is contesting the claim that it is from technology that our lives derive their meaning. That may seem a fine distinction, but I think it is an important one. In any case, a little precision about what exactly “meaning” entails may go a long way toward clarifying that aspect of the discussion.

A little further on, O’Gorman shifts to the question of agency: “Our technological artifacts aren’t wholly distinct from human agency; they are bound up with it.” It is on this ground that the debate mostly unfolds, although there is more than a little slippage between the question of meaning and the question of agency.

O’Gorman appealed to Mary Carruthers’ fascinating study of the place of memory in medieval culture, The Book of Memory: A Study of Memory in Medieval Culture, to support his claim, but I’m not sure the passage he cites does the work he needs it to do. He is seeking to establish, as I read him, two claims: first, that technologies are things and things are meaningful; second, that we may properly attribute agency to technology/things. Now here’s the passage he cites from Carruthers’ work (brackets and ellipses are O’Gorman’s):

“[In the middle ages] interpretation is not attributed to any intention of the man [the author]…but rather to something understood to reside in the text itself.… [T]he important “intention” is within the work itself, as its res, a cluster of meanings which are only partially revealed in its original statement…. What keeps such a view of interpretation from being mere readerly solipsism is precisely the notion of res—the text has a sense within it which is independent of the reader, and which must be amplified, dilated, and broken-out from its words….”

“Things, in this instance manuscripts,” O’Gorman adds, “are indeed meaningful and powerful.” But in this instance, the thing (res) in view is not, in fact, the manuscripts. As Carruthers explains at various other points in The Book of Memory, the res in this context is not a material thing, but something closer to the pre-linguistic essence or idea or concept that the written words convey. It is an immaterial thing.

That said, there are interesting studies that do point to the significance of materiality in the medieval context. Ivan Illich’s In the Vineyard of the Text, for example, dwells at length on medieval reading as a bodily experience, an “ascetic discipline focused by a technical object.” Then there’s Caroline Bynum’s fascinating Christian Materiality: An Essay on Religion in Late Medieval Europe, which explores the multifarious ways matter was experienced and theorized in the late middle ages.

Bynum concludes that “current theories that have mostly been used to understand medieval objects are right to attribute agency to objects, but it is an agency that is, in the final analysis, both too metaphorical and too literal.” She adds that insofar as modern theorizing “takes as self-evident the boundary between human and thing, part and whole, mimesis and material, animate and inanimate,” it may be usefully unsettled by an encounter with medieval theories and praxis, which “operated not from a modern need to break down such boundaries but from a sense that they were porous in some cases, nonexistent in others.”

Of course, taking up Bynum’s suggestion does not entail a re-imagining of our smartphone as a medieval relic, although one suspects that there is but a marginal difference in the degree of reverence granted to both objects. The question is still how we might best understand and articulate the complex relationship between our selves and our tools.

In his reply to O’Gorman, Jacobs focused on O’Gorman’s penultimate paragraph:

“Of course technologies want. The button wants to be pushed; the trigger wants to be pulled; the text wants to be read—each of these want as much as I want to go to bed, get a drink, or get up out of my chair and walk around, though they may want in a different way than I want. To reserve ‘wanting’ for will-bearing creatures is to commit oneself to the philosophical voluntarianism that undergirds technological instrumentalism.”

It’s an interesting feature of the exchange from this point forward that O’Gorman and Jacobs at once emphatically disagree and yet share very similar concerns. The disagreement is centered chiefly on the question of whether or not it is helpful or even meaningful to speak of technologies “wanting.” Their broad agreement, as I read their exchange, is about the inadequacy of what O’Gorman calls “philosophical voluntarianism” and “technological instrumentalism.”

In other words, if you begin by assuming that the most important thing about us is our ability to make rational and unencumbered choices, then you’ll also assume that technologies are neutral tools over which we can achieve complete mastery.

If O’Gorman means what I think he means by this–and what Jacobs takes him to mean–then I share his concerns as well. We cannot think well about technology if we think about technology as mere tools that we use for good or evil. This is the “guns don’t kill people, people kill people” approach to the ethics of technology, and it is, indeed, inadequate as a way of thinking about the ethical status of artifacts, as I’ve argued repeatedly.

Jacobs grants these concerns, but, with a nod to the Borg Complex, he also thinks that we do not help ourselves in facing them if we talk about technologies “wanting.” Here’s Jacobs’ conclusion:

“It seems that [O’Gorman] thinks the dangers of voluntarism are so great that they must be contested by attributing what can only be a purely fictional agency to tools, whereas I believe that the conceptual confusion this creates leads to a loss of a necessary focus on human responsibility, and an inability to confront the political dimensions of technological modernity.”

This seems basically right to me, but it prompted a second reply from O’Gorman that brought some further clarity to the debate. O’Gorman identified three distinct “directions” his disagreement with Jacobs takes: rhetorical, ontological, and ethical.

He frames his discussion of these three differences by insisting that technologies are meaningful by virtue of their “structure of intention,” which entails a technology’s affordances and the web of practices and discourse in which the technology is embedded. So far, so good, although I don’t think intention is the best choice of words. From here O’Gorman goes on to show why he thinks it is “rhetorically legitimate, ontologically plausible, and ethically justified to say that technologies can want.”

Rhetorically, O’Gorman appears to be advocating a Wittgensteinian “look and see” approach: let’s see how people are using language before we rush to delimit a word’s semantic range. To a certain degree, I can get behind this. I’ve advocated as much when it comes to the way we use the word “technology,” itself a term that abstracts and obfuscates. But I’m not sure that once we look we will find much. While our language may animate or personify our technology, I’m less sure that we typically speak about technology “wanting” anything. We do not ordinarily say things like “my iPhone wants to be charged,” “the car wants to go out for a drive,” or “the computer wants to play.” I can, however, think of an exception or two. I have heard, for example, someone explain to an anxious passenger that the airplane “wants” to stay in the air. The phrase “what technology wants” owes much of its currency, such as it is, to the title of Kevin Kelly’s book, and I’m pretty sure Kelly means more by it than O’Gorman might be prepared to endorse.

Ontologically, O’Gorman is “skeptical of attempts to tie wanting to will because willfulness is only one kind of wanting.” “What do we do with instinct, bodily desires, sensations, affections, and the numerous other forms of ‘wanting’ that do not seem to be a product of our will?” he wonders. Fair enough, but all of the examples he cites are connected with beings that are, in a literal sense, alive. Of course I can’t attribute all of my desires to my conscious will; sure, my dog wants to eat, and maybe in some sense my plant wants water. But there’s still a leap involved in saying that my clock wants to tell time. Wanting may not be neatly tied to willing, but I don’t see how it is not tied to sentience.

There’s one other point worth making at this juncture. I’m quite sympathetic to what is basically a phenomenological account of how our tools quietly slip into our subjective, embodied experience of the world. This is why I can embrace so much of O’Gorman’s case. Thinking back many years, I can distinctly remember a moment when I held a baseball in my hand and reflected on how powerfully I felt the urge to throw it, even though I was standing inside my home. This feeling is, I think, what O’Gorman wants us to recognize. The baseball wanted to be thrown! But how far does this kind of phenomenological account take us?

I think it runs into limits when we talk about technologies that do not enter quite so easily into the circuit of mind, body, and world. The case for the language of wanting is strongest the closer the tool is to my body; it weakens the further away the tool gets. Even if we grant that the baseball in hand feels like it wants to be thrown, what exactly does the weather satellite in orbit want? This strongly suggests the degree to which the wanting is properly ours, even while acknowledging the degree to which it is activated by objects in our experience.

Finally, O’Gorman thinks that it is “perfectly legitimate and indeed ethically good and right to speak of technologies as ‘wanting.’” He believes this to be so because “wanting” is not only a matter of willing; it is “more broadly to embody a structure of intention within a given context or set of contexts.” Further, “Will-bearing and non-will-bearing things, animate and inanimate things, can embody such a structure of intention.”

“It is good and right,” O’Gorman insists, “to call this ‘wanting’ because ‘wanting’ suggests that things, even machine things, have an active presence in our life—they are intentional” and, what’s more, their “active presence cannot be neatly traced back to their design and, ultimately, some intending human.”

I agree with O’Gorman that the ethical considerations are paramount, but I’m finally unpersuaded that we are on firmer ground when we speak of technologies wanting, even though I recognize the undeniable importance of the dynamics that O’Gorman wants to acknowledge by speaking so.

Consider what O’Gorman calls the “structure of intention.” I’m not sure intention is the best word to use here. Intentionality resides in the subjective experience of the “I,” but it is true, as phenomenologists have always recognized, that intentionality is not unilaterally directed by the self-consciously willing “I.” It has conscious and non-conscious dimensions, and it may be beckoned and solicited by the world that it simultaneously construes through the workings of perception.

I think we can get at what O’Gorman rightly wants us to acknowledge without attributing “wanting” to objects. We may say, for instance, that objects activate our wanting, as they are intended to do by design, and also in ways that no person intended. But it’s best to think of this latter wanting as an unpredictable surplus of human intentionality rather than to posit a non-human source of wanting. The wanting is always mine, but it may be prompted, solicited, activated, encouraged, fostered, etc., by aspects of the non-human world. So we may correctly talk about a structure of desire that incorporates non-human aspects of the world and thereby acknowledge the situated nature of our own wanting. Within certain contexts, if we were so inclined, we might even call it a structure of temptation.

To fight the good fight, as it were, we must acknowledge how technology’s consequences exceed and slip loose of our cost/benefit analyses, our rational planning, and our best intentions. We must take seriously how the use of our tools shapes our perception of the world and both enables and constrains our thinking and acting. But talk about what technology wants will ultimately obscure moral responsibility. “What the machine/algorithm wanted” too easily becomes the new “I was just following orders.” I believe this to be true because I believe that we have a proclivity to evade responsibility. Best, then, not to allow our language to abet our evasions.

The Spectrum of Attention

Late last month, Alan Jacobs presented 79 Theses on Technology at a seminar hosted by the Institute for Advanced Studies in Culture at the University of Virginia. The theses, dealing chiefly with the problem of attention in digital culture, were posted to the Infernal Machine, a terrific blog devoted to reflection on technology, ethics, and the human person, hosted by the Institute and edited by Chad Wellmon. I’ve long thought very highly of both Jacobs and the Institute, so when Wellmon kindly extended an invitation to attend the seminar, I gladly and gratefully accepted.

Wellmon has also arranged for a series of responses to Jacobs’ theses, which have appeared on the Infernal Machine. Each of these is worth considering. In my response, “The Spectrum of Attention,” I took the opportunity to work out a provisional taxonomy of attention that considers the difference our bodies and our tools make to what we generally call attention.

Here’s a quick excerpt:

We can think of attention as a dance whereby we both lead and are led. This image suggests that receptivity and directedness do indeed work together. The proficient dancer knows when to lead and when to be led, and she also knows that such knowledge emerges out of the dance itself. This analogy reminds us, as well, that attention is the unity of body and mind making its way in a world that can be solicitous of its attention. The analogy also raises a critical question: How ought we conceive of attention given that we are embodied creatures?

Click through to read the rest.