Directive from the Borg: Love All Technology, Now!

I don’t know about you, but when I look around, it seems to me that we live in what may be conservatively labeled a technology-friendly social environment. If that seems like a reasonable estimation of the situation to you, then, it would appear, you and I are out of touch with reality. Or, at least, this is what certain people in the tech world would have us believe. To hear some of them talk, you would think the technology sector is a beleaguered minority fending off bands of powerful critics, that Silicon Valley is an island of thoughtful, benign ingenuity valiantly holding off hordes of Luddite barbarians trying to usher in a new dark age.

Consider this tweet from venture capitalist Marc Andreessen.

Don’t click on that link quite yet. First, let me explain the rhetorical context. Andreessen’s riposte is aimed at two groups at once: on the one hand, those who, like Peter Thiel, worry that we are stuck in a period of technological stagnation; on the other, critics of technology. The implicit twofold message is simple: concerns about stagnation are misguided, and technology is amazing. In fact, “never question progress or technology” is probably a better way of rendering it, but more on that in a moment.

Andreessen has really taken to Twitter. The New Yorker recently published a long profile of Andreessen, which noted that he “tweets a hundred and ten times a day, inundating his three hundred and ten thousand followers with aphorisms and statistics and tweetstorm jeremiads.” It continues,

Andreessen says that he loves Twitter because “reporters are obsessed with it. It’s like a tube and I have loudspeakers installed in every reporting cubicle around the world.” He believes that if you say it often enough and insistently enough it will come—a glorious revenge. He told me, “We have this theory of nerd nation, of forty or fifty million people all over the world who believe that other nerds have more in common with them than the people in their own country. So you get to choose what tribe or band or group you’re a part of.” The nation-states of Twitter will map the world.

Not surprisingly, Andreessen’s Twitter followers tend to be interested in technology and the culture of Silicon Valley. For this reason, I’ve found that glancing at the replies Andreessen’s tweets garner gives us an interesting, if at times somewhat disconcerting, snapshot of attitudes about technology, at least within a certain segment of the population. For instance, if you clicked on that tweet above and skimmed the replies it has received, you would assume the linked article was nothing more than a Luddite screed about the evils of technology.

Instead, what you will find is Tom Chatfield interviewing Nick Carr about his latest book. It’s a good interview, too, well worth a few minutes of your time. Carr is, of course, a favorite whipping boy for this crowd, although I’ve yet to see any evidence that they’ve read a word Carr has written.

Here’s a sampling of some of Carr’s more outlandish and incendiary remarks:

• “the question isn’t, ‘should we automate these sophisticated tasks?’, it’s ‘how should we use automation, how should we use the computer to complement human expertise’”

• “I’m not saying that there is no role for labour-saving technology; I’m saying that we can do this wisely, or we can do it rashly; we can do it in a way that understands the value of human experience and human fulfilment, or in a way that simply understands value as the capability of computers.”

• “I hope that, as individuals and as a society, we maintain a certain awareness of what is going on, and a certain curiosity about it, so that we can make decisions that are in our best long-term interest rather than always defaulting to convenience and speed and precision and efficiency.”

• “And in the end I do think that our latest technologies, if we demand more of them, can do what technologies and tools have done through human history, which is to make the world a more interesting place for us, and to make us better people.”

Crazy talk, isn’t it? That guy, what an unhinged, Luddite fear-monger.

Carr has the temerity to suggest that we think about what we are doing, and Andreessen translates this as a complaint that technology is “ruining life as we know it.”

Here’s what this amounts to: you have no choice but to love technology. Forget measured criticism or indifference. No. Instead, you must love everything about it. Love every consequence of every new technology. Love it adamantly and ardently. Express this love proudly and repeatedly: “The world is now more awesome than ever because of technology and it will only get more awesome each and every day.” Repeat. Repeat. Repeat.

This is pretty much it, right? You tell me.

Classic Borg Complex, of course. But wait, there’s more.

Here’s a piece from the New York Times’ Style Magazine that crossed my path yesterday: “In Defense of Technology.” You read that correctly. In defense of technology. Because, you know, technology really needs defending these days. Obviously.

It gets better. Here’s the quick summary below the title: “As products and services advance, plenty of nostalgists believe that certain elements of humanity have been lost. One contrarian argues that being attached to one’s iPhone is a godsend.”

“One contrarian.”

“One.”

Read that piece, then contemplate Alan Jacobs’ 70th out of 79 theses on technology: “The always-connected forget the pleasures of disconnection, then become impervious to them.” Here are the highlights, in my view, of this defense of technology:

• “I now feel — and this is a revelation — that my past was an interesting and quite fallow period spent waiting for the Internet.”

• “I didn’t know it when I was young, but maybe we were just waiting for more stuff and ways to save time.”

• “I’ve come fully round to time-saving apps. I’ve become addicted to the luxury of clicking through for just about everything I need.”

• “Getting better is getting better. Improvement is improving.”

• “Don’t tell me the spiritual life is over. In many ways it’s only just begun.”

• “What has been lost? Nothing.”

Nothing. Got that? Nothing. So quit complaining. Love it all. Now.

The Pleasures of Self-Tracking

A couple of days ago the New York Times ran a story about smart homes and energy savings. Bottom line:

Independent research studying hundreds of households, and thousands in control groups, found significant energy savings — 7 to 17 percent on average for gas heating and electric cooling. Yet as a percentage of a household’s total gas and electric use, the reduction was 2 to 8 percent.

A helpful savings, but probably not enough of a monthly utility bill to be a call to action. Then, there is the switching cost. Conventional thermostats cost a fraction of the $249 Nest device.
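To put those numbers in perspective, here’s a quick back-of-the-envelope calculation. The $150 monthly bill is my own illustrative assumption; only the $249 price and the 2 to 8 percent range come from the story:

```python
# Rough payback estimate for a $249 smart thermostat.
# The $150 monthly bill is an assumed figure for illustration;
# the price and the 2-8% savings range come from the Times story.

DEVICE_COST = 249.00  # dollars

def payback_months(monthly_bill, savings_rate):
    """Months of savings needed to recoup the device cost."""
    return DEVICE_COST / (monthly_bill * savings_rate)

for rate in (0.02, 0.08):  # low and high ends of the reported range
    print(f"At {rate:.0%} savings on a $150 bill: "
          f"~{payback_months(150.00, rate):.0f} months to break even")

# At 2%: roughly 83 months (about seven years).
# At 8%: roughly 21 months.
```

Even at the high end, the device takes the better part of two years to pay for itself, which is presumably why the savings register as helpful rather than as a call to action.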

That’s not particularly interesting, but tucked into the story were a couple of offhand comments that caught my attention.

The story opens with the case of Dustin Bond, who “trimmed his electricity bill last summer by about 40 percent thanks to the sensors and clever software of a digital thermostat.”

A paragraph or two on, the story adds, “Mr. Bond says he bought the Nest device mainly for its looks, a stylish circle of stainless steel, reflective polymer and a color display. Still, he found he enjoyed tracking his home energy use on his smartphone, seeing patterns and making adjustments.”

The intriguing bit here is the passing mention of the pleasures of data tracking. I’m certain Bond is not alone in this. There seems to be something enjoyable about being presented with data about yourself or your environment, adjusting your behavior in response, and then receiving new data that registers the impact of your refined actions.

But what is the nature of this pleasure?

Is it like the pleasure of playing a game at which you improve incrementally until you finally win? Is it the pleasure of feeling that your actions make some marginal difference in the world, the pleasure, in other words, of agency? Is it a Narcissus-like pleasure of seeing your self reflected back to you in the guise of data? Or is it the pleasure of feeling as if you have a degree of control over certain aspects of your life?

Perhaps it’s a combination of two or more of these factors, or maybe it’s none of the above. I’m not sure, but I think it may be worth trying to understand the appeal of being measured, quantified, and tracked. It may go a long way toward helping us understand an important segment of emerging technologies.

Happily, Natasha Dow Schüll is on the case. The author of Addiction by Design: Machine Gambling in Las Vegas (which also happens to be, indirectly, one of the best books about social media and digital devices) is working on a book about self-tracking and the Quantified Self. The book is due out next year. Here’s an excerpt from a recent article about Schüll’s work:

She was subsequently drawn to the self-tracking movement, she says, in part because it involved people actively analyzing and acting upon insights derived from their own behavior data — rather than having companies monitor and manipulate them.

“It’s like you are a detective of the self and you have discerned these patterns,” Ms. Schüll says. For example, someone might notice correlations between personal driving habits and mood swings. “Then you can make this change and say to yourself, ‘I’m not going to drive downtown anymore because it makes me grumpy.’”

One last thought. Whatever the pleasures of the smart home or the Quantified Self may be, they need to compensate for an apparent lack of practical effectiveness and efficiency. Here’s one customer’s conclusion regarding GE’s smart light bulbs: “Setting it up required an engineering degree, and it still doesn’t really work […] For all the utopian promises, it’s easier to turn the lights on and off by hand.”

The article on Schüll’s forthcoming book closed with the following:

But whether these gadgets have beneficial outcomes may not be the point. Like vitamin supplements, for which there is very little evidence of benefit in healthy people, just the act of buying these devices makes many people feel they are investing in themselves. Quantrepreneurs at least are banking on it.

The Spectrum of Attention

Late last month, Alan Jacobs presented 79 Theses on Technology at a seminar hosted by the Institute for Advanced Studies in Culture at the University of Virginia. The theses, dealing chiefly with the problem of attention in digital culture, were posted to The Infernal Machine, a terrific blog hosted by the Institute, edited by Chad Wellmon, and devoted to reflection on technology, ethics, and the human person. I’ve long thought very highly of both Jacobs and the Institute, so when Wellmon kindly extended an invitation to attend the seminar, I gladly and gratefully accepted.

Wellmon has also arranged for a series of responses to Jacobs’ theses, which have appeared on The Infernal Machine. Each of these is worth considering. In my response, “The Spectrum of Attention,” I took the opportunity to work out a provisional taxonomy of attention that considers the difference our bodies and our tools make to what we generally call attention.

Here’s a quick excerpt:

We can think of attention as a dance whereby we both lead and are led. This image suggests that receptivity and directedness do indeed work together. The proficient dancer knows when to lead and when to be led, and she also knows that such knowledge emerges out of the dance itself. This analogy reminds us, as well, that attention is the unity of body and mind making its way in a world that can be solicitous of its attention. The analogy also raises a critical question: How ought we conceive of attention given that we are embodied creatures?

Click through to read the rest.

What Do We Think We Are Doing When We Are Thinking?

Over the past few weeks, I’ve drafted about half a dozen posts in my mind that, sadly, I’ve not had the time to write. Among those mental drafts in progress is a response to Evgeny Morozov’s latest essay. The piece is ostensibly a review of Nick Carr’s The Glass Cage, but it’s really a broadside against the whole enterprise of tech criticism (as Morozov sees it). I’m not sure about the other mental drafts, but that is one I’m determined to see through. Look for it in the next few days … maybe.

In the meantime, here’s a quick reaction to a post by Steve Coast that has been making the rounds today.

In “The World Will Only Get Weirder,” Coast opens with some interesting observations about aviation safety. Taking the recent spate of bizarre aviation incidents as his point of departure, Coast argues that rules as a means of managing safety will only get you so far.

The history of aviation safety is the history of rule-making and checklists. Over time, this approach successfully addressed the vast majority of aviation safety issues. Eventually, however, you hit peak rules, as it were, and enter a byzantine phase of rule-making. Here’s the heart of the piece:

“We’ve reached the end of the useful life of that strategy and have hit severely diminishing returns. As illustration, we created rules to make sure people can’t get in to cockpits to kill the pilots and fly the plane in to buildings. That looked like a good rule. But, it’s created the downside that pilots can now lock out their colleagues and fly it in to a mountain instead.

It used to be that rules really helped. Checklists on average were extremely helpful and have saved possibly millions of lives. But with aircraft we’ve reached the point where rules may backfire, like locking cockpit doors. We don’t know how many people have been saved without locking doors since we can’t go back in time and run the experiment again. But we do know we’ve lost 150 people with them.

And so we add more rules, like requiring two people in the cockpit from now on. Who knows what the mental capacity is of the flight attendant that’s now allowed in there with one pilot, or what their motives are. At some point, if we wait long enough, a flight attendant is going to take over an airplane having only to incapacitate one, not two, pilots. And so we’ll add more rules about the type of flight attendant allowed in the cockpit and on and on.”

This struck me as a rather sensible take on the limits of a rule-oriented, essentially bureaucratic approach to problem solving, which is to say the limits of technocracy or technocratic rationality. Limits, incidentally, that apply as well to our increasing dependence on algorithmic automation.

Of course, this is not to say that rule-oriented, bureaucratic reason is useless. Far from it. As a mode of thinking it is, in fact, capable of solving a great number of problems. It is eminently useful, if also profoundly limited.

Problems arise, however, when this one mode of thought crowds out all others, when we can’t even conceive of an alternative.

This dynamic is, I think, illustrated by a curious feature of Coast’s piece. The engaging argument that characterizes the first half or so of the post gives way to a far less cogent and, frankly, troubling attempt at a solution:

“The primary way we as a society deal with this mess is by creating rule-free zones. Free trade zones for economics. Black budgets for military. The internet for intellectual property. Testing areas for drones. Then after all the objectors have died off, integrate the new things in to society.”

So, it would seem, Coast would have us address the limits of rule-oriented, bureaucratic reason by throwing out all rules, at least within certain contexts, until everyone gets on board or dies off. This stark opposition is plausible only if you can’t imagine an alternative mode of thought that might direct your actions. “We only have one way of thinking” seems to be the unspoken premise. Given that premise, once that mode of thinking fails, there’s nothing left to do but discard thinking altogether.

As I was working on this post I came across a story on NPR that also illustrates our unfortunately myopic understanding of what counts as thought. The story discusses a recent study that identifies a tendency the researchers labeled “algorithm aversion”:

“In a paper just published in the Journal of Experimental Psychology: General, researchers from the University of Pennsylvania’s Wharton School of Business presented people with decisions like these. Across five experiments, they found that people often chose a human — themselves or someone else — over a model when it came to making predictions, especially after seeing the model make some mistakes. In fact, they did so even when the model made far fewer mistakes than the human. The researchers call the phenomenon ‘algorithm aversion,’ where ‘algorithm’ is intended broadly, to encompass — as they write — ‘any evidence-based forecasting formula or rule.’”
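It’s worth pausing over how broadly “algorithm” is meant here. A minimal sketch of the sort of evidence-based forecasting rule the researchers have in mind might look something like this; the variables, weights, and scales are my own invented illustration (echoing the GMAT-and-majors examples Lombrozo mentions below), not the study’s actual model:

```python
# A toy "evidence-based forecasting formula" of the kind the study
# calls an algorithm. The predictors and weights are invented for
# illustration; they are not the study's actual model.

def forecast_grad_gpa(undergrad_gpa, test_score, interview_rating):
    """Linear rule: weight each piece of evidence and sum."""
    return (0.5 * undergrad_gpa
            + 0.3 * (test_score / 200)   # rescale a 0-800 test score to 0-4
            + 0.2 * interview_rating)    # interview rated on a 0-4 scale

prediction = forecast_grad_gpa(undergrad_gpa=3.4, test_score=680,
                               interview_rating=3.0)
print(f"The rule forecasts a graduate GPA of {prediction:.2f}")
```

Nothing exotic: weigh the evidence, sum it, output a prediction. The study’s finding is that people abandon rules like this after seeing them err, even when the rules err less often than the human judges they prefer.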

After considering what might account for algorithm aversion, the author, psychology professor Tania Lombrozo, closes with this:

“I’m left wondering how people are thinking of their own decision process if not in algorithmic terms — that is, as some evidence-based forecasting formula or rule. Perhaps the aversion — if it is that — is not to algorithms per se, but to the idea that the outcomes of complex, human processes can be predicted deterministically. Or perhaps people assume that human ‘algorithms’ have access to additional information that they (mistakenly) believe will aid predictions, such as cultural background knowledge about the sorts of people who select different majors, or about the conditions under which someone might do well versus poorly on the GMAT. People may simply think they’re implementing better algorithms than the computer-based alternatives.

So, here’s what I want to know. If this research reflects a preference for ‘human algorithms’ over ‘nonhuman algorithms,’ what is it that makes an algorithm human? And if we don’t conceptualize our own decisions as evidence-based rules of some sort, what exactly do we think they are?”

Maybe it’s just me, but it seems Lombrozo can’t quite imagine how people might understand their own thinking except on the model of an algorithm.

These two pieces raise a series of questions for me, and I’ll leave you with them:

What is thinking? What do we think we are doing when we are thinking? Can we imagine thinking as something more and other than rule-oriented problem solving or cost/benefit analysis? Have we surrendered our thinking to the controlling power of one master metaphor, the algorithm?

(Spoiler alert: I think the work of Hannah Arendt is of immense help in these matters.)

Quantify Thyself

A thought in passing this morning. Here’s a screen shot that purports to be from an ad for Microsoft’s new wearable device called Band:

[Screenshot from a Microsoft Band ad, via Windows Central]

I say “purports” because I’ve not been able to find this particular shot and caption on any official Microsoft sites. I first encountered it in this story about Band from October of last year, and I also found it posted to a Reddit thread around the same time. You can watch the official ad here.

It may be that this image is a hoax or that Microsoft decided it was a bit too disconcerting and pulled it. A more persistent sleuth should be able to determine which. Whether authentic or not, however, it is instructive.

In tweeting a link to the story in which I first saw the image, I commented: “Define ‘know,’ ‘self,’ and ‘human.'” Nick Seaver astutely replied: “that’s exactly what they’re doing, eh?”

Again, the “they” in this case appears to be a bit ambiguous. That said, the picture is instructive because it reminds us, as Seaver’s reply suggests, that more than our physical fitness is at stake in the emerging regime of quantification. If I were to expand my list of 41 questions about technology’s ethical dimensions, I would include these: How will the use of this technology redefine my moral vocabulary? What about myself will the use of this technology encourage me to value?

Consider all that is accepted when someone buys into the idea, even if tacitly so, that Microsoft Band will in fact deepen their knowledge of themselves. What assumptions are accepted about the nature of what it means to know and what there is to know and what can be known? What is implied about the nature of the self when we accept that a device like Band can help us understand it more effectively? We are, needless to say, rather far removed from the Delphic injunction, “Know thyself.”

It is not, of course, that I necessarily think users of Band will be so naive that they will consciously believe there is nothing more to their identity than what Band can measure. Rather, it’s that most of us do have a propensity to pay more attention to what we can measure, particularly when an element of competitiveness is introduced.

I’ll go a step further. Not only do we tend to pay more attention to what we can measure, we begin to care more about what we can measure. Perhaps that is because measurement affords us a degree of ostensible control over whatever it is that we are able to measure. It makes self-improvement tangible and manageable, but it does so, in part, by reducing the self to those dimensions that register on whatever tool or device we happen to be using to take our measure.

I find myself frequently coming back to one line in a poem by Wendell Berry: “We live the given life, not the planned.” Indeed, and we might also say, “We live the given life, not the quantified.”

A certain vigilance is required to remember that our often marvelous tools of measurement always achieve their precision by narrowing, sometimes radically, what they take into consideration. To reveal one dimension of the whole, they must obscure the others. The danger lies in confusing the partial representation for the whole.