What Could Go Right?

Critic and humorist Joe Queenan took aim at the Internet of Things in this weekend’s Wall Street Journal. It’s a mildly entertaining consideration of what could go wrong when our appliances, devices, and online accounts are all networked together. For example:

“If the wireless subwoofers are linked to the voice-activated oven, which is linked to the Lexus, which is linked to the PC’s external drive, then hackers in Moscow could easily break in through your kid’s PlayStation and clean out your 401(k). The same is true if the snowblower is linked to the smoke detector, which is linked to the laptop, which is linked to your cash-strapped grandma’s bank account. A castle is only as strong as its weakest portcullis.”

He goes on to imagine hackers reprogramming your smart refrigerator to order “thousands of gallons of banana-flavored soy milk every week,” or your music library to play only “Il Divo, Il Divo, Il Divo, 24 hours a day.” Queenan gives readers a few more of these humorously intoned, marginally plausible scenarios that, with a light touch, point to some of the ways the Internet of Things could go wrong.

In any case, after reading Queenan’s playful lampoon of the Internet of Things, it occurred to me that more often than not our worries about new technology center on the question, “What could go wrong?” In fact, we often ask that sarcastically to suggest that some new technology is obviously fraught with risk. For instance: Geoengineering. Global-scale interventions in the delicate, imperfectly understood workings of the earth’s climate with potentially massive and irreversible consequences … what could go wrong?

Of course, this is a perfectly reasonable question to ask. We ask it, and engineers and technologists respond by assuring us that safety measures are in place, contingencies have been accounted for, precautions have been taken, etc. Or, alternatively, that the risks of doing nothing are greater than the risks of proceeding with some technological project. In other words, asking what could go wrong tends to lock us in the technocratic frame of mind. It invites cost/benefit analysis, rational planning, technological fixes to technological problems, all mixed through and through with sprinklings or heaps of hubris.

Very often, despite some initial failures and, one hopes, not-too-tragic accidents, the kinks do get worked out, disasters are averted (or mostly so), and the new technology stabilizes. The voices of critics who worried about what could go wrong suddenly sound a lot like a chorus of boys crying wolf. Enthusiasts wipe the sweat from their brows, take a deep breath, and confidently proclaim, “I told you so.”

All well and good. There’s only one problem. Maybe asking “What could go wrong?” is a short-sighted way of thinking about new technologies. Maybe we should also be asking, “What could go right?”

What if this new technology worked just as advertised? What if it became a barely-noticed feature of our technological landscape? What if it were seamlessly integrated into our social life? What if it delivered on its promise?

Accidents and disasters get our attention; their possibility makes us anxious. The more spectacular the promise of a new technology, the more nervous we might be about what could go wrong. But if we focus exclusively on the accident, we lose sight of the fact that the most consequential technologies are usually those that end up working. They are the ones that reorder our lives, reframe our experience, restructure our social lives, recalibrate our sense of time and place. Etc.

In his recent review of Jordan Ellenberg’s How Not to Be Wrong: The Power of Mathematical Thinking (a title with a mildly hubristic ring, to be sure), Peter Pesic opens with an anecdote about problem solving during World War II. Given the trade-offs involved in placing extra armor on fighter planes and bombers–increased weight, decreased range–where should military airplanes be reinforced? Noticing that returning planes had more bullet holes in the fuselage than in the engine, some suggested reinforcing the fuselage. There was one, seemingly obvious, problem with this line of thinking. As the mathematician Abraham Wald noted, this solution ignored the planes that didn’t make it back, most likely because they had been shot in the engine.

This little anecdote–from what seems like a fascinating book, by the way–reminds us that where you look sometimes makes all the difference. A truism, certainly, but no less true because of it. If in thinking about new technologies (or those old ones, which are no less consequential for having lost the radiance of novelty) we look only at the potential accident, then we may miss what matters most.
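For readers who like to see the logic of Wald’s observation worked out, here is a minimal sketch of survivorship bias. The sample size and hit-fatality probabilities are invented purely for illustration; nothing here is drawn from Wald’s actual data.

```python
import random

# Illustrative simulation of survivorship bias: planes hit in the engine
# rarely return, so the planes we can inspect over-represent fuselage hits.
# All numbers below are assumptions made up for this sketch.
random.seed(42)

N_SORTIES = 10_000
P_ENGINE_FATAL = 0.8    # assumed: engine hits usually bring the plane down
P_FUSELAGE_FATAL = 0.1  # assumed: fuselage hits are usually survivable

all_hits = {"engine": 0, "fuselage": 0}
observed_hits = {"engine": 0, "fuselage": 0}

for _ in range(N_SORTIES):
    location = random.choice(["engine", "fuselage"])  # hits land roughly evenly
    all_hits[location] += 1
    p_fatal = P_ENGINE_FATAL if location == "engine" else P_FUSELAGE_FATAL
    if random.random() >= p_fatal:
        observed_hits[location] += 1  # only returning planes get inspected

print("Hits on all planes:      ", all_hits)
print("Hits on returning planes:", observed_hits)
```

Run it and the returning planes show far more fuselage hits than engine hits, even though both locations were struck about equally often: the data you can see points away from the place that most needs the armor.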

As more than a few critics have noted over the years, our thinking about technology is often already compromised by a technocratic frame of mind. We are, in such cases, already evaluating technology on its own terms. What we need, then, is to recover ways of thinking that don’t already assume technological standards. Admittedly, this can be a challenging project. It requires breaking long-engrained habits of thought–habits of thought which are all the more difficult to escape because they take on the cast of common sense. My point here is to suggest that one step in that direction is to let go of the assumption that any well-working, smoothly operating technology is ipso facto a good and unproblematic technology.

If You’re Keeping Score at Home …

… then you know that I have three series of posts in progress right now. Two relate to the “Internet of Things,” automation, and technological enchantment; the third deals with the religious/cultural matrix of technological innovation. As it happens, though, each of these will be on hold for the next week or so, during which I’ll be away and without an Internet connection–on a digital sabbath of sorts, although more by circumstance than by design. In any case, the blog will be silent for the next several days.

More on Mechanization, Automation, and Animation

As I follow the train of thought that took the dream of a smart home as a point of departure, I’ve come to a fork in the tracks. Down one path, I’ll continue thinking about the distinctions among Mechanization, Automation, and Animation. Down the other, I’ll pursue the technological enchantment thesis that arose incidentally in my mind as a way of either explaining or imaginatively characterizing the evolution of technology along those three stages.

Separating these two tracks is a pragmatic move. It’s easier for me at this juncture to consider them separately, particularly to weigh the merits of the latter. It may be that the two tracks will later converge, or it may be that one or both are dead ends. We’ll see. Right now I’ll get back to the three stages.

In his comment on my last post, Evan Selinger noted that my schema was Borgmannesque in its approach, and indeed it was. If you’ve been reading along for a while, you know that I think highly of Albert Borgmann’s work. I’ve drawn on it a time or two of late. Borgmann looked for a pattern that might characterize the development of technology, and he came up with what he called the device paradigm. Succinctly put, the device paradigm described the tendency of machines to become simultaneously more commodious and more opaque, or, to put it another way, easier to use and harder to understand.

In my last post, I used heating as an example to walk through the distinctions among mechanization, automation, and animation. Borgmann also uses heating to illustrate the device paradigm: lighting and sustaining a fire is one thing, flipping a switch to turn on the furnace is another. Food and music also serve as recurring illustrations for Borgmann. Preparing a meal from scratch is one thing, popping a TV dinner in the microwave is another. Playing the piano is one thing, listening to an iPod is another. In each case a device made access to the end product–heat, food, music–easier, instantaneous, safer, more efficient. In each case, though, the workings of the device beneath the commodious surface became more complex and opaque. (Note that in the case of food preparation, both the microwave and the TV dinner are devices.) Ease of use also came at the expense of physical engagement, which, in Borgmann’s view, results in an impoverishment of experience and a rearrangement of the social world.

Keep that dynamic in mind as we move forward. The device paradigm does a good job, I think, of helping us think about the transition to mechanization and from mechanization to automation and animation, chiefly by asking us to consider what we’re sacrificing in exchange for the commodiousness offered to us.

Ultimately, we want to avoid the impulse to automate for automation’s sake. As Nick Carr, whose forthcoming book, The Glass Cage: Automation and Us, will be an excellent guide in these matters, recently put it, “What should be automated is not what can be automated but what should be automated.”

That principle came at the end of a short post reflecting on comments made by Google’s “Android guru,” Sundar Pichai. Pichai offered a glimpse at how Google envisions the future when he described how useful it would be if your car could sense that your child was now inside and automatically change the music playlists accordingly. Here’s part of Carr’s response:

“With this offhand example, Pichai gives voice to Silicon Valley’s reigning assumption, which can be boiled down to this: Anything that can be automated should be automated. If it’s possible to program a computer to do something a person can do, then the computer should do it. That way, the person will be ‘freed up’ to do something ‘more valuable.’ Completely absent from this view is any sense of what it actually means to be a human being. Pichai doesn’t seem able to comprehend that the essence, and the joy, of parenting may actually lie in all the small, trivial gestures that parents make on behalf of or in concert with their kids — like picking out a song to play in the car. Intimacy is redefined as inefficiency.”

But how do we come to know what should be automated? I’m not sure there’s a short answer to that question, but it’s safe to say that we’re going to need to think carefully about what we do and why we do it. Again, this is why I think Hannah Arendt was ahead of her time when she undertook the intellectual project that resulted in The Human Condition and the unfinished The Life of the Mind. In the first she set out to understand our doing and in the second, our thinking. And all of this in light of the challenges presented by emerging technological systems.

One of the upshots of new technologies, if we accept the challenge, is that they lead us to look again at what we might have otherwise taken for granted or failed to notice altogether. New communication technologies encourage us to think again about the nature of human communication. New medical technologies encourage us to think again about the nature of health. New transportation technologies encourage us to think again about the nature of place. And so on.

I had originally used the word “forced” where I settled for the word “encourage” above. I changed the wording because, in fact, new technologies don’t force us to think again about the realms of life they impact. It is quite easy, too easy perhaps, not to think at all, simply to embrace and adopt the new technology without considering its consequences. Or, what amounts to the same thing, it is just as easy to reject new technologies out of hand because they are new. In neither case would we be thinking at all. If we accept the challenge to think again about the world as new technologies cast aspects of it in a new light, we might even come to see this development as a great gift, one that leads us to value, appreciate, and even love what had before gone unnoticed.

Returning to the animation schema, we might make a start at thinking by simply asking ourselves what exactly is displaced at each transition. When it comes to mechanization, it seems fairly straightforward. Mechanization, as I’m defining it, ordinarily displaces physical labor.

Capturing what exactly is displaced when it comes to automation is a bit more challenging. In part, this is because the distinctions I’m making between mechanization and automation on the one hand and automation and animation on the other are admittedly fuzzy. In fact, all three are often simply grouped together under the category of automation. This is a simpler move, but I’m concerned that we might not get a good grasp of the complex ways in which technologies interact with human action if we don’t parse things a bit more finely.

So let’s start by suggesting that automation, the stage at which machines operate without the need for constant human input and direction, displaces attention. When something is automated, I can pay much less attention to it, or perhaps, no attention at all. We might also say that automation displaces will or volition. When a process is automated, I don’t have to will its action.

Finally, animation–the stage at which machines not only act without direct human intervention, but also “learn” and begin to “make decisions” for themselves–displaces agency and judgment.

By noting what is displaced we can then ask whether the displaced element was an essential or inessential aspect of the good or end sought by the means, and so we might begin to arrive at some more humane conclusions about what ought to be automated.

I’ll leave things there for now, but more will be forthcoming. Right now I’ll leave you with a couple of questions I’ll be thinking about.

First, Borgmann distinguished between things and devices (see here or here). Once we move from automation to animation, do we need a new category?

Also, coming back to Arendt, she laid out two sets of three categories that overlap in interesting ways with the three stages as I’m thinking of them. In her discussion of human doing, she identifies labor, work, and action. In her discussion of human thinking, she identifies thought, will, and judgment. How can her theorizing of these categories help us understand what’s at stake in the drive to automate and animate?

The Facebook Experiment, Briefly Noted

More than likely, you’ve recently heard about Facebook’s experiment in the manipulation of user emotions. I know, Facebook as a whole is an experiment in the manipulation of user emotions. Fair enough, but this was a more pointed experiment that involved the manipulation of what users see in their News Feeds. Here is how the article in the Proceedings of the National Academy of Sciences summarized the significance of the findings:

“We show, via a massive (N = 689,003) experiment on Facebook, that emotional states can be transferred to others via emotional contagion, leading people to experience the same emotions without their awareness. We provide experimental evidence that emotional contagion occurs without direct interaction between people (exposure to a friend expressing an emotion is sufficient), and in the complete absence of nonverbal cues.”

Needless to say (well except to Facebook and the researchers involved), this massive psychological experiment raises all sorts of ethical questions and concerns. Here are some of the more helpful pieces I’ve found on the experiment and its implications:

Update: Here are two more worth considering:

Those six pieces should give you a good sense of the variety of issues involved in the whole situation, along with a host of links to other relevant material.

I don’t have too much to add except two quick observations. First, I was reminded, especially by Gurstein’s post, of Greg Ulmer’s characterization of the Internet as a “prosthesis of the unconscious.” Ulmer means something like this: The Internet has become a repository of the countless ways that culture has imprinted itself upon us and shaped our identity. Prior to the advent of the Internet, most of those memories and experiences would be lost to us even while they may have continued to be a part of who we became. The Internet, however, allows us to access many, if not all, of these cultural traces, bringing them to our conscious awareness and allowing us to think about them.

What Facebook’s experiment suggests rather strikingly is that such a prosthesis is, as we should have known, a two-way interface. It not only facilitates our extension into the world; it is also a means by which the world can take hold of us. As a prosthesis of our unconscious, the Internet is not only an extension of our unconscious; it also permits the manipulation of the unconscious by external forces.

Secondly, I was reminded of Paul Virilio’s idea of the general or integral accident. Virilio has written extensively about technology, speed, and accidents. The accident is an expansive concept in his work. In his view, accidents are built into the nature of any new technology. As he has frequently put it, “When you invent the ship, you also invent the shipwreck; when you invent the plane you also invent the plane crash … Every technology carries its own negativity, which is invented at the same time as technical progress.”

The general or integral accident is made possible by complex technological systems. The accident made possible by a nuclear reactor or air traffic is obviously of a greater scale than that made possible by the invention of the hammer. Complex technological systems create the possibility of cascading accidents of immense scale and destructiveness. Information technologies also introduce the possibility of integral accidents. Virilio’s most common examples of these information accidents include the flash crashes on stock exchanges induced by electronic trading.

All of this is to say that Facebook’s experiment gives us a glimpse of what shape the social media accident might take. An interviewer alludes to this line from Virilio: “the synchronizing of collective emotions that leads to the administration of fear.” I’ve not been able to track down the original context, but it struck me as suggestive in light of this experiment.

Oh, and lastly, Facebook COO Sheryl Sandberg issued an apology of sorts. I’ll let Nick Carr tell you about it.

Links: Jacques Ellul, Luddites, and More

A follow-up to my last post is still forthcoming. In the meantime, here are a few links for your reading pleasure.

At The New Atlantis, Joshua Schulz writes about “Machine Grading and Moral Learning.” The essay is a critique of machine-based grading of student essays, but it ranges widely and deeply in its argument. Here’s an excerpt:

“The functional- and efficiency-centric view of technology, and the moral objections to it, have been around for a long time. Look to the tale of John Henry, the steel-driving man of American folklore who raced a tunnel-boring steam engine in a contest of efficiency, beating the machine but dying in the attempt. The moral of the tale is not, of course, that we will always be able to beat our machines in a fair contest. Rather, the contest is a tragic one, highlighting a cultural hamartia, namely, the belief that competing with the steam engine on its own terms is anything other than degrading.”

A post at Librarian Shipwreck asks, “Who’s Afraid of General Ludd?” You may remember that Borg Complex symptoms include, “Uses the term Luddite a-historically and as a casual slur.” That observation is echoed here:

“Whenever the term ‘Luddite’ appears as an insult it acts less as a reflection of the motives of those being slurred and more as a reflection of the fears of the person delivering the insult. But far from undermining Luddism, all that these insults do is underscore the tremendous power that a critique of technology couched in ‘commonality’ can still command.”

Read the whole thing for a historically grounded look at the Luddites and their motives.

Relatedly, here is a video and transcript of an interview with the late Jacques Ellul posted at Second Nature Journal. His insights resonate still. Here are two from the interview:

“Technology also obliges us to live more and more quickly. Inner reflection is replaced by reflex. Reflection means that, after I have undergone an experience, I think about that experience. In the case of a reflex you know immediately what you must do in a certain situation. Without thinking. Technology requires us no longer to think about the things. If you are driving a car at 150 kilometers an hour and you think, you’ll have an accident. Everything depends on reflexes. The only thing technology requires of us is: Don’t think about it. Use your reflexes.

Technology will not tolerate any judgment being passed on it. Or rather: technologists do not easily tolerate people expressing an ethical or moral judgment on what they do. But the expression of ethical, moral and spiritual judgments is actually the highest freedom of mankind. So I am robbed of my highest freedom. So whatever I say about technology and the technologists themselves is of no importance to them. It won’t deter them from what they are doing. They are now set in their course. They are so conditioned.”

Keep that last paragraph in mind as you read this last story: “For One Baby, Life Begins with Genome Revealed.”