Critic and humorist Joe Queenan took aim at the Internet of Things in this weekend’s Wall Street Journal. It’s a mildly entertaining consideration of what could go wrong when our appliances, devices, and online accounts are all networked together. For example:
“If the wireless subwoofers are linked to the voice-activated oven, which is linked to the Lexus, which is linked to the PC’s external drive, then hackers in Moscow could easily break in through your kid’s PlayStation and clean out your 401(k). The same is true if the snowblower is linked to the smoke detector, which is linked to the laptop, which is linked to your cash-strapped grandma’s bank account. A castle is only as strong as its weakest portcullis.”
He goes on to imagine hackers reprogramming your smart refrigerator to order “thousands of gallons of banana-flavored soy milk every week,” or your music library to play only “Il Divo, Il Divo, Il Divo, 24 hours a day.” Queenan gives readers a few more of these humorous, marginally plausible scenarios that, with a light touch, point to some of the ways the Internet of Things could go wrong.
In any case, after reading Queenan’s playful lampoon of the Internet of Things, it occurred to me that more often than not our worries about new technology center on the question, “What could go wrong?” In fact, we often ask that sarcastically to suggest that some new technology is obviously fraught with risk. For instance: Geoengineering. Global-scale interventions in the delicate, imperfectly understood workings of the earth’s climate with potentially massive and irreversible consequences … what could go wrong?
Of course, this is a perfectly reasonable question to ask. We ask it, and engineers and technologists respond by assuring us that safety measures are in place, contingencies have been accounted for, precautions have been taken, etc. Or, alternatively, that the risks of doing nothing are greater than the risks of proceeding with some technological project. In other words, asking what could go wrong tends to lock us in the technocratic frame of mind. It invites cost/benefit analysis, rational planning, technological fixes to technological problems, all mixed through and through with sprinklings or heaps of hubris.
Very often, despite some initial failures and, one hopes, not-too-tragic accidents, the kinks do get worked out, disasters are averted (or mostly so), and the new technology stabilizes. The voices of critics who worried about what could go wrong suddenly sound a lot like a chorus of boys crying wolf. Enthusiasts wipe the sweat from their brows, take a deep breath, and confidently proclaim, “I told you so.”
All well and good. There’s only one problem. Maybe asking “What could go wrong?” is a short-sighted way of thinking about new technologies. Maybe we should also be asking, “What could go right?”
What if this new technology worked just as advertised? What if it became a barely noticed feature of our technological landscape? What if it were seamlessly integrated into our social life? What if it delivered on its promise?
Accidents and disasters get our attention; their possibility makes us anxious. The more spectacular the promise of a new technology, the more nervous we might be about what could go wrong. But if we focus exclusively on the accident, we lose sight of the fact that the most consequential technologies are usually those that end up working. They are the ones that reorder our lives, reframe our experience, restructure our social relations, and recalibrate our sense of time and place.
In his recent review of Jordan Ellenberg’s How Not to Be Wrong: The Power of Mathematical Thinking (a title with a mildly hubristic ring, to be sure), Peter Pesic opens with an anecdote about problem solving during World War II. Given the trade-offs involved in placing extra armor on fighter planes and bombers–increased weight, decreased range–where should military airplanes be reinforced? Noticing that returning planes had more bullet holes in the fuselage than in the engine, some suggested reinforcing the fuselage. There was one problem with this line of thinking, obvious only in retrospect. As the mathematician Abraham Wald noted, this solution ignored the planes that didn’t make it back, most likely because they had been shot in the engine.
This little anecdote–from what seems like a fascinating book, by the way–reminds us that where you look sometimes makes all the difference. A truism, certainly, but no less true because of it. If, in thinking about new technologies (or old ones, which are no less consequential for having lost the radiance of novelty), we look only at the potential accident, then we may miss what matters most.
As more than a few critics have noted over the years, our thinking about technology is often already compromised by a technocratic frame of mind. We are, in such cases, already evaluating technology on its own terms. What we need, then, is to recover ways of thinking that don’t already assume technological standards. Admittedly, this is a challenging project. It requires breaking long-engrained habits of thought–habits that are all the more difficult to escape because they take on the cast of common sense. My point here is to suggest that one step in that direction is to let go of the assumption that a well-working, smoothly operating technology is ipso facto a good and unproblematic technology.