Critic and humorist Joe Queenan took aim at the Internet of Things in this weekend’s Wall Street Journal. It’s a mildly entertaining consideration of what could go wrong when our appliances, devices, and online accounts are all networked together. For example:
“If the wireless subwoofers are linked to the voice-activated oven, which is linked to the Lexus, which is linked to the PC’s external drive, then hackers in Moscow could easily break in through your kid’s PlayStation and clean out your 401(k). The same is true if the snowblower is linked to the smoke detector, which is linked to the laptop, which is linked to your cash-strapped grandma’s bank account. A castle is only as strong as its weakest portcullis.”
He goes on to imagine hackers reprogramming your smart refrigerator to order “thousands of gallons of banana-flavored soy milk every week,” or your music library to play only “Il Divo, Il Divo, Il Divo, 24 hours a day.” Queenan gives readers a few more of these humorously intoned, marginally plausible scenarios that, with a light touch, point to some of the ways the Internet of Things could go wrong.
In any case, after reading Queenan’s playful lampoon of the Internet of Things, it occurred to me that more often than not our worries about new technology center on the question, “What could go wrong?” In fact, we often ask that sarcastically to suggest that some new technology is obviously fraught with risk. For instance: Geoengineering. Global-scale interventions in the delicate, imperfectly understood workings of the earth’s climate with potentially massive and irreversible consequences … what could go wrong?
Of course, this is a perfectly reasonable question to ask. We ask it, and engineers and technologists respond by assuring us that safety measures are in place, contingencies have been accounted for, precautions have been taken, etc. Or, alternatively, that the risks of doing nothing are greater than the risks of proceeding with some technological project. In other words, asking what could go wrong tends to lock us in the technocratic frame of mind. It invites cost/benefit analysis, rational planning, technological fixes to technological problems, all mixed through and through with sprinklings or heaps of hubris.
Very often, despite some initial failures and, one hopes, not-too-tragic accidents, the kinks do get worked out, disasters are averted (or mostly so), and the new technology stabilizes. The voices of critics who worried about what could go wrong suddenly sound a lot like a chorus of boys crying wolf. Enthusiasts wipe the sweat from their brows, take a deep breath, and confidently proclaim, “I told you so.”
All well and good. There’s only one problem. Maybe asking “What could go wrong?” is a short-sighted way of thinking about new technologies. Maybe we should also be asking, “What could go right?”
What if this new technology worked just as advertised? What if it became a barely-noticed feature of our technological landscape? What if it was seamlessly integrated into our social life? What if it delivered on its promise?
Accidents and disasters get our attention; their possibility makes us anxious. The more spectacular the promise of a new technology, the more nervous we might be about what could go wrong. But, if we are focused exclusively on the accident, we lose sight of the fact that the most consequential technologies are usually those that end up working. They are the ones that reorder our lives, reframe our experience, restructure our social lives, recalibrate our sense of time and place, and so on.
In his recent review of Jordan Ellenberg’s How Not to Be Wrong: The Power of Mathematical Thinking (a title with a mildly hubristic ring, to be sure), Peter Pesic opens with an anecdote about problem solving during World War II. Given the trade-offs involved in placing extra armor on fighter planes and bombers–increased weight, decreased range–where should military airplanes be reinforced? Noticing that returning planes had more bullet holes in the fuselage than in the engine, some suggested reinforcing the fuselage. There was one, seemingly obvious, problem with this line of thinking. As the mathematician Abraham Wald noted, this solution ignored the planes that didn’t make it back, most likely because they had been shot in the engine.
This little anecdote–from what seems like a fascinating book, by the way–reminds us that where you look sometimes makes all the difference. A truism, certainly, but no less true because of it. If in thinking about new technologies (or those old ones, which are no less consequential for having lost the radiance of novelty) we look only at the potential accident, then we may miss what matters most.
As more than a few critics have noted over the years, our thinking about technology is often already compromised by a technocratic frame of mind. We are, in such cases, already evaluating technology on its own terms. What we need, then, is to recover ways of thinking that don’t already assume technological standards. Admittedly, this can be a challenging project. It requires our breaking long-ingrained habits of thought–habits of thought which are all the more difficult to escape because they take on the cast of common sense. My point here is to suggest that one step in that direction is to let go of the assumption that any well-working, smoothly operating technology is ipso facto a good and unproblematic technology.
5 thoughts on “What Could Go Right?”
Great post. You didn’t state it directly (or if you did I missed it, apologies), but you implied that one thing the distinction between the two approaches hinges on is a sense of agency surrounding the technology. An assumption behind “What could go wrong?” is that the technocrats who designed/implemented the technology are acting (or, of course, a determinist assumption that the technology itself is acting) beyond our interests or control. The “what could go right?” approach assumes the opposite. That involves investment and mindfulness about technology, and empowerment, yes, but also a responsibility. The snarky “what could go wrong?” attitude is unfortunately a lot easier.
Really good, thought provoking. I’ve been thinking along these lines recently about how introducing a piece of technology has the power to irreversibly change society. I have a job that involves close understanding of technology and dealing with its direct effects on our lives. Today I spoke with a mother who was in conniptions because, due to issues with her phone, she’d been out of communication with her family for almost a day without realizing it. The husband almost brought in the police (really). I realized how impactful this had been, not because she’d been out of communication for so long but because of the expectations of society for instantaneous communication. 20 years ago being out of communication all day wasn’t unusual; 150 years ago being out of touch for a whole month or even a year might not have seemed strange. Who knows, but one could easily infer a future where a silence of mere moments half a world away would be felt as deeply as the distant look in a lover’s eyes sitting across from you today. Maybe we’re already there…
Michael, I think to assess “what could go x” we should first specify the functional totality of the smart object – whether it be ‘fridge, home, city, nation or world. So is the smart fridge’s function only to manage the household cold food needs, or does it also provide information to others about our cold food habits, and does it attempt to modify these habits (“suggesting” purchases for us), as well as other household habits that are part of the household consuming network.
Whether this kind of thing is good or bad depends on one’s thoughts and values about a lot of interrelated stuff and issues. But we can’t come to an informed opinion without considering these things as the visible expression of larger networks, much of which is not visible to us, and resides in separate locations.