Over the past few weeks, I’ve drafted about half a dozen posts in my mind that, sadly, I’ve not had the time to write. Among those mental drafts in progress is a response to Evgeny Morozov’s latest essay. The piece is ostensibly a review of Nick Carr’s The Glass Cage, but it’s really a broadside against the whole enterprise of tech criticism (as Morozov sees it). I’m not sure about the other mental drafts, but that is one I’m determined to see through. Look for it in the next few days … maybe.
In the meantime, here’s a quick reaction to a post by Steve Coast that has been making the rounds today.
In “The World Will Only Get Weirder,” Coast opens with some interesting observations about aviation safety. Taking the recent spate of bizarre aviation incidents as his point of departure, Coast argues that rules as a means of managing safety will only get you so far.
The history of aviation safety is the history of rule-making and checklists. Over time, this approach successfully addressed the vast majority of aviation safety issues. Eventually, however, you hit peak rules, as it were, and you enter a byzantine phase of rule-making. Here’s the heart of the piece:
“We’ve reached the end of the useful life of that strategy and have hit severely diminishing returns. As illustration, we created rules to make sure people can’t get in to cockpits to kill the pilots and fly the plane in to buildings. That looked like a good rule. But, it’s created the downside that pilots can now lock out their colleagues and fly it in to a mountain instead.
It used to be that rules really helped. Checklists on average were extremely helpful and have saved possibly millions of lives. But with aircraft we’ve reached the point where rules may backfire, like locking cockpit doors. We don’t know how many people have been saved without locking doors since we can’t go back in time and run the experiment again. But we do know we’ve lost 150 people with them.
And so we add more rules, like requiring two people in the cockpit from now on. Who knows what the mental capacity is of the flight attendant that’s now allowed in there with one pilot, or what their motives are. At some point, if we wait long enough, a flight attendant is going to take over an airplane having only to incapacitate one, not two, pilots. And so we’ll add more rules about the type of flight attendant allowed in the cockpit and on and on.”
This struck me as a rather sensible take on the limits of a rule-oriented, essentially bureaucratic approach to problem solving, which is to say the limits of technocracy or technocratic rationality. Limits, incidentally, that apply as well to our increasing dependence on algorithmic automation.
Of course, this is not to say that rule-oriented, bureaucratic reason is useless. Far from it. As a mode of thinking it is, in fact, capable of solving a great number of problems. It is eminently useful, if also profoundly limited.
Problems arise, however, when this one mode of thought crowds out all others, when we can’t even conceive of an alternative.
This dynamic is, I think, illustrated by a curious feature of Coast’s piece. The engaging argument that characterizes the first half or so of the post gives way to a far less cogent and, frankly, troubling attempt at a solution:
“The primary way we as a society deal with this mess is by creating rule-free zones. Free trade zones for economics. Black budgets for military. The internet for intellectual property. Testing areas for drones. Then after all the objectors have died off, integrate the new things in to society.”
So, it would seem, Coast would have us address the limits of rule-oriented, bureaucratic reason by throwing out all rules, at least within certain contexts, until everyone gets on board or dies off. This stark opposition is plausible only if you can’t imagine an alternative mode of thought that might direct your actions. The unspoken premise seems to be that we have only one way of thinking. Given that premise, once that mode of thinking fails, there’s nothing left to do but discard thinking altogether.
As I was working on this post I came across a story on NPR that also illustrates our unfortunately myopic understanding of what counts as thought. The story discusses a recent study that identifies a tendency the researchers labeled “algorithm aversion”:
“In a paper just published in the Journal of Experimental Psychology: General, researchers from the University of Pennsylvania’s Wharton School of Business presented people with decisions like these. Across five experiments, they found that people often chose a human — themselves or someone else — over a model when it came to making predictions, especially after seeing the model make some mistakes. In fact, they did so even when the model made far fewer mistakes than the human. The researchers call the phenomenon ‘algorithm aversion,’ where ‘algorithm’ is intended broadly, to encompass — as they write — ‘any evidence-based forecasting formula or rule.’”
After considering what might account for algorithm aversion, the author, psychology professor Tania Lombrozo, closes with this:
“I’m left wondering how people are thinking of their own decision process if not in algorithmic terms — that is, as some evidence-based forecasting formula or rule. Perhaps the aversion — if it is that — is not to algorithms per se, but to the idea that the outcomes of complex, human processes can be predicted deterministically. Or perhaps people assume that human ‘algorithms’ have access to additional information that they (mistakenly) believe will aid predictions, such as cultural background knowledge about the sorts of people who select different majors, or about the conditions under which someone might do well versus poorly on the GMAT. People may simply think they’re implementing better algorithms than the computer-based alternatives.
“So, here’s what I want to know. If this research reflects a preference for ‘human algorithms’ over ‘nonhuman algorithms,’ what is it that makes an algorithm human? And if we don’t conceptualize our own decisions as evidence-based rules of some sort, what exactly do we think they are?”
Maybe it’s just me, but it seems Lombrozo can’t quite imagine how people might understand their own thinking except on the model of an algorithm.
These two pieces raise a series of questions for me, and I’ll leave you with them:
What is thinking? What do we think we are doing when we are thinking? Can we imagine thinking as something more and other than rule-oriented problem solving or cost/benefit analysis? Have we surrendered our thinking to the controlling power of one master metaphor, the algorithm?
(Spoiler alert: I think the work of Hannah Arendt is of immense help in these matters.)
It strikes me that if you’re using quotes from the Joker to support your argument, you may be arguing the wrong side…
Salient point!
I’m thinking that we have to involve a measure of “feeling” with our thinking. I’m also thinking of David Graeber’s newest book about bureaucracies right now.
I read a review of Graeber’s book; it sounded interesting.
Rules will always come into play; most people do not know how to function without them in place.
Here is a thought… As for our current aviation and air travel tragedies, my first reaction is blunt and may strike some as barbaric and sinister. If a flight is taken by a hijacker or terrorist, shoot it down. Once we have shown, enough times, that we no longer fear the collateral damage of losing hundreds of lives to a hijacking that would end in their deaths regardless of the outcome, hijackers and terrorists will no longer have that option, for they will know they will simply be shot down and that the lives they think they are holding hostage are expendable and of no use to them. Just saying! Tell me that rule doesn’t make sense. Cruel to treat lives as expendable? Indeed, but, after all, we are a growing population that has already overpopulated the Earth, with resources growing scarcer as we continue to reproduce by the hour.
What seems to me left out of the purely algorithmic description of thinking is a sense that the brain is not alone and isolated. Someone is thinking. And that someone is much more than just the thoughts. One might say that when one is thinking, one is trying to find some kind of harmony within oneself, or between oneself and something else.
Conceiving of thinking in terms of algorithms assumes that all the terms to be thought about are well enough defined that one could program a computer to solve the problem, or perhaps express them mathematically.
Maybe it’s a bit like saying that all science is physics? It is sort of true in a very stretched, limited, and generally not very useful sense.