What Do We Think We Are Doing When We Are Thinking?

Over the past few weeks, I’ve drafted about half a dozen posts in my mind that, sadly, I’ve not had the time to write. Among those mental drafts in progress is a response to Evgeny Morozov’s latest essay. The piece is ostensibly a review of Nick Carr’s The Glass Cage, but it’s really a broadside at the whole enterprise of tech criticism (as Morozov sees it). I’m not sure about the other mental drafts, but that is one I’m determined to see through. Look for it in the next few days … maybe.

In the meantime, here’s a quick reaction to a post by Steve Coast that has been making the rounds today.

In “The World Will Only Get Weirder,” Coast opens with some interesting observations about aviation safety. Taking the recent spate of bizarre aviation incidents as his point of departure, Coast argues that rules as a means of managing safety will only get you so far.

The history of aviation safety is the history of rule-making and checklists. Over time, this approach successfully addressed the vast majority of aviation safety issues. Eventually, however, you hit peak rules, as it were, and you enter a byzantine phase of rule-making. Here’s the heart of the piece:

“We’ve reached the end of the useful life of that strategy and have hit severely diminishing returns. As illustration, we created rules to make sure people can’t get in to cockpits to kill the pilots and fly the plane in to buildings. That looked like a good rule. But, it’s created the downside that pilots can now lock out their colleagues and fly it in to a mountain instead.

It used to be that rules really helped. Checklists on average were extremely helpful and have saved possibly millions of lives. But with aircraft we’ve reached the point where rules may backfire, like locking cockpit doors. We don’t know how many people have been saved without locking doors since we can’t go back in time and run the experiment again. But we do know we’ve lost 150 people with them.

And so we add more rules, like requiring two people in the cockpit from now on. Who knows what the mental capacity is of the flight attendant that’s now allowed in there with one pilot, or what their motives are. At some point, if we wait long enough, a flight attendant is going to take over an airplane having only to incapacitate one, not two, pilots. And so we’ll add more rules about the type of flight attendant allowed in the cockpit and on and on.”

This struck me as a rather sensible take on the limits of a rule-oriented, essentially bureaucratic approach to problem solving, which is to say the limits of technocracy or technocratic rationality. Limits, incidentally, that apply as well to our increasing dependence on algorithmic automation.

Of course, this is not to say that rule-oriented, bureaucratic reason is useless. Far from it. As a mode of thinking it is, in fact, capable of solving a great number of problems. It is eminently useful, if also profoundly limited.

Problems arise, however, when this one mode of thought crowds out all others, when we can’t even conceive of an alternative.

This dynamic is, I think, illustrated by a curious feature of Coast’s piece. The engaging argument that characterizes the first half or so of the post gives way to a far less cogent and, frankly, troubling attempt at a solution:

“The primary way we as a society deal with this mess is by creating rule-free zones. Free trade zones for economics. Black budgets for military. The internet for intellectual property. Testing areas for drones. Then after all the objectors have died off, integrate the new things in to society.”

So, it would seem, Coast would have us address the limits of rule-oriented, bureaucratic reason by throwing out all rules, at least within certain contexts, until everyone gets on board or dies off. This stark opposition is plausible only if you can’t imagine an alternative mode of thought that might direct your actions. The unspoken premise seems to be that we have only one way of thinking. Given that premise, once that mode of thinking fails, there’s nothing left to do but discard thinking altogether.

As I was working on this post I came across a story on NPR that also illustrates our unfortunately myopic understanding of what counts as thought. The story discusses a recent study that identifies a tendency the researchers labeled “algorithm aversion”:

“In a paper just published in the Journal of Experimental Psychology: General, researchers from the University of Pennsylvania’s Wharton School of Business presented people with decisions like these. Across five experiments, they found that people often chose a human — themselves or someone else — over a model when it came to making predictions, especially after seeing the model make some mistakes. In fact, they did so even when the model made far fewer mistakes than the human. The researchers call the phenomenon ‘algorithm aversion,’ where ‘algorithm’ is intended broadly, to encompass — as they write — ‘any evidence-based forecasting formula or rule.'”

After considering what might account for algorithm aversion, the author, psychology professor Tania Lombrozo, closes with this:

“I’m left wondering how people are thinking of their own decision process if not in algorithmic terms — that is, as some evidence-based forecasting formula or rule. Perhaps the aversion — if it is that — is not to algorithms per se, but to the idea that the outcomes of complex, human processes can be predicted deterministically. Or perhaps people assume that human ‘algorithms’ have access to additional information that they (mistakenly) believe will aid predictions, such as cultural background knowledge about the sorts of people who select different majors, or about the conditions under which someone might do well versus poorly on the GMAT. People may simply think they’re implementing better algorithms than the computer-based alternatives.

So, here’s what I want to know. If this research reflects a preference for ‘human algorithms’ over ‘nonhuman algorithms,’ what is it that makes an algorithm human? And if we don’t conceptualize our own decisions as evidence-based rules of some sort, what exactly do we think they are?”

Maybe it’s just me, but it seems Lombrozo can’t quite imagine how people might understand their own thinking except on the model of an algorithm.

These two pieces raise a series of questions for me, and I’ll leave you with them:

What is thinking? What do we think we are doing when we are thinking? Can we imagine thinking as something more and other than rule-oriented problem solving or cost/benefit analysis? Have we surrendered our thinking to the controlling power of one master metaphor, the algorithm?

(Spoiler alert: I think the work of Hannah Arendt is of immense help in these matters.)

Quantify Thyself

A thought in passing this morning. Here’s a screen shot that purports to be from an ad for Microsoft’s new wearable device called Band:

[Screenshot of the Microsoft Band ad and its caption, via Windows Central]

I say “purports” because I’ve not been able to find this particular shot and caption on any official Microsoft sites. I first encountered it in this story about Band from October of last year, and I also found it posted to a Reddit thread around the same time. You can watch the official ad here.

It may be that this image is a hoax or that Microsoft decided it was a bit too disconcerting and pulled it. A more persistent sleuth should be able to determine which. Whether authentic or not, however, it is instructive.

In tweeting a link to the story in which I first saw the image, I commented: “Define ‘know,’ ‘self,’ and ‘human.'” Nick Seaver astutely replied: “that’s exactly what they’re doing, eh?”

Again, the “they” in this case appears to be a bit ambiguous. That said, the picture is instructive because it reminds us, as Seaver’s reply suggests, that more than our physical fitness is at stake in the emerging regime of quantification. If I were to expand my list of 41 questions about technology’s ethical dimensions, I would include these: How will the use of this technology redefine my moral vocabulary? What about myself will the use of this technology encourage me to value?

Consider all that is accepted when someone buys into the idea, even if tacitly so, that Microsoft Band will in fact deepen their knowledge of themselves. What assumptions are accepted about the nature of what it means to know and what there is to know and what can be known? What is implied about the nature of the self when we accept that a device like Band can help us understand it more effectively? We are, needless to say, rather far removed from the Delphic injunction, “Know thyself.”

It is not, of course, that I necessarily think users of Band will be so naive that they will consciously believe there is nothing more to their identity than what Band can measure. Rather, it’s that most of us do have a propensity to pay more attention to what we can measure, particularly when an element of competitiveness is introduced.

I’ll go a step further. Not only do we tend to pay more attention to what we can measure, we begin to care more about what we can measure. Perhaps that is because measurement affords us a degree of ostensible control over whatever it is that we are able to measure. It makes self-improvement tangible and manageable, but it does so, in part, by a reduction of the self to those dimensions that register on whatever tool or device we happen to be using to take our measure.

I find myself frequently coming back to one line in a poem by Wendell Berry: “We live the given life, not the planned.” Indeed, and we might also say, “We live the given life, not the quantified.”

A certain vigilance is required to remember that our often marvelous tools of measurement always achieve their precision by narrowing, sometimes radically, what they take into consideration. To reveal one dimension of the whole, they must obscure the others. The danger lies in confusing the partial representation for the whole.

Saturday Evening Links

Below are a few links for your reading pleasure this weekend.

Researcher Believes 3D Printing May Lead to the Creation of Superhuman Organs Providing Humans with New Abilities: “This God-like ability will be made possible thanks in part to the latest breakthroughs in bioprinting. If companies and researchers are coming close to having the ability to 3D print and implant entire organs, then why wouldn’t it be possible to create our own unique organs, which provide us with superhuman abilities?”

Future perfect: how the Victorians invented the future: “It was only around the beginning of the 1800s, as new attitudes towards progress, shaped by the relationship between technology and society, started coming together, that people started thinking about the future as a different place, or an undiscovered country – an idea that seems so familiar to us now that we often forget how peculiar it actually is.”

Robotic Rape and Robotic Child Sexual Abuse: Should they be criminalised? Paper by John Danaher: “Soon there will be sex robots. The creation of such devices raises a host of social, legal and ethical questions. In this article, I focus in on one of them. What if these sex robots are deliberately designed and used to replicate acts of rape and child sexual abuse? Should the creation and use of such robots be criminalised, even if no person is harmed by the acts performed? I offer an argument for thinking that they should be.” (Link to article provided.)

Enthusiasts and Skeptics Debate Artificial Intelligence: “… the Singularitarians’ belief that we’re biological machines on the verge of evolving into not entirely biological super-machines has a distinctly religious fervor and certainty. ‘I think we are going to start to interconnect as a human species in a fashion that is intimate and magical,’ Diamandis told me. ‘What I would imagine in the future is a meta-intelligence where we are all connected by the Internet [and] achieve a new level of sentience. . . . Your readers need to understand: It’s not stoppable. It doesn’t matter what they want. It doesn’t matter how they feel.'”

Artificial Intelligence Isn’t a Threat—Yet: “The trouble is, nobody yet knows what that oversight should consist of. Though AI poses no immediate existential threat, nobody in the private sector or government has a long-term solution to its potential dangers. Until we have some mechanism for guaranteeing that machines never try to replace us, or relegate us to zoos, we should take the problem of AI risk seriously.”

Is it okay to torture or murder a robot?: “What’s clear is that there is a spectrum of “aliveness” in robots, from basic simulations of cute animal behaviour, to future robots that acquire a sense of suffering. But as Darling’s Pleo dinosaur experiment suggested, it doesn’t take much to trigger an emotional response in us. The question is whether we can – or should – define the line beyond which cruelty to these machines is unacceptable. Where does the line lie for you? If a robot cries out in pain, or begs for mercy? If it believes it is hurting? If it bleeds?”

A couple of housekeeping notes. Reading Frankenstein posts will resume at the start of next week. Also, you may have noticed that an Index for the blog is in progress. I’ve always wanted to find a way to make older posts more accessible, so I’ve settled on a selective index for People and Topics. You can check it out by clicking the “Index” tab above.

Cheers!

Data-Driven Regimes of Truth

Below are excerpts from three items that came across my browser this past week. I thought it useful to juxtapose them here.

The first is Andrea Turpin’s review in The Hedgehog Review of Science, Democracy, and the American University: From the Civil War to the Cold War, a new book by Andrew Jewett about the role of science as a unifying principle in American politics and public policy.

“Jewett calls the champions of that forgotten understanding ‘scientific democrats.’ They first articulated their ideas in the late nineteenth century out of distress at the apparent impotence of culturally dominant Protestant Christianity to prevent growing divisions in American politics—most violently in the Civil War, then in the nation’s widening class fissure. Scientific democrats anticipated educating the public on the principles and attitudes of scientific practice, looking to succeed in fostering social consensus where a fissiparous Protestantism had failed. They hoped that widely cultivating the habit of seeking empirical truth outside oneself would produce both the information and the broader sympathies needed to structure a fairer society than one dominated by Gilded Age individualism.

Questions soon arose: What should be the role of scientific experts versus ordinary citizens in building the ideal society? Was it possible for either scientists or citizens to be truly disinterested when developing policies with implications for their own economic and social standing? Jewett skillfully teases out the subtleties of the resulting variety of approaches in order to ‘reveal many of the insights and blind spots that can result from a view of science as a cultural foundation for democratic politics.’”

The second piece, “When Fitbit is the Expert,” appeared in The Atlantic. In it, Kate Crawford discusses how data gathered by wearable devices can be used both for and against their users in court.

“Self-tracking using a wearable device can be fascinating. It can drive you to exercise more, make you reflect on how much (or little) you sleep, and help you detect patterns in your mood over time. But something else is happening when you use a wearable device, something that is less immediately apparent: You are no longer the only source of data about yourself. The data you unconsciously produce by going about your day is being stored up over time by one or several entities. And now it could be used against you in court.”

[…]

“Ultimately, the Fitbit case may be just one step in a much bigger shift toward a data-driven regime of ‘truth.’ Prioritizing data—irregular, unreliable data—over human reporting, means putting power in the hands of an algorithm. These systems are imperfect—just as human judgments can be—and it will be increasingly important for people to be able to see behind the curtain rather than accept device data as irrefutable courtroom evidence. In the meantime, users should think of wearables as partial witnesses, ones that carry their own affordances and biases.”

The final excerpt comes from an interview with Mathias Döpfner in the Columbia Journalism Review. Döpfner is the CEO of the largest publishing company in Europe and has been outspoken in his criticisms of American technology firms such as Google and Facebook.

“It’s interesting to see the difference between the US debate on data protection, data security, transparency and how this issue is handled in Europe. In the US, the perception is, ‘What’s the problem? If you have nothing to hide, you have nothing to fear. We can share everything with everybody, and being able to take advantage of data is great.’ In Europe it’s totally different. There is a huge concern about what institutions—commercial institutions and political institutions—can do with your data. The US representatives tend to say, ‘Those are the back-looking Europeans; they have an outdated view. The tech economy is based on data.’”

Döpfner goes out of his way to indicate that he is a regulatory minimalist and that he deeply admires American-style tech-entrepreneurship. But ….

“In Europe there is more sensitivity because of the history. The Europeans know that total transparency and total control of data leads to totalitarian societies. The Nazi system and the socialist system were based on total transparency. The Holocaust happened because the Nazis knew exactly who was a Jew, where a Jew was living, how and at what time they could get him; every Jew got a number as a tattoo on his arm before they were gassed in the concentration camps.”

Perhaps that’s a tad alarmist; I don’t know. The thing about alarmism is that only in hindsight can it be definitively identified.

Here’s the thread that united these pieces in my mind. Jewett’s book, assuming the reliability of Turpin’s review, is about an earlier attempt to find a new frame of reference for American political culture. Deliberative democracy works best when citizens share a moral framework from which their arguments and counter-arguments derive their meaning. Absent such a broadly shared moral framework, competing claims can never really be meaningfully argued for or against; they can only be asserted or denounced. What Jewett describes, it seems, is just the particular American case of a pattern that is characteristic of secular modernity writ large. The eclipse of traditional religious belief leads to a search for new sources of unity and moral authority.

For a variety of reasons, the project to ground American political culture in publicly accessible science did not succeed. (It appears, by the way, that Jewett’s book is an attempt to revive the effort.) It failed, in part, because it became apparent that science itself was not exactly value free, at least not as it was practiced by actual human beings. Additionally, it seems to me, the success of the project assumed that all political problems, that is, all problems that arise when human beings try to live together, were subject to scientific analysis and resolution. This strikes me as an unwarranted assumption.

In any case, it would seem that proponents of a certain strand of Big Data ideology now want to offer Big Data as the framework that unifies society and resolves political and ethical issues related to public policy. This is part of what I read into Crawford’s suggestion that we are moving into “a data-driven regime of ‘truth.'” “Science says” replaced “God says”; and now “Science says” is being replaced by “Big Data says.”

To put it another way, Big Data offers to fill the cultural role that was vacated by religious belief. It was a role that, in their turn, Reason, Art, and Science have all tried to fill. In short, certain advocates of Big Data need to read Nietzsche’s Twilight of the Idols. Big Data may just be another God-term, an idol that needs to be sounded with a hammer and found hollow.

Finally, Döpfner’s comments are just a reminder of the darker uses to which data can be and has been put, particularly when thoughtfulness and judgement have been marginalized.

Reframing Technological Phenomena

I’d not ever heard of Michael Heim until I stumbled upon his 1987 book, Electric Language: A Philosophical Study of Word Processing, at a used book store a few days ago; but, after reading the Introduction, I’m already impressed by the concerns and methodology that inform his analysis.

Yesterday, I passed along his defense of philosophizing about a technology at the time of its appearance. It is at this juncture, he explains, before the technology has been rendered an ordinary feature of our everyday experience, that it is uniquely available to our thinking. And it is with our ability to think about technology that Heim is chiefly concerned in his Introduction. Without too much additional comment on my part, I want to pass along a handful of excerpts that I found especially valuable.

Here is Heim’s discussion of reclaiming phenomena for philosophy. By this I take it that he means learning to think about cultural phenomena, in this case technology, without leaning on the conventional framings of the problem. It is a matter of learning to see the phenomenon for what it is by first unseeing a variety of habitual perspectives.

“By taking over pregiven problems, an illusion is created that cultural phenomena are understood philosophically, while in fact certain narrow conventional assumptions are made about what the problem is and what alternate solutions to it might be. Philosophy is then confused with policy, and the illumination of phenomena is exchanged for argumentation and debate [….] Reclaiming the phenomena for philosophy today means not assuming that a phenomenon has been perceived philosophically unless it has first been transformed thoroughly by reflection; we cannot presume to perceive a phenomenon philosophically if it is merely taken up ready-made as the subject of public debate. We must first transform it thoroughly by a reflection that is remote from partisan political debate and from the controlled rhetoric of electronic media. Nor can we assume we have grasped a phenomenon by merely locating its relationship to our everyday scientific mastery of the world. The impact of cultural phenomena must be taken up and reshaped by speculative theory.”

At one point, Heim offered some rather prescient anticipations of the future of writing and computer technology:

“Writing will increasingly be freed from the constraints of paper-print technology; texts will be stored electronically, and vast amounts of information, including further texts, will be accessible immediately below the electronic surface of a piece of writing. The electronically expanding text will no longer be constrained by paper as the telephone and the microcomputer become more intimately conjoined and even begin to merge. The optical character reader will scan and digitize hard-copy printed texts; the entire tradition of books will be converted into information on disk files that can be accessed instantly by computers. By connecting a small computer to a phone, a professional will be able to read ‘books’ whose footnotes can be expanded into further ‘books’ which in turn open out onto a vast sea of data bases systemizing all of human cognition. The networking of written language will erode the line between private and public writings.”

And a little later on, Heim discusses the manner in which we ordinarily (fail to) apprehend the technologies we rely on to make our way in the world:

“We denizens of the late twentieth century are seldom aware of our being embedded in systematic mechanisms of survival. The instruments providing us with technological power seldom appear directly as we carry out the personal tasks of daily life. Quotidian survival brings us not so much to fear autonomous technological systems as to feel a need to acquire and use them. During most of our lives our tools are not problematic–save that we might at a particular point feel need for or lack of a particular technical solution to solve a specific human problem. Having become part of our daily needs, technological systems seem transparent, opening up a world where we can do more, see more, and achieve more.

Yet on occasion we do transcend this immersion in the technical systems of daily life. When a technological system threatens our physical life or threatens the conditions of planetary life, we then turn to regard the potential agents of harm or hazard. We begin to sense that the mechanisms which previously provided, innocently as it were, the conditions of survival are in fact quasi-autonomous mechanisms possessing their own agency, an agency that can drift from its provenance in human meanings and intentions.”

In these last two excerpts, Heim describes two polarities that tend to frame our thinking about technology.

“In a position above the present, we glimpse hopefully into the future and glance longingly at the past. We see how the world has been transformed by our creative inventions, sensing–more suspecting than certain–that it is we who are changed by the things we make. The ambivalence is resolved when we revert to one or another of two simplistic attitudes: enthusiastic depiction of technological progress or wholesale distress about the effects of a mythical technology.”

And,

“Our relationship to technological innovations tends to be so close that we either identify totally with the new extensions of ourselves–and then remain without the concepts and terms for noticing what we risk in our adaption to a technology–or we react so suspiciously toward the technology that we are later engulfed by the changes without having developed critical countermeasures by which to compensate for the subsequent losses in the life of the psyche.”

Heim practices what he preaches. His book is divided into three major sections: Approaching the Phenomenon, Describing the Phenomenon, and Evaluating the Phenomenon. The three chapters of the first section are “designed to gain some distance,” to shake loose the ready-made assumptions so as to clearly perceive the technological phenomenon in question. And this he does by framing word processing within longstanding trajectories of historical and philosophical inquiry. Only then can the work of description and analysis begin. Finally, this analysis grounds our evaluations. That, it seems to me, is a useful model for our thinking about technology.

(P.S. Frankenstein blogging should resume tomorrow.)