Quantify Thyself

A thought in passing this morning. Here’s a screen shot that purports to be from an ad for Microsoft’s new wearable device called Band:


I say “purports” because I’ve not been able to find this particular shot and caption on any official Microsoft sites. I first encountered it in this story about Band from October of last year, and I also found it posted to a Reddit thread around the same time. You can watch the official ad here.

It may be that this image is a hoax or that Microsoft decided it was a bit too disconcerting and pulled it. A more persistent sleuth should be able to determine which. Whether authentic or not, however, it is instructive.

In tweeting a link to the story in which I first saw the image, I commented: “Define ‘know,’ ‘self,’ and ‘human.’” Nick Seaver astutely replied: “that’s exactly what they’re doing, eh?”

Again, the “they” in this case appears to be a bit ambiguous. That said, the picture is instructive because it reminds us, as Seaver’s reply suggests, that more than our physical fitness is at stake in the emerging regime of quantification. If I were to expand my list of 41 questions about technology’s ethical dimensions, I would include this one: How will the use of this technology redefine my moral vocabulary? or What about myself will the use of this technology encourage me to value?

Consider all that is accepted when someone buys into the idea, even if tacitly so, that Microsoft Band will in fact deepen their knowledge of themselves. What assumptions are accepted about the nature of what it means to know and what there is to know and what can be known? What is implied about the nature of the self when we accept that a device like Band can help us understand it more effectively? We are, needless to say, rather far removed from the Delphic injunction, “Know thyself.”

It is not, of course, that I necessarily think users of Band will be so naive that they will consciously believe there is nothing more to their identity than what Band can measure. Rather, it’s that most of us do have a propensity to pay more attention to what we can measure, particularly when an element of competitiveness is introduced.

I’ll go a step further. Not only do we tend to pay more attention to what we can measure, we begin to care more about what we can measure. Perhaps that is because measurement affords us a degree of ostensible control over whatever it is that we are able to measure. It makes self-improvement tangible and manageable, but it does so, in part, by a reduction of the self to those dimensions that register on whatever tool or device we happen to be using to take our measure.

I find myself frequently coming back to one line in a poem by Wendell Berry: “We live the given life, not the planned.” Indeed, and we might also say, “We live the given life, not the quantified.”

A certain vigilance is required to remember that our often marvelous tools of measurement always achieve their precision by narrowing, sometimes radically, what they take into consideration. To reveal one dimension of the whole, they must obscure the others. The danger lies in confusing the partial representation for the whole.

Saturday Evening Links

Below are a few links for your reading pleasure this weekend.

Researcher Believes 3D Printing May Lead to the Creation of Superhuman Organs Providing Humans with New Abilities: “This God-like ability will be made possible thanks in part to the latest breakthroughs in bioprinting. If companies and researchers are coming close to having the ability to 3D print and implant entire organs, then why wouldn’t it be possible to create our own unique organs, which provide us with superhuman abilities?”

Future perfect: how the Victorians invented the future: “It was only around the beginning of the 1800s, as new attitudes towards progress, shaped by the relationship between technology and society, started coming together, that people started thinking about the future as a different place, or an undiscovered country – an idea that seems so familiar to us now that we often forget how peculiar it actually is.”

Robotic Rape and Robotic Child Sexual Abuse: Should they be criminalised? Paper by John Danaher: “Soon there will be sex robots. The creation of such devices raises a host of social, legal and ethical questions. In this article, I focus in on one of them. What if these sex robots are deliberately designed and used to replicate acts of rape and child sexual abuse? Should the creation and use of such robots be criminalised, even if no person is harmed by the acts performed? I offer an argument for thinking that they should be.” (Link to article provided.)

Enthusiasts and Skeptics Debate Artificial Intelligence: “… the Singularitarians’ belief that we’re biological machines on the verge of evolving into not entirely biological super-machines has a distinctly religious fervor and certainty. ‘I think we are going to start to interconnect as a human species in a fashion that is intimate and magical,’ Diamandis told me. ‘What I would imagine in the future is a meta-intelligence where we are all connected by the Internet [and] achieve a new level of sentience. . . . Your readers need to understand: It’s not stoppable. It doesn’t matter what they want. It doesn’t matter how they feel.'”

Artificial Intelligence Isn’t a Threat—Yet: “The trouble is, nobody yet knows what that oversight should consist of. Though AI poses no immediate existential threat, nobody in the private sector or government has a long-term solution to its potential dangers. Until we have some mechanism for guaranteeing that machines never try to replace us, or relegate us to zoos, we should take the problem of AI risk seriously.”

Is it okay to torture or murder a robot?: “What’s clear is that there is a spectrum of “aliveness” in robots, from basic simulations of cute animal behaviour, to future robots that acquire a sense of suffering. But as Darling’s Pleo dinosaur experiment suggested, it doesn’t take much to trigger an emotional response in us. The question is whether we can – or should – define the line beyond which cruelty to these machines is unacceptable. Where does the line lie for you? If a robot cries out in pain, or begs for mercy? If it believes it is hurting? If it bleeds?”

A couple of housekeeping notes. Reading Frankenstein posts will resume at the start of next week. Also, you may have noticed that an Index for the blog is in progress. I’ve always wanted to find a way to make older posts more accessible, so I’ve settled on a selective index for People and Topics. You can check it out by clicking the “Index” tab above.


Data-Driven Regimes of Truth

Below are excerpts from three items that came across my browser this past week. I thought it useful to juxtapose them here.

The first is Andrea Turpin’s review in The Hedgehog Review of Science, Democracy, and the American University: From the Civil War to the Cold War, a new book by Andrew Jewett about the role of science as a unifying principle in American politics and public policy.

“Jewett calls the champions of that forgotten understanding ‘scientific democrats.’ They first articulated their ideas in the late nineteenth century out of distress at the apparent impotence of culturally dominant Protestant Christianity to prevent growing divisions in American politics—most violently in the Civil War, then in the nation’s widening class fissure. Scientific democrats anticipated educating the public on the principles and attitudes of scientific practice, looking to succeed in fostering social consensus where a fissiparous Protestantism had failed. They hoped that widely cultivating the habit of seeking empirical truth outside oneself would produce both the information and the broader sympathies needed to structure a fairer society than one dominated by Gilded Age individualism.

Questions soon arose: What should be the role of scientific experts versus ordinary citizens in building the ideal society? Was it possible for either scientists or citizens to be truly disinterested when developing policies with implications for their own economic and social standing? Jewett skillfully teases out the subtleties of the resulting variety of approaches in order to ‘reveal many of the insights and blind spots that can result from a view of science as a cultural foundation for democratic politics.’”

The second piece, “When Fitbit is the Expert,” appeared in The Atlantic. In it, Kate Crawford discusses how data gathered by wearable devices can be used for and against its users in court.

“Self-tracking using a wearable device can be fascinating. It can drive you to exercise more, make you reflect on how much (or little) you sleep, and help you detect patterns in your mood over time. But something else is happening when you use a wearable device, something that is less immediately apparent: You are no longer the only source of data about yourself. The data you unconsciously produce by going about your day is being stored up over time by one or several entities. And now it could be used against you in court.”


“Ultimately, the Fitbit case may be just one step in a much bigger shift toward a data-driven regime of ‘truth.’ Prioritizing data—irregular, unreliable data—over human reporting, means putting power in the hands of an algorithm. These systems are imperfect—just as human judgments can be—and it will be increasingly important for people to be able to see behind the curtain rather than accept device data as irrefutable courtroom evidence. In the meantime, users should think of wearables as partial witnesses, ones that carry their own affordances and biases.”

The final excerpt comes from an interview with Mathias Döpfner in the Columbia Journalism Review. Döpfner is the CEO of the largest publishing company in Europe and has been outspoken in his criticisms of American technology firms such as Google and Facebook.

“It’s interesting to see the difference between the US debate on data protection, data security, transparency and how this issue is handled in Europe. In the US, the perception is, ‘What’s the problem? If you have nothing to hide, you have nothing to fear. We can share everything with everybody, and being able to take advantage of data is great.’ In Europe it’s totally different. There is a huge concern about what institutions—commercial institutions and political institutions—can do with your data. The US representatives tend to say, ‘Those are the back-looking Europeans; they have an outdated view. The tech economy is based on data.’”

Döpfner goes out of his way to indicate that he is a regulatory minimalist and that he deeply admires American-style tech-entrepreneurship. But ….

“In Europe there is more sensitivity because of the history. The Europeans know that total transparency and total control of data leads to totalitarian societies. The Nazi system and the socialist system were based on total transparency. The Holocaust happened because the Nazis knew exactly who was a Jew, where a Jew was living, how and at what time they could get him; every Jew got a number as a tattoo on his arm before they were gassed in the concentration camps.”

Perhaps that’s a tad alarmist, I don’t know. The thing about alarmism is that only in hindsight can it be definitively identified.

Here’s the thread that united these pieces in my mind. Jewett’s book, assuming the reliability of Turpin’s review, is about an earlier attempt to find a new frame of reference for American political culture. Deliberative democracy works best when citizens share a moral framework from which their arguments and counter-arguments derive their meaning. Absent such a broadly shared moral framework, competing claims can never really be meaningfully argued for or against; they can only be asserted or denounced. What Jewett describes, it seems, is just the particular American case of a pattern that is characteristic of secular modernity writ large. The eclipse of traditional religious belief leads to a search for new sources of unity and moral authority.

For a variety of reasons, the project to ground American political culture in publicly accessible science did not succeed. (It appears, by the way, that Jewett’s book is an attempt to revive the effort.) It failed, in part, because it became apparent that science itself was not exactly value free, at least not as it was practiced by actual human beings. Additionally, it seems to me, the success of the project assumed that all political problems, that is, all problems that arise when human beings try to live together, were subject to scientific analysis and resolution. This strikes me as an unwarranted assumption.

In any case, it would seem that proponents of a certain strand of Big Data ideology now want to offer Big Data as the framework that unifies society and resolves political and ethical issues related to public policy. This is part of what I read into Crawford’s suggestion that we are moving into “a data-driven regime of ‘truth.'” “Science says” replaced “God says”; and now “Science says” is being replaced by “Big Data says.”

To put it another way, Big Data offers to fill the cultural role that was vacated by religious belief. It was a role that, in their turn, Reason, Art, and Science have all tried to fill. In short, certain advocates of Big Data need to read Nietzsche’s Twilight of the Idols. Big Data may just be another God-term, an idol that needs to be sounded with a hammer and found hollow.

Finally, Döpfner’s comments are just a reminder of the darker uses to which data can be and has been put, particularly when thoughtfulness and judgment have been marginalized.

Reframing Technological Phenomena

I’d not ever heard of Michael Heim until I stumbled upon his 1987 book, Electric Language: A Philosophical Study of Word Processing, at a used book store a few days ago; but, after reading the Introduction, I’m already impressed by the concerns and methodology that inform his analysis.

Yesterday, I passed along his defense of philosophizing about a technology at the time of its appearance. It is at this juncture, he explains, before the technology has been rendered an ordinary feature of our everyday experience, that it is uniquely available to our thinking. And it is with our ability to think about technology that Heim is chiefly concerned in his Introduction. Without too much additional comment on my part, I want to pass along a handful of excerpts that I found especially valuable.

Here is Heim’s discussion of reclaiming phenomena for philosophy. By this I take it that he means learning to think about cultural phenomena, in this case technology, without leaning on the conventional framings of the problem. It is a matter of learning to see the phenomenon for what it is by first unseeing a variety of habitual perspectives.

“By taking over pregiven problems, an illusion is created that cultural phenomena are understood philosophically, while in fact certain narrow conventional assumptions are made about what the problem is and what alternate solutions to it might be. Philosophy is then confused with policy, and the illumination of phenomena is exchanged for argumentation and debate [….] Reclaiming the phenomena for philosophy today means not assuming that a phenomenon has been perceived philosophically unless it has first been transformed thoroughly by reflection; we cannot presume to perceive a phenomenon philosophically if it is merely taken up ready-made as the subject of public debate. We must first transform it thoroughly by a reflection that is remote from partisan political debate and from the controlled rhetoric of electronic media. Nor can we assume we have grasped a phenomenon by merely locating its relationship to our everyday scientific mastery of the world. The impact of cultural phenomena must be taken up and reshaped by speculative theory.”

At one point, Heim offered some rather prescient anticipations of the future of writing and computer technology:

“Writing will increasingly be freed from the constraints of paper-print technology; texts will be stored electronically, and vast amounts of information, including further texts, will be accessible immediately below the electronic surface of a piece of writing. The electronically expanding text will no longer be constrained by paper as the telephone and the microcomputer become more intimately conjoined and even begin to merge. The optical character reader will scan and digitize hard-copy printed texts; the entire tradition of books will be converted into information on disk files that can be accessed instantly by computers. By connecting a small computer to a phone, a professional will be able to read ‘books’ whose footnotes can be expanded into further ‘books’ which in turn open out onto a vast sea of data bases systemizing all of human cognition. The networking of written language will erode the line between private and public writings.”

And a little later on, Heim discusses the manner in which we ordinarily (fail to) apprehend the technologies we rely on to make our way in the world:

“We denizens of the late twentieth century are seldom aware of our being embedded in systematic mechanisms of survival. The instruments providing us with technological power seldom appear directly as we carry out the personal tasks of daily life. Quotidian survival brings us not so much to fear autonomous technological systems as to feel a need to acquire and use them. During most of our lives our tools are not problematic–save that we might at a particular point feel need for or lack of a particular technical solution to solve a specific human problem. Having become part of our daily needs, technological systems seem transparent, opening up a world where we can do more, see more, and achieve more.

Yet on occasion we do transcend this immersion in the technical systems of daily life. When a technological system threatens our physical life or threatens the conditions of planetary life, we then turn to regard the potential agents of harm or hazard. We begin to sense that the mechanisms which previously provided, innocently as it were, the conditions of survival are in fact quasi-autonomous mechanisms possessing their own agency, an agency that can drift from its provenance in human meanings and intentions.”

In these last two excerpts, Heim describes two polarities that tend to frame our thinking about technology.

“In a position above the present, we glimpse hopefully into the future and glance longingly at the past. We see how the world has been transformed by our creative inventions, sensing–more suspecting than certain–that it is we who are changed by the things we make. The ambivalence is resolved when we revert to one or another of two simplistic attitudes: enthusiastic depiction of technological progress or wholesale distress about the effects of a mythical technology.”


“Our relationship to technological innovations tends to be so close that we either identify totally with the new extensions of ourselves–and then remain without the concepts and terms for noticing what we risk in our adaption to a technology–or we react so suspiciously toward the technology that we are later engulfed by the changes without having developed critical countermeasures by which to compensate for the subsequent losses in the life of the psyche.”

Heim practices what he preaches. His book is divided into three major sections: Approaching the Phenomenon, Describing the Phenomenon, and Evaluating the Phenomenon. The three chapters of the first section are “designed to gain some distance,” to shake loose the ready-made assumptions so as to clearly perceive the technological phenomenon in question. And this he does by framing word processing within longstanding trajectories of historical and philosophical inquiry. Only then can the work of description and analysis begin. Finally, this analysis grounds our evaluations. That, it seems to me, is a useful model for our thinking about technology.

(P.S. Frankenstein blogging should resume tomorrow.)

A Thought About Thinking

Several posts in the last few months have touched on the idea of thinking, mostly with reference to the work of Hannah Arendt. “Thinking what we are doing” was a recurring theme in her writing, and it could very easily serve as a slogan, along with the line from McLuhan below the blog’s title, for what I am trying to do here.

Thinking, though, is one of those things that we do naturally, or so we believe, so it is therefore one of those things for which we have a hard time imagining an alternative mode. Let me try putting that another way. The more “natural” a fact about the world seems to us, the harder it is for us to imagine that it could be otherwise. What’s more, thinking about our own thinking is a bit like trying to jump over our own shadow, although, finally, it is not impossible in the same way.

We all think, if by “thinking” we simply mean our stream of consciousness, our unending internal monologue. But having thoughts does not necessarily amount to thinking. That’s neither a terribly profound observation nor a controversial one. But what, then, does constitute thinking?

Here’s one line of thought in partial response. It’s tempting to associate thinking with “problem solving.” Thinking in these cases takes as its point of departure some problem that needs to be solved. Our thinking then sets out to understand the problem, perhaps by identifying its causes, before proceeding to propose solutions, solutions which usually involve the weighing of pros and cons.

This is the sort of thinking that we tend to prize, and for obvious reasons. When there are problems, we want solutions. We might call this sort of thinking technocratic thinking, or thinking on the model of engineering. By calling it this I don’t intend to disparage it. We need this sort of thinking, no doubt. But if this is the only sort of thinking we do, then we’ve impoverished the category.

But what’s the alternative?

The technocratic mode of thinking makes the assumption that all problems have solutions and all questions have answers. Or, what’s worse, that the only problems worth thinking about are those we can solve and the only questions worth asking are those that we can definitively answer. The corollary temptation is that we begin to look at life merely as a series of problems in search of a solution. We might call this the engineered life.

All of this further assumes that thinking itself is not inherently valuable; it is valuable only as a means to an end: in this case, either the solution or the answer.

We need, instead, to insist on the value of thinking as an end in itself. We might make a start by distinguishing between questions we answer and questions we live with–that is, questions we may never fully answer, but whose contemplation enriches our lives. We may further distinguish between problems we solve and problems we simply inhabit as a condition of being human.

This needs to be further elaborated, but I’ll leave that to your own thinking. I’ll also leave you with another line that has meant a lot to me over the years. It’s taken from a poem by Wendell Berry:

“We live the given life, not the planned.”