Data-Driven Regimes of Truth

Below are excerpts from three items that came across my browser this past week. I thought it useful to juxtapose them here.

The first is Andrea Turpin’s review in The Hedgehog Review of Science, Democracy, and the American University: From the Civil War to the Cold War, a new book by Andrew Jewett about the role of science as a unifying principle in American politics and public policy.

“Jewett calls the champions of that forgotten understanding ‘scientific democrats.’ They first articulated their ideas in the late nineteenth century out of distress at the apparent impotence of culturally dominant Protestant Christianity to prevent growing divisions in American politics—most violently in the Civil War, then in the nation’s widening class fissure. Scientific democrats anticipated educating the public on the principles and attitudes of scientific practice, looking to succeed in fostering social consensus where a fissiparous Protestantism had failed. They hoped that widely cultivating the habit of seeking empirical truth outside oneself would produce both the information and the broader sympathies needed to structure a fairer society than one dominated by Gilded Age individualism.

Questions soon arose: What should be the role of scientific experts versus ordinary citizens in building the ideal society? Was it possible for either scientists or citizens to be truly disinterested when developing policies with implications for their own economic and social standing? Jewett skillfully teases out the subtleties of the resulting variety of approaches in order to ‘reveal many of the insights and blind spots that can result from a view of science as a cultural foundation for democratic politics.’”

The second piece, “When Fitbit is the Expert,” appeared in The Atlantic. In it, Kate Crawford discusses how data gathered by wearable devices can be used for and against their users in court.

“Self-tracking using a wearable device can be fascinating. It can drive you to exercise more, make you reflect on how much (or little) you sleep, and help you detect patterns in your mood over time. But something else is happening when you use a wearable device, something that is less immediately apparent: You are no longer the only source of data about yourself. The data you unconsciously produce by going about your day is being stored up over time by one or several entities. And now it could be used against you in court.”

[....]

“Ultimately, the Fitbit case may be just one step in a much bigger shift toward a data-driven regime of ‘truth.’ Prioritizing data—irregular, unreliable data—over human reporting, means putting power in the hands of an algorithm. These systems are imperfect—just as human judgments can be—and it will be increasingly important for people to be able to see behind the curtain rather than accept device data as irrefutable courtroom evidence. In the meantime, users should think of wearables as partial witnesses, ones that carry their own affordances and biases.”

The final excerpt comes from an interview with Mathias Döpfner in the Columbia Journalism Review. Döpfner is the CEO of the largest publishing company in Europe and has been outspoken in his criticisms of American technology firms such as Google and Facebook.

“It’s interesting to see the difference between the US debate on data protection, data security, transparency and how this issue is handled in Europe. In the US, the perception is, ‘What’s the problem? If you have nothing to hide, you have nothing to fear. We can share everything with everybody, and being able to take advantage of data is great.’ In Europe it’s totally different. There is a huge concern about what institutions—commercial institutions and political institutions—can do with your data. The US representatives tend to say, ‘Those are the back-looking Europeans; they have an outdated view. The tech economy is based on data.’”

Döpfner goes out of his way to indicate that he is a regulatory minimalist and that he deeply admires American-style tech-entrepreneurship. But ….

“In Europe there is more sensitivity because of the history. The Europeans know that total transparency and total control of data leads to totalitarian societies. The Nazi system and the socialist system were based on total transparency. The Holocaust happened because the Nazis knew exactly who was a Jew, where a Jew was living, how and at what time they could get him; every Jew got a number as a tattoo on his arm before they were gassed in the concentration camps.”

Perhaps that’s a tad alarmist; I don’t know. The thing about alarmism is that only in hindsight can it be definitively identified.

Here’s the thread that united these pieces in my mind. Jewett’s book, assuming the reliability of Turpin’s review, is about an earlier attempt to find a new frame of reference for American political culture. Deliberative democracy works best when citizens share a moral framework from which their arguments and counter-arguments derive their meaning. Absent such a broadly shared moral framework, competing claims can never really be meaningfully argued for or against; they can only be asserted or denounced. What Jewett describes, it seems, is just the particular American case of a pattern that is characteristic of secular modernity writ large. The eclipse of traditional religious belief leads to a search for new sources of unity and moral authority.

For a variety of reasons, the project to ground American political culture in publicly accessible science did not succeed. (It appears, by the way, that Jewett’s book is an attempt to revive the effort.) It failed, in part, because it became apparent that science itself was not exactly value free, at least not as it was practiced by actual human beings. Additionally, it seems to me, the success of the project assumed that all political problems, that is, all problems that arise when human beings try to live together, were subject to scientific analysis and resolution. This strikes me as an unwarranted assumption.

In any case, it would seem that proponents of a certain strand of Big Data ideology now want to offer Big Data as the framework that unifies society and resolves political and ethical issues related to public policy. This is part of what I read into Crawford’s suggestion that we are moving into “a data-driven regime of ‘truth.’” “Science says” replaced “God says”; and now “Science says” is being replaced by “Big Data says.”

To put it another way, Big Data offers to fill the cultural role that was vacated by religious belief. It was a role that, in their turn, Reason, Art, and Science have all tried to fill. In short, certain advocates of Big Data need to read Nietzsche’s Twilight of the Idols. Big Data may just be another God-term, an idol that needs to be sounded with a hammer and found hollow.

Finally, Döpfner’s comments are just a reminder of the darker uses to which data can be and has been put, particularly when thoughtfulness and judgment have been marginalized.

Jaron Lanier Wants to Secularize AI

In 2010, one of the earliest posts on this blog noted an op-ed in the NY Times by Jaron Lanier titled “The First Church of Robotics.” In it, Lanier lamented the rise of quasi-religious aspirations animating many among the Silicon Valley elite. Describing the tangle of ideas and hopes usually associated with the Singularity and/or Transhumanism, Lanier concluded, “What we are seeing is a new religion, expressed through an engineering culture.” The piece wraps up rather straightforwardly: “We serve people best when we keep our religious ideas out of our work.”

In fact, the new religion Lanier has in view has a considerably older pedigree than he imagines. Historian David Noble traced the roots of what he called the religion of technology back to the start of the last millennium. What Lanier identified was only the latest iteration of that venerable techno-religious tradition.

A couple of days ago, Edge posted a video (and transcript) of an extended discussion by Lanier, which was sparked by recent comments made by Stephen Hawking and Elon Musk about the existential threat to humanity AI may pose in the not-too-distant future. Lanier’s talk ranges impressively over a variety of related issues and registers a number of valuable insights. Consider, for instance, this passing critique of Big Data:

“I want to get to an even deeper problem, which is that there’s no way to tell where the border is between measurement and manipulation in these systems. For instance, if the theory is that you’re getting big data by observing a lot of people who make choices, and then you’re doing correlations to make suggestions to yet more people, if the preponderance of those people have grown up in the system and are responding to whatever choices it gave them, there’s not enough new data coming into it for even the most ideal or intelligent recommendation engine to do anything meaningful.

In other words, the only way for such a system to be legitimate would be for it to have an observatory that could observe in peace, not being sullied by its own recommendations. Otherwise, it simply turns into a system that measures which manipulations work, as opposed to which ones don’t work, which is very different from a virginal and empirically careful system that’s trying to tell what recommendations would work had it not intervened. That’s a pretty clear thing. What’s not clear is where the boundary is.

If you ask: is a recommendation engine like Amazon more manipulative, or more of a legitimate measurement device? There’s no way to know.”

To which he adds a few moments later, “It’s not so much a rise of evil as a rise of nonsense. It’s a mass incompetence, as opposed to Skynet from the Terminator movies. That’s what this type of AI turns into.” Big Data as banal evil, perhaps.

Lanier is certainly not the only one pointing out that Big Data doesn’t magically yield pure or objective sociological data. A host of voices have made some variation of this point in their critique of the ideology surrounding Big Data experiments conducted by the likes of Facebook and OkCupid. The point is simple enough: observation/measurement alters the observed/measured phenomena. It’s a paradox that haunts most forms of human knowledge, perhaps especially our knowledge of ourselves, and it seems to me that we are better off abiding the paradox rather than seeking to transcend it.

Lanier also scores an excellent point when he asks us to imagine two scenarios involving the possibility of 3-D printed killer drones that can be used to target individuals. In the first scenario, they are developed and deployed by terrorists; in the second they are developed and deployed by some sort of rogue AI along the lines that Musk and others have worried about. Lanier’s question is this: what difference does it make whether terrorists or rogue AI is to blame? The problem remains the same.

“The truth is that the part that causes the problem is the actuator. It’s the interface to physicality. It’s the fact that there’s this little killer drone thing that’s coming around. It’s not so much whether it’s a bunch of teenagers or terrorists behind it or some AI, or even, for that matter, if there’s enough of them, it could just be an utterly random process. The whole AI thing, in a sense, distracts us from what the real problem would be. The AI component would be only ambiguously there and of little importance.

This notion of attacking the problem on the level of some sort of autonomy algorithm, instead of on the actuator level is totally misdirected. This is where it becomes a policy issue. The sad fact is that, as a society, we have to do something to not have little killer drones proliferate. And maybe that problem will never take place anyway. What we don’t have to worry about is the AI algorithm running them, because that’s speculative. There isn’t an AI algorithm that’s good enough to do that for the time being. An equivalent problem can come about, whether or not the AI algorithm happens. In a sense, it’s a massive misdirection.”

It is a misdirection that entails an evasion of responsibility and a failure of political imagination.

All of this is well put, and there’s more along the same lines. Lanier’s chief concern, however, is to frame this as a problem of religious thinking infecting the work of technology. Early on, for instance, he says, “what I’m proposing is that if AI was a real thing, then it probably would be less of a threat to us than it is as a fake thing. What do I mean by AI being a fake thing? That it adds a layer of religious thinking to what otherwise should be a technical field.”

And toward the conclusion of his talk, Lanier elaborates:

“There is a social and psychological phenomenon that has been going on for some decades now:  A core of technically proficient, digitally-minded people reject traditional religions and superstitions. They set out to come up with a better, more scientific framework. But then they re-create versions of those old religious superstitions! In the technical world these superstitions are just as confusing and just as damaging as before, and in similar ways.”

What Lanier proposes in response to this state of affairs is something like a wall of separation, not between the church and the state, but between religion and technology:

“To me, what would be ridiculous is for somebody to say, ‘Oh, you mustn’t study deep learning networks,’ or ‘you mustn’t study theorem provers,’ or whatever technique you’re interested in. Those things are incredibly interesting and incredibly useful. It’s the mythology that we have to become more self-aware of. This is analogous to saying that in traditional religion there was a lot of extremely interesting thinking, and a lot of great art. And you have to be able to kind of tease that apart and say this is the part that’s great, and this is the part that’s self-defeating. We have to do it exactly the same thing with AI now.”

I’m sure Lanier would admit that this is easier said than done. In fact, he suggests as much himself a few lines later. But it’s worth asking whether the kind of sorting out that Lanier proposes is not merely challenging, but, perhaps, unworkable. Just as mid-twentieth-century theories of secularization have fallen on hard times owing to a certain recalcitrant religiosity (or spirituality, if you prefer), we might also find that the religion of technology cannot simply be wished away or bracketed.

Paradoxically, we might also say that something like the religion of technology emerges precisely to the (incomplete) degree that the process of secularization unfolded in the West. To put this another way, imagine that there is within Western consciousness a particular yearning for transcendence. Suppose, as well, that this yearning is so ingrained that it cannot be easily eradicated. Consequently, you end up having something like a whack-a-mole effect. Suppress one expression of this yearning, and it surfaces elsewhere. The yearning for transcendence never quite dissipates, it only transfigures itself. So the progress of secularization, to the degree that it successfully suppresses traditional expressions of the quest for transcendence, manages only to channel it into other cultural projects, namely techno-science. I certainly don’t mean to suggest that the entire techno-scientific project is an unmitigated expression of the religion of technology. That’s certainly not the case. But, as Noble made clear, particularly in his chapter on AI, the techno-religious impulse is hardly negligible.

One last thought, for now. Mary Shelley seemed to understand that one cannot easily disentangle the noble from the corrupt in human affairs: both are rooted in the same faculties and desires. Attempt to eradicate the baser elements altogether, and you may very well eliminate all that is admirable also. The heroic tendency is not safe. I don’t think we’ve been well-served by our discarding of this essentially tragic vision in favor of a more cheery techno-utopianism.

Reframing Technological Phenomena

I’d never heard of Michael Heim until I stumbled upon his 1987 book, Electric Language: A Philosophical Study of Word Processing, at a used book store a few days ago, but after reading the Introduction, I’m already impressed by the concerns and methodology that inform his analysis.

Yesterday, I passed along his defense of philosophizing about a technology at the time of its appearance. It is at this juncture, he explains, before the technology has been rendered an ordinary feature of our everyday experience, that it is uniquely available to our thinking. And it is with our ability to think about technology that Heim is chiefly concerned in his Introduction. Without too much additional comment on my part, I want to pass along a handful of excerpts that I found especially valuable.

Here is Heim’s discussion of reclaiming phenomena for philosophy. By this I take it that he means learning to think about cultural phenomena, in this case technology, without leaning on the conventional framings of the problem. It is a matter of learning to see the phenomenon for what it is by first unseeing a variety of habitual perspectives.

“By taking over pregiven problems, an illusion is created that cultural phenomena are understood philosophically, while in fact certain narrow conventional assumptions are made about what the problem is and what alternate solutions to it might be. Philosophy is then confused with policy, and the illumination of phenomena is exchanged for argumentation and debate [....] Reclaiming the phenomena for philosophy today means not assuming that a phenomenon has been perceived philosophically unless it has first been transformed thoroughly by reflection; we cannot presume to perceive a phenomenon philosophically if it is merely taken up ready-made as the subject of public debate. We must first transform it thoroughly by a reflection that is remote from partisan political debate and from the controlled rhetoric of electronic media. Nor can we assume we have grasped a phenomenon by merely locating its relationship to our everyday scientific mastery of the world. The impact of cultural phenomena must be taken up and reshaped by speculative theory.”

At one point, Heim offered some rather prescient anticipations of the future of writing and computer technology:

“Writing will increasingly be freed from the constraints of paper-print technology; texts will be stored electronically, and vast amounts of information, including further texts, will be accessible immediately below the electronic surface of a piece of writing. The electronically expanding text will no longer be constrained by paper as the telephone and the microcomputer become more intimately conjoined and even begin to merge. The optical character reader will scan and digitize hard-copy printed texts; the entire tradition of books will be converted into information on disk files that can be accessed instantly by computers. By connecting a small computer to a phone, a professional will be able to read ‘books’ whose footnotes can be expanded into further ‘books’ which in turn open out onto a vast sea of data bases systemizing all of human cognition. The networking of written language will erode the line between private and public writings.”

And a little later on, Heim discusses the manner in which we ordinarily (fail to) apprehend the technologies we rely on to make our way in the world:

“We denizens of the late twentieth century are seldom aware of our being embedded in systematic mechanisms of survival. The instruments providing us with technological power seldom appear directly as we carry out the personal tasks of daily life. Quotidian survival brings us not so much to fear autonomous technological systems as to feel a need to acquire and use them. During most of our lives our tools are not problematic–save that we might at a particular point feel need for or lack of a particular technical solution to solve a specific human problem. Having become part of our daily needs, technological systems seem transparent, opening up a world where we can do more, see more, and achieve more.

Yet on occasion we do transcend this immersion in the technical systems of daily life. When a technological system threatens our physical life or threatens the conditions of planetary life, we then turn to regard the potential agents of harm or hazard. We begin to sense that the mechanisms which previously provided, innocently as it were, the conditions of survival are in fact quasi-autonomous mechanisms possessing their own agency, an agency that can drift from its provenance in human meanings and intentions.”

In the last two excerpts below, Heim describes two polarities that tend to frame our thinking about technology.

“In a position above the present, we glimpse hopefully into the future and glance longingly at the past. We see how the world has been transformed by our creative inventions, sensing–more suspecting than certain–that it is we who are changed by the things we make. The ambivalence is resolved when we revert to one or another of two simplistic attitudes: enthusiastic depiction of technological progress or wholesale distress about the effects of a mythical technology.”

And,

“Our relationship to technological innovations tends to be so close that we either identify totally with the new extensions of ourselves–and then remain without the concepts and terms for noticing what we risk in our adaption to a technology–or we react so suspiciously toward the technology that we are later engulfed by the changes without having developed critical countermeasures by which to compensate for the subsequent losses in the life of the psyche.”

Heim practices what he preaches. His book is divided into three major sections: Approaching the Phenomenon, Describing the Phenomenon, and Evaluating the Phenomenon. The three chapters of the first section are “designed to gain some distance,” to shake loose the ready-made assumptions so as to clearly perceive the technological phenomenon in question. And this he does by framing word processing within longstanding trajectories of historical and philosophical inquiry. Only then can the work of description and analysis begin. Finally, this analysis grounds our evaluations. That, it seems to me, is a useful model for our thinking about technology.

(P.S. Frankenstein blogging should resume tomorrow.)

The Best Time to Take the Measure of a New Technology

In defense of brick and mortar bookstores, particularly used book stores, advocates frequently appeal to the virtue of serendipity and the pleasure of an unexpected discovery. You may know what you’re looking for, but you never know what you might find. Ostensibly, recommendation algorithms serve the same function in online contexts, but the effect is rather the opposite of serendipity and the discoveries are always expected.

Take, for instance, this book I stumbled on at a local used book store: Electric Language: A Philosophical Study of Word Processing by Michael Heim. The book is currently #3,577,358 in Amazon’s Bestsellers Ranking, and it has been bought so infrequently that no other book is linked to it. My chances of ever finding this book were vanishingly small, but on Amazon they were slimmer still.

I’m quite glad, though, that Electric Language did cross my path. Heim’s book is a remarkably rich meditation on the meaning of word processing, something we now take for granted and do not think about at all. Heim wrote his book in 1987. The article in which he first explored the topic appeared in 1984. In other words, Heim was contemplating word processing while the practice was still relatively new. Heim imagines that some might object that it was still too early to take the measure of word processing. Heim’s rejoinder is worth quoting at length:

“Yet it is precisely this point in time that causes us to become philosophical. For it is at the moment of such transitions that the past becomes clear as a past, as obsolescent, and the future becomes clear as destiny, a challenge of the unknown. A philosophical study of digital writing made five or ten years from now would be better than one written now in the sense of being more comprehensive, more fully certain in its grasp of the new writing. At the same time, however, the felt contrast with the older writing technology would have become faded by the gradually increasing distance from typewritten and mechanical writing. Like our involvement with the automobile, that with processing texts will grow in transparency–until it becomes a condition of our daily life, taken for granted.

But what is granted to us in each epoch was at one time a beginning, a start, a change that was startling. Though the conditions of daily living do become transparent, they still draw upon our energies and upon the time of our lives; they soon become necessary conditions and come to structure our lives. It is incumbent on us then to grow philosophical while we can still be startled, for philosophy, if Aristotle can be trusted, begins in wonder, and, as Heraclitus suggests, ‘One should not act or speak as if asleep.’”

It is when a technology is not yet taken for granted that it is available to thought. It is only when a living memory of the “felt contrast” remains that the significance of the new technology is truly evident. Counterintuitive conclusions, perhaps, but I think he’s right. There’s a way of understanding a new technology that is available only to those who live through its appearance and adoption, and who know, first hand, what it displaced. As I’ve written before, this explains, in part, why it is so tempting to view critics of new technologies as Chicken Littles:

One of the recurring rhetorical tropes that I’ve listed as a Borg Complex symptom is that of noting that every new technology elicits criticism and evokes fear, that society always survives the so-called moral panic or techno-panic, and then concluding, QED, that those critiques and fears, including those being presently expressed, are always misguided and overblown. It’s a pattern of thought I’ve complained about more than once. In fact, it features as the tenth of my unsolicited points of advice to tech writers.

Now, while it is true, as Adam Thierer has noted here, that we should try to understand how societies and individuals have come to cope with or otherwise integrate new technologies, it is not the case that such negotiated settlements are always unalloyed goods for society or for individuals. But this line of argument is compelling only to the degree that living memory of what has been displaced has been lost. I may know at an intellectual level what has been lost, because I read about it in a book, for example, but it is another thing altogether to have felt that loss. We move on, in other words, because we forget the losses, or, more to the point, because we never knew or experienced the losses for ourselves–they were always someone else’s problem.

Heim wrote Electric Language on a portable Tandy 100.


Reading Frankenstein: Chapters 9 and 10

Earlier posts in this series: Walton’s Letters, Chapters 1 & 2, Chapters 3 & 4, Chapter 5, Chapter 6, Chapters 7 & 8

_____________________________________________________

A little over a week ago, a Virgin Galactic spaceship crashed during a test flight, leaving one pilot dead and the other badly injured. The SpaceShipTwo model craft, designed to ferry paying customers to the edge of space and back, was still in its testing phase. It appears from the latest reports that the crash was the result of pilot error.

Regardless of the cause, the crash was a tragedy, and it has elicited pointed criticism of the burgeoning private space flight industry. Writing for Time, Jeffrey Kluger offered some especially biting commentary:

“But it’s hard too not to be angry, even disgusted, with Branson himself. He is, as today’s tragedy shows, a man driven by too much hubris, too much hucksterism and too little knowledge of the head-crackingly complex business of engineering. For the 21st century billionaire, space travel is what buying a professional sports team was for the rich boys of an earlier era: the biggest, coolest, most impressive toy imaginable. Amazon.com zillionaire Jeff Bezos has his own spacecraft company—because what can better qualify a man to build machines able to travel to space than selling books, TVs and lawn furniture online? Paul Allen, co-founder of Microsoft, has a space operation too because, well, spacecraft have computers and that’s sort of the same thing, right?”

Kluger’s piece in turn prompted a response from Rand Simberg at The New Atlantis. Simberg’s piece, “In Defense of Daring,” offers the counter-example of James E. Webb, another “amateur” who nonetheless directed NASA during the heady days of the Apollo program. Mr. Simberg has written a book titled Safe Is Not An Option about how an “obsession” with “getting everyone back alive” is “killing” the space program. Obviously, this is a man with a high tolerance for risk. The account of the crash Simberg referenced in his piece was a blog post at Reason which included the following counsel: “Risk is part of innovation, and we should let people continue to put their lives on the line if they do so with full understanding of those risks.”

One man’s hubris is another man’s daring, it would seem. I don’t mean to be glib. In fact, I think the line between hubris and daring runs right through the heart of civilization. Both are quintessentially human qualities, and, while hubris is dangerous, a dearth of daring is not without its own set of problems. Wisdom is knowing one from the other.

I say all of that by way of getting back around to Frankenstein, a story centered on just this tension between daring and hubris. As we come to Frankenstein’s encounter with his Creature and hear the Creature’s account of how he has spent the first two years of his existence, we begin to pick up on Shelley’s tragic theory of civilization. The tragedy lies in the seemingly inextricable link between daring and hubris symbolized by the symbiotic relationship between Frankenstein and his Creature.

In chapter nine, Frankenstein describes the guilt and misery that enveloped him in the months after William’s death and Justine’s execution. Despite the force with which he expresses his sorrow and the seeming depth of his regret, however, it remains difficult for the reader, at least for this reader, to take him at his word. For instance, consider the following passage:

“… I had committed deeds of mischief beyond description horrible, and more, much more (I persuaded myself), was yet behind. Yet my heart overflowed with kindness, and the love of virtue. I had begun life with benevolent intentions, and thirsted for the moment when I should put them into practice, and make myself useful to my fellow-beings. Now all was blasted: instead of that serenity of conscience, which allowed me to look back upon the past with self-satisfaction, and from thence to gather promise of new hopes, I was seized by remorse and the sense of guilt, which hurried me away to a hell of intense tortures, such as no language can describe.”

So, yes, “a hell of intense tortures”–but does this not all seem rather self-absorbed? There is still a certain blindness at work here. There is a fixation on the depth of his own suffering, on how the course of events has stripped him of the satisfactions of a clean conscience. Moreover, it seems as if he has not yet questioned the motives and ambitions that led him to bring the Creature into existence in the first place. Nor is there any sense of guilt about his abandonment of the Creature.

But there is fear: fear that disaster would strike again, fear which mingled with and contaminated whatever love he felt, for that love also constituted its objects as potential targets for the Creature’s violence. And this fear yielded hate and a thirst for revenge. Here’s how Frankenstein expresses this cycle that turns love into hate:

“I had been the author of unalterable evils; and I lived in daily fear, lest the monster whom I had created should perpetrate some new wickedness …. There was always scope for fear, so long as any thing I loved remained behind …. When I reflected on his crimes and malice, my hatred and revenge burst all bounds of moderation.”

I can’t help but hear echoes of St. Augustine in these lines. The misery of the human condition is rooted in a profound disordering of our loves such that love is plagued by fear and even twisted into hate. And it is not hate which is love’s opposite, but rather fear. Hate is simply the form that love takes when it has been deformed by fear. But, precisely for this reason, Frankenstein cannot rightly interpret his own motives. He believes his hate is justified because it is rooted in his love for his friends and family. Hate, then, is the shape that love takes when it is threatened and vulnerable. But this suggests that the love in view is ultimately self-love, and self-love cannot be brought to recognize its own failures. Consequently, guilt must be externalized or projected; it cannot be allowed to call into question one’s own motives and desires. In this case, Frankenstein’s hatred of the Creature is directly proportional to the guilt he experiences. But he and the Creature are one, so his hatred is a form of self-loathing; it is self-destructive. And, I would suggest, Shelley would have us read the relationship between Frankenstein and his creature as a microcosm of human civilization.

Elizabeth is more clear-sighted. She is blind to the existence of the Creature and to Frankenstein’s complicity in William’s and Justine’s deaths, but her perception of the world is nearer the mark. No longer are vice and injustice distant, abstract realities. “Now misery has come home,” she admits, “and men appear to me as monsters thirsting for each other’s blood.” There again the word monster is used to describe someone other than the Creature, in this case human society as a whole. Earlier we’d read how Justine, under pressure from her confessor, had almost come to believe herself the monster others thought her to be.

But while Victor cannot help but project the guilt that is properly his onto the Creature, Elizabeth’s nature is such that she can’t help but internalize the corruption she rightly perceives in the world. “Yet I am certainly unjust,” is how she follows up her indictment of humanity. She interprets Victor’s agitation as the lingering manifestation of his sorrow over William’s murder and his righteous indignation at the injustice of Justine’s death. Of course, nothing could be further from the truth. But we are reminded of how perception is a function of love. Elizabeth’s love for Frankenstein leads her to interpret his demeanor and actions sympathetically. This contrasts sharply, of course, with Frankenstein’s loveless perception of his own Creature.

Throughout the remainder of the chapter we read about Frankenstein’s journey into the Alps in search of the peace that Nature might bring. Much of what follows is a literary representation of the natural sublime with a touch of the gothic, “ruined castles hanging on the precipices of piny mountains,” for instance. But the “kindly influence” of “maternal nature” had no effect; it failed to overturn Frankenstein’s restless misery.

The tenth chapter opens with Frankenstein on the second day of his excursion and another invocation of the natural sublime. The snow-topped mountains, the ravines, the woods–”they all gathered round me,” Frankenstein remembers, “and bade me be at peace.” But when he awoke the next morning, it was as if nature had hid herself from him. A rain storm had moved in and “thick mists hid the summits of the mountains.” Frankenstein’s response is telling: “Still I would penetrate their misty veil, and seek them in their cloudy retreats. What were rain and storm to me?” Shelley would have us see that Frankenstein is unchanged. He is still intent on peering behind the veils that nature raises around herself and ignoring her warnings.

Because he was familiar with the path, he forgoes a guide as he prepares to ascend the peak of Montanvert. Not only might we read this decision as yet another manifestation of Frankenstein’s hubris; his explanation for it is also telling: “the presence of another would destroy the solitary grandeur of the scene.” This is telling, I think, because isolation has been part of Frankenstein’s undoing all along. He isolated himself from the “regulating” influence of his friends and disaster followed. Interestingly, we are about to discover, through the Creature’s own narrative, that he longs for nothing more than companionship. Frankenstein, on the other hand, pursues isolation–but he fails to find it. Soon he sees the Creature bounding toward him with “superhuman speed.”

Their reunion is a bit, how shall we put it … tense. Frankenstein lashes out at the Creature, whom he addresses as “Devil” and “vile insect” and then threatens to kill. The Creature’s reaction, at least as I hear it, is almost humorously deadpan: “I expected this reception.” But immediately thereafter he launches into an eloquent statement of his case against Frankenstein:

“All men hate the wretched; how, then, must I be hated, who am miserable beyond all living things! Yet you, my creator, detest and spurn me, thy creature, to whom thou art bound by ties only dissoluble by the annihilation of one of us. You purpose to kill me. How dare you sport thus with life? Do your duty towards me, and I will do mine towards you and the rest of mankind.”

The Creature here makes explicit what has already been implicit: Frankenstein and the Creature are bound to one another till death. Also, the charge that Frankenstein would “sport thus with life” sticks because he has already done so in bringing the Creature to life. He judges himself competent to create life and to take life.

Frankenstein does not take the Creature’s entreaty kindly. In fact, he is unhinged by rage. He lunges at the Creature, but the Creature easily evades him. “Be calm!” the Creature urges. Indeed, in this exchange, it is the Creature who appears to be the more rational of the pair. He kindly reminds Frankenstein that he made the Creature larger and stronger than himself. “But I will not be tempted to set myself in opposition to thee,” he adds.

“I am thy creature, and I will be even mild and docile to my natural lord and king, if thou wilt also perform thy part, that which thou owest me. Oh, Frankenstein, be not equitable to every other, and trample upon me alone, to whom thy justice, and even thy clemency and affection, is most due. Remember, that I am thy creature; I ought to be thy Adam; but I am rather the fallen angel whom thou drivest from joy for no misdeed. Every where I see bliss, from which I alone am irrevocably excluded. I was benevolent and good; misery made me a fiend. Make me happy, and I shall again be virtuous.”

He goes on in a similar vein until finally he urges Frankenstein to hear his tale. Then, he adds, Frankenstein can decide whether or not he still wants to kill him.

It is, initially anyway, quite easy to sympathize with the Creature as he pleads his case. Indeed, his appeals are moving. “Believe me, Frankenstein,” he continues, “I was benevolent; my soul glowed with love and humanity.” He is an unfallen Adam who is nonetheless punished; a Satan who has not rebelled and is nonetheless cast down. It is here that we begin to hear something of Frankenstein’s own voice in the Creature. Like Frankenstein, the Creature asserts his own innocence, an innocence of which he was stripped by external forces. Like Frankenstein, although with perhaps greater plausibility, he frames himself as a victim of circumstances. (We’ll see in time whether or not we can fully credit the Creature’s own account.) To Frankenstein’s accusations, the Creature retorts with biting sarcasm, “You accuse me of murder; and yet you would, with a satisfied conscience, destroy your own creature. Oh, praise the eternal justice of man!”

This announces the Creature’s case against not only Frankenstein but the human race as a whole. Frankenstein continues to resist. He curses the day he made the Creature as well as his own hands that formed him. “Begone! relieve me from the sight of your detested form,” he demands. In a curiously playful moment the Creature covers Frankenstein’s eyes with his hand and says, “Thus I relieve thee, my creator.” Frankenstein is not amused. He flings the Creature’s hand away from his face.

But the Creature finally prevails on Frankenstein to follow him to his cave so that he might hear his “long and strange tale.” Responding to faint stirrings of his conscience, Frankenstein agrees to follow the Creature. They sit down with a fire between them, and the Creature begins to tell his story. In the following chapter, the narration is handed over to the Creature, and we hear his account of the last two years, filtered through Frankenstein’s recollection.