What Do We Want, Really?

I was in Amish country last week. Several times a day I heard the clip-clop of horse hooves and the whirring of buggy wheels coming down the street and then receding into the distance–a rather soothing Doppler effect. While there, I was reminded of an anecdote about the Amish relayed by a reader in the comments to a recent post:

I once heard David Kline tell of Protestant tourists sight-seeing in an Amish area. An Amishman is brought on the bus and asked how Amish differ from other Christians. First, he explained similarities: all had DNA, wear clothes (even if in different styles), and like to eat good food.

Then the Amishman asked: “How many of you have a TV?”

Most, if not all, the passengers raised their hands.

“How many of you believe your children would be better off without TV?”

Most, if not all, the passengers raised their hands.

“How many of you, knowing this, will get rid of your TV when you go home?”

No hands were raised.

“That’s the difference between the Amish and others,” the man concluded.

I like the Amish. As I’ve said before, the Amish are remarkably tech-savvy. They understand that technologies have consequences, and they are determined to think very hard about how different technologies will affect the life of their communities. Moreover, they are committed to sacrificing the benefits a new technology might bring if they deem the costs too great to bear. This takes courage and resolve. We may not agree with all of the choices made by Amish communities, but it seems to me that we must admire both their resolution to think about what they are doing and their willingness to make the sacrifices necessary to live according to their principles.

Image via Wikicommons

The Amish are a kind of sign to us, especially as we come upon the start of a new year and consider, again, how we might better live our lives. Let me clarify what I mean by calling the Amish a sign. It is not that their distinctive way of life points the way to the precise path we must all follow. Rather, it is that they remind us of the costs we must be prepared to incur and the resoluteness we must be prepared to demonstrate if we are to live a principled life.

It is perhaps a symptom of our disorder that we seem to believe that all can be made well merely by our making a few better choices along the way. Rarely do we imagine that what might be involved in the realization of our ideals is something more radical and more costly. It is easier for us to pretend that all that is necessary are a few simple tweaks and minor adjustments to how we already conduct our lives, nothing that will make us too uncomfortable. If and when it becomes impossible to sustain that fiction, we take comfort in fatalism: nothing can ever change, really, and so it is not worth trying to change anything at all.

What is often the case, however, is that we have not been honest with ourselves about what it is that we truly value. Perhaps an example will help. My wife and I frequently discuss what, for lack of a better way of putting it, I’ll call the ethics of eating. I will not claim to have thought very deeply, yet, about all of the related issues, but I can say that we care about what has been involved in getting food to our table. We care about the labor involved, the treatment of animals, and the use of natural resources. We care, as well, about the quality of the food and about the cultural practices of cooking and eating. I realize, of course, that it is rather fashionable to care about such things, and I can only hope that our caring is not merely a matter of fashion. I do not think it is.

But it is another thing altogether for us to consider how much we really care about these things. Acting on principle in this arena is not without its costs. Do we care enough to bear those costs? Do we care enough to invest the time necessary to understand all the relevant complex considerations? Are we prepared to spend more money? Are we willing to sacrifice convenience? And then it hits me that what we are talking about is not simply making a different consumer choice here and there. If we really care about the things we say we care about, then we are talking about changing the way we live our lives.

In cases like this, and they are many, I’m reminded of a paragraph in sociologist James Hunter’s book about varying approaches to moral education in American schools. “We say we want the renewal of character in our day,” Hunter writes,

“but we do not really know what to ask for. To have a renewal of character is to have a renewal of a creedal order that constrains, limits, binds, obligates, and compels. This price is too high for us to pay. We want character without conviction; we want strong morality but without the emotional burden of guilt or shame; we want virtue but without particular moral justifications that invariably offend; we want good without having to name evil; we want decency without the authority to insist upon it; we want moral community without any limitations to personal freedom. In short, we want what we cannot possibly have on the terms that we want it.”

You may not agree with Hunter about the matter of moral education, but it is his conclusion that I want you to note: we want what we cannot possibly have on the terms that we want it.

This strikes me as being a widely applicable diagnosis of our situation. Across so many different domains of our lives, private and public, this dynamic seems to hold. We say we want something, often something very noble and admirable, but in reality we are not prepared to pay the costs required to obtain the thing we say we want. We are not prepared to be inconvenienced. We are not prepared to reorder our lives. We may genuinely desire that noble, admirable thing, whatever it may be; but we want some other, less noble thing more.

At this point, I should probably acknowledge that many of the problems we face as individuals and as a society are not the sort that would be solved by our own individual thoughtfulness and resolve, no matter how heroic. But very few problems, private or public, will be solved without an honest reckoning of the price to be paid and the work to be done.

So what then? I’m presently resisting the temptation to now turn this short post toward some happy resolution, or at least toward some more positive considerations. Doing so would be disingenuous. Mostly, I simply wanted to draw our attention, mine no less than yours, toward the possibly unpleasant work of counting the costs. As we thought about the new year looming before us and contemplated how we might live it better than the last, I wanted us to entertain the possibility that what will be required of us to do so might be nothing less than a fundamental reordering of our lives. At the very least, I wanted to impress upon myself the importance of finding the space to think at length and the courage to act.

Do Artifacts Have Ethics?

Writing about “technology and the moral dimension,” tech writer and Gigaom founder Om Malik made the following observation:

“I can safely say that we in tech don’t understand the emotional aspect of our work, just as we don’t understand the moral imperative of what we do. It is not that all players are bad; it is just not part of the thinking process the way, say, ‘minimum viable product’ or ‘growth hacking’ are.”

I’m not sure how many people in the tech industry would concur with Malik’s claim, but it is a remarkably telling admission from at least one well-placed individual. Happily, Malik realizes that “it is time to add an emotional and moral dimension to products.” But what exactly does it mean to add an emotional and moral dimension to products?

Malik’s own ensuing discussion is brief and deals chiefly with using data ethically and producing clear, straightforward terms of service. This suggests that Malik is mostly encouraging tech companies to treat their customers in an ethically responsible manner. If so, it’s rather disconcerting that Malik takes this to be a discovery that he feels compelled to announce, prophetically, to his colleagues. Leaving that unfortunate indictment of the tech community aside, I want to suggest that there is no need to add a moral dimension to technology.

Years ago, Langdon Winner famously asked, “Do artifacts have politics?” In the article that bears that title, Winner went on to argue that they most certainly do. We might also ask, “Do artifacts have ethics?” I would argue that they do indeed. The question is not whether technology has a moral dimension, the question is whether we recognize it or not. In fact, technology’s moral dimension is inescapable, layered, and multi-faceted.

When we do think about technology’s moral implications, we tend to think about what we do with a given technology. We might call this the “guns don’t kill people, people kill people” approach to the ethics of technology. What matters most about a technology on this view is the use to which it is put. This is, of course, a valid consideration. A hammer may indeed be used to either build a house or bash someone’s head in. On this view, technology is morally neutral and the only morally relevant question is this: What will I do with this tool?

But is this really the only morally relevant question one could ask? For instance, pursuing the example of the hammer, might I not also ask how having the hammer in hand encourages me to perceive the world around me? Or what feelings having a hammer in hand arouses?

Below are a few other questions that we might ask in order to get at the wide-ranging “moral dimension” of our technologies. There are, of course, many others that we could ask, but this is a start.

  1. What sort of person will the use of this technology make of me?
  2. What habits will the use of this technology instill?
  3. How will the use of this technology affect my experience of time?
  4. How will the use of this technology affect my experience of place?
  5. How will the use of this technology affect how I relate to other people?
  6. How will the use of this technology affect how I relate to the world around me?
  7. What practices will the use of this technology cultivate?
  8. What practices will the use of this technology displace?
  9. What will the use of this technology encourage me to notice?
  10. What will the use of this technology encourage me to ignore?
  11. What was required of other human beings so that I might be able to use this technology?
  12. What was required of other creatures so that I might be able to use this technology?
  13. What was required of the earth so that I might be able to use this technology?
  14. Does the use of this technology bring me joy?
  15. Does the use of this technology arouse anxiety?
  16. How does this technology empower me? At whose expense?
  17. What feelings does the use of this technology generate in me toward others?
  18. Can I imagine living without this technology? Why, or why not?
  19. How does this technology encourage me to allocate my time?
  20. Could the resources used to acquire and use this technology be better deployed?
  21. Does this technology automate or outsource labor or responsibilities that are morally essential?
  22. What desires does the use of this technology generate?
  23. What desires does the use of this technology dissipate?
  24. What possibilities for action does this technology present? Is it good that these actions are now possible?
  25. What possibilities for action does this technology foreclose? Is it good that these actions are no longer possible?
  26. How does the use of this technology shape my vision of a good life?
  27. What limits does the use of this technology impose upon me?
  28. What limits does my use of this technology impose upon others?
  29. What does my use of this technology require of others who would (or must) interact with me?
  30. What assumptions about the world does the use of this technology tacitly encourage?
  31. What knowledge has the use of this technology disclosed to me about myself?
  32. What knowledge has the use of this technology disclosed to me about others? Is it good to have this knowledge?
  33. What are the potential harms to myself, others, or the world that might result from my use of this technology?
  34. Upon what systems, technical or human, does my use of this technology depend? Are these systems just?
  35. Does my use of this technology encourage me to view others as a means to an end?
  36. Does using this technology require me to think more or less?
  37. What would the world be like if everyone used this technology exactly as I use it?
  38. What risks will my use of this technology entail for others? Have they consented?
  39. Can the consequences of my use of this technology be undone? Can I live with those consequences?
  40. Does my use of this technology make it easier to live as if I had no responsibilities toward my neighbor?
  41. Can I be held responsible for the actions which this technology empowers? Would I feel better if I couldn’t?

Silencing the Heretics: How the Faithful Respond to Criticism of Technology

I started to write a post about a few unhinged reactions to an essay published by Nicholas Carr in this weekend’s WSJ, “Automation Makes Us Dumb.”  Then I realized that I already wrote that post back in 2010. I’m republishing “A God that Limps” below, with slight revisions, and adding a discussion of the reactions to Carr. 

Our technologies are like our children: we react with reflexive and sometimes intense defensiveness if either is criticized. Several years ago, while teaching at a small private high school, I forwarded an article to my colleagues that raised some questions about the efficacy of computers in education. This was a mistake. The article appeared in a respectable journal, was judicious in its tone, and cautious in its conclusions. I didn’t think then, nor do I now, that it was at all controversial. In fact, I imagined that given the setting it would be of at least passing interest. However, within a handful of minutes (minutes!)—hardly enough time to skim, much less read, the article—I was receiving rather pointed, even angry replies.

I was mystified, and not a little amused, by the responses. Mostly though, I began to think about why this measured and cautious article evoked such a passionate response. Around the same time I stumbled upon Wendell Berry’s essay titled, somewhat provocatively, “Why I am Not Going to Buy a Computer.” More arresting than the essay itself, however, were the letters that came in to Harper’s. These letters, which now typically appear alongside the essay whenever it is anthologized, were caustic and condescending. In response, Berry wrote,

The foregoing letters surprised me with the intensity of the feelings they expressed. According to the writers’ testimony, there is nothing wrong with their computers; they are utterly satisfied with them and all that they stand for. My correspondents are certain that I am wrong and that I am, moreover, on the losing side, a side already relegated to the dustbin of history. And yet they grow huffy and condescending over my tiny dissent. What are they so anxious about?

Precisely my question. Whence the hostility, defensiveness, agitation, and indignant, self-righteous anxiety?

I’m typing these words on a laptop, and they will appear on a blog that exists on the Internet.  Clearly I am not, strictly speaking, a Luddite. (Although, in light of Thomas Pynchon’s analysis of the Luddite as Badass, there may be a certain appeal.) Yet, I do believe an uncritical embrace of technology may prove fateful, if not Faustian.

The stakes are high. We can hardly exaggerate the revolutionary character of certain technologies throughout history:  the wheel, writing, the gun, the printing press, the steam engine, the automobile, the radio, the television, the Internet. And that is a very partial list. Katherine Hayles has gone so far as to suggest that, as a species, we have “codeveloped with technologies; indeed, it is no exaggeration,” she writes in Electronic Literature, “to say modern humans literally would not have come into existence without technology.”

We are, perhaps because of the pace of technological innovation, quite conscious of the place and power of technology in our society and in our own lives. We joke about our technological addictions, but it is sometimes a rather nervous punchline. It makes sense to ask questions. Technology, it has been said, is a god that limps. It dazzles and performs wonders, but it can frustrate and wreak havoc. Good sense seems to suggest that we avoid, as Thoreau put it, becoming tools of our tools. This doesn’t entail burning the machine; it may only require a little moderation. At a minimum, it means creating, as far as we are able, a critical distance from our toys and tools, and that requires searching criticism.

And we are back where we began. We appear to be allergic to just that kind of searching criticism. So here is my question again:  Why do we react so defensively when we hear someone criticize our technologies?

And so ended my earlier post. Now consider a handful of responses to Carr’s article, “Automation Makes Us Dumb.” Better yet, read the article, if you haven’t already, and then come back for the responses.

Let’s start with a couple of tweets by Joshua Gans, a professor of management at the University of Toronto.

Then there was this from entrepreneur Marc Andreessen:

Even better are some of the replies attached to Andreessen’s tweet. I’ll transcribe a few of those here for your amusement.

“Why does he want to be stuck doing repetitive mind-numbing tasks?”

“‘These automatic jobs are horrible!’ ‘Stop killing these horrible jobs with automation!'” [Sarcasm implied.]

“by his reasoning the steam engine makes us weaklings, yet we’ve seen the opposite. so maybe the best intel is ahead”

“Let’s forget him, he’s done so much damage to our industry, he is just interested in profiting from his provocations”

“Nick clearly hasn’t understood the true essence of being ‘human’. Tech is an ‘enabler’ and aids to assist in that process.”

“This op-ed is just a Luddite screed dressed in drag. It follows the dystopian view of ‘Wall-E’.”

There you have it. I’ll let you tally up the logical fallacies.

Honestly, I’m stunned by the degree of apparently willful ignorance exhibited by these comments. The best I can say for them is that they are based on a glance at the title of Carr’s article and nothing more. It would be much more worrisome if these individuals had actually read the article and still managed to make these comments that betray no awareness of what Carr actually wrote.

More than once, Carr makes clear that he is not opposed to automation in principle. The last several paragraphs of the article describe how we might go forward with automation in a way that avoids some serious pitfalls. In other words, Carr is saying, “Automate, but do it wisely.” What a Luddite!

When I wrote in 2010, I had not yet formulated the idea of a Borg Complex, but this inability to rationally or calmly abide any criticism of technology is surely pure, undistilled Borg Complex, complete with Luddite slurs!

I’ll continue to insist that we are in desperate need of serious thinking about the powers that we are gaining through our technologies. It seems, however, that there is a class of people who are hell-bent on shutting down any and all criticism of technology. If the criticism is misguided or unsubstantiated, then it should be refuted. Dismissing criticism while giving absolutely no evidence of having understood it, on the other hand, helps no one at all.

I come back to David Noble’s description of the religion of technology often, but only because of how useful it is as a way of understanding techno-scientific culture. When technology is a religion, when we embrace it with blind faith, when we anchor our hope in it, when we love it as ourselves–then any criticism of technology will be understood as either heresy or sacrilege. And that seems to be a pretty good way of characterizing the responses to tech criticism I’ve been discussing: the impassioned reactions of the faithful to sacrilegious heresy.

Jaron Lanier Wants to Secularize AI

In 2010, one of the earliest posts on this blog noted an op-ed in the NY Times by Jaron Lanier titled “The First Church of Robotics.” In it, Lanier lamented the rise of quasi-religious aspirations animating many among the Silicon Valley elite. Describing the tangle of ideas and hopes usually associated with the Singularity and/or Transhumanism, Lanier concluded, “What we are seeing is a new religion, expressed through an engineering culture.” The piece wraps up rather straightforwardly: “We serve people best when we keep our religious ideas out of our work.”

In fact, the new religion Lanier has in view has a considerably older pedigree than he imagines. Historian David Noble traced the roots of what he called the religion of technology back to the start of the last millennium. What Lanier identified was only the latest iteration of that venerable techno-religious tradition.

A couple of days ago, Edge posted a video (and transcript) of an extended discussion by Lanier, which was sparked by recent comments made by Stephen Hawking and Elon Musk about the existential threat to humanity AI may pose in the not-too-distant future. Lanier’s talk ranges impressively over a variety of related issues and registers a number of valuable insights. Consider, for instance, this passing critique of Big Data:

“I want to get to an even deeper problem, which is that there’s no way to tell where the border is between measurement and manipulation in these systems. For instance, if the theory is that you’re getting big data by observing a lot of people who make choices, and then you’re doing correlations to make suggestions to yet more people, if the preponderance of those people have grown up in the system and are responding to whatever choices it gave them, there’s not enough new data coming into it for even the most ideal or intelligent recommendation engine to do anything meaningful.

In other words, the only way for such a system to be legitimate would be for it to have an observatory that could observe in peace, not being sullied by its own recommendations. Otherwise, it simply turns into a system that measures which manipulations work, as opposed to which ones don’t work, which is very different from a virginal and empirically careful system that’s trying to tell what recommendations would work had it not intervened. That’s a pretty clear thing. What’s not clear is where the boundary is.

If you ask: is a recommendation engine like Amazon more manipulative, or more of a legitimate measurement device? There’s no way to know.”

To which he adds a few moments later, “It’s not so much a rise of evil as a rise of nonsense. It’s a mass incompetence, as opposed to Skynet from the Terminator movies. That’s what this type of AI turns into.” Big Data as banal evil, perhaps.

Lanier is certainly not the only one pointing out that Big Data doesn’t magically render pure or objective sociological data. A host of voices have made some variation of this point in their critique of the ideology surrounding Big Data experiments conducted by the likes of Facebook and OkCupid. The point is simple enough: observation/measurement alters the observed/measured phenomena. It’s a paradox that haunts most forms of human knowledge, perhaps especially our knowledge of ourselves, and it seems to me that we are better off abiding the paradox rather than seeking to transcend it.

Lanier also scores an excellent point when he asks us to imagine two scenarios involving the possibility of 3-D printed killer drones that can be used to target individuals. In the first scenario, they are developed and deployed by terrorists; in the second they are developed and deployed by some sort of rogue AI along the lines that Musk and others have worried about. Lanier’s question is this: what difference does it make whether terrorists or rogue AI is to blame? The problem remains the same.

“The truth is that the part that causes the problem is the actuator. It’s the interface to physicality. It’s the fact that there’s this little killer drone thing that’s coming around. It’s not so much whether it’s a bunch of teenagers or terrorists behind it or some AI, or even, for that matter, if there’s enough of them, it could just be an utterly random process. The whole AI thing, in a sense, distracts us from what the real problem would be. The AI component would be only ambiguously there and of little importance.

This notion of attacking the problem on the level of some sort of autonomy algorithm, instead of on the actuator level is totally misdirected. This is where it becomes a policy issue. The sad fact is that, as a society, we have to do something to not have little killer drones proliferate. And maybe that problem will never take place anyway. What we don’t have to worry about is the AI algorithm running them, because that’s speculative. There isn’t an AI algorithm that’s good enough to do that for the time being. An equivalent problem can come about, whether or not the AI algorithm happens. In a sense, it’s a massive misdirection.”

It is a misdirection that entails an evasion of responsibility and a failure of political imagination.

All of this is well put, and there’s more along the same lines. Lanier’s chief concern, however, is to frame this as a problem of religious thinking infecting the work of technology. Early on, for instance, he says, “what I’m proposing is that if AI was a real thing, then it probably would be less of a threat to us than it is as a fake thing. What do I mean by AI being a fake thing? That it adds a layer of religious thinking to what otherwise should be a technical field.”

And toward the conclusion of his talk, Lanier elaborates:

“There is a social and psychological phenomenon that has been going on for some decades now:  A core of technically proficient, digitally-minded people reject traditional religions and superstitions. They set out to come up with a better, more scientific framework. But then they re-create versions of those old religious superstitions! In the technical world these superstitions are just as confusing and just as damaging as before, and in similar ways.”

What Lanier proposes in response to this state of affairs is something like a wall of separation, not between the church and the state, but between religion and technology:

“To me, what would be ridiculous is for somebody to say, ‘Oh, you mustn’t study deep learning networks,’ or ‘you mustn’t study theorem provers,’ or whatever technique you’re interested in. Those things are incredibly interesting and incredibly useful. It’s the mythology that we have to become more self-aware of. This is analogous to saying that in traditional religion there was a lot of extremely interesting thinking, and a lot of great art. And you have to be able to kind of tease that apart and say this is the part that’s great, and this is the part that’s self-defeating. We have to do it exactly the same thing with AI now.”

I’m sure Lanier would admit that this is easier said than done. In fact, he suggests as much himself a few lines later. But it’s worth asking whether the kind of sorting out that Lanier proposes is not merely challenging, but, perhaps, unworkable. Just as mid-twentieth century theories of secularization have come on hard times owing to a certain recalcitrant religiosity (or spirituality, if you prefer), we might also find that the religion of technology cannot simply be wished away or bracketed.

Paradoxically, we might also say that something like the religion of technology emerges precisely to the (incomplete) degree that the process of secularization unfolded in the West. To put this another way, imagine that there is within Western consciousness a particular yearning for transcendence. Suppose, as well, that this yearning is so ingrained that it cannot be easily eradicated. Consequently, you end up having something like a whack-a-mole effect. Suppress one expression of this yearning, and it surfaces elsewhere. The yearning for transcendence never quite dissipates, it only transfigures itself. So the progress of secularization, to the degree that it successfully suppresses traditional expressions of the quest for transcendence, manages only to channel it into other cultural projects, namely techno-science. I certainly don’t mean to suggest that the entire techno-scientific project is an unmitigated expression of the religion of technology. That’s certainly not the case. But, as Noble made clear, particularly in his chapter on AI, the techno-religious impulse is hardly negligible.

One last thought, for now, arising out of my recent blogging through Frankenstein. Mary Shelley seemed to understand that one cannot easily disentangle the noble from the corrupt in human affairs: both are rooted in the same faculties and desires. Attempt to eradicate the baser elements altogether, and you may very well eliminate all that is admirable too. The heroic tendency is not safe, but neither is the attempt to tame it. I don’t think we’ve been well-served by our discarding of this essentially tragic vision in favor of a more cheery techno-utopianism.

Reading Frankenstein: Chapters 11–13

Earlier posts in this series: Walton’s Letters, Chapters 1 & 2, Chapters 3 & 4, Chapter 5, Chapter 6, Chapters 7 & 8, 9 & 10

_____________________________________________________

I’ve been a bit delinquent with the Frankenstein posts of late, but I intend to make up some ground by covering chapters eleven through sixteen in this post and the next. These chapters are the heart of the book, structurally and thematically. In them, the Creature assumes control of the narrative, sort of. Throughout these chapters it is his voice that we hear narrating the two years between the moment of his creation and the present encounter with Frankenstein; but we should remember that the Creature’s words are still being reported by Frankenstein to Walton. It is still, in a sense, a filtered account, even though it is presented to the reader in the first person. I don’t think this should throw into question every detail of the Creature’s account, supposing that Frankenstein has necessarily misrepresented him; but it may be wise to read the Creature’s story with a certain suspicious attentiveness.

Had Shelley chosen to narrate her story from a more conventional third person perspective, we might imagine that the moral of the story would have been more straightforward, or that our sympathies would have more readily coalesced around one of the two central characters. The multiple first person perspectives complicate matters and inject a certain moral ambiguity into the story. As in our own real-world experience, hearing multiple accounts of the same sequence of events from motivated witnesses forces us to assume the responsibility of making judgments about whom to believe and to what degree. Often, we find that there is no obvious way of arriving at an “objective” account of the events and, knowingly or not, we fall back on our own proclivities and sympathies. We may also find, given our access to multiple perspectives, that the sequence of events unfolded with a kind of tragic unnecessary necessity. Things need not have transpired as they did, different decisions could have been made; but, given the limited perspective of the interested parties, it is hard to see how they could have done otherwise.

In his discussion of tragic plays, Aristotle observed that the tragic hero cannot be either wholly deserving or wholly undeserving of his fate. The emotional force of the tragedy depends on this ambivalence. If we think the hero entirely deserving of their fate, the play amounts to a comedy in which justice is served. If we think the hero entirely undeserving of their fate, then we will think the play a farce. Aristotle offers Sophocles’s Oedipus as the perfect embodiment of this tragic ambivalence of character. In my view, Shelley achieves a similar effect with both Frankenstein and the Creature, hence the emotional force of her story. And this effect she achieves principally by allowing us to hear each of them tell us their own stories. This isn’t merely a matter of emotional payoff, though; the meaning of Shelley’s story is inextricable from this tragic form. The meaning of the story, on my reading, also hinges on recognizing the Creature’s experience as a microcosm of human civilization, and that becomes apparent very early on in the Creature’s story.

In chapter eleven, the Creature describes the earliest hours and days of his existence, during which he comes to terms with the physicality of his being. Over the course of several days, his ability to perceive his surroundings is sharpened, as is his ability to navigate the world with his body. As he acclimates to having a body, the Creature also begins to express himself with “uncouth and inarticulate sounds,” the beginnings of language. While still in this state, he encounters a fire left by wandering beggars. The fire fascinates him and its usefulness is immediately apparent to him. Like a hunter-gatherer, he soon finds that he must abandon his fire in search of food. He does so and subsists on berries and nuts until he stumbles upon the abode of a shepherd where he finds bread, cheese, milk, and wine. The shepherd symbolizes a more settled life than that of the hunter-gatherers, and the foods the Creature enjoys are all products of human cultivation; none of them occurs naturally. Finally, he moves on and enters a village. He is awed by the homes and their gardens. But, in a pattern that will recur unfailingly, this place that is at once an expression of humanity’s skill and ingenuity is also the setting for the Creature’s first encounter, apart from his initial abandonment, with “the barbarity of man.” Having innocently entered a home and frightened its inhabitants, the Creature is chased out of the town by a barrage of blows and projectiles.

The Creature then comes upon a modest cottage in the woods and he crawls into a hovel attached to one of the cottage walls. Here he is able to live unnoticed, and, through a crack in the wall of the cottage, he is able to observe the family that inhabits it. This family consists of an elderly blind father and his two grown children, Felix and Agatha. We learn later that they are exiles from France living in Switzerland. At this point, the Creature regards them as a saintly, if also melancholy, brood. Watching the sacrificial kindness Felix and Agatha display toward their father, the Creature’s emotional life is awakened. “I felt sensations of a peculiar and overpowering nature,” he recounts, “they were a mixture of pain and pleasure, such as I had never before experienced, either from hunger or cold, warmth or food; and I withdrew from the window, unable to bear these emotions.” The chapter closes with the Creature seeing the family read together before turning in for the night. At the time, however, he knew nothing of the “science of words or letters.”

Through this perhaps too-convenient plot device, Shelley will account for the Creature’s continuing education, intellectual and moral. To this point, though, we might read Shelley’s portrayal of the Creature’s life as an early nineteenth century mashup of Maslow’s hierarchy of needs, Erikson’s stages of psycho-social development, and the history of human civilization. The Creature, then, is a symbol of human civilization. Better yet, Frankenstein and the Creature together symbolize the dual and tragic nature of human civilization.

Throughout chapter twelve, the Creature continues to watch and learn from the family that he begins to affectionately refer to as his “friends.” There is an innocence to the Creature’s early observations. He is confused by a sadness that he perceives alongside their amiable and caring manner. Felix, whose name means “happy” in Latin, was “the saddest of the group.” To his simple mind, they had all that he could possibly wish for. They had a warm home, food, and their mutual companionship. But after a considerable period of time passes, he realizes that one source of their sadness is, in fact, their poverty. They were often hungry, and the Creature often witnessed Agatha and Felix go without food so that their father might eat.

Witnessing that act of self-sacrifice awakens the Creature’s conscience. He had till then been stealing from their stores in the night, but now he feels the pain he has been unwittingly causing them and learns to make do with whatever food he can gather from the surrounding woods. Moreover, he is moved to act in kindness toward his friends. Noticing that Felix spends the better part of the day gathering wood, the Creature begins to gather wood in the night and deposit it on their doorstep. He then watches their reaction with pleasure and is glad for the better use that Felix is able to make of his time.

In much of what follows, the Creature becomes increasingly aware of the “godlike science” of language, in both its spoken and its written forms. By observation and imitation, he acquires a rudimentary vocabulary, and he decides that he will not present himself to his friends until he has mastered the ability to speak with words. During this time, the Creature also becomes aware, by seeing his reflection in a pool of water, of his physical deformity. An anti-Narcissus, he is convinced “that I was in reality the monster that I am,” and he is filled with feelings of “despondence and mortification.”

But he continues to imagine, foolishly by his own admission, that he might be able to help his benefactors overcome their sadness and that he might even be accepted by them despite his deformity. Reviving a theme in Frankenstein’s narrative, the Creature is also comforted and encouraged by the onset of spring and the reawakening of nature. Spring also brings a new member of the household, whose story reveals the other source of the family’s sadness.

Chapter thirteen introduces a young Arabian woman named Safie. Her arrival cheers the family, especially Felix. And in another just-so plot turn, she does not yet speak French. As she is taught to speak and read by the family, the Creature, observing her lessons from the fortuitous crack in the wall, finally learns to speak fluently and to read. He also gets a survey of human history via Volney’s Ruins of Empires, a radical critique of prevailing governments and religions written in the aftermath of the French Revolution. He learns about the ancient empires of the Middle East, the Greeks, the Romans, and the Christian Empires of the medieval age. He also learns of the discovery of America, and he “wept with Safie over the hapless fate of its original inhabitants.” Reflecting on what he had learned, the Creature offers the following meditation that expresses the same tragic duality that he and Frankenstein embody:

“Was man, indeed, at once so powerful, so virtuous, and magnificent, yet so vicious and base? He appeared at one time a mere scion of the evil principle, and at another, as all that can be conceived of noble and godlike. To be a great and virtuous man appeared the highest honor that can befall a sensitive being; to be base and vicious, as many on record have been, appeared the lowest degradation, a condition more abject than that of the blind mole or harmless worm. For a long time I could not conceive how one man could go forth to murder his fellow, or even why there were laws and governments; but when I heard details of the vice and bloodshed, my wonder ceased, and I turned away with disgust and loathing.”

Frankenstein and the Creature both symbolize and embody this tragic paradox, yet neither of them fully realizes the degree to which it runs through their beings, even though both express guilt and sorrow for their actions. This blindness is their tragic flaw; it is the blindness induced by their own peculiar forms of hubris. For Frankenstein, it is a hubris born of knowledge; for the Creature, it is a hubris born of the self-righteousness that stems from victimhood. But none of this is quite obvious yet.

The Creature also gets a lesson in political economy via Felix’s lectures to Safie: “I heard of the division of property, of immense wealth and squalid poverty; of rank, descent, and noble blood.” He realizes that human civilization values nothing so much as the combination of noble lineage and great wealth. With either of the two, one might get by in life; but, having neither, a person is ordinarily “doomed to waste his powers for the profits of the chosen few!”

All of this leads the Creature to lament his pitiable situation. He was uniquely powerless and alone: “no money, no friends, no property” and hideously deformed for good measure. Then we get a remarkably Pascalian comment:

“I cannot describe to you the agony that these reflections inflicted upon me: I tried to dispel them, but sorrow only increased with knowledge. Oh, that I had for ever remained in my native wood, nor known nor felt beyond the sensations of hunger, thirst, and heat! Of what a strange nature is knowledge! It clings to the mind, when it has once seized on it, like a lichen on the rock.”

Our ability to imagine ourselves other than we are is both our greatest virtue and the source of all our misery. Knowledge and desire are both a curse and a blessing. Again, a note of tragic paradox is sounded. The only escape from this condition, this thoroughly human condition, was death–a state which the Creature feared yet did not understand.

The more he learned through his observations of the family, a family he came to love, the more miserable he became. He became increasingly aware of all that he did not have and all he could never have. He was without friends and relations, without mother and father. He was alone and plagued by one question: “What was I?”