Technology Will Not Save Us

A day after writing about technology, culture, and innovation, I’ve come across two related pieces.

At Walter Mead’s blog, the novel use of optics to create a cloaking effect provided a springboard into a brief discussion of technological innovation. Here’s the gist of it:

“Today, Big Science is moving ahead faster than ever, and the opportunities for creative tinkerers and home inventors are greater than ever. But the technology we’ve got today is more dynamic than what people had in the 19th and early 20th centuries. IT makes it possible to invent new services and not just new gadgets, though smarter gadgets are also part of the picture.

Unleashing the creativity of a new generation of inventors may be the single most important educational and policy task before us today.”

And …

“Technology in today’s world has run way ahead of our ability to exploit its riches to enhance our daily lives. That’s OK, and there’s nothing wrong with more technological progress. But in the meantime, we need to think much harder about how we can cultivate and reward the kind of innovative engineering that can harness the vast potential of the tech riches around us to lift our society and ultimately the world to the next stage of human social development.”

Then in this weekend’s WSJ, Walter Isaacson has a feature essay titled, “Where Innovation Comes From.” The essay is in part a consideration of the life of Alan Turing and his approach to AI. Isaacson’s point, briefly stated, is that, in the future, innovation will not come from so-called intelligent machines. Rather, in Isaacson’s view, innovation will come from the coupling of human intelligence and machine intelligence, each of them possessed of unique powers. Here is a representative paragraph:

“Perhaps the latest round of reports about neural-network breakthroughs does in fact mean that, in 20 years, there will be machines that think like humans. But there is another possibility, the one that Ada Lovelace envisioned: that the combined talents of humans and computers, when working together in partnership and symbiosis, will indefinitely be more creative than any computer working alone.”

I offer these two to you for your consideration. As I read them, I thought again about what I had posted yesterday. Since the post was more or less stream of consciousness, thinking by writing as it were, I realized that an important qualification remained implicit. I am not qualified to speak about technological innovation from the perspective of the technologist or the entrepreneur. Quite frankly, I’m not sure I’m qualified to speak about technological innovation from any vantage point. Perhaps it is simply better to say that my interests in technological innovation are historical, sociological, and ethical.

For what it is worth, then, what I was after in my previous post was something like the cultural sources of technological innovation. Assuming that technological innovation does not unfold in a value-neutral vacuum, then what cultural forces shape technological innovation? Many, of course, but perhaps we might first say that while technological innovation is certainly driven by cultural forces, these cultural forces are not the only relevant factor. Those older philosophers of technology who focused on what we might, following Aristotle, call the formal and material causes of technological development were not altogether misguided. The material nature of technology imposes certain limits upon the shape of innovation. From this angle, perhaps it is the case that if innovation has stalled, as Peter Thiel, among others, worries, it is because all of the low-hanging fruit has been plucked.

When we consider the efficient and final causes of technological innovation, however, we enter the complex and messy realm of human desires and cultural dynamics. It is in this realm that the meaning of technology and the direction of its unfolding are shaped. (As an aside, we might usefully frame the perennial debate between the technological determinists and the social constructivists as a failure to hold together and integrate Aristotle’s four causes into our understanding of technology.) It is this cultural matrix of technological innovation that most interests me, and it was at this murky target that my previous post was aimed.

Picking up on the parenthetical comment above, one other way of framing the problem of technological determinism is by understanding it as a type of self-fulfilling prophecy. Or, perhaps it is better to put it this way: What we call technological determinism, the view that technology drives history, is not itself a necessary characteristic of technology. Rather, technological determinism is the product of cultural capitulation. It is a symptom of social fragmentation.

Allow me to borrow from what I’ve written in another context to expand on this point via a discussion of the work of Jacques Ellul.

Ellul defined technique (la technique) as “the totality of methods rationally arrived at and having absolute efficiency (for a given stage of development) in every field of human activity.” This is an expansive definition that threatens, as Langdon Winner puts it, to make everything technology and technology everything. But Winner is willing to defend Ellul’s usage against its critics. In Winner’s view, Ellul’s expansive definition of technology rightly points to a “vast, diverse, ubiquitous totality that stands at the center of modern culture.”

Although Winner acknowledges the weaknesses of Ellul’s sprawling work, he is, on the whole, sympathetic to Ellul’s critique of technological society. Ellul believed that technology was autonomous in the sense that it dictated its own rules and was resistant to critique. “Technique has become autonomous,” Ellul concluded, “it has fashioned an omnivorous world which obeys its own laws and which has renounced all tradition.”

Additionally, Ellul claimed that technique “tolerates no judgment from without and accepts no limitation.” Moreover, “The power and autonomy of technique are so well secured that it, in its turn, has become the judge of what is moral, the creator of a new morality.” Ellul’s critics have noted that in statements such as these, he has effectively personified technology/technique. Winner thinks that this is exactly the case, but in his view this is not an unintended flaw in Ellul’s argument, it is his argument: “Technique is entirely anthropomorphic because human beings have become thoroughly technomorphic. Man has invested his life in a mass of methods, techniques, machines, rational-productive organizations, and networks. They are his vitality. He is theirs.”

And here is the relevant point for the purposes of this post: Ellul claims that he is not a technological determinist.

By this he means that technology did not always hold society hostage, and society’s relationship to technology did not have to play out the way that it did. He is merely diagnosing what is now the case. He points to ancient Greece and medieval Europe as two societies that kept technology in its place, as it were, as means circumscribed and directed by independent ends. Now, as he sees it, the situation is reversed. Technology dictates the ends for which it alone can be the means. Among the factors contributing to this new state of affairs, Ellul points to the rise of individualism in Western societies. The collapse of mediating institutions fractured society, leaving individuals exposed and isolated. Under these conditions, society was “perfectly malleable and remarkably flexible from both the intellectual and material points of view,” consequently “the technical phenomenon had its most favorable environment since the beginning of history.”

This last consideration is often forgotten by critics of Ellul’s work. In any case, it is, in my view, a point that is tremendously relevant to our contemporary discussions of technological innovation. As I put it yesterday, our focus on technological innovation as the key to the future is a symptom of a society in thrall to technique. Our creative and imaginative powers are thus constrained and caught in a loop of diminishing returns.

I hasten to add that this is surely not the whole picture, but it is, I think, an important aspect of it.

One final point related to my comments about our Enlightenment heritage. It is part of that heritage that we transformed technology into an idol of the god we named Progress. It was a tangible manifestation of a concept we deified, took on faith, and in which we invested our hope. If there is a palpable anxiety and reactionary defensiveness in our discussions about the possible stalling of technological innovation, it is because, like the prophets of Baal, we grow ever more frantic and feverish as it becomes apparent that the god we worshipped was false and our hopes are crushed. And it is no small thing to have your hopes crushed. But idols always break the hearts of their worshippers, as C.S. Lewis has put it.

Technology will not save us. Paradoxically, the sooner we realize that, the sooner we might actually begin to put it to good use.

Are Human Enhancement and AI Incompatible?

A few days ago, in a post featuring a series of links to stories about new and emerging technologies, I included a link to a review of Nick Bostrom’s new book, Superintelligence: Paths, Dangers, Strategies. Not long afterwards, I came across an essay adapted from Bostrom’s book on Slate’s “Future Tense” blog. The excerpt is given the cheerfully straightforward title, “You Should Be Terrified of Super Intelligent Machines.”

I’m not sure that Bostrom himself would put it quite like that. I’ve long thought of Bostrom as one of the more enthusiastic proponents of a posthumanist vision of the future. Admittedly, I’ve not read a great deal of his work (including this latest book). I first came across Bostrom’s name in Cary Wolfe’s What Is Posthumanism?, which led me to Bostrom’s article, “A History of Transhumanist Thought.”

For his part, Wolfe sought to articulate a more persistently posthumanist vision for posthumanism, one which dispensed with humanist assumptions about human nature altogether. In Wolfe’s view, Bostrom was guilty of building his transhumanist vision on a thoroughly humanist understanding of the human being. The humanism in view here, it’s worth clarifying, is that which we ordinarily associate with the Renaissance or the Enlightenment, one which highlights autonomous individuality, agency, and rationality. It is also one which assumes a Platonic or Cartesian mind/body dualism. Wolfe, like N. Katherine Hayles before him, finds this to be misguided and misleading, but I digress.

Whether Bostrom would’ve chosen such an alarmist title or not, his piece does urge us to lay aside the facile assumption that super-intelligent machines will be super-intelligent in a predictably human way. This is an anthropomorphizing fallacy. Consequently, we should consider the possibility that super-intelligent machines will pursue goals that may, as an unintended side-effect, lead to human extinction. I suspect that in the later parts of his book, Bostrom might have a few suggestions about how we might escape such a fate. I also suspect that none of these suggestions include the prospect of halting or limiting the work being done to create super-intelligent machines. In fact, judging from the chapter titles and sub-titles, it seems that the answer Bostrom advocates involves figuring out how to instill appropriate values in super-intelligent machines. This brings us back to the line of criticism articulated by Wolfe and Hayles: the traditionally humanist project of rational control and mastery is still the underlying reality.

It does seem reasonable for Bostrom, who is quite enthusiastic about the possibilities of human enhancement, to be a bit wary about the creation of super-intelligent machines. It would be unfortunate indeed if, having finally figured out how to download our consciousness or perfect a cyborg platform for it, a clever machine of our making later came around, pursuing some utterly trivial goal, and decided, without a hint of malice, that it needed to eradicate these post-human humans as a step toward the fulfillment of its task. Unfortunate, and nihilistically comic.

It is interesting to consider that these two goals we rather blithely pursue–human enhancement and artificial intelligence–may ultimately be incompatible. Of course, that is a speculative consideration, and, to some degree, so is the prospect of ever achieving either of those two goals, at least as their most ardent proponents envision their fulfillment. But let us consider it for just a moment anyway for what it might tell us about some contemporary versions of the posthumanist hope.

Years ago, C.S. Lewis famously warned that the human pursuit of mastery over Nature would eventually amount to the human pursuit of mastery over Humanity, and what this would really mean is the mastery of some humans over others. This argument is all the more compelling now, some 70 or so years after Lewis made it in The Abolition of Man. It would seem, though, that an updated version of that argument would need to include the further possibility that the tools we develop to gain mastery over nature and then humanity might finally destroy us, whatever form the “us” at that unforeseeable juncture happens to take. Perhaps this is the tacit anxiety animating Bostrom’s new work.

And this brings us back, once again, to the kind of humanism at the heart of posthumanism. The posthumanist vision that banks on some sort of eternal consciousness–the same posthumanist vision that leads Ray Kurzweil to take 150 vitamins a day–that posthumanist vision is still the vision of someone who intends to live forever in some clearly self-identifiable form. It is, in this respect, a thoroughly Western religious project insofar as it envisions and longs for the immortality of the individuated self. We might even go so far as to call it, in an obviously provocative move, a Christian heresy.

Finally, our potentially incompatible technical aspirations reveal something of the irrationality, or a-rationality if you prefer, at the heart of our most rational project. Technology and technical systems assume rationality in their construction and their operation. Thinking about their potential risks and trying to prevent and mitigate them is also a supremely rational undertaking. But at the heart of all of this rational work there is a colossal unspoken absence: there is a black hole of knowledge that, beginning with the simple fact of our inability to foresee the full ramifications of anything that we do or make, subsequently sucks into its darkness our ability to expertly anticipate and plan and manage with anything like the confident certainty we project.

It is one thing to live with this relative risk and uncertainty when we are talking about simple tools and machines (hammers, bicycles, etc.). It is another thing when we are talking about complex technical systems (automotive transportation, power grids, etc.). It is altogether something else when we are talking about technical systems that may fundamentally alter our humanity or else eventuate in its annihilation. The fact that we don’t even know how seriously to take these potential threats, that we cannot comfortably distinguish between what is still science fiction and what will, in fact, materialize in our lifetimes, that’s a symptom of the problem, too.

I keep coming back to the realization that our thinking about technology is often inadequate or ineffectual because it is starting from the wrong place; or, to put it another way, it is already proceeding from assumptions grounded in the dynamics of technology and technical systems, so it bends back toward the technological solution. If we already tacitly value efficiency, for example, if efficiency is already an assumed good that no longer needs to be argued for, then we will tend to pursue it by whatever possible means under all possible circumstances. Whenever new technologies appear, we will judge them in light of this governing preference for efficiency. If the new technology affords us a more efficient way of doing something, we will tend to embrace it.

But the question remains, why is efficiency a value that is so pervasively taken for granted? If the answer seems commonsensical, then, I’d humbly suggest that we need to examine it all the more critically. Perhaps we will find that we value efficiency because this virtue native to the working of technical and instrumental systems has spilled over into what had previously been non-technical and non-instrumental realms of human experience. Our thinking is thus already shaped (to put it in the most neutral way possible) by the very technical systems we are trying to think about.

This is but one example of the dynamic. Our ability to think clearly about technology will depend in large measure on our ability to extricate our thinking from the criteria and logic native to technological systems. This is, I fully realize, a difficult task. I would never claim that I’ve achieved this clarity of thought myself, but I do believe that our thinking about technology depends on it.

There’s a lot more to be said, but I’ll leave it there for now. Your thoughts, as always, are welcome.

A Few Items for Your Consideration

Here are a few glimpses of the future ranging from the near and plausible, to the distant and uncertain. In another world–one, I suppose, in which I get paid to write these posts–I’d write more about each. In this world, I simply pass them along for your consideration.

Google Glass App Reads Your Emotions

“A new Glassware App for Google Glass will uncover a person’s emotion, age range and gender just by facial recognition technology ….

Facial recognition has always been seen with nervousness, as people tend to prefer privacy over the ability to see a stranger’s age or gender. But these two apps prove sometimes letting a robot know you’re sad can help for a better relationship between fellow humans. Letting the robot lead has proven to increase human productivity and better the ebb and flow of a work space, a partnership, any situation dealing with human communication.

The SHORE app is currently not available for download, but you can try US+ now. May the robots guide us to a more humane future.”

GM Cars to Monitor Drivers

“General Motors, the largest US auto manufacturer by sales, is preparing to launch the world’s first mass-produced cars with eye- and head-tracking technology that can tell whether drivers are distracted, according to people with knowledge of the plans ….

The company is investing in technology that will be able to tell how hard a driver is thinking by monitoring the dilation of the pupils, and combines facial information with sensors for vital signs such as blood alcohol levels and heart rate.”

Electrical Brain Stimulation

“Transcranial direct current stimulation (TDCS), which passes small electrical currents directly on to the scalp, stimulates the nerve cells in the brain (neurons). It’s non-invasive, extremely mild and the US military even uses TDCS in an attempt to improve the performance of its drone pilots.

The idea is that it makes the neurons more likely to fire and preliminary research suggests electrical simulation can improve attention as well as have a positive impact on people with cognitive impairments and depression ….

And more worryingly for him, people are also increasingly making brain stimulation kits themselves. This easily ‘puts the technology in the realms of clever teenagers,’ adds Dr Davis.

An active forum on reddit is devoted to the technology, and people there have complained of ‘burning to the scalp’. Another user wrote that they ‘seemed to be getting angry frequently’ after using TDCS.”

Preparing for Superintelligent AI

“Bostrom takes a cautious view of the timing but believes that, once made, human-level AI is likely to lead to a far higher level of ‘superintelligence’ faster than most experts expect – and that its impact is likely either to be very good or very bad for humanity.

The book enters more original territory when discussing the emergence of superintelligence. The sci-fi scenario of intelligent machines taking over the world could become a reality very soon after their powers surpass the human brain, Bostrom argues. Machines could improve their own capabilities far faster than human computer scientists.”

We’ve got some thinking to do, folks, careful, patient thinking. Happily, we don’t have to do that thinking alone and in isolation. Here is Evan Selinger helping us think clearly about our digital tools with his usual, thoughtful analysis: “Why Your Devices Shouldn’t Do the Work of Being You.”

Here, too, is a critical appraisal of the religiously intoned hopes of the cult of the Singularity.

Finally, Nick Carr invites us to cautiously consider the potential long-term consequences of the recently unveiled Apple Watch since “never before have we had a tool that promises to be so intimate a companion and so diligent a monitor as the Apple Watch.”

Waiting for Socrates … So We Can Kill Him Again and Post the Video on YouTube

It will come as no surprise, I’m sure, if I tell you that the wells of online discourse are poisoned. It will come as no surprise because critics have complained about the tone of online discourse for as long as people have interacted with one another online. In fact, we more or less take the toxic, volatile nature of online discourse for granted. “Don’t read the comments” is about as routine a piece of advice as “look both ways before crossing the street.” And, of course, it is also true that complaints about the coarsening of public discourse in general have been around for a lot longer than the Internet and digital media.

That said, I’ve been intrigued, heartened actually, by a recent round of posts bemoaning the state of online rhetoric from some of the most thoughtful people whose work I follow. Here is Freddie deBoer lamenting the rhetoric of the left, and here is Matthew Anderson noting much of the same on the right. Here is Alan Jacobs on why he’s stepping away from Twitter. Follow any of those links and you’ll find another series of links to thoughtful, articulate writers all telling us, more or less, that they’ve had enough. This piece urges civility and it suggests, hopefully (naively?), that the “Internet” will learn soon enough to police itself, but the evidence it cites along the way seems rather to undermine such hopefulness. I won’t bother to point you to some of the worst of what I’ve regrettably encountered online in recent weeks.

Why is this the case? Why, as David Sessions recently put it, is the state of the Internet awful?

Like everyone else, I have scattered thoughts about this. For one thing, the nature of the medium seems to encourage rancor, incivility, misunderstanding, and worse. Anonymity has something to do with this, and so does the abstraction of the body from the context of communication.

Along the same media-ecological lines, Walter Ong noted that oral discourse tends to be agonistic and literate discourse tends to be irenic. Online discourse tends to be conducted in writing, which might seem to challenge Ong’s characterization. But just as television and radio constituted what Ong called secondary orality, so might we say that social media is a form of secondary literacy, blurring the distinctions between orality and literacy. It is text based, but, like oral discourse, it brings people into a context of relative communicative immediacy. That is to say that through social media people are responding to one another in public and in short order, more as they would in a face-to-face encounter, for example, than in private letters exchanged over the course of months.

In theory, writing affords us the temporal space to be more thoughtful and precise in expressing our ideas, but, in practice, the expectations of immediacy in digital contexts collapse that space. So we lose the strengths of each medium: we get none of the meaning-making cues of face-to-face communication nor any of the time for reflection that written communication ordinarily grants. The media context, then, ends up being rife with misunderstanding and prone to agonism; it encourages performative pugilism.

Also, as the moral philosopher Alasdair MacIntyre pointed out some time ago, we no longer operate with a set of broadly shared assumptions about what is good and what shape a good life should take. Our ethical reasoning tends not to be built on the same foundation. Because we are reasoning from incompatible moral premises, the conclusions reached by two opposing parties tend to be interpreted as sheer stupidity or moral obtuseness. In other words, because our arguments, proceeding as they do from such disparate moral frameworks, fail to convince and persuade, we begin to assume that those who will not yield to our moral vision must thus be fools or worse. Moreover, we conclude, fools and miscreants cannot be argued with; they can only be shamed, shouted down, or otherwise silenced.

Digital dualism is also to blame. Some people seem to operate under the assumption that they are not really racists, misogynists, anti-Semites, etc.–they just play one on Twitter. It really is much too late in the game to play that tired card.

Perhaps, too, we’ve conflated truth and identity in such a way that we cannot conceive of a challenge to our views as anything other than a challenge to our humanity. Conversely, it seems that in some highly-charged contexts being wrong can cost you the basic respect one might be owed as a fellow human being.

Finally, the Internet is awful because, frankly, people are awful. We all are; at least we all can be under the right circumstances. As Solzhenitsyn put it, “If only there were evil people somewhere insidiously committing evil deeds, and it were necessary only to separate them from the rest of us and destroy them. But the line dividing good and evil cuts through the heart of every human being.”

To that list, I want to offer just one more consideration: a little knowledge is a dangerous thing, and there are few things the Internet does better than giving everyone a little knowledge. A little knowledge is a dangerous thing because it is just enough to give us the illusion of mastery and a sense of authority. This illusion, encouraged by the myth of having all the world’s information at our fingertips, has encouraged us to believe that by skimming an article here or reading the summary of a book there we thus become experts who may now liberally pontificate about the most complex and divisive issues with unbounded moral and intellectual authority. This is the worst kind of insufferable foolishness, that which mistakes itself for wisdom without a hint of irony.

Real knowledge, on the other hand, is constantly aware of all that it does not know. The more you learn, the more you realize how much you don’t know, and the more hesitant you’ll be to speak as if you’ve got everything figured out. Getting past that threshold of “a little knowledge” tends to breed humility and create the conditions that make genuine dialogue possible. But that threshold will never be crossed if all we ever do is skim the surface of reality, and this seems to be the mode of engagement encouraged by the information ecosystem sustained by digital media.

We’re in need of another Socrates who will teach us once again that the way of wisdom starts with a deep awareness of our own ignorance. Of course, we’d kill him too, after a good skewering on Twitter, and probably without the dignity of hemlock. A posthumous skewering would follow, naturally, after the video of his death got passed around on Reddit and YouTube.

I don’t want to leave things on that cheery note, but the fact is that I don’t have a grand scheme for making online discourse civil, informed, and thoughtful. I’m pretty sure, though, that things will not simply work themselves out for the better without deliberate and sustained effort. Consider how W.H. Auden framed the difference between traditional cultures and modernity:

“The old pre-industrial community and culture are gone and cannot be brought back. Nor is it desirable that they should be. They were too unjust, too squalid, and too custom-bound. Virtues which were once nursed unconsciously by the forces of nature must now be recovered and fostered by a deliberate effort of the will and the intelligence. In the future, societies will not grow of themselves. They will be either made consciously or decay.”

For better or worse, or more likely both, this is where we find ourselves: either we deploy a deliberate effort of will and intelligence or we face perpetual decay. Who knows, maybe the best we can do is to form and maintain enclaves of civility and thoughtfulness amid the rancor, communities of discourse where meaningful conversation can be cultivated. These would probably remain small communities, but their success would be no small thing.


Update: After publishing, I read Nick Carr’s post on the revival of blogs and the decline of Big Internet. “So, yeah, I’m down with this retro movement,” Carr writes, “Bring back personal blogs. Bring back RSS. Bring back the fun. Screw Big Internet.” I thought that was good news in light of my closing paragraph.

And, just in case you need more by way of diagnosis, there’s this: “A Second Look At The Giant Garbage Pile That Is Online Media, 2014.”

Our Little Apocalypses

An incoming link to my synopsis of Melvin Kranzberg’s Six Laws of Technology alerted me to a short post on Quartz about a new book by an author named Michael Harris. The book, The End of Absence: Reclaiming What We’ve Lost in a World of Constant Connection, explores the tradeoffs induced by the advent of the Internet. Having not read the book, I obviously can’t say much about it, but I was intrigued by one angle Harris takes that comes across in the Quartz piece.

Harris’s book is focused on the generation, a fuzzy category to be sure, that came of age just before the Internet exploded onto the scene in the early 90s. Here’s Harris:

“If you were born before 1985, then you know what life is like both with the internet and without. You are making the pilgrimage from Before to After.”

“If we’re the last people in history to know life before the internet, we are also the only ones who will ever speak, as it were, both languages. We are the only fluent translators of Before and After.”

It would be interesting to read what Harris does with this framing. In any case, it’s something I’ve thought about often. This is my fifteenth year teaching. Over the years I’ve noticed, with each new class, how the world that I knew as a child and as a young adult recedes further and further into the murky past. As you might guess, digital technology has been one of the most telling indicators.

Except for a brief flirtation with Prodigy on an MS-DOS machine with a monochrome screen, the Internet did not come into my life until I was a freshman in college. I’m one of those people Harris is writing about, one of the Last Generation to know life before the Internet. Putting it that way threatens to steer us into a rather unseemly romanticism, and, knowing that I’m temperamentally drawn to dying lights, I want to make sure I don’t give way to it. That said, it does seem to me that those who’ve known the Before and After, as Harris puts it, are in a unique position to evaluate the changes. Experience, after all, is irreducible and incommunicable.

One of the recurring rhetorical tropes that I’ve listed as a Borg Complex symptom runs like this: every new technology elicits criticism and evokes fear, yet society always survives the so-called moral panic or techno-panic; therefore, QED, those critiques and fears, including those being presently expressed, are always misguided and overblown. It’s a pattern of thought I’ve complained about more than once. In fact, it features as the tenth of my unsolicited points of advice to tech writers.

Now, while it is true, as Adam Thierer has noted here, that we should try to understand how societies and individuals have come to cope with or otherwise integrate new technologies, it is not the case that such negotiated settlements are always unalloyed goods for society or for individuals. But this line of argument is compelling only to the degree that living memory of what has been displaced has been lost. I may know at an intellectual level what has been lost, because I read about it in a book, for example, but it is another thing altogether to have felt that loss. We move on, in other words, because we forget the losses, or, more to the point, because we never knew or experienced the losses for ourselves–they were always someone else’s problem.

To be very clear and to avoid the pedantic, sanctimonious reply–although, in all honesty, I’ve gotten so little of that on this blog that I’ve come to think that a magical filter of civility vets all those who come by–let me affirm that yes, of course, I certainly would’ve made many trade-offs along the way, too. To recognize costs and losses does not mean that you always refuse to incur them, it simply means that you might incur them in something other than a naive, triumphalist spirit.

Around this time last year, an excerpt from Jonathan Franzen’s then-forthcoming edited work on Karl Kraus was published in the Guardian; it was panned, frequently and forcefully, and deservedly so in some respects. But the conclusion of the essay struck me then as being on to something.

“Maybe … apocalypse is, paradoxically, always individual, always personal,” Franzen wrote,

“I have a brief tenure on earth, bracketed by infinities of nothingness, and during the first part of this tenure I form an attachment to a particular set of human values that are shaped inevitably by my social circumstances. If I’d been born in 1159, when the world was steadier, I might well have felt, at fifty-three, that the next generation would share my values and appreciate the same things I appreciated; no apocalypse pending.”

But, of course, he wasn’t. He was born in the modern world, like all of us, and this has meant change, unrelenting change. Here is where the Austrian writer Karl Kraus, whose life straddled the turn of the twentieth century, comes in: “Kraus was the first great instance of a writer fully experiencing how modernity, whose essence is the accelerating rate of change, in itself creates the conditions for personal apocalypse.” Perhaps. I’m tempted to quibble with this claim. The words of John Donne, “’Tis all in pieces, all coherence gone,” come to mind. Yet, even if Franzen is not quite right about the historical details, I think he’s given honest voice to a common experience of modernity:

“The experience of each succeeding generation is so different from that of the previous one that there will always be people to whom it seems that the key values have been lost and there can be no more posterity. As long as modernity lasts, all days will feel to someone like the last days of humanity. Kraus’s rage and his sense of doom and apocalypse may be the antithesis of the upbeat rhetoric of Progress, but like that rhetoric, they remain an unchanging modality of modernity.”

This is, perhaps, a bit melodramatic, and it is certainly not all that could be said on the matter, or all that should be said. But Franzen is telling us something about what it feels like to be alive these days. It’s true, Franzen is not the best public face for those who are marginalized and swept aside by the tides of technological change, tides which do not lift all boats, tides which may, in fact, sink a great many. But there are such people, and we do well to temper our enthusiasm long enough to enter, so far as it is possible, into their experience. In fact, precisely because we do not have a common culture to fall back on, we must work extraordinarily hard to understand one another.

Franzen is still working on the assumption that these little personal apocalypses are a generational phenomenon. I’d argue that he’s underestimated the situation. The rate of change may be such that the apocalypses are now intra-generational. It is not simply that my world is not my parents’ world; it is that my world now is not what my world was a decade ago. We are all exiles now, displaced from a world we cannot reach because it fades away just as its contours begin to materialize. This explains why, as I wrote earlier this year, nostalgia is not so much a desire for a place or a time as it is a desire for some lost version of ourselves. We are like Margaret, who, in Hopkins’ poem, laments the passing of the seasons, Margaret to whom the poet’s voice says kindly, “It is Margaret you mourn for.”

Although I do believe that certain kinds of change ought to be resisted–I’d be a fool not to–none of what I’ve been trying to get at in this post is about resisting change in itself. Rather, I think all I’ve been trying to say is this: we must learn to take account of how differently we experience the changing world so that we might best help one another as we live through the change that must come. That is all.