A Technological History of Modernity

I’m writing chiefly to commend to you what Alan Jacobs has recently called his “big fat intellectual project.”

Jacobs describes the topic that has driven his work over the last few years as follows: “The ways that technocratic modernity has changed the possibilities for religious belief, and the understanding of those changes that we get from studying the literature that has been attentive to them.” He adds,

“But literature has not been merely an observer of these vast seismic tremors; it has been a participant, insofar as literature has been, for many, the chief means by which a disenchanted world can be re-enchanted — but not fully — and by which buffered selves can become porous again — but not wholly. There are powerful literary responses to technocratic modernity that serve simultaneously as case studies (what it’s like to be modern) and diagnostic (what’s to be done about being modern).”

To my mind, such a project enjoys a distinguished pedigree, at least in some important aspects. I think, for example, of Leo Marx’s classic, The Machine in the Garden: Technology and the Pastoral Ideal in America, or the manner in which Katherine Hayles weaves close readings of contemporary fiction into her explorations of digital technology. Not that he needs me to say this, but I’m certain Jacobs’ work along these lines, particularly with its emphasis on religious belief, will be valuable and timely. You should click through to find links to a handful of essays Jacobs has already written in this vein.

On his blog, Text Patterns, Jacobs has over the last few weeks been describing one important thread of this wider project: a technological history of modernity, which, naturally, I find especially intriguing and necessary.

The first post in which Jacobs articulates the need for a technological history of modernity began as a comment on Matthew Crawford’s The World Beyond Your Head. In it, Jacobs repeats his critique of the “ideas have consequences” model of history, one in which the ideas of philosophers drive cultural change.

Jacobs took issue with the “ideas have consequences” model of cultural change in his critique of Neo-Thomist accounts of modernity, i.e., those that pin modernity’s ills on the nominalist challenge to the so-called medieval/Thomist synthesis of faith and reason. He finds that Crawford commits a similar error in attributing the present attention economy, in large measure, to conclusions about the will and the individual arrived at by Enlightenment thinkers.

Beyond the criticisms specific to the debate about the historical consequences of nominalism and the origins of our attention economy, Jacobs articulated concerns that apply more broadly to any account of cultural change that relies too heavily on the work of philosophers and theologians while paying too little attention to the significance of the material conditions of lived experience.

Moving toward the need for a technological history of modernity, Jacobs writes, “What I call the Oppenheimer Principle — ‘When you see something that is technically sweet, you go ahead and do it and argue about what to do about it only after you’ve had your technical success’ — has worked far more powerfully to shape our world than any of our master thinkers. Indeed, those thinkers are, in ways we scarcely understand, themselves the product of the Oppenheimer Principle.”

Or, as Ken Myers, a cultural critic whom Jacobs and I both hold in high esteem, often puts it: ideas may have consequences, but ideas also have antecedents. These antecedents may be described as unarticulated assumptions derived from the bodily, emotional, and, yes, cognitive consequences of society’s political, economic, and technological infrastructure. I’m not sure if Jacobs would endorse this move, but I find it helpful to talk about these assumptions by borrowing the concept of “plausibility structures” first articulated by the sociologist Peter Berger.

For Berger, plausibility structures are those chiefly social realities that render certain ideas plausible, compelling, or meaningful apart from whatever truth value they might be independently or objectively assigned. Or, as Berger has frequently quipped, they are the factors that make it easier to be a Baptist in Texas than in India.

Again, Berger has in mind interpersonal relationships and institutional practices, but I think we may usefully frame our technological milieu similarly. In other words, to say that our technological milieu, our material culture, constitutes a set of plausibility structures is to say that we derive tacit assumptions about what is possible, what is good, and what is valuable from merely going about our daily business with and through our tools. These implicit valuations and horizons of the possible are the unspoken context within which we judge and evaluate explicit ideas and propositions.

Consequently, Jacobs is quite right to insist that we understand the emergence of modernity as more than the triumph of a set of ideas about individuals, democracy, reason, progress, etc. And, as he puts it,

“Those of us who — out of theological conviction or out of some other conviction — have some serious doubts about the turn that modernity has taken have been far too neglectful of this material, economic, and technological history. We need to remedy that deficiency. And someone needs to write a really comprehensive and ambitious technological history of modernity. I don’t think I’m up to that challenge, but if no one steps up to the plate….”

All of this to say that I’m enthusiastic about the project Jacobs has presented and eager to see how it unfolds. I have a few more thoughts about it that I hope to post in the coming days (why, for example, Jacobs’ project is more appealing than Evgeny Morozov’s vision for tech criticism), but those may or may not materialize. Whatever the case, I think you’ll do well to tune in to Jacobs’ work on this as it progresses.

Do Things Want?

Alan Jacobs’ 79 Theses on Technology were offered in the spirit of a medieval disputation, and they succeeded in spurring a number of stimulating responses in a series of essays posted to the Infernal Machine over the last two weeks. Along with my response to Jacobs’ provocations, I wanted to engage a debate between Jacobs and Ned O’Gorman about whether or not we may meaningfully speak of what technologies want. Here’s a synopsis of the exchange with my own commentary along the way.

O’Gorman’s initial response focused on the following theses from Jacobs:

40. Kelly tells us “What Technology Wants,” but it doesn’t: We want, with technology as our instrument.
41. The agency that in the 1970s philosophers & theorists ascribed to language is now being ascribed to technology. These are evasions of the human.
42. Our current electronic technologies make competent servants, annoyingly capricious masters, and tragically incompetent gods.
43. Therefore when Kelly says, “I think technology is something that can give meaning to our lives,” he seeks to promote what technology does worst.
44. We try to give power to our idols so as to be absolved of the responsibilities of human agency. The more they have, the less we have.

46. The cyborg dream is the ultimate extension of this idolatry: to erase the boundaries between our selves and our tools.

O’Gorman framed these theses by saying that he found it “perplexing” that Jacobs “is so seemingly unsympathetic to the meaningfulness of things, the class to which technologies belong.” I’m not sure, however, that Jacobs was denying the meaningfulness of things; rather, as I read him, he is contesting the claim that it is from technology that our lives derive their meaning. That may seem a fine distinction, but I think it is an important one. In any case, a little precision about what exactly “meaning” entails may go a long way toward clarifying that aspect of the discussion.

A little further on, O’Gorman shifts to the question of agency: “Our technological artifacts aren’t wholly distinct from human agency; they are bound up with it.” It is on this ground that the debate mostly unfolds, although there is more than a little slippage between the question of meaning and the question of agency.

O’Gorman appealed to Mary Carruthers’ fascinating study of the place of memory in medieval culture, The Book of Memory: A Study of Memory in Medieval Culture, to support his claim, but I’m not sure the passage he cites does so. He is seeking to establish, as I read him, two claims: first, that technologies are things and things are meaningful; second, that we may properly attribute agency to technology/things. Now here’s the passage he cites from Carruthers’ work (brackets and ellipses are O’Gorman’s):

“[In the middle ages] interpretation is not attributed to any intention of the man [the author]…but rather to something understood to reside in the text itself.… [T]he important “intention” is within the work itself, as its res, a cluster of meanings which are only partially revealed in its original statement…. What keeps such a view of interpretation from being mere readerly solipsism is precisely the notion of res—the text has a sense within it which is independent of the reader, and which must be amplified, dilated, and broken-out from its words….”

“Things, in this instance manuscripts,” O’Gorman adds, “are indeed meaningful and powerful.” But in this instance, the thing (res) in view is not, in fact, the manuscripts. As Carruthers explains at various other points in The Book of Memory, the res in this context is not a material thing, but something closer to the pre-linguistic essence or idea or concept that the written words convey. It is an immaterial thing.

That said, there are interesting studies that do point to the significance of materiality in the medieval context. Ivan Illich’s In the Vineyard of the Text, for example, dwells at length on medieval reading as a bodily experience, an “ascetic discipline focused by a technical object.” Then there’s Caroline Bynum’s fascinating Christian Materiality: An Essay on Religion in Late Medieval Europe, which explores the multifarious ways matter was experienced and theorized in the late middle ages.

Bynum concludes that “current theories that have mostly been used to understand medieval objects are right to attribute agency to objects, but it is an agency that is, in the final analysis, both too metaphorical and too literal.” She adds that insofar as modern theorizing “takes as self-evident the boundary between human and thing, part and whole, mimesis and material, animate and inanimate,” it may be usefully unsettled by an encounter with medieval theories and praxis, which “operated not from a modern need to break down such boundaries but from a sense that they were porous in some cases, nonexistent in others.”

Of course, taking up Bynum’s suggestion does not entail a re-imagining of our smartphone as a medieval relic, although one suspects that there is but a marginal difference in the degree of reverence granted to both objects. The question is still how we might best understand and articulate the complex relationship between our selves and our tools.

In his reply to O’Gorman, Jacobs focused on O’Gorman’s penultimate paragraph:

“Of course technologies want. The button wants to be pushed; the trigger wants to be pulled; the text wants to be read—each of these want as much as I want to go to bed, get a drink, or get up out of my chair and walk around, though they may want in a different way than I want. To reserve ‘wanting’ for will-bearing creatures is to commit oneself to the philosophical voluntarianism that undergirds technological instrumentalism.”

It’s an interesting feature of the exchange from this point forward that O’Gorman and Jacobs at once emphatically disagree and yet share very similar concerns. The disagreement is centered chiefly on the question of whether or not it is helpful or even meaningful to speak of technologies “wanting.” Their broad agreement, as I read their exchange, is about the inadequacy of what O’Gorman calls “philosophical voluntarianism” and “technological instrumentalism.”

In other words, if you begin by assuming that the most important thing about us is our ability to make rational and unencumbered choices, then you’ll also assume that technologies are neutral tools over which we can achieve complete mastery.

If O’Gorman means what I think he means by this (and what Jacobs takes him to mean), then I share his concerns as well. We cannot think well about technology if we regard technologies as mere tools that we use for good or evil. This is the “guns don’t kill people, people kill people” approach to the ethics of technology, and it is, indeed, inadequate as a way of thinking about the ethical status of artifacts, as I’ve argued repeatedly.

Jacobs grants these concerns, but, with a nod to the Borg Complex, he also thinks that we do not help ourselves in facing them if we talk about technologies “wanting.” Here’s Jacobs’ conclusion:

“It seems that [O’Gorman] thinks the dangers of voluntarism are so great that they must be contested by attributing what can only be a purely fictional agency to tools, whereas I believe that the conceptual confusion this creates leads to a loss of a necessary focus on human responsibility, and an inability to confront the political dimensions of technological modernity.”

This seems basically right to me, but it prompted a second reply from O’Gorman that brought some further clarity to the debate. O’Gorman identified three distinct “directions” his disagreement with Jacobs takes: rhetorical, ontological, and ethical.

He frames his discussion of these three differences by insisting that technologies are meaningful by virtue of their “structure of intention,” which entails a technology’s affordances and the web of practices and discourse in which the technology is embedded. So far, so good, although I don’t think intention is the best word for the job. From here O’Gorman goes on to show why he thinks it is “rhetorically legitimate, ontologically plausible, and ethically justified to say that technologies can want.”

Rhetorically, O’Gorman appears to be advocating a Wittgensteinian “look and see” approach: let’s see how people are using language before we rush to delimit a word’s semantic range. To a certain degree, I can get behind this. I’ve advocated as much when it comes to the way we use the word “technology,” itself a term that abstracts and obfuscates. But I’m not sure that once we look we will find much. While our language may animate or personify our technology, I doubt we typically speak about technology “wanting” anything. We do not ordinarily say things like “my iPhone wants to be charged,” “the car wants to go out for a drive,” or “the computer wants to play.” I can think of an exception or two; I have heard, for example, someone explain to an anxious passenger that the airplane “wants” to stay in the air. But the phrase “what technology wants” owes much of its currency, such as it is, to the title of Kevin Kelly’s book, and I’m pretty sure Kelly means more by it than what O’Gorman might be prepared to endorse.

Ontologically, O’Gorman is “skeptical of attempts to tie wanting to will because willfulness is only one kind of wanting.” “What do we do with instinct, bodily desires, sensations, affections, and the numerous other forms of ‘wanting’ that do not seem to be a product of our will?” he wonders. Fair enough, but all of the examples he cites are connected with beings that are, in a literal sense, alive. Of course I can’t attribute all of my desires to my conscious will; sure, my dog wants to eat, and maybe in some sense my plant wants water. But there’s still a leap involved in saying that my clock wants to tell time. Wanting may not be neatly tied to willing, but I don’t see how it is not tied to sentience.

There’s one other point worth making at this juncture. I’m quite sympathetic to what is basically a phenomenological account of how our tools quietly slip into our subjective, embodied experience of the world. This is why I can embrace so much of O’Gorman’s case. Thinking back many years, I can distinctly remember a moment when I held a baseball in my hand and reflected on how powerfully I felt the urge to throw it, even though I was standing inside my home. This feeling is, I think, what O’Gorman wants us to recognize. The baseball wanted to be thrown! But how far does this kind of phenomenological account take us?

I think it runs into limits when we talk about technologies that do not enter quite so easily into the circuit of mind, body, and world. The case for the language of wanting is strongest the closer a tool is to the body; it weakens the further from the body we get. Even if we grant that the baseball in hand feels like it wants to be thrown, what exactly does the weather satellite in orbit want? I think this strongly suggests the degree to which the wanting is properly ours, even while acknowledging the degree to which it is activated by objects in our experience.

Finally, O’Gorman thinks that it is “perfectly legitimate and indeed ethically good and right to speak of technologies as ‘wanting.’” He believes this to be so because “wanting” is not only a matter of willing; it is “more broadly to embody a structure of intention within a given context or set of contexts.” Further, “Will-bearing and non-will-bearing things, animate and inanimate things, can embody such a structure of intention.”

“It is good and right,” O’Gorman insists, “to call this ‘wanting’ because ‘wanting’ suggests that things, even machine things, have an active presence in our life—they are intentional” and, what’s more, their “active presence cannot be neatly traced back to their design and, ultimately, some intending human.”

I agree with O’Gorman that the ethical considerations are paramount, but I’m finally unpersuaded that we are on firmer ground when we speak of technologies wanting, even though I recognize the undeniable importance of the dynamics that O’Gorman wants to acknowledge by speaking so.

Consider what O’Gorman calls the “structure of intention.” I’m not sure intention is the best word to use here. Intentionality resides in the subjective experience of the “I,” but it is true, as phenomenologists have always recognized, that intentionality is not unilaterally directed by the self-consciously willing “I.” It has conscious and non-conscious dimensions, and it may be beckoned and solicited by the world that it simultaneously construes through the workings of perception.

I think we can get at what O’Gorman rightly wants us to acknowledge without attributing “wanting” to objects. We may say, for instance, that objects activate our wanting as they are intended to do by design, and also in ways that are unintended by any person. But it’s best to think of this latter wanting as an unpredictable surplus of human intentionality rather than to posit a non-human source of wanting. The wanting is always mine, but it may be prompted, solicited, activated, encouraged, fostered, etc. by aspects of the non-human world. So we may correctly talk about a structure of desire that incorporates non-human aspects of the world and thereby acknowledge the situated nature of our own wanting. Within certain contexts, if we were so inclined, we might even call it a structure of temptation.

To fight the good fight, as it were, we must acknowledge how technology’s consequences exceed and slip loose of our cost/benefit analyses, our rational planning, and our best intentions. We must take seriously how the use of technologies shapes our perception of the world and both enables and constrains our thinking and acting. But talk about what technology wants will ultimately obscure moral responsibility. “What the machine/algorithm wanted” too easily becomes the new “I was just following orders.” I believe this to be true because I believe that we have a proclivity to evade responsibility. Best, then, not to allow our language to abet our evasions.

The Spectrum of Attention

Late last month, Alan Jacobs presented 79 Theses on Technology at a seminar hosted by the Institute for Advanced Studies in Culture at the University of Virginia. The theses, dealing chiefly with the problem of attention in digital culture, were posted to the Infernal Machine, a terrific blog hosted by the Institute, edited by Chad Wellmon, and devoted to reflection on technology, ethics, and the human person. I’ve long thought very highly of both Jacobs and the Institute, so when Wellmon kindly extended an invitation to attend the seminar, I gladly and gratefully accepted.

Wellmon has also arranged for a series of responses to Jacobs’ theses, which have appeared on The Infernal Machine. Each of these is worth considering. In my response, “The Spectrum of Attention,” I took the opportunity to work out a provisional taxonomy of attention that considers the difference our bodies and our tools make to what we generally call attention.

Here’s a quick excerpt:

We can think of attention as a dance whereby we both lead and are led. This image suggests that receptivity and directedness do indeed work together. The proficient dancer knows when to lead and when to be led, and she also knows that such knowledge emerges out of the dance itself. This analogy reminds us, as well, that attention is the unity of body and mind making its way in a world that can be solicitous of its attention. The analogy also raises a critical question: How ought we conceive of attention given that we are embodied creatures?

Click through to read the rest.

The Treadmill Always Wins

I recently suggested that it’s good to occasionally ask ourselves, “Why do we read?” That question was prompted in part by the unhealthy tendencies that I find myself struggling to resist in the context of online reading. These tendencies are best summed up by the phrase “reading to have read,” a phrase I borrowed from Alan Jacobs’ excellent The Pleasures of Reading in an Age of Distraction. Incidentally, telling you to read this book is only the first piece of advice that I’ll offer in this post, but it might be the best.

As it turns out, Jacobs revisited his comments on this score in a post discussing a relatively new speed-reading app called Spritz. The app sells itself as “reading reimagined,” rather brutally so if you ask me. It flashes words, one at a time, at rates of up to 600 words per minute (a new word every tenth of a second) using its “patent-pending Redicle” technology. In any case, you can follow the links to learn more about it if you’re so inclined. Near the close of his post, after citing some trenchant observations by Julie Sedivy, Jacobs observed,

“It’s all too easy to imagine people who are taken with Spritz making decisions about what to read based on what’s amenable to ‘spritzing.’ But that’s inevitable as long as [they are thinking] of reading as something to have done, something to get through.”

Sedivy and Jacobs both are pointing out one of the more insidious temptations technology poses, the temptation to fit ourselves to the tool. In this case, the temptation is to prefer the kind of reading that lends itself to the questionable efficiencies afforded by Spritz. As Sedivy puts it, these are “texts with simpler sentences, sentences in which the complexity of information is evenly distributed, sentences that avoid unexpected twists or turns, and sentences which explicitly lay out their intended meanings, without requiring readers to mentally fill in the blanks through inference.”

In his post, Jacobs also linked to Ian Bogost’s insightful take on Spritz, which was titled, interestingly enough, “Reading to Have Read.” Bogost questions the supposedly scientific claims made for Spritz by its creators. More importantly, though, he takes Spritz to be symptomatic of a larger problem:

“In today’s attention economy, reading materials (we call it ‘content’ now) have ceased to be created and disseminated for understanding. Instead, they exist first (and primarily) for mere encounter. This condition doesn’t necessarily signal the degradation of reading; it also arises from the surplus of content we are invited and even expected to read. But it’s a Sisyphean task. We can no longer reasonably hope to read all our emails, let alone our friends’ Facebook updates or tweets or blog posts, let alone the hundreds of daily articles and listicles and quizzes and the like. Longreads may offer stories that are best enjoyed away from your desk, but what good are such moments when the #longreads queue is so full? Like books bought to be shelved, articles are saved for a later that never comes.”

Exactly. And a little further on, Bogost adds,

“Spritzing is reading to get it over with. It is perhaps no accident that Spritze means injection in German. Like a medical procedure, reading has become an encumbrance that is as necessary as it is undesirable. “Oh God,” we think. “Another office email thread. Another timely tumblr. Another Atlantic article.” We want to read them—really to read them, to incorporate them—but the collective weight of so much content goes straight to the thighs and guts and asses of our souls. It’s too much to bear. Who wouldn’t want it to course right through, to pass unencumbered through eyeballs and neurons just to make way for the deluge behind it?”

Bob Brown’s Reading Machine

That paragraph eloquently articulates, better than I could, the concerns that motivated my earlier post. I have nothing to add to what Sedivy, Jacobs, and Bogost have already said about Spritz except to mention that I’m surprised no one, to my knowledge, has alluded to Bob Brown’s Readies. In his 1930 manifesto, Brown declared, “The written word hasn’t kept up with the age,” and he developed a mechanical reading device to meet that challenge. Brown’s reading machine, which you can read about here, was envisioned as an escape from the page, not unlike Spritz. But as Abigail Thomas puts it, “It is evident that through the materiality of the page acting as the imagined machine, that the reader becomes the machine themselves.” Of course, I wouldn’t know of Brown were it not that one of my grad school profs, Craig Saper, was deeply interested in Brown’s work.

That said, I do have one more thing to add. Spritz illustrates yet another temptation posed by modern technologies. We might call it the challenge of the treadmill. When I was in my early twenties and still in my more athletic phase, I took a stress test on a treadmill. The cardiologist told me to keep pace as long as I could, but, he added, “the treadmill always wins.” Of course, being modestly competitive and not a little prideful, I took that as a challenge. I ran hard on that machine, but, no surprise, the treadmill won.

So much of our response to the quickening pace induced by modern technologies is to quicken our own stride, or else to find other technologies that will help us do things more quickly, more efficiently. But again, the treadmill always wins. Maybe the answer to the challenge of the treadmill is simply to get off the thing.

But that decision doesn’t come easily for us. We have a hard time acknowledging our limitations. In fact, so much of the rhetoric surrounding technology in the western tradition involves precisely the promise of transcending our bodily limitations. Exhibit A, of course, is the transhumanist project.

In response, however, I submit the more humane vision of the agrarian and poet Wendell Berry. In “Faustian Economics,” Berry, speaking of the “fantasy of human limitlessness” that animates so much of our political and economic life, reminds us that we are “coming under pressure to understand ourselves as limited creatures in a limited world.” But this, he adds, should not be cause for despair:

“[O]ur human and earthly limits, properly understood, are not confinements but rather inducements to formal elaboration and elegance, to fullness of relationship and meaning. Perhaps our most serious cultural loss in recent centuries is the knowledge that some things, though limited, are inexhaustible. For example, an ecosystem, even that of a working forest or farm, so long as it remains ecologically intact, is inexhaustible. A small place, as I know from my own experience, can provide opportunities of work and learning, and a fund of beauty, solace, and pleasure — in addition to its difficulties — that cannot be exhausted in a lifetime or in generations.”

I would suggest that Berry’s wisdom is just as applicable to the realm of reading and the intellectual life as it is to our economic life.

With regard to reading, the first step may be coming to the realization, once again perhaps, that we cannot read it all. According to Joseph Epstein, “Gertrude Stein said that the happiest moment of her life was that moment in which she realized that she wouldn’t be able to read all the books in the world.” May Stein be our model, although I admit the happiness on this score is sometimes hard to muster.

I’ll leave you with two questions to consider. The first is from Len Kendall: “Is your day composed of reading 10% of 100 articles or 100% of 10 articles?”

The second is a set of related questions from Adam Gurri:

“Ask yourself: what conversations matter to you? Which are relevant to your life, and which are relevant to your interests? After figuring that out, be stricter about excluding stories that fall outside of those conversations. Be selective about the publications you read regularly, and seek to go deeper rather than broader in the conversations you follow.”