Digital Devices and Learning to Grow Up

Last week the NY Times ran the sort of op-ed on digital culture that the cultured despisers love to ridicule. In it, Jane Brody made a host of claims about the detrimental consequences of digital media consumption on children, especially the very young. She had the temerity, for example, to call texting the “next national epidemic.” Consider as well the following paragraphs:

“Two of my grandsons, ages 10 and 13, seem destined to suffer some of the negative effects of video-game overuse. The 10-year-old gets up half an hour earlier on school days to play computer games, and he and his brother stay plugged into their hand-held devices on the ride to and from school. ‘There’s no conversation anymore,’ said their grandfather, who often picks them up. When the family dines out, the boys use their devices before the meal arrives and as soon as they finish eating.

‘If kids are allowed to play ‘Candy Crush’ on the way to school, the car ride will be quiet, but that’s not what kids need,’ Dr. Steiner-Adair said in an interview. ‘They need time to daydream, deal with anxieties, process their thoughts and share them with parents, who can provide reassurance.’

Technology is a poor substitute for personal interaction.”

Poor lady, I thought, and a grandmother no less. She was in for the kind of thrashing from the digital sophisticates that is usually reserved for Sherry Turkle.

In truth, I didn’t catch too many reactions to the piece, but one did stand out. At The Awl, John Hermann summed up the critical responses with admirable brevity:

“But the argument presented in the first installment is also proudly unsophisticated, and doesn’t attempt to preempt obvious criticism. Lines like ‘technology is a poor substitute for personal interaction,’ and non-sequitur quotes from a grab-bag of experts, tee up the most common and effective response to fears of Screen Addiction: that what’s happening on all these screens is not, as the writer suggests, an endless braindead Candy Crush session, but a rich social experience of its own. That screen is full of friends, and its distraction is no less valuable or valid than the distraction of a room full of buddies or a playground full of fellow students. Screen Addiction is, in this view, nonsensical: you can no more be addicted to a screen than to windows, sounds, or the written word.”

But Hermann does not quite leave it at that: “This is an argument worth making, probably. But tell it to an anxious parent or an alienated grandparent and you will sense that it is inadequate.” The argument may be correct, but, Hermann explains, “Screen Addiction is a generational complaint, and generational complaints, taken individually, are rarely what they claim to be. They are fresh expressions of horrible and timeless anxieties.”

Hermann goes on to make the following poignant observations:

“The grandparent who is persuaded that screens are not destroying human interaction, but are instead new tools for enabling fresh and flawed modes of human interaction, is left facing a grimmer reality. Your grandchildren don’t look up from their phones because the experiences and friendships they enjoy there seem more interesting than what’s in front of them (you). Those experiences, from the outside, seem insultingly lame: text notifications, Emoji, selfies of other bratty little kids you’ve never met. But they’re urgent and real. What’s different is that they’re also right here, always, even when you thought you had an attentional claim. The moments of social captivity that gave parents power, or that gave grandparents precious access, are now compromised. The TV doesn’t turn off. The friends never go home. The grandkids can do the things they really want to be doing whenever they want, even while they’re sitting five feet away from grandma, alone, in a moving soundproof pod.”

To see a more celebratory presentation of these dynamics, recall the Facebook ad from 2013.

Hermann, of course, is less sanguine.

“Screen Addiction is a new way for kids to be blithe and oblivious; in this sense, it is empowering to the children, who have been terrible all along. The new grandparent’s dilemma, then, is both real and horribly modern. How, without coming out and saying it, do you tell that kid that you have things you want to say to them, or to give them, and that you’re going to die someday, and that they’re going to wish they’d gotten to know you better? Is there some kind of curiosity gap trick for adults who have become suddenly conscious of their mortality?”

“A new technology can be enriching and exciting for one group of people and create alienation for another;” Hermann concludes, “you don’t have to think the world is doomed to recognize that the present can be a little cruel.”

Well put.

I’m tempted to leave it at that, but I’m left wondering about the whole “generational complaint” business.

To say that something is a generational complaint suggests that we are dealing with old men yelling, “Get off my lawn!” It conjures up the image of hapless adults hopelessly out of sync with the brilliant exuberance of the young. It is, in other words, to dismiss whatever claim is being made. Granted, Hermann has given us a more sensitive and nuanced discussion of the matter, but even in his account too much ground is ceded to this kind of framing.

If we are dealing with a generational complaint, what exactly do we mean by that? Ostensibly that the old are lodging a predictable kind of complaint against the young, a complaint that amounts to little more than an unwillingness to comprehend the new or a desperate clinging to the familiar. Looked at this way, the framing implies that the old, by virtue of their age, are the ones out of step with reality.

But what if the generational complaint is framed rather as a function of coming into responsible adulthood? Hermann approaches this perspective when he writes, “Screen Addiction is a new way for kids to be blithe and oblivious; in this sense, it is empowering to the children, who have been terrible all along.” So when a person complains that they are being ignored by someone enthralled by their device, are they showing their age or merely demanding a basic degree of decency?

Yes, children are wont to be blithe and oblivious, often cruelly indifferent to the needs of others. Traditionally, we have sought to remedy that obliviousness and self-centeredness. Indeed, coming into adulthood more or less entails gaining some measure of control over our naturally self-centered impulses for our own good and for the sake of others. In this light, asking a child–whether age seven or thirty-seven–to lay their device aside long enough to acknowledge the presence of another human being is simply to ask them to grow up.

Others have taken a different tack in response to Brody and Hermann. Jason Kottke arrives at this conclusion:

“People on smartphones are not anti-social. They’re super-social. Phones allow people to be with the people they love the most all the time, which is the way humans probably used to be, until technology allowed for greater freedom of movement around the globe. People spending time on their phones in the presence of others aren’t necessarily rude because rudeness is a social contract about appropriate behavior and, as Hermann points out, social norms can vary widely between age groups. Playing Minecraft all day isn’t necessarily a waste of time. The real world and the virtual world each have their own strengths and weaknesses, so it’s wise to spend time in both.”

Of course. But how do we allocate the time we spend in each–that’s the question. Also, I’m not quite sure what to make of his claim about rudeness and the social contract, except that it seems to suggest that it isn’t rudeness if you simply decide you don’t like the terms of the social contract that would render it so. Sorry, Grandma, I don’t recognize the social contract by which I’m supposed to acknowledge your presence and render to you a modicum of my attention and affection.

Yes, digital devices have given us the power to decide who is worthy of our attention minute by minute. Advocates of this constant connectivity–many of them, like Facebook, acting out of obvious self-interest–want us to believe this is an unmitigated good and that we should exercise this power with impunity. But–how to say this without sounding alarmist–encouraging people to habitually render other human beings unworthy of their attention seems like a poor way to build a just and equitable society.

Humanist Technology Criticism

“Who are the humanists, and why do they dislike technology so much?”

That’s what Andrew McAfee wants to know. McAfee, formerly of Harvard Business School, is now a researcher at MIT and the author, with Erik Brynjolfsson, of The Second Machine Age: Work, Progress, and Prosperity in a Time of Brilliant Technologies. At his blog, hosted by the Financial Times, McAfee expressed his curiosity about the use of the terms humanism or humanist in “critiques of technological progress.” “I’m honestly not sure what they mean in this context,” McAfee admitted.

Humanism is a rather vague and contested term with a convoluted history, so McAfee asks a fair question–even if his framing is rather slanted. I suspect that most of the critics he has in mind would take issue with the second half of McAfee’s compound query. One of the examples he cites, after all, is Jaron Lanier, who, whatever else we might say of him, can hardly be described as someone who “dislikes technology.”

That said, what response can we offer McAfee? It would be helpful to sketch a history of the network of ideas that has been linked to the family of words that includes humanism, humanist, and the humanities. The journey would take us from the Greeks and the Romans, through (not excluding) the medieval period to the Renaissance and beyond. But that would be a much larger project, and I wouldn’t be your best guide. Suffice it to say that near the end of such a journey, we would come to find the idea of humanism splintered and in retreat; indeed, in some quarters, we would find it rejected and despised.

But if we forego the more detailed history of the concept, can we not, nonetheless, offer some clarifying comments regarding the more limited usage that has perplexed McAfee? Perhaps.

I’ll start with an observation made by Wilfred McClay in a 2008 essay in the Wilson Quarterly, “The Burden of the Humanities.” McClay suggested that we define the humanities as “the study of human things in human ways.”¹ If so, McClay continues, “then it follows that they function in culture as a kind of corrective or regulative mechanism, forcing upon our attention those features of our complex humanity that the given age may be neglecting or missing.” Consequently, we have a hard time defining the humanities–and, I would add, humanism–because “they have always defined themselves in opposition.”

McClay provides a brief historical sketch showing that the humanities have, at different historical junctures, defined themselves by articulating a vision of human distinctiveness in opposition to the animal, the divine, and the rational-mechanical. “What we are as humans,” McClay adds, “is, in some respects, best defined by what we are not: not gods, not angels, not devils, not machines, not merely animals.”

In McClay’s historical sketch, humanism and the humanities have lately sought to articulate an understanding of the human in opposition to the “rational-mechanical,” or, in other words, in opposition to the technological, broadly speaking. In McClay’s telling, this phase of humanist discourse emerges in early nineteenth century responses to the Enlightenment and industrialization. Here we have the beginnings of a response to McAfee’s query. The deployment of humanist discourse in the context of technology criticism is not exactly a recent development.

There may have been earlier voices of which I am unaware, but we may point to Thomas Carlyle’s 1829 essay, “Signs of the Times,” as an ur-text of the genre.² Carlyle dubbed his era the “Mechanical Age.” “Men are grown mechanical in head and heart, as well as in hand,” Carlyle complained. “Not for internal perfection,” he added, “but for external combinations and arrangements, for institutions, constitutions, for Mechanism of one sort or another, do they hope and struggle.”

Talk of humanism in relation to technology also flourished in the early and mid-twentieth century. Alan Jacobs, for instance, is currently working on a book project that examines the response of a set of early 20th century Christian humanists, including W.H. Auden, Simone Weil, and Jacques Maritain, to total war and the rise of technocracy. “On some level each of these figures,” Jacobs explains, “intuited or explicitly argued that if the Allies won the war simply because of their technological superiority — and then, precisely because of that success, allowed their societies to become purely technocratic, ruled by the military-industrial complex — their victory would become largely a hollow one. Each of them sees the creative renewal of some form of Christian humanism as a necessary counterbalance to technocracy.”

In a more secular vein, Paul Goodman asked in 1969, “Can Technology Be Humane?” In his article (h/t Nicholas Carr), Goodman observed that popular attitudes toward technology had shifted in the post-war world. Science and technology could no longer claim the “unblemished and justified reputation as a wonderful adventure” they had enjoyed for the previous three centuries. “The immediate reasons for this shattering reversal of values,” in Goodman’s view, “are fairly obvious.

Hitler’s ovens and his other experiments in eugenics, the first atom bombs and their frenzied subsequent developments, the deterioration of the physical environment and the destruction of the biosphere, the catastrophes impending over the cities because of technological failures and psychological stress, the prospect of a brainwashed and drugged 1984. Innovations yield diminishing returns in enhancing life. And instead of rejoicing, there is now widespread conviction that beautiful advances in genetics, surgery, computers, rocketry, or atomic energy will surely only increase human woe.”

For his part, Goodman advocated a more prudential and, yes, humane approach to technology. “Whether or not it draws on new scientific research,” Goodman argued, “technology is a branch of moral philosophy, not of science.” “As a moral philosopher,” Goodman continued in a remarkable passage, “a technician should be able to criticize the programs given him to implement. As a professional in a community of learned professionals, a technologist must have a different kind of training and develop a different character than we see at present among technicians and engineers. He should know something of the social sciences, law, the fine arts, and medicine, as well as relevant natural sciences.” The whole essay is well worth your time. I bring it up merely as another instance of the genre of humanistic technology criticism.

More recently, in an interview cited by McAfee, Jaron Lanier has advocated the revival of humanism in relation to the present technological milieu. “I’m trying to revive or, if you like, resuscitate, or rehabilitate the term humanism,” Lanier explained before being interrupted by a bellboy cum Kantian, who broke into the interview to say, “Humanism is humanity’s adulthood. Just thought I’d throw that in.” When he resumed, Lanier expanded on what he means by humanism:

“And pragmatically, if you don’t treat people as special, if you don’t create some sort of a special zone for humans—especially when you’re designing technology—you’ll end up dehumanising the world. You’ll turn people into some giant, stupid information system, which is what I think we’re doing. I agree that humanism is humanity’s adulthood, but only because adults learn to behave in ways that are pragmatic. We have to start thinking of humans as being these special, magical entities—we have to mystify ourselves because it’s the only way to look after ourselves given how good we’re getting at technology.”

In McAfee’s defense, this is an admittedly murky vision. I couldn’t tell you what exactly Lanier is proposing when he says that we have to “mystify ourselves.” Earlier in the interview, however, he gave an example that might help us understand his concerns. Discussing Google Translate, he observes the following: “What people don’t understand is that the translation is really just a mashup of pre-existing translations by real people. The current set up of the internet trains us to ignore the real people who did the first translations, in order to create the illusion that there is an electronic brain. This idea is terribly damaging. It does dehumanise people; it does reduce people.”

So Lanier’s complaint here seems to be that this particular configuration of technology obscures an essential human element. Furthermore, Lanier is concerned that people are reduced in this process. This is, again, a murky concept, but I take it to mean that some important element of what constitutes the human is being ignored or marginalized or suppressed. Like the humanities in McClay’s analysis, Lanier’s humanism draws our attention to “those features of our complex humanity that the given age may be neglecting or missing.”

One last example. Some years ago, historian of science George Dyson wondered if the cost of machines that think will be people who don’t. Dyson’s quip suggests the problem that Evan Selinger has dubbed the outsourcing of our humanity. We outsource our humanity when we allow an app or device to do for us what we ought to be doing for ourselves (naturally, that ought needs to be established). Selinger has developed his critique in response to a variety of apps but especially those that outsource what we may call our emotional labor.

I think it fair to include the outsourcing critique within the broader genre of humanist technology criticism because it assumes something about the nature of our humanity and finds that certain technologies are complicit in its erosion. Not surprisingly, in a tweet sharing McAfee’s post, Selinger indicated that he and Brett Frischmann had plans to co-author a book analyzing the concept of dehumanizing technology in order to bring clarity to its application. I have no doubt that Selinger and Frischmann’s work will advance the discussion.

While McAfee was puzzled by humanist discourse with regard to technology criticism, others have been overtly critical. Evgeny Morozov recently complained that most technology critics default to humanist/anti-humanist rhetoric in their critiques in order to evade more challenging questions about politics and economics. For my part, I don’t see why the two approaches cannot each contribute to a broader understanding of technology and its consequences while also informing our personal and collective responses.

Of course, while Morozov is critical of the humanizing/dehumanizing approach to technology on more or less pragmatic grounds–it is ultimately ineffective in his view–others oppose it on ideological or theoretical grounds. For these critics, humanism is part of the problem, not the solution. Technology has been all too humanistic, or anthropocentric, and has consequently wreaked havoc on the global environment. Or, they may argue that any deployment of humanism as an evaluative category also implies a policing of the boundaries of the human with discriminatory consequences. Others will argue that it is impossible to make a hard ontological distinction among the natural, the human, and the technological. We have always been cyborgs in their view. Still others argue that there is no compelling reason to privilege the existing configuration of what we call the human. Humanity is a work in progress and technology will usher in a brave new post-human world.

Already, I’ve gone on longer than a blog post should, so I won’t comment on each of those objections to humanist discourse. Instead, I’ll leave you with a few considerations about what humanist technology criticism might entail. I’ll do so while acknowledging that these considerations undoubtedly imply a series of assumptions about what it means to be a human being and what constitutes human flourishing.

That said, I would suggest that a humanist critique of technology entails a preference for technology that (1) operates at a humane scale, (2) works toward humane ends, (3) allows for the fullest possible flourishing of a person’s capabilities, (4) does not obfuscate moral responsibility, and (5) acknowledges certain limitations to what we might quaintly call the human condition.

I realize these all need substantial elaboration and support–the fifth point is especially contentious–but I’ll leave it at that for now. Take that as a preliminary sketch. I’ll close, finally, with a parting observation.

A not insubstantial element within the culture that drives technological development is animated by what can only be described as a thoroughgoing disgust with the human condition, particularly its embodied nature. Whether we credit the wildest dreams of the Singularitarians, Extropians, and Post-humanists or not, their disdain, as it finds expression in a posture toward technological power, is reason enough for technology critics to strive for a humanist critique that acknowledges and celebrates the limitations inherent in our frail, yet wondrous humanity.

This gratitude and reverence for the human as it is presently constituted, in all its wild and glorious diversity, may strike some as an unpalatably religious stance to assume. And, indeed, for many of us it stems from a deeply religious understanding of the world we inhabit, a world that is, as Pope Francis recently put it, “our common home.” Perhaps, though, even the secular citizen may be troubled by, as Hannah Arendt has put it, such a “rebellion against human existence as it has been given, a free gift from nowhere (secularly speaking).”

________________________

¹ Here’s a fuller expression of McClay’s definition from earlier in the essay: “The distinctive task of the humanities, unlike the natural sciences and social sciences, is to grasp human things in human terms, without converting or reducing them to something else: not to physical laws, mechanical systems, biological drives, psychological disorders, social structures, and so on. The humanities attempt to understand the human condition from the inside, as it were, treating the human person as subject as well as object, agent as well as acted-upon.”

² Shelley’s “A Defence of Poetry” might qualify.

Tech Criticism! What is it Good For?

Earlier this year, Evgeny Morozov published a review essay of Nicholas Carr’s The Glass Cage. The review also doubled as a characteristically vigorous, and uncharacteristically confessional, censure of the contemporary practice of technology criticism. In what follows, I’ll offer a bit of unsolicited commentary on Morozov’s piece, and in a follow-up post I’ll link it to Alan Jacobs’ proposal for a technological history of modernity.

Morozov opened by asking two questions: “What does it mean to be a technology critic in today’s America? And what can technology criticism accomplish?” Some time ago, I offered my own set of reflections on the practice of technology criticism, and, as I revisit those reflections, I find that they overlap, somewhat, with a few of Morozov’s concerns. I’m going to start on this point of agreement.

“That radical critique of technology in America has come to a halt,” Morozov maintains, “is in no way surprising: it could only be as strong as the emancipatory political vision to which it is attached. No vision, no critique.” Which is to say that technology criticism, like technology, must always be for something other than itself. It must be animated and framed by a larger concern. In my own earlier reflections on technology criticism, I put the matter thus:

The critic of technology is a critic of artifacts and systems that are always for the sake of something else. The critic of technology does not love technology because technology rarely exists for its own sake …. So what does the critic of technology love? Perhaps it is the environment. Perhaps it is an ideal of community or friendship. Perhaps it is an ideal civil society. Perhaps it is health and vitality. Perhaps it is sound education. Perhaps liberty. Perhaps joy. Perhaps a particular vision of human flourishing. The critic of technology is animated by a love for something other than the technology itself. [Or should be … too many tech critics are, in fact, far too enamored of the technologies themselves.]

[Moreover,] criticism of technology, if it moves beyond something like mere description and analysis, implies making what amount to moral and ethical judgments. The critic of technology, if they reach conclusions about the consequences of technology for the lives of individual persons and the health of institutions and communities, will be doing work that rests on ethical principles and carries ethical implications.

Naturally, such ethical evaluations are not arrived at in a moral vacuum or from some ostensibly neutral position. According to what standards, then, and from within which tradition does tech criticism proceed? Well, it depends on the critic in question. More from my earlier post:

The libertarian critic, the Marxist critic, the Roman Catholic critic, the posthumanist critic, and so on — each advances their criticism of technology informed by their ethical commitments. Their criticism of technology flows from their loves. Each criticizes technology according to the larger moral and ethical framework implied by the movements, philosophies, and institutions that have shaped their identity. And, of course, so it must be. We are limited beings whose knowledge is always situated within particular contexts. There is no avoiding this, and there is nothing particularly undesirable about this state of affairs. The best critics will be self-aware of their commitments and work hard to sympathetically entertain divergent perspectives. They will also work patiently and diligently to understand a given technology before reaching conclusions about its moral and ethical consequences. But I suspect this work of understanding, precisely because it can be demanding, is typically driven by some deeper commitment that lends urgency and passion to the critic’s work.

For his part, if I may frame his essay with the categories I’ve sketched above, Morozov is deeply motivated by what he calls an “emancipatory political vision.” Consequently, he concludes that any technology criticism that does not work to advance this vision is a waste of time, at best. Tech criticism, divorced from political and economic considerations, cannot, in Morozov’s view, accomplish the lofty goal of advancing the progressive emancipatory vision he prizes.

While I feel the force of Morozov’s argument, I wouldn’t put the matter quite so starkly. There are, as I had suggested in my earlier post, a variety of perspectives from which one might launch a critique of technological society. Morozov’s piece pushes critics of all stripes to grapple with the effectiveness of their work (is that already a technocratic posture to take?), but each will define what constitutes effectiveness on their own terms.

I’d also suggest that a revolution-or-bust model of engagement with technology is not entirely helpful. For one thing, is there really nothing at all to be gained by arriving at better understandings of the personal and social consequences of our technologies? I think I’ll take marginal improvements for some over none at all. Does this amount to fighting a rearguard action? Perhaps. In any case, I don’t see why we shouldn’t present a broad front. Let the phenomenologists do their work and the Marxists theirs. Better yet, let their work mingle promiscuously. Indeed, let the Pope himself do his part.

It also seems to me that, if there is to be a political response to technological society, then it should be democratic in nature; and if democratic, then it must arise out of deliberation and consent. If so, then whatever work helps advance public understanding of the stakes can be valuable, even if it gives us only a partial analysis.

Morozov would reply, as he argued against Carr, that this assumes the problem is one of an ill-informed citizenry in need of illumination when, in fact, the problem is rather that economic and social forces are limiting the ability of the average person to act in line with their preferences. In his recent piece arguing for an “attentional commons,” Matthew Crawford identified one instance of a recurring pattern:

“Silence is now offered as a luxury good. In the business-class lounge at Charles de Gaulle Airport, I heard only the occasional tinkling of a spoon against china. I saw no advertisements on the walls. This silence, more than any other feature, is what makes it feel genuinely luxurious. When you step inside and the automatic doors whoosh shut behind you, the difference is nearly tactile, like slipping out of haircloth into satin. Your brow unfurrows, your neck muscles relax; after 20 minutes you no longer feel exhausted.

Outside, in the peon section, is the usual airport cacophony. Because we have allowed our attention to be monetized, if you want yours back you’re going to have to pay for it.”

The pattern is this: where the technologically enhanced market intrudes, what used to be a public good is repackaged as a luxury item that now only the few can afford. I think this well illustrates Morozov’s point, and it is an important one. It suggests that tech criticism may risk turning into therapy or life-coaching for the wealthy. One can observe this same concern in an earlier piece from Morozov on “the mindfulness racket.”

That said and acknowledged, I’m not sure all didactic efforts are wholly wasted. Morozov is an intensely smart critic. He knows a lot. He’s thought long and hard about the problems of technological society. He is remarkably well read. Most of us aren’t. As a teacher, I’ve come to realize that it is easy to forget what you, too, had to learn at one point. It is easy to assume that your audience knows everything that you’ve learned over the years, particularly in whatever field you happen to specialize. While the delimiting forces of present economic and political configurations should not be ignored, I think it is much too early to give up the task of propagating a serious understanding of technology and its consequences.

______________________

“Why, then, aspire to practice any kind of technology criticism at all?” Morozov asks. His reply was less than sanguine:

“I am afraid I do not have a convincing answer. If history has, in fact, ended in America—with venture capital (represented by Silicon Valley) and the neoliberal militaristic state (represented by the NSA) guarding the sole entrance to its crypt—then the only real task facing the radical technology critic should be to resuscitate that history. But this surely can’t be done within the discourse of technology, and given the steep price of admission, the technology critic might begin most logically by acknowledging defeat.”

Or, they might begin to reimagine the tech critical project. How deeply do we need to dig to “resuscitate that history”? How can we escape the discourse of technology? What if Morozov hasn’t pushed quite far enough? Morozov wants us to frame technology in light of economics and politics, but what if politics and economics, as they presently exist, are already compromised, already encircled by technology?

In a follow-up post, I’ll explain why I think Alan Jacobs’ project to understand the technological history of modernity, as I understand it, may help us answer some of these questions.

A Technological History of Modernity

I’m writing chiefly to commend to you what Alan Jacobs has recently called his “big fat intellectual project.”

The topic that has driven his work over the last few years Jacobs describes as follows: “The ways that technocratic modernity has changed the possibilities for religious belief, and the understanding of those changes that we get from studying the literature that has been attentive to them.” He adds,

“But literature has not been merely an observer of these vast seismic tremors; it has been a participant, insofar as literature has been, for many, the chief means by which a disenchanted world can be re-enchanted — but not fully — and by which buffered selves can become porous again — but not wholly. There are powerful literary responses to technocratic modernity that serve simultaneously as case studies (what it’s like to be modern) and diagnostic (what’s to be done about being modern).”

To my mind, such a project enjoys a distinguished pedigree, at least in some important aspects. I think, for example, of Leo Marx’s classic, The Machine in the Garden: Technology and the Pastoral Ideal in America, or the manner in which Katherine Hayles weaves close readings of contemporary fiction into her explorations of digital technology. Not that he needs me to say this, but I’m certain Jacobs’ work along these lines, particularly with its emphasis on religious belief, will be valuable and timely. You should click through to find links to a handful of essays Jacobs has already written in this vein.

On his blog, Text Patterns, Jacobs has, over the last few weeks, been describing one important thread of this wider project, a technological history of modernity, which, naturally, I find especially intriguing and necessary.

The first post in which Jacobs articulates the need for a technological history of modernity began as a comment on Matthew Crawford’s The World Beyond Your Head. In it, Jacobs repeats his critique of the “ideas have consequences” model of history, one in which the ideas of philosophers drive cultural change.

Jacobs took issue with the “ideas have consequences” model of cultural change in his critique of Neo-Thomist accounts of modernity, i.e., those that pin modernity’s ills on the nominalist challenge to the so-called medieval/Thomist synthesis of faith and reason. He finds that Crawford commits a similar error in attributing the present attention economy, in large measure, to conclusions about the will and the individual arrived at by Enlightenment thinkers.

Beyond the criticisms specific to the debate about the historical consequences of nominalism and the origins of our attention economy, Jacobs articulated concerns that apply more broadly to any account of cultural change that relies too heavily on the work of philosophers and theologians while paying too little attention to the significance of the material conditions of lived experience.

Moving toward the need for a technological history of modernity, Jacobs writes, “What I call the Oppenheimer Principle — ‘When you see something that is technically sweet, you go ahead and do it and argue about what to do about it only after you’ve had your technical success’ — has worked far more powerfully to shape our world than any of our master thinkers. Indeed, those thinkers are, in ways we scarcely understand, themselves the product of the Oppenheimer Principle.”

Or, as Ken Myers, a cultural critic that Jacobs and I both hold in high esteem, often puts it: ideas may have consequences, but ideas also have antecedents. These antecedents may be described as unarticulated assumptions derived from the bodily, emotional, and, yes, cognitive consequences of society’s political, economic, and technological infrastructure. I’m not sure if Jacobs would endorse this move, but I find it helpful to talk about these assumptions by borrowing the concept of “plausibility structures” first articulated by the sociologist Peter Berger.

For Berger, plausibility structures are those chiefly social realities that render certain ideas plausible, compelling, or meaningful apart from whatever truth value they might be independently or objectively assigned. Or, as Berger has frequently quipped, the factors that make it easier to be a Baptist in Texas than it would be in India.

Again, Berger has in mind interpersonal relationships and institutional practices, but I think we may usefully frame our technological milieu similarly. In other words, to say that our technological milieu, our material culture, constitutes a set of plausibility structures is to say that we derive tacit assumptions about what is possible, what is good, and what is valuable from merely going about our daily business with and through our tools. These implicit valuations and horizons of the possible are the unspoken context within which we judge and evaluate explicit ideas and propositions.

Consequently, Jacobs is quite right to insist that we understand the emergence of modernity as more than the triumph of a set of ideas about individuals, democracy, reason, progress, etc. And, as he puts it,

“Those of us who — out of theological conviction or out of some other conviction — have some serious doubts about the turn that modernity has taken have been far too neglectful of this material, economic, and technological history. We need to remedy that deficiency. And someone needs to write a really comprehensive and ambitious technological history of modernity. I don’t think I’m up to that challenge, but if no one steps up to the plate….”

All of this to say that I’m enthusiastic about the project Jacobs has presented and eager to see how it unfolds. I have a few more thoughts about it that I hope to post in the coming days–why, for example, Jacobs’ project is more appealing than Evgeny Morozov’s vision for tech criticism–but those may or may not materialize. Whatever the case, I think you’ll do well to tune in to Jacobs’ work on this as it progresses.

Et in Facebook ego

Today is the birthday of the friend whose death elicited this post two years ago. I republish it today for your consideration. 

In Nicolas Poussin’s mid-seventeenth-century painting, Et in Arcadia ego, shepherds have stumbled upon an ancient tomb on which the titular words are inscribed. Understood to be the voice of death, the Latin phrase may be roughly translated, “Even in Arcadia there am I.” Because Arcadia symbolized a mythic pastoral paradise, the painting suggested the ubiquity of death. To the shepherds, the tomb was a memento mori: a reminder of death’s inevitability.

Nicolas Poussin, Et in Arcadia ego, 1637-38

Poussin was not alone among artists of the period in addressing the certainty of death. During the seventeenth and eighteenth centuries, vanitas art flourished. The designation stems from the Latin phrase vanitas vanitatum omnia vanitas, a recurring refrain throughout the biblical book of Ecclesiastes: “vanity of vanities, all is vanity,” in the King James translation. Paintings in the genre were still lifes depicting an assortment of objects which represented all that we might pursue in this life: love, power, fame, fortune, happiness. In their midst, however, one might also find a skull or an hourglass. These were symbols of death and the brevity of life. The idea, of course, was to encourage people to make the most of their living years.

Edwart Collier, 1690

For the most part, we don’t go in for this sort of thing anymore. Few people, if any, operate under the delusion that we might escape death (excepting, perhaps, the Singularity crowd), but we do a pretty good job of forgetting what we know about death. We keep death out of sight and, hence, out of mind. We’re certainly not going out of our way to remind ourselves of death’s inevitability. And, who knows, maybe that’s for the better. Maybe all of those skulls and hourglasses were morbidly unhealthy.

But while vanitas art has gone out of fashion, a new class of memento mori has emerged: the social media profile.

I’m one of those on-again, off-again Facebook users. Lately, I’ve been on again, and recently I noticed one of those birthday reminders Facebook places in the column where it puts all of the things Facebook would like you to click on. It was for a high school friend whom I had not spoken to in over eight years. It was in that respect a very typical Facebook friendship: the sort that probably wouldn’t exist at all were it not for Facebook. And that’s not necessarily a knock on the platform. For the most part, I appreciate being able to maintain at least minimal ties to old friends. In this case, though, it demonstrated just how weak those ties can be.

Upon clicking over to his profile, I read a few odd notes, and very quickly it became disconcertingly clear that my friend had died over a year ago. Naturally, I was taken aback and saddened. He died while I was off Facebook, and news had not reached me by any other channel. But there it was. Out of nowhere and without warning, my browser was haunted by the very real presence of death. Memento mori.

Just a few days prior I logged on to Facebook and was greeted by the tragic news of a former student’s sudden passing. Because we had several mutual connections, photographs of the young man found their way into my news feed for several days. It was odd and disconcerting and terribly sad all at once. I don’t know what I think of social media mourning. It makes me uneasy, but I won’t criticize what might bring others solace. In any case, it is, like death itself, an unavoidable reality of our social media experience. Death is no digital dualist.

Facebook sometimes feels like a modern-day Arcadia. It is a carefully cultivated space in which life appears Edenic. The pictures are beautiful, the events exciting, the faces always smiling, the children always amusing, the couples always adoring. Some studies even suggest that comparing our own experience to these immaculately curated slices of life leads to envy, discontent, and unhappiness. Understandably so … if we assume that these slices of life are comprehensive representations of the lives people actually lead. Of course, they are not.

Lest we be fooled, however, there, alongside the pets and witty status updates and wedding pictures and birth announcements, we will increasingly find our virtual Arcadias haunted by the digital, disembodied presence of the dead. Our digital memento mori.

Et in Facebook ego.